
Troubleshooting for A1111 Stable Diffusion web UI

⏱️10min read
📅 Jul 02, 2024
🔄 Jul 26, 2024
Category:📂 Novice

It is not uncommon to run into trouble when operating Stable Diffusion web UI (the AUTOMATIC1111 version), and things can also go wrong after upgrading to a newer version. In this article, we explain how to deal with such problems, largely following the official troubleshooting instructions.

Rebuild venv

If you have any trouble using the WebUI, rebuilding the venv usually solves it, but it takes some time to re-download the components, so it is best done with a stable internet connection.

Deleting venv: Open the stable-diffusion-webui folder in File Explorer and delete the venv folder inside it.
Rebuilding venv: After the deletion, run webui-user.bat to start Stable Diffusion WebUI. The venv will be rebuilt automatically; wait a while until the browser opens.
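
If you prefer the command prompt over File Explorer, the same thing can be done as follows (a minimal sketch; the install path is only an example, adjust it to your own):

cd /d C:\stable-diffusion-webui
rem delete the old virtual environment
rmdir /s /q venv
rem starting WebUI again recreates venv and re-downloads the components
webui-user.bat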

Command Line Arguments

You can change model locations and performance settings by adding arguments to the set COMMANDLINE_ARGS= line in webui-user.bat. For more information about the available arguments, please refer to the following link.
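
For example, the relevant line in webui-user.bat might look like this (the flags and the checkpoint path are only an illustration, not a recommendation):

set COMMANDLINE_ARGS=--xformers --medvram --ckpt-dir "D:\models\Stable-diffusion"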

After running webui-user.bat, the window closes immediately

If the window closes immediately because of an error, it is hard to read the error message. Add a pause command to webui-user.bat so that the window stays open.

After editing, run webui-user.bat to start WebUI. The window will not close, so check for error messages at the command prompt.

Example of edited webui-user.bat

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=

call webui.bat
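rem keep the window open so error messages remain visible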
pause

Low VRAM Video Card

If the VRAM of the video card is low (4 GB or less), an insufficient memory error may occur.

Although it will sacrifice the speed of generation, several measures are presented here.

Depending on your needs, add the following command line arguments to set COMMANDLINE_ARGS= in webui-user.bat (see the example after the list).

  • --opt-sdp-no-mem-attention or --xformers: Reduces GPU memory usage by half
  • --medvram: If you want to generate images up to about 1.3x larger with 4 GB of VRAM
  • --lowvram --always-batch-cond-uncond: If you get an out-of-memory error when using --medvram
  • --lowvram: If you want to generate images more than 1.3x larger with 4 GB of VRAM
  • --disable-model-loading-ram-optimization: If an insufficient-memory error occurs when loading a full-weight model with 4 GB of VRAM (v1.6.0 or higher)
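
For example, a webui-user.bat setting for a 4 GB card might look like this (only a sketch; combine the flags that match your situation):

set COMMANDLINE_ARGS=--medvram --opt-sdp-no-mem-attention
rem if that still runs out of memory, a more aggressive fallback would be:
rem set COMMANDLINE_ARGS=--lowvram --always-batch-cond-uncond --opt-sdp-no-mem-attention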

Torch cannot use GPU

This problem is often reported, but it usually has causes in the environment rather than in the WebUI itself. Typical cases:

  • The correct settings were not made during installation.
  • The error appeared immediately after changing or updating the WebUI configuration.

In either case, try rebuilding the venv as described at the beginning of this article.

If rebuilding the venv does not solve the problem, create an environment report as follows and submit it when contacting the developers or an expert.

  • Open a command prompt and navigate to the directory stable-diffusion-webui\venv\Scripts.
  • Execute the following command:
    python -m torch.utils.collect_env
  • Copy the output text and save it to a text file for submission.
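
A complete command-prompt session might look like the following (the install path is only an example, and redirecting the output to a file is simply one convenient way to save it):

cd /d C:\stable-diffusion-webui\venv\Scripts
python -m torch.utils.collect_env > collect_env.txt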

Generated image is black or green

The generated image may come out entirely green or black. This can be caused by the model itself, for example excessive model weighting, but when the problem is hardware, it occurs because the GPU of the video card cannot properly handle half precision.

Try using the command line arguments --upcast-sampling and --xformers together. Alternatively, if you use --precision full --no-half, VRAM usage increases significantly, so you may also need --medvram.
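
In webui-user.bat, that could look like one of the following (a sketch; use whichever approach applies):

set COMMANDLINE_ARGS=--upcast-sampling --xformers
rem or, when running at full precision instead:
rem set COMMANDLINE_ARGS=--precision full --no-half --medvram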

Error “A Tensor with all NaNs was produced in the vae”

This error has the same cause as the previous one. To confirm it, try the command line argument --disable-nan-check.

If the model is the cause

The CLIP key named embeddings.position_ids is sometimes broken during model merging. It is an int64 tensor with values from 0 to 76, but during merging these values are converted to floats and rounding errors creep in. For example, in AnythingV3 the value 76 becomes 75.9975, and when the model is loaded in the WebUI it is cast back to int64 and truncated to 75.

To correct this, install the 🔗stable-diffusion-model-toolkit extension and repair the tensor with the Fix broken CLIP position IDs option in the Advanced settings. Because this method slightly affects the model's output, it is off by default.

If the GPU is the cause

NVIDIA 16XX and 10XX series video cards require --upcast-sampling together with --xformers to run at comparable speed. If that does not solve the problem, add --no-half-vae to run the VAE at fp32. If it still does not solve the problem, run with --no-half, which is the slowest and most VRAM-consuming option.
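
As a sketch, trying the options in the order described above might look like this in webui-user.bat (move on to the next line only if the previous one does not help):

set COMMANDLINE_ARGS=--upcast-sampling --xformers
rem set COMMANDLINE_ARGS=--upcast-sampling --xformers --no-half-vae
rem set COMMANDLINE_ARGS=--no-half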

Error immediately after activating xformers “CUDA error: no kernel image is available for execution on the device”

The installed xformers build is not compatible with your GPU. If your OS is Windows, your Python version is 3.10, and your GPU is “Pascal” or later, reinstall xformers as follows.

Add command line arguments: Add the following command to COMMANDLINE_ARGS= in webui-user.bat
--reinstall-xformers --xformers
Execute the command: After editing webui-user.bat, start WebUI. At startup, xformers will be reinstalled.
Removing reinstallation commands: If WebUI starts, the reinstallation is complete. After completion, edit webui-user.bat again and remove the --reinstall-xformers section.

Error “RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check”

Your GPU is not supported by Torch. Add the following command to COMMANDLINE_ARGS= in webui-user.bat to skip the Torch CUDA test.

--skip-torch-cuda-test

Error “NameError: name 'xformers' is not defined”

For Windows

Your version of Python is out of date; update it to 3.10.

For Linux

You will need to build xformers yourself, or disable xformers.

Error “OSError: [WinError 126] The specified module could not be found. Error loading … c10.dll”

The Microsoft Visual C++ Redistributable (x64) package was not installed successfully. Please refer to the official 🔗Microsoft Visual C++ Redistributable latest supported downloads page and install it.

After updating to gradio 3.22, "--share" is not working

Windows Defender or antivirus software may block Gradio from creating public URLs. If you get a warning message, add Gradio as an exclusion in the respective software.

The monitor signal drops and the screen goes black during generation

Your video card is experiencing trouble. Especially if you are using a high-spec video card that draws a lot of power, reviewing the power supply may solve the problem. If you built your own PC, recheck the power supply connections to your video card. A common mistake is to daisy-chain a single PSU cable to both connectors when the video card has two power connectors; use a separate cable for each. Review the power supply's installation manual and connect it correctly.
