Troubleshooting for A1111 Stable Diffusion web UI
- Rebuild venv
- Command Line Arguments
- After running webui-user.bat, the window closes immediately
- Low VRAM Video Card
- Torch cannot use GPU
- Generated image is black or green
- Error “A Tensor with all NaNs was produced in the vae”
- Error immediately after activating xformers “CUDA error: no kernel image is available for execution on the device”
- Error “RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check”
- Error “NameError: name 'xformers' is not defined”
- Error “OSError: [WinError 126] The specified module could not be found. Error loading … c10.dll”
- After updating to gradio 3.22, “--share” is not working
- Monitor screen drops and goes black during generation.
It is not uncommon to run into trouble when operating the Stable Diffusion web UI (AUTOMATIC1111 version), and things can also go wrong after upgrading to a newer version. This article explains how to deal with such problems, largely following the official troubleshooting instructions.
Rebuild venv
If you run into trouble using the WebUI, rebuilding venv usually solves it, but re-downloading the components takes some time. It is recommended to do this somewhere with a stable internet connection.
- Open the `stable-diffusion-webui` folder in File Explorer, then delete the `venv` folder inside it.
- After the deletion, run `webui-user.bat` to start Stable Diffusion WebUI. The venv will start rebuilding; wait a while until the browser opens.
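If you prefer the command prompt over File Explorer, the same steps can be sketched as follows (this assumes an install at `C:\stable-diffusion-webui`; adjust the path to your setup):

```bat
cd C:\stable-diffusion-webui
rem Delete the venv folder and all its contents without prompting
rmdir /s /q venv
rem Relaunch; the venv is rebuilt automatically on startup
webui-user.bat
```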
Command Line Arguments
You can change model references and performance settings by adding arguments to the `set COMMANDLINE_ARGS=` line in `webui-user.bat`. For more information about available commands, please refer to the following link.
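As an example, a `webui-user.bat` that enables two of the arguments discussed later in this article (`--xformers` and `--medvram`, chosen here purely for illustration) would look like this:

```bat
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
rem Arguments are space-separated on this single line
set COMMANDLINE_ARGS=--xformers --medvram
call webui.bat
```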
After running webui-user.bat, the window closes immediately
If the window closes immediately due to an error, it is difficult to read the error message. Add the `pause` command to `webui-user.bat`, then run `webui-user.bat` to start WebUI. The window will stay open, so you can check for error messages at the command prompt.
Example of an edited `webui-user.bat`:

```bat
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=
call webui.bat
pause
```
Low VRAM Video Card
If the VRAM of the video card is low (4 GB or less), an out-of-memory error may occur. Although they sacrifice generation speed, several countermeasures are presented here. Add the following command line arguments to `set COMMANDLINE_ARGS=` in `webui-user.bat` according to your purpose.
- `--opt-sdp-no-mem-attention` or `--xformers`: reduces GPU memory usage by about half
- `--medvram`: if you want to generate ~1.3x-size images with 4 GB of VRAM
- `--lowvram --always-batch-cond-uncond`: if you get an out-of-memory error using `--medvram`
- `--lowvram`: if you want to generate images larger than ~1.3x with 4 GB of VRAM
- `--disable-model-loading-ram-optimization`: for out-of-memory errors when loading a full-weight model with 4 GB of VRAM (v1.6.0 or later)
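For example, a 4 GB card that still runs out of memory with `--medvram` could combine the options above like this (one possible combination, not a universal recommendation; tune it to your card):

```bat
rem Memory-efficient attention plus the low-VRAM fallback path
set COMMANDLINE_ARGS=--xformers --lowvram --always-batch-cond-uncond
```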
Torch cannot use GPU
This problem is frequently reported, but it usually stems from the environment rather than from the WebUI itself. Typical cases:
- The correct settings were not made during installation.
- An error occurs immediately after changing or updating the WebUI configuration.
In these cases, try rebuilding venv as described at the beginning.
If rebuilding venv does not solve the problem, create an environment report as follows and contact the official repository or an expert.
- Open a command prompt and navigate to the `stable-diffusion-webui\venv\Scripts` directory.
- Execute the following command: `python -m torch.utils.collect_env`
- Copy the output text and save it to a text file for submission.
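Putting the steps above together, the command-prompt session looks like this (assuming an install at `C:\stable-diffusion-webui`; the file name `env_report.txt` is chosen here for illustration):

```bat
cd C:\stable-diffusion-webui\venv\Scripts
rem Redirecting the output saves the report directly to a file
python -m torch.utils.collect_env > env_report.txt
```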
Generated image is black or green
The generated image may appear as a solid green or black screen. This can be caused by the model (for example, excessive model weighting), but if the cause is hardware, the error occurs because the video card's GPU does not support half precision.
Try using the command line arguments `--upcast-sampling` and `--xformers` together. Also, if you are using `--precision full --no-half`, you may need `--medvram`, as it significantly increases VRAM usage.
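As a concrete example, the recommended combination as a `webui-user.bat` line would be:

```bat
rem Upcast sampling with memory-efficient attention
set COMMANDLINE_ARGS=--upcast-sampling --xformers
```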
Error “A Tensor with all NaNs was produced in the vae”
This error has the same causes as the previous one. To verify this, use the command line argument `--disable-nan-check`.
If the model is the cause
The CLIP key named embeddings.position_ids sometimes breaks during model merging. It is an int64 tensor holding the values 0 to 76, but merging converts them to floats and introduces rounding errors. For example, in AnythingV3 the value 76 becomes 75.9975, and when the model is loaded in webui it is recast to int64 and becomes 75.
To correct this, use the 🔗stable-diffusion-model-toolkit extension and repair this tensor with the Fix broken CLIP position IDs option under Advanced settings. Because the fix slightly affects the model's output, it is off by default.
If the GPU is the cause
NVIDIA's 16XX and 10XX series video cards require `--upcast-sampling` and `--xformers` to run at equivalent speeds. If this does not solve the problem, use `--no-half-vae` to run the VAE at fp32. If that still does not help, run with `--no-half`, which is the slowest and most VRAM-consuming option.
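The escalation described above can be sketched as a sequence of `webui-user.bat` lines, enabling the next step only if the previous one still fails:

```bat
rem Step 1: upcast sampling with memory-efficient attention
set COMMANDLINE_ARGS=--upcast-sampling --xformers
rem Step 2: if images are still broken, also run the VAE at fp32
rem set COMMANDLINE_ARGS=--upcast-sampling --xformers --no-half-vae
rem Step 3: last resort, full fp32 (slowest, most VRAM)
rem set COMMANDLINE_ARGS=--no-half --xformers
```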
Error immediately after activating xformers “CUDA error: no kernel image is available for execution on the device”
The installed xformers build does not match your GPU. If your OS is Windows, your Python version is 3.10, and your GPU is “Pascal” or later, reinstall xformers as follows:
- Add `--reinstall-xformers --xformers` to `COMMANDLINE_ARGS=` in `webui-user.bat`.
- Run `webui-user.bat` to start WebUI. At startup, xformers will be reinstalled.
- Edit `webui-user.bat` again and remove the `--reinstall-xformers` part.
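The temporary edit and its revert would look like this in `webui-user.bat`:

```bat
rem During the reinstall (run WebUI once with this line):
set COMMANDLINE_ARGS=--reinstall-xformers --xformers
rem After the reinstall succeeds, revert to:
rem set COMMANDLINE_ARGS=--xformers
```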
Error “RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check”
Your GPU is not supported by Torch. Add the following command line argument to `COMMANDLINE_ARGS=` in `webui-user.bat` to skip the Torch CUDA test:
`--skip-torch-cuda-test`
Error “NameError: name 'xformers' is not defined”
For Windows
Your version of Python is out of date, so update your version to 3.10.
For Linux
You will need to build xformers yourself, or disable xformers.
Error “OSError: [WinError 126] The specified module could not be found. Error loading … c10.dll”
The Microsoft Visual C++ Redistributable (x64) package was not installed successfully. Please refer to Microsoft's official 🔗Microsoft Visual C++ Redistributable latest supported downloads page for installation.
After updating to gradio 3.22, “--share” is not working
Windows Defender or antivirus software may block Gradio from creating public URLs. If you get a warning message, please add Gradio as an exclusion in that software.
Monitor screen drops and goes black during generation.
Your video card is experiencing trouble. Especially if you are using a high-spec video card that draws a lot of power, reviewing the power supply setup may solve the problem. If you built your own PC, recheck the power supply connections to the video card. A common mistake is connecting a card that has two power connectors with a single daisy-chained cable. Review the power supply's installation manual and connect it correctly.