How to use Flux.1 [dev] with WebUI Forge (plus GGUF and LoRA)
Following our series of articles on running Flux.1 with ComfyUI, let's run Flux.1 with Stable Diffusion WebUI Forge, an experimental fork of the A1111 WebUI. WebUI Forge's interface is almost identical to A1111 WebUI, so anyone familiar with A1111 will have no trouble using it. Forge is also said to surpass the original A1111 WebUI in functionality; in particular, its memory management is excellent, automatically managing VRAM for you. And because you don't have to assemble complicated node graphs as in ComfyUI, it is approachable for beginners to AI image generation.
How to install Stable Diffusion WebUI Forge
Download Files
To install Stable Diffusion WebUI Forge, download one of the three versions available on the official GitHub release page. WebUI Forge does not require Python or Git to be installed separately like A1111 WebUI, but comes bundled with all the necessary files.
- webui_forge_cu121_torch21.7z: the bundle used in previous releases.
- webui_forge_cu121_torch231.7z: the officially recommended bundle.
- webui_forge_cu124_torch24.7z: the fastest bundle. However, xformers does not work and MSVC is broken.
In this case, download the second, officially recommended file, webui_forge_cu121_torch231.7z.
Extract and update files
Extract the downloaded 7zip file to the location where you want to install it. (e.g. C:\Users\YOUR USER NAME\)
Once extracted, run update.bat in the folder to update the software. *If you skip this step, Flux.1 may not work. When the update finishes, the terminal shows "Press any key to continue . . ."; press any key to close it.
After the update is complete, run run.bat to start WebUI Forge. The first launch takes a while because it downloads and installs the required files; wait until your browser opens.
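If you prefer the terminal, the setup steps above amount to just a few commands (assuming you extracted the archive to your user folder as in the example above; adjust the path to match your installation):

```
cd C:\Users\YOUR USER NAME\webui_forge_cu121_torch231
update.bat
run.bat
```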
If the browser opens, the installation was successful. However, at this point you cannot do anything, because no models are installed. If you are already using A1111 WebUI, use the following method to share its models. If you do not have any models, skip the next step and install the Flux.1 model.
Share the A1111 WebUI model
There are two ways to share models.
Sharing Method (1)
Open the webui-user.bat file in \webui_forge_cu121_torch231\webui with a text editor such as Notepad.
Fill in "set COMMANDLINE_ARGS=" on line 6 with the paths to the A1111 WebUI ckpt, lora, vae, and embeddings folders, as in the example below.
set COMMANDLINE_ARGS=--ckpt-dir "C:\Users\USER-NAME\stable-diffusion-webui\models\Stable-diffusion" --lora-dir "C:\Users\USER-NAME\stable-diffusion-webui\models\Lora" --embeddings-dir "C:\Users\USER-NAME\stable-diffusion-webui\embeddings" --vae-dir "C:\Users\USER-NAME\stable-diffusion-webui\models\VAE"
If you do not know the path, open the folder of the checkpoint or LoRA you want to share in File Explorer. You can copy the path by right-clicking on the address bar in Explorer and selecting “Copy Address” from the menu.
Sharing Method (2)
This is the official method, although in our testing the VAE could not be shared this way.
Delete the @REM in the commented-out section on lines 9 through 16, then replace "Your A1111 checkout dir" on line 9 with the path of the installation you are sharing from. (e.g. C:\Users\YOUR USER NAME\stable-diffusion-webui)
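As a sketch of what this looks like, in recent Forge releases the commented-out block of webui-user.bat resembles the following once the @REM markers are removed and the path is filled in (the exact flags and line numbers can vary between versions, so treat this as illustrative, not authoritative):

```
set A1111_HOME=C:\Users\YOUR USER NAME\stable-diffusion-webui

set COMMANDLINE_ARGS=%COMMANDLINE_ARGS% ^
 --ckpt-dir %A1111_HOME%/models/Stable-diffusion ^
 --hypernetwork-dir %A1111_HOME%/models/hypernetworks ^
 --embeddings-dir %A1111_HOME%/embeddings ^
 --lora-dir %A1111_HOME%/models/Lora
```

Note that there is no --vae-dir line here, which matches the observation above that the VAE could not be shared with this method.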
How to install Flux.1 [dev] in WebUI Forge
After you have successfully installed WebUI Forge, let’s install the Flux.1 model. We will introduce the official Flux.1 [dev] version, the GGUF version, and the BitsandBytes NF4 version of the model.
Download Checkpoint Model
Download the model of your choice from the list below. The "Flux.1 [dev] official model" and the "Flux.1 [dev] GGUF model" will be used in the walkthrough later. Download to \webui_forge_cu121_torch231\webui\models\Stable-diffusion. If you are sharing models with A1111 WebUI, download to the shared folder instead.
Flux.1 [dev] official model
The official Flux.1 [dev] model is available on HuggingFace from black-forest-labs. The file is large, at 23.8 GB.
Flux.1 [dev] GGUF model
The Flux.1 [dev] GGUF model is available on HuggingFace from city96. Here we use the 8-bit version.
Flux.1 [dev] BitsandBytes NF4 model
Flux.1 [dev] BitsandBytes NF4 models are available on HuggingFace from WebUI Forge developer lllyasviel. *This model requires an RTX 30XX/40XX GPU.
Flux.1 [dev] FP8 model
An FP8 version of the official model. *This is the one for GTX 10XX/20XX GPUs.
Download Text Encoder
After downloading the checkpoint model, download the following text encoder to \webui_forge_cu121_torch231\webui\models\text_encoder.
Download FP16 (9.79 GB) or FP8 (4.89 GB); FP8 is less accurate but smaller. The text encoder is processed in RAM, so if your PC has enough RAM, use the larger FP16. We will use FP16 (9.79 GB) in the walkthrough later.
t5xxl_fp16.safetensors
t5xxl_fp8_e4m3fn.safetensors
If you want to use the GGUF model, you can also download the text encoder below, although it is not required. Here we use the 8-bit version.
t5-v1_1-xxl-encoder-Q8_0.gguf
Download VAE
Download the official black-forest-labs VAE to \webui_forge_cu121_torch231\webui\models\VAE. If this is also shared, download it to the shared folder.
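To summarize the downloads so far, the files end up in the following folders under the extracted webui_forge_cu121_torch231 directory (clip_l.safetensors, which is selected later alongside the T5 encoder, also goes in the text_encoder folder):

```
webui\models\Stable-diffusion\
    flux1-dev.safetensors          (or the GGUF / NF4 / FP8 variant)
webui\models\text_encoder\
    clip_l.safetensors
    t5xxl_fp16.safetensors         (or t5xxl_fp8_e4m3fn.safetensors / t5-v1_1-xxl-encoder-Q8_0.gguf)
webui\models\VAE\
    ae.safetensors
```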
This completes the installation of Flux.1 [dev]. Next, let’s look at how to use it.
How to use Flux.1 [dev] in WebUI Forge
Basic Usage
If you have never used A1111 WebUI or WebUI Forge before, the following article explains the basics of A1111 WebUI. The basic operation of WebUI Forge is the same, so please take a look at it.
First, let's cover basic usage of Flux.1 [dev]. In the selector labeled "UI" in the upper left corner of the interface, where you can choose sd/xl/flux/all, select flux. The settings will then switch over for Flux.
*If you want to change the default value, you can use “UI defaults ‘flux’” under User Interface in the Settings.
Be careful with the "GPU Weights (MB)" value at the top of the UI. If this value is too high, there will not be enough VRAM left for image generation and speed can drop to a tenth of normal. For more details, please refer to the 🔗official repository.
Loading checkpoint models
Now, starting from the top of the UI, select the checkpoint model you just downloaded under Checkpoint. Here we select flux1-dev.safetensors as an example.
Loading VAE and Text Encoder
Next, under VAE / Text Encoder, select ae.safetensors, clip_l.safetensors, and t5xxl_fp16.safetensors. The first two are the same for all models, but for the T5 XXL encoder you choose one of FP16/FP8/GGUF; change it according to your RAM capacity.
Entering Prompts
Enter the prompt. In this case, let's use the astronaut-in-a-jungle prompt from Forge's examples.
Astronaut in a jungle, cold color palette, muted colors, very detailed, sharp focus
Generation
Other values can be left at their defaults. Click the "Generate" button to generate. You should get the kind of high-quality image that is characteristic of Flux.1 [dev].
Generation with LoRA and GGUF
Now that you know the basics, let's try speeding up generation with a GGUF model and, as an applied example, changing the illustration style with LoRA.
Download LoRAs
First, download the following three LoRAs to \webui_forge_cu121_torch231\webui\models\Lora. If you share models with A1111 WebUI, download them to the shared folder.
Download Upscaler Model
The upscaler model "4x-UltraSharp" used in this example is not installed by default, so download the following model to \webui_forge_cu121_torch231\webui\models\ESRGAN.
The method of loading LoRA is the same as A1111 WebUI, so please refer to the following article for details.
Loading checkpoint model
Change the checkpoint model to flux1-dev-Q8_0.gguf.
Loading text encoders
You can choose t5-v1_1-xxl-encoder-Q8_0.gguf as the text encoder for a further speedup, but this time we use t5xxl_fp16.safetensors for quality.
Diffusion in Low Bits setting
To apply LoRA to lightweight checkpoint models such as GGUF and FP8, the Diffusion in Low Bits setting must be changed to Automatic (fp16 LoRA).
Entering Prompts
Next, enter the prompt. Flux.1 understands both natural-language and Danbooru-style prompts, so we mix the two. The last line loads the LoRAs and sets their weights.
A masterful highly intricate detailed cinematic digital painting.
In the European medieval fantasy era.
An wizard young girl with a wizard hat and robe is making healing potions in a sun-drenched workshop.
There are many bundles of medicinal herbs hanging in the workshop.
A lot of sun rays in the workshop.
The wall of workshop is colored turquoise blue.
On the table are many completed bottles of healing potions.
On a shelf in the background is a small glass terrarium of a tropical rainforest with a tiny floating light particles.
A hyper realistic, very detailed, masterpiece, intricate details, clear flat anime eyes, round shaped pupils, large highlight on eyes, 50mm lens shot, soft edge line for girl's face, smile, looking at viewer, head tilt, correct perspective, flat color style
<lora:FLUX-daubrez-DB4RZ-v2:0.75> <lora:Flux.1_Turbo_Detailer:0.4> <lora:sifw-annihilation-fluxd-lora-v11:0.75>
Entering Parameters
Change the following parameters:
- Schedule type: Beta
- Sampling steps: 30
- Width: 1280
- Height: 720
Change the Hires. fix settings as follows, leaving the checkbox unchecked:
- Upscaler: 4x-UltraSharp
- Hires steps: 10
- Denoising strength: 0.35
- Upscale by: 1.5
Generation
This completes the parameter changes. Let’s generate it by pressing the “Generate” button.
When you are happy with the composition, use the "✨" button in the preview area to apply the Hires. fix.
Final Results
Flux.1 produces photorealistic images of very high quality, but its illustrations are sometimes weaker. With LoRA, however, it can produce high-quality illustrations like this one.
Conclusion
In this article, we covered everything from installing WebUI Forge to using GGUF and LoRA. Although we did not benchmark it in detail, Flux.1 generation felt faster than in ComfyUI. If you have a low-spec PC, using GGUF to run the heavyweight Flux.1 is a big advantage.