ComfyUI Flux Dev Turbo Alpha LoRA Workflow

Workflow Name
flux-dev-turbo-alpha.json
Workflow Description
This workflow simply adds the Flux Turbo Alpha LoRA to my “flux-dev” basic workflow. It is intended for use with Flux Dev model-only checkpoints, with the VAE and text encoders loaded separately.
Note that I renamed the original Flux Turbo Alpha LoRA from “diffusion_pytorch_model.safetensors” to a more appropriate and recognisable name of “flux-turbo-alpha.safetensors”.
Default Workflow Dependencies
Download FP8 model-only checkpoint, save in ComfyUI\models\unet:
https://huggingface.co/Kijai/flux-fp8/tree/main
Download the CLIP-L text encoder (clip_l.safetensors), save in ComfyUI\models\clip:
https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main
Download the FP8 T5-XXL text encoder (t5xxl_fp8_e4m3fn.safetensors), save in ComfyUI\models\text_encoders:
https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main
Download the VAE (ae.safetensors), save in ComfyUI\models\vae:
https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main
Download Flux Turbo Alpha LoRA, save in ComfyUI\models\loras:
https://huggingface.co/alimama-creative/FLUX.1-Turbo-Alpha/tree/main
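With the default dependencies in place, the model folders look something like the sketch below. The exact filenames depend on which files you download (the FP8 T5 encoder is shown as an example), and the LoRA filename reflects the rename described above.

  ComfyUI\models\unet\flux1-dev-fp8.safetensors
  ComfyUI\models\clip\clip_l.safetensors
  ComfyUI\models\text_encoders\t5xxl_fp8_e4m3fn.safetensors
  ComfyUI\models\vae\ae.safetensors
  ComfyUI\models\loras\flux-turbo-alpha.safetensors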
Workflow Details
DualCLIPLoader
Purpose:
This node loads two text encoder models, which for Flux are the CLIP-L and T5-XXL encoders, allowing both to be used to encode the text prompts within the workflow.
Customizable Settings:
clip_name1: This setting selects the first text encoder to be loaded (the CLIP-L model in this workflow). Changing this will impact how the prompts are encoded.
clip_name2: This setting selects the second text encoder to be loaded (the T5-XXL model in this workflow). Changing this will impact how the prompts are encoded.
type: This setting selects the model family the encoders will be used with; it is set to flux in this workflow.
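The fragments in this section sketch how each node might appear in ComfyUI’s exported API-format JSON; the node IDs, links, filenames, and values are illustrative rather than taken from the actual workflow file. For the DualCLIPLoader:

  "3": {
    "class_type": "DualCLIPLoader",
    "inputs": {
      "clip_name1": "clip_l.safetensors",
      "clip_name2": "t5xxl_fp8_e4m3fn.safetensors",
      "type": "flux"
    }
  }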
VAELoader
Purpose:
This node loads a Variational Autoencoder (VAE) model, which is used to decode the latent image into a pixel-based image.
Customizable Settings:
vae_name: This setting allows the user to select the specific VAE model to be used. Changing this will affect the color representation and detail of the final image.
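A sketch of the same node in the illustrative API format, using the FLUX.1-dev VAE filename as an example:

  "4": {
    "class_type": "VAELoader",
    "inputs": {
      "vae_name": "ae.safetensors"
    }
  }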
UNETLoader
Purpose:
This node loads the diffusion model (UNET), which is the core component responsible for denoising the latent image.
Customizable Settings:
unet_name: This setting allows the user to select the UNET model file to be used. Changing this will alter the base model used for image generation.
weight_dtype: This setting selects the weight precision used when loading the model (default or one of the FP8 modes). Lower precision reduces VRAM usage at a small potential cost in quality.
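In the illustrative API format, with an example FP8 checkpoint filename:

  "1": {
    "class_type": "UNETLoader",
    "inputs": {
      "unet_name": "flux1-dev-fp8.safetensors",
      "weight_dtype": "fp8_e4m3fn"
    }
  }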
LoraLoaderModelOnly
Purpose:
This node loads a LoRA (Low-Rank Adaptation), which fine-tunes the base model for specific styles or subjects. This node applies only the model (UNET) portion of the LoRA and does not patch the text encoders.
Customizable Settings:
lora_name: This setting allows the user to select the LoRA model file to be used. Changing this will apply different fine-tuning to the base model.
strength_model: This setting controls the strength of the LoRA’s effect on the model. Adjusting this value will determine how much the LoRA influences the generated image.
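In the illustrative API format, the model input links to the UNETLoader output (node "1" in the earlier fragment), and a strength of 1.0 is shown as an example:

  "2": {
    "class_type": "LoraLoaderModelOnly",
    "inputs": {
      "lora_name": "flux-turbo-alpha.safetensors",
      "strength_model": 1.0,
      "model": ["1", 0]
    }
  }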
CLIPTextEncode (Positive Prompt)
Purpose:
This node encodes the positive text prompt into conditioning that the diffusion model can understand. This workflow has no negative prompt node, so this is the only prompt that is encoded.
Customizable Settings:
text: This setting allows the user to input the positive prompt text. Changing this will define the content of the generated image.
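In the illustrative API format, the clip input links to the DualCLIPLoader output and the text is just an example prompt:

  "5": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "text": "a photo of a red fox in a snowy forest",
      "clip": ["3", 0]
    }
  }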
width
Purpose:
This primitive node sets the width of the generated latent image.
Customizable Settings:
INT: This setting specifies the width in pixels. Changing this value will alter the horizontal dimension of the output image.
height
Purpose:
This primitive node sets the height of the generated latent image.
Customizable Settings:
INT: This setting specifies the height in pixels. Changing this value will alter the vertical dimension of the output image.
EmptySD3LatentImage
Purpose:
This node creates an empty latent image with the specified width and height, which is the starting point for image generation.
Customizable Settings:
width: This setting specifies the width of the latent image.
height: This setting specifies the height of the latent image.
batch_size: This sets the number of images to generate within the latent space.
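Note that the width and height primitives above are an editor convenience; when the workflow is exported in API format their values are baked straight into the EmptySD3LatentImage inputs, roughly as below (the dimensions are examples):

  "6": {
    "class_type": "EmptySD3LatentImage",
    "inputs": {
      "width": 1024,
      "height": 1024,
      "batch_size": 1
    }
  }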
RandomNoise
Purpose:
This node generates random noise, which is used as the initial input for the denoising process.
Customizable Settings:
noise_seed: This setting initializes the random number generator. Changing the seed will produce a different noise pattern, leading to variations in the generated image.
control_after_generate: This setting controls what happens to the seed after each generation: it can be kept fixed, randomized, incremented, or decremented.
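In the illustrative API format only the seed value itself is stored (the value below is an example); the fixed/randomize behaviour is handled by the editor:

  "7": {
    "class_type": "RandomNoise",
    "inputs": {
      "noise_seed": 123456789
    }
  }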
KSamplerSelect
Purpose:
This node selects the sampling method used during the denoising process.
Customizable Settings:
sampler_name: This setting allows the user to choose the sampling algorithm. Different samplers have different characteristics in terms of speed and quality.
BasicScheduler
Purpose:
This node computes the noise schedule (sigmas) that the sampler will follow.
Customizable Settings:
scheduler: This selects how the noise levels are distributed across the sampling steps.
steps: This sets the number of sampling steps. The Turbo Alpha LoRA is distilled for low step counts, so around 8 steps is typical here rather than the 20 or more usually used with Flux Dev alone.
denoise: This sets the denoising strength. A value of 1.0 starts from pure noise (text-to-image); lower values preserve more of the input latent.
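A sketch of the KSamplerSelect and BasicScheduler nodes together in the illustrative API format; the scheduler’s model input links to the LoRA-patched model (node "2"), and the euler sampler, simple scheduler, and 8 steps are example values suited to the Turbo Alpha LoRA:

  "8": {
    "class_type": "KSamplerSelect",
    "inputs": {
      "sampler_name": "euler"
    }
  },
  "9": {
    "class_type": "BasicScheduler",
    "inputs": {
      "model": ["2", 0],
      "scheduler": "simple",
      "steps": 8,
      "denoise": 1.0
    }
  }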
FluxGuidance
Purpose:
This node embeds the Flux guidance value into the conditioning, controlling how strongly the generated image follows the prompt.
Customizable Settings:
guidance: This setting controls the strength of the guidance applied to the conditioning. Higher values follow the prompt more closely; lower values give the model more freedom.
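In the illustrative API format, the conditioning input links to the positive prompt encoder (node "5"), and 3.5 is shown as an example guidance value:

  "10": {
    "class_type": "FluxGuidance",
    "inputs": {
      "conditioning": ["5", 0],
      "guidance": 3.5
    }
  }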
BasicGuider
Purpose:
This node applies the model and conditioning to create the guider used by the sampler.
Customizable Settings:
There are no customizable settings within this node itself; it takes in the model and the conditioning.
SamplerCustomAdvanced
Purpose:
This node performs the actual sampling process, iteratively denoising the latent image based on the guider, sigmas, and noise.
Customizable Settings:
There are no customizable settings within this node itself; it takes in the noise, guider, sampler, sigmas, and the latent image.
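A sketch of the BasicGuider and SamplerCustomAdvanced wiring in the illustrative API format, pulling together the nodes from the earlier fragments:

  "11": {
    "class_type": "BasicGuider",
    "inputs": {
      "model": ["2", 0],
      "conditioning": ["10", 0]
    }
  },
  "12": {
    "class_type": "SamplerCustomAdvanced",
    "inputs": {
      "noise": ["7", 0],
      "guider": ["11", 0],
      "sampler": ["8", 0],
      "sigmas": ["9", 0],
      "latent_image": ["6", 0]
    }
  }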
VAEDecode
Purpose:
This node decodes the latent image generated by the sampler into a pixel-based image.
Customizable Settings:
There are no customizable settings within this node itself; it takes in the latent image and the VAE.
SaveImage
Purpose:
This node saves the final generated image to a file.
Customizable Settings:
filename_prefix: This setting allows the user to specify a prefix for the saved image file name.
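A sketch of the final decode-and-save stage in the illustrative API format; the filename prefix is an example:

  "13": {
    "class_type": "VAEDecode",
    "inputs": {
      "samples": ["12", 0],
      "vae": ["4", 0]
    }
  },
  "14": {
    "class_type": "SaveImage",
    "inputs": {
      "images": ["13", 0],
      "filename_prefix": "flux-turbo-alpha"
    }
  }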
Credits
This workflow has been modified from the ComfyUI workflow:
https://comfyanonymous.github.io/ComfyUI_examples/flux/
Input from the Flux Turbo Alpha T2I workflow:
https://huggingface.co/alimama-creative/FLUX.1-Turbo-Alpha/blob/main/workflows/t2I_flux_turbo.json