ComfyUI Flux Dev Workflow

Workflow Name
flux-dev
Workflow Description
This is a basic workflow for Flux Dev model-only checkpoints, with the VAE and text encoders loaded separately.
Default Workflow Dependencies
Download FP8 model-only checkpoint, save in ComfyUI\models\unet:
https://huggingface.co/Kijai/flux-fp8/blob/main/flux1-dev-fp8-e4m3fn.safetensors
Download CLIP-L text encoder, save in ComfyUI\models\clip:
https://huggingface.co/comfyanonymous/flux_text_encoders/blob/main/clip_l.safetensors
Download FP8 T5 text encoder, save in ComfyUI\models\clip:
https://huggingface.co/comfyanonymous/flux_text_encoders/blob/main/t5xxl_fp8_e4m3fn.safetensors
Download VAE, save in ComfyUI\models\vae:
https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/ae.safetensors
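The file-to-folder mapping above can be sketched in Python to double-check where each download belongs; the ComfyUI root path is a placeholder you would adjust to your own install.

```python
from pathlib import Path

# Placeholder install location -- adjust to your own ComfyUI root.
COMFYUI_ROOT = Path(r"C:\ComfyUI")

# File -> models subfolder, per the dependency list above.
DEPENDENCIES = {
    "flux1-dev-fp8-e4m3fn.safetensors": "unet",
    "clip_l.safetensors": "clip",
    "t5xxl_fp8_e4m3fn.safetensors": "clip",
    "ae.safetensors": "vae",
}

def destination(filename: str) -> Path:
    """Return the folder a downloaded file should be saved into."""
    return COMFYUI_ROOT / "models" / DEPENDENCIES[filename]

for name in DEPENDENCIES:
    print(name, "->", destination(name))
```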
Workflow Details
UNETLoader
Purpose: This node loads the UNET model, which is responsible for the denoising process during image generation.
Customizable Settings:
unet_name: This setting allows the user to select the UNET model checkpoint file. Changing this file will alter the diffusion process, significantly affecting the generated image’s style and content.
weight_dtype: This setting allows the user to select the precision the UNET weights are loaded in (for example default, fp8_e4m3fn, or fp8_e5m2). The lower-precision FP8 options reduce VRAM usage at a small cost in output quality.
DualCLIPLoader
Purpose: This node loads the two text encoders Flux models require (CLIP-L and T5-XXL) and provides the combined CLIP object used for text encoding.
Customizable Settings:
clip_name1: This setting allows the user to select the T5-XXL FP8 text encoder checkpoint file. Changing this file will alter the text encoding process, potentially impacting the generated image’s adherence to the prompt.
clip_name2: This setting allows the user to select the CLIP-L text encoder checkpoint file. Changing this file will alter the text encoding process, potentially impacting the generated image’s adherence to the prompt.
type: This setting allows the user to select the model family the encoders are loaded for (here, flux). It determines how the two encoders' outputs are combined and must match the loaded diffusion model.
VAELoader
Purpose: This node loads the VAE (Variational Autoencoder) model, which is used to encode and decode latent images.
Customizable Settings:
vae_name: This setting allows the user to select the VAE model checkpoint file. Changing this file will alter the encoding and decoding process, affecting the image’s overall quality and detail.
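In ComfyUI's exported API (JSON) format, the three loader nodes above look roughly like the sketch below. The node IDs are arbitrary labels, and the filenames assume the dependency list from earlier; your exported workflow may differ in detail.

```python
import json

# Sketch of the three loader nodes in ComfyUI API (JSON) format.
# Node IDs ("1", "2", "3") are arbitrary, not fixed values.
loaders = {
    "1": {
        "class_type": "UNETLoader",
        "inputs": {
            "unet_name": "flux1-dev-fp8-e4m3fn.safetensors",
            "weight_dtype": "fp8_e4m3fn",
        },
    },
    "2": {
        "class_type": "DualCLIPLoader",
        "inputs": {
            "clip_name1": "t5xxl_fp8_e4m3fn.safetensors",
            "clip_name2": "clip_l.safetensors",
            "type": "flux",
        },
    },
    "3": {
        "class_type": "VAELoader",
        "inputs": {"vae_name": "ae.safetensors"},
    },
}

print(json.dumps(loaders, indent=2))
```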
PrimitiveNode (width)
Purpose: This node defines the width of the latent image.
Customizable Settings:
value: This setting allows the user to specify the width of the generated image in pixels. Changing this value will directly affect the image’s horizontal resolution.
control_after_generate: This setting allows the user to choose whether the value stays fixed or is incremented, decremented, or randomized after each generation.
PrimitiveNode (height)
Purpose: This node defines the height of the latent image.
Customizable Settings:
value: This setting allows the user to specify the height of the generated image in pixels. Changing this value will directly affect the image’s vertical resolution.
control_after_generate: This setting allows the user to choose whether the value stays fixed or is incremented, decremented, or randomized after each generation.
EmptySD3LatentImage
Purpose: This node creates an empty latent image with the specified width and height, serving as the starting point for the diffusion process.
Customizable Settings:
batch_size: This setting allows the user to specify how many latent images are generated per run. Increasing this value produces more images at the cost of additional VRAM.
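The latent's dimensions follow from the width and height primitives. Assuming the standard Flux/SD3 VAE (8x spatial downscale, 16 latent channels), the empty latent's shape can be sketched as:

```python
def empty_latent_shape(width: int, height: int, batch_size: int = 1):
    """Shape of the zero-filled latent EmptySD3LatentImage produces.

    Assumes the standard Flux/SD3 VAE: 8x spatial downscale, 16 channels.
    """
    if width % 8 or height % 8:
        raise ValueError("width and height should be multiples of 8")
    return (batch_size, 16, height // 8, width // 8)

print(empty_latent_shape(1024, 1024))  # (1, 16, 128, 128)
```

This is also why the width and height values should stay multiples of 8.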
RandomNoise
Purpose: This node generates random noise, which is used as the initial input for the denoising process.
Customizable Settings:
noise_seed: This setting allows the user to specify the seed for the random noise generation. Using the same seed will produce the same noise pattern.
control_after_generate: This setting allows the user to choose whether the seed stays fixed or is incremented, decremented, or randomized after each run; randomize is the usual choice for varied outputs.
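The seed behaviour can be illustrated outside ComfyUI: the same seed always reproduces the same noise, while a different seed gives different noise. This sketch uses NumPy rather than ComfyUI's internal torch-based noise generator, but the principle is identical.

```python
import numpy as np

def make_noise(seed: int, shape=(1, 16, 128, 128)):
    """Gaussian noise, like the initial latent noise, keyed by seed."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

a = make_noise(42)
b = make_noise(42)   # same seed -> identical noise
c = make_noise(43)   # different seed -> different noise
print(np.array_equal(a, b), np.array_equal(a, c))  # True False
```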
KSamplerSelect
Purpose: This node selects the KSampler algorithm used for the denoising process.
Customizable Settings:
sampler_name: This setting allows the user to select the sampling algorithm (for example euler, which works well with Flux). Different samplers trade speed against convergence behavior.
BasicScheduler
Purpose: This node creates a schedule of noise sigmas, which guide the denoising process.
Customizable Settings:
scheduler: This setting allows the user to select the scheduler type (for example normal, simple, or sgm_uniform). Changing this changes how the noise sigmas are spaced across the steps.
steps: This setting allows the user to specify the number of sampling steps. Changing this value will affect the level of detail and quality of the generated image.
denoise: This setting allows the user to specify how much of the noise schedule is applied. A value of 1.0 denoises from pure noise (text-to-image); lower values preserve part of an existing latent, as in image-to-image.
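As an illustrative sketch (not ComfyUI's exact math), a sigma schedule is just a decreasing sequence of steps+1 noise levels ending at 0, here interpolated linearly between a maximum and minimum sigma:

```python
def simple_sigmas(steps: int, sigma_max: float = 1.0, sigma_min: float = 0.003):
    """Toy noise schedule: steps+1 decreasing sigmas ending at 0.

    Illustrative only -- ComfyUI's schedulers ("normal", "simple",
    "sgm_uniform", ...) use model-specific spacings.
    """
    sigmas = [
        sigma_max + (sigma_min - sigma_max) * i / (steps - 1)
        for i in range(steps)
    ]
    return sigmas + [0.0]  # final sigma of 0 = fully denoised

print(simple_sigmas(4))
```

With a denoise value below 1.0, sampling effectively starts partway down such a schedule instead of at the top.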
CLIPTextEncode (Positive Prompt)
Purpose: This node encodes the positive text prompt into a format that the diffusion model can understand.
Customizable Settings:
text: This setting allows the user to input the positive text prompt. Changing this prompt will drastically alter the generated image’s content.
FluxGuidance
Purpose: This node embeds the guidance scale into the encoded prompt's conditioning. Flux Dev is guidance-distilled, so this takes the place of a classical CFG scale.
Customizable Settings:
guidance: This setting allows the user to set the Flux guidance scale. Higher values increase prompt adherence at the risk of over-saturated or less natural results; around 3.5 is a common starting point for Flux Dev.
BasicGuider
Purpose: This node combines the model and conditioning into a guider, which is used by the sampler.
Customizable Settings: This node contains no customizable settings.
SamplerCustomAdvanced
Purpose: This node performs the diffusion process, iteratively denoising the latent image based on the prompt and noise schedule.
Customizable Settings: This node contains no customizable settings.
VAEDecode
Purpose: This node decodes the latent image generated by the sampler into a pixel image.
Customizable Settings: This node contains no customizable settings.
SaveImage
Purpose: This node saves the final generated image to a file.
Customizable Settings:
filename_prefix: This setting allows the user to specify the prefix of the saved file name within the ComfyUI output directory; including a forward slash (e.g. flux/image) saves into a subfolder.
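Beyond the ComfyUI interface, the whole workflow can be queued programmatically through ComfyUI's HTTP endpoint. A minimal sketch, assuming a default local server at 127.0.0.1:8188; the JSON filename is a placeholder for a workflow exported with "Save (API Format)" from the ComfyUI menu.

```python
import json
import urllib.request

def build_payload(workflow: dict) -> bytes:
    """Wrap an API-format workflow the way the /prompt endpoint expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(workflow: dict, host: str = "127.0.0.1:8188") -> bytes:
    """POST a workflow to a running ComfyUI server and return its reply."""
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Placeholder filename -- export your own workflow in API format first:
# with open("flux-dev-api.json") as f:
#     queue_prompt(json.load(f))
```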
Credits
This workflow was adapted from the ComfyUI Flux examples:
https://comfyanonymous.github.io/ComfyUI_examples/flux/