ComfyUI SD 1.5 Workflow

Workflow Name
sd-1.5.json
Workflow Description
Basic workflow to generate images using Stable Diffusion 1.5 model checkpoints.
Workflow Dependencies
Download the SD 1.5 checkpoint and save it in ComfyUI\models\checkpoints:
https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.safetensors
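If you prefer to script the download, here is a minimal sketch using the huggingface_hub package (an assumption; install it with pip install huggingface_hub). Adjust local_dir to match your ComfyUI installation.

```python
from huggingface_hub import hf_hub_download

# Downloads the SD 1.5 checkpoint straight into ComfyUI's checkpoint folder.
# Adjust local_dir to wherever your ComfyUI installation lives.
hf_hub_download(
    repo_id="stable-diffusion-v1-5/stable-diffusion-v1-5",
    filename="v1-5-pruned-emaonly.safetensors",
    local_dir=r"ComfyUI\models\checkpoints",
)
```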
Workflow Details
CheckpointLoaderSimple
Purpose:
Loads a Stable Diffusion checkpoint file. This file bundles the diffusion model (UNet), the CLIP text encoder, and the VAE, all of which are needed for image generation.
Customizable Settings:
checkpoint file: Selecting a different Stable Diffusion 1.5 checkpoint drastically alters the style and content of the generated images, since different models are trained or fine-tuned on different datasets and can produce widely varying results.
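ComfyUI can export a workflow in API format (Save (API Format), available when dev mode is enabled in the settings). For illustration, here is a sketch of how this node looks in that format, written as a Python dict; the node ID "1" is hypothetical, and the actual IDs in sd-1.5.json may differ.

```python
# Minimal sketch of the node in API-format JSON, written as a Python dict.
# The node ID "1" is hypothetical; ckpt_name must match a file in
# ComfyUI\models\checkpoints.
checkpoint_loader = {
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "v1-5-pruned-emaonly.safetensors"},
    }
}
```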
CLIPTextEncode (Positive Prompt)
Purpose:
Encodes the positive text prompt into a format that the Stable Diffusion model can understand (conditioning). This prompt guides the image generation towards the desired content.
Customizable Settings:
text prompt: Changing this setting directly influences the subject, style, and details of the generated image. A more detailed and specific prompt will generally lead to more accurate results.
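A matching sketch of the positive-prompt node; the ID "2" and the prompt text are hypothetical, and ["1", 1] references the CLIP output (index 1) of the checkpoint loader sketched above.

```python
# Sketch of the positive-prompt node ("2" is a hypothetical node ID).
# The clip input references output 1 (the CLIP model) of the checkpoint
# loader, assumed here to be node "1".
positive_prompt = {
    "2": {
        "class_type": "CLIPTextEncode",
        "inputs": {
            "text": "a scenic mountain landscape at sunrise, highly detailed",
            "clip": ["1", 1],
        },
    }
}
```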
CLIPTextEncode (Negative Prompt)
Purpose:
Encodes the negative text prompt into a format that the Stable Diffusion model can understand (conditioning). This prompt tells the model what to avoid in the generated image.
Customizable Settings:
text prompt: Changing this setting helps to remove unwanted artifacts, styles, or subjects from the generated image, leading to cleaner and more refined results.
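The negative prompt uses the same node type; only the text differs. The ID "3" and the example terms below are hypothetical.

```python
# Sketch of the negative-prompt node ("3" is a hypothetical node ID).
negative_prompt = {
    "3": {
        "class_type": "CLIPTextEncode",
        "inputs": {
            "text": "blurry, low quality, watermark, text",
            "clip": ["1", 1],
        },
    }
}
```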
EmptyLatentImage
Purpose:
Creates an empty (zero-filled) latent image. This blank latent is the starting point for the image generation process.
Customizable Settings:
width, height, and batch size: Width and height set the resolution of the generated image; SD 1.5 was trained at 512x512, so staying near that size generally gives the best results, and both dimensions must be multiples of 8. Batch size determines how many images are generated at once.
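A sketch of this node under the same hypothetical numbering:

```python
# Sketch of the empty latent node ("4" is a hypothetical ID). SD 1.5 was
# trained at 512x512, so that is the usual starting resolution; the latent
# is 1/8 the pixel size, so width and height must be multiples of 8.
empty_latent = {
    "4": {
        "class_type": "EmptyLatentImage",
        "inputs": {"width": 512, "height": 512, "batch_size": 1},
    }
}
```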
KSampler
Purpose:
Performs the actual image generation by iteratively denoising the latent image based on the provided conditioning (prompts).
Customizable Settings:
seed: Controls the random noise used to initialize generation. The same seed with identical settings reproduces the same image, while a different seed produces a different result even with the same prompt.
control after generate: Determines how the seed changes after each run: fixed keeps it the same, increment and decrement step it by one, and randomize picks a new random seed.
steps: Sets how many denoising iterations are performed. More steps usually improve quality at the cost of longer generation times, with diminishing returns beyond roughly 20 to 30 steps for SD 1.5.
CFG scale: Controls how closely the generated image adheres to the prompt. Higher values mean stronger adherence but can introduce artifacts and oversaturation; values around 7 to 8 are a common starting point.
sampler: Selects the denoising algorithm (for example euler or dpmpp_2m). Different samplers trade off speed, determinism, and output character.
scheduler: Controls how noise levels are spaced across the steps (for example normal or karras), which affects detail and convergence; different schedulers can produce noticeably different results.
denoise: Sets how much of the starting latent is re-noised and regenerated. Leave it at 1.0 when starting from an EmptyLatentImage; values below 1.0 are only meaningful when sampling over an existing image (img2img-style workflows).
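A sketch of the KSampler node under the same hypothetical numbering. Note that control after generate is a widget handled by the ComfyUI front end; it does not appear in the exported API-format JSON.

```python
# Sketch of the KSampler node ("5" is a hypothetical ID). The references
# point at the hypothetical node IDs used in the fragments above.
ksampler = {
    "5": {
        "class_type": "KSampler",
        "inputs": {
            "model": ["1", 0],       # MODEL output of the checkpoint loader
            "positive": ["2", 0],    # positive conditioning
            "negative": ["3", 0],    # negative conditioning
            "latent_image": ["4", 0],
            "seed": 123456789,
            "steps": 20,
            "cfg": 7.0,
            "sampler_name": "euler",
            "scheduler": "normal",
            "denoise": 1.0,
        },
    }
}
```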
VAEDecode
Purpose:
Decodes the latent image produced by the KSampler into a viewable pixel-space image, using the VAE from the checkpoint.
Customizable Settings:
No customizable settings.
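A sketch of the decode node under the same hypothetical numbering; it connects the KSampler's latent output to the VAE output (index 2) of the checkpoint loader.

```python
# Sketch of the decode node ("6" is a hypothetical ID). It takes the
# KSampler's LATENT output and the VAE (output 2 of the checkpoint loader).
vae_decode = {
    "6": {
        "class_type": "VAEDecode",
        "inputs": {"samples": ["5", 0], "vae": ["1", 2]},
    }
}
```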
SaveImage
Purpose:
Saves the generated image to ComfyUI's output folder.
Customizable Settings:
filename prefix: Sets the prefix for saved image files; ComfyUI appends an incrementing counter so earlier results are not overwritten.
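To close the loop, here is a sketch of the save node plus a minimal snippet that queues a workflow against a locally running ComfyUI instance (default port 8188). The filename sd-1.5-api.json is hypothetical; it assumes you exported this workflow with Save (API Format), since the regular UI export uses a different schema that the /prompt endpoint will not accept.

```python
import json
import urllib.request

# Sketch of the save node ("7" is a hypothetical ID). ComfyUI appends an
# incrementing counter to filename_prefix and writes into its output folder.
save_image = {
    "7": {
        "class_type": "SaveImage",
        "inputs": {"images": ["6", 0], "filename_prefix": "sd15"},
    }
}

# Queue an API-format workflow against a locally running ComfyUI.
# sd-1.5-api.json is a hypothetical filename for the API-format export.
with open("sd-1.5-api.json") as f:
    prompt = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": prompt}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # the response includes a prompt_id
```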