Stable Cascade Examples
First download the stable_cascade_stage_c.safetensors and stable_cascade_stage_b.safetensors checkpoints and put them in the ComfyUI/models/checkpoints folder.
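As a quick sanity check for the folder layout above, here is a minimal sketch; the helper name `missing_checkpoints` is hypothetical, only the file names and the ComfyUI/models/checkpoints path come from the text:

```python
from pathlib import Path

REQUIRED = [
    "stable_cascade_stage_c.safetensors",
    "stable_cascade_stage_b.safetensors",
]

def missing_checkpoints(comfyui_root):
    """Return the required Stable Cascade checkpoints that are not yet
    present in <comfyui_root>/models/checkpoints."""
    ckpt_dir = Path(comfyui_root) / "models" / "checkpoints"
    return [name for name in REQUIRED if not (ckpt_dir / name).exists()]
```

If this returns a non-empty list, ComfyUI will not be able to load the corresponding checkpoint in the workflows below.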
Stable Cascade is a three-stage process: first, a low-resolution latent image is generated with the Stage C diffusion model. This latent is then upscaled with the Stage B diffusion model. Finally, the upscaled latent is upscaled again and decoded to pixel space by the Stage A VAE.
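The cascade of resolutions can be sketched as follows. The factors used here (roughly 1/42 for the Stage C latent and 1/4 for the Stage B latent, matching the defaults of ComfyUI's Stable Cascade empty-latent node) are an assumption for illustration, and `cascade_latent_sizes` is a hypothetical helper:

```python
def cascade_latent_sizes(width, height, compression=42):
    """Approximate spatial sizes at each stage for a target pixel resolution.

    Assumed factors: Stage C latent ~1/compression, Stage B latent 1/4.
    """
    stage_c = (height // compression, width // compression)  # low-res latent from Stage C
    stage_b = (height // 4, width // 4)                      # upscaled latent from Stage B
    stage_a = (height, width)                                # pixels after the Stage A VAE
    return stage_c, stage_b, stage_a
```

For a 1024x1024 image this gives a very small Stage C latent, which is why Stage C can be sampled quickly even at high target resolutions.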
Note that you can download any of the images on this page and drag or load them onto ComfyUI to get the workflow embedded in the image.
Text to Image
Here is a basic text to image workflow:
Image to Image
Here's an example of how to do basic image to image by encoding the image and passing it to Stage C.
Image Variations
Stable Cascade supports creating variations of images using the output of CLIP vision. See the following workflow for an example:
See this next workflow for how to mix multiple images together:
You can find the input image for the above workflows on the unCLIP example page.
ControlNet
You can download the Stable Cascade controlnets here. For these examples I have renamed the files by adding stable_cascade_ in front of the filename, for example: stable_cascade_canny.safetensors, stable_cascade_inpainting.safetensors.
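The renaming step above can be sketched as a small script; `add_prefix` is a hypothetical helper, and only the stable_cascade_ prefix comes from the text:

```python
from pathlib import Path

def add_prefix(directory, prefix="stable_cascade_"):
    """Rename every .safetensors file in `directory` by prepending `prefix`,
    skipping files that already carry it (so the script is safe to re-run)."""
    for f in Path(directory).glob("*.safetensors"):
        if not f.name.startswith(prefix):
            f.rename(f.with_name(prefix + f.name))
```

Run it once on your ComfyUI controlnet folder after downloading the files, then reload ComfyUI so the renamed files show up in the loader node.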
Here is an example of how to use the Canny ControlNet:
Here is an example of how to use the inpaint ControlNet; the example input image can be found here. As a reminder, you can right-click images in the LoadImage node and edit them with the mask editor.