Detailed Tutorial on Flux Redux Workflow

Flux Redux is an adapter model specifically designed for generating image variants. It can generate variants in a similar style based on the input image without the need for text prompts. This tutorial will guide you through the complete process from installation to usage.

Flux Redux Workflow

This tutorial is a detailed guide based on the official ComfyUI workflow. The original official tutorial can be found at: https://comfyanonymous.github.io/ComfyUI_examples/flux/

Introduction to the Flux Redux Model

The Flux Redux model is mainly used for:

  • Generating image variants: creates new images in a similar style based on an input image
  • No text prompts required: style features are extracted directly from the reference image
  • Compatibility: works with both the Flux.1 [Dev] and [Schnell] versions
  • Multi-image blending: styles from several input images can be combined

Flux Redux model repository: Flux Redux

Preparation

1. Update ComfyUI

First, ensure your ComfyUI is updated to the latest version. If you are unsure how to update and upgrade ComfyUI, please refer to How to Update and Upgrade ComfyUI.

2. Download Necessary Models

You need to download the following model files:

Model Name         | File Name                               | Installation Location
CLIP Vision Model  | sigclip_vision_patch14_384.safetensors  | ComfyUI/models/clip_vision
Redux Model        | flux1-redux-dev.safetensors             | ComfyUI/models/style_models
CLIP Model         | clip_l.safetensors                      | ComfyUI/models/clip
T5 Model           | t5xxl_fp16.safetensors                  | ComfyUI/models/clip
Flux Dev Model     | flux1-dev.safetensors                   | ComfyUI/models/unet
VAE Model          | ae.safetensors                          | ComfyUI/models/vae
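
If you want to verify that every file landed in the right folder, the following Python sketch checks the expected locations and reports anything missing. It assumes ComfyUI is installed in a folder named ComfyUI next to the script; adjust base_dir to match your installation.

```python
# Quick sanity check: confirm each downloaded model file sits in its expected folder.
# Assumption: ComfyUI is installed in a folder named "ComfyUI" next to this script;
# change base_dir to match your own installation path.
from pathlib import Path

base_dir = Path("ComfyUI")

expected_models = {
    "models/clip_vision/sigclip_vision_patch14_384.safetensors": "CLIP Vision Model",
    "models/style_models/flux1-redux-dev.safetensors": "Redux Model",
    "models/clip/clip_l.safetensors": "CLIP Model",
    "models/clip/t5xxl_fp16.safetensors": "T5 Model",
    "models/unet/flux1-dev.safetensors": "Flux Dev Model",
    "models/vae/ae.safetensors": "VAE Model",
}

for relative_path, name in expected_models.items():
    path = base_dir / relative_path
    status = "OK" if path.is_file() else "MISSING"
    print(f"[{status}] {name}: {path}")
```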

3. Download Workflow File

Download the workflow image from the official example page linked above, then drag and drop it into the ComfyUI interface to load the Flux Redux workflow.

Workflow Usage Guide

Workflow Node Description

The workflow mainly includes the following key nodes:

  1. Model Loading Nodes
  • CLIPVisionLoader: Load the CLIP Vision model
  • StyleModelLoader: Load the Redux model
  • UNETLoader: Load the Flux Dev/Schnell model
  • DualCLIPLoader: Load the CLIP and T5 text encoders
  • VAELoader: Load the VAE model
  2. Image Processing Nodes
  • LoadImage: Load the reference image
  • CLIPVisionEncode: Encode the reference image
  • StyleModelApply: Apply the Redux model
  • FluxGuidance: Control the generation intensity
  • BasicGuider: Combine the model and conditioning into a guider for sampling
  3. Sampling Nodes
  • KSamplerSelect: Select the sampler
  • BasicScheduler: Set the sampling schedule
  • SamplerCustomAdvanced: Advanced sampling settings
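
To make the data flow concrete, here is a sketch of the image/style branch written as a Python dict in ComfyUI's "API format" (the JSON you get from "Save (API Format)"). The node IDs are arbitrary placeholders and the input names follow the stock ComfyUI nodes; confirm both against an export from your own installation.

```python
# Sketch of the image/style branch in ComfyUI "API format": a Python dict that
# mirrors the JSON produced by "Save (API Format)". Node IDs are arbitrary strings,
# and each input holds either a literal value or a [node_id, output_index] reference.
# Input names follow the stock ComfyUI nodes; confirm them against your own export.
style_branch = {
    "1": {"class_type": "CLIPVisionLoader",
          "inputs": {"clip_name": "sigclip_vision_patch14_384.safetensors"}},
    "2": {"class_type": "LoadImage",
          "inputs": {"image": "reference.png"}},   # a reference image placed in ComfyUI/input
    "3": {"class_type": "CLIPVisionEncode",
          "inputs": {"clip_vision": ["1", 0], "image": ["2", 0]}},
    "4": {"class_type": "StyleModelLoader",
          "inputs": {"style_model_name": "flux1-redux-dev.safetensors"}},
    "5": {"class_type": "StyleModelApply",          # merges the Redux style into the conditioning
          "inputs": {"conditioning": ["6", 0],      # "6" stands for the text-conditioning node
                     "style_model": ["4", 0],
                     "clip_vision_output": ["3", 0]}},
}
print(f"{len(style_branch)} nodes in the style branch")
```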

Usage Steps

  1. Load Models

    • Select sigclip_vision_patch14_384.safetensors in CLIPVisionLoader
    • Load flux1-redux-dev.safetensors in StyleModelLoader
    • Load flux1-dev.safetensors in UNETLoader
    • Load clip_l.safetensors and t5xxl_fp16.safetensors in DualCLIPLoader
    • Load ae.safetensors in VAELoader
  2. Prepare Reference Image

    • Load the image you want to create variants of in the LoadImage node
    • The image will be automatically processed and encoded
  3. Adjust Generation Parameters

    • Adjust the generation intensity through the FluxGuidance node (default value 3.5)
    • Set the sampling steps in BasicScheduler (recommended 20 steps)
    • Choose an appropriate sampler (recommended euler)
  4. Set Image Size

    • Use PrimitiveNode to set the width and height of the output image
    • Default setting is 1024x1024
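
If you prefer to drive the workflow from a script instead of the UI, the sketch below queues an exported workflow through ComfyUI's local HTTP API. It assumes ComfyUI is running on the default port 8188 and that you have exported the workflow in API format as flux_redux_api.json (an arbitrary file name); the node IDs in the commented tweaks are placeholders.

```python
# Minimal sketch: queue an exported Flux Redux workflow through ComfyUI's HTTP API.
# Assumptions: ComfyUI is running locally on the default port 8188, and
# flux_redux_api.json is the workflow exported with "Save (API Format)".
import json
import urllib.request

with open("flux_redux_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Optional tweaks before queueing. The node IDs here are placeholders; look up
# the real IDs of the FluxGuidance and BasicScheduler nodes in your export.
# workflow["10"]["inputs"]["guidance"] = 3.5   # FluxGuidance intensity
# workflow["11"]["inputs"]["steps"] = 20       # BasicScheduler sampling steps

request = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))  # response includes the queued prompt_id
```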

Parameter Tuning Suggestions

Here are some practical parameter tuning suggestions:

  • Generation Intensity (FluxGuidance):

    • Higher values produce greater changes from the reference image
    • Lower values keep the result closer to the original image
    • Recommended range: 2.0-5.0
  • Sampling Steps:

    • More sampling steps generally produce richer detail
    • Recommended range: 20-30 steps
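
Building on the API sketch above, a small sweep over the recommended FluxGuidance range makes it easy to compare intensities side by side. The node ID "10" is a placeholder for the FluxGuidance node in your exported workflow.

```python
# Sketch of a FluxGuidance sweep within the recommended 2.0-5.0 range.
# Assumptions: same local setup as the previous snippet; "10" is a placeholder
# for the FluxGuidance node ID in your exported API-format workflow.
import copy
import json
import urllib.request

with open("flux_redux_api.json", "r", encoding="utf-8") as f:
    base_workflow = json.load(f)

def queue_prompt(workflow: dict) -> str:
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]

for guidance in (2.0, 2.5, 3.0, 3.5, 4.0, 5.0):
    variant = copy.deepcopy(base_workflow)
    variant["10"]["inputs"]["guidance"] = guidance  # placeholder node ID
    print(f"FluxGuidance {guidance} -> prompt_id {queue_prompt(variant)}")
```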

Advanced Techniques

  1. Multi-Image Blending

    • You can add multiple StyleModelApply nodes
    • Each node uses a different reference image
    • Adjust the influence weight of each image
  2. Size Optimization

    • Larger output sizes capture more detail
    • It is recommended to maintain the aspect ratio of the original image
    • Adjust the resolution according to your available VRAM
  3. Batch Generation

    • You can set multiple random seeds
    • Use batch processing to generate multiple variants
    • Compare and select the best result
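
For batch generation, one option is to queue the same exported workflow several times with different random seeds and compare the results. The sketch below assumes the same local API setup as the earlier snippets; "12" is a placeholder for the RandomNoise node that feeds SamplerCustomAdvanced, and its seed input is typically named noise_seed.

```python
# Sketch of batch generation: queue the same workflow with several random seeds
# and compare the results afterwards. Assumptions: same local setup as above;
# "12" is a placeholder for the RandomNoise node feeding SamplerCustomAdvanced,
# whose seed input is typically named "noise_seed".
import copy
import json
import random
import urllib.request

with open("flux_redux_api.json", "r", encoding="utf-8") as f:
    base_workflow = json.load(f)

queued = []
for _ in range(4):  # generate four variants of the same reference image
    variant = copy.deepcopy(base_workflow)
    seed = random.randint(0, 2**32 - 1)
    variant["12"]["inputs"]["noise_seed"] = seed  # placeholder node ID
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": variant}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        queued.append((seed, json.loads(resp.read())["prompt_id"]))

for seed, prompt_id in queued:
    print(f"seed {seed} -> prompt_id {prompt_id}")
```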

Common Problem Solutions

  1. Unsatisfactory Generation Results

    • Adjust the value of FluxGuidance
    • Increase the sampling steps
    • Try different reference images
  2. Insufficient VRAM

    • Reduce the image resolution
    • Reduce the sampling steps
    • Use the Flux Schnell version
  3. Model Loading Failure

    • Check that the model files are in the correct folders listed above
    • Confirm that the file names match exactly
    • Verify that the model files downloaded completely

Examples

You can try the following examples to familiarize yourself with the use of Flux Redux:

  1. Basic Variant Generation
  • FluxGuidance: 3.5
  • Steps: 20
  • Resolution: 1024x1024
  2. Multi-Image Blending
  • Use two reference images
  • Set FluxGuidance to 2.5 and 3.0 respectively
  • Steps: 25

Remember to save your satisfactory parameter combinations for future use.
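
One lightweight way to do that is to keep a small JSON file of presets. The sketch below is one possible approach; the file name and preset values are arbitrary examples.

```python
# Minimal sketch for saving favourite parameter combinations to a JSON file.
# The file name "flux_redux_presets.json" and the preset values shown are
# arbitrary examples; replace them with the combinations that work for you.
import json
from pathlib import Path

presets_file = Path("flux_redux_presets.json")
presets = json.loads(presets_file.read_text()) if presets_file.exists() else {}

presets["basic_variant"] = {"flux_guidance": 3.5, "steps": 20, "width": 1024, "height": 1024}
presets["multi_image_blend"] = {"flux_guidance": [2.5, 3.0], "steps": 25}

presets_file.write_text(json.dumps(presets, indent=2))
print(f"Saved {len(presets)} preset(s) to {presets_file}")
```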