
Flux ControlNet

This article compiles the ControlNet models currently available for the Flux ecosystem.

XLabs-AI/flux-controlnet-collections

XLabs-AI/flux-controlnet-collections is a collection of ControlNet checkpoints for Black Forest Labs' FLUX.1 [dev] model. The repository was developed by XLabs-AI to provide more control options for the Flux ecosystem.

Example results: depth-guided and Canny edge-guided generations.

Main features:

  1. Supports three ControlNet models:

    • Canny (edge detection)
    • HED (edge detection)
    • Depth (depth map, based on MiDaS)
  2. All models are trained at 1024x1024 resolution and are best suited to generating images at that resolution.

  3. Provides v3 checkpoints, an improved and more realistic revision that can be used directly in ComfyUI.

  4. Offers custom nodes and workflows for ComfyUI, making it easy for users to get started quickly.

  5. Provides sample images and generation results, showcasing the model’s effects.

Usage:

  1. Use through the official repository’s main.py script.
  2. Use the provided custom nodes and workflows in ComfyUI; the official workflows are available at https://huggingface.co/XLabs-AI/flux-controlnet-collections/tree/main/workflows
  3. Use the Gradio demo interface.

License:

These model weights follow the FLUX.1 [dev] non-commercial license.

Here’s a list of ControlNet models provided in the XLabs-AI/flux-controlnet-collections repository:

| Model Name | File Size | Description |
| --- | --- | --- |
| flux-canny-controlnet.safetensors | 1.49 GB | Canny edge detection ControlNet model (initial version) |
| flux-canny-controlnet_v2.safetensors | 1.49 GB | Canny edge detection ControlNet model (v2) |
| flux-canny-controlnet-v3.safetensors | 1.49 GB | Canny edge detection ControlNet model (v3) |
| flux-depth-controlnet.safetensors | 1.49 GB | Depth map ControlNet model (initial version) |
| flux-depth-controlnet_v2.safetensors | 1.49 GB | Depth map ControlNet model (v2) |
| flux-depth-controlnet-v3.safetensors | 1.49 GB | Depth map ControlNet model (v3) |
| flux-hed-controlnet.safetensors | 1.49 GB | HED edge detection ControlNet model (initial version) |
| flux-hed-controlnet-v3.safetensors | 1.49 GB | HED edge detection ControlNet model (v3) |
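The checkpoints listed above can also be fetched programmatically with the huggingface_hub library. A minimal sketch is shown below; the filename is taken from the table and can be swapped for any other checkpoint in the collection.

```python
from huggingface_hub import hf_hub_download

# Fetch one checkpoint from the XLabs-AI collection into the local Hugging Face cache.
# Swap `filename` for any file listed in the table above.
checkpoint_path = hf_hub_download(
    repo_id="XLabs-AI/flux-controlnet-collections",
    filename="flux-canny-controlnet-v3.safetensors",
)
print(checkpoint_path)  # local path to the downloaded .safetensors file
```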

This project provides powerful ControlNet models for the Flux ecosystem, allowing users to control the image generation process more precisely, especially suitable for applications that require image generation based on edge detection or depth information.

InstantX Flux Union ControlNet

InstantX Flux Union ControlNet is a versatile ControlNet model designed for FLUX.1 [dev]. It integrates multiple control modes into a single checkpoint, letting users steer the image generation process more flexibly.

InstantX Flux Union ControlNet Example

Main features:

  1. Multiple control modes: Supports various control modes, including Canny edge detection, Tile, depth map, blur, pose control, etc.

  2. High performance: Most control modes have achieved high effectiveness, especially Canny, Tile, depth map, blur, and pose control modes.

  3. Continuous optimization: The development team is constantly improving the model to enhance its performance and stability.

  4. Compatibility: Fully compatible with the FLUX.1 [dev] base model and easy to integrate into existing FLUX workflows.

  5. Multi-control inference: Supports the simultaneous use of multiple control modes, providing users with more fine-grained image generation control.

Usage:

  1. Single control mode:

    • Load the Union ControlNet model
    • Select the desired control mode (e.g., Canny, depth map, etc.)
    • Set the control image and related parameters
    • Generate the image (a minimal diffusers sketch follows this list)
  2. Multi-control mode:

    • Load the Union ControlNet model as FluxMultiControlNetModel
    • Set different control images and parameters for each control mode
    • Apply multiple control modes simultaneously to generate the image (see the multi-control sketch below)
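As a rough illustration of the single-control flow, here is a minimal sketch using the FluxControlNetPipeline from the Diffusers library. The repository id InstantX/FLUX.1-dev-Controlnet-Union, the control_mode index for Canny, and the conditioning scale are assumptions; check the model card for the exact values.

```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

# Load the Union ControlNet and attach it to the FLUX.1 [dev] base model.
controlnet = FluxControlNetModel.from_pretrained(
    "InstantX/FLUX.1-dev-Controlnet-Union", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

# A preprocessed control image (here: a Canny edge map prepared beforehand).
control_image = load_image("canny_edges.png")

image = pipe(
    prompt="a futuristic city street at dusk, cinematic lighting",
    control_image=control_image,
    control_mode=0,                     # assumed mode index for Canny
    controlnet_conditioning_scale=0.5,  # how strongly the control image constrains the result
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("union_canny_result.png")
```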
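For the multi-control flow, the same Union checkpoint can be wrapped in FluxMultiControlNetModel so that several control images and modes are applied in one call; the mode indices and conditioning scales below are again placeholders.

```python
import torch
from diffusers import (
    FluxControlNetModel,
    FluxControlNetPipeline,
    FluxMultiControlNetModel,
)
from diffusers.utils import load_image

union = FluxControlNetModel.from_pretrained(
    "InstantX/FLUX.1-dev-Controlnet-Union", torch_dtype=torch.bfloat16
)
# Wrapping the Union model lets the pipeline accept lists of control inputs.
controlnet = FluxMultiControlNetModel([union])
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

canny_image = load_image("canny_edges.png")  # control image for the Canny mode
depth_image = load_image("depth_map.png")    # control image for the depth mode

image = pipe(
    prompt="a cozy reading nook with warm lamplight",
    control_image=[canny_image, depth_image],
    control_mode=[0, 2],                        # one assumed mode index per control image
    controlnet_conditioning_scale=[0.5, 0.5],   # one strength per control image
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("union_multi_result.png")
```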

Notes:

  • The current version is a beta version and may not be fully trained yet, so you may encounter some imperfections during use.
  • Some control modes (such as grayscale control) may have lower effectiveness, so it’s recommended to prioritize high-effectiveness modes.
  • Compared to specialized single-function ControlNet models, the Union model may perform slightly less well on certain specific tasks, but it offers greater flexibility and versatility.
| File Name | Size |
| --- | --- |
| diffusion_pytorch_model.safetensors | 6.6 GB |

Note: After downloading the model, it’s recommended to rename the file to a more descriptive name, such as “flux_union_controlnet.safetensors”, for easier file management and identification in the future.

This Union ControlNet model provides a powerful and flexible image control tool for the FLUX ecosystem, especially suitable for users who need to implement multiple control functions in a single model. With continuous optimization and updates, it has the potential to become one of the most comprehensive and powerful ControlNet models on the FLUX platform.

InstantX Flux Canny ControlNet

In addition to the Union ControlNet model, InstantX also provides a ControlNet model specifically for Canny edge detection. This model focuses on using the Canny edge detection algorithm to control the image generation process, providing users with more precise edge control capabilities.

InstantX Flux Canny ControlNet Example

Main features:

  1. Focus on Canny edge detection: This model is specifically optimized for Canny edge detection, allowing for better processing and utilization of edge information.

  2. High-resolution training: The model was trained in a multi-scale setting at a total pixel count of 1024x1024, with a batch size of 8x8 for 30k steps.

  3. Compatible with FLUX.1: Designed for FLUX.1 [dev], it integrates seamlessly into FLUX workflows.

  4. bfloat16 precision: The model uses bfloat16 precision, which improves computational efficiency while maintaining accuracy.

Usage:

  1. Install the latest version of the Diffusers library.
  2. Load the Flux Canny ControlNet model and FLUX.1 base model.
  3. Prepare the input image and apply Canny edge detection.
  4. Use the detected edges as control input to generate the image (a sketch of these steps follows below).
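A minimal sketch of the four steps above, assuming the InstantX/FLUX.1-dev-Controlnet-Canny repository id and using OpenCV for the edge-detection step; the Canny thresholds and conditioning scale are illustrative.

```python
import cv2
import numpy as np
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image
from PIL import Image

# Step 2: load the Canny ControlNet and the FLUX.1 [dev] base model.
controlnet = FluxControlNetModel.from_pretrained(
    "InstantX/FLUX.1-dev-Controlnet-Canny", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

# Step 3: run Canny edge detection on the input image.
source = load_image("input.jpg")
edges = cv2.Canny(np.array(source), 100, 200)  # illustrative thresholds
edges = np.stack([edges] * 3, axis=-1)         # single channel -> RGB
control_image = Image.fromarray(edges)

# Step 4: generate, using the edge map as the control input.
image = pipe(
    prompt="a watercolor painting of a lighthouse on a cliff",
    control_image=control_image,
    controlnet_conditioning_scale=0.6,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("canny_result.png")
```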

Jasperai Flux.1-dev ControlNets Series

Jasperai has developed a series of ControlNet models for Flux.1-dev, designed to provide more precise control for AI image generation. This series includes surface normal, depth map, and super-resolution models, offering users a diverse set of creative tools.

You can view detailed information about these models on the Jasperai collection page on Hugging Face.

1. Surface Normal ControlNet Model

The Surface Normal ControlNet Model uses surface normal maps to guide image generation. This model is specifically optimized for surface normal information, allowing for better processing and utilization of object surface geometric information.

Main features:

  • Focuses on surface normal processing
  • Provides precise geometric information of object surfaces
  • Enhances image depth perception and realism
  • Compatible with FLUX.1 [dev]

Surface Normal ControlNet Model Example

2. Depth Map ControlNet Model

The Depth Map ControlNet Model uses depth information to control image generation. This model is specifically optimized for depth map information, allowing for better understanding and utilization of scene spatial structure information.

Main features:

  • Focuses on depth map processing
  • Provides spatial structure information of scenes
  • Improves image perspective and spatial sense
  • Compatible with FLUX.1 [dev]

Depth Map ControlNet Model Example

3. Super-resolution ControlNet Model

The Super-resolution ControlNet Model focuses on improving the quality of low-resolution images. It converts low-quality inputs into high-resolution versions, reconstructing and enhancing image details (see the sketch below).

Main features:

  • Focuses on image super-resolution processing
  • Converts low-resolution images to high-resolution versions
  • Reconstructs and enhances image details
  • Compatible with FLUX.1 [dev]

Super-resolution ControlNet Model Example
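As a rough sketch of how the super-resolution model could be wired into the same Diffusers pipeline, assuming the jasperai/Flux.1-dev-Controlnet-Upscaler repository id and that the upscaled low-resolution photo itself serves as the control image (upscale factor and parameters are illustrative):

```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

controlnet = FluxControlNetModel.from_pretrained(
    "jasperai/Flux.1-dev-Controlnet-Upscaler", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

# The low-resolution photo, resized to the target resolution, acts as the control image.
low_res = load_image("low_res_photo.jpg")
w, h = low_res.size
control_image = low_res.resize((w * 4, h * 4))

image = pipe(
    prompt="",                           # the control image carries most of the signal
    control_image=control_image,
    controlnet_conditioning_scale=0.6,   # illustrative strength
    num_inference_steps=28,
    guidance_scale=3.5,
    height=control_image.size[1],
    width=control_image.size[0],
).images[0]
image.save("upscaled.png")
```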

These models provide more precise control for AI image generation, allowing creators to generate more realistic and detailed images. Each model is designed for specific image processing needs, offering users a diverse set of creative tools. Users can choose the appropriate model based on their needs to achieve different image generation effects.