
Tutorial on Using Multiple ControlNets in ComfyUI

When working with ControlNet, multiple ControlNets can be combined to achieve more precise control. For example, when generating characters, if the limbs come out misaligned, you can overlay a Depth ControlNet to enforce the correct front-to-back relationship of the limbs.

In this article, I will combine OpenPose and Lineart to transform the visual style of an image.

  • OpenPose is used to control the character’s posture.
  • Lineart is used to maintain consistency in the character’s clothing and facial features.

The key when using multiple ControlNets is to chain them: the conditioning outputs of one Apply ControlNet node feed the conditioning inputs of the next.

For more details on the Apply ControlNet node and its stage-control options, refer to the Apply ControlNet Node Usage Instructions.
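As a concrete illustration, here is a minimal sketch of how the chained section looks in ComfyUI's API-format workflow JSON, written as a Python dict. The node IDs and the upstream text-encode, loader, and preprocessed-image nodes are hypothetical placeholders, and it assumes the standard ControlNetApplyAdvanced node:

```python
# Minimal sketch of the chained part of an API-format workflow.
# "10"/"11" stand for CLIPTextEncode nodes, "20"/"21" for ControlNetLoader nodes,
# "30"/"31" for the preprocessed OpenPose / Lineart images (all hypothetical IDs).
chained_controlnets = {
    "40": {  # first Apply ControlNet: OpenPose
        "class_type": "ControlNetApplyAdvanced",
        "inputs": {
            "positive": ["10", 0], "negative": ["11", 0],
            "control_net": ["20", 0], "image": ["30", 0],
            "strength": 1.0, "start_percent": 0.0, "end_percent": 1.0,
        },
    },
    "41": {  # second Apply ControlNet: Lineart, chained onto the first node's outputs
        "class_type": "ControlNetApplyAdvanced",
        "inputs": {
            "positive": ["40", 0], "negative": ["40", 1],  # conditioning chained here
            "control_net": ["21", 0], "image": ["31", 0],
            "strength": 0.7, "start_percent": 0.0, "end_percent": 1.0,
        },
    },
}
# The KSampler then takes its positive/negative conditioning from node "41",
# so both ControlNets influence the same sampling run.
```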

Steps to Use Multiple ControlNets in ComfyUI

1. Install Necessary Plugins

If you have followed other tutorials on ComfyUI Wiki, you have likely already installed the required plugin and can skip this step.

Since ComfyUI core does not ship with the corresponding image preprocessors, you need to install a preprocessor plugin in advance. This tutorial uses the ComfyUI ControlNet Auxiliary Preprocessors plugin to generate the OpenPose and Lineart control images.

It is recommended to use ComfyUI Manager for installation. You can refer to the ComfyUI Plugin Installation Tutorial for detailed instructions on plugin installation.

The latest version of ComfyUI Desktop comes with ComfyUI Manager pre-installed.

2. Download Models

First, you need to download the following models:

| Model Type | Model File | Download Link |
| --- | --- | --- |
| SD1.5 Base Model | dreamshaper_8.safetensors (optional) | Civitai |
| OpenPose ControlNet Model | control_v11f1p_sd15_openpose.pth (required) | Hugging Face |
| Lineart ControlNet Model | control_v11p_sd15_lineart.pth (required) | Hugging Face |

For the SD1.5 base model, you can use any SD1.5 checkpoint already on your computer; in this tutorial, I use the dreamshaper_8 model as an example.

Please place the model files according to the following structure:

📁ComfyUI
├── 📁models
│   ├── 📁checkpoints
│   │   └── 📁SD1.5
│   │       └── dreamshaper_8.safetensors
│   ├── 📁controlnet
│   │   └── 📁SD1.5
│   │       ├── control_v11f1p_sd15_openpose.pth
│   │       └── control_v11p_sd15_lineart.pth
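If you prefer to script the downloads, the two required ControlNet models can also be fetched with huggingface_hub. This is an optional sketch; it assumes the v1.1 models are hosted in the lllyasviel/ControlNet-v1-1 repository and that your ComfyUI folder sits in the current directory:

```python
# Fetch the two required ControlNet models into the folder layout shown above.
# Assumes the lllyasviel/ControlNet-v1-1 repository and a local ./ComfyUI folder.
from huggingface_hub import hf_hub_download

for filename in ("control_v11f1p_sd15_openpose.pth", "control_v11p_sd15_lineart.pth"):
    hf_hub_download(
        repo_id="lllyasviel/ControlNet-v1-1",
        filename=filename,
        local_dir="ComfyUI/models/controlnet/SD1.5",  # matches the structure above
    )
```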

3. Workflow File and Input Image

Download the workflow file and the input image below.

Input Image

4. Import the Workflow into ComfyUI and Load the Image for Generation

workflow example

  1. Load the corresponding SD1.5 Checkpoint model at step 1
  2. Load the input image at step 2
  3. Load the OpenPose ControlNet model at step 3
  4. Load the Lineart ControlNet model at step 4
  5. Click Queue or use the shortcut Ctrl+Enter to run the workflow and generate the image (the workflow can also be queued over the API, as sketched below)
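For batch runs, the same workflow can be queued programmatically instead of clicking Queue. A minimal sketch, assuming the default local server at 127.0.0.1:8188 and that the workflow was saved with Export (API) to a hypothetical file named multi_controlnet_api.json:

```python
# Queue an API-format workflow over ComfyUI's HTTP API.
import json
import urllib.request

with open("multi_controlnet_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)  # the dict exported via "Export (API)"

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",                       # default ComfyUI address
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # server responds with the queued prompt_id
```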

Scenarios for Combining ControlNets

1. Architectural Visualization Design

ControlNet Combination
Canny Edge + Depth Map + MLSD Line Detection

Parameter Configuration Plan

| ControlNet Type | Main Function | Recommended Weight | Preprocessing Parameter Suggestions | Phase |
| --- | --- | --- | --- | --- |
| Canny | Ensure precise architectural outlines | 0.9-1.0 | Low threshold: 50, high threshold: 150 | Phase One |
| Depth | Build three-dimensional spatial perspective | 0.7-0.8 | MiDaS model, Boost contrast enhancement enabled | Phase Two |
| MLSD | Correct line deformation to maintain geometric accuracy | 0.4-0.6 | Minimum line length: 15, maximum line distance: 20 | Phase Three |
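One way to realize this plan in ComfyUI is to map each row onto a ControlNetApplyAdvanced node in the chain: the recommended weight becomes strength, and the phases become staggered start_percent/end_percent windows. The split points below are illustrative assumptions, not values from the table:

```python
# Hypothetical mapping of the architectural-visualization plan onto
# ControlNetApplyAdvanced inputs (strength, start_percent, end_percent).
# The phase boundaries are illustrative assumptions.
architectural_plan = [
    # (controlnet, strength, start_percent, end_percent)
    ("canny", 1.0, 0.0, 0.4),   # Phase One: lock outlines during the early steps
    ("depth", 0.8, 0.3, 0.7),   # Phase Two: build perspective in the middle steps
    ("mlsd",  0.5, 0.6, 1.0),   # Phase Three: geometric correction toward the end
]
```

The same strength/start/end mapping applies to the character, product, and scene combinations in the sections that follow.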

2. Dynamic Character Generation

ControlNet Combination
OpenPose Pose + Lineart Sketch + Scribble Color Blocks

Parameter Configuration Plan

| ControlNet Type | Main Function | Recommended Weight | Resolution Adaptation Suggestions | Collaboration Strategy |
| --- | --- | --- | --- | --- |
| OpenPose | Control overall character posture and actions | 1.0 | Keep consistent with output size | Main control network |
| Lineart | Refine facial features and equipment details | 0.6-0.7 | Enable Anime mode | Mid-to-late intervention |
| Scribble | Define clothing colors and texture distribution | 0.4-0.5 | Use SoftEdge preprocessing | Only affects the color layer |

3. Product Concept Design

ControlNet Combination
HED Soft Edge + Depth (Depth of Field) + Normal Map

Parameter Configuration Plan

| ControlNet Type | Main Function | Weight Range | Key Preprocessing Settings | Effect |
| --- | --- | --- | --- | --- |
| HED | Capture soft edges and surface transitions of the product | 0.8 | Gaussian blur: σ = 1.5 | Controls contour softness |
| Depth | Simulate realistic light and shadow with background blur | 0.6 | Near-field enhancement mode | Builds spatial layering |
| Normal | Enhance surface details and reflective properties of materials | 0.5 | Generation size: 768x768 | Enhances material detail |

4. Scene Atmosphere Rendering

ControlNet Combination
Segmentation Partitioning + Shuffle Color Tone + Depth Layering

Layer Control Strategy

| Control Layer | Main Function | Weight | Effect Area | Intervention Timing |
| --- | --- | --- | --- | --- |
| Seg | Divide scene element areas (sky/building) | 0.9 | Global composition | Full control |
| Shuffle | Control overall color tone and style transfer | 0.4 | Color distribution | Mid-to-late intervention |
| Depth | Create depth-of-field effects and spatial layers | 0.7 | Background blur area | Early intervention |