
ControlNet v1.0 for SD 1.5

Author: lllyasviel
GitHub Repository: https://github.com/lllyasviel/ControlNet
Hugging Face Repository: https://huggingface.co/lllyasviel/ControlNet/tree/main
Compatible Stable Diffusion Versions: SD 1.5, 2.0

Here is a compilation of the initial model resources for ControlNet provided by its original author, lllyasviel.

File Name                  Size      Update Time     Download Links
control_sd15_canny.pth     5.71 GB   February 2023   Download Link
control_sd15_depth.pth     5.71 GB   February 2023   Download Link
control_sd15_hed.pth       5.71 GB   February 2023   Download Link
control_sd15_mlsd.pth      5.71 GB   February 2023   Download Link
control_sd15_normal.pth    5.71 GB   February 2023   Download Link
control_sd15_openpose.pth  5.71 GB   February 2023   Download Link
control_sd15_scribble.pth  5.71 GB   February 2023   Download Link
control_sd15_seg.pth       5.71 GB   February 2023   Download Link
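
All of these checkpoints come from the Hugging Face repository listed at the top of this page. As a minimal sketch, assuming the files live under models/ in that repository and that ComfyUI/models/controlnet is where your ComfyUI installation expects ControlNet weights, one checkpoint could be fetched programmatically like this:

```python
# Minimal sketch (not from this page): download one ControlNet v1.0 checkpoint
# with huggingface_hub. The in-repo path "models/control_sd15_canny.pth" and
# the target directory "ComfyUI/models/controlnet" are assumptions; adjust
# them to match your setup.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="lllyasviel/ControlNet",
    filename="models/control_sd15_canny.pth",
    local_dir="ComfyUI/models/controlnet",
)
print("Checkpoint saved to:", local_path)
```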

How to Use ControlNet Models in ComfyUI

Detailed usage of ControlNet models in ComfyUI is covered in the following article:

It will cover the following topics:

  • How to install the ControlNet model in ComfyUI
  • How to invoke the ControlNet model in ComfyUI
  • ComfyUI ControlNet workflow and examples
  • How to use multiple ControlNet models, etc.

ControlNet Principles

The fundamental principle of ControlNet is to guide the diffusion model's image generation by adding extra control conditions. Specifically, it duplicates the network weights into two copies: a “locked” copy and a “trainable” copy. During training, the trainable copy learns from the new control conditions, while the locked copy keeps its original weights unchanged, so the generation quality of the base diffusion model is not degraded. The two copies are connected through zero-initialized convolution layers, so the control signal starts as a no-op and is introduced gradually as training proceeds.
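
To make the locked/trainable idea concrete, here is a minimal PyTorch sketch of the mechanism. It is not the actual ControlNet code: the class `ControlledBlock` and its attribute names are made up for this example, and the real model duplicates the U-Net encoder blocks rather than a single generic block.

```python
# Minimal PyTorch sketch of the ControlNet idea (illustrative only, not the
# actual ControlNet implementation). `ControlledBlock` and its attribute names
# are hypothetical.
import copy
import torch
import torch.nn as nn

class ControlledBlock(nn.Module):
    def __init__(self, base_block: nn.Module, channels: int):
        super().__init__()
        # "Locked" copy: the original weights, frozen so the base model's
        # behaviour is preserved.
        self.locked = base_block
        for p in self.locked.parameters():
            p.requires_grad = False

        # "Trainable" copy: a duplicate that learns the new control condition.
        self.trainable = copy.deepcopy(base_block)

        # Zero-initialized 1x1 convolution: at the start of training it outputs
        # zeros, so the controlled model behaves exactly like the base model.
        self.zero_conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.zero_conv.weight)
        nn.init.zeros_(self.zero_conv.bias)

    def forward(self, x: torch.Tensor, condition: torch.Tensor) -> torch.Tensor:
        # `condition` (e.g. an encoded edge or pose map) must match x's shape here.
        # The locked path is untouched; the control path adds a learned residual.
        return self.locked(x) + self.zero_conv(self.trainable(x + condition))
```

Because the zero convolution starts at zero, training begins from the unmodified output of the base model, and the control signal is injected gradually as the trainable copy learns.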

Use Cases for ControlNet

Stable Diffusion ControlNet 1.0 is a powerful extension that controls image generation through a variety of conditioning inputs, and it can be applied flexibly across different kinds of image generation tasks. Here are several specific applications:

  1. Precise Image Control: ControlNet can condition generation on inputs such as Canny edge maps, sketches, or human pose. For example, when specific parts of a figure need detailed depiction, defining these conditions yields precise, controllable results (a Canny preprocessing sketch follows this list).

  2. Fixed Composition and Pose Definition: ControlNet lets users fix the composition of an image and define poses, so the output matches expectations. This is particularly useful for character design that requires specific actions or poses.

  3. Contour Drawing and Line Art Generation: With ControlNet, a rich, polished illustration can be generated from a simple line drawing. This is very practical in artistic creation, especially when building complex images from simple lines.

  4. Portrait to Anime Style: ControlNet can also convert real portraits into an anime style while accurately reproducing the subject's hairstyle and hair color from the original image. This is very helpful for anime production and character design.
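
As a concrete illustration of the first use case, here is a minimal sketch of producing a Canny edge map with OpenCV to serve as the control condition for control_sd15_canny. The file names and the thresholds 100/200 are arbitrary choices for this example, not values from this page.

```python
# Minimal sketch: build a Canny edge control image with OpenCV.
# "portrait.png" and "portrait_canny.png" are hypothetical file names;
# the thresholds (100, 200) are example values to tune per image.
import cv2

image = cv2.imread("portrait.png", cv2.IMREAD_GRAYSCALE)  # load input as grayscale
edges = cv2.Canny(image, 100, 200)                         # edge map used as the control condition
cv2.imwrite("portrait_canny.png", edges)
```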