ControlNet v1.0 for SD 1.5
Author: lllyasviel
GitHub Repository: https://github.com/lllyasviel/ControlNet
Hugging Face Repository: https://huggingface.co/lllyasviel/ControlNet/tree/main
Compatible Stable Diffusion Versions: SD 1.5, SD 2.0
Here is a compilation of the initial model resources for ControlNet provided by its original author, lllyasviel.
ControlNet v1.0 model files and download links.
File Name | Size | Update Time | Download Link |
---|---|---|---|
control_sd15_canny.pth | 5.71 GB | February 2023 | Download Link |
control_sd15_depth.pth | 5.71 GB | February 2023 | Download Link |
control_sd15_hed.pth | 5.71 GB | February 2023 | Download Link |
control_sd15_mlsd.pth | 5.71 GB | February 2023 | Download Link |
control_sd15_normal.pth | 5.71 GB | February 2023 | Download Link |
control_sd15_openpose.pth | 5.71 GB | February 2023 | Download Link |
control_sd15_scribble.pth | 5.71 GB | February 2023 | Download Link |
control_sd15_seg.pth | 5.71 GB | February 2023 | Download Link |
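All of these checkpoints can also be fetched programmatically from the Hugging Face repository listed above. Below is a minimal sketch using the huggingface_hub package; the "models/" subfolder path is an assumption based on the repository layout, and the ComfyUI folder mentioned in the comment is ComfyUI's default location, so adjust both to your setup.

```python
# Minimal sketch: fetch one ControlNet v1.0 checkpoint from the Hugging Face
# repository listed above. Assumes the huggingface_hub package is installed and
# that the .pth files live in the repository's "models/" subfolder.
from huggingface_hub import hf_hub_download

checkpoint = hf_hub_download(
    repo_id="lllyasviel/ControlNet",
    filename="models/control_sd15_canny.pth",  # any file name from the table above
)
print(f"Downloaded to {checkpoint}")
# For ComfyUI, place (or symlink) the downloaded file under ComfyUI/models/controlnet/
```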
How to Use the ControlNet Model in ComfyUI
Detailed usage of the ControlNet model is covered in the following article:
How to use ControlNet in ComfyUI
It covers the following topics:
- How to install the ControlNet model in ComfyUI
- How to invoke the ControlNet model in ComfyUI
- ComfyUI ControlNet workflow and examples
- How to use multiple ControlNet models, etc.
ControlNet Principles
The fundamental principle of ControlNet is to guide the diffusion model's image generation by adding extra control conditions. Specifically, it duplicates the original neural network into two copies: a "locked" copy and a "trainable" copy. During training, the trainable copy learns to respond to the new control condition, while the locked copy keeps its original weights unchanged, ensuring that the base diffusion model's generation quality is not affected.
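To make the locked/trainable split concrete, here is a minimal PyTorch-style sketch of the idea; the class and layer names are illustrative, not the author's actual implementation. The trainable copy processes the control condition, and its output is added back to the frozen branch through zero-initialized layers, so training starts from a state that does not disturb the original model.

```python
# Minimal PyTorch sketch of the ControlNet principle; names are illustrative.
# Assumes original_block keeps the channel count unchanged and that the control
# condition has already been encoded to the same shape as the feature map x.
import copy
import torch
import torch.nn as nn

class ControlledBlock(nn.Module):
    def __init__(self, original_block: nn.Module, channels: int):
        super().__init__()
        # "Trainable" copy: learns to respond to the new control condition
        self.trainable = copy.deepcopy(original_block)

        # "Locked" copy: original weights, frozen so the base model is unaffected
        self.locked = original_block
        for p in self.locked.parameters():
            p.requires_grad_(False)

        # Zero-initialized 1x1 convolutions: at the start of training the control
        # branch contributes nothing, so the original generation quality is kept
        self.zero_in = nn.Conv2d(channels, channels, kernel_size=1)
        self.zero_out = nn.Conv2d(channels, channels, kernel_size=1)
        for zero_conv in (self.zero_in, self.zero_out):
            nn.init.zeros_(zero_conv.weight)
            nn.init.zeros_(zero_conv.bias)

    def forward(self, x: torch.Tensor, condition: torch.Tensor) -> torch.Tensor:
        # Control branch sees the feature map plus the (initially silent) condition
        control = self.trainable(x + self.zero_in(condition))
        # Its output is added back to the frozen branch through another zero conv
        return self.locked(x) + self.zero_out(control)
```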
Use Cases for ControlNet
Stable Diffusion ControlNet 1.0 is a powerful plugin that can steer image generation through a variety of conditions and can be applied flexibly across different types of image generation tasks. Here are several typical applications:
- Precise Image Control: ControlNet can condition image generation on inputs such as edge maps, sketches, or human poses. For example, when specific parts of a figure need detailed depiction, defining these conditions enables precise image generation (see the sketch after this list).
- Fixed Composition and Pose Definition: ControlNet allows users to fix an image's composition and define poses, generating images that match expectations. This is particularly useful for character designs that require specific actions or poses.
- Contour Drawing and Line Art Generation: With ControlNet, a rich, refined illustration can be generated from just a line drawing. This is very practical in artistic creation, especially when building complex images from simple lines.
- Portrait to Anime Style Effect: ControlNet can also convert real portraits into an anime style while accurately reproducing the hairstyle and hair color of the original subject. This feature is very helpful for anime production and character design.
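As a concrete illustration of the edge-conditioned use case above, here is a minimal sketch using the Diffusers library. Note that it loads the Diffusers-format Canny model ("lllyasviel/sd-controlnet-canny") rather than the .pth checkpoints listed in the table, and the base model ID, file names, and prompt are placeholders rather than values from this article.

```python
# Minimal sketch of edge-conditioned generation with Diffusers.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Turn a reference photo into a Canny edge map to use as the control condition
source = load_image("portrait.png")  # placeholder input image
edges = cv2.Canny(cv2.cvtColor(np.array(source), cv2.COLOR_RGB2GRAY), 100, 200)
edge_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.5 base checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    "anime style portrait, clean line art, detailed hair",
    image=edge_image,
    num_inference_steps=20,
).images[0]
result.save("controlled_output.png")
```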