EasyControl: A New Framework for Efficient and Flexible Control of Diffusion Transformer

The EasyControl framework, jointly developed by Tiamat AI, ShanghaiTech University, National University of Singapore, and Liblib AI, has been officially released. This framework adds efficient and flexible control capabilities to Diffusion Transformer (DiT) models, and ComfyUI users can now use this technology through a dedicated plugin.

Introduction to EasyControl Framework

EasyControl is an efficient and flexible unified conditional control framework designed for Diffusion Transformers (DiT). As generative model architectures transition from UNet-based models to DiT models, adding effective conditional control to DiT has become a challenge. EasyControl solves this problem through three key innovations:

  1. Lightweight Condition Injection LoRA Module - Processes condition signals independently without modifying base model weights, ensuring compatibility with custom models and supporting flexible injection of various conditions.

  2. Position-Aware Training Paradigm - Normalizes input conditions to fixed resolutions, allowing generation of images with arbitrary aspect ratios and flexible resolutions while optimizing computational efficiency.

  3. Causal Attention Mechanism with KV Cache Technology - Significantly reduces image synthesis latency and improves the overall efficiency of the framework.

These innovations give EasyControl strong model compatibility (plug-and-play use with custom models and lossless style control), generation flexibility (support for multiple resolutions, aspect ratios, and multi-condition combinations), and high inference efficiency.
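
To make the third point more concrete, the sketch below shows the KV-cache idea in plain PyTorch: the condition tokens' keys and values are computed once before sampling and reused at every denoising step, so each step only re-encodes the image tokens. The shapes and module names here are illustrative assumptions, not the actual EasyControl implementation.

    import torch
    import torch.nn.functional as F

    dim, n_cond, n_img = 64, 16, 256
    to_q = torch.nn.Linear(dim, dim)
    to_k = torch.nn.Linear(dim, dim)
    to_v = torch.nn.Linear(dim, dim)

    cond_tokens = torch.randn(1, n_cond, dim)   # encoded condition signal (e.g. an edge map)

    # Computed once, before the sampling loop -- this is the "KV cache".
    k_cond = to_k(cond_tokens)
    v_cond = to_v(cond_tokens)

    for step in range(4):                        # stand-in for the diffusion sampling loop
        img_tokens = torch.randn(1, n_img, dim)  # latent image tokens at this step
        q = to_q(img_tokens)
        k = torch.cat([to_k(img_tokens), k_cond], dim=1)  # reuse cached condition keys
        v = torch.cat([to_v(img_tokens), v_cond], dim=1)  # reuse cached condition values
        out = F.scaled_dot_product_attention(q, k, v)     # image tokens attend to image + condition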

Using EasyControl in ComfyUI

The good news is that ComfyUI users can now access EasyControl through the ComfyUI-easycontrol plugin, developed by GitHub user jax-explorer. A corresponding workflow is available: easy_control_workflow.json.
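
Most users will simply drag easy_control_workflow.json onto the ComfyUI canvas. Purely as an illustration, the snippet below queues a workflow through ComfyUI's local HTTP API; it assumes ComfyUI is running on the default port (8188) and that the JSON file was exported in the API format.

    import json
    import urllib.request

    # Assumes a local ComfyUI instance on the default port and an API-format workflow export.
    with open("easy_control_workflow.json") as f:
        workflow = json.load(f)

    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    print(urllib.request.urlopen(req).read().decode())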

Control Types Supported by the Plugin

The ComfyUI-easycontrol plugin supports various control types (a sketch of preparing one such condition image follows this list):

  • Canny Edge Control
  • Depth Map Control
  • HED Sketch Control
  • Pose Control
  • Semantic Segmentation Control
  • Inpainting
  • Subject Control
  • Ghibli Style Control
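
As a concrete example of what a condition image is, the sketch below builds a Canny edge map with OpenCV. The filenames are placeholders, and in practice the plugin's preprocessing nodes inside ComfyUI handle this step.

    import cv2

    img = cv2.imread("portrait.png")                     # placeholder input photo (BGR)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)         # Canny expects an 8-bit grayscale image
    edges = cv2.Canny(gray, 100, 200)                    # single-channel edge map
    edges_rgb = cv2.cvtColor(edges, cv2.COLOR_GRAY2RGB)  # 3-channel image for the control input
    cv2.imwrite("canny_condition.png", edges_rgb)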

Ghibli Style Generation

Notably, the EasyControl team recently released a specialized Ghibli-style generation model. Trained on only 100 real Asian faces paired with Ghibli-style counterparts generated by GPT-4o, it transforms portrait photos into Ghibli-style animation images reminiscent of works such as “Spirited Away” and “My Neighbor Totoro” while preserving the subject's facial features.

(Example images: Ghibli-style portrait transformations generated with EasyControl)

Recent Updates

The EasyControl team has released several updates recently:

  • 2025-03-18: Pre-trained checkpoints released on Hugging Face
  • 2025-03-19: Hugging Face demo page launched
  • 2025-04-01: Ghibli style control model released
  • 2025-04-03: ComfyUI-easycontrol plugin support launched
  • 2025-04-07: Integration with CFG-Zero*, enhancing image fidelity and controllability