
Wan2.2 Animate represents a breakthrough in AI-driven character animation, offering a comprehensive solution for both character replacement and motion transfer within video content.

This advanced framework enables users to seamlessly animate static character images using reference videos, capturing intricate facial expressions and body movements with remarkable precision. Additionally, it can replace existing characters in videos while maintaining the original environmental context and lighting conditions.

Key Capabilities

  • Dual Operation Modes: Seamlessly switch between character animation and video replacement functions
  • Skeleton-Based Motion Control: Leverages advanced pose estimation for accurate movement replication
  • Expression Fidelity: Preserves subtle facial expressions and micro-movements from source material
  • Environmental Consistency: Maintains original video lighting and color grading for natural integration
  • Extended Video Support: Iterative processing ensures smooth motion continuity across longer sequences

About Wan2.2 Animate workflow

This tutorial covers two workflows:

  1. ComfyUI Official Native Version (Core Nodes)
  2. Kijai ComfyUI-WanVideoWrapper version [To be updated]

Comfy Org live stream replay

Wan 2.2 Animate in ComfyUI with Flipping Sigmas / September 19th, 2025

Wan2.2 Animate ComfyUI Native Workflow (Core Nodes)

1. Workflow Setup

Begin by downloading the workflow file and importing it into ComfyUI.

Required Input Materials:

Reference Image: the character image you want to animate (or, in Mix mode, insert into the video)

Input Video: the driving video that supplies the motion and, in Mix mode, the scene

2. Model Downloads

Download the following model files and place them according to the directory tree below:

Diffusion Models (you generally need only one; the fp8 file is a smaller quantization)

  • Wan2_2-Animate-14B_fp8_e4m3fn_scaled_KJ.safetensors
  • wan2.2_animate_14B_bf16.safetensors

CLIP Vision Models

  • clip_vision_h.safetensors

LoRA Models

  • lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16.safetensors

VAE Models

  • wan_2.1_vae.safetensors

Text Encoders

  • umt5_xxl_fp8_e4m3fn_scaled.safetensors

ComfyUI/
├── 📂 models/
│   ├── 📂 diffusion_models/
│   │   ├── Wan2_2-Animate-14B_fp8_e4m3fn_scaled_KJ.safetensors
│   │   └── wan2.2_animate_14B_bf16.safetensors
│   ├── 📂 loras/
│   │   └── lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16.safetensors
│   ├── 📂 text_encoders/
│   │   └── umt5_xxl_fp8_e4m3fn_scaled.safetensors
│   ├── 📂 clip_vision/
│   │   └── clip_vision_h.safetensors
│   └── 📂 vae/
│       └── wan_2.1_vae.safetensors
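Before loading the workflow, it can help to verify that every file landed where ComfyUI expects it. A minimal Python sketch, assuming a default install rooted at ./ComfyUI (adjust COMFY_ROOT for your setup):

```python
from pathlib import Path

# Adjust to your ComfyUI installation root (assumption: ./ComfyUI)
COMFY_ROOT = Path("ComfyUI")

# Expected model files, keyed by their models/ subfolder
REQUIRED = {
    "diffusion_models": [
        "Wan2_2-Animate-14B_fp8_e4m3fn_scaled_KJ.safetensors",
        "wan2.2_animate_14B_bf16.safetensors",
    ],
    "loras": ["lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16.safetensors"],
    "text_encoders": ["umt5_xxl_fp8_e4m3fn_scaled.safetensors"],
    "clip_vision": ["clip_vision_h.safetensors"],
    "vae": ["wan_2.1_vae.safetensors"],
}

for folder, files in REQUIRED.items():
    for name in files:
        path = COMFY_ROOT / "models" / folder / name
        status = "OK     " if path.is_file() else "MISSING"
        print(f"{status} {path}")
```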

3. Custom Node Installation

For the complete workflow experience, install the following custom nodes using ComfyUI-Manager or manually:

Required Extensions:

  • comfyui_controlnet_aux (provides the DWPose Estimator used for pose and face extraction)
  • ComfyUI-KJNodes (provides the Points Editor used to mark the character in the first frame)

For installation guidance, refer to How to install custom nodes
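If you install manually instead of through ComfyUI-Manager, the usual approach is to clone each node pack into ComfyUI/custom_nodes and restart ComfyUI. A minimal sketch; the GitHub URLs are the packs' commonly known repositories, so verify them against your own sources:

```python
import subprocess
from pathlib import Path

# Assumption: ComfyUI lives at ./ComfyUI and git is on your PATH
CUSTOM_NODES = Path("ComfyUI/custom_nodes")

# The two node packs referenced by this workflow
REPOS = [
    "https://github.com/Fannovel16/comfyui_controlnet_aux",
    "https://github.com/kijai/ComfyUI-KJNodes",
]

for url in REPOS:
    target = CUSTOM_NODES / url.rsplit("/", 1)[-1]
    if not target.exists():
        subprocess.run(["git", "clone", url, str(target)], check=True)
```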

4. Workflow Configuration

Wan2.2 Animate operates in two distinct modes:

  • Mix Mode: Character replacement within existing video content
  • Move Mode: Motion transfer to animate static character images

4.1 Mix Mode Operation

Workflow Instructions

Setup Guidelines:

  1. Initial Testing: Use smaller video dimensions for the first run to verify VRAM compatibility. Dimensions must be multiples of 16, a WanAnimateToVideo requirement (a rounding helper is sketched after this list).

  2. Model Verification: Confirm all required models are properly loaded

  3. Prompt Customization: Modify text prompts as needed for your specific use case

  4. Reference Image: Upload your target character image

  5. Input Video Processing: Use the provided sample videos initially. The DWPose Estimator from comfyui_controlnet_aux automatically processes the input video into pose and facial control sequences

  6. Points Editor Configuration: The Points Editor from KJNodes needs the first frame loaded before points can be placed. Run the workflow once, or upload the first frame manually

  7. Video Extension: The "Video Extend" group extends the output length (the frame math is sketched after this list)

    • Each extension adds 77 frames (approximately 4.8 seconds)
    • Skip for videos under 5 seconds
    • For longer extensions, duplicate the group and connect batch_images and video_frame_offset outputs sequentially
  8. Execution: Click Run or use Ctrl(Cmd) + Enter to begin generation
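Two numbers in the list above are worth computing rather than guessing: output dimensions must be multiples of 16 (step 1), and each "Video Extend" pass adds 77 frames (step 7). A small sketch of the arithmetic; the 16 fps rate is inferred from the tutorial's own 77 frames ≈ 4.8 s figure:

```python
# Round a requested dimension down to a multiple of 16,
# as required by the WanAnimateToVideo node.
def snap_to_16(size: int) -> int:
    return max(16, (size // 16) * 16)

# Total output length after some number of "Video Extend" passes.
# Each pass adds 77 frames; 16 fps is assumed (77 / 16 ≈ 4.8 s).
def total_seconds(base_frames: int, extensions: int, fps: int = 16) -> float:
    return (base_frames + 77 * extensions) / fps

print(snap_to_16(1000))       # -> 992
print(total_seconds(77, 2))   # -> 14.4375 (about 14.4 s)
```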

4.2 Move Mode Configuration

The workflow utilizes subgraph functionality for mode switching:

Subgraph

Mode Switching: To activate Move mode, disconnect the background_video and character_mask inputs from the Video Sampling and output (Subgraph) node.
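In other words, the subgraph infers the mode from which optional inputs are wired: with both connected it composites the character into the original scene (Mix); with both disconnected it animates the reference image from the driving motion alone (Move). A trivial sketch of that selection logic; the parameter names simply mirror the workflow's input labels, not any real API:

```python
def animate_mode(background_video=None, character_mask=None) -> str:
    # Mix: replace a character inside the provided video.
    # Move: animate the reference image using only the driving motion.
    if background_video is not None and character_mask is not None:
        return "mix"
    return "move"

print(animate_mode())                 # -> "move"
print(animate_mode("video", "mask"))  # -> "mix"
```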

Wan2.2 Animate ComfyUI-WanVideoWrapper Workflow

[To be updated]