
HunyuanVideo Image-to-Video GGUF, FP8 and ComfyUI Native Workflow Complete Guide with Examples

Tencent officially released the HunyuanVideo image-to-video model on March 6, 2025. The model is now open-source and can be found at HunyuanVideo-I2V.

Below is the overall architecture diagram of HunyuanVideo:

HunyuanVideo Overall Architecture

ComfyUI now natively supports the HunyuanVideo-I2V model, and community developers kijai and city96 have updated their custom nodes to support the HunyuanVideo-I2V model.

In addition to Tencent’s official model, ComfyUI Wiki has compiled other versions: Kijai’s FP8 version (HunyuanVideoWrapper) and city96’s GGUF version.

In this article, we’ll provide complete model installation instructions and workflow examples for each of these versions.

This article focuses on image-to-video workflows. If you want to learn about Tencent Hunyuan’s text-to-video workflow, please refer to Tencent Hunyuan Text-to-Video Workflow Guide and Examples.

ComfyUI Official HunyuanVideo I2V Workflow

This workflow comes from the ComfyUI official documentation.

Before starting this tutorial, please refer to How to Update ComfyUI and update your ComfyUI to the latest version, so that the following Comfy Core nodes for HunyuanVideo are available:

  • HunyuanImageToVideo
  • TextEncodeHunyuanVideo_ImageToVideo

1. HunyuanVideo I2V Workflow File

Download the workflow file below, then drag it into ComfyUI, or use the menu Workflows -> Open (Ctrl+O) to load it.

Comfy_HunyuanVideo_I2V

JSON Format Workflow Download

2. HunyuanVideo I2V Model Downloads

The following models are from Comfy-Org/HunyuanVideo_repackaged; please download the files listed in the directory structure below.

After downloading, organize the files according to the structure below and save them to the corresponding folders under ComfyUI/models:

ComfyUI/
β”œβ”€β”€ models/
β”‚   β”œβ”€β”€ clip_vision/
β”‚   β”‚   └── llava_llama3_vision.safetensors
β”‚   β”œβ”€β”€ text_encoders/
β”‚   β”‚   β”œβ”€β”€ clip_l.safetensors
β”‚   β”‚   β”œβ”€β”€ llava_llama3_fp16.safetensors
β”‚   β”‚   └── llava_llama3_fp8_scaled.safetensors
β”‚   β”œβ”€β”€ vae/
β”‚   β”‚   └── hunyuan_video_vae_bf16.safetensors
β”‚   └── diffusion_models/
β”‚       └── hunyuan_video_image_to_video_720p_bf16.safetensors
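If you prefer to prepare the folders from the command line before downloading, the directory layout above can be created with a short script. This is a convenience sketch, not part of the official instructions; the paths simply mirror the tree above, and you still download the model files into them manually:

```python
from pathlib import Path

# Folders required by the official HunyuanVideo I2V workflow,
# mirroring the directory tree above (relative to your ComfyUI install).
FOLDERS = [
    "models/clip_vision",
    "models/text_encoders",
    "models/vae",
    "models/diffusion_models",
]

def create_model_folders(comfy_root: str = "ComfyUI") -> list:
    """Create the model folders if they do not already exist."""
    created = []
    for folder in FOLDERS:
        path = Path(comfy_root) / folder
        path.mkdir(parents=True, exist_ok=True)  # no-op if already present
        created.append(path)
    return created

if __name__ == "__main__":
    for p in create_model_folders():
        print(p)
```

Running the script is safe to repeat: `mkdir(parents=True, exist_ok=True)` leaves existing folders and their contents untouched.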

3. Input Image

Download the image below to use as the input image.

Comfy_HunyuanVideo_I2V_input

4. Check Each HunyuanVideo I2V Workflow Node

Refer to the image below and verify each node’s settings to ensure the workflow runs correctly:

ComfyUI Official HunyuanVideo I2V Workflow Node Check

  1. Check the DualCLIPLoader node:
  • Ensure clip_name1: clip_l.safetensors is correctly loaded
  • Ensure clip_name2: llava_llama3_fp8_scaled.safetensors is correctly loaded
  2. Check the Load CLIP Vision node: Ensure llava_llama3_vision.safetensors is correctly loaded
  3. In the Load Image node, upload the input image provided earlier
  4. Check the Load VAE node: Ensure hunyuan_video_vae_bf16.safetensors is correctly loaded
  5. Check the Load Diffusion Model node: Ensure hunyuan_video_image_to_video_720p_bf16.safetensors is correctly loaded
  • If you encounter an out-of-memory error during execution, try setting weight_dtype to fp8
  6. Click the Queue button or use the shortcut Ctrl(Cmd) + Enter to start video generation

Kijai HunyuanVideoWrapper Version

1. Custom Node Installation

You need to install the following custom nodes:

If you don’t know how to install custom nodes, please refer to ComfyUI Custom Node Installation Guide

2. Model Downloads

Downloaded files should be organized according to the structure below and saved to the corresponding folders under ComfyUI/models:

ComfyUI/
β”œβ”€β”€ models/
β”‚   β”œβ”€β”€ vae/
β”‚   β”‚   └── hunyuan_video_vae_bf16.safetensors
β”‚   └── diffusion_models/
β”‚       └── hunyuan_video_I2V_fp8_e4m3fn.safetensors

3. HunyuanVideo I2V Workflow File

4. Check Each HunyuanVideo I2V Workflow Node

Refer to the image below and verify each node’s settings to ensure the workflow runs correctly:

Kijai Version HunyuanVideo I2V Workflow Node Check

  1. In the Load Image node, upload the image you want to use for image-to-video generation
  2. In the HunyuanVideo VAE Loader node, ensure hunyuan_video_vae_bf16.safetensors is correctly loaded
  3. In the HunyuanVideo Model Loader node, ensure hunyuan_video_I2V_fp8_e4m3fn.safetensors is correctly loaded
  4. In the HyVideo I2V Encode node, enter the prompt describing the video you want to generate
  5. Click the Queue button or use the shortcut Ctrl(Cmd) + Enter to start video generation

city96 GGUF Version

1. Custom Node Installation

You need to install the following custom nodes:

If you don’t know how to install custom nodes, please refer to ComfyUI Custom Node Installation Guide

2. Model Downloads

Most of the models for this version are the same as the Comfy official version, so please refer to that section to download the text encoders, CLIP vision model, and VAE.

Then visit city96/HunyuanVideo-I2V-gguf, download the GGUF quantization that fits your hardware, and save the model file to the ComfyUI/models/unet folder:

ComfyUI/
β”œβ”€β”€ models/
β”‚   β”œβ”€β”€ clip_vision/
β”‚   β”‚   └── llava_llama3_vision.safetensors
β”‚   β”œβ”€β”€ text_encoders/
β”‚   β”‚   β”œβ”€β”€ clip_l.safetensors
β”‚   β”‚   β”œβ”€β”€ llava_llama3_fp16.safetensors
β”‚   β”‚   └── llava_llama3_fp8_scaled.safetensors
β”‚   β”œβ”€β”€ vae/
β”‚   β”‚   └── hunyuan_video_vae_bf16.safetensors
β”‚   └── unet/
β”‚       └── hunyuan-video-i2v-720p-Q4_K_M.gguf  # or another quantization, depending on the GGUF version you downloaded
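Before loading the workflow, you can quickly verify that everything is in place. The sketch below is a hypothetical helper, not part of the official guide: it checks for the files listed in the tree above, and since the GGUF filename depends on which quantization you downloaded, it only checks that some .gguf file exists in models/unet:

```python
from pathlib import Path

# Files the GGUF workflow expects, taken from the directory tree above.
REQUIRED = [
    "models/clip_vision/llava_llama3_vision.safetensors",
    "models/text_encoders/clip_l.safetensors",
    "models/text_encoders/llava_llama3_fp8_scaled.safetensors",
    "models/vae/hunyuan_video_vae_bf16.safetensors",
]

def missing_models(comfy_root: str = "ComfyUI") -> list:
    """Return the required files that are not yet in place."""
    root = Path(comfy_root)
    missing = [f for f in REQUIRED if not (root / f).is_file()]
    # Any .gguf file in models/unet fills the diffusion-model slot,
    # whatever quantization was chosen.
    unet = root / "models" / "unet"
    if not unet.is_dir() or not any(unet.glob("*.gguf")):
        missing.append("models/unet/<any .gguf quantization>")
    return missing

if __name__ == "__main__":
    for f in missing_models():
        print("missing:", f)
```

An empty result means all model files the workflow needs were found; otherwise each printed line names a file still to be downloaded.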

3. HunyuanVideo I2V Workflow File

4. Check Each HunyuanVideo I2V Workflow Node

Refer to the image below and verify each node’s settings to ensure the workflow runs correctly:

city96 GGUF Version HunyuanVideo I2V Workflow Node Check

  1. Check the DualCLIPLoader node:
  • Ensure clip_name1: clip_l.safetensors is correctly loaded
  • Ensure clip_name2: llava_llama3_fp8_scaled.safetensors is correctly loaded
  2. Check the Load CLIP Vision node: Ensure llava_llama3_vision.safetensors is correctly loaded
  3. In the Load Image node, upload the input image provided earlier
  4. Check the Load VAE node: Ensure hunyuan_video_vae_bf16.safetensors is correctly loaded
  5. Check the Load Diffusion Model node: Ensure the HunyuanVideo GGUF model you downloaded is correctly loaded
  6. Click the Queue button or use the shortcut Ctrl(Cmd) + Enter to start video generation