
SVD img2vid Conditioning

ComfyUI node: SVD_img2vid_Conditioning (SVD image-to-video conditioning)

Documentation

  • Class name: SVD_img2vid_Conditioning
  • Category: conditioning/video_models
  • Output node: False

This node generates conditioning data for video generation tasks, specifically for SVD img2vid models. From an initial image, video parameters, and a VAE model, it produces the conditioning data and initial latents used to guide the generation of video frames.

Input types

| Parameter | Comfy dtype | Description |
| --- | --- | --- |
| clip_vision | CLIP_VISION | The CLIP vision model used to encode visual features from the initial image, capturing its content and context for video generation. |
| init_image | IMAGE | The initial image from which the video is generated; the starting point of the video. |
| vae | VAE | A Variational Autoencoder (VAE) used to encode the initial image into latent space, enabling coherent, continuous video frames. |
| width | INT | The desired width of the generated video frames. |
| height | INT | The desired height of the generated video frames, controlling aspect ratio and resolution together with width. |
| video_frames | INT | The number of frames to generate, determining the video's length. |
| motion_bucket_id | INT | An identifier that conditions the amount and type of motion applied in the generated video. |
| fps | INT | The frames-per-second rate of the video, influencing its smoothness. |
| augmentation_level | FLOAT | The level of noise augmentation applied to the initial image, affecting how much the generated frames may deviate from it. |
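To illustrate the augmentation input: a common approach, and a plausible reading of `augmentation_level`, is to add Gaussian noise scaled by that value to the initial image before VAE encoding, so higher levels allow more deviation from the source frame. The sketch below is a hypothetical illustration of that idea in NumPy, not the node's actual implementation (which operates on torch tensors inside ComfyUI):

```python
import numpy as np

def augment_init_image(pixels: np.ndarray, augmentation_level: float) -> np.ndarray:
    """Hypothetical sketch: add Gaussian noise scaled by augmentation_level
    to the init image before it is passed to the VAE encoder."""
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(pixels.shape).astype(pixels.dtype)
    return pixels + augmentation_level * noise

# With augmentation_level = 0.0 the image passes through unchanged,
# so the video stays maximally faithful to the init image.
img = np.zeros((1, 576, 1024, 3), dtype=np.float32)  # batch, H, W, channels
unchanged = augment_init_image(img, 0.0)
```

With `augmentation_level = 0.0` the init image is encoded as-is; small positive values (e.g. 0.05–0.15) are often used to soften artifacts when the source image is far from the model's training distribution.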

Output types

| Parameter | Comfy dtype | Description |
| --- | --- | --- |
| positive | CONDITIONING | The positive conditioning data: encoded features and parameters that guide video generation toward the desired result. |
| negative | CONDITIONING | The negative conditioning data, a contrast to the positive conditioning, used to steer generation away from unwanted patterns or features. |
| latent | LATENT | The initial latent representations for each frame of the video, which the sampler denoises into the final frames. |
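To make the output types concrete: in ComfyUI, a CONDITIONING value is a list of `[embedding, extras-dict]` pairs and a LATENT value is a dict holding a `"samples"` tensor. The sketch below is a hypothetical mock of the three outputs using NumPy; the placeholder embedding size, extras keys, and the 8x spatial downscale with 4 latent channels are assumptions for illustration, not the node's verified internals:

```python
import numpy as np

def build_svd_outputs(video_frames: int, width: int, height: int):
    """Hypothetical sketch of the node's three outputs.
    Assumes a VAE with an 8x spatial downscale and 4 latent channels."""
    embed = np.zeros((1, 1024), dtype=np.float32)    # placeholder for the CLIP-vision embedding
    extras = {"motion_bucket_id": 127, "fps": 6}     # assumed guidance parameters carried alongside
    positive = [[embed, extras]]                     # CONDITIONING: list of [embedding, extras] pairs
    negative = [[np.zeros_like(embed), extras]]      # contrastive counterpart
    latent = {"samples": np.zeros(                   # LATENT: one latent per video frame
        (video_frames, 4, height // 8, width // 8), dtype=np.float32)}
    return positive, negative, latent

# Default SVD settings: 14 frames at 1024x576 give a (14, 4, 72, 128) latent batch.
pos, neg, lat = build_svd_outputs(video_frames=14, width=1024, height=576)
```

The per-frame latent batch is what downstream samplers (e.g. KSampler) denoise, while the positive/negative pair plugs into their conditioning inputs like any other CONDITIONING value.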