
AI Update: Now released - Wan 2.2 & ComfyUI


Author: Siu-Ho Fung

July 29, 2025



Introduction

As of July 28, 2025, Wan AI has officially released Wan 2.2 as open source — a cutting-edge model for video generation, now fully supported in ComfyUI (GitHub).



🌟 What's New in Wan 2.2?

  • Mixture-of-Experts (MoE) Architecture: Two specialized models (high-noise and low-noise) deliver enhanced video quality and smarter noise handling.
  • Cinematic Aesthetic Control: Finely tuned with principles from the professional film industry—lighting, color, composition, and camera motion.
  • Complex Motion + Semantic Accuracy: Better generalization in dynamic scenes with multiple objects.
  • Low VRAM Option: The WAN2.2-TI2V-5B model runs at 720p@24 fps on just 8 GB VRAM thanks to ComfyUI’s auto-offloading system.

📦 Supported Model Variants

ComfyUI supports three ready-to-use Wan 2.2 models:

  • WAN2.2-TI2V-5B (FP16) – Hybrid text+image-to-video, optimized for VRAM efficiency.
  • WAN2.2-T2V-14B (FP16/FP8) – High-quality text-to-video.
  • WAN2.2-I2V-14B (FP16/FP8) – High-quality image-to-video.

More details and sample videos:
Wan 2.2 Demo & Workflows


Using ComfyUI Manager

  1. Update ComfyUI or ComfyUI Desktop to the latest version.
  2. Go to Workflow → Browse Templates → Video.
  3. Select a Wan 2.2 workflow:
    • Wan 2.2 Text to Video
    • Wan 2.2 Image to Video
    • Wan 2.2 5B Video Generation
  4. Let ComfyUI Manager automatically download the required models, or download them manually via Hugging Face. Note: models must be reselected in the nodes where they are defined.
  5. Click Run to start generating cinematic AI videos.
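If you download the models manually, a small script can confirm that everything landed where ComfyUI expects it. This is a hedged sketch: the `models/` folder layout follows ComfyUI's convention, but the exact file names below are illustrative and may differ from what your chosen template requires.

```python
from pathlib import Path

# Illustrative file names for a Wan 2.2 TI2V-5B setup; check your
# workflow's loader nodes for the names it actually expects.
REQUIRED = {
    "diffusion_models": ["wan2.2_ti2v_5B_fp16.safetensors"],
    "vae": ["wan2.2_vae.safetensors"],
    "text_encoders": ["umt5_xxl_fp8_e4m3fn_scaled.safetensors"],
}

def missing_models(comfy_root: str) -> list[str]:
    """Return relative paths of expected model files that are not on disk."""
    root = Path(comfy_root) / "models"
    missing = []
    for subdir, files in REQUIRED.items():
        for name in files:
            if not (root / subdir / name).exists():
                missing.append(f"{subdir}/{name}")
    return missing
```

Run it against your ComfyUI installation directory; an empty result means all expected files are in place.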

🧩 Available Workflows

Each JSON workflow includes:

  • Nodes for automatic model loading (diffusion, VAE, text encoder).
  • Configurable inputs for prompt, seed, resolution, and more.
  • Real-time preview and an optimized workflow structure.

Available templates:

  • TI2V: Text + Image to Video
  • T2V: Text to Video
  • I2V: Image to Video
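Because each template is a plain JSON file, its node layout can be inspected outside ComfyUI. A minimal sketch using a toy workflow (a real Wan 2.2 export contains far more nodes and properties):

```python
import json

# Toy workflow in simplified ComfyUI export form; illustrative only.
WORKFLOW = """
{
  "nodes": [
    {"id": 1, "type": "UNETLoader"},
    {"id": 2, "type": "VAELoader"},
    {"id": 3, "type": "CLIPTextEncode"},
    {"id": 4, "type": "KSampler"}
  ]
}
"""

def node_types(workflow_json: str) -> list[str]:
    """List the node types used by a ComfyUI workflow export."""
    data = json.loads(workflow_json)
    return [n["type"] for n in data["nodes"]]
```

This is handy for quickly checking which loaders a downloaded workflow expects before opening it.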

Quantized Versions (GGUF)

For users with limited VRAM or running inference on lower-spec hardware, quantized versions of the Wan 2.2 models are available. This example uses I2V, but the same process applies to T2V.

To use them in ComfyUI, follow these steps:

  1. Replace the current loader with a GGUF loader node.
  2. Select a quantized model from QuantStack’s Hugging Face page.
    Recommended options (the _S suffix denotes the small K-quant variant: smaller files and lower memory use, at a slight quality cost compared with _M):
    • Wan2.2-I2V-A14B-Q5_K_S-LowNoise.gguf
    • Wan2.2-I2V-A14B-Q5_K_S-HighNoise.gguf

These models are optimized for faster inference and lower memory usage while retaining high-quality image-to-video generation.
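The QuantStack file names above follow a consistent pattern (model, quantization level, noise stage), which is convenient when scripting downloads. A small sketch, assuming that naming scheme holds:

```python
import re

def parse_gguf_name(filename: str) -> dict:
    """Split a Wan 2.2 GGUF filename into model, quant level, and noise stage.

    Pattern based on the QuantStack naming seen above, e.g.
    Wan2.2-I2V-A14B-Q5_K_S-LowNoise.gguf
    """
    m = re.fullmatch(
        r"(?P<model>.+)-(?P<quant>Q\d\w*)-(?P<noise>HighNoise|LowNoise)\.gguf",
        filename,
    )
    if m is None:
        raise ValueError(f"unrecognised GGUF filename: {filename}")
    return m.groupdict()
```

For example, the low-noise file above parses into model `Wan2.2-I2V-A14B`, quant `Q5_K_S`, and stage `LowNoise`.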



Wan 2.2 Example Workflows

  • 5B (lower VRAM)
  • 14B (significantly better results for I2V)

More information:
Wan 2.2 ComfyUI Examples


What is a LoRA?

LoRA stands for Low-Rank Adaptation, a technique that allows you to fine-tune existing AI models—like the ones used in ComfyUI—efficiently with additional training data. Instead of training an entirely new model, a LoRA lets you inject specific styles, objects, or characters into a model without significantly increasing its size. This makes it ideal for quickly adding new visual concepts to your workflows.
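The low-rank idea is easy to see numerically. A toy NumPy sketch (dimensions are illustrative, far smaller than real diffusion layers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of one layer.
d_out, d_in, rank = 64, 64, 4
W = rng.standard_normal((d_out, d_in))

# LoRA trains only two small matrices A and B; the adapted weight is
# W + scale * (B @ A), so the update has rank <= 4 here.
A = rng.standard_normal((rank, d_in)) * 0.01   # "down" projection
B = np.zeros((d_out, rank))                    # "up" projection, zero-init
scale = 1.0

W_adapted = W + scale * (B @ A)

# Parameter count: full fine-tune vs. LoRA for this one layer.
full_params = W.size            # 64 * 64 = 4096
lora_params = A.size + B.size   # 4 * 64 + 64 * 4 = 512
```

With B zero-initialized, the adapted weight starts identical to the base model, and training only A and B touches an eighth of the parameters in this toy case; at real layer sizes the savings are far larger.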

How to use a LoRA in ComfyUI

You can download LoRA files (usually .safetensors or .pt) from sites like CivitAI or HuggingFace. Then, you can load them into your ComfyUI workflow using the LoRA Loader.

Adding a LoRA to your workflow

  1. Double-click anywhere in your workflow to add a new node.
  2. Search for lora and select the LoRA Loader node.
  3. Connect this node at the right point in your pipeline:
    • In both the Low Noise and High Noise paths, between the Model Sampling node and the KSampler.
    • Or just before the KSampler, depending on how your workflow is structured.

Tip

Make sure the LoRA files are placed in the correct folder (models/lora/) inside your ComfyUI installation so they're automatically recognized by the loader. You don't need to fully restart ComfyUI to see new files; simply press the R key to refresh.

Wan2.2 is fully uncensored

One of the biggest advantages of using Wan2.2 is that it’s completely uncensored. This means you have full creative freedom and can combine it with any uncensored LoRA to achieve specific styles, moods, or themes without restrictions. Whether you're working with artistic, stylized, or other content, Wan2.2 will not filter or limit your output.


Why This Release Matters

  • Free for commercial use: licensed under Apache 2.0
  • Cinematic quality: with enhanced realism and motion fidelity
  • Runs on modest hardware: TI2V-5B runs on 8 GB VRAM

🚀 Accelerate Your AI Workloads with Our AI Servers

Want to get the most out of Wan 2.2? Pair it with high-performance AI servers from ServerDirect. These systems are purpose-built for demanding workloads like machine learning, deep learning, and large-scale inference, optimized for speed, efficiency, and scalability.

👉 Explore our AI servers



Final Thoughts

The release of Wan 2.2 and its seamless support in ComfyUI marks a major milestone in AI video generation. Whether you're creating short films, visualizations, or AI-based animations, these tools are now more powerful and accessible than ever.

👉 Update ComfyUI, load a Wan 2.2 workflow, and bring your vision to life, frame by frame.
