
Commit 2d81ef0

Support for kijai's ComfyUI-WanVideoWrapper

1 parent 1bf9333 commit 2d81ef0
5 files changed: 1326 additions & 5 deletions

README.md

Lines changed: 9 additions & 1 deletion
```diff
@@ -122,9 +122,12 @@ Currently supported nodes (automatically detected if available):
   - CheckpointLoaderNF4MultiGPU
 - HunyuanVideoWrapper (requires [ComfyUI-HunyuanVideoWrapper](https://github.com/kijai/ComfyUI-HunyuanVideoWrapper)):
   - HyVideoModelLoaderMultiGPU
-  - HyVideoModelLoaderDiffSynthMultiGPU (**NEW** - MultiGPU-specific node for offloading to an `offload_device` using MultiGPU's device selectors)
   - HyVideoVAELoaderMultiGPU
   - DownloadAndLoadHyVideoTextEncoderMultiGPU
+- WanVideoWrapper (requires [ComfyUI-WanVideoWrapper](https://github.com/kijai/ComfyUI-WanVideoWrapper)):
+  - WanVideoModelLoader
+  - WanVideoVAELoader
+  - LoadWanVideoT5TextEncoder
 - **Native to ComfyUI-MultiGPU**
   - DeviceSelectorMultiGPU - Allows user to link loaders together to use the same selected device
   - HunyuanVideoEmbeddingsAdapter - Allows Kijai's excellent IP2V CLIP for HunyuanVideo to be used with Comfy Core sampler.
@@ -146,6 +149,11 @@ This workflow attaches a HunyuanVideo GGUF-quantized model on `cuda:0` for compu
 - [examples/flux1dev_gguf_distorch.json](https://github.com/pollockjj/ComfyUI-MultiGPU/blob/main/examples/flux1dev_gguf_distorch.json)
 This workflow loads a FLUX.1-dev model on `cuda:0` for compute and distributes its UNet across multiple CUDA devices using the new DisTorch distributed-load methodology, while the text encoders and VAE are loaded on GPU 1 and use `cuda:1` for compute.
 
+### Split Wan Video generation across multiple resources
+
+- [examples/wanvideo_T2V_example_MultiGPU.json](https://github.com/pollockjj/ComfyUI-MultiGPU/blob/main/examples/wanvideo_T2V_example_MultiGPU.json)
+This workflow is a simple extension of [kijai's T2V example](https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/example_workflows/wanvideo_T2V_example_02.json) from his custom node.
+
 ### Split Hunyuan Video generation across multiple resources
 
 - [examples/hunyuanvideowrapper_native_vae.json](https://github.com/pollockjj/ComfyUI-MultiGPU/blob/main/examples/hunyuanvideowrapper_native_vae.json)
```

__init__.py

Lines changed: 8 additions & 1 deletion
```diff
@@ -27,7 +27,8 @@
     LoadFluxControlNet,
     MMAudioModelLoader, MMAudioFeatureUtilsLoader, MMAudioSampler,
     PulidModelLoader, PulidInsightFaceLoader, PulidEvaClipLoader,
-    HyVideoModelLoader, HyVideoVAELoader, DownloadAndLoadHyVideoTextEncoder
+    HyVideoModelLoader, HyVideoVAELoader, DownloadAndLoadHyVideoTextEncoder,
+    WanVideoModelLoader, WanVideoVAELoader, LoadWanVideoT5TextEncoder
 )
 
 current_device = mm.get_torch_device()
@@ -735,4 +736,10 @@ def check_module_exists(module_path):
     NODE_CLASS_MAPPINGS["HyVideoVAELoaderMultiGPU"] = override_class(HyVideoVAELoader)
     NODE_CLASS_MAPPINGS["DownloadAndLoadHyVideoTextEncoderMultiGPU"] = override_class(DownloadAndLoadHyVideoTextEncoder)
 
+if check_module_exists("ComfyUI-WanVideoWrapper") or check_module_exists("comfyui-wanvideowrapper"):
+    NODE_CLASS_MAPPINGS["WanVideoModelLoaderMultiGPU"] = override_class(WanVideoModelLoader)
+    NODE_CLASS_MAPPINGS["WanVideoVAELoaderMultiGPU"] = override_class(WanVideoVAELoader)
+    NODE_CLASS_MAPPINGS["LoadWanVideoT5TextEncoderMultiGPU"] = override_class(LoadWanVideoT5TextEncoder)
+
+
 logging.info(f"MultiGPU: Registration complete. Final mappings: {', '.join(NODE_CLASS_MAPPINGS.keys())}")
```
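The diff above follows a conditional-registration pattern: detect whether the wrapper pack is installed (under either of its directory-name casings), then wrap each of its loader nodes with a MultiGPU variant. A minimal sketch of that pattern is below; note that `check_module_exists`, `override_class`, and the stand-in `WanVideoModelLoader` are simplified assumptions for illustration, not the actual ComfyUI-MultiGPU implementations.

```python
import os

NODE_CLASS_MAPPINGS = {}

def check_module_exists(module_path):
    # Assumption: node packs live in ComfyUI's custom_nodes directory,
    # so presence of the directory means the pack is installed.
    return os.path.isdir(os.path.join("custom_nodes", module_path))

def override_class(cls):
    # Simplified stand-in for MultiGPU's wrapper: subclass the original
    # loader and expose an extra "device" input so the user can pin the
    # load to a specific device.
    class MultiGPUNode(cls):
        @classmethod
        def INPUT_TYPES(s):
            inputs = cls.INPUT_TYPES()  # closure 'cls' = wrapped class
            inputs.setdefault("required", {})["device"] = (["cuda:0", "cuda:1", "cpu"],)
            return inputs
    return MultiGPUNode

class WanVideoModelLoader:
    # Hypothetical stand-in for the real WanVideoWrapper loader node.
    @classmethod
    def INPUT_TYPES(s):
        return {"required": {"model": ("STRING",)}}

# Same shape as the registration in the diff: only register the MultiGPU
# variants when the wrapper pack is present (either directory casing).
if check_module_exists("ComfyUI-WanVideoWrapper") or check_module_exists("comfyui-wanvideowrapper"):
    NODE_CLASS_MAPPINGS["WanVideoModelLoaderMultiGPU"] = override_class(WanVideoModelLoader)
```

A design note on the `or`: node-pack directory names vary by install method (git clone vs. registry), so both the mixed-case and lowercase names are probed before registering.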
