
Commit ef07c0f: Documentation clean-up

1 parent 5310d4a commit ef07c0f

14 files changed: 253 additions & 29 deletions

README.md

Lines changed: 52 additions & 29 deletions
@@ -75,40 +75,63 @@ The extension automatically creates MultiGPU versions of loader nodes. Each Mult

Currently supported nodes (automatically detected if available):

- Standard [ComfyUI](https://github.com/comfyanonymous/ComfyUI) model loaders:
  - [CheckpointLoaderAdvancedMultiGPU](web/docs/CheckpointLoaderAdvancedMultiGPU.md) / [CheckpointLoaderAdvancedDisTorch2MultiGPU](web/docs/CheckpointLoaderAdvancedDisTorch2MultiGPU.md)
  - [CheckpointLoaderSimpleMultiGPU](web/docs/CheckpointLoaderSimpleMultiGPU.md) / [CheckpointLoaderSimpleDisTorch2MultiGPU](web/docs/CheckpointLoaderSimpleDisTorch2MultiGPU.md)
  - [UNETLoaderMultiGPU](web/docs/UNETLoaderMultiGPU.md) / [UNETLoaderDisTorch2MultiGPU](web/docs/UNETLoaderDisTorch2MultiGPU.md)
  - [UNetLoaderLP](web/docs/UNetLoaderLP.md)
  - [VAELoaderMultiGPU](web/docs/VAELoaderMultiGPU.md) / [VAELoaderDisTorch2MultiGPU](web/docs/VAELoaderDisTorch2MultiGPU.md)
  - [CLIPLoaderMultiGPU](web/docs/CLIPLoaderMultiGPU.md) / [CLIPLoaderDisTorch2MultiGPU](web/docs/CLIPLoaderDisTorch2MultiGPU.md)
  - [DualCLIPLoaderMultiGPU](web/docs/DualCLIPLoaderMultiGPU.md) / [DualCLIPLoaderDisTorch2MultiGPU](web/docs/DualCLIPLoaderDisTorch2MultiGPU.md)
  - [TripleCLIPLoaderMultiGPU](web/docs/TripleCLIPLoaderMultiGPU.md) / [TripleCLIPLoaderDisTorch2MultiGPU](web/docs/TripleCLIPLoaderDisTorch2MultiGPU.md)
  - [QuadrupleCLIPLoaderMultiGPU](web/docs/QuadrupleCLIPLoaderMultiGPU.md) / [QuadrupleCLIPLoaderDisTorch2MultiGPU](web/docs/QuadrupleCLIPLoaderDisTorch2MultiGPU.md)
  - [CLIPVisionLoaderMultiGPU](web/docs/CLIPVisionLoaderMultiGPU.md) / [CLIPVisionLoaderDisTorch2MultiGPU](web/docs/CLIPVisionLoaderDisTorch2MultiGPU.md)
  - [ControlNetLoaderMultiGPU](web/docs/ControlNetLoaderMultiGPU.md) / [ControlNetLoaderDisTorch2MultiGPU](web/docs/ControlNetLoaderDisTorch2MultiGPU.md)
  - [DiffusersLoaderMultiGPU](web/docs/DiffusersLoaderMultiGPU.md) / [DiffusersLoaderDisTorch2MultiGPU](web/docs/DiffusersLoaderDisTorch2MultiGPU.md)
  - [DiffControlNetLoaderMultiGPU](web/docs/DiffControlNetLoaderMultiGPU.md) / [DiffControlNetLoaderDisTorch2MultiGPU](web/docs/DiffControlNetLoaderDisTorch2MultiGPU.md)
- WanVideoWrapper (requires [ComfyUI-WanVideoWrapper](https://github.com/kijai/ComfyUI-WanVideoWrapper)):
  - [WanVideoModelLoaderMultiGPU](web/docs/WanVideoModelLoaderMultiGPU.md)
  - [WanVideoVAELoaderMultiGPU](web/docs/WanVideoVAELoaderMultiGPU.md)
  - [WanVideoTinyVAELoaderMultiGPU](web/docs/WanVideoTinyVAELoaderMultiGPU.md)
  - [WanVideoBlockSwapMultiGPU](web/docs/WanVideoBlockSwapMultiGPU.md)
  - [WanVideoImageToVideoEncodeMultiGPU](web/docs/WanVideoImageToVideoEncodeMultiGPU.md)
  - [WanVideoEncodeMultiGPU](web/docs/WanVideoEncodeMultiGPU.md)
  - [WanVideoDecodeMultiGPU](web/docs/WanVideoDecodeMultiGPU.md)
  - [WanVideoSamplerMultiGPU](web/docs/WanVideoSamplerMultiGPU.md)
  - [WanVideoVACEEncodeMultiGPU](web/docs/WanVideoVACEEncodeMultiGPU.md)
  - [WanVideoClipVisionEncodeMultiGPU](web/docs/WanVideoClipVisionEncodeMultiGPU.md)
  - [WanVideoControlnetLoaderMultiGPU](web/docs/WanVideoControlnetLoaderMultiGPU.md)
  - [WanVideoUni3C_ControlnetLoaderMultiGPU](web/docs/WanVideoUni3C_ControlnetLoaderMultiGPU.md)
  - [WanVideoTextEncodeMultiGPU](web/docs/WanVideoTextEncodeMultiGPU.md)
  - [WanVideoTextEncodeCachedMultiGPU](web/docs/WanVideoTextEncodeCachedMultiGPU.md)
  - [WanVideoTextEncodeSingleMultiGPU](web/docs/WanVideoTextEncodeSingleMultiGPU.md)
  - [LoadWanVideoT5TextEncoderMultiGPU](web/docs/LoadWanVideoT5TextEncoderMultiGPU.md)
  - [LoadWanVideoClipTextEncoderMultiGPU](web/docs/LoadWanVideoClipTextEncoderMultiGPU.md)
  - [FantasyTalkingModelLoaderMultiGPU](web/docs/FantasyTalkingModelLoaderMultiGPU.md)
  - [Wav2VecModelLoaderMultiGPU](web/docs/Wav2VecModelLoaderMultiGPU.md) / [DownloadAndLoadWav2VecModelMultiGPU](web/docs/DownloadAndLoadWav2VecModelMultiGPU.md)
- GGUF loaders (requires [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF)):
  - UNet family: [UnetLoaderGGUFMultiGPU](web/docs/UnetLoaderGGUFMultiGPU.md) / [UnetLoaderGGUFDisTorch2MultiGPU](web/docs/UnetLoaderGGUFDisTorch2MultiGPU.md)
  - UNet Advanced bundles: [UnetLoaderGGUFAdvancedMultiGPU](web/docs/UnetLoaderGGUFAdvancedMultiGPU.md) / [UnetLoaderGGUFAdvancedDisTorch2MultiGPU](web/docs/UnetLoaderGGUFAdvancedDisTorch2MultiGPU.md)
  - CLIP family: [CLIPLoaderGGUFMultiGPU](web/docs/CLIPLoaderGGUFMultiGPU.md) / [CLIPLoaderGGUFDisTorch2MultiGPU](web/docs/CLIPLoaderGGUFDisTorch2MultiGPU.md)
  - Dual CLIP: [DualCLIPLoaderGGUFMultiGPU](web/docs/DualCLIPLoaderGGUFMultiGPU.md) / [DualCLIPLoaderGGUFDisTorch2MultiGPU](web/docs/DualCLIPLoaderGGUFDisTorch2MultiGPU.md)
  - Triple CLIP: [TripleCLIPLoaderGGUFMultiGPU](web/docs/TripleCLIPLoaderGGUFMultiGPU.md) / [TripleCLIPLoaderGGUFDisTorch2MultiGPU](web/docs/TripleCLIPLoaderGGUFDisTorch2MultiGPU.md)
  - Quadruple CLIP: [QuadrupleCLIPLoaderGGUFMultiGPU](web/docs/QuadrupleCLIPLoaderGGUFMultiGPU.md) / [QuadrupleCLIPLoaderGGUFDisTorch2MultiGPU](web/docs/QuadrupleCLIPLoaderGGUFDisTorch2MultiGPU.md)
- XLabAI FLUX ControlNet (requires [x-flux-comfy](https://github.com/XLabAI/x-flux-comfyui)):
  - [LoadFluxControlNetMultiGPU](web/docs/LoadFluxControlNetMultiGPU.md)
- Florence2 (requires [ComfyUI-Florence2](https://github.com/kijai/ComfyUI-Florence2)):
  - [Florence2ModelLoaderMultiGPU](web/docs/Florence2ModelLoaderMultiGPU.md)
  - [DownloadAndLoadFlorence2ModelMultiGPU](web/docs/DownloadAndLoadFlorence2ModelMultiGPU.md)
- LTX Video Custom Checkpoint Loader (requires [ComfyUI-LTXVideo](https://github.com/Lightricks/ComfyUI-LTXVideo)):
  - [LTXVLoaderMultiGPU](web/docs/LTXVLoaderMultiGPU.md)
- NF4 Checkpoint Format Loader (requires [ComfyUI_bitsandbytes_NF4](https://github.com/comfyanonymous/ComfyUI_bitsandbytes_NF4)):
  - [CheckpointLoaderNF4MultiGPU](web/docs/CheckpointLoaderNF4MultiGPU.md)
- MMAudio (requires [ComfyUI-MMAudio](https://github.com/comfyanonymous/ComfyUI-MMAudio)):
  - [MMAudioModelLoaderMultiGPU](web/docs/MMAudioModelLoaderMultiGPU.md)
  - [MMAudioFeatureUtilsLoaderMultiGPU](web/docs/MMAudioFeatureUtilsLoaderMultiGPU.md)
  - [MMAudioSamplerMultiGPU](web/docs/MMAudioSamplerMultiGPU.md)
- Pulid (requires [PuLID_ComfyUI](https://github.com/cubiq/PuLID_ComfyUI)):
  - [PulidModelLoaderMultiGPU](web/docs/PulidModelLoaderMultiGPU.md)
  - [PulidInsightFaceLoaderMultiGPU](web/docs/PulidInsightFaceLoaderMultiGPU.md)
  - [PulidEvaClipLoaderMultiGPU](web/docs/PulidEvaClipLoaderMultiGPU.md)

All MultiGPU nodes available for your install can be found in the "multigpu" category in the node menu.
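The node list above maps directly onto ComfyUI's API-format workflow JSON: each MultiGPU variant keeps its upstream node's inputs and adds a `device` selector. A minimal sketch of what that looks like for a UNET loader; the model filename and device ID are placeholder assumptions, not values taken from this repository:

```python
import json

# API-format graph fragment using a MultiGPU loader node.
# "flux1-dev.safetensors" and "cuda:1" are placeholders -- adjust for your setup.
workflow = {
    "1": {
        "class_type": "UNETLoaderMultiGPU",
        "inputs": {
            "unet_name": "flux1-dev.safetensors",  # placeholder model file
            "weight_dtype": "default",
            "device": "cuda:1",  # extra input added by the MultiGPU variant
        },
    },
}

# Wrap the graph the way ComfyUI's /prompt endpoint expects it.
payload = json.dumps({"prompt": workflow})
print(payload)
```

POSTing `payload` to a running ComfyUI instance's `/prompt` endpoint would queue the graph; the sketch stops short of the network call.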

web/docs/CLIPLoaderGGUFDisTorchMultiGPU.md

Lines changed: 5 additions & 0 deletions

# CLIPLoaderGGUFDisTorchMultiGPU

> **Deprecated**: DisTorch V1 legacy nodes are no longer supported. Please migrate to [CLIPLoaderGGUFDisTorch2MultiGPU](CLIPLoaderGGUFDisTorch2MultiGPU.md) for maintained functionality.

This page is retained for archival purposes only. Use the DisTorch2 version linked above for current GGUF CLIP loading with MultiGPU support.
web/docs/DualCLIPLoaderGGUFDisTorchMultiGPU.md

Lines changed: 5 additions & 0 deletions

# DualCLIPLoaderGGUFDisTorchMultiGPU

> **Deprecated**: DisTorch V1 legacy nodes are no longer supported. Please migrate to [DualCLIPLoaderGGUFDisTorch2MultiGPU](DualCLIPLoaderGGUFDisTorch2MultiGPU.md) for maintained functionality.

This documentation is retained for reference only. Use the DisTorch2 version above for dual GGUF CLIP workflows with modern allocation support.
web/docs/MMAudioFeatureUtilsLoaderMultiGPU.md

Lines changed: 28 additions & 0 deletions

# MMAudioFeatureUtilsLoaderMultiGPU

`MMAudioFeatureUtilsLoaderMultiGPU` gathers the auxiliary MMAudio components (VAE, Synchformer, CLIP, and an optional vocoder) on the device you choose so they can feed the sampler without consuming memory on your main GPU.

## Inputs

### Required

| Parameter | Data Type | Description |
| --- | --- | --- |
| `vae_model` | `STRING` | VAE weights from `ComfyUI/models/mmaudio`. |
| `synchformer_model` | `STRING` | Synchformer weights from `ComfyUI/models/mmaudio`. |
| `clip_model` | `STRING` | CLIP weights from `ComfyUI/models/mmaudio`. |

### Optional

| Parameter | Data Type | Description |
| --- | --- | --- |
| `bigvgan_vocoder_model` | `VOCODER_MODEL` | Optional BigVGAN vocoder bundle. |
| `mode` | `STRING` | Sampling-rate mode (`16k` or `44k`). |
| `precision` | `STRING` | Precision for the auxiliary weights (`fp16`, `fp32`, `bf16`). |
| `device` | `STRING` | Device receiving the feature utility stack. |

## Outputs

| Output Name | Data Type | Description |
| --- | --- | --- |
| `mmaudio_featureutils` | `MMAUDIO_FEATUREUTILS` | Fully prepared feature utility pack. |
web/docs/MMAudioModelLoaderMultiGPU.md

Lines changed: 24 additions & 0 deletions

# MMAudioModelLoaderMultiGPU

`MMAudioModelLoaderMultiGPU` loads MMAudio diffusion checkpoints while letting you pin the model weights to a specific compute device. Use it to keep long-running audio generations off your primary image GPU.

## Inputs

### Required

| Parameter | Data Type | Description |
| --- | --- | --- |
| `mmaudio_model` | `STRING` | Model filename from `ComfyUI/models/mmaudio`. |
| `base_precision` | `STRING` | Weight precision to request (`fp16`, `fp32`, `bf16`). |

### Optional

| Parameter | Data Type | Description |
| --- | --- | --- |
| `device` | `STRING` | Target device for the loaded model (e.g. `cuda:0`, `cpu`). |

## Outputs

| Output Name | Data Type | Description |
| --- | --- | --- |
| `mmaudio_model` | `MMAUDIO_MODEL` | Loaded MMAudio diffusion pipeline. |

web/docs/MMAudioSamplerMultiGPU.md

Lines changed: 33 additions & 0 deletions
# MMAudioSamplerMultiGPU

`MMAudioSamplerMultiGPU` renders audio clips with MMAudio while giving you control over which accelerator runs the diffusion loop and whether the model is offloaded afterwards.

## Inputs

### Required

| Parameter | Data Type | Description |
| --- | --- | --- |
| `mmaudio_model` | `MMAUDIO_MODEL` | Core MMAudio checkpoint prepared by the loader. |
| `feature_utils` | `MMAUDIO_FEATUREUTILS` | Feature utility bundle containing the VAE, Synchformer, CLIP, and optional vocoder. |
| `duration` | `FLOAT` | Target duration of the generated audio, in seconds. |
| `steps` | `INT` | Number of sampler iterations to run. |
| `cfg` | `FLOAT` | Classifier-free guidance scale. |
| `seed` | `INT` | Random seed; reuse the same value for repeatable results. |
| `prompt` | `STRING` | Positive conditioning text. |
| `negative_prompt` | `STRING` | Negative conditioning text. |
| `mask_away_clip` | `BOOLEAN` | Mask out the supplied video frames' CLIP features during sampling. |
| `force_offload` | `BOOLEAN` | Offload the model from the compute device after sampling. |

### Optional

| Parameter | Data Type | Description |
| --- | --- | --- |
| `images` | `IMAGE` | Reference frames to guide the sampler. |
| `device` | `STRING` | Device that hosts the diffusion pass (`cuda:0`, `cpu`, etc.). |

## Outputs

| Output Name | Data Type | Description |
| --- | --- | --- |
| `audio` | `AUDIO` | Generated audio waveform tensor. |
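Taken together with the two MMAudio loader nodes documented above, these tables describe a three-node chain: model loader and feature-utils loader feed the sampler. A hedged sketch of how that chain might be wired in ComfyUI's API-format JSON; every filename, device string, and prompt below is an illustrative placeholder, not a value from this repository:

```python
# Hypothetical API-format graph wiring the MMAudio MultiGPU nodes together.
graph = {
    "1": {"class_type": "MMAudioModelLoaderMultiGPU",
          "inputs": {"mmaudio_model": "mmaudio.safetensors",  # placeholder
                     "base_precision": "fp16",
                     "device": "cuda:1"}},
    "2": {"class_type": "MMAudioFeatureUtilsLoaderMultiGPU",
          "inputs": {"vae_model": "mmaudio_vae.safetensors",          # placeholder
                     "synchformer_model": "synchformer.safetensors",  # placeholder
                     "clip_model": "clip.safetensors",                # placeholder
                     "mode": "44k",
                     "precision": "fp16",
                     "device": "cuda:1"}},
    "3": {"class_type": "MMAudioSamplerMultiGPU",
          "inputs": {"mmaudio_model": ["1", 0],  # output 0 of node "1"
                     "feature_utils": ["2", 0],  # output 0 of node "2"
                     "duration": 8.0,
                     "steps": 25,
                     "cfg": 4.5,
                     "seed": 42,
                     "prompt": "rain on a tin roof",
                     "negative_prompt": "music",
                     "mask_away_clip": False,
                     "force_offload": True,
                     "device": "cuda:1"}},
}

# Links are [source_node_id, output_index] pairs in ComfyUI's API format.
for key in ("mmaudio_model", "feature_utils"):
    src, idx = graph["3"]["inputs"][key]
    assert graph[src]["class_type"].startswith("MMAudio") and idx == 0
```

Keeping all three nodes on the same secondary device (`cuda:1` here) is one plausible placement; the `device` inputs can equally split the loaders and sampler across accelerators.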
web/docs/PulidEvaClipLoaderMultiGPU.md

Lines changed: 17 additions & 0 deletions

# PulidEvaClipLoaderMultiGPU

`PulidEvaClipLoaderMultiGPU` prepares the EVA CLIP encoder required by PuLID and keeps it on the device you nominate for downstream conditioning.

## Inputs

### Optional

| Parameter | Data Type | Description |
| --- | --- | --- |
| `device` | `STRING` | Device selected for the EVA CLIP encoder. |

## Outputs

| Output Name | Data Type | Description |
| --- | --- | --- |
| `eva_clip` | `EVA_CLIP` | Loaded EVA CLIP encoder instance. |
web/docs/PulidInsightFaceLoaderMultiGPU.md

Lines changed: 23 additions & 0 deletions

# PulidInsightFaceLoaderMultiGPU

`PulidInsightFaceLoaderMultiGPU` initializes the InsightFace detector needed by PuLID and pins it to the device you specify, so face embeddings are computed on the accelerator best suited to your setup.

## Inputs

### Required

| Parameter | Data Type | Description |
| --- | --- | --- |
| `provider` | `STRING` | Execution backend (`CPU`, `CUDA`, `ROCM`, `CoreML`). |

### Optional

| Parameter | Data Type | Description |
| --- | --- | --- |
| `device` | `STRING` | Device assigned to the InsightFace runtime. |

## Outputs

| Output Name | Data Type | Description |
| --- | --- | --- |
| `faceanalysis` | `FACEANALYSIS` | Ready InsightFace analysis module. |
web/docs/PulidModelLoaderMultiGPU.md

Lines changed: 23 additions & 0 deletions

# PulidModelLoaderMultiGPU

`PulidModelLoaderMultiGPU` loads PuLID identity preservation checkpoints onto your chosen device so facial guidance workloads can avoid your primary rendering GPU.

## Inputs

### Required

| Parameter | Data Type | Description |
| --- | --- | --- |
| `pulid_file` | `STRING` | PuLID model file from `ComfyUI/models/pulid`. |

### Optional

| Parameter | Data Type | Description |
| --- | --- | --- |
| `device` | `STRING` | Device that will host the PuLID weights. |

## Outputs

| Output Name | Data Type | Description |
| --- | --- | --- |
| `model` | `PULID` | Loaded PuLID model bundle. |
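Because each of the three PuLID loaders documented above accepts its own `device` input, the components can be placed on different accelerators. A speculative placement sketch in ComfyUI's API-format JSON; the model filename and device assignments are placeholder assumptions:

```python
# Hypothetical placement of the three PuLID MultiGPU loaders across devices.
# "pulid_v1.safetensors" and the device strings are placeholders.
pulid_nodes = {
    "10": {"class_type": "PulidModelLoaderMultiGPU",
           "inputs": {"pulid_file": "pulid_v1.safetensors",  # placeholder
                      "device": "cuda:0"}},
    "11": {"class_type": "PulidInsightFaceLoaderMultiGPU",
           "inputs": {"provider": "CUDA",
                      "device": "cuda:1"}},
    "12": {"class_type": "PulidEvaClipLoaderMultiGPU",
           "inputs": {"device": "cuda:1"}},
}

# Collect the device assignments to confirm the split.
devices = {n["class_type"]: n["inputs"]["device"] for n in pulid_nodes.values()}
print(devices)
```

Here the PuLID weights stay on the rendering GPU while the face-analysis and EVA CLIP stages run on a second card; any other split is equally valid.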
web/docs/QuadrupleCLIPLoaderGGUFDisTorchMultiGPU.md

Lines changed: 5 additions & 0 deletions

# QuadrupleCLIPLoaderGGUFDisTorchMultiGPU

> **Deprecated**: DisTorch V1 legacy nodes are no longer supported. Please migrate to [QuadrupleCLIPLoaderGGUFDisTorch2MultiGPU](QuadrupleCLIPLoaderGGUFDisTorch2MultiGPU.md) for maintained functionality.

This record is kept solely for archival reasons; adopt the DisTorch2 loader referenced above for four-CLIP GGUF pipelines.
