> **Deprecated**: DisTorch V1 legacy nodes are no longer supported. Please migrate to [CLIPLoaderGGUFDisTorch2MultiGPU](CLIPLoaderGGUFDisTorch2MultiGPU.md) for maintained functionality.
This page is retained for archival purposes only. Use the DisTorch2 version linked above for current GGUF CLIP loading with MultiGPU support.
> **Deprecated**: DisTorch V1 legacy nodes are no longer supported. Please migrate to [DualCLIPLoaderGGUFDisTorch2MultiGPU](DualCLIPLoaderGGUFDisTorch2MultiGPU.md) for maintained functionality.
This documentation is retained for reference only. Use the DisTorch2 version above for dual GGUF CLIP workflows with modern allocation support.
`MMAudioFeatureUtilsLoaderMultiGPU` gathers the auxiliary MMAudio components (VAE, Synchformer, CLIP, and optional vocoder) on the device you choose so they can feed the sampler without consuming your main GPU.

## Inputs

### Required

| Parameter | Data Type | Description |
| --- | --- | --- |
|`vae_model`|`STRING`| VAE weights from `ComfyUI/models/mmaudio`. |
|`synchformer_model`|`STRING`| Synchformer weights from `ComfyUI/models/mmaudio`. |
|`clip_model`|`STRING`| CLIP weights from `ComfyUI/models/mmaudio`. |
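
The filename parameters above are populated from the `ComfyUI/models/mmaudio` folder. As a rough illustration, a loader could enumerate candidate weight files like this (a hypothetical sketch — the real node relies on ComfyUI's own `folder_paths` helpers, not a direct directory scan):

```python
from pathlib import Path

# Hypothetical sketch: list weight files a dropdown could offer.
# The actual node uses ComfyUI's folder_paths registry instead.
def list_mmaudio_models(models_root: str) -> list[str]:
    mmaudio_dir = Path(models_root) / "mmaudio"
    if not mmaudio_dir.is_dir():
        return []
    return sorted(
        p.name
        for p in mmaudio_dir.iterdir()
        if p.suffix in {".safetensors", ".pt", ".ckpt"}
    )
```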
`MMAudioModelLoaderMultiGPU` loads MMAudio diffusion checkpoints while letting you pin the model weights to a specific compute device. Use it to keep long-running audio generations off your primary image GPU.

## Inputs

### Required

| Parameter | Data Type | Description |
| --- | --- | --- |
|`mmaudio_model`|`STRING`| Model filename from `ComfyUI/models/mmaudio`. |
|`base_precision`|`STRING`| Weight precision to request (`fp16`, `fp32`, `bf16`). |

### Optional

| Parameter | Data Type | Description |
| --- | --- | --- |
|`device`|`STRING`| Target device for the loaded model (e.g. `cuda:0`, `cpu`). |
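
Both `device` and `base_precision` are plain strings, so malformed values fail only at load time. A minimal validation sketch, assuming the accepted forms shown in the table above (hypothetical helper — not the node's actual code):

```python
import re

# Assumed accepted values, taken from the parameter table above.
VALID_PRECISIONS = {"fp16", "fp32", "bf16"}
DEVICE_PATTERN = re.compile(r"^(cpu|cuda(:\d+)?)$")

def validate_inputs(device: str, base_precision: str) -> None:
    """Hypothetical pre-flight check before any weights are loaded."""
    if not DEVICE_PATTERN.match(device):
        raise ValueError(f"unsupported device string: {device!r}")
    if base_precision not in VALID_PRECISIONS:
        raise ValueError(f"unsupported precision: {base_precision!r}")

validate_inputs("cuda:0", "fp16")  # passes silently
```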
`MMAudioSamplerMultiGPU` renders audio clips with MMAudio while giving you control over which accelerator runs the diffusion loop and whether frames stay offloaded.

## Inputs

### Required

| Parameter | Data Type | Description |
| --- | --- | --- |
|`mmaudio_model`|`MMAUDIO_MODEL`| Core MMAudio checkpoint prepared by the loader. |
`PulidInsightFaceLoaderMultiGPU` boots the InsightFace detector needed by PuLID and pins it to the device you specify, ensuring face embeddings come from the best accelerator for your setup.
`PulidModelLoaderMultiGPU` loads PuLID identity preservation checkpoints onto your chosen device so facial guidance workloads can avoid your primary rendering GPU.

## Inputs

### Required

| Parameter | Data Type | Description |
| --- | --- | --- |
|`pulid_file`|`STRING`| PuLID model file from `ComfyUI/models/pulid`. |

### Optional

| Parameter | Data Type | Description |
| --- | --- | --- |
|`device`|`STRING`| Device that will host the PuLID weights. |
> **Deprecated**: DisTorch V1 legacy nodes are no longer supported. Please migrate to [QuadrupleCLIPLoaderGGUFDisTorch2MultiGPU](QuadrupleCLIPLoaderGGUFDisTorch2MultiGPU.md) for maintained functionality.
This record is kept solely for archival reasons; adopt the DisTorch2 loader referenced above for four-CLIP GGUF pipelines.