
Commit eeed32b

pollockjj and Copilot authored
Update example_workflows/qwen_image_edit_2509 unet clip distorch2.json
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
1 parent ef07c0f commit eeed32b

1 file changed

Lines changed: 1 addition & 1 deletion

File tree

example_workflows/qwen_image_edit_2509 unet clip distorch2.json

@@ -562,7 +562,7 @@
 "title": "ComfyUI-MultiGPU Note - qwen_image_edit_2509 UNET, DualClipLoader DisTorch2 Workflow",
 "properties": {},
 "widgets_values": [
-"A DisTorch2 QWEN Image Edit 2509 workflow with UNet and CLIP DisTorch2. In this case we are using the DisTorch2 node to statically allocate approximately 70% of the UNet model (14.0G) to the cpu and 50% of the CLIP model (3.9G) to cpu. With `eject_models` being `True`, all other models on the `compute` card (`cuda:0`) will be unloaded, regardless of size, ensuring a clean compute card prior to inference.\n\n## Custom Node\n- [ComfyUI-MultiGPU](https://github.com/pollockjj/ComfyUI-MultiGPU)\n```\n📂 ComfyUI/\n├── 📂 custom_nodes/\n│ ├── 📂 ComfyUI-MultiGPU/\n```\n## Model links\n\n**Diffusion model**\n- [qwen_image_edit_2509_fp8_e4m3fn.safetensors](https://huggingface.co/Comfy-Org/https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI/blob/main/split_files/diffusion_models/qwen_image_edit_2509_fp8_e4m3fn.safetensors)\n\n**CLIP**\n- [qwen_2.5_vl_7b_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/blob/main/split_files/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors)\n\n**VAE**\n- [qwen_image_vae.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/blob/main/split_files/vae/qwen_image_vae.safetensors)\n\nModel Storage Location\n\n```\n📂 ComfyUI/\n├── 📂 models/\n│ ├── 📂 clip/\n│ │ └── qwen_2.5_vl_7b_fp8_scaled.safetensors [🟪🟪🟪🟪🟪⚙️⚙️⚙️⚙️⚙️]\n│ ├── 📂 unet/\n│ │ ├── qwen_image_edit_2509_fp8_e4m3fn.safetensors [🟩🟩🟩⚙️⚙️⚙️⚙️⚙️⚙️⚙️]\n│ ├── 📂 vae/\n│ │ └── qwen_image_vae.safetensors \n```\n## Device Mapping (Example: Two GPUs, 1 CPU)\n```\n🖥️ system \n├── 🟢 cuda:0\n│ └── qwen_image_edit_2509_fp8_e4m3fn.safetensors [🟩🟩🟩]\n├── 🟣 cuda:1\n│ └── qwen_2.5_vl_7b_fp8_scaled.safetensors[🟪🟪🟪🟪🟪]\n│ └── qwen_image_vae.safetensors\n├── ⚙️ cpu\n └── qwen_image_edit_2509_fp8_e4m3fn.safetensors [⚙️⚙️⚙️⚙️⚙️⚙️⚙️]\n └── qwen_2.5_vl_7b_fp8_scaled.safetensors [⚙️⚙️⚙️⚙️⚙️]\n```"
+"A DisTorch2 QWEN Image Edit 2509 workflow with UNet and CLIP DisTorch2. In this case we are using the DisTorch2 node to statically allocate approximately 70% of the UNet model (14.0G) to the cpu and 50% of the CLIP model (3.9G) to cpu. With `eject_models` being `True`, all other models on the `compute` card (`cuda:0`) will be unloaded, regardless of size, ensuring a clean compute card prior to inference.\n\n## Custom Node\n- [ComfyUI-MultiGPU](https://github.com/pollockjj/ComfyUI-MultiGPU)\n```\n📂 ComfyUI/\n├── 📂 custom_nodes/\n│ ├── 📂 ComfyUI-MultiGPU/\n```\n## Model links\n\n**Diffusion model**\n- [qwen_image_edit_2509_fp8_e4m3fn.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI/blob/main/split_files/diffusion_models/qwen_image_edit_2509_fp8_e4m3fn.safetensors)\n\n**CLIP**\n- [qwen_2.5_vl_7b_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/blob/main/split_files/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors)\n\n**VAE**\n- [qwen_image_vae.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/blob/main/split_files/vae/qwen_image_vae.safetensors)\n\nModel Storage Location\n\n```\n📂 ComfyUI/\n├── 📂 models/\n│ ├── 📂 clip/\n│ │ └── qwen_2.5_vl_7b_fp8_scaled.safetensors [🟪🟪🟪🟪🟪⚙️⚙️⚙️⚙️⚙️]\n│ ├── 📂 unet/\n│ │ ├── qwen_image_edit_2509_fp8_e4m3fn.safetensors [🟩🟩🟩⚙️⚙️⚙️⚙️⚙️⚙️⚙️]\n│ ├── 📂 vae/\n│ │ └── qwen_image_vae.safetensors \n```\n## Device Mapping (Example: Two GPUs, 1 CPU)\n```\n🖥️ system \n├── 🟢 cuda:0\n│ └── qwen_image_edit_2509_fp8_e4m3fn.safetensors [🟩🟩🟩]\n├── 🟣 cuda:1\n│ └── qwen_2.5_vl_7b_fp8_scaled.safetensors[🟪🟪🟪🟪🟪]\n│ └── qwen_image_vae.safetensors\n├── ⚙️ cpu\n └── qwen_image_edit_2509_fp8_e4m3fn.safetensors [⚙️⚙️⚙️⚙️⚙️⚙️⚙️]\n └── qwen_2.5_vl_7b_fp8_scaled.safetensors [⚙️⚙️⚙️⚙️⚙️]\n```"
 ],
 "color": "#008181",
 "bgcolor": "rgba(24,24,27,.9)"
