A curated collection of pre-compiled Python wheels for difficult-to-install AI/ML libraries on Windows.
Report a Broken Link
·
Request a New Wheel
This repository was created to address a common pain point for AI enthusiasts and developers on the Windows platform: building complex Python packages from source. Libraries like flash-attention and xformers are essential for high-performance AI tasks but often lack official pre-built wheels for Windows, forcing users into a complicated and error-prone compilation process.
The goal here is to provide a centralized, up-to-date collection of direct links to pre-compiled .whl files for these libraries, primarily for the ComfyUI community and other PyTorch users on Windows. This saves you time and lets you focus on what's important: creating amazing things with AI.
To make life even easier, you can use the Find Windows AI Wheels page to quickly search for the packages you need.
Follow these simple steps to use the wheels from this repository.
- Python for Windows: Ensure you have a compatible Python version installed (PyTorch currently supports Python 3.9 - 3.14 on Windows). You can get it from the official Python website.
To install a wheel, pass the direct URL of the .whl file to pip. Make sure to enclose the URL in quotes.

```shell
# Example of installing a specific flash-attention wheel
pip install "https://huggingface.co/lldacing/flash-attention-windows-wheel/resolve/main/flash_attn-2.7.4.post1+cu128torch2.7.0cxx11abiFALSE-cp312-cp312-win_amd64.whl"
```

Tip

Find the package you need in the Available Wheels section below, find the row that matches your environment (Python, PyTorch, and CUDA versions), and copy its link into the pip install command.
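A wheel's filename already encodes most of what you need to match: the version's local tag (after the `+`) usually names the CUDA and PyTorch build, `cp312` means CPython 3.12, and `win_amd64` means 64-bit Windows. As a minimal sketch (the `parse_wheel_filename` helper is illustrative, not part of any official tooling), you can split those tags out before installing:

```python
import re

def parse_wheel_filename(filename: str) -> dict:
    """Split a wheel filename into its PEP 427 tags (name-version-pytag-abitag-platform)."""
    stem = filename.removesuffix(".whl")
    name, version, py_tag, abi_tag, platform = stem.split("-")
    info = {"name": name, "version": version, "python": py_tag,
            "abi": abi_tag, "platform": platform}
    # The local version segment after '+' often encodes the CUDA/PyTorch build.
    m = re.search(r"\+cu(\d+)torch([\d.]+)", version)
    if m:
        info["cuda"] = m.group(1)   # e.g. "128" -> CUDA 12.8
        info["torch"] = m.group(2)  # e.g. "2.7.0"
    return info

info = parse_wheel_filename(
    "flash_attn-2.7.4.post1+cu128torch2.7.0cxx11abiFALSE-cp312-cp312-win_amd64.whl"
)
print(info["python"], info["cuda"], info["torch"])  # cp312 128 2.7.0
```

If the `cp` tag doesn't match your interpreter, pip will refuse the wheel; if the CUDA/torch tags don't match your environment, pip will install it but imports may fail at runtime.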
Here is the list of tracked packages.
The foundation of everything. Install this first from the official source.
- Official Install Page: https://pytorch.org/get-started/locally/
For convenience, here are direct installation commands for specific versions on Windows with an NVIDIA GPU. For other configurations (CPU, macOS, ROCm), please use the official install page.
This is the recommended version for most users.
| CUDA Version | Pip Install Command |
|---|---|
| CUDA 13.0 | pip install torch torchvision --index-url https://download.pytorch.org/whl/cu130 |
| CUDA 12.8 | pip install torch torchvision --index-url https://download.pytorch.org/whl/cu128 |
| CUDA 12.6 | pip install torch torchvision --index-url https://download.pytorch.org/whl/cu126 |
Previous Stable Version
Use this version if a package you need has not yet been rebuilt against the latest stable release.
PyTorch 2.10

| CUDA Version | Pip Install Command |
|---|---|
| CUDA 13.0 | pip install "torch>=2.10.0.dev,<2.11.0" torchvision --index-url https://download.pytorch.org/whl/cu130 |
| CUDA 12.8 | pip install "torch>=2.10.0.dev,<2.11.0" torchvision --index-url https://download.pytorch.org/whl/cu128 |
| CUDA 12.6 | pip install "torch>=2.10.0.dev,<2.11.0" torchvision --index-url https://download.pytorch.org/whl/cu126 |
PyTorch 2.9

| CUDA Version | Pip Install Command |
|---|---|
| CUDA 13.0 | pip install "torch>=2.9.0.dev,<2.10.0" torchvision --index-url https://download.pytorch.org/whl/cu130 |
| CUDA 12.8 | pip install "torch>=2.9.0.dev,<2.10.0" torchvision --index-url https://download.pytorch.org/whl/cu128 |
| CUDA 12.6 | pip install "torch>=2.9.0.dev,<2.10.0" torchvision --index-url https://download.pytorch.org/whl/cu126 |
PyTorch 2.8

| CUDA Version | Pip Install Command |
|---|---|
| CUDA 12.9 | pip install "torch>=2.8.0.dev,<2.9.0" torchvision --index-url https://download.pytorch.org/whl/cu129 |
| CUDA 12.8 | pip install "torch>=2.8.0.dev,<2.9.0" torchvision --index-url https://download.pytorch.org/whl/cu128 |
| CUDA 12.6 | pip install "torch>=2.8.0.dev,<2.9.0" torchvision --index-url https://download.pytorch.org/whl/cu126 |
PyTorch 2.7.1

| CUDA Version | Pip Install Command |
|---|---|
| CUDA 12.8 | pip install torch==2.7.1 torchvision==0.22.1 torchaudio==2.7.1 --index-url https://download.pytorch.org/whl/cu128 |
| CUDA 12.6 | pip install torch==2.7.1 torchvision==0.22.1 torchaudio==2.7.1 --index-url https://download.pytorch.org/whl/cu126 |
| CUDA 11.8 | pip install torch==2.7.1 torchvision==0.22.1 torchaudio==2.7.1 --index-url https://download.pytorch.org/whl/cu118 |
| CPU only | pip install torch==2.7.1 torchvision==0.22.1 torchaudio==2.7.1 --index-url https://download.pytorch.org/whl/cpu |
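The index URLs in the tables above follow a mechanical pattern: `cu` plus the CUDA version with the dot removed (12.8 becomes `cu128`), or `cpu` for CPU-only builds, with nightly builds under a `nightly` path. A small illustrative helper (not part of any official tooling) makes the pattern explicit:

```python
def pytorch_index_url(cuda_version, nightly=False):
    """Build the download.pytorch.org index URL for a given CUDA version.

    cuda_version: a string like "12.8", or None for the CPU-only index.
    """
    channel = "whl/nightly" if nightly else "whl"
    suffix = "cpu" if cuda_version is None else "cu" + cuda_version.replace(".", "")
    return f"https://download.pytorch.org/{channel}/{suffix}"

print(pytorch_index_url("12.8"))
# https://download.pytorch.org/whl/cu128
```

Pass the result to pip's `--index-url` flag exactly as in the tables above.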
Use these for access to the latest features, but expect potential instability.
PyTorch 2.12 (Nightly)
| CUDA Version | Pip Install Command |
|---|---|
| CUDA 13.0 | pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu130 |
| CUDA 12.8 | pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu128 |
| CUDA 12.6 | pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu126 |
---
| Package Version | PyTorch Ver | Python Ver | CUDA Ver | Download Link |
|---|---|---|---|---|
| 2.11.0a0 | 2.12.0 | 3.14 | 13.0 | Link |
| 2.11.0a0 | 2.12.0 | 3.13 | 13.0 | Link |
| 2.11.0a0 | 2.11.0 | 3.14 | 13.0 | Link |
| 2.11.0a0 | 2.11.0 | 3.13 | 13.0 | Link |
| 2.11.0a0 | 2.10.0 | 3.13 | 13.0 | Link |
| 2.11.0a0 | 2.10.0 | 3.12 | 13.0 | Link |
| 2.11.0a0 | 2.10.0 | 3.13 | 12.8 | Link |
| 2.8.0a0 | 2.9.0 | 3.12 | 12.8 | Link |
| 2.8.0a0 | 2.9.0 | 3.12 | 12.8 | Link |
```shell
# Torchcodec
pip install torchcodec
```

---
High-performance attention implementation.
| Package Version | PyTorch Ver | Python Ver | CUDA Ver | CXX11 ABI | Download Link |
|---|---|---|---|---|---|
| 2.8.4 | 2.12.0 | 3.14 | 13.0 | ✓ | Link |
| 2.8.4 | 2.12.0 | 3.13 | 13.0 | ✓ | Link |
| 2.8.4 | 2.11.0 | 3.14 | 13.0 | ✓ | Link |
| 2.8.4 | 2.11.0 | 3.13 | 13.0 | ✓ | Link |
| 2.8.3 | 2.11.0 | 3.13 | 13.0 | ✓ | Link |
| 2.8.3 | 2.11.0 | 3.12 | 13.0 | ✓ | Link |
| 2.8.3 | 2.10.0 | 3.13 | 13.0 | ✓ | Link |
| 2.8.3 | 2.10.0 | 3.13 | 13.0 | ✓ | Link |
| 2.8.3 | 2.10.0 | 3.12 | 13.0 | ✓ | Link |
| 2.8.3 | 2.10.0 | 3.12 | 13.0 | ✓ | Link |
| 2.8.3 | 2.10.0 | 3.13 | 12.8 | ✓ | Link |
| 2.8.3 | 2.9.1 | 3.13 | 13.0 | ✓ | Link |
| 2.8.3 | 2.9.1 | 3.12 | 13.0 | ✓ | Link |
| 2.8.3 | 2.9.1 | 3.13 | 12.8 | ✓ | Link |
| 2.8.3 | 2.9.0 | 3.13 | 13.0 | ✓ | Link |
| 2.8.3 | 2.9.0 | 3.12 | 13.0 | ✓ | Link |
| 2.8.3 | 2.9.0 | 3.13 | 12.9 | ✓ | Link |
| 2.8.3 | 2.9.0 | 3.12 | 12.8 | ✓ | Link |
| 2.8.3 | 2.8.0 | 3.12 | 12.8 | ✓ | Link |
| 2.8.2 | 2.9.0 | 3.12 | 12.8 | ✓ | Link |
| 2.8.2 | 2.8.0 | 3.12 | 12.8 | ✓ | Link |
| 2.8.2 | 2.8.0 | 3.11 | 12.8 | ✓ | Link |
| 2.8.2 | 2.8.0 | 3.10 | 12.8 | ✓ | Link |
| 2.8.2 | 2.7.0 | 3.12 | 12.8 | ✗ | Link |
| 2.8.2 | 2.7.0 | 3.11 | 12.8 | ✗ | Link |
| 2.8.2 | 2.7.0 | 3.10 | 12.8 | ✗ | Link |
| 2.8.1 | 2.8.0 | 3.12 | 12.8 | ✓ | Link |
| 2.8.0.post2 | 2.8.0 | 3.12 | 12.8 | ✓ | Link |
| 2.7.4.post1 | 2.8.0 | 3.12 | 12.8 | ✓ | Link |
| 2.7.4.post1 | 2.8.0 | 3.10 | 12.8 | ✓ | Link |
| 2.7.4.post1 | 2.7.0 | 3.12 | 12.8 | ✗ | Link |
| 2.7.4.post1 | 2.7.0 | 3.11 | 12.8 | ✗ | Link |
| 2.7.4.post1 | 2.7.0 | 3.10 | 12.8 | ✗ | Link |
| 2.7.4 | 2.8.0 | 3.12 | 12.8 | ✓ | Link |
| 2.7.4 | 2.8.0 | 3.11 | 12.8 | ✓ | Link |
| 2.7.4 | 2.8.0 | 3.10 | 12.8 | ✓ | Link |
| 2.7.4 | 2.7.0 | 3.12 | 12.8 | ✗ | Link |
| 2.7.4 | 2.7.0 | 3.11 | 12.8 | ✗ | Link |
| 2.7.4 | 2.7.0 | 3.10 | 12.8 | ✗ | Link |
| 2.7.4 | 2.6.0 | 3.12 | 12.6 | ✗ | Link |
| 2.7.4 | 2.6.0 | 3.11 | 12.6 | ✗ | Link |
| 2.7.4 | 2.6.0 | 3.10 | 12.6 | ✗ | Link |
| 2.7.4 | 2.6.0 | 3.12 | 12.4 | ✗ | Link |
| 2.7.4 | 2.6.0 | 3.11 | 12.4 | ✗ | Link |
| 2.7.4 | 2.6.0 | 3.10 | 12.4 | ✗ | Link |
---
Next-generation Flash Attention with improved performance and features.
| Package Version | PyTorch Ver | Python Ver | CUDA Ver | CXX11 ABI | Download Link |
|---|---|---|---|---|---|
| 3.0.0 | 2.10 | 3.9+ | 13.0 | ✓ | Link |
| 3.0.0 | 2.10 | 3.9+ | 13.0 | ✓ | Link |
| 3.0.0 | 2.10 | 3.9+ | 12.8 | ✓ | Link |
| 3.0.0 | 2.10 | 3.9+ | 12.8 | ✓ | Link |
| 3.0.0 | 2.9 | 3.9+ | 13.0 | ✓ | Link |
| 3.0.0 | 2.9 | 3.9+ | 12.8 | ✓ | Link |
---
Latest Flash Attention implementation with cutting-edge optimizations.
(No wheels available - package not tracked)
---
Another library for memory-efficient attention and other optimizations.
Note
Official pre-built wheels for xformers are available from the PyTorch index. You can usually install it with `pip install xformers`.
| CUDA Version | Install |
|---|---|
| CUDA 12.6 | pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu126 |
| CUDA 12.8 | pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu128 |
| CUDA 13.0 | pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu130 |
ABI3 version, any Python 3.9-3.12
| Package Version | PyTorch Ver | Python Ver | CUDA Ver | Download Link |
|---|---|---|---|---|
| 0.0.34 | 2.11 | 3.9+ | 13.0 | Link |
| 0.0.34 | 2.10 | 3.9+ | 13.0 | Link |
| 0.0.34 | 2.10 | 3.9+ | 13.0 | Link |
| 0.0.33 | 2.10 | 3.9+ | 13.0 | Link |
| 0.0.33 | 2.9 | 3.9+ | 13.0 | Link |
| 0.0.32.post2 | 2.8.0 | 3.9+ | 12.9 | Link |
| 0.0.32.post2 | 2.8.0 | 3.9+ | 12.8 | Link |
| 0.0.32.post2 | 2.8.0 | 3.9+ | 12.6 | Link |
---
| Package Version | PyTorch Ver | Python Ver | CUDA Ver | Download Link |
|---|---|---|---|---|
| 2.1.1 | 2.8.0 | 3.12 | 12.8 | Link |
| 2.1.1 | 2.7.0 | 3.10 | 12.8 | Link |
| 2.1.1 | 2.6.0 | 3.13 | 12.6 | Link |
| 2.1.1 | 2.6.0 | 3.12 | 12.6 | Link |
| 2.1.1 | 2.6.0 | 3.12 | 12.6 | Link |
| 2.1.1 | 2.6.0 | 3.11 | 12.6 | Link |
| 2.1.1 | 2.6.0 | 3.10 | 12.6 | Link |
| 2.1.1 | 2.6.0 | 3.9 | 12.6 | Link |
| 2.1.1 | 2.5.1 | 3.12 | 12.4 | Link |
| 2.1.1 | 2.5.1 | 3.11 | 12.4 | Link |
| 2.1.1 | 2.5.1 | 3.10 | 12.4 | Link |
| 2.1.1 | 2.5.1 | 3.9 | 12.4 | Link |
---
Note
Only supports CUDA >= 12.8, therefore PyTorch >= 2.7.
| Package Version | PyTorch Ver | Python Ver | CUDA Ver | Download Link |
|---|---|---|---|---|
| 2.2.0.post4 | 2.9.0+ | 3.9+ | 13.0 | Link |
| 2.2.0.post4 | 2.9.0+ | 3.9+ | 12.8 | Link |
| 2.2.0.post3 | 2.10.0 | 3.12 | 13.0 | Link |
| 2.2.0.post3 | 2.10.0 | 3.13 | 12.8 | Link |
| 2.2.0.post3 | 2.10.0 | 3.12 | 12.8 | Link |
| 2.2.0.post3 | 2.9.0 | 3.13 | 13.0 | Link |
| 2.2.0.post3 | 2.9.0 | 3.13 | 12.9 | Link |
| 2.2.0.post3 | 2.9.0 | 3.9+ | 12.9 | Link |
| 2.2.0.post3 | 2.9.0 | 3.13 | 12.8 | Link |
| 2.2.0.post3 | 2.9.0 | 3.9+ | 12.8 | Link |
| 2.2.0.post3 | 2.8.0 | 3.13 | 12.9 | Link |
| 2.2.0.post3 | 2.8.0 | 3.9+ | 12.9 | Link |
| 2.2.0.post3 | 2.8.0 | 3.13 | 12.8 | Link |
| 2.2.0.post3 | 2.8.0 | 3.9+ | 12.8 | Link |
| 2.2.0.post3 | 2.7.1 | 3.9+ | 12.8 | Link |
| 2.2.0.post3 | 2.6.0 | 3.9+ | 12.6 | Link |
| 2.2.0.post3 | 2.5.1 | 3.9+ | 12.4 | Link |
| 2.2.0.post2 | 2.9.0 | 3.9+ | 12.8 | Link |
| 2.2.0.post2 | 2.8.0 | 3.9+ | 12.8 | Link |
| 2.2.0.post2 | 2.7.1 | 3.9+ | 12.8 | Link |
| 2.2.0.post2 | 2.6.0 | 3.9+ | 12.6 | Link |
| 2.2.0.post2 | 2.5.1 | 3.9+ | 12.4 | Link |
| 2.2.0 | 2.8.0 | 3.13 | 12.8 | Link |
| 2.2.0 | 2.8.0 | 3.12 | 12.8 | Link |
| 2.2.0 | 2.8.0 | 3.11 | 12.8 | Link |
| 2.2.0 | 2.8.0 | 3.10 | 12.8 | Link |
| 2.2.0 | 2.8.0 | 3.9 | 12.8 | Link |
| 2.2.0 | 2.7.1 | 3.13 | 12.8 | Link |
| 2.2.0 | 2.7.1 | 3.12 | 12.8 | Link |
| 2.2.0 | 2.7.1 | 3.11 | 12.8 | Link |
| 2.2.0 | 2.7.1 | 3.10 | 12.8 | Link |
| 2.2.0 | 2.7.1 | 3.9 | 12.8 | Link |
| Package Version | PyTorch Ver | Python Ver | CUDA Ver | Download Link |
|---|---|---|---|---|
| 1.0.0 | 2.9.1 | 3.13 | 13.0 | Link |
| 1.0.0 | 2.9.1 | 3.12 | 13.0 | Link |
| 1.0.0 | 2.8.0 | 3.13 | 12.8 | Link |
| 1.0.0 | 2.8.0 | 3.12 | 12.8 | Link |
| 1.0.0 | 2.8.0 | 3.11 | 12.8 | Link |
| 1.0.0 | 2.7.1 | 3.13 | 12.8 | Link |
| 1.0.0 | 2.7.1 | 3.12 | 12.8 | Link |
| 1.0.0 | 2.7.1 | 3.11 | 12.8 | Link |
---
- Official Repo: mit-han-lab/nunchaku
| Package Version | PyTorch Ver | Python Ver | Download Link |
|---|---|---|---|
| 1.2.0 | 2.11 | 3.13 | Link |
| 1.2.0 | 2.11 | 3.12 | Link |
| 1.2.0 | 2.11 | 3.11 | Link |
| 1.2.0 | 2.11 | 3.10 | Link |
| 1.2.0 | 2.9 | 3.13 | Link |
| 1.2.0 | 2.9 | 3.12 | Link |
| 1.2.0 | 2.9 | 3.11 | Link |
| 1.2.0 | 2.9 | 3.10 | Link |
| 1.2.0 | 2.8 | 3.13 | Link |
| 1.2.0 | 2.8 | 3.12 | Link |
| 1.2.0 | 2.8 | 3.11 | Link |
| 1.2.0 | 2.7 | 3.13 | Link |
| 1.2.0 | 2.7 | 3.12 | Link |
| 1.2.0 | 2.7 | 3.11 | Link |
| 1.0.2 | 2.10 | 3.13 | Link |
| 1.0.2 | 2.10 | 3.12 | Link |
| 1.0.2 | 2.10 | 3.11 | Link |
| 1.0.2 | 2.10 | 3.10 | Link |
| 1.0.2 | 2.9 | 3.13 | Link |
| 1.0.2 | 2.9 | 3.12 | Link |
| 1.0.2 | 2.9 | 3.11 | Link |
| 1.0.2 | 2.9 | 3.10 | Link |
| 1.0.2 | 2.8 | 3.13 | Link |
| 1.0.2 | 2.8 | 3.12 | Link |
| 1.0.2 | 2.8 | 3.11 | Link |
| 1.0.2 | 2.8 | 3.10 | Link |
| 1.0.2 | 2.7 | 3.13 | Link |
| 1.0.2 | 2.7 | 3.12 | Link |
| 1.0.2 | 2.7 | 3.11 | Link |
| 1.0.2 | 2.7 | 3.10 | Link |
| 1.0.1 | 2.10 | 3.13 | Link |
| 1.0.1 | 2.10 | 3.12 | Link |
| 1.0.1 | 2.10 | 3.11 | Link |
| 1.0.1 | 2.10 | 3.10 | Link |
| 1.0.1 | 2.9 | 3.13 | Link |
| 1.0.1 | 2.9 | 3.13 | Link |
| 1.0.1 | 2.9 | 3.12 | Link |
| 1.0.1 | 2.9 | 3.12 | Link |
| 1.0.1 | 2.8 | 3.13 | Link |
| 1.0.1 | 2.8 | 3.13 | Link |
| 1.0.1 | 2.8 | 3.12 | Link |
| 1.0.1 | 2.8 | 3.11 | Link |
| 1.0.1 | 2.8 | 3.10 | Link |
| 1.0.1 | 2.7 | 3.13 | Link |
| 1.0.1 | 2.7 | 3.12 | Link |
| 1.0.1 | 2.7 | 3.11 | Link |
| 1.0.1 | 2.7 | 3.10 | Link |
| 1.0.1 | 2.6 | 3.13 | Link |
| 1.0.1 | 2.6 | 3.12 | Link |
| 1.0.1 | 2.6 | 3.11 | Link |
| 1.0.1 | 2.6 | 3.10 | Link |
| 1.0.1 | 2.5 | 3.12 | Link |
| 1.0.1 | 2.5 | 3.11 | Link |
| 1.0.1 | 2.5 | 3.10 | Link |
| 1.0.0 | 2.9 | 3.13 | Link |
| 1.0.0 | 2.9 | 3.12 | Link |
| 1.0.0 | 2.9 | 3.11 | Link |
| 1.0.0 | 2.9 | 3.10 | Link |
| 1.0.0 | 2.8 | 3.13 | Link |
| 1.0.0 | 2.8 | 3.12 | Link |
| 1.0.0 | 2.8 | 3.11 | Link |
| 1.0.0 | 2.8 | 3.10 | Link |
| 1.0.0 | 2.7 | 3.13 | Link |
| 1.0.0 | 2.7 | 3.12 | Link |
| 1.0.0 | 2.7 | 3.11 | Link |
| 1.0.0 | 2.7 | 3.10 | Link |
| 1.0.0 | 2.6 | 3.13 | Link |
| 1.0.0 | 2.6 | 3.12 | Link |
| 1.0.0 | 2.6 | 3.11 | Link |
| 1.0.0 | 2.6 | 3.10 | Link |
| 1.0.0 | 2.5 | 3.12 | Link |
| 1.0.0 | 2.5 | 3.11 | Link |
| 1.0.0 | 2.5 | 3.10 | Link |
| 0.3.2 | 2.9 | 3.12 | Link |
| 0.3.2 | 2.8 | 3.12 | Link |
| 0.3.2 | 2.8 | 3.11 | Link |
| 0.3.2 | 2.8 | 3.10 | Link |
| 0.3.2 | 2.7 | 3.12 | Link |
| 0.3.2 | 2.7 | 3.11 | Link |
| 0.3.2 | 2.7 | 3.10 | Link |
| 0.3.2 | 2.6 | 3.12 | Link |
| 0.3.2 | 2.6 | 3.11 | Link |
| 0.3.2 | 2.6 | 3.10 | Link |
| 0.3.2 | 2.5 | 3.12 | Link |
| 0.3.2 | 2.5 | 3.11 | Link |
| 0.3.2 | 2.5 | 3.10 | Link |
---
Neighborhood Attention Transformer.
| Package Version | PyTorch Ver | Python Ver | CUDA Ver | Download Link |
|---|---|---|---|---|
| 0.17.5 | 2.7.0 | 3.12 | 12.8 | Link |
| 0.17.5 | 2.7.0 | 3.11 | 12.8 | Link |
| 0.17.5 | 2.7.0 | 3.10 | 12.8 | Link |
| 0.17.5 | 2.6.0 | 3.12 | 12.6 | Link |
| 0.17.5 | 2.6.0 | 3.11 | 12.6 | Link |
| 0.17.5 | 2.6.0 | 3.10 | 12.6 | Link |
| 0.17.3 | 2.5.1 | 3.12 | 12.4 | Link |
| 0.17.3 | 2.5.1 | 3.11 | 12.4 | Link |
| 0.17.3 | 2.5.1 | 3.10 | 12.4 | Link |
| 0.17.3 | 2.5.0 | 3.12 | 12.4 | Link |
| 0.17.3 | 2.5.0 | 3.11 | 12.4 | Link |
| 0.17.3 | 2.5.0 | 3.10 | 12.4 | Link |
| 0.17.3 | 2.4.1 | 3.12 | 12.4 | Link |
| 0.17.3 | 2.4.1 | 3.11 | 12.4 | Link |
| 0.17.3 | 2.4.1 | 3.10 | 12.4 | Link |
| 0.17.3 | 2.4.0 | 3.12 | 12.4 | Link |
| 0.17.3 | 2.4.0 | 3.11 | 12.4 | Link |
| 0.17.3 | 2.4.0 | 3.10 | 12.4 | Link |
---
Triton is a language and compiler for writing highly efficient custom deep-learning primitives. Not officially supported on Windows, but a fork provides pre-built wheels.
Supported GPUs:
Note
Different GPU architectures require different Triton versions due to compute capability support.
| Triton Version | Supported GPUs | Compute Capability |
|---|---|---|
| 3.6.x | RTX 50xx (Blackwell), RTX 40xx, Ada Lovelace, Hopper | SM 8.9, 9.0, 10.0 |
| 3.5.x | RTX 30xx, 40xx, Ada Lovelace, Hopper | SM 8.0, 8.9, 9.0 |
| 3.4.x | RTX 20xx, 30xx, 40xx, Ada Lovelace, Hopper | SM 7.5, 8.0, 8.9, 9.0 |
| <= 3.2.x | GTX/RTX 16xx, RTX 20xx, 30xx, 40xx, Ada Lovelace, Hopper | SM 7.0, 7.5, 8.0, 8.9, 9.0 |
Installation:
| Package Version | PyTorch Ver | Compute Capability | Install |
|---|---|---|---|
| 3.6.x | >= 2.9 | SM 8.9+ | pip install -U "triton-windows<3.7" |
| 3.5.x | >= 2.9 | SM 8.0+ | pip install -U "triton-windows<3.6" |
| 3.4.x | >= 2.8 | SM 7.5+ | pip install -U "triton-windows<3.5" |
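The pin selection above is driven by the GPU's compute capability (on a live system you can query it with `torch.cuda.get_device_capability()`). As a minimal sketch, the `triton_windows_pin` helper below (a hypothetical name, and it only encodes the GPU dimension, not the PyTorch-version requirement) maps a capability to the matching version constraint; the `<3.3` fallback for very old cards is an assumption, since the table above gives no explicit pin for them:

```python
def triton_windows_pin(sm_major: int, sm_minor: int) -> str:
    """Pick a triton-windows pip constraint from the GPU compute capability."""
    sm = sm_major * 10 + sm_minor
    if sm >= 89:                     # SM 8.9+ (Ada Lovelace and newer)
        return "triton-windows<3.7"
    if sm >= 80:                     # SM 8.0+ (Ampere, RTX 30xx)
        return "triton-windows<3.6"
    if sm >= 75:                     # SM 7.5+ (Turing, RTX 20xx)
        return "triton-windows<3.5"
    return "triton-windows<3.3"      # assumption: pin older cards to <= 3.2.x

print(triton_windows_pin(8, 9))  # RTX 40xx -> triton-windows<3.7
```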
Python libs:
Important
Triton requires additional Python development libraries for building CUDA kernels. Download the package matching your Python version, extract the ZIP file, and copy the include and libs folders to your Python installation directory.
| Python Ver | Download |
|---|---|
| 3.13 | Link |
| 3.12 | Link |
| 3.11 | Link |
| 3.10 | Link |
| 3.9 | Link |
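The extract-and-copy step described above can be scripted. The sketch below assumes the downloaded ZIP contains top-level `include` and `libs` folders (the function name is illustrative) and extracts only those into the Python installation directory, defaulting to `sys.prefix`:

```python
import sys
import zipfile
from pathlib import Path

def install_python_devlibs(zip_path, python_dir=None):
    """Extract the 'include' and 'libs' folders from a dev-libs ZIP
    into the Python installation directory (sketch; assumes the ZIP
    stores those folders at its top level)."""
    target = Path(python_dir or sys.prefix)
    with zipfile.ZipFile(zip_path) as zf:
        for member in zf.namelist():
            # Copy only the two folders Triton needs; skip anything else.
            if member.startswith(("include/", "libs/")):
                zf.extract(member, target)

# Example (hypothetical filename):
# install_python_devlibs("python_3.12_include_libs.zip")
```

Run it from the interpreter you use for ComfyUI so `sys.prefix` points at the right installation.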
---
A lightweight wrapper around CUDA custom functions, particularly for 8-bit optimizers, matrix multiplication (LLM.int8()), and quantization functions.
---
| Package Version | PyTorch Ver | CUDA Ver | Download Link |
|---|---|---|---|
| 0.1.0.post1 | 2.8.0 | 12.8 | Link |
| 0.1.0.post1 | 2.7.1 | 12.8 | Link |
---
| Package Version | PyTorch Ver | Python Ver | CUDA Ver | Download Link |
|---|---|---|---|---|
| 0.0.2.post1 | 2.11 | 3.13 | 13.0 | Link |
| 0.0.2.post1 | 2.10 | 3.13 | 13.0 | Link |
| 0.0.2.post1 | 2.9.1 | 3.13 | 13.0 | Link |
---
- A deep learning optimization library
- Official Repo: https://github.com/deepspeedai/DeepSpeed
| Package Version | Python Ver | Download Link |
|---|---|---|
| 0.18.6 | 3.13 | Link |
---
- Facebook AI Research Sequence-to-Sequence Toolkit
- Official Repo: https://github.com/facebookresearch/fairseq
| Package Version | Python Ver | Download Link |
|---|---|---|
| 0.12.2 | 3.13 | Link |
---
| Package Version | PyTorch Ver | Python Ver | CUDA Ver | CXX11 ABI | Download Link |
|---|---|---|---|---|---|
| 1.6.1 | 2.11.0 | 3.14 | 13.0 | ✓ | Link |
| 1.6.1 | 2.11.0 | 3.13 | 13.0 | ✓ | Link |
| 1.6.1 | 2.10.0 | 3.13 | 13.0 | ✓ | Link |
---
All wheel information in this repository is managed in the wheels.json file, which serves as the single source of truth. The tables in this README are automatically generated from this file.
This provides a stable, structured JSON endpoint for any external tool or application that needs to access this data without parsing Markdown.
You can access the raw JSON file directly via the following URL:
https://raw.githubusercontent.com/wildminder/AI-windows-whl/main/wheels.json
Example using curl:

```shell
curl -L -o wheels.json https://raw.githubusercontent.com/wildminder/AI-windows-whl/main/wheels.json
```

The file contains a list of packages, each with its metadata and an array of wheels; each wheel object contains version details and a direct download url.
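Once downloaded, the JSON can be filtered programmatically. The sketch below works on a small hypothetical sample: the field names (`packages`, `wheels`, `torch`, `python`, `cuda`, `url`) are assumptions for illustration, so check them against the real wheels.json before relying on this:

```python
import json

# Hypothetical sample mirroring the described shape; the exact field
# names are assumptions, not taken from the real wheels.json.
sample = json.loads("""
{
  "packages": [
    {"name": "flash-attn",
     "wheels": [
       {"version": "2.8.2", "torch": "2.8.0", "python": "3.12",
        "cuda": "12.8", "url": "https://example.com/flash_attn.whl"}
     ]}
  ]
}
""")

def find_wheels(data, package, python=None, cuda=None):
    """Return the wheels of one package matching the given Python/CUDA versions."""
    for pkg in data["packages"]:
        if pkg["name"] == package:
            return [w for w in pkg["wheels"]
                    if (python is None or w["python"] == python)
                    and (cuda is None or w["cuda"] == cuda)]
    return []

print(find_wheels(sample, "flash-attn", python="3.12", cuda="12.8"))
```

The same filter logic works on the real file after `data = json.load(open("wheels.json"))`, adjusted to the actual schema.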
---
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have found a new pre-built wheel or a reliable source, please fork the repo and create a pull request, or simply open an issue with the link.
This repository is simply a collection of links. Huge thanks to the individuals and groups who do the hard work of building and hosting these wheels for the community: