
How to Install PyTorch with uv

PyTorch publishes different wheel builds for CPU, CUDA, ROCm, and XPU on separate package indexes (see Why Installing GPU Python Packages Is So Complicated for background). Getting the right build requires telling uv which index to use.

Default behavior without configuration

Running uv add torch torchvision with no extra configuration installs from PyPI. PyPI carries CPU-only wheels for Windows and macOS, and CUDA 12.8 wheels for Linux (as of PyTorch 2.9.1). For projects that only need CPU support on Windows/macOS and GPU support on Linux, this default works without any additional setup.
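To confirm which build actually landed in the environment, inspect torch.__version__: wheels from the download.pytorch.org indexes carry a local version tag such as +cpu or +cu128, while PyPI forbids local version tags, so a bare version means the PyPI default build. A small sketch (the wheel_backend helper and its "pypi-default" label are invented here for illustration):

```python
def wheel_backend(version: str) -> str:
    """Infer the PyTorch backend from a version string.

    Wheels on download.pytorch.org carry a local version tag such as
    "+cpu" or "+cu128"; PyPI disallows local versions, so a bare
    version string indicates the default PyPI build.
    """
    _, _, local = version.partition("+")
    return local or "pypi-default"
```

Running it against torch.__version__ (e.g. via uv run python) shows which index the installed wheel came from.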

Configure a CUDA backend in your project

To install a specific CUDA build across platforms, define a PyTorch index in pyproject.toml and route packages to it with tool.uv.sources. This example configures CUDA 12.8 on Linux and Windows while letting macOS fall back to PyPI (since CUDA builds are not available for macOS):

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"
explicit = true

[tool.uv.sources]
torch = [
  { index = "pytorch-cu128", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
]
torchvision = [
  { index = "pytorch-cu128", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
]

Setting explicit = true prevents uv from searching this index for unrelated packages. Every PyTorch-related package that needs a GPU build must be listed in [tool.uv.sources]. If torchvision or torchaudio is omitted, those packages will resolve from PyPI instead of the CUDA index.
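For instance, if the project also depends on torchaudio, it needs a matching entry with the same index and markers:

```toml
torchaudio = [
  { index = "pytorch-cu128", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
]
```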

Available backends follow the URL pattern https://download.pytorch.org/whl/{backend}, where {backend} is one of: cpu, cu118, cu126, cu128, cu130, rocm6.4, xpu.
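In code, that URL pattern is a one-line template. The helper below is purely illustrative (pytorch_index_url is not part of uv), using the backend names listed above:

```python
# Backend names accepted on the PyTorch wheel indexes (from the list above).
PYTORCH_BACKENDS = {"cpu", "cu118", "cu126", "cu128", "cu130", "rocm6.4", "xpu"}


def pytorch_index_url(backend: str) -> str:
    """Return the wheel index URL for a given PyTorch backend name."""
    if backend not in PYTORCH_BACKENDS:
        raise ValueError(f"unknown backend: {backend}")
    return f"https://download.pytorch.org/whl/{backend}"
```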

Support multiple backends with extras

Projects that need to work across different hardware can use optional dependency groups to let users choose their backend at install time:

[project.optional-dependencies]
cpu = ["torch>=2.9.1", "torchvision>=0.24.1"]
cu128 = ["torch>=2.9.1", "torchvision>=0.24.1"]

[tool.uv]
conflicts = [[{ extra = "cpu" }, { extra = "cu128" }]]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"
explicit = true

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-cu128", extra = "cu128" },
]
torchvision = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-cu128", extra = "cu128" },
]

Users then install with the extra that matches their hardware:

uv sync --extra cpu
# or
uv sync --extra cu128

The conflicts entry declares the extras mutually exclusive, so requesting both at once (e.g. uv sync --extra cpu --extra cu128) fails at resolution time instead of producing a broken environment.

Quick installs with uv pip and --torch-backend

The uv pip interface offers a --torch-backend flag that selects the correct PyTorch index without any pyproject.toml configuration:

uv pip install torch --torch-backend=cu128

Setting --torch-backend=auto makes uv detect the available GPU hardware (CUDA driver version, AMD GPU, or Intel GPU) and pick the appropriate backend. If no GPU is detected, it falls back to CPU.

uv pip install torch --torch-backend=auto
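As a rough sketch of what auto-selection means, the idea is to pick the newest CUDA backend whose toolkit requirement the detected driver satisfies, else fall back to CPU. This is an illustration only, not uv's actual implementation (which also probes AMD and Intel GPUs); pick_backend and its version table are assumptions:

```python
# CUDA toolkit version required by each CUDA backend on the PyTorch indexes.
CUDA_BACKENDS = {
    "cu118": (11, 8),
    "cu126": (12, 6),
    "cu128": (12, 8),
    "cu130": (13, 0),
}


def pick_backend(driver_cuda: "tuple[int, int] | None") -> str:
    """Return the newest backend whose CUDA requirement the driver meets."""
    if driver_cuda is None:  # no NVIDIA GPU detected
        return "cpu"
    compatible = [b for b, v in CUDA_BACKENDS.items() if v <= driver_cuda]
    return max(compatible, key=CUDA_BACKENDS.get) if compatible else "cpu"
```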

The UV_TORCH_BACKEND environment variable works the same way:

UV_TORCH_BACKEND=auto uv pip install torch

Valid values are: auto, cpu, cu118, cu126, cu128, cu130, rocm6.4, xpu.

Important

--torch-backend is only available in the uv pip interface. It does not work with uv lock, uv sync, or uv run. For project-level workflows, use the pyproject.toml configuration described above.

Version requirements

PyTorch index configuration requires uv 0.5.3 or later; the --torch-backend flag arrived in a later release, so update uv if the flag is not recognized. Run uv self update or see How to Upgrade uv to get the latest version.
