
uv vs pixi vs conda for Scientific Python

Every scientific Python project faces the same dependency question: can PyPI provide everything this project needs, or does it require packages that only exist on conda channels? The answer determines which tool fits best.

uv installs from PyPI. pixi installs from conda-forge and PyPI together. conda installs from conda channels. Each tool makes different tradeoffs around speed, scope, and maturity, but the package source question is the one that narrows the field.

What each tool can install

uv resolves packages from PyPI, plus Git URLs, local paths, and private indexes. It cannot install non-Python dependencies like the CUDA toolkit, GDAL, or HDF5. For Python packages with good wheel coverage (NumPy, pandas, scikit-learn, SciPy, PyTorch), uv works well. It is the fastest resolver and installer available, and its pyproject.toml configuration follows PEP 621, making projects portable to other standards-compliant tools.

pixi resolves conda-forge packages and PyPI packages together in a single lockfile. It uses uv internally for PyPI resolution. Because conda-forge distributes native libraries alongside their Python bindings, pixi can install CUDA toolkits, GDAL, HDF5, compilers, and non-Python languages in the same project environment. Project environments live in a .pixi/ directory, with no global conda activate step. Pixi also supports multiple named environments from a single manifest, so a project can define separate gpu and cpu configurations.

conda (including mamba and micromamba) has provided language-agnostic package management since 2012. It installs Python, R, Julia, C++ libraries, CUDA toolkits, and compilers. The conda-forge community maintains over 30,000 packages. Conda uses a global environment model (conda create -n myenv, conda activate myenv), and its environment.yml format is widely understood across research teams and enterprise infrastructure.
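As an illustration, a typical environment.yml for a mixed native/Python stack might look like this (the project name and version pins are placeholders, not a recommendation):

```yaml
# environment.yml -- conda's declarative environment format
name: geo-analysis            # hypothetical project name
channels:
  - conda-forge               # community channel with 30,000+ packages
dependencies:
  - python=3.12
  - gdal                      # native C library and Python bindings resolved together
  - numpy
  - pip
  - pip:
      - some-pypi-only-package   # placeholder: fallback for packages absent from conda channels
```

Recreating the environment is then `conda env create -f environment.yml` on any machine, which is why the format travels well across teams.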

The decision axis: do you need conda packages?

If all dependencies are available on PyPI with good wheel support, uv is the simpler choice. Standard pyproject.toml, a cross-platform uv.lock, and broad community adoption make it the path of least resistance for pure Python projects.
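For that pure-PyPI case the setup is minimal. A sketch of a standards-compliant pyproject.toml (project name and version pins are illustrative):

```toml
# pyproject.toml -- standard PEP 621 metadata, portable to any compliant tool
[project]
name = "analysis"             # hypothetical project name
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "numpy>=1.26",
    "pandas>=2.0",
    "scikit-learn",
]
```

`uv lock` then produces the cross-platform uv.lock, and `uv run` executes commands inside the project-local .venv without manual activation.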

If the project needs native libraries that PyPI cannot provide, the choice is between pixi and conda. Both resolve conda-forge packages. The difference is workflow: pixi provides project-local environments, a built-in task runner, and a single lockfile covering conda and PyPI dependencies. Conda provides a global environment model with more than a decade of battle-tested infrastructure behind it.

The relationship between pixi and conda is generational rather than adversarial. Pixi comes from prefix.dev, the company behind the mamba solver and conda-forge tooling. It is the modern, project-local evolution of the conda workflow. Conda is the established workhorse that many teams have years of production infrastructure built around.

Scenario comparisons

PyTorch (no RAPIDS, no geospatial)

With uv, configure index routing in pyproject.toml to get CUDA builds on Linux and CPU builds elsewhere. PyTorch wheels bundle their own CUDA runtime, which makes installs larger but self-contained. See How to Install PyTorch with uv for a full walkthrough.

```toml
# pyproject.toml (uv)
[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"
explicit = true

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[tool.uv.sources]
torch = [
  { index = "pytorch-cu128", marker = "sys_platform == 'linux'" },
  { index = "pytorch-cpu", marker = "sys_platform != 'linux'" },
]
```
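The marker strings in that configuration are standard PEP 508 environment markers, not uv-specific syntax. A quick sketch with the packaging library (assuming it is installed) shows how such a marker evaluates against a target platform:

```python
from packaging.markers import Marker

# The same markers used in the uv index routing above.
cuda_marker = Marker("sys_platform == 'linux'")
cpu_marker = Marker("sys_platform != 'linux'")

# Evaluate against an explicit environment rather than the current interpreter,
# which is effectively what a resolver does per target platform.
print(cuda_marker.evaluate({"sys_platform": "linux"}))   # True: Linux gets the CUDA index
print(cpu_marker.evaluate({"sys_platform": "darwin"}))   # True: macOS gets the CPU index
```

Because the markers partition platforms cleanly, exactly one index applies to torch on any given system.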

With pixi, PyTorch is available from conda-forge with CUDA as a separate, shared dependency. The solver matches CUDA versions across packages automatically:

```toml
# pixi.toml
[dependencies]
pytorch-gpu = "*"
cuda-version = "12.6.*"
```

With conda, the traditional command was conda install pytorch pytorch-cuda=12.4 -c pytorch -c nvidia. PyTorch 2.5 was the last release published to the pytorch Anaconda channel. conda-forge now carries PyTorch builds, so conda install -c conda-forge pytorch is the path forward for conda users.

RAPIDS (cuDF, cuML)

uv can install RAPIDS via --extra-index-url=https://pypi.nvidia.com with -cu12 or -cu13 suffixed package names. When combining RAPIDS with PyTorch, dependency conflicts between NVIDIA’s index and PyTorch’s index are possible. See How to Install RAPIDS with uv for details.
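A sketch of that configuration in pyproject.toml (the index URL is NVIDIA's published one; the project name and unpinned dependencies are illustrative):

```toml
# pyproject.toml -- route RAPIDS packages through NVIDIA's index
[project]
name = "rapids-demo"              # hypothetical project name
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "cudf-cu12",                  # -cu12 suffix selects the CUDA 12 build
    "cuml-cu12",
]

[[tool.uv.index]]
name = "nvidia"
url = "https://pypi.nvidia.com"
```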

pixi and conda handle RAPIDS well. RAPIDS was originally conda-only, and conda-forge RAPIDS packages resolve together with the CUDA toolkit in a single solver pass. No index juggling required.

Geospatial (GDAL, PROJ, GEOS)

uv has a hard time here. GDAL’s Python bindings on PyPI require a system-installed GDAL library, and the versions must match exactly. This pushes the problem outside the Python package manager.

pixi and conda install the GDAL C library and its Python bindings together. pixi add gdal or conda install gdal handles the native library and the Python wrapper as a single resolved dependency.

Mixed GPU stacks (PyTorch + RAPIDS + custom CUDA)

uv requires separate index configurations for PyTorch and RAPIDS, each with its own URL and package-naming conventions. Configuration gets complex, and dependency conflicts between indexes are a real risk.

pixi and conda resolve CUDA versions across all packages in one solver pass. Pixi’s multi-environment features allow defining GPU and CPU variants in a single manifest. Conda provides the same solver advantage but requires separate tools like conda-lock for cross-platform lockfiles.
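A sketch of that multi-environment setup in a pixi manifest (feature and environment names are illustrative; pytorch-gpu and pytorch-cpu are the conda-forge package names):

```toml
# pixi.toml -- one manifest, two independently resolved environments
[project]
name = "gpu-project"              # hypothetical project name
channels = ["conda-forge"]
platforms = ["linux-64"]

[dependencies]
python = "3.12.*"
numpy = "*"

[feature.gpu.dependencies]
pytorch-gpu = "*"
cuda-version = "12.6.*"           # solver matches CUDA versions across packages

[feature.cpu.dependencies]
pytorch-cpu = "*"

[environments]
gpu = ["gpu"]
cpu = ["cpu"]
```

`pixi run -e gpu python train.py` then executes in the GPU environment, while CI machines without GPUs can use `-e cpu` from the same lockfile.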

For more on why GPU package installation is complicated, the handbook has a dedicated explanation.

Maturity and ecosystem considerations

These three tools sit at different points on the maturity spectrum.

conda has been in production since 2012. Research labs, pharmaceutical companies, and financial institutions have built CI/CD pipelines, deployment workflows, and team onboarding processes around it. That institutional knowledge has real value. Migrating away from conda means rewriting infrastructure, retraining teams, and accepting risk during the transition. For organizations where conda works, “it works” is a legitimate reason to stay.

conda’s main friction points are speed (the default solver is slower than pixi or uv), the global environment model (which can lead to environment pollution), and Anaconda’s licensing terms for organizations with more than 200 employees. Using Miniforge with conda-forge avoids the licensing issue. Mamba and micromamba address the speed problem as drop-in replacements for conda’s solver.

pixi is newer and under active development. It brings project-local environments, fast resolution, and a unified lockfile to the conda ecosystem. The tradeoff is that pixi.toml configuration is not portable to pip, uv, or poetry (though using pyproject.toml with [tool.pixi] keeps standard [project] metadata portable). Pixi’s adoption is concentrated in scientific computing and ML communities. For more on how pixi compares to uv, see When should I choose pixi over uv?

uv has the broadest adoption across the Python community, the most documentation, and the strongest CI/editor integration. Its limitation is scope: it manages Python packages only. When that scope is sufficient, uv is the most streamlined option. When it is not, uv cannot stretch to cover the gap.

Both pixi and uv are backed by venture capital (prefix.dev and Astral, respectively). Both are pre-1.0. Teams adopting either tool should be comfortable with that maturity profile. For a broader discussion of conda’s strengths, see Why should I choose conda?

Note

conda-forge packages for compiled code tend to be smaller than their PyPI wheel equivalents. Wheels statically bundle compiled libraries, while conda packages dynamically link against shared libraries managed by the solver. For large environments with many compiled packages, this size difference adds up.

Decision guide

| Consideration | uv | pixi | conda |
|---|---|---|---|
| Package sources | PyPI (+ Git, URL, path) | conda-forge + PyPI | conda channels |
| Native library support | No | Yes | Yes |
| Project-local environments | Yes (.venv) | Yes (.pixi) | No (global by default) |
| Cross-platform lockfile | Built in (uv.lock) | Built in (pixi.lock) | Separate tool (conda-lock) |
| Resolution speed | Fastest | Fast (~3x faster than micromamba) | Slowest (mamba helps) |
| Config portability | Standard pyproject.toml | pixi.toml or [tool.pixi] | environment.yml |
| Maturity | Pre-1.0 (2024) | Pre-1.0 (2023) | Mature (since 2012) |
| Community size | Largest (general Python) | Growing (scientific niche) | Large (scientific/enterprise) |
| CUDA handling | Index URL routing | First-class dependency | First-class dependency |
| PyTorch install | Index config in pyproject.toml | pytorch-gpu = "*" | conda install pytorch |

Start with uv if your dependencies all live on PyPI. If you hit a wall with native libraries, consider pixi for new projects or conda if your team already has conda infrastructure. The two conda-based tools share the same package ecosystem, so the choice between them is about workflow preference and organizational investment, not package availability.
