Installation
============

- **PyPI**: ``pip install torch-sla`` (simplest installation)
- **GitHub**: clone and install for development
- **Optional backends**: cuDSS and Eigen for enhanced performance

Using pip
---------
To install the latest release:

.. code-block:: bash

   pip install torch-sla

Or install from GitHub for the latest development version:

.. code-block:: bash

   pip install git+https://github.com/walkerchi/torch-sla.git
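
To sanity-check the install, you can probe for the module with the standard library. Note that the import name ``torch_sla`` is an assumption inferred from the package name and may differ:

.. code-block:: python

   import importlib.util

   # NOTE: the import name "torch_sla" is an assumption based on the
   # PyPI package name "torch-sla"; adjust if the module name differs.
   spec = importlib.util.find_spec("torch_sla")
   print("torch-sla importable:", spec is not None)

This avoids an ``ImportError`` when the package is absent, so the same check works before and after installation.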
Optional Dependencies
---------------------
For additional backends and features (the extras are quoted so the square brackets survive shells such as zsh):

.. code-block:: bash

   # With cuDSS support (requires CUDA 12+)
   pip install "torch-sla[cuda]"

   # Full installation with all optional dependencies
   pip install "torch-sla[all]"

   # For development
   pip install "torch-sla[dev]"
.. note::

   cuDSS is now on PyPI. The CUDA backends use ``nvmath-python`` (for cuDSS)
   and ``cupy-cuda12x`` (for CuPy); installing ``torch-sla[cuda]`` installs
   both automatically.
Backend Requirements
--------------------
.. list-table::
   :widths: 20 30 50
   :header-rows: 1

   * - Backend
     - Installation
     - Notes
   * - ``scipy``
     - ``pip install scipy``
     - Default; always available
   * - ``pytorch``
     - Included with PyTorch
     - Native CG/BiCGStab solvers
   * - ``cupy``
     - ``pip install cupy-cuda12x``
     - GPU direct and iterative solvers via ``cupyx.scipy``
   * - ``cudss``
     - ``pip install nvmath-python[cu12]``
     - Best for medium-scale GPU problems (10K-2M DOF)
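
To see which of the backend dependencies from the table above are present in your environment, a quick stdlib check can help. The mapping from PyPI package name to import name here is an assumption (e.g. ``cupy-cuda12x`` is assumed to install as ``cupy``, and ``nvmath-python`` as ``nvmath``):

.. code-block:: python

   import importlib.util

   # Import names for each optional backend dependency; the PyPI-name ->
   # import-name mapping is an assumption (e.g. cupy-cuda12x -> cupy).
   backend_modules = {
       "scipy": "scipy",
       "pytorch": "torch",
       "cupy": "cupy",
       "cudss (nvmath-python)": "nvmath",
   }

   for backend, module in backend_modules.items():
       found = importlib.util.find_spec(module) is not None
       print(f"{backend:24s} {'available' if found else 'missing'}")

Running this prints one line per backend, which makes it easy to confirm that an extras install (e.g. ``torch-sla[cuda]``) actually pulled in its dependencies.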