🚀 The feature, motivation and pitch
It's great that NVIDIA provides wheels for the CUDA-related packages and we no longer need conda/mamba
to install PyTorch, but those packages take up space when you install PyTorch in multiple environments.
It would be nice if you could install a PyTorch build from PyPI that finds and uses your locally installed CUDA toolkit.
For example, cupy provides pip install cupy-cuda12x, and jax provides pip install "jax[cuda12_local]".
As far as I'm aware, plain pip install tensorflow also appears to use the GPU even if I don't specify
pip install "tensorflow[and-cuda]", which would pull in the nvidia CUDA wheels as well.
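For illustration, here is a minimal sketch of how such a local-CUDA wheel might locate the toolkit at import time. The function name and search order are hypothetical (loosely modeled on what cupy and jax do), not an existing PyTorch API:

```python
import os
import shutil
from pathlib import Path


def find_local_cuda():
    """Return the root of a local CUDA toolkit, or None if not found.

    Hypothetical search order: the CUDA_HOME / CUDA_PATH environment
    variables first, then the directory containing nvcc on PATH, then
    the conventional /usr/local/cuda symlink.
    """
    for var in ("CUDA_HOME", "CUDA_PATH"):
        root = os.environ.get(var)
        if root and Path(root).is_dir():
            return Path(root)
    nvcc = shutil.which("nvcc")
    if nvcc:
        # nvcc lives in <root>/bin/nvcc, so go up two levels
        return Path(nvcc).resolve().parent.parent
    default = Path("/usr/local/cuda")
    if default.is_dir():
        return default
    return None
```

A wheel variant along these lines would skip bundling the nvidia-* wheels entirely and instead load libcudart, cuBLAS, etc. from whatever toolkit this lookup finds.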
Please close if this is just not possible in pytorch's case or a duplicate (I didn't see it if it's there).
Alternatives
Just have the disk space available and install the nvidia wheels in every environment separately.
Additional context
No response
cc @seemethere @malfet @osalpekar @atalman @pytorch/pytorch-dev-infra