How to run on RTX 50-series GPU? #2574

Open
quasiblob opened this issue May 10, 2025 · 5 comments

Comments

@quasiblob

Hi.

I remember trying RVC last year, but after upgrading my GPU I can no longer use it.

I downloaded the latest zip version, but it doesn't seem to support Nvidia 50-series GPUs; there is a warning when I try to run go-web.bat.

However, I have no idea whether the zip version's torch can be updated - there seem to be conda- and poetry-related things going on inside the folder, and I barely know anything about those.

I've also tried installing from the repo. Everything goes OK, but I get this warning in the console when I try to run go-web.bat, and the UI dropdowns are not updating like they should, so I can't get this working either.

Any ideas?

(venv) R:\AI_audio\Retrieval-based-Voice-Conversion-WebUI>go-web.bat

(venv) R:\AI_audio\Retrieval-based-Voice-Conversion-WebUI>venv\Scripts\python.exe infer-web.py --pycmd venv\Scripts\python.exe --port 7897
2025-05-10 20:17:23 | INFO | configs.config | Found GPU NVIDIA GeForce RTX 5090
2025-05-10 20:17:23 | INFO | configs.config | Half-precision floating-point: True, device: cuda:0
R:\AI_audio\Retrieval-based-Voice-Conversion-WebUI\venv\lib\site-packages\gradio_client\documentation.py:106: UserWarning: Could not get documentation group for <class 'gradio.mix.Parallel'>: No known documentation group for module 'gradio.mix'
  warnings.warn(f"Could not get documentation group for {cls}: {exc}")
R:\AI_audio\Retrieval-based-Voice-Conversion-WebUI\venv\lib\site-packages\gradio_client\documentation.py:106: UserWarning: Could not get documentation group for <class 'gradio.mix.Series'>: No known documentation group for module 'gradio.mix'
  warnings.warn(f"Could not get documentation group for {cls}: {exc}")
2025-05-10 20:17:24 | INFO | __main__ | Use Language: en_US
@plxl

plxl commented May 11, 2025

I'm also on a 50-series GPU. You can upgrade both PyTorch and xformers, as they've just released stable builds for CUDA 12.8, but RVC seems to rely on torch-directml, which hasn't been updated to support PyTorch 2.7, and there may be something else I'm missing, too.

I upgraded xformers and torch by doing this inside the RVC directory:

runtime\python.exe -m pip uninstall torch torchvision torchaudio xformers
runtime\python.exe -m pip install torch torchvision torchaudio xformers --index-url https://download.pytorch.org/whl/cu128

In the meantime, you can run on CPU (if you're not using the real-time GUI) by opening configs\config.py and changing if torch.cuda.is_available(): to if False:. I was still getting errors with my updated packages, though, so I had to switch back to torch 2.0.0+cu118, torch-directml 0.2.0.dev230426, and xformers 0.0.19.
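The config.py edit described above amounts to forcing the device-selection branch onto the CPU path. A minimal sketch of that logic (the function name and structure are mine, not RVC's; the real file differs between versions):

```python
# Sketch of the device-selection logic in configs/config.py (structure
# assumed from the comment above, not copied from the real file).
# Changing `if torch.cuda.is_available():` to `if False:` has the same
# effect as passing force_cpu=True here.

def select_device(cuda_available: bool, force_cpu: bool = False) -> str:
    if cuda_available and not force_cpu:
        return "cuda:0"   # normal path: run inference/training on the GPU
    return "cpu"          # the `if False:` patch always lands here

print(select_device(cuda_available=True, force_cpu=True))  # cpu
```

Note that this only sidesteps the incompatibility; training and inference will be much slower on CPU.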

If someone else out there knows how to get it running, please share.

@stazzz-ai

> (quoting @plxl's workaround above)

Me too, I am facing the same issue. I was able to train a new model, but only on CPU. I'm also hitting another issue: while converting I get a "timed out" error, which is very annoying.

@haofanurusai

You may try running go-realtime-gui.bat, and if you get something like:

AttributeError: 'RVC' object has no attribute 'tgt_sr'

you can patch runtime/Lib/site-packages/fairseq/checkpoint_utils.py:

search for any torch.load call and add weights_only=False to bypass the stricter loading policy in newer versions of PyTorch.
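For context: starting with PyTorch 2.6, torch.load defaults to weights_only=True, which rejects checkpoints containing pickled Python objects such as fairseq's. Rather than hand-editing each call site, a small helper can rewrite them; this is a rough sketch (it only handles torch.load(...) calls without nested parentheses in the arguments, so review the result before saving it back):

```python
import re

def add_weights_only(src: str) -> str:
    """Append weights_only=False to torch.load(...) calls that lack it.
    Naive sketch: only matches calls whose arguments contain no nested
    parentheses, e.g. torch.load(f, map_location="cpu")."""
    def fix(m: re.Match) -> str:
        args = m.group(1)
        if "weights_only" in args:
            return m.group(0)  # already patched, leave untouched
        return f"torch.load({args}, weights_only=False)"
    return re.sub(r"torch\.load\(([^()]*)\)", fix, src)

# Example: patching a line like fairseq's checkpoint loader
line = 'state = torch.load(f, map_location="cpu")'
print(add_weights_only(line))
# state = torch.load(f, map_location="cpu", weights_only=False)
```

Keep in mind that weights_only=False re-enables arbitrary pickle execution during loading, so only use it on checkpoints you trust.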

@haofanurusai

BTW, I am using an RTX 4080, so I am not sure about the RTX 50 series.

I just updated my PyTorch to 2.7.0 for better speed; then the problem occurred, and after patching it worked.

Hope it helps.

@quasiblob
Author

quasiblob commented May 31, 2025

@haofanurusai

Thanks, I found this and it partially fixes things. However, training wasn't working; I had to make some fixes, after which training starts and finishes, although the UI shows 'Error' in the bottom-right corner.

I got these features working (not thoroughly tested, just 5 minutes of testing on each):

  • Inference
  • Training
  • Merging models (everything else on this page seems to be working too)
  • Onnx export not working (edit - works, but not with simplify)

I don't know about this onnx export; I've never used it and don't know how it should work. It does something for a while, the progress bar fills, then I get "Something went wrong: connection errored out". I found two threads about this in this repo, but no solution, unless I missed it. Edit - I found the culprit for this export error; it is this line in export.py:

model, _ = onnxsim.simplify(ExportedPath)

I bypassed it, but then you have to write some code around it. I also had to change the def export_onnx(ModelPath, ExportedPath) function in infer-web.py so that it returns something other than None; otherwise the Gradio UI seems to get stuck.

Not sure what the proper fix would be, but at least the model exports now without errors, although the simplify operation is skipped.
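One way to structure that workaround (names and message strings here are mine; the real export_onnx in infer-web.py looks different) is to make the simplify step optional and always return a status string, since returning None from the Gradio callback is what left the UI stuck:

```python
def finish_export(exported_path: str, simplify=None) -> str:
    """Post-export step: run a simplify callable (e.g. onnxsim.simplify)
    if one is supplied, but always return a status string so the Gradio
    callback never hands None back to the UI."""
    if simplify is None:
        return f"Exported {exported_path} (simplify skipped)"
    try:
        model, ok = simplify(exported_path)
        # in the real code you would then save `model` back with onnx.save
    except Exception as exc:
        return f"Exported {exported_path}; simplify failed: {exc}"
    return f"Exported and simplified {exported_path}"

# Without onnxsim installed, the export still completes:
print(finish_export("model.onnx"))  # Exported model.onnx (simplify skipped)
```

Injecting the simplify step as a parameter also makes the failure path easy to test without onnxsim installed.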
