DirectML use device_id to select GPU #410
Comments
Did you confirm that it works when you tried it with DmlExecutionProvider?
Yes, this option is effective; I confirmed it by monitoring GPU utilization.
How do I know which ID corresponds to which GPU?
In Windows Task Manager: GPU 0 is device_id 0, GPU 1 is device_id 1.
I apologize, I was not clear enough. How do I know which ID corresponds to which GPU *in a Python script*?
I have only found a C++ method at present. Here is the C++ example:

```cpp
#include <dxgi.h>
#include <vector>
#include <iostream>
#pragma comment(lib, "dxgi.lib")

// Enumerate all DXGI adapters; the enumeration order matches the DirectML device_id.
std::vector<IDXGIAdapter*> EnumerateAdapters()
{
    std::vector<IDXGIAdapter*> adapters;
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return adapters;
    IDXGIAdapter* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
        adapters.push_back(adapter);
    factory->Release();
    return adapters;
}

int main()
{
    for (IDXGIAdapter* adapter : EnumerateAdapters()) {
        DXGI_ADAPTER_DESC desc;
        adapter->GetDesc(&desc);
        std::wcout << desc.Description << std::endl;
        adapter->Release();
    }
}
```
I have submitted a feature request to the official ONNX Runtime. |
OK, I have found a Python solution now. We can use torch-directml. I have tested that its result is the same as the official C++ code's. Here is the Python test script:

```python
import torch_directml

if torch_directml.is_available():  # check whether a DML device is present
    for i in range(torch_directml.device_count()):
        print(i, torch_directml.device_name(i))
else:
    print("no DirectML device found")
```
Thank you for your cooperation and for providing the information. I will probably need to set up a development environment on Windows to do that. I need to think a bit, because I can't risk breaking my current development environment. By the way, as stated in the readme, I don't even have an AMD GPU, so I can't test it, which makes it quite risky to proceed. For now I can't get started on it right away, so please wait a bit.
"There is also a particularly important point: to use torch-directml, we must install torch==2.0.0 and torchvision in the CPU version; otherwise pip may auto-install other packages that depend on Nvidia. So I suggest splitting the software into two versions, 'directml' and 'cuda'. Both of them support onnx and pth."
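The CPU-only install described above can be sketched with pip's published CPU wheel index (the index URL is PyTorch's documented CPU channel, not something stated in this thread):

```shell
# Install CPU-only torch/torchvision so pip does not pull in CUDA dependencies,
# then add torch-directml on top.
pip install torch==2.0.0 torchvision --index-url https://download.pytorch.org/whl/cpu
pip install torch-directml
```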
Selecting by device ID (not device name) will be in the next release, as an experimental feature.
I tested many different settings. You closed #485, so I just wanted to update you that I have an issue only with crepe: it increases res ms even if I set the highest chunk. Crepe tiny and full are working perfectly for me; I only had to set the chunk to 128 and it seems good.
Use the latest version, v.1.5.3.10.
Tested with an RX 5700 AMD GPU: I was having issues unless the setting is "Crepe tiny"; the others seem to cause high res ms.
AMD is used with ONNX. crepe is not ONNX; crepe tiny is ONNX. So it's operating as expected.
@vitpekarek
Hello, I'm sorry but I didn't really understand something above. My problem is that I have an RX 7800x GPU and I can't select it. Could you please say exactly what I have to type, and where?
You don't have to type anything anywhere. Just click on the buttons that may say GPU0, GPU1, etc. One of them should be your 7800x.
It doesn't matter which one I choose, my CPU is always at 90%. Is that normal?
No, it should use the GPU. Check the GPU usage regardless of the CPU usage; it should rise whenever you speak. You could also try v2 of the software instead of v1.
How can I download the other software?
Issue Type
Feature Request
vc client version number
MMVCServerSIO_win_onnxdirectML-cuda_v.1.5.3.8a.zip
OS
Windows11
GPU
Radeon Vega 8, RTX 3060 6G
Clear setting
yes
Sample model
yes
Input chunk num
yes
Wait for a while
The GUI successfully launched.
read tutorial
yes
Voice Changer type
RVC
Model type
ONNX f0
Situation
```python
import onnxruntime

# List the execution providers available in this onnxruntime build
print(onnxruntime.get_available_providers())

model_path = "model.onnx"
session = onnxruntime.InferenceSession(model_path, providers=['DmlExecutionProvider'])
session.set_providers(['DmlExecutionProvider'], [{'device_id': 0}])
```
We can change "device_id" to choose a GPU.
Can you add a feature to switch GPUs?
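The same per-GPU selection can also be expressed as data, since `onnxruntime.InferenceSession` accepts `(provider, options)` pairs in its `providers` argument. A minimal sketch (the helper name `dml_provider_options` is hypothetical, not part of onnxruntime):

```python
def dml_provider_options(device_id):
    """Build the (provider, options) pairs accepted by
    onnxruntime.InferenceSession(..., providers=...) so a specific
    DirectML device is selected up front."""
    return [("DmlExecutionProvider", {"device_id": device_id})]

# e.g. pick the second GPU as shown in Task Manager
print(dml_provider_options(1))
```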