THIS is how to Get CUDA Working with llama-cpp-python! #1928
PasiKoodaa started this conversation in General
The official guide suggests installing CUDA support for llama-cpp-python using the following command:
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu124
However, this command may resolve to a release of llama-cpp-python for which no prebuilt CUDA wheel exists on that index. Because PyPI remains pip's primary index, pip can then silently fall back to the source distribution on PyPI and build a package that is incompatible with (or simply unaware of) your installed CUDA version.
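Before picking an index, it also helps to confirm which CUDA release your system actually provides, since the project publishes a separate wheel index per CUDA version (cu121, cu122, cu123, cu124, etc.; check the project README for the current list). A quick check, assuming the NVIDIA driver and CUDA toolkit are installed:
nvidia-smi
nvcc --version
nvidia-smi reports the highest CUDA version your driver supports, while nvcc --version shows the installed toolkit; choose the wheel index that matches what your system can run.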
To ensure compatibility, follow these steps:
1. Check the Available Versions
Visit the --extra-index-url location directly in your browser: https://abetlen.github.io/llama-cpp-python/whl/cu124
The index page lists every wheel published for this CUDA build; note the latest llama-cpp-python version that appears there. For example, as of this writing, the latest version on the cu124 index is 0.3.4. Alternatively, you can query the index from the command line, as shown below.
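If you'd rather not open a browser, pip's index subcommand can list the versions a package resolves to. Note that pip index is officially marked experimental, its output may change between pip releases, and the listing will also include versions from PyPI itself since PyPI stays configured as the primary index:
pip index versions llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu124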
2. Install the Correct Version
Once you’ve identified the correct version, install it explicitly using the following command:
pip install llama-cpp-python==0.3.4 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu124
By pinning the exact version, you force pip to resolve against a wheel that actually exists on the CUDA index, avoiding the silent fallback to a source build and ensuring that llama-cpp-python works with your installed CUDA version.
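Finally, you can verify that the installed wheel was actually built with GPU support. One quick check, assuming your installed release exposes the low-level llama_supports_gpu_offload binding (present in recent versions):
python -c "from llama_cpp import llama_supports_gpu_offload; print(llama_supports_gpu_offload())"
If this prints True, the library can offload layers to the GPU, and you can pass n_gpu_layers=-1 to Llama(...) to offload the whole model. If it prints False, you most likely received a CPU-only source build and should retry the pinned install above.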