I was wondering if there's a way to run Ollama with the plugin on the CPU instead of the GPU.
Mainly, I don't want to use my GPU all the time, and the models I'm currently running are very lightweight.
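For anyone landing here: this is configured on the Ollama side rather than in the plugin. A sketch of two common approaches, assuming a default local server on port 11434 and a placeholder model name (`llama3` here is just an example):

```shell
# Option 1 (assumption: NVIDIA GPU): hide all GPUs from the server process,
# forcing CPU-only inference for every model it serves.
CUDA_VISIBLE_DEVICES="" ollama serve

# Option 2: keep the server as-is, but request zero GPU-offloaded layers
# for a single call via the REST API's "num_gpu" option.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Hello",
  "options": { "num_gpu": 0 }
}'
```

Option 2 applies per request, so other models can still use the GPU; Option 1 affects the whole server.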
Hey! Thank you for the suggestion, but unfortunately it is not related to the bot itself. Try reaching out on the Ollama GitHub repo.