Model Series
Qwen2.5
What are the models used?
qwen2.5:72b-instruct-q8_0
What is the scenario where the problem happened?
ollama
Is this badcase known and can it be solved using available techniques?
Information about environment
Ubuntu, 2x RTX 6000 Ada... inference
Description
I don't have time to write a detailed report, but I want to let you know that the model's output in Polish is inferior to what Llama 3.1 produces.
So the 'multilingual' claim does not seem to hold for Polish.
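For anyone trying to reproduce or compare, here is a minimal sketch of a request one might send to ollama's local generate endpoint (default `http://localhost:11434/api/generate`). The Polish prompt is illustrative, not from the original report; running the same prompt against a Llama 3.1 tag would allow a side-by-side comparison:

```python
import json

# Payload for ollama's /api/generate endpoint.
# The model tag matches the one from this report; the prompt is a
# hypothetical example for eliciting Polish output.
payload = {
    "model": "qwen2.5:72b-instruct-q8_0",
    "prompt": "Napisz krótki akapit o historii Krakowa.",
    "stream": False,
}

# To actually send it (requires a running ollama server):
#   curl http://localhost:11434/api/generate -d "$(cat payload.json)"
print(json.dumps(payload, ensure_ascii=False))
```

Swapping `"model"` for a Llama 3.1 tag and diffing the two responses would make the quality gap concrete enough for a proper badcase report.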