NONE OF THE VISION MODELS ARE WORKING FOR FINE-TUNES #1505
Comments
Tried to replicate it but no luck. What is your transformers version here?
Apologies for the delay - I was planning to add support for text-only datasets - it might make things easier
Yes, yes. All my datasets are text-only; please add the support and ping me here when I can test it out!
I'll ping you when it's in!
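For context, the difference between the two dataset shapes under discussion can be sketched in plain Python. The field names follow common ShareGPT-style conventions, and the `is_text_only` helper is hypothetical, not an Unsloth API:

```python
# Illustrative only: two record shapes a vision fine-tuning pipeline may receive.
# A text-only record has a conversation but no image field.
text_record = {
    "conversations": [
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "Paris."},
    ]
}

# A vision record carries an image payload alongside the conversation.
vision_record = {
    "conversations": [
        {"role": "user", "content": "Describe this picture."},
        {"role": "assistant", "content": "A cat on a sofa."},
    ],
    "images": ["cat.png"],  # a path or PIL.Image in real datasets
}

def is_text_only(record: dict) -> bool:
    """Hypothetical helper: True when the record has no image payload."""
    return not record.get("images") and not record.get("image")

print(is_text_only(text_record))    # True
print(is_text_only(vision_record))  # False
```

A vision fine-tuning pipeline that assumes every record has an image will fail on the first shape, which is the gap the requested text-only support would close.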
Same error. I think it's my mistake - I should copy the Trainer settings from the Phi-4 notebook.
@sebaxakerhtc Oh, if you're doing continued pretraining, use the exact notebook for it and then change the model name - there are some changes needed!
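For continued pretraining, the usual pattern is to train on a raw `text` column with an EOS token appended so documents don't run into each other. A minimal sketch of that formatting step, where the EOS string and column names are illustrative rather than taken from the notebook:

```python
# Illustrative formatting step for continued pretraining: append EOS to each
# raw-text example so the model learns document boundaries.
EOS_TOKEN = "</s>"  # assumed; in practice use tokenizer.eos_token

def formatting_func(examples: dict) -> dict:
    """Map a batch of raw documents to a 'text' column ending in EOS."""
    return {"text": [doc + EOS_TOKEN for doc in examples["document"]]}

batch = {"document": ["First article body.", "Second article body."]}
print(formatting_func(batch)["text"][0])  # First article body.</s>
```

This is the kind of setting that differs between an instruction-tuning notebook and a continued-pretraining one, which is why swapping only the model name isn't enough.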
@danielhanchen Hey, what’s up with the text-only data? Is there some alpha version that I can already try? If not, is it possible with plain transformers (no Unsloth), or do they have the same issue?
@yukiarimo I'm currently tracking the issue at #1559 - I'll ping you ASAP when it's in Unsloth - apologies for the delay
Hello. I tried fine-tuning Pixtral today, and it is not possible to do so!
The previous working code was for LLaMA 3.1 8B as expected:
And this is the new code for Pixtral:
I had to do this, because otherwise the model doesn't load (LLaVA error):
Error:
Tried the other way:
Error:
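For what it's worth, one way to see why a checkpoint gets routed down the LLaVA code path is to inspect the `model_type` in its `config.json`. The config string below is a hand-written stand-in for illustration, not Pixtral's actual config:

```python
import json

# Hand-written stand-in for a vision checkpoint's config.json (illustrative,
# not the actual Pixtral config).
config_json = '{"model_type": "llava", "text_config": {"model_type": "mistral"}}'
config = json.loads(config_json)

# Transformers dispatches on model_type: "llava"-family configs load as
# vision-language models, so a text-only loading path will reject them.
if config["model_type"].startswith("llava"):
    print("vision-language checkpoint - needs a vision-capable loader")
else:
    print("plain text checkpoint")
```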
Please help! Why doesn't it work? Yes, the dataset is text-only, like this: