Laravel 11 was released on March 12th, 2024, so the pre-training data of Qwen2.5 should cover that period. Thoughts: this issue may be more complex than it initially appears. For general QA without RAG, the research question is whether, when multiple versions of the same "object" appear in the training data (with the versions present in varying proportions), the model can reliably (a) understand the concept of versions and (b) retrieve the right content from a query tied to that object (e.g., the version id). Post-training attempts to address this question, though without conclusive results so far. I believe this could be a good read.
-
I'm looking for a model with more recent data. So far I've tried
- Codestral
- ChatGPT-4o mini
- CodeSeek v2
- Qwen2.5
- OpenChat
When I try this prompt, for example:
Where do you register global middleware in Laravel 11?
All the models answer that it should go in a file named Kernel.php in the app/Http folder. For Laravel 11 this is incorrect: it belongs in bootstrap/app.php. This shows that the newest Laravel 11 documentation and/or code was not part of the training set.
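For reference, this is roughly what a correct Laravel 11 answer should look like: global middleware is added in bootstrap/app.php through the withMiddleware() callback on the application builder (the EnsureTokenIsValid class below is just a placeholder for illustration, not part of the question):

```php
<?php
// bootstrap/app.php — the Laravel 11 application skeleton

use App\Http\Middleware\EnsureTokenIsValid; // placeholder middleware class for illustration
use Illuminate\Foundation\Application;
use Illuminate\Foundation\Configuration\Exceptions;
use Illuminate\Foundation\Configuration\Middleware;

return Application::configure(basePath: dirname(__DIR__))
    ->withRouting(
        web: __DIR__.'/../routes/web.php',
        commands: __DIR__.'/../routes/console.php',
        health: '/up',
    )
    ->withMiddleware(function (Middleware $middleware) {
        // Append a class to the global middleware stack (runs on every request).
        $middleware->append(EnsureTokenIsValid::class);
    })
    ->withExceptions(function (Exceptions $exceptions) {
        //
    })
    ->create();
```

In Laravel 10 and earlier the same registration lived in the $middleware property of app/Http/Kernel.php, which is exactly the outdated answer the models keep giving.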
I must say I'm impressed with the speed at which Qwen responds on my PC with text-generation-webui and CodeGPT. I use an NVIDIA 4070 Ti with 12GB of VRAM and a 13900K with 32GB of RAM. I mostly use quantized models, e.g. a 7B at Q8 (around 8GB) or a 14B at Q5 (around 10GB).
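As a rough back-of-the-envelope check (actual GGUF file sizes vary by quant variant and overhead): 7B weights at ~8 bits each come to about 7 GB, and 14B weights at ~5 bits each come to about 8.75 GB, which is roughly consistent with the ~8GB and ~10GB figures above once metadata and higher-precision layers are added.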
Isn't there a LoRA set that can update these models, or something similar? I've been looking for such 'updates' but haven't been able to find any. I don't know how to train a LoRA myself, nor am I really inclined to, since my PC isn't quite powerful enough even though it's not that low-spec.
Thoughts?