-
@mjspeck it isn't included in the prompt; it just overrides the grammar to the generic JSON grammar.
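To make that concrete, here's a minimal sketch (the model path and prompt are placeholders, and the `schema` key is how recent llama-cpp-python versions accept a JSON schema as far as I can tell, so check your version):

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/model.gguf")  # placeholder path

# JSON mode: the prompt is untouched; sampling is constrained
# by the bundled generic JSON grammar instead.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Describe a user as JSON."}],
    response_format={"type": "json_object"},
)
print(out["choices"][0]["message"]["content"])

# Optionally pass a schema; it gets compiled to a grammar, not prompt text.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Describe a user as JSON."}],
    response_format={
        "type": "json_object",
        "schema": {"type": "object", "properties": {"name": {"type": "string"}}},
    },
)
```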
-
Hello, I have a question about this method. BTW, I have another question: does every model support this parameter? Or where can I find out which models support it?
-
Hi, I have a question regarding `response_format={"type": "json_object"}`. I expected it to constrain the output to JSON only, but unfortunately the model continues to provide explanations outside of the JSON. I'm currently testing mistral-7b-instruct-v0.2 on GPU.
I'm wondering whether this is an issue with llama-cpp-python or with the Mistral model itself? Any help would be really appreciated!
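For reference, here's roughly what I'm running (path simplified, and `n_gpu_layers` is just the GPU-offload setting I'm using):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct-v0.2.Q4_K_M.gguf",  # placeholder quant
    n_gpu_layers=-1,  # offload all layers to the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Extract the order details as JSON: ..."}],
    response_format={"type": "json_object"},
)
# Expected: pure JSON. Actual: explanations before/after the JSON object.
print(out["choices"][0]["message"]["content"])
```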
-
How does this interact with temperature? If I set temperature to 0, can I still use my own JSON schema for the expected output? Or will sampling then not have enough tokens to pick from?
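For context, my (possibly wrong) mental model is that the grammar masks out invalid tokens before the sampler runs, so temperature 0 just argmaxes over whatever survives the mask, something like:

```python
import numpy as np

def constrained_greedy_step(logits: np.ndarray, allowed: np.ndarray) -> int:
    """Temperature-0 token pick under a grammar mask (toy illustration).

    `allowed` marks the tokens the grammar permits at this step;
    in the real library the grammar engine computes this, not the user.
    """
    masked = np.where(allowed, logits, -np.inf)  # out-of-grammar tokens can't win
    return int(np.argmax(masked))                # greedy = argmax of what's left

# Toy 5-token vocab where the grammar only allows tokens 1 and 3:
logits = np.array([2.0, 0.5, 3.0, 1.0, -1.0])
allowed = np.array([False, True, False, True, False])
print(constrained_greedy_step(logits, allowed))  # -> 3, not the global argmax 2
```

If that's right, there's always at least one legal token to pick, so temperature 0 should be fine, but I'd appreciate confirmation.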
-
If I pass in a JSON schema, is it just converted to a string and then incorporated into the prompt? How so? I'm just looking to understand more about the backend of this and LLMs in general.
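In case it helps anyone answering: from skimming the source, my understanding is that the schema is compiled into a GBNF grammar that restricts token sampling, rather than being pasted into the prompt. A sketch of what I mean (`json_schema_to_gbnf` is the helper I found in llama_cpp/llama_grammar.py; the name and signature may differ between versions):

```python
import json
from llama_cpp.llama_grammar import LlamaGrammar, json_schema_to_gbnf

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}

# Compile the schema into grammar rules (GBNF) -- no prompt text involved.
gbnf = json_schema_to_gbnf(json.dumps(schema))
print(gbnf)  # human-readable production rules derived from the schema

# The grammar object is what restricts which tokens may be sampled next.
grammar = LlamaGrammar.from_string(gbnf)
```

So, if I've got this right, the model never "reads" the schema; it simply can't emit tokens that would step outside the grammar.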