Hi everyone,

I'm sure this is a silly question, but I've been at it for hours now; I think I'm just not getting something obvious.

Each model has a preferred chat template and EOS/BOS tokens. When running models through Hugging Face you can use the tokenizer's `apply_chat_template`.
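For reference, that looks roughly like this (the model ID is just a placeholder; substitute whichever model you're using):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")  # placeholder model ID

messages = [
    {"role": "user", "content": "Hello!"},
]

# Renders the model's chat template into a single prompt string
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
```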
I found that when using `llama_cpp` locally I can get the metadata and the Jinja template from the model with:

```python
from llama_cpp import Llama

LLM_Model = Llama(model_path="path/to/model.gguf")  # path is a placeholder

metadata = LLM_Model.metadata
chat_template = metadata.get('tokenizer.chat_template', None)
```
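To frame the question, here's a minimal sketch of how that extracted template could then be rendered with `jinja2` (assuming the template follows the usual HF convention of taking `messages`, `bos_token`/`eos_token`, and `add_generation_prompt` variables; the token strings below are placeholders and should really be looked up from the model's own metadata):

```python
from jinja2 import Environment

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

env = Environment(trim_blocks=True, lstrip_blocks=True)
template = env.from_string(chat_template)

# Note: some templates also call a raise_exception() helper,
# which would need to be provided to the environment.
prompt = template.render(
    messages=messages,
    bos_token="<s>",   # placeholder
    eos_token="</s>",  # placeholder
    add_generation_prompt=True,
)
print(prompt)
```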
Is this a good method?
How do other people pull and apply chat templates locally for various models?
Thanks!