Dear author,
Thank you for the amazing work!!!
I am trying to finetune the Llama 3.2 11B Vision model, but I run into the following error:
ValueError: No chat template is set for this processor. Please either set the chat_template attribute, or provide a chat template as an argument. See https://huggingface.co/docs/transformers/main/en/chat_templating for more information.
Based on ocrvqa_dataset.py, apply_chat_template is one of the data-processing steps (see the code snippet below). I very much look forward to your reply!
```python
def tokenize_dialogs(dialogs, images, processor):
    text_prompt = processor.apply_chat_template(dialogs)
    text_prompt = [prompt.replace('<|begin_of_text|>', '') for prompt in text_prompt]
```
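For context, here is a minimal sketch of how I am checking whether the processor comes with a bundled chat template. The checkpoint name is an assumption on my side (I believe the Instruct variant ships a template, while the base vision checkpoint may not), so it may not match what the recipe expects:

```python
from transformers import AutoProcessor

# Assumed checkpoint for illustration; the Instruct variant is expected
# to ship a chat template in its processor config, the base one may not.
processor = AutoProcessor.from_pretrained("meta-llama/Llama-3.2-11B-Vision-Instruct")

# If this prints None, apply_chat_template will raise the ValueError above.
print(processor.chat_template)

# The error message also mentions the alternative of passing a template explicitly:
# processor.apply_chat_template(dialogs, chat_template=my_template_string)
```

Is loading the Instruct checkpoint (or passing chat_template explicitly) the intended fix here, or should the base checkpoint work as-is?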