After reading the paper, I have some questions:
Is CARP applied on an already fine-tuned LLM like ChatGPT? If so, and if I want to apply this approach to a model that has not been fine-tuned at all (for example, the GPT variants available on Hugging Face), how should I prepare the training data to fine-tune the LLM so that CARP can be applied effectively?
I also do not understand how the paper uses the training set. From what I understand, there is a training set, and SimCSE is used to sample the few-shot demonstration examples from it. However, I do not see where the training set is used other than for sampling those few-shot examples. Was it used to fine-tune the LLM, or is it only kept around for retrieval?
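To make sure I am reading the retrieval step correctly, here is roughly how I picture the demonstration sampling. This is only a minimal sketch of my understanding, assuming a supervised SimCSE checkpoint from Hugging Face and plain cosine similarity over its embeddings; the training texts, labels, and the `retrieve_demonstrations` helper are placeholders of mine, not from your code. Please correct me if the paper does it differently:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint; the paper may use a different SimCSE variant.
CHECKPOINT = "princeton-nlp/sup-simcse-bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModel.from_pretrained(CHECKPOINT)
model.eval()

def embed(texts):
    """Encode texts into SimCSE sentence embeddings (first-token pooling)."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**batch)
    return outputs.last_hidden_state[:, 0]  # [CLS] token embedding

# Hypothetical labeled training set, just for illustration.
train_set = [
    ("the movie was a joy from start to finish", "positive"),
    ("a dull, lifeless plot with wooden acting", "negative"),
    ("one of the best films of the year", "positive"),
    ("I walked out halfway through", "negative"),
]
train_embs = embed([text for text, _ in train_set])

def retrieve_demonstrations(query, k=2):
    """Pick the k training examples most similar to the test input;
    these would become the few-shot demonstrations in the prompt."""
    sims = torch.nn.functional.cosine_similarity(embed([query]), train_embs)
    return [train_set[i] for i in sims.topk(k).indices]

print(retrieve_demonstrations("an absolutely delightful picture"))
```

If that is accurate, then the frozen LLM never sees the training set directly; it only sees the k retrieved examples inside the prompt, which is what prompted my fine-tuning question above.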
I apologize if I am asking about anything that was already covered in the paper and I simply missed it. Thank you in advance.
Table 4 in the paper shows that as the training set grows (16, 128, 256, 512, 1024 examples), CARP's accuracy increases. How is the larger training set used? Was the LLM fine-tuned on it?