I used the provided trained weights of LLaMA-Adapter V1 and compared its performance against Alpaca, but I was not able to reproduce the result in Figure 6 of the LLaMA-Adapter V1 paper. As shown in the image below, there are a lot of ties.
For the Alpaca weights, I followed the official guide at https://huggingface.co/tatsu-lab/alpaca-7b-wdiff. Could you please detail the exact steps needed to reproduce Figure 6?
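In case it helps pinpoint where my setup diverges: recovering the Alpaca weights from the released diff amounts to adding each diff tensor back onto the corresponding raw LLaMA-7B tensor (the official `weight_diff.py recover` script in tatsu-lab/stanford_alpaca does this per parameter, plus tokenizer recovery and an integrity check). Below is a minimal sketch of that core step; the three local paths are placeholders for wherever the HF-format LLaMA-7B weights, the downloaded diff, and the output should live:

```python
import torch
from transformers import AutoModelForCausalLM

# Placeholder paths: point these at your converted LLaMA-7B (HF format)
# and the downloaded tatsu-lab/alpaca-7b-wdiff checkpoint.
raw = AutoModelForCausalLM.from_pretrained("./llama-7b-hf")
recovered = AutoModelForCausalLM.from_pretrained("./alpaca-7b-wdiff")

# The released diff is (tuned - raw), so adding the raw weights back
# in place yields the fine-tuned Alpaca weights.
state_raw = raw.state_dict()
with torch.no_grad():
    for name, tensor in recovered.state_dict().items():
        tensor.add_(state_raw[name])

recovered.save_pretrained("./alpaca-7b-recovered")
```

Note that both models are loaded in float32 by default, so this step needs on the order of 2 × 28 GB of CPU RAM for the 7B model. If anything in this procedure looks wrong, that might explain the mismatch.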