Hi, thanks for the work. I'm trying to reproduce the reported results for the Llama3.2-1B model on MMLU. The result I got is 0.3107, which is lower than the 0.493 reported by Meta.
Could you please let me know if there are any specific settings I might have missed? Thanks in advance!
Results Log
hf (pretrained=/data/models/meta-llama/Llama-3.2-1B,dtype=auto,), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto (4)
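For reference, the log header above corresponds to an lm-evaluation-harness run roughly like the following. This is a hedged sketch, not the exact command used: the local model path `/data/models/meta-llama/Llama-3.2-1B` is taken from the log, and the `--tasks mmlu` selection is an assumption inferred from the issue (the log line itself does not name the task).

```shell
# Sketch of the lm-evaluation-harness invocation matching the log header above.
# Assumes lm-eval is installed (pip install lm-eval) and the model path exists.
lm_eval \
  --model hf \
  --model_args pretrained=/data/models/meta-llama/Llama-3.2-1B,dtype=auto \
  --tasks mmlu \
  --num_fewshot 5 \
  --batch_size auto:4
```

Note that Meta's reported numbers may come from a different evaluation setup (prompt template, scoring method, or harness version), so matching `num_fewshot` alone does not guarantee matching scores.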