
Cannot reproduce Mistral-7B-v0.1 ppl results through whitening only under 20% compression ratio #27

LyoAI opened this issue Jan 25, 2025 · 0 comments

LyoAI commented Jan 25, 2025

I cannot reproduce the results reported in the paper when compressing the Mistral-7B-v0.1 model under a 20% compression ratio, and the perplexity looks wrong:

PPL after pruning: {'wikitext2': np.float64(330.8548009158036)}
Weight Memory: 24960.9072265625 MiB
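For context, the perplexity number above follows the usual exp-of-mean-NLL definition; here is a minimal sketch of how I interpret it (the helper name is mine, not from SVDLLM.py):

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean per-token negative log-likelihood).

    A mean loss around 5.8 nats/token gives PPL ~330, which is what I see,
    versus the single-digit wikitext2 PPL the paper reports for Mistral-7B.
    """
    return math.exp(sum(token_nlls) / len(token_nlls))
```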

Below is how I ran the code:

COMPRESSION_RATIO=0.2
HUGGINGFACE_MODEL_REPO="mistralai/Mistral-7B-v0.1"
WHITENING_SAMPLE_NUMBER=256
WHITENING_DATASET="wikitext2"
SAMPLING_SEED=42
MODEL_SEQ_LEN=2048
WHITENING_INFO_SAVING_PATH="."

python SVDLLM.py \
    --step 1 \
    --ratio $COMPRESSION_RATIO \
    --model $HUGGINGFACE_MODEL_REPO \
    --whitening_nsamples $WHITENING_SAMPLE_NUMBER \
    --dataset $WHITENING_DATASET \
    --seed $SAMPLING_SEED \
    --model_seq_len $MODEL_SEQ_LEN \
    --save_path $WHITENING_INFO_SAVING_PATH \
    --DEV cuda:3
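In case it helps with debugging, here is my understanding of what the `--step 1` whitening is supposed to compute, as a toy NumPy sketch based on the SVD-LLM paper (not code from this repo; dimensions and names are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
d, n = 8, 64
W = rng.standard_normal((d, d))   # toy weight matrix
X = rng.standard_normal((d, n))   # toy calibration activations

# Whitening matrix: Cholesky factor of the activation Gram matrix
# (small jitter added for numerical stability).
S = np.linalg.cholesky(X @ X.T + 1e-6 * np.eye(d))

# SVD of the whitened weight, truncated to rank r, then un-whitened.
U, sigma, Vt = np.linalg.svd(W @ S)
r = 4
W_approx = (U[:, :r] * sigma[:r]) @ Vt[:r] @ np.linalg.inv(S)

# Relative output error on the calibration data; the whitening makes
# this truncation optimal for exactly this metric.
err = np.linalg.norm(W @ X - W_approx @ X) / np.linalg.norm(W @ X)
```

With a 20% ratio the truncation should only remove the smallest whitened singular values, so I would not expect the output error to explode the way my PPL does.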

python SVDLLM.py --step 4 --MODEL_PATH

And I cannot run it in "low-resource" mode either.
