INFO:transformers.tokenization_utils:Model name '/dfs/data/ckpt/CDial_GPT/CDial-GPT_LCCC-large' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). Assuming '/dfs/data/ckpt/CDial_GPT/CDial-GPT_LCCC-large' is a path or url to a directory containing tokenizer files.
INFO:transformers.tokenization_utils:Didn't find file /dfs/data/ckpt/CDial_GPT/CDial-GPT_LCCC-large/added_tokens.json. We won't load it.
INFO:transformers.tokenization_utils:Didn't find file /dfs/data/ckpt/CDial_GPT/CDial-GPT_LCCC-large/special_tokens_map.json. We won't load it.
INFO:transformers.tokenization_utils:Didn't find file /dfs/data/ckpt/CDial_GPT/CDial-GPT_LCCC-large/tokenizer_config.json. We won't load it.
INFO:transformers.tokenization_utils:loading file /dfs/data/ckpt/CDial_GPT/CDial-GPT_LCCC-large/vocab.txt
INFO:transformers.tokenization_utils:loading file None
INFO:transformers.tokenization_utils:loading file None
INFO:transformers.tokenization_utils:loading file None
INFO:transformers.configuration_utils:loading configuration file /dfs/data/ckpt/CDial_GPT/CDial-GPT_LCCC-large/config.json
INFO:transformers.modeling_utils:loading weights file /dfs/data/ckpt/CDial_GPT/CDial-GPT_LCCC-large/pytorch_model.bin
Traceback (most recent call last):
  File "train.py", line 237, in <module>
    train()
  File "train.py", line 105, in train
    model = model_class.from_pretrained(args.model_checkpoint)
  File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_utils.py", line 345, in from_pretrained
    state_dict = torch.load(resolved_archive_file, map_location='cpu')
  File "/opt/conda/lib/python3.6/site-packages/torch/serialization.py", line 595, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/opt/conda/lib/python3.6/site-packages/torch/serialization.py", line 764, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, 'v'.