Could not consume arg: --local_rank #18
Open · Droliven opened this issue Sep 18, 2023 · 1 comment

Droliven commented Sep 18, 2023

After fine-tuning the model on 4× A100 80GB GPUs with the following command:

CUDA_VISIBLE_DEVICES=0,1,2,3 python3 -m torch.distributed.launch --nproc_per_node=4 --nnodes=$WORLD_SIZE --node_rank=$RANK --master_addr=$MASTER_ADDR --master_port=29005 \
    lora_finetune.py \
    --base_model model_zoos/lmsys_vicuna-13b-v1.3 \
    --data_path datas/gpt4tools_instructions/origin/gpt4tools_71k.json \
    --output_dir outputs/gpt4tools \
    --prompt_template_name gpt4tools \
    --num_epochs 6 \
    --batch_size 512 \
    --cutoff_len 2048 \
    --group_by_length \
    --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' \
    --lora_r 16 \
    --micro_batch_size=8 \
    2>&1 | tee outputs/gpt4tools/log_`date +%Y%m%d-%H%M%S`.out

it fails at the end of training with:

100%|██████████| 834/834 [16:13:11<00:00, 70.01s/it]

 If there's a warning about missing keys above, please disregard :)
ERROR: Could not consume arg: --local_rank=3
Usage: lora_finetune.py --local_rank=3 --base_model /input/danglingwei.dlw/model_zoos/lmsys_vicuna-13b-v1.3 --data_path /input/danglingwei.dlw/datas/gpt4tools_instructions/origin/gpt4tools_71k.json --output_dir /input/danglingwei.dlw/outputs/gpt4tools --prompt_template_name gpt4tools --num_epochs 6 --batch_size 512 --cutoff_len 2048 --group_by_length --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' --lora_r 16 -

For detailed information on this command, run:
  lora_finetune.py --local_rank=3 --base_model /input/danglingwei.dlw/model_zoos/lmsys_vicuna-13b-v1.3 --data_path /input/danglingwei.dlw/datas/gpt4tools_instructions/origin/gpt4tools_71k.json --output_dir /input/danglingwei.dlw/outputs/gpt4tools --prompt_template_name gpt4tools --num_epochs 6 --batch_size 512 --cutoff_len 2048 --group_by_length --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' --lora_r 16 - --help

 If there's a warning about missing keys above, please disregard :)
ERROR: Could not consume arg: --local_rank=2
Usage: lora_finetune.py --local_rank=2 --base_model /input/danglingwei.dlw/model_zoos/lmsys_vicuna-13b-v1.3 --data_path /input/danglingwei.dlw/datas/gpt4tools_instructions/origin/gpt4tools_71k.json --output_dir /input/danglingwei.dlw/outputs/gpt4tools --prompt_template_name gpt4tools --num_epochs 6 --batch_size 512 --cutoff_len 2048 --group_by_length --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' --lora_r 16 -

For detailed information on this command, run:
  lora_finetune.py --local_rank=2 --base_model /input/danglingwei.dlw/model_zoos/lmsys_vicuna-13b-v1.3 --data_path /input/danglingwei.dlw/datas/gpt4tools_instructions/origin/gpt4tools_71k.json --output_dir /input/danglingwei.dlw/outputs/gpt4tools --prompt_template_name gpt4tools --num_epochs 6 --batch_size 512 --cutoff_len 2048 --group_by_length --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' --lora_r 16 - --help

 If there's a warning about missing keys above, please disregard :)
ERROR: Could not consume arg: --local_rank=1
Usage: lora_finetune.py --local_rank=1 --base_model /input/danglingwei.dlw/model_zoos/lmsys_vicuna-13b-v1.3 --data_path /input/danglingwei.dlw/datas/gpt4tools_instructions/origin/gpt4tools_71k.json --output_dir /input/danglingwei.dlw/outputs/gpt4tools --prompt_template_name gpt4tools --num_epochs 6 --batch_size 512 --cutoff_len 2048 --group_by_length --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' --lora_r 16 -

For detailed information on this command, run:
  lora_finetune.py --local_rank=1 --base_model /input/danglingwei.dlw/model_zoos/lmsys_vicuna-13b-v1.3 --data_path /input/danglingwei.dlw/datas/gpt4tools_instructions/origin/gpt4tools_71k.json --output_dir /input/danglingwei.dlw/outputs/gpt4tools --prompt_template_name gpt4tools --num_epochs 6 --batch_size 512 --cutoff_len 2048 --group_by_length --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' --lora_r 16 - --help

 If there's a warning about missing keys above, please disregard :)
ERROR: Could not consume arg: --local_rank=0
Usage: lora_finetune.py --local_rank=0 --base_model /input/danglingwei.dlw/model_zoos/lmsys_vicuna-13b-v1.3 --data_path /input/danglingwei.dlw/datas/gpt4tools_instructions/origin/gpt4tools_71k.json --output_dir /input/danglingwei.dlw/outputs/gpt4tools --prompt_template_name gpt4tools --num_epochs 6 --batch_size 512 --cutoff_len 2048 --group_by_length --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' --lora_r 16 -

For detailed information on this command, run:
  lora_finetune.py --local_rank=0 --base_model /input/danglingwei.dlw/model_zoos/lmsys_vicuna-13b-v1.3 --data_path /input/danglingwei.dlw/datas/gpt4tools_instructions/origin/gpt4tools_71k.json --output_dir /input/danglingwei.dlw/outputs/gpt4tools --prompt_template_name gpt4tools --num_epochs 6 --batch_size 512 --cutoff_len 2048 --group_by_length --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' --lora_r 16 - --help
/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py:180: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

  warnings.warn(
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 570 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 571 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 572 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 2) local_rank: 3 (pid: 573) of binary: /opt/conda/bin/python3
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 195, in <module>
    main()
  File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 191, in main
    launch(args)
  File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 176, in launch
    run(args)
  File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 753, in run
    elastic_launch(
  File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
lora_finetune.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-09-18_05:24:03
  host      : zcoregputrain-55-011068195071
  rank      : 3 (local_rank: 3)
  exitcode  : 2 (pid: 573)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
Droliven reopened this Sep 19, 2023

Yangr116 (Collaborator) commented:
We use torchrun to launch the training script; you can check here to see how it differs from torch.distributed.launch in its handling of local_rank.
Alternatively, you can change fire to argparse.
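
As a minimal sketch of that suggestion (the train function and its parameters below are illustrative, not the actual lora_finetune.py signature): giving the fire entrypoint an explicit local_rank parameter lets it consume the flag that torch.distributed.launch appends, while defaulting it from the LOCAL_RANK environment variable keeps the same script working under torchrun, which sets the variable instead of passing the flag.

import os

import fire


def train(
    base_model: str = "",
    data_path: str = "",
    output_dir: str = "",
    # torch.distributed.launch passes --local_rank=N on the command line;
    # torchrun exports LOCAL_RANK instead, so fall back to the environment.
    local_rank: int = int(os.environ.get("LOCAL_RANK", 0)),
):
    print(f"running as local rank {local_rank}")


if __name__ == "__main__":
    fire.Fire(train)

With torchrun, the launch line would then be torchrun --nproc_per_node=4 --nnodes=$WORLD_SIZE --node_rank=$RANK --master_addr=$MASTER_ADDR --master_port=29005 lora_finetune.py ... with the script's own flags unchanged; torchrun never injects --local_rank, so fire has nothing unexpected to consume.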
