diff --git a/WizardCoder/README.md b/WizardCoder/README.md
new file mode 100644
index 0000000..b8a0a19
--- /dev/null
+++ b/WizardCoder/README.md
@@ -0,0 +1,198 @@
# WizardCoder: Empowering Code Large Language Models with Evol-Instruct

[Code License](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
[Data License](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)
[Python 3.9+](https://www.python.org/downloads/release/python-390/)

To develop our WizardCoder model, we begin by adapting the Evol-Instruct method specifically for coding tasks, tailoring the prompt to the domain of code-related instructions. We then fine-tune the Code LLM, StarCoder, on the newly created instruction-following training set.

## News

- 🔥 Our **WizardCoder-15B-v1.0** model achieves **57.3 pass@1** on the [HumanEval Benchmark](https://github.com/openai/human-eval), which is **22.3** points higher than the SOTA open-source Code LLMs.
- 🔥 We released **WizardCoder-15B-v1.0**, trained on **78k** evolved code instructions. Please check out the [Model Weights](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0) and the [Paper]().
- 📣 Please follow our Twitter account https://twitter.com/WizardLM_AI and Hugging Face repo https://huggingface.co/WizardLM. We will announce every new release there first.


## Comparing WizardCoder with the Closed-Source Models

The SOTA LLMs for code generation, such as GPT-4, Claude, and Bard, are predominantly closed-source, and acquiring access to their APIs proves challenging. In this study, we instead retrieve the HumanEval and HumanEval+ scores from the [LLM-Humaneval-Benchmarks](https://github.com/my-other-github-account/llm-humaneval-benchmarks). All of the listed models generate a single code solution per problem, and the resulting pass rate percentage is reported. Our **WizardCoder** generates answers with greedy decoding.

🔥 The following figure shows that our **WizardCoder attains the third position in this benchmark**, surpassing Claude-Plus (59.8 vs. 53.0) and Bard (59.8 vs. 44.5), despite being substantially smaller than these models.

![WizardCoder vs. closed-source models on HumanEval (pass@1)](imgs/pass1.png)
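For context, the pass@1 numbers above and in the next section follow the standard HumanEval protocol: generate n samples per problem, count how many pass the unit tests, and apply the unbiased pass@k estimator from the HumanEval paper. The sketch below is only a reference implementation of that estimator, not this repository's code (the repo relies on `evaluate_functional_correctness` from human-eval; the example numbers are illustrative):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021).

    n: total samples generated for a problem,
    c: samples that pass all unit tests,
    k: evaluation budget.
    Returns the estimated probability that at least one of k samples is correct.
    """
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 200 samples for a problem, 118 of them passing -> pass@1 estimate of 0.59
print(pass_at_k(n=200, c=118, k=1))
```

With greedy decoding (a single deterministic completion per problem), pass@1 reduces to the plain fraction of problems whose completion passes all tests.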
## Comparing WizardCoder with the Open-Source Models

The following table provides a comprehensive comparison of our **WizardCoder** with other models on the HumanEval and MBPP benchmarks. Following the approach of previous studies, we generate n samples for each problem to estimate the pass@1 score. The results clearly show that our **WizardCoder** holds a substantial performance advantage over all the open-source models.


| Model               | HumanEval Pass@1 | MBPP Pass@1 |
|---------------------|------------------|-------------|
| CodeGen-16B-Multi   | 18.3             | 20.9        |
| CodeGeeX            | 22.9             | 24.4        |
| LLaMA-33B           | 21.7             | 30.2        |
| LLaMA-65B           | 23.7             | 37.7        |
| PaLM-540B           | 26.2             | 36.8        |
| PaLM-Coder-540B     | 36.0             | 47.0        |
| PaLM 2-S            | 37.6             | 50.0        |
| CodeGen-16B-Mono    | 29.3             | 35.3        |
| Code-Cushman-001    | 33.5             | 45.9        |
| StarCoder-15B       | 33.6             | 43.6*       |
| InstructCodeT5+     | 35.0             | --          |
| WizardLM-30B 1.0    | 37.8             | --          |
| WizardCoder-15B 1.0 | **57.3**         | **51.8**    |

*: The reproduced result of StarCoder on MBPP.

## Call for Feedback

We welcome everyone to evaluate WizardCoder with your professional and difficult instructions, and to show us examples of poor performance, together with your suggestions, in the [issue discussion](https://github.com/nlpxucan/WizardLM/issues) area. We are currently focusing on improving Evol-Instruct and hope to address existing weaknesses and issues in the next version of WizardCoder. After that, we will open-source the code and pipeline of the up-to-date Evol-Instruct algorithm and work together with you to improve it.


## Contents

1. [Online Demo](#online-demo)

2. [Fine-tuning](#fine-tuning)

3. [Inference](#inference)

4. [Evaluation](#evaluation)

5. [Citation](#citation)

6. [Disclaimer](#disclaimer)

## Online Demo

We will provide our latest models for you to try for as long as possible. If you find a link is not working, please try another one. At the same time, please try as many **real-world** and **challenging** code-related problems as possible that you encounter in your work and life. We will continue to evolve our models with your feedback.

[Demo Link](https://1c48cbf5c83110ed.gradio.app/) (We currently use greedy decoding.)

## Fine-tuning

We fine-tune WizardCoder using the modified `train.py` from [Llama-X](https://github.com/AetherCortex/Llama-X).
We fine-tune StarCoder-15B with the following hyperparameters:

| Hyperparameter | StarCoder-15B |
|----------------|---------------|
| Batch size     | 512           |
| Learning rate  | 2e-5          |
| Epochs         | 3             |
| Max length     | 2048          |
| Warmup steps   | 30            |
| LR scheduler   | cosine        |

To reproduce our fine-tuning of WizardCoder, please follow these steps:
1. According to the instructions of [Llama-X](https://github.com/AetherCortex/Llama-X), install the environment, download the training code, and deploy. (Note: we use `deepspeed==0.9.2` and `transformers==4.29.2`.)
2. Replace Llama-X's `train.py` with `train_wizardcoder.py` from our repo (`src/train_wizardcoder.py`).
3. Log in to Hugging Face:
```bash
huggingface-cli login
```
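Before launching the training command in step 4, prepare your instruction data. `train_wizardcoder.py` in this diff loads `--data_path` with `load_dataset('json', ...)` and reads the Alpaca-style fields `instruction`, `input` (optional), and `output`, so `code_instruction_data.json` is expected to look roughly like the sketch below (the two records are made-up illustrations, not part of any released dataset):

```json
[
  {
    "instruction": "Write a Python function that checks whether a string is a palindrome.",
    "input": "",
    "output": "def is_palindrome(s: str) -> bool:\n    s = s.lower()\n    return s == s[::-1]"
  },
  {
    "instruction": "Explain what the following code prints.",
    "input": "print(sum(range(10)))",
    "output": "It prints 45, the sum of the integers 0 through 9."
  }
]
```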
4. Execute the following training command:
```bash
deepspeed train_wizardcoder.py \
    --model_name_or_path "bigcode/starcoder" \
    --data_path "/your/path/to/code_instruction_data.json" \
    --output_dir "/your/path/to/ckpt" \
    --num_train_epochs 3 \
    --model_max_length 2048 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 4 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 50 \
    --save_total_limit 2 \
    --learning_rate 2e-5 \
    --warmup_steps 30 \
    --logging_steps 2 \
    --lr_scheduler_type "cosine" \
    --report_to "tensorboard" \
    --gradient_checkpointing True \
    --deepspeed configs/deepspeed_config.json \
    --fp16 True
```

## Inference

We provide a decoding script for WizardCoder, which reads an input file, generates a response for each sample, and consolidates the results into an output file.

You can specify `base_model`, `input_data_path`, and `output_data_path` in `src/inference_wizardcoder.py` to set the decoding model, the path of the input file, and the path of the output file.

```bash
pip install jsonlines
```

The decoding command is:
```bash
python src/inference_wizardcoder.py \
    --base_model "/your/path/to/ckpt" \
    --input_data_path "/your/path/to/input/data.jsonl" \
    --output_data_path "/your/path/to/output/result.jsonl"
```


## Evaluation

We provide the evaluation script on HumanEval for WizardCoder.

1. According to the instructions of [HumanEval](https://github.com/openai/human-eval), install the environment.
2. Run the following script to generate the answers.
```bash
model="/path/to/your/model"
temp=0.2
max_len=2048
pred_num=200
num_seqs_per_iter=2

output_path=preds/T${temp}_N${pred_num}

mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model

# 164 problems, 21 per GPU if gpu_num=8
index=0
gpu_num=8
for ((i = 0; i < $gpu_num; i++)); do
  start_index=$((i * 21))
  end_index=$(((i + 1) * 21))

  gpu=$((i))
  echo 'Running process #' ${i} 'from' $start_index 'to' $end_index 'on GPU' ${gpu}
  ((index++))
  (
    CUDA_VISIBLE_DEVICES=$gpu python humaneval_gen.py --model ${model} \
      --start_index ${start_index} --end_index ${end_index} --temperature ${temp} \
      --num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path}
  ) &
  if (($index % $gpu_num == 0)); then wait; fi
done
```
3. Run the post-processing code `src/process_humaneval.py` to collect the code completions from all answer files.
```bash
output_path=preds/T${temp}_N${pred_num}

echo 'Output path: '$output_path
python process_humaneval.py --path ${output_path} --out_path ${output_path}.jsonl --add_prompt

evaluate_functional_correctness ${output_path}.jsonl
```

## Citation

Please cite the repo if you use the data or code in this repo.

```
@misc{luo2023wizardcoder,
    title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct},
    author={Ziyang Luo and Can Xu and Pu Zhao and Qingfeng Sun and Xiubo Geng and Wenxiang Hu and Chongyang Tao and Jing Ma and Qingwei Lin and Daxin Jiang},
    year={2023},
}
```

## Disclaimer

The resources, including code, data, and model weights, associated with this project are restricted to academic research purposes only and cannot be used for commercial purposes.
The content produced by any version of WizardCoder is influenced by uncontrollable variables such as randomness, and therefore, the accuracy of the output cannot be guaranteed by this project. This project does not accept any legal liability for the content of the model output, nor does it assume responsibility for any losses incurred due to the use of associated resources and output results. \ No newline at end of file diff --git a/WizardCoder/imgs/pass1.png b/WizardCoder/imgs/pass1.png new file mode 100644 index 0000000..a2fe82f Binary files /dev/null and b/WizardCoder/imgs/pass1.png differ diff --git a/WizardCoder/src/humaneval_gen.py b/WizardCoder/src/humaneval_gen.py new file mode 100644 index 0000000..c54d6ea --- /dev/null +++ b/WizardCoder/src/humaneval_gen.py @@ -0,0 +1,177 @@ +import argparse +import pprint +import sys +import os +import re +from tqdm import tqdm +import torch +from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig +from human_eval.data import write_jsonl, read_problems, stream_jsonl + +if torch.cuda.is_available(): + device = "cuda" +else: + device = "cpu" + +try: + if torch.backends.mps.is_available(): + device = "mps" +except: + pass + +def extract_text(prompt, remove_lines=True): + token = '\"\"\"' + start = token + end = '>>>' + + start_idx = prompt.find(start) + len(start) + end_idx = prompt.find(end) + + output = prompt[start_idx: end_idx] + if remove_lines: + output = output.replace('\n', ' ') + output = re.sub(r"\s+", " ", output).strip() + + return output + +def generate_prompt(input): + INSTRUCTION = f"""Below is an instruction that describes a task. Write a response that appropriately completes the request. + +### Instruction: +Create a Python script for this problem: +{input} + +### Response:""" + return INSTRUCTION + +def get_model( + load_8bit: bool = False, + base_model: str = "bigcode/starcoder", +): + assert base_model, ( + "Please specify a --base_model, e.g. --base_model='bigcode/starcoder'" + ) + + tokenizer = AutoTokenizer.from_pretrained(base_model) + if device == "cuda": + model = AutoModelForCausalLM.from_pretrained( + base_model, + load_in_8bit=load_8bit, + torch_dtype=torch.float16, + device_map="auto", + ) + elif device == "mps": + model = AutoModelForCausalLM.from_pretrained( + base_model, + device_map={"": device}, + torch_dtype=torch.float16, + ) + model.config.pad_token_id = tokenizer.pad_token_id + + if not load_8bit: + model.half() # seems to fix bugs for some users. 
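    # NOTE: only the CUDA and MPS branches above instantiate `model`, so this
    # helper effectively assumes a GPU or Apple Silicon machine; on a CPU-only
    # box the function raises a NameError before reaching this point.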
+ + model.eval() + if torch.__version__ >= "2" and sys.platform != "win32": + model = torch.compile(model) + + return tokenizer, model + + +def main(): + parser = argparse.ArgumentParser() + + parser.add_argument('--model', type=str, default='bigcode/starcoder', help="") + parser.add_argument('--output_path', type=str, help="") + parser.add_argument('--start_index', type=int, default=0, help="") + parser.add_argument('--end_index', type=int, default=164, help="") + parser.add_argument('--temperature', type=float, default=0.8, help="") + parser.add_argument('--N', type=int, default=200, help="") + parser.add_argument('--max_len', type=int, default=512, help="") + parser.add_argument('--decoding_style', type=str, default='sampling', help="") + parser.add_argument('--num_seqs_per_iter', type=int, default=50, help='') + parser.add_argument('--overwrite', action='store_true', help='') + + args = parser.parse_args() + + argsdict = vars(args) + print(pprint.pformat(argsdict)) + + STOP_SEQS = ['\nclass', '\ndef', '\n#', '\nif', '\nprint'] + + problems = read_problems() + + task_ids = sorted(problems.keys())[args.start_index: args.end_index] + prompts = [problems[task_id]['prompt'] for task_id in task_ids] + num_samples = len(prompts) + print("Number of samples: {}".format(num_samples)) + + tokenizer, model = get_model(base_model=args.model) + generation_config = GenerationConfig( + pad_token_id=tokenizer.pad_token_id, + do_sample=True, + temperature=args.temperature, + max_length=args.max_len, + num_return_sequences=args.num_seqs_per_iter, + eos_token_id=tokenizer.eos_token_id, + top_p=0.95 + ) + + print(f"Loaded {args.model}.") + for i in tqdm(range(num_samples), ncols=0, total=num_samples): + output_file = args.output_path + '/{}.jsonl'.format(args.start_index + i) + + if os.path.exists(output_file) and not args.overwrite: + print(f'Skip {output_file} as it already exists') + continue + + prompt = prompts[i].replace(' ', '\t') + prompt_batch = [generate_prompt(prompt)] + + ids_batch = [task_ids[i]] + + completion_seqs = [] + + encoding = tokenizer(prompt_batch, return_tensors="pt", truncation=True, max_length=args.max_len).to(device) + + if args.decoding_style == 'sampling': + loops = int(args.N / args.num_seqs_per_iter) + else: + loops = 1 + + for _ in tqdm(range(loops), total=loops, leave=False, ncols=0): + + with torch.no_grad(): + if args.decoding_style == 'sampling': + gen_tokens = model.generate( + **encoding, + generation_config=generation_config + ) + + if gen_tokens is not None: + gen_seqs = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True) + else: + gen_seqs = None + + if gen_seqs is not None: + assert len(ids_batch) == 1 + task_id = ids_batch[0] + + for seq_idx, gen_seq in enumerate(gen_seqs): + completion_seq = gen_seq.split("### Response:")[1] + completion_seq = completion_seq.replace('\t', ' ') + all_code = gen_seq.replace('\t', ' ') + + completion_seqs.append( + {'task_id': task_id, + 'completion': completion_seq, + 'all_code': all_code, + } + ) + + print("Saving results to {}".format(output_file)) + write_jsonl(output_file, completion_seqs) + + +if __name__ == '__main__': + main() \ No newline at end of file diff --git a/WizardCoder/src/inference_wizardcoder.py b/WizardCoder/src/inference_wizardcoder.py new file mode 100644 index 0000000..1ec8f1c --- /dev/null +++ b/WizardCoder/src/inference_wizardcoder.py @@ -0,0 +1,121 @@ +import sys +import os +import fire +import torch +import transformers +import json +import jsonlines + +from transformers import AutoTokenizer, 
AutoModelForCausalLM, GenerationConfig + +if torch.cuda.is_available(): + device = "cuda" +else: + device = "cpu" + +try: + if torch.backends.mps.is_available(): + device = "mps" +except: + pass + +def evaluate( + batch_data, + tokenizer, + model, + input=None, + temperature=1, + top_p=0.9, + top_k=40, + num_beams=1, + max_new_tokens=2048, + **kwargs, +): + prompts = generate_prompt(batch_data, input) + inputs = tokenizer(prompts, return_tensors="pt", max_length=256, truncation=True, padding=True) + input_ids = inputs["input_ids"].to(device) + generation_config = GenerationConfig( + temperature=temperature, + top_p=top_p, + top_k=top_k, + num_beams=num_beams, + eos_token_id=tokenizer.eos_token_id, + pad_token_id=tokenizer.pad_token_id, + **kwargs, + ) + with torch.no_grad(): + generation_output = model.generate( + input_ids=input_ids, + generation_config=generation_config, + return_dict_in_generate=True, + output_scores=True, + max_new_tokens=max_new_tokens, + ) + s = generation_output.sequences + output = tokenizer.batch_decode(s, skip_special_tokens=True) + return output + + +def generate_prompt(instruction, input=None): + return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request. + +### Instruction: +{instruction} + +### Response:""" + + +def main( + load_8bit: bool = False, + base_model: str = "Model_Path", + input_data_path = "Input.jsonl", + output_data_path = "Output.jsonl", +): + assert base_model, ( + "Please specify a --base_model, e.g. --base_model='bigcode/starcoder'" + ) + + tokenizer = AutoTokenizer.from_pretrained(base_model) + if device == "cuda": + model = AutoModelForCausalLM.from_pretrained( + base_model, + load_in_8bit=load_8bit, + torch_dtype=torch.float16, + device_map="auto", + ) + elif device == "mps": + model = AutoModelForCausalLM.from_pretrained( + base_model, + device_map={"": device}, + torch_dtype=torch.float16, + ) + + model.config.pad_token_id = tokenizer.pad_token_id + + if not load_8bit: + model.half() + + model.eval() + if torch.__version__ >= "2" and sys.platform != "win32": + model = torch.compile(model) + + input_data = jsonlines.open(input_data_path, mode='r') + output_data = jsonlines.open(output_data_path, mode='w') + + for num, line in enumerate(input_data): + one_data = line + id = one_data["idx"] + instruction = one_data["Instruction"] + print(instruction) + _output = evaluate(instruction, tokenizer, model) + final_output = _output[0].split("### Response:")[1].strip() + new_data = { + "id": id, + "instruction": instruction, + "wizardcoder": final_output + } + output_data.write(new_data) + + +if __name__ == "__main__": + fire.Fire(main) \ No newline at end of file diff --git a/WizardCoder/src/process_humaneval.py b/WizardCoder/src/process_humaneval.py new file mode 100644 index 0000000..1023a09 --- /dev/null +++ b/WizardCoder/src/process_humaneval.py @@ -0,0 +1,69 @@ +from human_eval.data import read_problems, write_jsonl, stream_jsonl +import glob +from tqdm import tqdm +import argparse + +parser = argparse.ArgumentParser() + +# Inputs +parser.add_argument( + '--path', + type=str, + help="") +parser.add_argument( + '--out_path', + type=str, + help="") +parser.add_argument( + '--add_prompt', + action='store_true', + help='') + +args = parser.parse_args() + + +files = sorted(glob.glob(args.path + '/*.jsonl')) +print("{} files in {}".format(len(files), args.path)) + +problems = read_problems() + +output = [] +a = 0 +for code_file in tqdm(files, total=len(files)): + codes = [c for c in 
stream_jsonl(code_file)] + if args.add_prompt: + for code in codes: + task_id = code['task_id'] + prompt = problems[task_id]['prompt'] + completion = code['completion'] + completion = completion.replace("\r", "") + if '```python' in completion: + def_line = completion.index('```python') + completion = completion[def_line:].strip() + completion = completion.replace('```python', '') + # print(completion) + try: + next_line = completion.index('```') + completion = completion[:next_line].strip() + except: + a += 1 + print(completion) + print("================\n") + # print(completion) + if "__name__ == \"__main__\"" in completion: + next_line = completion.index('if __name__ == "__main__":') + completion = completion[:next_line].strip() + # print(completion) + + if "# Example usage" in completion: + # print(completion) + next_line = completion.index('# Example usage') + completion = completion[:next_line].strip() + + code['completion'] = completion + + output += codes + +print("save to {}".format(args.out_path)) +write_jsonl(args.out_path, output) +print(a) \ No newline at end of file diff --git a/WizardCoder/src/train_wizardcoder.py b/WizardCoder/src/train_wizardcoder.py new file mode 100644 index 0000000..245a9ef --- /dev/null +++ b/WizardCoder/src/train_wizardcoder.py @@ -0,0 +1,248 @@ +# Copyright 2023 Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import copy +import logging +import random +from dataclasses import dataclass, field +from typing import Optional, Dict, Sequence + +import torch +import torch.distributed +import transformers +from torch.utils.data import Dataset +from transformers import Trainer +from datasets import load_dataset +import utils + +IGNORE_INDEX = -100 +DEFAULT_PAD_TOKEN = "[PAD]" +DEFAULT_EOS_TOKEN = "<|endoftext|>" +DEFAULT_BOS_TOKEN = "<|endoftext|>" +DEFAULT_UNK_TOKEN = "<|endoftext|>" +PROMPT_DICT = { + "prompt_input": ( + "Below is an instruction that describes a task, paired with an input that provides further context. " + "Write a response that appropriately completes the request.\n\n" + "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:" + ), + "prompt_no_input": ( + "Below is an instruction that describes a task. " + "Write a response that appropriately completes the request.\n\n" + "### Instruction:\n{instruction}\n\n### Response:" + ), +} + + +@dataclass +class ModelArguments: + model_name_or_path: Optional[str] = field(default="bigcode/starcoder") + + +@dataclass +class DataArguments: + data_path: str = field(default=None, metadata={"help": "Path to the training data."}) + + +@dataclass +class TrainingArguments(transformers.TrainingArguments): + cache_dir: Optional[str] = field(default=None) + optim: str = field(default="adamw_torch") + model_max_length: int = field( + default=512, + metadata={"help": "Maximum sequence length. 
Sequences will be right padded (and possibly truncated)."}, + ) + + +def safe_save_model_for_hf_trainer(trainer: transformers.Trainer, output_dir: str): + """Collects the state dict and dump to disk.""" + state_dict = trainer.model.state_dict() + if trainer.args.should_save: + cpu_state_dict = {key: value.cpu() for key, value in state_dict.items()} + del state_dict + trainer._save(output_dir, state_dict=cpu_state_dict) # noqa + + +def smart_tokenizer_and_embedding_resize( + special_tokens_dict: Dict, + tokenizer: transformers.PreTrainedTokenizer, + model: transformers.PreTrainedModel, +): + """Resize tokenizer and embedding. + + Note: This is the unoptimized version that may make your embedding size not be divisible by 64. + """ + num_new_tokens = tokenizer.add_special_tokens(special_tokens_dict) + model.resize_token_embeddings(len(tokenizer)) + + if num_new_tokens > 0: + input_embeddings = model.get_input_embeddings().weight.data + output_embeddings = model.get_output_embeddings().weight.data + + input_embeddings_avg = input_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True) + output_embeddings_avg = output_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True) + + input_embeddings[-num_new_tokens:] = input_embeddings_avg + output_embeddings[-num_new_tokens:] = output_embeddings_avg + + +def _tokenize_fn(strings: Sequence[str], tokenizer: transformers.PreTrainedTokenizer) -> Dict: + """Tokenize a list of strings.""" + tokenized_list = [ + tokenizer( + text, + return_tensors="pt", + padding="longest", + max_length=tokenizer.model_max_length, + truncation=True, + ) + for text in strings + ] + input_ids = labels = [tokenized.input_ids[0] for tokenized in tokenized_list] + input_ids_lens = labels_lens = [ + tokenized.input_ids.ne(tokenizer.pad_token_id).sum().item() for tokenized in tokenized_list + ] + return dict( + input_ids=input_ids, + labels=labels, + input_ids_lens=input_ids_lens, + labels_lens=labels_lens, + ) + + +def preprocess( + sources: Sequence[str], + targets: Sequence[str], + tokenizer: transformers.PreTrainedTokenizer, +) -> Dict: + """Preprocess the data by tokenizing.""" + examples = [s + t for s, t in zip(sources, targets)] + examples_tokenized, sources_tokenized = [_tokenize_fn(strings, tokenizer) for strings in (examples, sources)] + input_ids = examples_tokenized["input_ids"] + labels = copy.deepcopy(input_ids) + for label, source_len in zip(labels, sources_tokenized["input_ids_lens"]): + label[:source_len] = IGNORE_INDEX + return dict(input_ids=input_ids, labels=labels) + + +@dataclass +class DataCollatorForSupervisedDataset(object): + """Collate examples for supervised fine-tuning.""" + + tokenizer: transformers.PreTrainedTokenizer + + def __call__(self, instances: Sequence[Dict]) -> Dict[str, torch.Tensor]: + input_ids, labels = tuple([instance[key] for instance in instances] for key in ("input_ids", "labels")) + input_ids = [torch.tensor(x) for x in input_ids] + input_ids = torch.nn.utils.rnn.pad_sequence( + input_ids, batch_first=True, padding_value=self.tokenizer.pad_token_id + ) + labels = [torch.tensor(x) for x in labels] + labels = torch.nn.utils.rnn.pad_sequence(labels, batch_first=True, padding_value=IGNORE_INDEX) + return dict( + input_ids=input_ids, + labels=labels, + attention_mask=input_ids.ne(self.tokenizer.pad_token_id), + ) + +def train_tokenize_function(examples, tokenizer): + prompt_input, prompt_no_input = PROMPT_DICT["prompt_input"], PROMPT_DICT["prompt_no_input"] + if 'input' in examples: + sources = [ + 
prompt_input.format_map(dict(instruction=instruction, input=input)) if input != "" \ + else prompt_no_input.format_map(dict(instruction=instruction)) \ + for instruction, input in zip(examples['instruction'], examples['input']) + ] + else: + sources = [ + prompt_no_input.format_map(dict(instruction=instruction)) \ + for instruction in examples['instruction'] + ] + targets = [f"{output}{tokenizer.eos_token}" for output in examples['output']] + data_dict = preprocess(sources, targets, tokenizer) + return data_dict + + +def train(): + parser = transformers.HfArgumentParser((ModelArguments, DataArguments, TrainingArguments)) + model_args, data_args, training_args = parser.parse_args_into_dataclasses() + + model = transformers.AutoModelForCausalLM.from_pretrained( + model_args.model_name_or_path, + cache_dir=training_args.cache_dir, + ) + + tokenizer = transformers.AutoTokenizer.from_pretrained( + model_args.model_name_or_path, + cache_dir=training_args.cache_dir, + model_max_length=training_args.model_max_length, + padding_side="right", + use_fast=True, + ) + if tokenizer.pad_token is None: + smart_tokenizer_and_embedding_resize( + special_tokens_dict=dict(pad_token=DEFAULT_PAD_TOKEN), + tokenizer=tokenizer, + model=model, + ) + if "starcoder" in model_args.model_name_or_path: + tokenizer.add_special_tokens( + { + "eos_token": DEFAULT_EOS_TOKEN, + "bos_token": DEFAULT_BOS_TOKEN, + "unk_token": DEFAULT_UNK_TOKEN, + "pad_token": DEFAULT_PAD_TOKEN, + } + ) + + raw_train_datasets = load_dataset('json', data_files=data_args.data_path, split="train", cache_dir=training_args.cache_dir) + if training_args.local_rank > 0: + torch.distributed.barrier() + + train_dataset = raw_train_datasets.map( + train_tokenize_function, + batched=True, + batch_size=3000, + num_proc=32, + remove_columns=raw_train_datasets.column_names, + load_from_cache_file=True, # not args.overwrite_cache + desc="Running tokenizer on train dataset", + fn_kwargs={"tokenizer": tokenizer} + ) + + if training_args.local_rank == 0: + torch.distributed.barrier() + + if training_args.local_rank == 0: + print(len(train_dataset)) + for index in random.sample(range(len(train_dataset)), 3): + print(f"Sample {index} of the training set: {train_dataset[index]}.") + + data_collator = DataCollatorForSupervisedDataset(tokenizer=tokenizer) + data_module = dict(train_dataset=train_dataset, eval_dataset=None, data_collator=data_collator) + + #Tell Trainer not to attempt DataParallel + model.is_parallelizable = True + model.model_parallel = True + + trainer = Trainer(model=model, tokenizer=tokenizer, args=training_args, **data_module) + model.config.use_cache = False + + trainer.train() + trainer.save_state() + safe_save_model_for_hf_trainer(trainer=trainer, output_dir=training_args.output_dir) + + +if __name__ == "__main__": + train() \ No newline at end of file diff --git a/WizardCoder/test b/WizardCoder/test deleted file mode 100644 index 8b13789..0000000 --- a/WizardCoder/test +++ /dev/null @@ -1 +0,0 @@ -