HuggingFace -> Hugging Face
Showing 1 changed file with 3 additions and 3 deletions.
````diff
@@ -80,15 +80,15 @@ Optional dependencies can also be combines with [option1,option2].
 
 # Where to find the models?
 
-You can find llama v2 models on HuggingFace hub [here](https://huggingface.co/meta-llama), where models with `hf` in the name are already converted to HuggingFace checkpoints so no further conversion is needed. The conversion step below is only for original model weights from Meta that are hosted on HuggingFace model hub as well.
+You can find llama v2 models on Hugging Face hub [here](https://huggingface.co/meta-llama), where models with `hf` in the name are already converted to Hugging Face checkpoints so no further conversion is needed. The conversion step below is only for original model weights from Meta that are hosted on Hugging Face model hub as well.
 
 # Model conversion to Hugging Face
 The recipes and notebooks in this folder are using the Llama 2 model definition provided by Hugging Face's transformers library.
 
 Given that the original checkpoint resides under models/7B you can install all requirements and convert the checkpoint with:
 
 ```bash
-## Install HuggingFace Transformers from source
+## Install Hugging Face Transformers from source
 pip freeze | grep transformers ## verify it is version 4.31.0 or higher
 
 git clone git@github.com:huggingface/transformers.git
````
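The hunk above cuts off before the conversion command itself. For context, a minimal sketch of that step, assuming the converter script still lives at `src/transformers/models/llama/convert_llama_weights_to_hf.py` in the cloned repo and the original Meta weights sit under `models/7B`:

```bash
## Hypothetical continuation (not part of this commit's diff):
## install transformers from source, then run the bundled Llama converter.
cd transformers
pip install -e . protobuf  # protobuf is assumed to be needed for tokenizer conversion
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir ../models --model_size 7B --output_dir ../models_hf/7B
```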
````diff
@@ -145,7 +145,7 @@ Here we use FSDP as discussed in the next section which can be used along with P
 
 ## Flash Attention and Xformer Memory Efficient Kernels
 
-Setting `use_fast_kernels` will enable using of Flash Attention or Xformer memory-efficient kernels based on the hardware being used. This would speed up the fine-tuning job. This has been enabled in `optimum` library from HuggingFace as a one-liner API, please read more [here](https://pytorch.org/blog/out-of-the-box-acceleration/).
+Setting `use_fast_kernels` will enable using of Flash Attention or Xformer memory-efficient kernels based on the hardware being used. This would speed up the fine-tuning job. This has been enabled in `optimum` library from Hugging Face as a one-liner API, please read more [here](https://pytorch.org/blog/out-of-the-box-acceleration/).
 
 ```bash
 torchrun --nnodes 1 --nproc_per_node 4 examples/finetuning.py --enable_fsdp --use_peft --peft_method lora --model_name /patht_of_model_folder/7B --fsdp_config.pure_bf16 --output_dir Path/to/save/PEFT/model --use_fast_kernels
````
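For a quick check without multiple GPUs, a hedged single-GPU sketch, assuming `examples/finetuning.py` accepts the same PEFT and `--use_fast_kernels` flags once the FSDP-specific options are dropped:

```bash
## Assumed single-GPU variant: same flags minus the FSDP-specific ones
python examples/finetuning.py --use_peft --peft_method lora \
    --model_name /patht_of_model_folder/7B --output_dir Path/to/save/PEFT/model --use_fast_kernels
```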