From 3c4b2c17128c306a4639acd0159d32da8fd4ec05 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E6=A2=A6=E5=85=B8?= <30826840+dlutsniper@users.noreply.github.com>
Date: Wed, 20 Sep 2023 15:37:01 +0800
Subject: [PATCH 1/2] Update README.md

Fix word pt_lora_model
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 890756d..fe3c9a3 100644
--- a/README.md
+++ b/README.md
@@ -305,7 +305,7 @@ 问题10:会出34B或者70B级别的模型吗?
 问题11:为什么长上下文版模型是16K,不是32K或者100K?
 问题12:为什么Alpaca模型会回复说自己是ChatGPT?
-问题13:为什么pt_lora_mdoel或者sft_lora_model下的adapter_model.bin只有几百k?
+问题13:为什么pt_lora_model或者sft_lora_model下的adapter_model.bin只有几百k?
 ```

From 9b5e8e35baf3a02e9edf31857fd8583629eb6d34 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E6=A2=A6=E5=85=B8?= <30826840+dlutsniper@users.noreply.github.com>
Date: Wed, 20 Sep 2023 15:37:43 +0800
Subject: [PATCH 2/2] Update README_EN.md

Fix word pt_lora_model
---
 README_EN.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README_EN.md b/README_EN.md
index 89ab6db..ad991ff 100644
--- a/README_EN.md
+++ b/README_EN.md
@@ -288,7 +288,7 @@ Question 9: How to interprete the results of third-party benchmarks?
 Question 10: Will you release 34B or 70B models?
 Question 11: Why the long-context model is 16K context, not 32K or 100K?
 Question 12: Why does the Alpaca model reply that it is ChatGPT?
-Question 13: Why is the adapter_model.bin in the pt_lora_mdoel or sft_lora_model folder only a few hundred kb?
+Question 13: Why is the adapter_model.bin in the pt_lora_model or sft_lora_model folder only a few hundred kb?
 ```

 For specific questions and answers, please refer to the project >>> [📚 GitHub Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/faq_en)