A bilingual instruction-tuned LoRA model of https://huggingface.co/baichuan-inc/baichuan-7B

- Instruction-following datasets used: alpaca_gpt4_en, alpaca_gpt4_zh, codealpaca
- Training framework: https://github.com/hiyouga/LLaMA-Efficient-Tuning

Please follow the [baichuan-7B License](https://huggingface.co/baichuan-inc/baichuan-7B/resolve/main/baichuan-7B%20%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) to use this model.

Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the tokenizer and model; Baichuan's custom modeling code requires trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained("hiyouga/baichuan-7b-sft", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("hiyouga/baichuan-7b-sft", trust_remote_code=True).cuda()
# Stream generated tokens to stdout, skipping the prompt and special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

query = "晚上睡不着怎么办"  # "What should I do if I can't sleep at night?"
# Prompt template used during fine-tuning (matches --template default below).
template = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    "Human: {}\nAssistant: "
)

inputs = tokenizer([template.format(query)], return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
```
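If you need the response as a string instead of (or in addition to) the streamed output, you can decode the generated ids. A minimal sketch reusing the objects above; the slice simply drops the prompt tokens from the output:

```python
# Keep only the newly generated tokens, then decode them to text.
prompt_length = inputs["input_ids"].shape[-1]
response = tokenizer.batch_decode(generate_ids[:, prompt_length:], skip_special_tokens=True)[0]
print(response)
```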
Alternatively, you can launch a CLI demo using the script from https://github.com/hiyouga/LLaMA-Efficient-Tuning:

```bash
python src/cli_demo.py --template default --model_name_or_path hiyouga/baichuan-7b-sft
```
---

You can reproduce our results with the following training script using [LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Efficient-Tuning):

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --model_name_or_path baichuan-inc/baichuan-7B \
    --do_train \
    --dataset alpaca_gpt4_en,alpaca_gpt4_zh,codealpaca \
    --template default \
    --finetuning_type lora \
    --lora_rank 16 \
    --lora_target W_pack,o_proj,gate_proj,down_proj,up_proj \
    --output_dir baichuan_lora \
    --overwrite_cache \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --gradient_accumulation_steps 8 \
    --preprocessing_num_workers 16 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 100 \
    --eval_steps 100 \
    --learning_rate 5e-5 \
    --max_grad_norm 0.5 \
    --num_train_epochs 2.0 \
    --dev_ratio 0.01 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --plot_loss \
    --fp16
```
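On a single GPU this corresponds to an effective batch size of 8 × 8 = 64. The `--output_dir baichuan_lora` contains only the LoRA adapter weights. Assuming they are saved in the standard PEFT adapter format, a minimal sketch for merging them into the base model for standalone use could look like the following; the output path `baichuan-7b-sft-merged` is illustrative:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and attach the trained LoRA adapter (path is the --output_dir above).
base = AutoModelForCausalLM.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "baichuan_lora")

# Merge the LoRA weights into the base weights and drop the adapter wrappers.
model = model.merge_and_unload()

# Save a standalone checkpoint that can be loaded without PEFT.
model.save_pretrained("baichuan-7b-sft-merged")
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
tokenizer.save_pretrained("baichuan-7b-sft-merged")
```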
Loss curve on training set:

![train](assets/training_loss.svg)

Loss curve on evaluation set:

![eval](assets/eval_loss.svg)