---
license: llama2
datasets:
- huangyt/FINETUNE2_TEST
---
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

Fine-tuned from llama-2-13b on the huangyt/FINETUNE2_TEST dataset, about 170k records in total.

# Fine-Tuning Information
- **GPU:** RTX 4090 (single GPU / 24564 MiB)
- **model:** meta-llama/Llama-2-13b-hf
- **dataset:** huangyt/FINETUNE2_TEST (about 22k training examples)
- **peft_type:** LoRA
- **lora_target:** q_proj, v_proj
- **per_device_train_batch_size:** 8
- **gradient_accumulation_steps:** 8
- **learning_rate:** 5e-5
- **epoch:** 1
- **precision:** bf16
- **quantization:** load_in_4bit

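As a quick sanity check on the settings above, the effective (global) batch size follows from `per_device_train_batch_size` and `gradient_accumulation_steps` (assuming the single GPU listed):

```python
# Effective batch size implied by the fine-tuning settings above,
# assuming a single GPU as listed in the hardware line.
per_device_train_batch_size = 8
gradient_accumulation_steps = 8
num_gpus = 1  # single RTX 4090

effective_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_gpus
)
print(effective_batch_size)  # 64
```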
# Fine-Tuning Detail
- **train_loss:** 0.567
- **train_runtime:** 2:47:57 (with deepspeed)

# Evaluation
- Evaluation results come from **HuggingFaceH4/open_llm_leaderboard**
- Compared against Llama-2-13b on four benchmarks: **ARC**, **HellaSwag**, **MMLU**, and **TruthfulQA**

| Model                                                  |Average| ARC   |HellaSwag| MMLU  |TruthfulQA|
|--------------------------------------------------------|-------|-------|---------|-------|----------|
|meta-llama/Llama-2-13b-hf                               | 56.90 | 58.11 | 80.97   | 54.34 | 34.17    |
|meta-llama/Llama-2-13b-chat-hf                          | 59.93 | 59.04 | 81.94   | 54.64 | 44.12    |
|CHIH-HUNG/llama-2-13b-Fintune_1_17w                     | 58.24 | 59.47 | 81.00   | 54.31 | 38.17    |
|CHIH-HUNG/llama-2-13b-huangyt_Fintune_1_17w-q_k_v_o_proj| 58.49 | 59.73 | 81.06   | 54.53 | 38.64    |
|CHIH-HUNG/llama-2-13b-Fintune_1_17w-gate_up_down_proj   | 58.81 | 57.17 | 82.26   | 55.89 | 39.93    |

# How to convert the dataset to JSON

- Pass the dataset name to **load_dataset**, and use **take** to select how many leading records to fetch
- Check the dataset's column names and fill them into the **example** fields (e.g. system_prompt, question, response)
- Finally, specify where to save the JSON file (**json_filename**)

```py
import json
from datasets import load_dataset

# Load the dataset; with streaming=True, .take(n) can limit it to the first n records
dataset = load_dataset("huangyt/FINETUNE2_TEST", split="train", streaming=True)
# dataset = dataset.take(1000)  # optional: keep only the first 1000 records

# Extract the required fields into a new list of dicts
extracted_data = []
for example in dataset:
    extracted_example = {
        "instruction": example["instruction"],
        "input": example["input"],
        "output": example["output"]
    }
    extracted_data.append(extracted_example)

# Name of the output JSON file
json_filename = "FINETUNE2_TEST.json"

# Write the JSON file
with open(json_filename, "w", encoding="utf-8") as json_file:
    json.dump(extracted_data, json_file, ensure_ascii=False, indent=4)

print(f"Data extracted and saved to {json_filename}")
```
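A minimal round-trip check of the export format used above: write a small sample in the same `{"instruction", "input", "output"}` shape, then read it back (the file name `sample_check.json` here is arbitrary):

```python
import json

# One record in the same shape as the exported training data
sample = [{"instruction": "Say hi", "input": "", "output": "Hi!"}]

# Write it the same way the export script does
with open("sample_check.json", "w", encoding="utf-8") as f:
    json.dump(sample, f, ensure_ascii=False, indent=4)

# Read it back and confirm the round trip is lossless
with open("sample_check.json", "r", encoding="utf-8") as f:
    loaded = json.load(f)

print(loaded == sample)  # True
```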