Commit 71ce048 · 1 Parent(s): 38ec246
YC-Chen committed: Update README.md

Files changed (1): README.md (+93 -0)

README.md CHANGED
@@ -4,10 +4,103 @@ pipeline_tag: text-generation
 
 # Model Card for Breeze-7B-Base-v0.1
 
+ Breeze-7B-Base-v0.1 is a 7-billion-parameter language model built on Mistral-7B and tailored for Traditional Chinese (TC).
+ It expands the original Mistral-7B vocabulary with an additional 30k TC tokens to better adapt to TC, doubling inference speed on TC text relative to the original tokenizer.
+ To the best of our knowledge, this is the first work on vocabulary expansion for TC.
+ The model was further pre-trained on 250GB of TC data.
+ Breeze-7B-Base-v0.1 performs well on both EN and TC benchmarks:
+ it outperforms Taiwan-LLM-7B-v2.1-base, Taiwan-LLM-13B-v2.0-base, and Yi-6B-Base on all TC benchmarks,
+ and is comparable with Mistral-7B-v0.1 on MMLU and MT-Bench in English.
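+
+ As a quick illustration of why the expanded vocabulary speeds up TC generation, the sketch below compares how many tokens each tokenizer needs for the same TC sentence (fewer tokens per character means fewer decoding steps per character). The repo names are taken from this card; the sample sentence is arbitrary.
+
+ ```python
+ from transformers import AutoTokenizer
+
+ # Original Mistral tokenizer vs. the TC-expanded Breeze tokenizer
+ base = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
+ breeze = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Base-v0.1")
+
+ text = "人工智慧正在改變我們的生活。"  # arbitrary Traditional Chinese sample
+
+ for name, tok in [("Mistral-7B", base), ("Breeze-7B", breeze)]:
+     n = len(tok.encode(text, add_special_tokens=False))
+     print(f"{name}: {n} tokens for {len(text)} characters")
+ ```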
 
+ *A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Chang-Le Liu 劉昶樂, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.*
+
+ ## Features
+
+ - Vocabulary expanded for Traditional Chinese: from 32k to 62k tokens (see the verification sketch below)
+ - 8k context length
 
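+
+ Both numbers above can be checked against the released checkpoint. A minimal sketch, assuming the standard `transformers` config field for context length:
+
+ ```python
+ from transformers import AutoConfig, AutoTokenizer
+
+ repo = "MediaTek-Research/Breeze-7B-Base-v0.1"
+
+ # Vocabulary size after the TC expansion (expected: ~62k)
+ tok = AutoTokenizer.from_pretrained(repo)
+ print("vocab size:", len(tok))
+
+ # Context length; `max_position_embeddings` is the usual field in
+ # Mistral-style configs, though it may exceed the advertised 8k window
+ cfg = AutoConfig.from_pretrained(repo)
+ print("max positions:", cfg.max_position_embeddings)
+ ```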
 ## Model Details
 - **Finetuned from:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
 - **Model type:** Causal decoder-only transformer language model
 - **Language:** English and Traditional Chinese (zh-tw)
 
+ ## Performance
+
+ | **[Traditional Chinese Benchmarks]** | TMMLU+ (ACC) | DRCD (EM) | MT-Bench-tw (Score) |
+ |---|---|---|---|
+ | Breeze-7B-Base-v0.1 | | | |
+ | Breeze-7B-Instruct-v0.1 | | | |
+ | mistralai/Mistral-7B-v0.1 | | | |
+ | mistralai/Mistral-7B-Instruct-v0.1 | | | |
+ | yentinglin/Taiwan-LLM-7B-v2.1-base | | | |
+ | yentinglin/Taiwan-LLM-7B-v2.1-chat | | | |
+ | yentinglin/Taiwan-LLM-13B-v2.0-base | | | |
+ | yentinglin/Taiwan-LLM-13B-v2.0-chat | | | |
+ | 01-ai/Yi-6B-Base | | | |
+ | 01-ai/Yi-6B-Chat | | | |
+ | 01-ai/Yi-34B-Base | | | |
+ | 01-ai/Yi-34B-Chat | | | |
+ | Qwen/Qwen-7B | | | |
+ | Qwen/Qwen-7B-Chat | | | |
+ | Qwen/Qwen-14B | | | |
+ | Qwen/Qwen-14B-Chat | | | |
+ | gpt-3.5-turbo-0613 | | | |
+
+ | **[English Benchmarks]** | MMLU (ACC) | MT-Bench (Score) |
+ |---|---|---|
+ | Breeze-7B-Base-v0.1 | | |
+ | Breeze-7B-Instruct-v0.1 | | |
+ | mistralai/Mistral-7B-v0.1 | | |
+ | mistralai/Mistral-7B-Instruct-v0.1 | | |
+ | yentinglin/Taiwan-LLM-7B-v2.1-base | | |
+ | yentinglin/Taiwan-LLM-7B-v2.1-chat | | |
+ | yentinglin/Taiwan-LLM-13B-v2.0-base | | |
+ | yentinglin/Taiwan-LLM-13B-v2.0-chat | | |
+ | 01-ai/Yi-6B-Base | | |
+ | 01-ai/Yi-6B-Chat | | |
+ | 01-ai/Yi-34B-Base | | |
+ | 01-ai/Yi-34B-Chat | | |
+ | Qwen/Qwen-7B | | |
+ | Qwen/Qwen-7B-Chat | | |
+ | Qwen/Qwen-14B | | |
+ | Qwen/Qwen-14B-Chat | | |
+ | gpt-3.5-turbo-0613 | | |
+
+ | **[Inference Speed on Traditional Chinese]** | Speed (char/sec) |
+ |---|---|
+ | Breeze-7B-Base-v0.1 | |
+ | mistralai/Mistral-7B-v0.1 | |
+ | yentinglin/Taiwan-LLM-7B-v2.1-base | |
+ | yentinglin/Taiwan-LLM-13B-v2.0-base | |
+ | 01-ai/Yi-6B | |
+ | 01-ai/Yi-34B | |
+ | Qwen/Qwen-7B | |
+ | Qwen/Qwen-14B | |
+
+ ## Use in Transformers
+
+ First install the direct dependencies:
+ ```bash
+ pip install transformers torch accelerate
+ ```
+ If you want faster inference using flash-attention2, install these dependencies as well:
+ ```bash
+ pip install packaging ninja
+ pip install flash-attn --no-build-isolation
+ ```
+ Then load the model in transformers:
+ ```python
+ from transformers import AutoModelForCausalLM
+ import torch
+
+ model = AutoModelForCausalLM.from_pretrained(
+     "MediaTek-Research/Breeze-7B-Base-v0.1",  # model name is positional, not `model=`
+     device_map="auto",
+     torch_dtype=torch.bfloat16,
+     attn_implementation="flash_attention_2",  # optional; requires flash-attn
+ )
+ ```
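+
+ A short usage sketch, continuing from the snippet above: it generates from a TC prompt and times the run in characters per second, the metric used in the inference-speed table. The prompt and generation settings here are illustrative, not from the original card.
+
+ ```python
+ import time
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Base-v0.1")
+
+ prompt = "台灣最高的山是"  # illustrative TC prompt
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+
+ start = time.time()
+ outputs = model.generate(**inputs, max_new_tokens=128)
+ elapsed = time.time() - start
+
+ # Decode only the newly generated tokens, then report chars/sec
+ text = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
+ print(text)
+ print(f"{len(text) / elapsed:.1f} chars/sec")
+ ```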