Afrizal Hasbi Azizy
committed on
Update README.md
---
tags:
- unsloth
- trl
- sft
- llama3
- llama
- indonesia
license: llama3
datasets:
- catinthebag/TumpengQA
language:
- id
---
<center>
<img src="https://imgur.com/9nG5J1T.png" alt="Kancil" width="600" height="300">
<p><em>Kancil is a fine-tuned version of Llama 3 8B using a synthetic QA dataset generated with Llama 3 70B.</em></p>
</center>

### Introducing the Kancil family of open models

Selamat datang! (Welcome!)

If you're like me, you love models that are:

🤏 Small, but capable!

🔓 Open and free to use

🇮🇩 Fluent in Indonesian

That's why I'm proud to announce... the 🦌 Kancil! It's a fine-tuned version of Llama 3 8B trained on TumpengQA, an instruction dataset of 28 million words. Both the model and the dataset are openly available on Hugging Face.
What makes this model so cool? 🤨

📚 The dataset is synthetically generated with Llama 3 70B. A big problem with existing Indonesian instruction datasets is that they're really badly translated versions of English datasets. Llama 3 70B can generate fluent Indonesian! (with minor caveats 😔)

🔨 Llama 3 8B can already respond in Indonesian... but it's highly inconsistent 😭 and needs lots of tedious prompt engineering. This model responds in Indonesian consistently!

How did I go about it?

✈ Scaling up synthetic data generation! Companies like Microsoft and Meta have realized it is absolutely essential for developing LMs. From this project and previous experience creating a Jawa Krama dataset, I found it surprisingly useful for low- to medium-resource languages.
🦚 This work was highly inspired by last year's Merak-7B, a collection of open, fine-tuned Indonesian models. However, Kancil leverages synthetic data in a very creative way, which makes it unique from Merak!

### Version 0.0

This is the very first working prototype, Kancil V0. It supports basic QA functionality only; you cannot chat with it yet.

This model was fine-tuned with QLoRA using the amazing Unsloth framework! It was built on top of [unsloth/llama-3-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit) and subsequently merged back to 4-bit (no visible difference compared with merging back to fp16).
## Uses

### Direct Use

This model is developed for research purposes, aimed at researchers and general AI hobbyists. However, it has one big application: you can have lots of fun with it!

### Out-of-Scope Use

This is a minimally functional research preview model with no safety curation. Do not use this model for commercial or practical applications.

You are also not allowed to use this model without having fun.
## Getting started

As mentioned, this model was trained with Unsloth. Please use its code for a better experience.

```
# Install dependencies
%%capture
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps "xformers<0.0.26" trl peft accelerate bitsandbytes
```
```
# Load the model
from unsloth import FastLanguageModel
import torch

max_seq_length = 2048  # adjust to your context-length needs

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "catinthebag/Kancil-V0-llama3",
    max_seq_length = max_seq_length,
    dtype = torch.bfloat16,  # will fall back to float16 if bfloat16 is unavailable
    load_in_4bit = True,
)
```
```
# This model was trained on this specific prompt template. Changing it may degrade performance.
prompt_template = """User: {prompt}
Asisten: {response}"""

EOS_TOKEN = tokenizer.eos_token
def formatting_prompts_func(examples):
    inputs = examples["prompt"]
    outputs = examples["response"]
    texts = []
    for input, output in zip(inputs, outputs):
        text = prompt_template.format(prompt=input, response=output) + EOS_TOKEN
        texts.append(text)
    return {"text": texts}
```
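As a quick illustration (not part of the original card), here is a self-contained sketch of what the formatting function produces for one example. The EOS string below is a hypothetical placeholder; in practice it comes from `tokenizer.eos_token`:

```python
# Stand-ins for illustration only: the real template and EOS token are defined above.
prompt_template = """User: {prompt}
Asisten: {response}"""
EOS_TOKEN = "<|end_of_text|>"  # placeholder; use tokenizer.eos_token in practice

def formatting_prompts_func(examples):
    # Join each prompt/response pair into one training string ending with EOS.
    texts = []
    for inp, out in zip(examples["prompt"], examples["response"]):
        texts.append(prompt_template.format(prompt=inp, response=out) + EOS_TOKEN)
    return {"text": texts}

batch = {"prompt": ["Apa itu AI?"], "response": ["AI adalah kecerdasan buatan."]}
print(formatting_prompts_func(batch)["text"][0])
```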
```
# Start generating!
FastLanguageModel.for_inference(model)
inputs = tokenizer(
    [
        prompt_template.format(
            prompt="Apa itu generative AI?",
            response="",
        )
    ], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 128, temperature = 0.8, use_cache = True)
print(tokenizer.batch_decode(outputs)[0])
```
**Note:** There was an issue with the dataset such that newline characters are printed as string literals. Sorry about that!
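As a possible workaround (my suggestion, not part of the original card), you can post-process the generated text to turn literal `\n` sequences back into real newlines:

```python
# Hypothetical fix for the literal-"\n" dataset artifact mentioned above.
def fix_newlines(text: str) -> str:
    # Replace the two-character sequence backslash + "n" with a real newline.
    return text.replace("\\n", "\n")

print(fix_newlines("Baris satu\\nBaris dua"))
```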
## Acknowledgments

- **Developed by:** Afrizal Hasbi Azizy
- **Funded by:** DF Labs (dflabs.id)
- **License:** Llama 3 Community License