Afrizal Hasbi Azizy
committed on
Update README.md
README.md CHANGED
@@ -19,25 +19,9 @@ language:
 
 Selamat datang!
 
-
+I'm super stoked to announce... the 🦌 Kancil! It's a version of Llama 3 8B fine-tuned on TumpengQA, an instruction dataset of 28 million words. Both the model and the dataset are openly available on Hugging Face.
 
-
-
-Open, free-to-use
-
-🇮🇩 Fluent in Indonesian
-
-That's why I'm proud to announce... the 🦌 Kancil! It's a version of Llama 3 8B fine-tuned on TumpengQA, an instruction dataset of 28 million words. Both the model and the dataset are openly available on Hugging Face.
-
-What makes this model so cool? 🤨
-
-The dataset is synthetically generated from Llama 3 70B. A big problem with existing Indonesian instruction datasets is that they're really badly translated versions of English datasets. Llama 3 70B can generate fluent Indonesian! (with minor caveats)
-
-Llama 3 8B can already respond in Indonesian... but it's highly inconsistent 😭 and needs lots of tedious prompt engineering. This model is highly consistent in responding in Indonesian!
-
-How did I go about it?
-
-Scaling up synthetic data generation! Companies like Microsoft and Meta have realized it is essential for developing LMs. Between this and previous experience creating a Jawa Krama dataset, I've found it surprisingly useful for low- to medium-resource languages.
+The dataset was synthetically generated from Llama 3 70B. A big problem with existing Indonesian instruction datasets is that they're really badly translated versions of English datasets. Llama 3 70B can generate fluent Indonesian! (with minor caveats)
 
 🦌 This was highly inspired by last year's efforts from Merak-7B, a collection of open, fine-tuned Indonesian models. However, Kancil leveraged synthetic data in a very creative way, which sets it apart from Merak!
 
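Since the updated card says both the model and the dataset are openly available on Hugging Face, here is a minimal loading-and-inference sketch using the standard transformers and datasets APIs. The commit doesn't name the exact repository IDs, so the two IDs below are hypothetical placeholders.

```python
# Minimal sketch: load the Kancil model and the TumpengQA dataset from
# Hugging Face. The repo IDs below are PLACEHOLDERS -- the commit does not
# state the exact repository names, so substitute the real ones.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "someuser/Kancil-llama3-8b"  # hypothetical repo ID
DATASET_ID = "someuser/TumpengQA"       # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
dataset = load_dataset(DATASET_ID, split="train")

# Try an Indonesian prompt; replying consistently in Indonesian is the
# point of the fine-tune.
prompt = "Apa itu kancil?"  # "What is a mouse-deer?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```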
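The card credits Llama 3 70B for generating the dataset but doesn't show the pipeline. The sketch below illustrates the general technique it names (a large teacher model prompted to write Indonesian QA pairs), not the actual TumpengQA recipe; the seed topics and prompt wording are invented for illustration, and it assumes the chat-message interface of transformers' text-generation pipeline.

```python
# Sketch of synthetic Indonesian instruction-data generation with a large
# teacher model. This is NOT the actual TumpengQA pipeline (the commit does
# not describe it); seed topics and prompts are illustrative only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-70B-Instruct",  # gated repo; needs access
    device_map="auto",
)

# Illustrative seeds: Indonesian history, cooking recipes, basic science.
seed_topics = ["sejarah Indonesia", "resep masakan", "sains dasar"]

pairs = []
for topic in seed_topics:
    messages = [
        # "Always answer in proper Indonesian."
        {"role": "system",
         "content": "Selalu jawab dalam bahasa Indonesia yang baik dan benar."},
        # "Write one question and its answer about {topic}."
        {"role": "user",
         "content": f"Tulis satu pertanyaan beserta jawabannya tentang {topic}."},
    ]
    out = generator(messages, max_new_tokens=256, do_sample=True, temperature=0.8)
    # The pipeline returns the chat with the assistant turn appended last.
    pairs.append(out[0]["generated_text"][-1]["content"])

print(pairs[0])
```

A real pipeline would typically add deduplication and quality filtering before fine-tuning; the commit doesn't say which of those steps TumpengQA used.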