---
license: apache-2.0
language:
- ind
datasets:
- uonlp/CulturaX
tags:
- t5
---

## IndoNanoT5 Base

IndoNanoT5 Base is an Indonesian sequence-to-sequence language model based on the [T5](https://arxiv.org/abs/1910.10683) architecture. We pre-trained it on [uonlp/CulturaX](https://huggingface.co/datasets/uonlp/CulturaX), an open-source Indonesian corpus. On a held-out subset of the corpus, the model achieved an evaluation loss of 2.082, corresponding to a perplexity of about 8.02.
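
Perplexity here is simply the exponential of the evaluation cross-entropy loss, assuming a mean per-token loss in nats; a quick check of the reported numbers:

```python
import math

# Perplexity as the exponential of the mean per-token cross-entropy loss (in nats).
eval_loss = 2.082
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))  # ~8.02
```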

This model was trained using the [nanoT5](https://github.com/PiotrNawrot/nanoT5) PyTorch framework. All training was done on an NVIDIA H100 GPU. [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) is released under the Apache 2.0 license.

## Model Details

- **Developed by**: [LazarusNLP](https://lazarusnlp.github.io/)
- **Model type**: Encoder-decoder T5 transformer language model
- **Language(s)**: Indonesian
- **License**: [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html)
- **Contact**: [Wilson Wongso](https://wilsonwongso.dev/)

## Use in 🤗Transformers

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_checkpoint = "LazarusNLP/IndoNanoT5-base"

tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
```
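
Continuing from the snippet above, a minimal generation sketch. Note that this is a pre-trained base checkpoint (no instruction tuning), so it is intended to be fine-tuned on downstream tasks; the Indonesian input below is only a hypothetical placeholder.

```python
# Hypothetical usage sketch: encode an Indonesian input and decode the model's output.
# Raw generations from the base checkpoint are illustrative only; fine-tune for real tasks.
inputs = tokenizer("Jakarta adalah ibu kota Indonesia.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```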

## Training Datasets

Around 4B tokens from the following corpus were used during pre-training.

- [Cleaned, Enormous, and Public: The Multilingual Fuel to Democratize Large Language Models for 167 Languages](https://huggingface.co/datasets/uonlp/CulturaX)
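
The Indonesian portion of CulturaX can be streamed directly from the Hub. A minimal sketch, assuming the dataset exposes per-language configs (the `"id"` config name and the `"text"` field are assumptions) and that you have accepted the dataset's access terms:

```python
from datasets import load_dataset

# Stream the (assumed) Indonesian config of CulturaX without downloading it in full.
dataset = load_dataset("uonlp/CulturaX", "id", split="train", streaming=True)

sample = next(iter(dataset))
print(sample["text"][:200])  # first 200 characters of the first document
```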

## Training Hyperparameters

The following hyperparameters were used during training (a rough PyTorch equivalent is sketched after the list):

- `total_steps`: 65536
- `input_length`: 512
- `batch_size`: 128
- `grad_acc`: 1
- `base_lr`: 5e-3
- `optimizer`: AdamWScaled with `betas=(0.9,0.999)` and `epsilon=1e-08`
- `weight_decay`: 0.0
- `lr_scheduler`: cosine
- `warmup_steps`: 10000
- `final_cosine`: 1e-5
- `grad_clip`: 1.0
- `precision`: `bf16`
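
As a rough illustration only: `AdamWScaled` is nanoT5's scaled variant of AdamW, so the sketch below substitutes plain `torch.optim.AdamW` and approximates the schedule with a linear warmup followed by a cosine decay from `base_lr` down to `final_cosine`; the exact nanoT5 implementation may differ.

```python
import math

import torch

# `model` is the seq2seq model loaded earlier; plain AdamW stands in for nanoT5's AdamWScaled.
base_lr, final_lr = 5e-3, 1e-5
total_steps, warmup_steps = 65536, 10000

optimizer = torch.optim.AdamW(
    model.parameters(), lr=base_lr, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0
)

def lr_lambda(step: int) -> float:
    # Linear warmup, then cosine decay from base_lr down to final_lr.
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * min(1.0, progress)))
    return (final_lr + (base_lr - final_lr) * cosine) / base_lr

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# Gradient clipping as listed above (grad_clip = 1.0), applied each step before optimizer.step():
# torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
```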

## Acknowledgements

We would like to acknowledge [nanoT5](https://github.com/PiotrNawrot/nanoT5) for inspiring this project.

## Credits

BhinnekaLM is developed with love by:

<div style="display: flex;">
<a href="https://github.com/anantoj">
    <img src="https://github.com/anantoj.png" alt="GitHub Profile" style="border-radius: 50%;width: 64px;margin:0 4px;">
</a>

<a href="https://github.com/DavidSamuell">
    <img src="https://github.com/DavidSamuell.png" alt="GitHub Profile" style="border-radius: 50%;width: 64px;margin:0 4px;">
</a>

<a href="https://github.com/stevenlimcorn">
    <img src="https://github.com/stevenlimcorn.png" alt="GitHub Profile" style="border-radius: 50%;width: 64px;margin:0 4px;">
</a>

<a href="https://github.com/w11wo">
    <img src="https://github.com/w11wo.png" alt="GitHub Profile" style="border-radius: 50%;width: 64px;margin:0 4px;">
</a>
</div>