---
license: apache-2.0
---
# TinyLlama + Japanese
A TinyLlama 1.1B model continually pretrained on a small collection of Japanese corpora.
### Base Model
[TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T)
### Tokenizer
[elyza/ELYZA-japanese-Llama-2-7b](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b)
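Because the model uses the ELYZA Japanese tokenizer rather than TinyLlama's default, it should be loaded together with the tokenizer files shipped in this repository. A minimal usage sketch with `transformers`, assuming a hypothetical repository id `your-org/tinyllama-1.1b-japanese` (substitute the actual repo):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id -- replace with this model's actual repo.
repo_id = "your-org/tinyllama-1.1b-japanese"

# The tokenizer follows elyza/ELYZA-japanese-Llama-2-7b,
# not the TinyLlama default, so load it from the same repo as the weights.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("日本の首都は", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```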
### Training Dataset
Approximately 9B tokens in total, drawn from the following sources (a loading sketch follows the list):
- izumi-lab/wikipedia-ja-20230720
- if001/oscar_2023_filtered
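
Both identifiers resolve on the Hugging Face Hub, so the corpora can be pulled with the `datasets` library. A minimal sketch, assuming the default configurations and a `train` split:

```python
from datasets import load_dataset

# Training corpora as listed above; default configs and the
# "train" split are assumptions, not confirmed by this card.
wiki_ja = load_dataset("izumi-lab/wikipedia-ja-20230720", split="train")
oscar_ja = load_dataset("if001/oscar_2023_filtered", split="train")

print(len(wiki_ja), len(oscar_ja))
```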
### Validation Dataset
- izumi-lab/wikinews-ja-20230728
- izumi-lab/wikinews-en-20230728
- if001/aozorabunko-clean-sin
### Evaluation
We have not performed a formal evaluation.
### Acknowledgement
We are grateful to those who prepared these valuable datasets and to the developers of [lit-gpt](https://github.com/Lightning-AI/lit-gpt).