---
license: cc-by-nc-4.0
language:
- ko
pipeline_tag: text-generation
tags:
- meta
- llama-2
- llama-2-ko
---

## Model Details

**Model Architecture**

urLLM-KO-7B is an auto-regressive language model based on the optimized transformer architecture of Llama-2-7b.

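As a minimal usage sketch, the model can be loaded through the standard `transformers` causal-LM API. The repo id below is a placeholder assumption, not confirmed by this card; substitute the model's actual Hugging Face Hub path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with the model's actual Hub path.
model_id = "urLLM-KO-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 7B weights fit on a single 24 GB GPU in fp16
    device_map="auto",
)

prompt = "대한민국의 수도는"  # "The capital of South Korea is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
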
**Training Corpus**

The model was trained on selected subsets of the Modu Corpus and Korean Wikipedia, totaling approximately 28 GB of Korean text.

**Vocab Expansion**

The tokenizer vocabulary was expanded from Llama-2's original 32,000 tokens to 51,385 to improve coverage of Korean text.

**Model Card Contact**

For errors in or additional questions about this model card, contact [email protected].