---
license: cc-by-nc-2.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---
**The license is `cc-by-nc-2.0`.**
# **GAI-LLM/ko-en-llama2-13b-mixed-v1**
## Model Details
**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
GAI-LLM/ko-en-llama2-13b-mixed-v1 is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
- We combined open Korean datasets using a mixed strategy (a sketch of such a combination follows this list).
- KOpen-platypus + EverythingLM v2 + jojo0217/korean_rlhf_dataset + SentiNeg + HellaSwag + COPA
- Training used 8 × A100 80GB GPUs.
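
The exact mixing recipe is not published on this card; purely as an illustrative sketch (the hub IDs, field names, and shuffling below are assumptions), two of the sources could be normalized to a shared schema and combined with the `datasets` library:

```python
from datasets import concatenate_datasets, load_dataset

# Hypothetical sketch: hub IDs, field names, and mixing proportions are
# assumptions; the card does not state the actual recipe.
def to_text(example):
    # Flatten each record to a single "text" field so sources share one schema.
    return {"text": example.get("instruction", "") + "\n" + example.get("output", "")}

platypus = load_dataset("kyujinpy/KOpen-platypus", split="train")   # assumed hub ID
rlhf = load_dataset("jojo0217/korean_rlhf_dataset", split="train")

sources = [d.map(to_text, remove_columns=d.column_names) for d in (platypus, rlhf)]
mixed = concatenate_datasets(sources).shuffle(seed=42)
```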
# **Model Benchmark**
## KO-LLM leaderboard
- Results can be followed on the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
# Implementation Code
```python
### GAI-LLM/ko-en-llama2-13b-mixed-v1
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "GAI-LLM/ko-en-llama2-13b-mixed-v1"

# Load the weights in half precision and shard them across available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
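
A minimal generation call might look like the following; the prompt and sampling parameters are illustrative only, not part of the model card.

```python
# Hypothetical usage example: prompt and decoding settings are assumptions.
prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```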
---