---
language:
- ko
datasets:
- kyujinpy/Ko-various-dataset
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
# **⭐My custom LLM 13B⭐**
## Model Details
**Model Developers**
- Kyujin Han (kyujinpy)
**Model Architecture**
- My custom LLM 13B is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model**
- [beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b)
**Training Dataset**
- [kyujinpy/Ko-various-dataset](https://huggingface.co/datasets/kyujinpy/Ko-various-dataset)
---
# Model comparisons
> Open Ko-LLM Leaderboard (11/27; [link](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard))
| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| ⭐My custom LLM 13B-v1⭐ | 50.19 | 45.99 | 56.93 | 41.78 | 41.66 | **64.58** |
| ⭐My custom LLM 13B-v2⭐ | NaN | NaN | NaN | NaN | NaN | NaN |
| ⭐My custom LLM 13B-v3⭐ | NaN | NaN | NaN | NaN | NaN | NaN |
# Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "PracticeLLM/Custom-KoLLM-13B-v3"

# Load the model in fp16 and let accelerate place layers across available devices
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
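For reference, here is a minimal generation sketch using the `model` and `tokenizer` loaded above. The prompt and decoding settings are illustrative assumptions, not an official template for this model:

```python
# Illustrative usage only; the prompt format is an assumption, not a prescribed template.
prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```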
---