kyujinpy committed on
Commit ea51d32 · 1 Parent(s): 8c2a5e5

Upload README.md

Files changed (1): README.md (+67, -0)
README.md CHANGED
@@ -1,3 +1,70 @@
---
language:
- ko
datasets:
- kyujinpy/OpenOrca-KO
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

# **Korean-OpenOrca-13B**
![img](./Korean-OpenOrca.png)

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
Korean-OpenOrca-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.

**Repo Link**
GitHub KoT-platypus: (Coming soon...)

**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)

**Training Dataset**
I used [OpenOrca-KO](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO), a Korean version of [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) translated with DeepL.

Training was done on Colab with a single A100 40GB GPU.
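
If you want to inspect the training data, it can be pulled straight from the Hub. A minimal sketch follows; the `train` split name is an assumption based on common Hub conventions, not something this card confirms:

```python
# Minimal sketch: browse the OpenOrca-KO training data from the Hub.
# The "train" split name is an assumption; check the dataset card if it fails.
from datasets import load_dataset

dataset = load_dataset("kyujinpy/OpenOrca-KO", split="train")
print(dataset)     # schema (column names) and row count
print(dataset[0])  # one translated instruction/response example
```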


# **Model Benchmark**

## KO-LLM leaderboard
- Results are tracked on the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

![img](./leaderboard.png)
| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| Korean-OpenOrca-13B (ours) | NaN | NaN | NaN | NaN | NaN | NaN |
| [KoT-Platypus2-13B](https://huggingface.co/kyujinpy/KoT-platypus2-13B) | 49.55 | 43.69 | 53.05 | 42.29 | 43.34 | 65.38 |
| [KO-Platypus2-13B](https://huggingface.co/kyujinpy/KO-Platypus2-13B) | 47.90 | 44.20 | 54.31 | 42.47 | 44.41 | 54.11 |
| [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b) | 46.68 | 42.15 | 54.23 | 38.90 | 40.74 | 57.39 |
| [MarkrAI/kyujin-CoTy-platypus-ko-12.8b](https://huggingface.co/MarkrAI/kyujin-CoTy-platypus-ko-12.8b) | 46.44 | 34.98 | 49.11 | 25.68 | 37.59 | 84.86 |

> Comparison with the top 4 SOTA models (updated 10/09); leaderboard scores for Korean-OpenOrca-13B were not yet available.


# Implementation Code
```python
### Korean-OpenOrca-13B
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/Korean-OpenOrca-13B"

# Load the weights in fp16 and place them automatically across available devices.
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)

# The matching tokenizer is bundled in the same repository.
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
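
The snippet above only loads the model; a minimal generation sketch follows. The plain-text prompt and the sampling parameters are assumptions on my part, since the card does not document a prompt template:

```python
# Minimal generation sketch. The prompt format and sampling settings are
# assumptions; the model card does not specify a prompt template.
prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)

outputs = OpenOrca.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(OpenOrca_tokenizer.decode(outputs[0], skip_special_tokens=True))
```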

---