---
license: llama3
language:
- ko
- en
pipeline_tag: text-generation
---

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Model Details

This model was fine-tuned with axolotl on Korean and English datasets, both publicly available and internally generated.
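For reference, an axolotl run of this kind is driven by a YAML config. The actual training configuration for this model has not been published; the fragment below is only a hypothetical sketch, and the base model, dataset paths, and hyperparameters are illustrative assumptions.

```yaml
# Hypothetical axolotl config sketch -- the real configuration
# used for this model has not been published.
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer

datasets:
  # placeholder paths, not the actual datasets
  - path: data/korean_instructions.jsonl
    type: alpaca
  - path: data/english_instructions.jsonl
    type: alpaca

sequence_len: 4096
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 2
learning_rate: 2e-5
optimizer: adamw_torch
lr_scheduler: cosine
bf16: true
flash_attention: true
```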


### Score

#### llm_kr_eval
| Metric | Score |
| --- | --- |
| AVG (llm_kr_eval) | 0.4282 |
| EL (Entity Linking) | 0.1264 |
| FA (Fundamental Analysis) | 0.2184 |
| NLI (Natural Language Inference) | 0.5767 |
| QA (Question Answering) | 0.5100 |
| RC (Reading Comprehension) | 0.7096 |
| klue_ner_set_f1 (KLUE Named Entity Recognition, set F1) | 0.1429 |
| klue_re_exact_match (KLUE Relation Extraction, exact match) | 0.1100 |
| kmmlu_preview_exact_match (KMMLU preview, exact match) | 0.4400 |
| kobest_copa_exact_match (KoBEST COPA, exact match) | 0.8100 |
| kobest_hs_exact_match (KoBEST HellaSwag, exact match) | 0.3800 |
| kobest_sn_exact_match (KoBEST SentiNeg, exact match) | 0.9000 |
| kobest_wic_exact_match (KoBEST WiC, exact match) | 0.5800 |
| korea_cg_bleu (Korean CG, BLEU) | 0.2184 |
| kornli_exact_match (KorNLI, exact match) | 0.5400 |
| korsts_pearson (KorSTS, Pearson correlation) | 0.6225 |
| korsts_spearman (KorSTS, Spearman rank correlation) | 0.6064 |
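Assuming the llm_kr_eval convention (shared with llm-jp-eval), the AVG figure appears to be the macro-average of the five category scores, which the reported numbers bear out:

```python
# Reproduce the reported AVG as the mean of the five category scores.
category_scores = {
    "EL": 0.1264,
    "FA": 0.2184,
    "NLI": 0.5767,
    "QA": 0.5100,
    "RC": 0.7096,
}

avg = sum(category_scores.values()) / len(category_scores)
print(round(avg, 4))  # 0.4282, matching the reported AVG
```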

#### LogicKor
| Category | Single-turn avg. | Multi-turn avg. |
| --- | --- | --- |
| Math | 4.43 | 3.71 |
| Understanding | 9.29 | 6.86 |
| Reasoning | 5.71 | 5.00 |
| Writing | 7.86 | 7.43 |
| Coding | 7.86 | 6.86 |
| Grammar | 6.86 | 3.86 |
| Overall single-turn avg. | 7.00 | - |
| Overall multi-turn avg. | - | 5.62 |
| Overall score | - | 6.31 |
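The aggregate rows appear to follow directly from the per-category numbers: each overall average is the mean of the six category scores, and the overall score is the mean of the single-turn and multi-turn averages. A quick check:

```python
# Per-category LogicKor scores in table order:
# Math, Understanding, Reasoning, Writing, Coding, Grammar.
single = [4.43, 9.29, 5.71, 7.86, 7.86, 6.86]
multi = [3.71, 6.86, 5.00, 7.43, 6.86, 3.86]

single_avg = sum(single) / len(single)  # ~7.00
multi_avg = sum(multi) / len(multi)     # ~5.62
overall = (single_avg + multi_avg) / 2  # ~6.31

print(round(single_avg, 2), round(multi_avg, 2), round(overall, 2))
```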

### Built with Meta Llama 3
This model is licensed under the Meta Llama 3 Community License: https://llama.meta.com/llama3/license

### Applications
This fine-tuned model is particularly suited for conversational applications such as chatbots and question-answering systems in Korean and English. Its fine-tuning helps it produce more accurate and contextually appropriate responses in these domains.

### Limitations and Considerations
While our fine-tuning process has optimized the model for specific tasks, it is important to acknowledge its potential limitations. The model's performance can still vary with the complexity of the task and the specifics of the input data. Users are encouraged to evaluate the model thoroughly in their own context to ensure it meets their requirements.

If you use this model, please cite it as follows:

```
@article{Llama3KoCarrot8Bit,
  title={CarrotAI/Llama3-Ko-Carrot-8B-it Card},
  author={CarrotAI (L, GEUN)},
  year={2024},
  url = {https://huggingface.co/CarrotAI/Llama3-Ko-Carrot-8B-it/}
}
```