---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
datasets:
- Open-Orca/OpenOrca
---

# Open Orca Llama 3 8B

- **Fine-tuned on dataset:** https://huggingface.co/datasets/Open-Orca/OpenOrca
- **Step Count:** 1000
- **Batch Size:** 2
- **Gradient Accumulation Steps:** 4
- **Context Size:** 8192
- **Num Examples:** 4,233,923
- **Trainable Parameters:** 41,943,040
- **Learning Rate:** 0.0625
- **Training Loss:** 1.090800
- **Fine-tuned using:** Google Colab Pro (Nvidia L4 runtime)
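With a per-device batch size of 2 and 4 gradient-accumulation steps, each optimizer step covers 8 sequences; over 1000 steps that is 8000 sequences out of the 4,233,923 examples in the dataset. A minimal sketch of the arithmetic:

```python
# Effective batch size and data coverage for the run described above.
per_device_batch_size = 2
gradient_accumulation_steps = 4
step_count = 1000
num_examples = 4_233_923

effective_batch_size = per_device_batch_size * gradient_accumulation_steps
sequences_seen = effective_batch_size * step_count

print(effective_batch_size)  # 8
print(sequences_seen)        # 8000 (a small fraction of the full dataset)
```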

- **Developed by:** akumaburn
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
- **Prompt Format:** Alpaca (https://libertai.io/apis/text-generation/prompting.html)
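The Alpaca format wraps each request in an instruction/response scaffold. A minimal sketch of building such a prompt, assuming the standard Alpaca template (see the prompting link above for the authoritative layout):

```python
# Standard Alpaca prompt layout (assumed template; check the
# prompting guide linked above for the exact format this card expects).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_alpaca_prompt(instruction: str) -> str:
    """Return a completed Alpaca-style prompt for a single instruction."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = build_alpaca_prompt("Summarize the OpenOrca dataset in one sentence.")
print(prompt)
```

The model then generates its answer after the trailing `### Response:` marker.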

Some GGUF quantizations are included as well. For comparison, benchmark results for this model's Q8_0 quantization and several reference models:

mistral-7b-openorca.Q8_0.gguf:
- **MMLU-Test:**     Final result: **41.5836 +/- 0.4174**
- **Arc-Easy:**      Final result: 72.6316 +/- 1.8691
- **Truthful QA:**   Final result: **32.0685 +/- 1.6339**
- **Arc-Challenge:** Final result: **48.8294 +/- 2.8956**

llama-3-8b-bnb-4bit.Q8_0.gguf:
- **MMLU-Test:**     Final result: 40.4074 +/- 0.4156
- **Arc-Easy:**      Final result: 73.8596 +/- 1.8421
- **Truthful QA:**   Final result: 26.6830 +/- 1.5484
- **Arc-Challenge:** Final result: 46.8227 +/- 2.8906

**Open_Orca_Llama-3-8B-unsloth.Q8_0.gguf**:
- **MMLU-Test:**     Final result: 39.3818 +/- 0.4138
- **Arc-Easy:**      Final result: 67.3684 +/- 1.9656
- **Truthful QA:**   Final result: 29.0086 +/- 1.5886
- **Arc-Challenge:** Final result: 42.1405 +/- 2.8604

Meta-Llama-3-8B.Q8_0.gguf:
- **MMLU-Test:**     Final result: 40.8664 +/- 0.4163
- **Arc-Easy:**      Final result: **74.3860 +/- 1.8299**
- **Truthful QA:**   Final result: 28.6414 +/- 1.5826
- **Arc-Challenge:** Final result: 47.1572 +/- 2.8917
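The bolded entries above appear to mark the best mean score per benchmark; this can be cross-checked with a quick sketch over the reported means (error bars omitted):

```python
# Benchmark means copied from the tables above.
results = {
    "mistral-7b-openorca":  {"MMLU": 41.5836, "Arc-Easy": 72.6316,
                             "TruthfulQA": 32.0685, "Arc-Challenge": 48.8294},
    "llama-3-8b-bnb-4bit":  {"MMLU": 40.4074, "Arc-Easy": 73.8596,
                             "TruthfulQA": 26.6830, "Arc-Challenge": 46.8227},
    "Open_Orca_Llama-3-8B": {"MMLU": 39.3818, "Arc-Easy": 67.3684,
                             "TruthfulQA": 29.0086, "Arc-Challenge": 42.1405},
    "Meta-Llama-3-8B":      {"MMLU": 40.8664, "Arc-Easy": 74.3860,
                             "TruthfulQA": 28.6414, "Arc-Challenge": 47.1572},
}

# Report the top-scoring model for each benchmark.
for bench in ("MMLU", "Arc-Easy", "TruthfulQA", "Arc-Challenge"):
    best = max(results, key=lambda m: results[m][bench])
    print(f"{bench}: {best} ({results[best][bench]})")
```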

Llama.cpp options used for testing:

```
--samplers "tfs;typical;temp" --draft 32 --ctx-size 8192 --temp 0.82 --tfs 0.8 --typical 1.1 --repeat-last-n 512 --batch-size 8192 --repeat-penalty 1.0 --n-gpu-layers 100 --threads 12
```

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)