Update README.md

base_model: unsloth/llama-3-8b-bnb-4bit
---

# Open Orca Llama 3 8B

- **Fine-tuned using dataset:** https://huggingface.co/datasets/Open-Orca/OpenOrca
- **Step Count:** 1000
- **Batch Size:** 2
- **Gradient Accumulation Steps:** 4
- **Context Size:** 8192
- **Num examples:** 4,233,923
- **Trainable Parameters:** 41,943,040
- **Learning Rate:** 0.0625
- **Training Loss:** 1.090800
- **Fine-tuned using:** Google Colab Pro (Nvidia L4 runtime)
- **Developed by:** akumaburn
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
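
For reference, the settings above correspond roughly to the following Unsloth + TRL setup. This is a minimal sketch, not the exact training script: the LoRA rank (16) and target modules are assumptions inferred from the 41,943,040 trainable parameters, and the OpenOrca prompt template is illustrative.

```python
# Sketch of the fine-tuning configuration described above (assumptions noted in comments).
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Base model and context size are taken from the card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=8192,
    load_in_4bit=True,
)

# Assumption: LoRA rank 16 on all attention/MLP projections. On Llama-3-8B this
# gives exactly 41,943,040 trainable parameters, matching the figure above.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# OpenOrca has system_prompt/question/response columns; 4,233,923 examples.
dataset = load_dataset("Open-Orca/OpenOrca", split="train")

def to_text(ex):
    # Hypothetical prompt template; the card does not state the format used.
    return {
        "text": f"{ex['system_prompt']}\n\n"
                f"### Instruction:\n{ex['question']}\n\n"
                f"### Response:\n{ex['response']}{tokenizer.eos_token}"
    }

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=8192,           # Context Size: 8192
    args=TrainingArguments(
        per_device_train_batch_size=2,   # Batch Size: 2
        gradient_accumulation_steps=4,   # effective batch size of 8
        max_steps=1000,                  # Step Count: 1000
        learning_rate=0.0625,            # Learning Rate from the card
        output_dir="outputs",
    ),
)
trainer.train()
```
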
Some GGUF quantizations are included as well.
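
A minimal sketch of running one of the included GGUF files with llama-cpp-python; the quant filename here is hypothetical, so check this repository's file list for the actual names.

```python
# Hedged example: load a GGUF quantization with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="open-orca-llama-3-8b.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,  # matches the training context size above
)

out = llm("### Instruction:\nName the capital of France.\n\n### Response:\n",
          max_tokens=32)
print(out["choices"][0]["text"])
```
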
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)