Update README.md
README.md
@@ -9,6 +9,8 @@ tags:
 license: apache-2.0
 language:
 - en
+datasets:
+- HuggingFaceH4/MATH-500
 ---
 ## Model Introduction
 This early experimental model uses a unique, advanced form of supervised tuning. The training program loads the model, then loads the data from the dataset; the data is provided to the model at inference time, and the LLM is then trained.
@@ -53,4 +55,4 @@ We use this as evaluator.
 
 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
 
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
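
The training flow the introduction describes (load the model, load the dataset, then run supervised tuning on it) can be sketched in miniature. The block below is purely illustrative and is not the model's actual training code — the real pipeline uses Unsloth and TRL, and `load_corpus` is a hypothetical stand-in for the dataset-loading step. It shows the same three stages with a toy bigram next-token "model" fitted by counting supervised (token, next-token) pairs.

```python
from collections import Counter, defaultdict

def load_corpus():
    # Hypothetical stand-in for the real dataset-loading step.
    return ["the model trains on pairs", "the model predicts tokens"]

def train(corpus):
    # Supervised step: each (token, next_token) pair acts as an
    # (input, label) example; "training" here is just counting.
    counts = defaultdict(Counter)
    for line in corpus:
        tokens = line.split()
        for cur, nxt in zip(tokens, tokens[1:]):
            counts[cur][nxt] += 1
    return counts

def predict(model, token):
    # Inference: return the most frequent continuation seen in training.
    return model[token].most_common(1)[0][0]

model = train(load_corpus())
print(predict(model, "the"))  # both corpus lines continue "the" with "model"
```

A real supervised-tuning run replaces the counting with gradient updates on the LLM's weights, but the load-data/fit/predict shape is the same.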