Commit f142afe
Parent(s): 83fceca
Update README.md

README.md CHANGED
@@ -15,8 +15,8 @@ inference:
   max_new_tokens: 250
   repetition_penalty: 1.176
 ---
-A pre-trained language model, based on the Mistral 7B model, has been scaled down to approximately 248 million parameters. This model has been trained on
-This model should have a context length of around 32,768 tokens. Safe serialization has been removed due to issues saving model weights.
+A pre-trained language model, based on the Mistral 7B model, has been scaled down to approximately 248 million parameters. This model has been trained on 9,540,864 examples. This model isn't intended for direct use but for fine-tuning on a downstream task.
+This model should have a context length of around 32,768 tokens. Safe serialization has been removed due to issues saving model weights. On the first ~7,500,000 examples of training, the batch size was set to 2. On the rest of the examples, the batch size was bumped up to 1536.
 
 During evaluation on InstructMix, this model achieved an average perplexity score of 6.3. More epochs are planned for this model on different datasets.
 # [Open LLM Leaderboard Evaluation Results (outdated)](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
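The inference block kept by this commit sets `repetition_penalty: 1.176`. As a rough sketch of what that parameter does, here is the CTRL-style rescaling that libraries such as transformers apply to the logits of already-generated tokens (the function name and example values are illustrative, not from this repository):

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.176):
    """Illustrative sketch: CTRL-style repetition penalty on raw logits.

    Tokens already present in `generated_ids` are made less likely:
    positive logits are divided by the penalty, negative logits are
    multiplied by it, so the adjustment always pushes probability
    mass away from repeated tokens.
    """
    out = list(logits)
    for tok in set(generated_ids):
        if out[tok] > 0:
            out[tok] /= penalty
        else:
            out[tok] *= penalty
    return out

# Token 0 (already generated, positive logit) is damped;
# token 1 (already generated, negative logit) is pushed further down;
# token 2 (not yet generated) is left untouched.
print(apply_repetition_penalty([2.0, -1.0, 0.5], generated_ids=[0, 1]))
```

A penalty of 1.0 would be a no-op; values modestly above 1.0, like the 1.176 used here, discourage loops without heavily distorting the distribution.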
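The card reports an average perplexity of 6.3 on InstructMix. Perplexity is simply the exponential of the mean per-token negative log-likelihood, so a model at perplexity 6.3 averages roughly ln(6.3) ≈ 1.84 nats of cross-entropy loss per token. A minimal sketch (the loss values below are hypothetical, not the model's actual evaluation numbers):

```python
import math

def perplexity(nll_per_token):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    return math.exp(sum(nll_per_token) / len(nll_per_token))

# Hypothetical per-token cross-entropy losses (in nats) from an
# evaluation pass; the mean here is close to ln(6.3) ~= 1.84.
losses = [1.9, 1.7, 1.95, 1.8]
print(round(perplexity(losses), 3))
```

Lower is better: a perplexity of 6.3 means the model is, on average, about as uncertain as a uniform choice over 6.3 tokens at each step.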