MultivexAI committed
Commit 63a192d · verified · Parent: 8e88772

Update README.md

Files changed (1)
  1. README.md +6 -4
README.md CHANGED
@@ -19,20 +19,22 @@ tags:
 **The V2 version performs better in multiple benchmarks than the V1 version, and even outmatches both 1211 and V1 in mathematics: https://huggingface.co/MultivexAI/Gladiator-Mini-Exp-1221-3B-Instruct**
 
 **Examples:**
-
+ ----------------
 **MATH**
 **1211: 13.44 %**
 **V1: 13.07 %**
 **V2: 13.75 %**
-
+ ----------------
+ ----------------
 **IFEval**
 **V1: 60.79 %**
 **V2: 62.15 %**
-
+ ----------------
+ ----------------
 **BBH**
 **V1: 20.40 %**
 **V2: 20.65 %**
-
+ ----------------
 # Gladiator-Mini-exp-1221-Instruct
 
 **Gladiator-Mini-exp-1221** is a 3-billion parameter language model focused on **complex reasoning**. Built upon the foundation of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct), this experimental model is designed to explore what's achievable with smaller models in analytical thinking. It's all about pushing boundaries and learning what's possible in resource-efficient AI. We believe small models represent the future of open source language models, making AI more accessible and adaptable for a wider range of users and applications.
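
Since the README above describes an instruct-tuned Llama-3.2-3B derivative, a minimal usage sketch may be helpful. This is an assumption-laden illustration, not part of the commit: it presumes the model loads with the standard Hugging Face `transformers` chat API, and it reuses the V2 repo id linked in the README (`MultivexAI/Gladiator-Mini-Exp-1221-3B-Instruct`) as a stand-in — substitute the repo you actually intend to load.

```python
# Minimal sketch, assuming the model follows the standard Llama-3.2 instruct
# chat format and is loadable with transformers. The repo id below is taken
# from the V2 link in the README and is an assumption, not a confirmed recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MultivexAI/Gladiator-Mini-Exp-1221-3B-Instruct"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "Briefly explain what the IFEval benchmark measures."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```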