MultivexAI committed · verified · Commit 2637163 · 1 Parent(s): 8153ac8

Update README.md

Files changed (1):
  1. README.md +0 -7
README.md CHANGED
@@ -6,13 +6,6 @@ base_model:
  - meta-llama/Llama-3.2-3B-Instruct
  ---
 
- * **Benchmarks**
- ---
- | Model | Average | IFEval | BBH | MATH | GPQA | MMLU-PRO | CO2 Cost (kg) |
- | :---------------------------------------| :------ | :------ | :------ | :------ | :------ | :------- | :------------ |
- | **MultivexAI/Gladiator-Mini-Exp-1211-3B** | **22.22%** | **68.76%** | **22.12%** | **13.44%** | **3.02%** | **23.91%** | **0.60** |
- ---
-
  # Gladiator-Mini-exp-1211
 
  **Gladiator-Mini-exp-1211** is a 3-billion parameter language model designed for **complex reasoning tasks**. This experimental model, based on [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct), offers surprisingly strong analytical capabilities for its size, demonstrating the potential of smaller models to perform well in analytical thinking. We chose to finetune a Llama model because of difficulties finetuning the Qwen 2.5 3B model.
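
The README quoted above names the base model and the repo but includes no usage snippet; the following is a minimal sketch of loading the model with Hugging Face `transformers`, assuming the repo id `MultivexAI/Gladiator-Mini-Exp-1211-3B` from the benchmark table. The generation settings are illustrative and not taken from the original README.

```python
# Minimal usage sketch (assumption, not part of the original README):
# load Gladiator-Mini-exp-1211 with the standard transformers causal-LM API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MultivexAI/Gladiator-Mini-Exp-1211-3B"  # repo id as listed in the benchmark table
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # illustrative choice; fp16/fp32 also work
    device_map="auto",            # requires the `accelerate` package
)

# Llama-3.2-Instruct derivatives ship a chat template with the tokenizer.
messages = [{"role": "user", "content": "Explain why the sum of two odd numbers is even."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```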
 