Weyaxi committed · verified
Commit 848c8d1 · 1 Parent(s): 308e24c

update model card

Files changed (1):
  1. README.md +124 -53

README.md CHANGED
@@ -1,18 +1,33 @@
  ---
- license: apache-2.0
- base_model: meta-math/MetaMath-Mistral-7B
  tags:
  - axolotl
- - generated_from_trainer
- model-index:
- - name: EulerMath-Mistral-7B
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
  <details><summary>See axolotl config</summary>

  axolotl version: `0.4.0`
@@ -92,66 +107,122 @@ special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
-
  ```

  </details><br>

- # EulerMath-Mistral-7B

- This model is a fine-tuned version of [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.1956

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 5e-06
- - train_batch_size: 2
- - eval_batch_size: 2
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 9
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 72
- - total_eval_batch_size: 18
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 10
- - num_epochs: 2

- ### Training results

- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 0.707         | 0.0   | 1    | 0.9061          |
- | 0.3011        | 0.25  | 68   | 0.3263          |
- | 0.2585        | 0.5   | 136  | 0.2836          |
- | 0.2352        | 0.75  | 204  | 0.2544          |
- | 0.2192        | 1.0   | 272  | 0.2268          |
- | 0.1527        | 1.23  | 340  | 0.2144          |
- | 0.1452        | 1.48  | 408  | 0.2032          |
- | 0.144         | 1.73  | 476  | 0.1970          |
- | 0.1441        | 1.98  | 544  | 0.1956          |

- ### Framework versions

- - Transformers 4.38.2
- - Pytorch 2.1.2+cu118
- - Datasets 2.18.0
- - Tokenizers 0.15.0
 
  ---
+ license: other
  tags:
+ - math
+ - alpaca
+ - synthetic data
+ - instruct
  - axolotl
+ - finetune
+ - gpt4
+ datasets:
+ - TIGER-Lab/MathInstruct
+ - microsoft/orca-math-word-problems-200k
+ language:
+ - en
+ base_model: meta-math/MetaMath-Mistral-7B
  ---
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/jsw9mC64I69A_KwX0c6oi.png)

+ # 🔢 EulerMath-Mistral-7B
+
+ This model is a full fine-tuned version of [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B) on the following datasets:
+
+ - 🧮 [TIGER-Lab/MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
+ - 📐 [microsoft/orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k)
+
+ This model was fine-tuned on `8xRTX3090` + `1xRTXA6000` using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
+
+ This model's training was sponsored by [sablo.ai](https://sablo.ai).

  <details><summary>See axolotl config</summary>

  axolotl version: `0.4.0`
 
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"

  ```

  </details><br>

+ # 💬 Prompt Template
+
+ You can use this prompt template while using the model:
+
+ ### Alpaca
+
+ ```
+ Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+ ### Instruction:
+ {instruction}
+
+ ### Response:
+ ```
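
As an illustrative sketch (the helper name and the example instruction are my own, not from the card), the template above can be filled in programmatically before tokenization:

```python
# Hypothetical helper for filling the Alpaca template shown above.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Return the full Alpaca-formatted prompt for one instruction."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

# Example (hypothetical instruction):
prompt = build_prompt("What is 12 * 7?")
print(prompt)
```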

+ This prompt template is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method:

+ ```python
+ messages = [
+     {"role": "system", "content": "You are a helpful AI assistant."},
+     {"role": "user", "content": "Hello!"}
+ ]
+ gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
+ model.generate(gen_input)
+ ```

+ # 🔄 Quantized versions

+ Quantized versions of this model are currently not available. They will be available soon :)

+ # 🎯 Evaluation Results

+ This model's strict-match evaluation results are low because of the strict answer-format requirements of the GSM8K eval harness. I evaluated this model using [tinyGSM8k](https://huggingface.co/datasets/tinyBenchmarks/tinyGSM8k), a streamlined subset of 100 data points from the GSM8K dataset that enables efficient evaluation of large language models with reduced computational resources.

+ The results are as follows:

+ ```json
+ {
+     "exact_match,strict-match": 0.02,
+     "exact_match_stderr,strict-match": 0.014070529413628952,
+     "exact_match,flexible-extract": 0.73,
+     "exact_match_stderr,flexible-extract": 0.04461960433384741,
+     "alias": "gsm8k"
+ }
+ ```

+ As the results show, this model's answers often do not match the format required for `strict-match` scoring, but the answers themselves are actually correct. As indicated by the `flexible-extract` score, the model is quite proficient at math.
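
For intuition, the two scoring modes can be approximated as follows; this is an illustrative sketch, not the harness's exact implementation:

```python
import re

# strict-match expects the GSM8K "#### <number>" answer marker;
# flexible-extract takes the last number found anywhere in the response.
def strict_match(response: str) -> str:
    m = re.search(r"#### (-?[0-9.,]+)", response)
    return m.group(1) if m else "[invalid]"

def flexible_extract(response: str) -> str:
    nums = re.findall(r"-?\d[\d,]*\.?\d*", response)
    return nums[-1] if nums else "[invalid]"

# The model's answer below ends with "The answer is 29" instead of "#### 29",
# so strict-match rejects it while flexible-extract recovers the correct value.
response = "The total cost of the order is $20 + $4 + $5 = $29.\nThe answer is 29"
print(strict_match(response))      # "[invalid]"
print(flexible_extract(response))  # "29"
```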
+
+ <details><summary>More details with examples</summary>
+
+ ```json
+ {
+     "doc_id": 0,
+     "doc": {
+         "question": "Rory orders 2 subs for $7.50 each, 2 bags of chips for $1.50 each and 2 cookies for $1.00 each for delivery. There’s a 20% delivery fee added at check out and she wants to add a $5.00 tip. What will her delivery order cost?",
+         "answer": "2 subs are $7.50 each so that’s 2*7.50 = $<<2*7.5=15.00>>15.00\n2 bags of chips are $1.50 each so that’s 2*1.50 = $<<2*1.50=3.00>>3.00\n2 cookies are $1.00 each so that’s 2*1 = $<<2*1=2.00>>2.00\nHer delivery order will be 15+3+2= $<<15+3+2=20.00>>20.00\nThere’s a 20% delivery fee on the $20.00 which adds .20*20 = $4.00 to her bill\nThe delivery order is $20.00, there’s a $4.00 delivery fee and she adds a $5.00 tip for a total of 20+4+5 = $<<20+4+5=29.00>>29.00\n#### 29",
+         "input_formatted": "Question: Bridgette has 2 dogs, 3 cats, and 4 birds. She gives the dogs a bath twice a month. She gives the cats a bath once a month. She gives the birds a bath once every 4 months. In a year, how many baths does she give?\nAnswer: Each dog gets 24 baths a year because 2 x 12 = <<2*12=24>>24\nEach cat gets 12 baths a year because 1 x 12 = <<1*12=12>>12\nEach bird averages .25 baths per month because 1 / 4 = <<1/4=.25>>.25\nEach bird gets 3 baths a year because .25 x 12 = <<.25*12=3>>3\nShe gives 48 dog baths because 2 x 24 = <<2*24=48>>48\nShe gives 72 cat baths because 3 x 12 = 36\nShe gives 12 bird baths a year because 4 x 3 = <<4*3=12>>12\nShe gives 132 baths a year because 48 + 36+ 12 = <<48+36+12=96>>96\n#### 96\n\nQuestion: There are 3 numbers that are consecutive integers. Together they have a sum of 18. What is the largest of the 3 numbers?\nAnswer: Let N = smallest number\nN + 1 = next number\nN + 2 = largest number\nN + (N + 1) + (N + 2) = 18\n3N + 3 = 18\n3N = <<3*5=15>>15\nN = <<5=5>>5\nThe largest number is <<7=7>>7.\n#### 7\n\nQuestion: Betsy won 5 games of Monopoly. Helen won twice as many as Betsy and Susan won three times as many as Betsy. Between them, how many games have they won?\nAnswer: Helen won twice as many games as Betsy's 5 so Helen won 2*5 = <<10=10>>10 games\nSusan won three times as many games as Betsy's 5 so Susan won 3*5 = <<3*5=15>>15 games\nWhen you combine their wins, together they won 5+10+15 = <<5+10+15=30>>30 games total\n#### 30\n\nQuestion: Two friends, Hubert and Ian, are planning to have a pizza party. One box of pizza is worth $14, and a can of soda is worth $1.80. Hubert orders eight boxes of pizza and ten cans of soda. Ian buys ten boxes of pizza and fifteen cans of soda. How much do they spend in all?\nAnswer: The number of boxes of pizza is 8 + 10 = <<8+10=18>>18 boxes.\nThe number of cans of soda is 10 + 15 = <<10+15=25>>25 cans.\nThe eighteen boxes of pizza cost 18 x $14= $<<18*14=252>>252.\nThe cost of 25 cans of soda is 25 x $1.80= $<<25*1.8=45>>45.\nTherefore, the total amount they spend is $252 + $45 = $<<252+45=297>>297\n#### 297\n\nQuestion: Greg drives 30 miles from his workplace to the farmer's market. After buying his groceries at the farmers market, he drives home. To get home, he travels for 30 minutes at 20 miles per hour. How many miles in total does Greg travel?\nAnswer: We must first convert minutes to hours, so 30 minutes * (1 hour/60 minutes) = <<30*(1/60)=0.5>>0.5 hours\nThe number of miles Greg travels on his trip home is 0.5 hours * 20 mph = <<0.5*20=10>>10 miles\nThe total miles Greg travels is 10 + 30 = <<10+30=40>>40 miles\n#### 40\n\nQuestion: Rory orders 2 subs for $7.50 each, 2 bags of chips for $1.50 each and 2 cookies for $1.00 each for delivery. There’s a 20% delivery fee added at check out and she wants to add a $5.00 tip. What will her delivery order cost?\nAnswer:"
+     },
+     "target": "2 subs are $7.50 each so that’s 2*7.50 = $<<2*7.5=15.00>>15.00\n2 bags of chips are $1.50 each so that’s 2*1.50 = $<<2*1.50=3.00>>3.00\n2 cookies are $1.00 each so that’s 2*1 = $<<2*1=2.00>>2.00\nHer delivery order will be 15+3+2= $<<15+3+2=20.00>>20.00\nThere’s a 20% delivery fee on the $20.00 which adds .20*20 = $4.00 to her bill\nThe delivery order is $20.00, there’s a $4.00 delivery fee and she adds a $5.00 tip for a total of 20+4+5 = $<<20+4+5=29.00>>29.00\n#### 29",
+     "arguments": [
+         [
+             "Question: Jen and Tyler are gymnasts practicing flips. Jen is practicing the triple-flip while Tyler is practicing the double-flip. Jen did sixteen triple-flips during practice. Tyler flipped in the air half the number of times Jen did. How many double-flips did Tyler do?\nAnswer: Jen did 16 triple-flips, so she did 16 * 3 = <<16*3=48>>48 flips.\nTyler did half the number of flips, so he did 48 / 2 = <<48/2=24>>24 flips.\nA double flip has two flips, so Tyler did 24 / 2 = <<24/2=12>>12 double-flips.\n#### 12\n\nQuestion: Four people in a law firm are planning a party. Mary will buy a platter of pasta for $20 and a loaf of bread for $2. Elle and Andrea will split the cost for buying 4 cans of soda which cost $1.50 each, and chicken wings for $10. Joe will buy a cake that costs $5. How much more will Mary spend than the rest of the firm put together?\nAnswer: Mary will spend $20 + $2 = $<<20+2=22>>22.\nElle and Andrea will spend $1.5 x 4 = $<<1.5*4=6>>6 for the soda.\nElle and Andrea will spend $6 + $10 = $<<6+10=16>>16 for the soda and chicken wings.\nElle, Andrea, and Joe together will spend $16 + $5 = $<<16+5=21>>21.\nSo, Mary will spend $22 - $21 = $<<22-21=1>>1 more than all of them combined.\n#### 1\n\nQuestion: A charcoal grill burns fifteen coals to ash every twenty minutes of grilling. The grill ran for long enough to burn three bags of coals. Each bag of coal contains 60 coals. How long did the grill run?\nAnswer: The grill burned 3 * 60 = <<3*60=180>>180 coals.\nIt takes 20 minutes to burn 15 coals, so the grill ran for 180 / 15 * 20 = <<180/15*20=240>>240 minutes.\n#### 240\n\nQuestion: A bear is preparing to hibernate for the winter and needs to gain 1000 pounds. At the end of summer, the bear feasts on berries and small woodland animals. During autumn, it devours acorns and salmon. It gained a fifth of the weight it needed from berries during summer, and during autumn, it gained twice that amount from acorns. Salmon made up half of the remaining weight it had needed to gain. How many pounds did it gain eating small animals?\nAnswer: The bear gained 1 / 5 * 1000 = <<1/5*1000=200>>200 pounds from berries.\nIt gained 2 * 200 = <<2*200=400>>400 pounds from acorns.\nIt still needed 1000 - 200 - 400 = <<1000-200-400=400>>400 pounds.\nThus, it gained 400 / 2 = <<400/2=200>>200 pounds from salmon.\nTherefore, the bear gained 400 - 200 = <<400-200=200>>200 pounds from small animals.\n#### 200\n\nQuestion: Brendan can cut 8 yards of grass per day, he bought a lawnmower and it helped him to cut more yards by Fifty percent per day. How many yards will Brendan be able to cut after a week?\nAnswer: The additional yard Brendan can cut after buying the lawnmower is 8 x 0.50 = <<8*0.50=4>>4 yards.\nSo, the total yards he can cut with the lawnmower is 8 + 4 = <<8+4=12>>12.\nTherefore, the total number of yards he can cut in a week is 12 x 7 = <<12*7=84>>84 yards.\n#### 84\n\nQuestion: Rory orders 2 subs for $7.50 each, 2 bags of chips for $1.50 each and 2 cookies for $1.00 each for delivery. There’s a 20% delivery fee added at check out and she wants to add a $5.00 tip. What will her delivery order cost?\nAnswer:",
+             {
+                 "until": [
+                     "Question:",
+                     "</s>",
+                     "<|im_end|>"
+                 ],
+                 "do_sample": false,
+                 "temperature": 0.0
+             }
+         ]
+     ],
+     "resps": [
+         [
+             "The subs will cost 2 * $7.50 = $<<2*7.5=15>>15.\nThe chips will cost 2 * $1.50 = $<<2*1.5=3>>3.\nThe cookies will cost 2 * $1.00 = $<<2*1=2>>2.\nThe total cost of the food is $15 + $3 + $2 = $<<15+3+2=20>>20.\nThe delivery fee is $20 * 0.20 = $<<20*0.20=4>>4.\nThe total cost of the order is $20 + $4 + $5 = $<<20+4+5=29>>29.\nThe answer is 29"
+         ]
+     ],
+     "filtered_resps": [
+         "[invalid]"
+     ],
+     "exact_match": 0.0
+ },
+ ```
+ </details><br>
+
+ # 🤖 Additional information about training
+
+ This model was full fine-tuned for 2 epochs.
+
+ The total number of steps was 544.
+
+ <details><summary>Loss graph</summary>
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/j0JhlZy3t3znB0DrWBFAT.png)
+
+ </details><br>
+
+ # 🤝 Acknowledgments
+
+ Thanks to [sablo.ai](https://sablo.ai) for sponsoring this model.
+
+ Thanks to all the dataset authors mentioned in the datasets section.
+
+ Thanks to the [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) team for the framework I used to train this model.
+
+ Thanks to the entire open-source AI community.
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

+ If you would like to support me:

+ [☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)