tndklab committed
Commit 92bd02c · verified · 1 Parent(s): a42b942

Model save

Files changed (2)
  1. README.md +69 -0
  2. trainer_state.json +684 -0
README.md ADDED
@@ -0,0 +1,69 @@
+ ---
+ license: apache-2.0
+ base_model: jonatasgrosman/wav2vec2-large-xlsr-53-japanese
+ tags:
+ - generated_from_trainer
+ metrics:
+ - wer
+ model-index:
+ - name: Wav2vec_1
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Wav2vec_1
+
+ This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-japanese](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-japanese) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0459
+ - Wer: 0.2213
+ - Cer: 0.1608
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 6e-05
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 4
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 1000
+ - num_epochs: 8
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Wer    | Cer    |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
+ | 3.4904        | 1.0   | 120  | 3.4430          | 0.9970 | 0.9991 |
+ | 1.1939        | 2.0   | 240  | 1.0064          | 0.8270 | 0.6265 |
+ | 0.7726        | 3.0   | 360  | 0.6257          | 0.8198 | 0.5705 |
+ | 0.5502        | 4.0   | 480  | 0.4148          | 0.5910 | 0.3415 |
+ | 0.4152        | 5.0   | 600  | 0.2439          | 0.4167 | 0.2182 |
+ | 0.3159        | 6.0   | 720  | 0.1359          | 0.3084 | 0.1762 |
+ | 0.2425        | 7.0   | 840  | 0.0737          | 0.2523 | 0.1509 |
+ | 0.1921        | 8.0   | 960  | 0.0459          | 0.2213 | 0.1608 |
+
+
+ ### Framework versions
+
+ - Transformers 4.35.2
+ - Pytorch 2.1.0+cu121
+ - Datasets 2.14.6
+ - Tokenizers 0.15.0
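The Wer and Cer figures reported above are edit-distance metrics. A minimal, self-contained sketch of how they are computed (illustrative only; the Trainer's metrics typically come from a library such as `evaluate`/`jiwer`, whose exact implementation may differ):

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance over any sequence.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (r != h))  # substitution
    return dp[-1]

def wer(reference: str, hypothesis: str) -> float:
    # Word error rate: edit distance over word tokens, normalized by reference length.
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    # Character error rate: edit distance over characters, normalized by reference length.
    return edit_distance(reference, hypothesis) / len(reference)

print(wer("a b c d", "a x c d"))  # 0.25 (one substitution out of four words)
```

For Japanese, CER is often the more informative of the two, since word segmentation is tokenizer-dependent.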
trainer_state.json ADDED
@@ -0,0 +1,684 @@
+ {
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 8.0,
+   "eval_steps": 500,
+   "global_step": 960,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.08,
+       "learning_rate": 6.000000000000001e-07,
+       "loss": 16.3378,
+       "step": 10
+     },
+     {
+       "epoch": 0.17,
+       "learning_rate": 1.2000000000000002e-06,
+       "loss": 16.7146,
+       "step": 20
+     },
+     {
+       "epoch": 0.25,
+       "learning_rate": 1.8e-06,
+       "loss": 16.2989,
+       "step": 30
+     },
+     {
+       "epoch": 0.33,
+       "learning_rate": 2.4000000000000003e-06,
+       "loss": 15.6823,
+       "step": 40
+     },
+     {
+       "epoch": 0.42,
+       "learning_rate": 3e-06,
+       "loss": 15.2213,
+       "step": 50
+     },
+     {
+       "epoch": 0.5,
+       "learning_rate": 3.6e-06,
+       "loss": 13.8045,
+       "step": 60
+     },
+     {
+       "epoch": 0.58,
+       "learning_rate": 4.2000000000000004e-06,
+       "loss": 12.3665,
+       "step": 70
+     },
+     {
+       "epoch": 0.67,
+       "learning_rate": 4.800000000000001e-06,
+       "loss": 9.9596,
+       "step": 80
+     },
+     {
+       "epoch": 0.75,
+       "learning_rate": 5.4e-06,
+       "loss": 7.1678,
+       "step": 90
+     },
+     {
+       "epoch": 0.83,
+       "learning_rate": 6e-06,
+       "loss": 5.6617,
+       "step": 100
+     },
+     {
+       "epoch": 0.92,
+       "learning_rate": 6.6e-06,
+       "loss": 4.5458,
+       "step": 110
+     },
+     {
+       "epoch": 1.0,
+       "learning_rate": 7.2e-06,
+       "loss": 3.4904,
+       "step": 120
+     },
+     {
+       "epoch": 1.0,
+       "eval_cer": 0.999086184248101,
+       "eval_loss": 3.4430177211761475,
+       "eval_runtime": 13.8851,
+       "eval_samples_per_second": 69.139,
+       "eval_steps_per_second": 2.161,
+       "eval_wer": 0.996996996996997,
+       "step": 120
+     },
+     {
+       "epoch": 1.08,
+       "learning_rate": 7.8e-06,
+       "loss": 3.2384,
+       "step": 130
+     },
+     {
+       "epoch": 1.17,
+       "learning_rate": 8.400000000000001e-06,
+       "loss": 2.8932,
+       "step": 140
+     },
+     {
+       "epoch": 1.25,
+       "learning_rate": 9e-06,
+       "loss": 2.7276,
+       "step": 150
+     },
+     {
+       "epoch": 1.33,
+       "learning_rate": 9.600000000000001e-06,
+       "loss": 2.6587,
+       "step": 160
+     },
+     {
+       "epoch": 1.42,
+       "learning_rate": 1.02e-05,
+       "loss": 2.3907,
+       "step": 170
+     },
+     {
+       "epoch": 1.5,
+       "learning_rate": 1.08e-05,
+       "loss": 2.22,
+       "step": 180
+     },
+     {
+       "epoch": 1.58,
+       "learning_rate": 1.1400000000000001e-05,
+       "loss": 2.0003,
+       "step": 190
+     },
+     {
+       "epoch": 1.67,
+       "learning_rate": 1.2e-05,
+       "loss": 1.8618,
+       "step": 200
+     },
+     {
+       "epoch": 1.75,
+       "learning_rate": 1.26e-05,
+       "loss": 1.6286,
+       "step": 210
+     },
+     {
+       "epoch": 1.83,
+       "learning_rate": 1.32e-05,
+       "loss": 1.4244,
+       "step": 220
+     },
+     {
+       "epoch": 1.92,
+       "learning_rate": 1.3800000000000002e-05,
+       "loss": 1.3615,
+       "step": 230
+     },
+     {
+       "epoch": 2.0,
+       "learning_rate": 1.44e-05,
+       "loss": 1.1939,
+       "step": 240
+     },
+     {
+       "epoch": 2.0,
+       "eval_cer": 0.6264778114112742,
+       "eval_loss": 1.006431221961975,
+       "eval_runtime": 15.0117,
+       "eval_samples_per_second": 63.95,
+       "eval_steps_per_second": 1.998,
+       "eval_wer": 0.8269519519519519,
+       "step": 240
+     },
+     {
+       "epoch": 2.08,
+       "learning_rate": 1.5e-05,
+       "loss": 1.3824,
+       "step": 250
+     },
+     {
+       "epoch": 2.17,
+       "learning_rate": 1.56e-05,
+       "loss": 1.0939,
+       "step": 260
+     },
+     {
+       "epoch": 2.25,
+       "learning_rate": 1.62e-05,
+       "loss": 1.0596,
+       "step": 270
+     },
+     {
+       "epoch": 2.33,
+       "learning_rate": 1.6800000000000002e-05,
+       "loss": 1.0087,
+       "step": 280
+     },
+     {
+       "epoch": 2.42,
+       "learning_rate": 1.74e-05,
+       "loss": 0.9328,
+       "step": 290
+     },
+     {
+       "epoch": 2.5,
+       "learning_rate": 1.8e-05,
+       "loss": 0.9045,
+       "step": 300
+     },
+     {
+       "epoch": 2.58,
+       "learning_rate": 1.86e-05,
+       "loss": 0.8645,
+       "step": 310
+     },
+     {
+       "epoch": 2.67,
+       "learning_rate": 1.9200000000000003e-05,
+       "loss": 0.8474,
+       "step": 320
+     },
+     {
+       "epoch": 2.75,
+       "learning_rate": 1.98e-05,
+       "loss": 0.7985,
+       "step": 330
+     },
+     {
+       "epoch": 2.83,
+       "learning_rate": 2.04e-05,
+       "loss": 0.7874,
+       "step": 340
+     },
+     {
+       "epoch": 2.92,
+       "learning_rate": 2.1e-05,
+       "loss": 0.8,
+       "step": 350
+     },
+     {
+       "epoch": 3.0,
+       "learning_rate": 2.16e-05,
+       "loss": 0.7726,
+       "step": 360
+     },
+     {
+       "epoch": 3.0,
+       "eval_cer": 0.5705065966074591,
+       "eval_loss": 0.6256773471832275,
+       "eval_runtime": 13.9464,
+       "eval_samples_per_second": 68.835,
+       "eval_steps_per_second": 2.151,
+       "eval_wer": 0.8198198198198198,
+       "step": 360
+     },
+     {
+       "epoch": 3.08,
+       "learning_rate": 2.22e-05,
+       "loss": 0.7963,
+       "step": 370
+     },
+     {
+       "epoch": 3.17,
+       "learning_rate": 2.2800000000000002e-05,
+       "loss": 0.7342,
+       "step": 380
+     },
+     {
+       "epoch": 3.25,
+       "learning_rate": 2.3400000000000003e-05,
+       "loss": 0.7324,
+       "step": 390
+     },
+     {
+       "epoch": 3.33,
+       "learning_rate": 2.4e-05,
+       "loss": 0.6865,
+       "step": 400
+     },
+     {
+       "epoch": 3.42,
+       "learning_rate": 2.4599999999999998e-05,
+       "loss": 0.6731,
+       "step": 410
+     },
+     {
+       "epoch": 3.5,
+       "learning_rate": 2.52e-05,
+       "loss": 0.6683,
+       "step": 420
+     },
+     {
+       "epoch": 3.58,
+       "learning_rate": 2.58e-05,
+       "loss": 0.6583,
+       "step": 430
+     },
+     {
+       "epoch": 3.67,
+       "learning_rate": 2.64e-05,
+       "loss": 0.6218,
+       "step": 440
+     },
+     {
+       "epoch": 3.75,
+       "learning_rate": 2.7000000000000002e-05,
+       "loss": 0.5825,
+       "step": 450
+     },
+     {
+       "epoch": 3.83,
+       "learning_rate": 2.7600000000000003e-05,
+       "loss": 0.5552,
+       "step": 460
+     },
+     {
+       "epoch": 3.92,
+       "learning_rate": 2.8199999999999998e-05,
+       "loss": 0.6122,
+       "step": 470
+     },
+     {
+       "epoch": 4.0,
+       "learning_rate": 2.88e-05,
+       "loss": 0.5502,
+       "step": 480
+     },
+     {
+       "epoch": 4.0,
+       "eval_cer": 0.34153863727225997,
+       "eval_loss": 0.41475021839141846,
+       "eval_runtime": 13.9543,
+       "eval_samples_per_second": 68.796,
+       "eval_steps_per_second": 2.15,
+       "eval_wer": 0.5910285285285285,
+       "step": 480
+     },
+     {
+       "epoch": 4.08,
+       "learning_rate": 2.94e-05,
+       "loss": 0.5787,
+       "step": 490
+     },
+     {
+       "epoch": 4.17,
+       "learning_rate": 3e-05,
+       "loss": 0.4964,
+       "step": 500
+     },
+     {
+       "epoch": 4.25,
+       "learning_rate": 3.06e-05,
+       "loss": 0.5245,
+       "step": 510
+     },
+     {
+       "epoch": 4.33,
+       "learning_rate": 3.12e-05,
+       "loss": 0.4688,
+       "step": 520
+     },
+     {
+       "epoch": 4.42,
+       "learning_rate": 3.18e-05,
+       "loss": 0.5043,
+       "step": 530
+     },
+     {
+       "epoch": 4.5,
+       "learning_rate": 3.24e-05,
+       "loss": 0.4769,
+       "step": 540
+     },
+     {
+       "epoch": 4.58,
+       "learning_rate": 3.3e-05,
+       "loss": 0.4966,
+       "step": 550
+     },
+     {
+       "epoch": 4.67,
+       "learning_rate": 3.3600000000000004e-05,
+       "loss": 0.4772,
+       "step": 560
+     },
+     {
+       "epoch": 4.75,
+       "learning_rate": 3.42e-05,
+       "loss": 0.4364,
+       "step": 570
+     },
+     {
+       "epoch": 4.83,
+       "learning_rate": 3.48e-05,
+       "loss": 0.417,
+       "step": 580
+     },
+     {
+       "epoch": 4.92,
+       "learning_rate": 3.54e-05,
+       "loss": 0.4407,
+       "step": 590
+     },
+     {
+       "epoch": 5.0,
+       "learning_rate": 3.6e-05,
+       "loss": 0.4152,
+       "step": 600
+     },
+     {
+       "epoch": 5.0,
+       "eval_cer": 0.21817351076589184,
+       "eval_loss": 0.24392470717430115,
+       "eval_runtime": 13.9841,
+       "eval_samples_per_second": 68.65,
+       "eval_steps_per_second": 2.145,
+       "eval_wer": 0.4166666666666667,
+       "step": 600
+     },
+     {
+       "epoch": 5.08,
+       "learning_rate": 3.66e-05,
+       "loss": 0.364,
+       "step": 610
+     },
+     {
+       "epoch": 5.17,
+       "learning_rate": 3.72e-05,
+       "loss": 0.4115,
+       "step": 620
+     },
+     {
+       "epoch": 5.25,
+       "learning_rate": 3.7800000000000004e-05,
+       "loss": 0.3769,
+       "step": 630
+     },
+     {
+       "epoch": 5.33,
+       "learning_rate": 3.8400000000000005e-05,
+       "loss": 0.3635,
+       "step": 640
+     },
+     {
+       "epoch": 5.42,
+       "learning_rate": 3.9e-05,
+       "loss": 0.3743,
+       "step": 650
+     },
+     {
+       "epoch": 5.5,
+       "learning_rate": 3.96e-05,
+       "loss": 0.3188,
+       "step": 660
+     },
+     {
+       "epoch": 5.58,
+       "learning_rate": 4.02e-05,
+       "loss": 0.3608,
+       "step": 670
+     },
+     {
+       "epoch": 5.67,
+       "learning_rate": 4.08e-05,
+       "loss": 0.3285,
+       "step": 680
+     },
+     {
+       "epoch": 5.75,
+       "learning_rate": 4.14e-05,
+       "loss": 0.2964,
+       "step": 690
+     },
+     {
+       "epoch": 5.83,
+       "learning_rate": 4.2e-05,
+       "loss": 0.2799,
+       "step": 700
+     },
+     {
+       "epoch": 5.92,
+       "learning_rate": 4.26e-05,
+       "loss": 0.3272,
+       "step": 710
+     },
+     {
+       "epoch": 6.0,
+       "learning_rate": 4.32e-05,
+       "loss": 0.3159,
+       "step": 720
+     },
+     {
+       "epoch": 6.0,
+       "eval_cer": 0.17619509966303043,
+       "eval_loss": 0.13585154712200165,
+       "eval_runtime": 13.8057,
+       "eval_samples_per_second": 69.536,
+       "eval_steps_per_second": 2.173,
+       "eval_wer": 0.3083708708708709,
+       "step": 720
+     },
+     {
+       "epoch": 6.08,
+       "learning_rate": 4.38e-05,
+       "loss": 0.3256,
+       "step": 730
+     },
+     {
+       "epoch": 6.17,
+       "learning_rate": 4.44e-05,
+       "loss": 0.2651,
+       "step": 740
+     },
+     {
+       "epoch": 6.25,
+       "learning_rate": 4.5e-05,
+       "loss": 0.2502,
+       "step": 750
+     },
+     {
+       "epoch": 6.33,
+       "learning_rate": 4.5600000000000004e-05,
+       "loss": 0.2632,
+       "step": 760
+     },
+     {
+       "epoch": 6.42,
+       "learning_rate": 4.6200000000000005e-05,
+       "loss": 0.2412,
+       "step": 770
+     },
+     {
+       "epoch": 6.5,
+       "learning_rate": 4.6800000000000006e-05,
+       "loss": 0.2871,
+       "step": 780
+     },
+     {
+       "epoch": 6.58,
+       "learning_rate": 4.74e-05,
+       "loss": 0.2409,
+       "step": 790
+     },
+     {
+       "epoch": 6.67,
+       "learning_rate": 4.8e-05,
+       "loss": 0.2091,
+       "step": 800
+     },
+     {
+       "epoch": 6.75,
+       "learning_rate": 4.86e-05,
+       "loss": 0.2677,
+       "step": 810
+     },
+     {
+       "epoch": 6.83,
+       "learning_rate": 4.9199999999999997e-05,
+       "loss": 0.2109,
+       "step": 820
+     },
+     {
+       "epoch": 6.92,
+       "learning_rate": 4.98e-05,
+       "loss": 0.1886,
+       "step": 830
+     },
+     {
+       "epoch": 7.0,
+       "learning_rate": 5.04e-05,
+       "loss": 0.2425,
+       "step": 840
+     },
+     {
+       "epoch": 7.0,
+       "eval_cer": 0.15089382603232623,
+       "eval_loss": 0.07371211796998978,
+       "eval_runtime": 13.9128,
+       "eval_samples_per_second": 69.001,
+       "eval_steps_per_second": 2.156,
+       "eval_wer": 0.25225225225225223,
+       "step": 840
+     },
+     {
+       "epoch": 7.08,
+       "learning_rate": 5.1e-05,
+       "loss": 0.2069,
+       "step": 850
+     },
+     {
+       "epoch": 7.17,
+       "learning_rate": 5.16e-05,
+       "loss": 0.1888,
+       "step": 860
+     },
+     {
+       "epoch": 7.25,
+       "learning_rate": 5.22e-05,
+       "loss": 0.1926,
+       "step": 870
+     },
+     {
+       "epoch": 7.33,
+       "learning_rate": 5.28e-05,
+       "loss": 0.1794,
+       "step": 880
+     },
+     {
+       "epoch": 7.42,
+       "learning_rate": 5.3400000000000004e-05,
+       "loss": 0.2078,
+       "step": 890
+     },
+     {
+       "epoch": 7.5,
+       "learning_rate": 5.4000000000000005e-05,
+       "loss": 0.1646,
+       "step": 900
+     },
+     {
+       "epoch": 7.58,
+       "learning_rate": 5.4600000000000006e-05,
+       "loss": 0.1417,
+       "step": 910
+     },
+     {
+       "epoch": 7.67,
+       "learning_rate": 5.520000000000001e-05,
+       "loss": 0.1454,
+       "step": 920
+     },
+     {
+       "epoch": 7.75,
+       "learning_rate": 5.58e-05,
+       "loss": 0.173,
+       "step": 930
+     },
+     {
+       "epoch": 7.83,
+       "learning_rate": 5.6399999999999995e-05,
+       "loss": 0.1449,
+       "step": 940
+     },
+     {
+       "epoch": 7.92,
+       "learning_rate": 5.6999999999999996e-05,
+       "loss": 0.1359,
+       "step": 950
+     },
+     {
+       "epoch": 8.0,
+       "learning_rate": 5.76e-05,
+       "loss": 0.1921,
+       "step": 960
+     },
+     {
+       "epoch": 8.0,
+       "eval_cer": 0.16083157233422812,
+       "eval_loss": 0.045883145183324814,
+       "eval_runtime": 13.9631,
+       "eval_samples_per_second": 68.753,
+       "eval_steps_per_second": 2.149,
+       "eval_wer": 0.22128378378378377,
+       "step": 960
+     },
+     {
+       "epoch": 8.0,
+       "step": 960,
+       "total_flos": 2.451994748193635e+18,
+       "train_loss": 2.0509253946443398,
+       "train_runtime": 1630.9039,
+       "train_samples_per_second": 18.836,
+       "train_steps_per_second": 0.589
+     }
+   ],
+   "logging_steps": 10,
+   "max_steps": 960,
+   "num_train_epochs": 8,
+   "save_steps": 500,
+   "total_flos": 2.451994748193635e+18,
+   "trial_name": null,
+   "trial_params": null
+ }
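The `learning_rate` values in `log_history` trace a pure linear warmup: with `learning_rate: 6e-05` and `lr_scheduler_warmup_steps: 1000` but only 960 total steps, training ends before warmup completes, so the rate climbs for the entire run and the decay phase of the linear scheduler is never reached. A sketch of the warmup formula (assuming the standard linear-warmup behavior of `transformers` schedulers):

```python
def warmup_lr(step: int, base_lr: float = 6e-05, warmup_steps: int = 1000) -> float:
    # Linear warmup: LR rises from 0 to base_lr over warmup_steps;
    # past warmup a linear scheduler would decay it toward 0,
    # but with max_steps = 960 < 1000 that phase is never entered here.
    return base_lr * min(step, warmup_steps) / warmup_steps

print(warmup_lr(10))   # ~6e-07, matching the first log entry
print(warmup_lr(960))  # ~5.76e-05, matching the final step
```

This also explains why the training loss was still falling at epoch 8: the schedule never gave the model a decaying-LR phase to converge in.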