Text2Text Generation
Transformers
PyTorch
mt5
Eval Results
Inference Endpoints
michael-newsrx-com TimeRobber committed on
Commit fdfef1c (0 parents)

Duplicate from bigscience/mt0-xl

Co-authored-by: Thomas Wang <[email protected]>

.gitattributes ADDED
@@ -0,0 +1,33 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,932 @@
+ ---
+ datasets:
+ - bigscience/xP3
+ - mc4
+ license: apache-2.0
+ language:
+ - af
+ - am
+ - ar
+ - az
+ - be
+ - bg
+ - bn
+ - ca
+ - ceb
+ - co
+ - cs
+ - cy
+ - da
+ - de
+ - el
+ - en
+ - eo
+ - es
+ - et
+ - eu
+ - fa
+ - fi
+ - fil
+ - fr
+ - fy
+ - ga
+ - gd
+ - gl
+ - gu
+ - ha
+ - haw
+ - hi
+ - hmn
+ - ht
+ - hu
+ - hy
+ - ig
+ - is
+ - it
+ - iw
+ - ja
+ - jv
+ - ka
+ - kk
+ - km
+ - kn
+ - ko
+ - ku
+ - ky
+ - la
+ - lb
+ - lo
+ - lt
+ - lv
+ - mg
+ - mi
+ - mk
+ - ml
+ - mn
+ - mr
+ - ms
+ - mt
+ - my
+ - ne
+ - nl
+ - 'no'
+ - ny
+ - pa
+ - pl
+ - ps
+ - pt
+ - ro
+ - ru
+ - sd
+ - si
+ - sk
+ - sl
+ - sm
+ - sn
+ - so
+ - sq
+ - sr
+ - st
+ - su
+ - sv
+ - sw
+ - ta
+ - te
+ - tg
+ - th
+ - tr
+ - uk
+ - und
+ - ur
+ - uz
+ - vi
+ - xh
+ - yi
+ - yo
+ - zh
+ - zu
+ pipeline_tag: text2text-generation
+ widget:
+ - text: >-
+     一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous
+     review as positive, neutral or negative?
+   example_title: zh-en sentiment
+ - text: 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
+   example_title: zh-zh sentiment
+ - text: Suggest at least five related search terms to "Mạng neural nhân tạo".
+   example_title: vi-en query
+ - text: >-
+     Proposez au moins cinq mots clés concernant «Réseau de neurones
+     artificiels».
+   example_title: fr-fr query
+ - text: Explain in a sentence in Telugu what is backpropagation in neural networks.
+   example_title: te-en qa
+ - text: Why is the sky blue?
+   example_title: en-en qa
+ - text: >-
+     Write a fairy tale about a troll saving a princess from a dangerous dragon.
+     The fairy tale is a masterpiece that has achieved praise worldwide and its
+     moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
+   example_title: es-en fable
+ - text: >-
+     Write a fable about wood elves living in a forest that is suddenly invaded
+     by ogres. The fable is a masterpiece that has achieved praise worldwide and
+     its moral is "Violence is the last refuge of the incompetent". Fable (in
+     Hindi):
+   example_title: hi-en fable
+ model-index:
+ - name: mt0-xl
+   results:
+   - task:
+       type: Coreference resolution
+     dataset:
+       type: winogrande
+       name: Winogrande XL (xl)
+       config: xl
+       split: validation
+       revision: a80f460359d1e9a67c006011c94de42a8759430c
+     metrics:
+     - type: Accuracy
+       value: 52.49
+   - task:
+       type: Coreference resolution
+     dataset:
+       type: Muennighoff/xwinograd
+       name: XWinograd (en)
+       config: en
+       split: test
+       revision: 9dd5ea5505fad86b7bedad667955577815300cee
+     metrics:
+     - type: Accuracy
+       value: 61.89
+   - task:
+       type: Coreference resolution
+     dataset:
+       type: Muennighoff/xwinograd
+       name: XWinograd (fr)
+       config: fr
+       split: test
+       revision: 9dd5ea5505fad86b7bedad667955577815300cee
+     metrics:
+     - type: Accuracy
+       value: 59.04
+   - task:
+       type: Coreference resolution
+     dataset:
+       type: Muennighoff/xwinograd
+       name: XWinograd (jp)
+       config: jp
+       split: test
+       revision: 9dd5ea5505fad86b7bedad667955577815300cee
+     metrics:
+     - type: Accuracy
+       value: 60.27
+   - task:
+       type: Coreference resolution
+     dataset:
+       type: Muennighoff/xwinograd
+       name: XWinograd (pt)
+       config: pt
+       split: test
+       revision: 9dd5ea5505fad86b7bedad667955577815300cee
+     metrics:
+     - type: Accuracy
+       value: 66.16
+   - task:
+       type: Coreference resolution
+     dataset:
+       type: Muennighoff/xwinograd
+       name: XWinograd (ru)
+       config: ru
+       split: test
+       revision: 9dd5ea5505fad86b7bedad667955577815300cee
+     metrics:
+     - type: Accuracy
+       value: 59.05
+   - task:
+       type: Coreference resolution
+     dataset:
+       type: Muennighoff/xwinograd
+       name: XWinograd (zh)
+       config: zh
+       split: test
+       revision: 9dd5ea5505fad86b7bedad667955577815300cee
+     metrics:
+     - type: Accuracy
+       value: 62.9
+   - task:
+       type: Natural language inference
+     dataset:
+       type: anli
+       name: ANLI (r1)
+       config: r1
+       split: validation
+       revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
+     metrics:
+     - type: Accuracy
+       value: 38.2
+   - task:
+       type: Natural language inference
+     dataset:
+       type: anli
+       name: ANLI (r2)
+       config: r2
+       split: validation
+       revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
+     metrics:
+     - type: Accuracy
+       value: 34.8
+   - task:
+       type: Natural language inference
+     dataset:
+       type: anli
+       name: ANLI (r3)
+       config: r3
+       split: validation
+       revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
+     metrics:
+     - type: Accuracy
+       value: 39
+   - task:
+       type: Natural language inference
+     dataset:
+       type: super_glue
+       name: SuperGLUE (cb)
+       config: cb
+       split: validation
+       revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
+     metrics:
+     - type: Accuracy
+       value: 85.71
+   - task:
+       type: Natural language inference
+     dataset:
+       type: super_glue
+       name: SuperGLUE (rte)
+       config: rte
+       split: validation
+       revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
+     metrics:
+     - type: Accuracy
+       value: 78.7
+   - task:
+       type: Natural language inference
+     dataset:
+       type: xnli
+       name: XNLI (ar)
+       config: ar
+       split: validation
+       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
+     metrics:
+     - type: Accuracy
+       value: 51.85
+   - task:
+       type: Natural language inference
+     dataset:
+       type: xnli
+       name: XNLI (bg)
+       config: bg
+       split: validation
+       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
+     metrics:
+     - type: Accuracy
+       value: 54.18
+   - task:
+       type: Natural language inference
+     dataset:
+       type: xnli
+       name: XNLI (de)
+       config: de
+       split: validation
+       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
+     metrics:
+     - type: Accuracy
+       value: 54.78
+   - task:
+       type: Natural language inference
+     dataset:
+       type: xnli
+       name: XNLI (el)
+       config: el
+       split: validation
+       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
+     metrics:
+     - type: Accuracy
+       value: 53.78
+   - task:
+       type: Natural language inference
+     dataset:
+       type: xnli
+       name: XNLI (en)
+       config: en
+       split: validation
+       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
+     metrics:
+     - type: Accuracy
+       value: 56.83
+   - task:
+       type: Natural language inference
+     dataset:
+       type: xnli
+       name: XNLI (es)
+       config: es
+       split: validation
+       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
+     metrics:
+     - type: Accuracy
+       value: 54.78
+   - task:
+       type: Natural language inference
+     dataset:
+       type: xnli
+       name: XNLI (fr)
+       config: fr
+       split: validation
+       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
+     metrics:
+     - type: Accuracy
+       value: 54.22
+   - task:
+       type: Natural language inference
+     dataset:
+       type: xnli
+       name: XNLI (hi)
+       config: hi
+       split: validation
+       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
+     metrics:
+     - type: Accuracy
+       value: 50.24
+   - task:
+       type: Natural language inference
+     dataset:
+       type: xnli
+       name: XNLI (ru)
+       config: ru
+       split: validation
+       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
+     metrics:
+     - type: Accuracy
+       value: 53.09
+   - task:
+       type: Natural language inference
+     dataset:
+       type: xnli
+       name: XNLI (sw)
+       config: sw
+       split: validation
+       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
+     metrics:
+     - type: Accuracy
+       value: 49.6
+   - task:
+       type: Natural language inference
+     dataset:
+       type: xnli
+       name: XNLI (th)
+       config: th
+       split: validation
+       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
+     metrics:
+     - type: Accuracy
+       value: 52.13
+   - task:
+       type: Natural language inference
+     dataset:
+       type: xnli
+       name: XNLI (tr)
+       config: tr
+       split: validation
+       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
+     metrics:
+     - type: Accuracy
+       value: 50.56
+   - task:
+       type: Natural language inference
+     dataset:
+       type: xnli
+       name: XNLI (ur)
+       config: ur
+       split: validation
+       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
+     metrics:
+     - type: Accuracy
+       value: 47.91
+   - task:
+       type: Natural language inference
+     dataset:
+       type: xnli
+       name: XNLI (vi)
+       config: vi
+       split: validation
+       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
+     metrics:
+     - type: Accuracy
+       value: 53.21
+   - task:
+       type: Natural language inference
+     dataset:
+       type: xnli
+       name: XNLI (zh)
+       config: zh
+       split: validation
+       revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
+     metrics:
+     - type: Accuracy
+       value: 50.64
+   - task:
+       type: Program synthesis
+     dataset:
+       type: openai_humaneval
+       name: HumanEval
+       config: None
+       split: test
+       revision: e8dc562f5de170c54b5481011dd9f4fa04845771
+     metrics:
+     - type: Pass@1
+       value: 0
+     - type: Pass@10
+       value: 0
+     - type: Pass@100
+       value: 0
+   - task:
+       type: Sentence completion
+     dataset:
+       type: story_cloze
+       name: StoryCloze (2016)
+       config: '2016'
+       split: validation
+       revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
+     metrics:
+     - type: Accuracy
+       value: 79.1
+   - task:
+       type: Sentence completion
+     dataset:
+       type: super_glue
+       name: SuperGLUE (copa)
+       config: copa
+       split: validation
+       revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
+     metrics:
+     - type: Accuracy
+       value: 72
+   - task:
+       type: Sentence completion
+     dataset:
+       type: xcopa
+       name: XCOPA (et)
+       config: et
+       split: validation
+       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
+     metrics:
+     - type: Accuracy
+       value: 70
+   - task:
+       type: Sentence completion
+     dataset:
+       type: xcopa
+       name: XCOPA (ht)
+       config: ht
+       split: validation
+       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
+     metrics:
+     - type: Accuracy
+       value: 66
+   - task:
+       type: Sentence completion
+     dataset:
+       type: xcopa
+       name: XCOPA (id)
+       config: id
+       split: validation
+       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
+     metrics:
+     - type: Accuracy
+       value: 71
+   - task:
+       type: Sentence completion
+     dataset:
+       type: xcopa
+       name: XCOPA (it)
+       config: it
+       split: validation
+       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
+     metrics:
+     - type: Accuracy
+       value: 70
+   - task:
+       type: Sentence completion
+     dataset:
+       type: xcopa
+       name: XCOPA (qu)
+       config: qu
+       split: validation
+       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
+     metrics:
+     - type: Accuracy
+       value: 56
+   - task:
+       type: Sentence completion
+     dataset:
+       type: xcopa
+       name: XCOPA (sw)
+       config: sw
+       split: validation
+       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
+     metrics:
+     - type: Accuracy
+       value: 53
+   - task:
+       type: Sentence completion
+     dataset:
+       type: xcopa
+       name: XCOPA (ta)
+       config: ta
+       split: validation
+       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
+     metrics:
+     - type: Accuracy
+       value: 64
+   - task:
+       type: Sentence completion
+     dataset:
+       type: xcopa
+       name: XCOPA (th)
+       config: th
+       split: validation
+       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
+     metrics:
+     - type: Accuracy
+       value: 60
+   - task:
+       type: Sentence completion
+     dataset:
+       type: xcopa
+       name: XCOPA (tr)
+       config: tr
+       split: validation
+       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
+     metrics:
+     - type: Accuracy
+       value: 58
+   - task:
+       type: Sentence completion
+     dataset:
+       type: xcopa
+       name: XCOPA (vi)
+       config: vi
+       split: validation
+       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
+     metrics:
+     - type: Accuracy
+       value: 68
+   - task:
+       type: Sentence completion
+     dataset:
+       type: xcopa
+       name: XCOPA (zh)
+       config: zh
+       split: validation
+       revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
+     metrics:
+     - type: Accuracy
+       value: 65
+   - task:
+       type: Sentence completion
+     dataset:
+       type: Muennighoff/xstory_cloze
+       name: XStoryCloze (ar)
+       config: ar
+       split: validation
+       revision: 8bb76e594b68147f1a430e86829d07189622b90d
+     metrics:
+     - type: Accuracy
+       value: 70.09
+   - task:
+       type: Sentence completion
+     dataset:
+       type: Muennighoff/xstory_cloze
+       name: XStoryCloze (es)
+       config: es
+       split: validation
+       revision: 8bb76e594b68147f1a430e86829d07189622b90d
+     metrics:
+     - type: Accuracy
+       value: 77.17
+   - task:
+       type: Sentence completion
+     dataset:
+       type: Muennighoff/xstory_cloze
+       name: XStoryCloze (eu)
+       config: eu
+       split: validation
+       revision: 8bb76e594b68147f1a430e86829d07189622b90d
+     metrics:
+     - type: Accuracy
+       value: 69.03
+   - task:
+       type: Sentence completion
+     dataset:
+       type: Muennighoff/xstory_cloze
+       name: XStoryCloze (hi)
+       config: hi
+       split: validation
+       revision: 8bb76e594b68147f1a430e86829d07189622b90d
+     metrics:
+     - type: Accuracy
+       value: 71.08
+   - task:
+       type: Sentence completion
+     dataset:
+       type: Muennighoff/xstory_cloze
+       name: XStoryCloze (id)
+       config: id
+       split: validation
+       revision: 8bb76e594b68147f1a430e86829d07189622b90d
+     metrics:
+     - type: Accuracy
+       value: 75.71
+   - task:
+       type: Sentence completion
+     dataset:
+       type: Muennighoff/xstory_cloze
+       name: XStoryCloze (my)
+       config: my
+       split: validation
+       revision: 8bb76e594b68147f1a430e86829d07189622b90d
+     metrics:
+     - type: Accuracy
+       value: 65.65
+   - task:
+       type: Sentence completion
+     dataset:
+       type: Muennighoff/xstory_cloze
+       name: XStoryCloze (ru)
+       config: ru
+       split: validation
+       revision: 8bb76e594b68147f1a430e86829d07189622b90d
+     metrics:
+     - type: Accuracy
+       value: 74.85
+   - task:
+       type: Sentence completion
+     dataset:
+       type: Muennighoff/xstory_cloze
+       name: XStoryCloze (sw)
+       config: sw
+       split: validation
+       revision: 8bb76e594b68147f1a430e86829d07189622b90d
+     metrics:
+     - type: Accuracy
+       value: 71.14
+   - task:
+       type: Sentence completion
+     dataset:
+       type: Muennighoff/xstory_cloze
+       name: XStoryCloze (te)
+       config: te
+       split: validation
+       revision: 8bb76e594b68147f1a430e86829d07189622b90d
+     metrics:
+     - type: Accuracy
+       value: 68.89
+   - task:
+       type: Sentence completion
+     dataset:
+       type: Muennighoff/xstory_cloze
+       name: XStoryCloze (zh)
+       config: zh
+       split: validation
+       revision: 8bb76e594b68147f1a430e86829d07189622b90d
+     metrics:
+     - type: Accuracy
+       value: 72.93
+ duplicated_from: bigscience/mt0-xl
+ ---
+ 
+ ![xmtf](https://github.com/bigscience-workshop/xmtf/blob/master/xmtf_banner.png?raw=true)
+ 
+ # Table of Contents
+ 
+ 1. [Model Summary](#model-summary)
+ 2. [Use](#use)
+ 3. [Limitations](#limitations)
+ 4. [Training](#training)
+ 5. [Evaluation](#evaluation)
+ 6. [Citation](#citation)
+ 
+ # Model Summary
+ 
+ > We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find our resulting models capable of crosslingual generalization to unseen tasks & languages.
+ 
+ - **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
+ - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
+ - **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
+ - **Languages:** Refer to [mc4](https://huggingface.co/datasets/mc4) for pretraining & [xP3](https://huggingface.co/bigscience/xP3) for finetuning language proportions. The model understands both pretraining & finetuning languages.
+ - **BLOOMZ & mT0 Model Family:**
+ 
+ <div class="max-w-full overflow-auto">
+ <table>
+ <tr>
+ <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.</th>
+ </tr>
+ <tr>
+ <td>Parameters</td>
+ <td>300M</td>
+ <td>580M</td>
+ <td>1.2B</td>
+ <td>3.7B</td>
+ <td>13B</td>
+ <td>560M</td>
+ <td>1.1B</td>
+ <td>1.7B</td>
+ <td>3B</td>
+ <td>7.1B</td>
+ <td>176B</td>
+ </tr>
+ <tr>
+ <td>Finetuned Model</td>
+ <td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
+ <td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
+ <td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
+ <td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
+ <td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
+ <td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
+ <td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
+ <td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
+ <td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
+ <td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
+ <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
+ </tr>
+ <tr>
+ <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
+ </tr>
+ <tr>
+ <td>Finetuned Model</td>
+ <td></td>
+ <td></td>
+ <td></td>
+ <td></td>
+ <td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
+ <td></td>
+ <td></td>
+ <td></td>
+ <td></td>
+ <td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
+ <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
+ </tr>
+ <tr>
+ <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
+ </tr>
+ <tr>
+ <td>Finetuned Model</td>
+ <td></td>
+ <td></td>
+ <td></td>
+ <td></td>
+ <td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
+ <td></td>
+ <td></td>
+ <td></td>
+ <td></td>
+ <td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
+ <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
+ </tr>
+ <tr>
+ <th colspan="12">Original pretrained checkpoints. Not recommended.</th>
+ </tr>
+ <tr>
+ <td>Pretrained Model</td>
+ <td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
+ <td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
+ <td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
+ <td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
+ <td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
+ <td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
+ <td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
+ <td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
+ <td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
+ <td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
+ <td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
+ </tr>
+ </table>
+ </div>
+ 
+ # Use
+ 
+ ## Intended use
+ 
+ We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper (a quick way to try them is sketched after the list):
+ - 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
+ - Suggest at least five related search terms to "Mạng neural nhân tạo".
+ - Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
+ - Explain in a sentence in Telugu what is backpropagation in neural networks.
+ 
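+ As a minimal sketch for trying these prompts (assuming the same `bigscience/mt0-xl` checkpoint used throughout this card, with default decoding settings), the `transformers` pipeline API is the quickest route:
+ 
+ ```python
+ # pip install -q transformers
+ from transformers import pipeline
+ 
+ # "text2text-generation" wraps the tokenizer and the seq2seq model in one call.
+ pipe = pipeline("text2text-generation", model="bigscience/mt0-xl")
+ 
+ # Any of the prompt ideas above can be passed in directly as a string.
+ print(pipe("Translate to English: Je t’aime.")[0]["generated_text"])
+ ```
+ 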
+ **Feel free to share your generations in the Community tab!**
+ 
+ ## How to use
+ 
+ ### CPU
+ 
+ <details>
+ <summary> Click to expand </summary>
+ 
+ ```python
+ # pip install -q transformers
+ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+ 
+ checkpoint = "bigscience/mt0-xl"
+ 
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
+ 
+ inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
+ 
+ </details>
+ 
+ ### GPU
+ 
+ <details>
+ <summary> Click to expand </summary>
+ 
+ ```python
+ # pip install -q transformers accelerate
+ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+ 
+ checkpoint = "bigscience/mt0-xl"
+ 
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ # torch_dtype="auto" uses the dtype stored in the checkpoint;
+ # device_map="auto" places the layers across the available GPUs.
+ model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
+ 
+ inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
+ 
+ </details>
+ 
+ ### GPU in 8bit
+ 
+ <details>
+ <summary> Click to expand </summary>
+ 
+ ```python
+ # pip install -q transformers accelerate bitsandbytes
+ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+ 
+ checkpoint = "bigscience/mt0-xl"
+ 
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ # load_in_8bit=True quantizes the weights with bitsandbytes to reduce GPU memory use.
+ model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
+ 
+ inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
+ 
+ </details>
+ 
+ <!-- Necessary for whitespace -->
+ ###
+ 
+ # Limitations
+ 
+ **Prompt Engineering:** Performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops, to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" or "*What is "Je t'aime." in English?*", where it is clear to the model where it should answer. Further, we recommend providing the model with as much context as possible. For example, if you want it to answer in Telugu, then tell the model so, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*". A short sketch contrasting such prompts follows.
+ 
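+ A minimal sketch of this effect (assuming the CPU setup from above; exact outputs depend on the decoding settings):
+ 
+ ```python
+ # pip install -q transformers
+ from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+ 
+ checkpoint = "bigscience/mt0-xl"
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
+ 
+ # The same request, once without a clear end of input and once with one:
+ prompts = [
+     "Translate to English: Je t'aime",                # may get continued as French
+     "Translate to English: Je t'aime. Translation:",  # clear place to answer
+ ]
+ for prompt in prompts:
+     inputs = tokenizer.encode(prompt, return_tensors="pt")
+     outputs = model.generate(inputs)
+     print(repr(tokenizer.decode(outputs[0], skip_special_tokens=True)))
+ ```
+ 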
+ # Training
+ 
+ ## Model
+ 
+ - **Architecture:** Same as [mt5-xl](https://huggingface.co/google/mt5-xl); also refer to the `config.json` file (the sketch below shows how to inspect it)
+ - **Finetuning steps:** 10000
+ - **Finetuning tokens:** 1.85 billion
+ - **Precision:** bfloat16
+ 
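+ As a small sketch (assuming only `transformers` is installed), the architecture details above can be read back from the checkpoint's `config.json`:
+ 
+ ```python
+ from transformers import AutoConfig
+ 
+ # Downloads and parses the config.json shipped with this checkpoint.
+ config = AutoConfig.from_pretrained("bigscience/mt0-xl")
+ print(config.model_type, config.num_layers, config.d_model)  # mt5 24 2048
+ ```
+ 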
+ ## Hardware
+ 
+ - **TPUs:** TPUv4-128
+ 
+ ## Software
+ 
+ - **Orchestration:** [T5X](https://github.com/google-research/t5x)
+ - **Neural networks:** [JAX](https://github.com/google/jax)
+ 
+ # Evaluation
+ 
+ We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
+ 
+ # Citation
+ ```bibtex
+ @misc{muennighoff2022crosslingual,
+   title={Crosslingual Generalization through Multitask Finetuning},
+   author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
+   year={2022},
+   eprint={2211.01786},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "google/mt5-xl",
+   "architectures": [
+     "MT5ForConditionalGeneration"
+   ],
+   "d_ff": 5120,
+   "d_kv": 64,
+   "d_model": 2048,
+   "decoder_start_token_id": 0,
+   "dense_act_fn": "gelu_new",
+   "dropout_rate": 0.1,
+   "eos_token_id": 1,
+   "feed_forward_proj": "gated-gelu",
+   "initializer_factor": 1.0,
+   "is_encoder_decoder": true,
+   "is_gated_act": true,
+   "layer_norm_epsilon": 1e-06,
+   "model_type": "mt5",
+   "num_decoder_layers": 24,
+   "num_heads": 32,
+   "num_layers": 24,
+   "output_past": true,
+   "pad_token_id": 0,
+   "relative_attention_max_distance": 128,
+   "relative_attention_num_buckets": 32,
+   "tie_word_embeddings": false,
+   "tokenizer_class": "T5Tokenizer",
+   "torch_dtype": "float32",
+   "transformers_version": "4.23.1",
+   "use_cache": true,
+   "vocab_size": 250112
+ }
pytorch_model-00001-of-00002.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2cded4ece5122b52af01c8606cc98a621da6fb59189ad87e6a5682d5cd0487b2
+ size 7938340473
pytorch_model-00002-of-00002.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b7bc19684b9eb3b9a62294fc284967a7cc7744fd5021eb97410e4e443fe3c56a
+ size 7032322681
pytorch_model.bin.index.json ADDED
@@ -0,0 +1,566 @@
+ {
+   "metadata": {
+     "total_size": 17019396096
+   },
+   "weight_map": {
+     "decoder.block.0.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.0.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.0.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.0.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.0.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.0.layer.1.EncDecAttention.k.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.0.layer.1.EncDecAttention.o.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.0.layer.1.EncDecAttention.q.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.0.layer.1.EncDecAttention.v.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.0.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.0.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.0.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.0.layer.2.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.0.layer.2.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.1.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.1.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.1.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.1.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.1.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.1.layer.1.EncDecAttention.k.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.1.layer.1.EncDecAttention.o.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.1.layer.1.EncDecAttention.q.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.1.layer.1.EncDecAttention.v.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.1.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.1.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.1.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.1.layer.2.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.1.layer.2.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.10.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.10.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.10.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.10.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.10.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.10.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.10.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.10.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.10.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.10.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.10.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.10.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.10.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.10.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.11.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.11.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.11.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.11.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.11.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.11.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.11.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.11.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.11.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.11.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.11.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.11.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.11.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.11.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.12.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.12.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.12.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.12.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.12.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.12.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.12.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.12.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.12.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.12.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.12.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.12.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.12.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.12.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.13.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.13.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.13.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.13.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.13.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.13.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.13.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.13.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.13.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.13.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.13.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.13.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.13.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.13.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.14.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.14.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.14.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.14.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.14.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.14.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.14.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.14.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.14.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.14.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.14.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.14.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.14.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.14.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.15.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.15.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.15.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.15.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.15.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.15.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.15.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.15.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.15.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.15.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.15.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.15.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.15.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.15.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.16.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.16.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.16.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.16.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.16.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.16.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.16.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.16.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.16.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.16.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.16.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.16.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.16.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.16.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.17.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.17.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.17.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.17.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.17.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.17.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.17.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.17.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.17.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.17.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.17.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.17.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.17.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.17.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.18.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.18.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.18.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.18.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.18.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.18.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.18.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.18.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.18.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.18.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.18.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.18.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.18.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.18.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.19.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.19.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.19.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.19.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.19.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.19.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.19.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.19.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.19.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.19.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.19.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.19.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.19.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.19.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.2.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.2.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.2.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.2.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.2.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.2.layer.1.EncDecAttention.k.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.2.layer.1.EncDecAttention.o.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.2.layer.1.EncDecAttention.q.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.2.layer.1.EncDecAttention.v.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.2.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.2.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.2.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.2.layer.2.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.2.layer.2.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.20.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.20.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.20.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.20.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.20.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.20.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.20.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.20.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.20.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.20.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.20.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.20.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.20.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.20.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.21.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.21.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.21.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.21.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.21.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.21.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.21.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.21.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.21.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.21.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.21.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.21.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.21.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.21.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.22.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.22.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.22.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.22.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.22.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.22.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.22.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.22.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.22.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.22.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.22.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.22.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.22.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.22.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.23.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.23.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.23.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.23.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.23.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.23.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.23.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.23.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.23.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.23.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.23.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.23.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.23.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.23.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.3.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.3.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.3.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.3.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.3.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.3.layer.1.EncDecAttention.k.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.3.layer.1.EncDecAttention.o.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.3.layer.1.EncDecAttention.q.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.3.layer.1.EncDecAttention.v.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.3.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.3.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.3.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.3.layer.2.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.3.layer.2.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.4.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.4.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.4.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.4.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.4.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.4.layer.1.EncDecAttention.k.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.4.layer.1.EncDecAttention.o.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.4.layer.1.EncDecAttention.q.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.4.layer.1.EncDecAttention.v.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.4.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.4.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.4.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
+     "decoder.block.4.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.4.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.5.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.5.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.5.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.5.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.5.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.5.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.5.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.5.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.5.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.5.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.5.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.5.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.5.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.5.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.6.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.6.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.6.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.6.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.6.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.6.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.6.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.6.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.6.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.6.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.6.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.6.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.6.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.6.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.7.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.7.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
+     "decoder.block.7.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
304
+ "decoder.block.7.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
305
+ "decoder.block.7.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
306
+ "decoder.block.7.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
307
+ "decoder.block.7.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
308
+ "decoder.block.7.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
309
+ "decoder.block.7.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
310
+ "decoder.block.7.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
311
+ "decoder.block.7.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
312
+ "decoder.block.7.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
313
+ "decoder.block.7.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
314
+ "decoder.block.7.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
315
+ "decoder.block.8.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
316
+ "decoder.block.8.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
317
+ "decoder.block.8.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
318
+ "decoder.block.8.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
319
+ "decoder.block.8.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
320
+ "decoder.block.8.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
321
+ "decoder.block.8.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
322
+ "decoder.block.8.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
323
+ "decoder.block.8.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
324
+ "decoder.block.8.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
325
+ "decoder.block.8.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
326
+ "decoder.block.8.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
327
+ "decoder.block.8.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
328
+ "decoder.block.8.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
329
+ "decoder.block.9.layer.0.SelfAttention.k.weight": "pytorch_model-00002-of-00002.bin",
330
+ "decoder.block.9.layer.0.SelfAttention.o.weight": "pytorch_model-00002-of-00002.bin",
331
+ "decoder.block.9.layer.0.SelfAttention.q.weight": "pytorch_model-00002-of-00002.bin",
332
+ "decoder.block.9.layer.0.SelfAttention.v.weight": "pytorch_model-00002-of-00002.bin",
333
+ "decoder.block.9.layer.0.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
334
+ "decoder.block.9.layer.1.EncDecAttention.k.weight": "pytorch_model-00002-of-00002.bin",
335
+ "decoder.block.9.layer.1.EncDecAttention.o.weight": "pytorch_model-00002-of-00002.bin",
336
+ "decoder.block.9.layer.1.EncDecAttention.q.weight": "pytorch_model-00002-of-00002.bin",
337
+ "decoder.block.9.layer.1.EncDecAttention.v.weight": "pytorch_model-00002-of-00002.bin",
338
+ "decoder.block.9.layer.1.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
339
+ "decoder.block.9.layer.2.DenseReluDense.wi_0.weight": "pytorch_model-00002-of-00002.bin",
340
+ "decoder.block.9.layer.2.DenseReluDense.wi_1.weight": "pytorch_model-00002-of-00002.bin",
341
+ "decoder.block.9.layer.2.DenseReluDense.wo.weight": "pytorch_model-00002-of-00002.bin",
342
+ "decoder.block.9.layer.2.layer_norm.weight": "pytorch_model-00002-of-00002.bin",
343
+ "decoder.embed_tokens.weight": "pytorch_model-00001-of-00002.bin",
344
+ "decoder.final_layer_norm.weight": "pytorch_model-00002-of-00002.bin",
345
+ "encoder.block.0.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
346
+ "encoder.block.0.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
347
+ "encoder.block.0.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
348
+ "encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight": "pytorch_model-00001-of-00002.bin",
349
+ "encoder.block.0.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
350
+ "encoder.block.0.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
351
+ "encoder.block.0.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
352
+ "encoder.block.0.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
353
+ "encoder.block.0.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
354
+ "encoder.block.0.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
355
+ "encoder.block.1.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
356
+ "encoder.block.1.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
357
+ "encoder.block.1.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
358
+ "encoder.block.1.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
359
+ "encoder.block.1.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
360
+ "encoder.block.1.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
361
+ "encoder.block.1.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
362
+ "encoder.block.1.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
363
+ "encoder.block.1.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
364
+ "encoder.block.10.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
365
+ "encoder.block.10.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
366
+ "encoder.block.10.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
367
+ "encoder.block.10.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
368
+ "encoder.block.10.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
369
+ "encoder.block.10.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
370
+ "encoder.block.10.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
371
+ "encoder.block.10.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
372
+ "encoder.block.10.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
373
+ "encoder.block.11.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
374
+ "encoder.block.11.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
375
+ "encoder.block.11.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
376
+ "encoder.block.11.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
377
+ "encoder.block.11.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
378
+ "encoder.block.11.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
379
+ "encoder.block.11.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
380
+ "encoder.block.11.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
381
+ "encoder.block.11.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
382
+ "encoder.block.12.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
383
+ "encoder.block.12.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
384
+ "encoder.block.12.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
385
+ "encoder.block.12.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
386
+ "encoder.block.12.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
387
+ "encoder.block.12.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
388
+ "encoder.block.12.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
389
+ "encoder.block.12.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
390
+ "encoder.block.12.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
391
+ "encoder.block.13.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
392
+ "encoder.block.13.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
393
+ "encoder.block.13.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
394
+ "encoder.block.13.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
395
+ "encoder.block.13.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
396
+ "encoder.block.13.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
397
+ "encoder.block.13.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
398
+ "encoder.block.13.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
399
+ "encoder.block.13.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
400
+ "encoder.block.14.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
401
+ "encoder.block.14.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
402
+ "encoder.block.14.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
403
+ "encoder.block.14.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
404
+ "encoder.block.14.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
405
+ "encoder.block.14.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
406
+ "encoder.block.14.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
407
+ "encoder.block.14.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
408
+ "encoder.block.14.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
409
+ "encoder.block.15.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
410
+ "encoder.block.15.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
411
+ "encoder.block.15.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
412
+ "encoder.block.15.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
413
+ "encoder.block.15.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
414
+ "encoder.block.15.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
415
+ "encoder.block.15.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
416
+ "encoder.block.15.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
417
+ "encoder.block.15.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
418
+ "encoder.block.16.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
419
+ "encoder.block.16.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
420
+ "encoder.block.16.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
421
+ "encoder.block.16.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
422
+ "encoder.block.16.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
423
+ "encoder.block.16.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
424
+ "encoder.block.16.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
425
+ "encoder.block.16.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
426
+ "encoder.block.16.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
427
+ "encoder.block.17.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
428
+ "encoder.block.17.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
429
+ "encoder.block.17.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
430
+ "encoder.block.17.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
431
+ "encoder.block.17.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
432
+ "encoder.block.17.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
433
+ "encoder.block.17.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
434
+ "encoder.block.17.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
435
+ "encoder.block.17.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
436
+ "encoder.block.18.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
437
+ "encoder.block.18.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
438
+ "encoder.block.18.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
439
+ "encoder.block.18.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
440
+ "encoder.block.18.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
441
+ "encoder.block.18.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
442
+ "encoder.block.18.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
443
+ "encoder.block.18.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
444
+ "encoder.block.18.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
445
+ "encoder.block.19.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
446
+ "encoder.block.19.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
447
+ "encoder.block.19.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
448
+ "encoder.block.19.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
449
+ "encoder.block.19.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
450
+ "encoder.block.19.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
451
+ "encoder.block.19.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
452
+ "encoder.block.19.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
453
+ "encoder.block.19.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
454
+ "encoder.block.2.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
455
+ "encoder.block.2.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
456
+ "encoder.block.2.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
457
+ "encoder.block.2.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
458
+ "encoder.block.2.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
459
+ "encoder.block.2.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
460
+ "encoder.block.2.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
461
+ "encoder.block.2.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
462
+ "encoder.block.2.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
463
+ "encoder.block.20.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
464
+ "encoder.block.20.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
465
+ "encoder.block.20.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
466
+ "encoder.block.20.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
467
+ "encoder.block.20.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
468
+ "encoder.block.20.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
469
+ "encoder.block.20.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
470
+ "encoder.block.20.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
471
+ "encoder.block.20.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
472
+ "encoder.block.21.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
473
+ "encoder.block.21.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
474
+ "encoder.block.21.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
475
+ "encoder.block.21.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
476
+ "encoder.block.21.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
477
+ "encoder.block.21.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
478
+ "encoder.block.21.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
479
+ "encoder.block.21.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
480
+ "encoder.block.21.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
481
+ "encoder.block.22.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
482
+ "encoder.block.22.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
483
+ "encoder.block.22.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
484
+ "encoder.block.22.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
485
+ "encoder.block.22.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
486
+ "encoder.block.22.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
487
+ "encoder.block.22.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
488
+ "encoder.block.22.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
489
+ "encoder.block.22.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
490
+ "encoder.block.23.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
491
+ "encoder.block.23.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
492
+ "encoder.block.23.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
493
+ "encoder.block.23.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
494
+ "encoder.block.23.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
495
+ "encoder.block.23.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
496
+ "encoder.block.23.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
497
+ "encoder.block.23.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
498
+ "encoder.block.23.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
499
+ "encoder.block.3.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
500
+ "encoder.block.3.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
501
+ "encoder.block.3.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
502
+ "encoder.block.3.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
503
+ "encoder.block.3.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
504
+ "encoder.block.3.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
505
+ "encoder.block.3.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
506
+ "encoder.block.3.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
507
+ "encoder.block.3.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
508
+ "encoder.block.4.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
509
+ "encoder.block.4.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
510
+ "encoder.block.4.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
511
+ "encoder.block.4.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
512
+ "encoder.block.4.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
513
+ "encoder.block.4.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
514
+ "encoder.block.4.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
515
+ "encoder.block.4.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
516
+ "encoder.block.4.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
517
+ "encoder.block.5.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
518
+ "encoder.block.5.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
519
+ "encoder.block.5.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
520
+ "encoder.block.5.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
521
+ "encoder.block.5.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
522
+ "encoder.block.5.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
523
+ "encoder.block.5.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
524
+ "encoder.block.5.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
525
+ "encoder.block.5.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
526
+ "encoder.block.6.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
527
+ "encoder.block.6.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
528
+ "encoder.block.6.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
529
+ "encoder.block.6.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
530
+ "encoder.block.6.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
531
+ "encoder.block.6.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
532
+ "encoder.block.6.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
533
+ "encoder.block.6.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
534
+ "encoder.block.6.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
535
+ "encoder.block.7.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
536
+ "encoder.block.7.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
537
+ "encoder.block.7.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
538
+ "encoder.block.7.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
539
+ "encoder.block.7.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
540
+ "encoder.block.7.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
541
+ "encoder.block.7.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
542
+ "encoder.block.7.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
543
+ "encoder.block.7.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
544
+ "encoder.block.8.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
545
+ "encoder.block.8.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
546
+ "encoder.block.8.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
547
+ "encoder.block.8.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
548
+ "encoder.block.8.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
549
+ "encoder.block.8.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
550
+ "encoder.block.8.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
551
+ "encoder.block.8.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
552
+ "encoder.block.8.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
553
+ "encoder.block.9.layer.0.SelfAttention.k.weight": "pytorch_model-00001-of-00002.bin",
554
+ "encoder.block.9.layer.0.SelfAttention.o.weight": "pytorch_model-00001-of-00002.bin",
555
+ "encoder.block.9.layer.0.SelfAttention.q.weight": "pytorch_model-00001-of-00002.bin",
556
+ "encoder.block.9.layer.0.SelfAttention.v.weight": "pytorch_model-00001-of-00002.bin",
557
+ "encoder.block.9.layer.0.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
558
+ "encoder.block.9.layer.1.DenseReluDense.wi_0.weight": "pytorch_model-00001-of-00002.bin",
559
+ "encoder.block.9.layer.1.DenseReluDense.wi_1.weight": "pytorch_model-00001-of-00002.bin",
560
+ "encoder.block.9.layer.1.DenseReluDense.wo.weight": "pytorch_model-00001-of-00002.bin",
561
+ "encoder.block.9.layer.1.layer_norm.weight": "pytorch_model-00001-of-00002.bin",
562
+ "encoder.final_layer_norm.weight": "pytorch_model-00001-of-00002.bin",
563
+ "lm_head.weight": "pytorch_model-00002-of-00002.bin",
564
+ "shared.weight": "pytorch_model-00001-of-00002.bin"
565
+ }
566
+ }
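The entries above complete the `weight_map` of `pytorch_model.bin.index.json`, which maps every parameter tensor to one of the two shard files. The parameter names follow the T5 module hierarchy: in each decoder block, `layer.0` is self-attention, `layer.1` is cross-attention (`EncDecAttention`), and `layer.2` is the gated feed-forward (`DenseReluDense` with `wi_0`/`wi_1`/`wo`). Keys are sorted lexicographically (so `decoder.block.3` follows `decoder.block.23`), and a single block can straddle the shard boundary (block 4's `wo.weight` sits in shard 2 while the rest of the block is in shard 1). `from_pretrained` resolves this index automatically, but you can also inspect it directly. A minimal sketch, assuming the index file is in the working directory:

```python
import json
from collections import defaultdict

# Read the shard index that ships alongside a sharded PyTorch checkpoint.
with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

weight_map = index["weight_map"]  # parameter name -> shard file name

# Look up which shard holds a given tensor.
print(weight_map["lm_head.weight"])  # -> pytorch_model-00002-of-00002.bin

# Group parameters by shard to see how the checkpoint was split.
by_shard = defaultdict(list)
for name, shard in weight_map.items():
    by_shard[shard].append(name)
for shard in sorted(by_shard):
    print(shard, len(by_shard[shard]), "tensors")
```

In practice you rarely touch this file yourself: `from_pretrained` reads the index, fetches the shards it needs, and stitches the state dict back together.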
special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
1
+ {
2
+ "eos_token": "</s>",
3
+ "pad_token": "<pad>",
4
+ "unk_token": "<unk>"
5
+ }
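`special_tokens_map.json` records only the string forms of the special tokens; their IDs come from the SentencePiece vocabulary. For mT5-style vocabularies these typically resolve to `<pad>` = 0, `</s>` = 1, `<unk>` = 2. A quick check, using the upstream `bigscience/mt0-xl` repo id for illustration (any copy of these files works):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigscience/mt0-xl")
for name in ("pad_token", "eos_token", "unk_token"):
    token = getattr(tok, name)
    print(name, repr(token), tok.convert_tokens_to_ids(token))
# Expected with the mT5 vocabulary: <pad> -> 0, </s> -> 1, <unk> -> 2
```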
spiece.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ef78f86560d809067d12bac6c09f19a462cb3af3f54d2b8acbba26e1433125d6
3
+ size 4309802
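`spiece.model` is tracked by Git LFS (per the `.gitattributes` rules for `*.model`), so the repository stores only this three-line pointer: the spec version, a SHA-256 `oid`, and the byte `size`. The ~4.3 MB SentencePiece model itself is fetched from LFS storage at checkout. The pointer doubles as a checksum, so a downloaded blob can be verified against it. A minimal sketch, with the file path assumed and the `oid`/`size` copied from the pointer above:

```python
import hashlib

def verify_lfs_blob(blob_path: str, expected_oid: str, expected_size: int) -> bool:
    """Check a downloaded file against its Git LFS pointer (sha256 oid + byte size)."""
    h = hashlib.sha256()
    size = 0
    with open(blob_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
            size += len(chunk)
    return h.hexdigest() == expected_oid and size == expected_size

print(verify_lfs_blob(
    "spiece.model",
    "ef78f86560d809067d12bac6c09f19a462cb3af3f54d2b8acbba26e1433125d6",
    4309802,
))
```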
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:93c3578052e1605d8332eb961bc08d72e246071974e4cc54aa6991826b802aa5
3
+ size 16330369
tokenizer_config.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "additional_special_tokens": null,
3
+ "eos_token": "</s>",
4
+ "extra_ids": 0,
5
+ "name_or_path": "google/mt5-large",
6
+ "pad_token": "<pad>",
7
+ "sp_model_kwargs": {},
8
+ "special_tokens_map_file": "/home/patrick/.cache/torch/transformers/685ac0ca8568ec593a48b61b0a3c272beee9bc194a3c7241d15dcadb5f875e53.f76030f3ec1b96a8199b2593390c610e76ca8028ef3d24680000619ffb646276",
9
+ "tokenizer_class": "T5Tokenizer",
10
+ "unk_token": "<unk>"
11
+ }
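`tokenizer_config.json` pins the tokenizer class (`T5Tokenizer`) and its construction arguments: `extra_ids: 0` means no `<extra_id_*>` sentinel tokens are appended to the vocabulary, and `name_or_path` records the checkpoint the tokenizer config was originally saved from (here `google/mt5-large`). `AutoTokenizer` consults this file to pick the right class. A minimal end-to-end sketch, again using the upstream repo id for illustration (the prompt string is just an example):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# AutoTokenizer reads tokenizer_config.json and instantiates the pinned
# T5Tokenizer (or its fast equivalent, built from tokenizer.json).
tok = AutoTokenizer.from_pretrained("bigscience/mt0-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-xl")

inputs = tok("Translate to English: Je t'aime.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(outputs[0], skip_special_tokens=True))
```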