bobox committed on
Commit 6b3d1b1 · verified · 1 Parent(s): fe62935

Training in progress, step 110, checkpoint

checkpoint-110/1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
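With only `pooling_mode_mean_tokens` enabled, the sentence embedding is simply the attention-masked average of the 768-dimensional token embeddings. A minimal PyTorch sketch of that operation (illustrative only, not the library's exact implementation):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Masked mean over the token axis, as configured above (pooling_mode_mean_tokens)."""
    mask = attention_mask.unsqueeze(-1).to(token_embeddings.dtype)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(dim=1)                   # zero out padding, then sum
    counts = mask.sum(dim=1).clamp(min=1e-9)                        # number of real tokens
    return summed / counts                                          # (batch, 768)

# Toy check with random "token embeddings"
emb = torch.randn(2, 5, 768)
mask = torch.tensor([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]])
print(mean_pool(emb, mask).shape)  # torch.Size([2, 768])
```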
checkpoint-110/README.md ADDED
@@ -0,0 +1,964 @@
1
+ ---
2
+ base_model: microsoft/deberta-v3-small
3
+ datasets: []
4
+ language: []
5
+ library_name: sentence-transformers
6
+ metrics:
7
+ - pearson_cosine
8
+ - spearman_cosine
9
+ - pearson_manhattan
10
+ - spearman_manhattan
11
+ - pearson_euclidean
12
+ - spearman_euclidean
13
+ - pearson_dot
14
+ - spearman_dot
15
+ - pearson_max
16
+ - spearman_max
17
+ - cosine_accuracy
18
+ - cosine_accuracy_threshold
19
+ - cosine_f1
20
+ - cosine_f1_threshold
21
+ - cosine_precision
22
+ - cosine_recall
23
+ - cosine_ap
24
+ - dot_accuracy
25
+ - dot_accuracy_threshold
26
+ - dot_f1
27
+ - dot_f1_threshold
28
+ - dot_precision
29
+ - dot_recall
30
+ - dot_ap
31
+ - manhattan_accuracy
32
+ - manhattan_accuracy_threshold
33
+ - manhattan_f1
34
+ - manhattan_f1_threshold
35
+ - manhattan_precision
36
+ - manhattan_recall
37
+ - manhattan_ap
38
+ - euclidean_accuracy
39
+ - euclidean_accuracy_threshold
40
+ - euclidean_f1
41
+ - euclidean_f1_threshold
42
+ - euclidean_precision
43
+ - euclidean_recall
44
+ - euclidean_ap
45
+ - max_accuracy
46
+ - max_accuracy_threshold
47
+ - max_f1
48
+ - max_f1_threshold
49
+ - max_precision
50
+ - max_recall
51
+ - max_ap
52
+ pipeline_tag: sentence-similarity
53
+ tags:
54
+ - sentence-transformers
55
+ - sentence-similarity
56
+ - feature-extraction
57
+ - generated_from_trainer
58
+ - dataset_size:116445
59
+ - loss:CachedGISTEmbedLoss
60
+ widget:
61
+ - source_sentence: what is the main purpose of the brain
62
+ sentences:
63
+ - Brain Physiologically, the function of the brain is to exert centralized control
64
+ over the other organs of the body. The brain acts on the rest of the body both
65
+ by generating patterns of muscle activity and by driving the secretion of chemicals
66
+ called hormones. This centralized control allows rapid and coordinated responses
67
+ to changes in the environment. Some basic types of responsiveness such as reflexes
68
+ can be mediated by the spinal cord or peripheral ganglia, but sophisticated purposeful
69
+ control of behavior based on complex sensory input requires the information integrating
70
+ capabilities of a centralized brain.
71
+ - How do scientists know that some mountains were once at the bottom of an ocean?
72
+ - The Smiths Wiki | Fandom powered by Wikia Share Ad blocker interference detected!
73
+ Wikia is a free-to-use site that makes money from advertising. We have a modified
74
+ experience for viewers using ad blockers Wikia is not accessible if you’ve made
75
+ further modifications. Remove the custom ad blocker rule(s) and the page will
76
+ load as expected. The Smiths were an English rock band formed in Manchester in
77
+ 1982. Based on the songwriting partnership of Morrissey (vocals) and Johnny Marr
78
+ (guitar), the band also included Andy Rourke (bass), Mike Joyce (drums) and for
79
+ a brief time Craig Gannon (rhythm guitar). Critics have called them one of the
80
+ most important alternative rock bands to emerge from the British independent music
81
+ scene of the 1980s,and the group has had major influence on subsequent artists.
82
+ Morrissey's lovelorn tales of alienation found an audience amongst youth culture
83
+ bored by the ubiquitous synthesiser-pop bands of the early 1980s, while Marr's
84
+ complex melodies helped return guitar-based music to popularity. The group were
85
+ signed to the independent record label Rough Trade Records , for whom they released
86
+ four studio albums and several compilations, as well as numerous non-LP singles.
87
+ Although they had limited commercial success outside the UK while they were still
88
+ together, and never released a single that charted higher than number 10 in their
89
+ home country, The Smiths won a growing following, and they remain cult and commercial
90
+ favourites. The band broke up in 1987 amid disagreements between Morrissey and
91
+ Marr and has turned down several offers to reform. Welcome to The Smiths Wiki
92
+ - source_sentence: There were 29 Muslims fatalities in the Cave of the Patriarchs
93
+ massacre .
94
+ sentences:
95
+ - In August , after the end of the war in June 1902 , Higgins Southampton left the
96
+ `` SSBavarian '' and returned to Cape Town the following month .
97
+ - Between 29 and 52 Muslims were killed and more than 100 others wounded . [ Settlers
98
+ remember gunman Goldstein ; Hebron riots continue ] .
99
+ - 29 Muslims were killed and more than 100 others wounded . [ Settlers remember
100
+ gunman Goldstein ; Hebron riots continue ] .
101
+ - source_sentence: are tabby cats all male?
102
+ sentences:
103
+ - Did you know orange tabby cats are typically male? In fact, up to 80 percent of
104
+ orange tabbies are male, making orange female cats a bit of a rarity. According
105
+ to the BBC's Focus Magazine, the ginger gene in cats works a little differently
106
+ compared to humans; it is on the X chromosome.
107
+ - Shawnee Trails Council was formed from the merger of the Four Rivers Council and
108
+ the Audubon Council .
109
+ - 'A picture of a modern looking kitchen area
110
+
111
+ '
112
+ - source_sentence: Aamir Khan agreed to act immediately after reading Mehra 's screenplay
113
+ in `` Rang De Basanti '' .
114
+ sentences:
115
+ - Chris Rea — Free listening, videos, concerts, stats and photos at Last.fm singer-songwriter
116
+ Christopher Anton Rea (pronounced Ree-ah), born 4 March 1951, is a singer, songwriter,
117
+ and guitarist from Middlesbrough, England. Rea's recording career began in 1978.
118
+ Although he almost immediately had a US hit single with "Fool (If You Think It's
119
+ Over)", Rea's initial focus was on continental Europe, releasing eight albums
120
+ in the 1980s. It wasn't until 1985's Shamrock Diaries and the songs "Stainsby
121
+ Girls" and "Josephine," that UK audiences began to take notice of him. Follow
122
+ up albums… read more
123
+ - "Healthy Fast Food Meal No. 1. Grilled Chicken Sandwich and Fruit Cup (Chick-fil-A)\
124
+ \ Several fast food chains offer a grilled chicken sandwich. The trick is ordering\
125
+ \ it without mayo or creamy sauce, and making sure itâ\x80\x99s served with a\
126
+ \ whole grain bun."
127
+ - Aamir Khan agreed to act in `` Rang De Basanti '' immediately after reading Mehra
128
+ 's script .
129
+ - source_sentence: 'A man wearing a blue bow tie and a fedora hat in a car. '
130
+ sentences:
131
+ - A man takes a photo of himself wearing a bowtie and hat
132
+ - Scientists explain the world based on what?
133
+ - 'County of Angus - definition of County of Angus by The Free Dictionary County
134
+ of Angus - definition of County of Angus by The Free Dictionary http://www.thefreedictionary.com/County+of+Angus
135
+  (ăng′gəs) n. Any of a breed of hornless beef cattle that originated in Scotland
136
+ and are usually black but also occur in a red variety. Also called Black Angus.
137
+ [After Angus, former county of Scotland.] Angus (ˈæŋɡəs) n (Placename) a council
138
+ area of E Scotland on the North Sea: the historical county of Angus became part
139
+ of Tayside region in 1975; reinstated as a unitary authority (excluding City of
140
+ Dundee) in 1996. Administrative centre: Forfar. Pop: 107 520 (2003 est). Area:
141
+ 2181 sq km (842 sq miles) An•gus'
142
+ model-index:
143
+ - name: SentenceTransformer based on microsoft/deberta-v3-small
144
+ results:
145
+ - task:
146
+ type: semantic-similarity
147
+ name: Semantic Similarity
148
+ dataset:
149
+ name: sts test
150
+ type: sts-test
151
+ metrics:
152
+ - type: pearson_cosine
153
+ value: 0.7489263204555723
154
+ name: Pearson Cosine
155
+ - type: spearman_cosine
156
+ value: 0.7626005619606424
157
+ name: Spearman Cosine
158
+ - type: pearson_manhattan
159
+ value: 0.7591990025704353
160
+ name: Pearson Manhattan
161
+ - type: spearman_manhattan
162
+ value: 0.7477882076989188
163
+ name: Spearman Manhattan
164
+ - type: pearson_euclidean
165
+ value: 0.7622787611500085
166
+ name: Pearson Euclidean
167
+ - type: spearman_euclidean
168
+ value: 0.7539243664071233
169
+ name: Spearman Euclidean
170
+ - type: pearson_dot
171
+ value: 0.6493790443582248
172
+ name: Pearson Dot
173
+ - type: spearman_dot
174
+ value: 0.6306412644605037
175
+ name: Spearman Dot
176
+ - type: pearson_max
177
+ value: 0.7622787611500085
178
+ name: Pearson Max
179
+ - type: spearman_max
180
+ value: 0.7626005619606424
181
+ name: Spearman Max
182
+ - task:
183
+ type: binary-classification
184
+ name: Binary Classification
185
+ dataset:
186
+ name: allNLI dev
187
+ type: allNLI-dev
188
+ metrics:
189
+ - type: cosine_accuracy
190
+ value: 0.7109375
191
+ name: Cosine Accuracy
192
+ - type: cosine_accuracy_threshold
193
+ value: 0.916961669921875
194
+ name: Cosine Accuracy Threshold
195
+ - type: cosine_f1
196
+ value: 0.5853658536585366
197
+ name: Cosine F1
198
+ - type: cosine_f1_threshold
199
+ value: 0.8279993534088135
200
+ name: Cosine F1 Threshold
201
+ - type: cosine_precision
202
+ value: 0.4748201438848921
203
+ name: Cosine Precision
204
+ - type: cosine_recall
205
+ value: 0.7630057803468208
206
+ name: Cosine Recall
207
+ - type: cosine_ap
208
+ value: 0.5495769497490841
209
+ name: Cosine Ap
210
+ - type: dot_accuracy
211
+ value: 0.671875
212
+ name: Dot Accuracy
213
+ - type: dot_accuracy_threshold
214
+ value: 481.2850646972656
215
+ name: Dot Accuracy Threshold
216
+ - type: dot_f1
217
+ value: 0.549165120593692
218
+ name: Dot F1
219
+ - type: dot_f1_threshold
220
+ value: 381.15167236328125
221
+ name: Dot F1 Threshold
222
+ - type: dot_precision
223
+ value: 0.40437158469945356
224
+ name: Dot Precision
225
+ - type: dot_recall
226
+ value: 0.8554913294797688
227
+ name: Dot Recall
228
+ - type: dot_ap
229
+ value: 0.45293867777170244
230
+ name: Dot Ap
231
+ - type: manhattan_accuracy
232
+ value: 0.71484375
233
+ name: Manhattan Accuracy
234
+ - type: manhattan_accuracy_threshold
235
+ value: 186.7671356201172
236
+ name: Manhattan Accuracy Threshold
237
+ - type: manhattan_f1
238
+ value: 0.5696465696465696
239
+ name: Manhattan F1
240
+ - type: manhattan_f1_threshold
241
+ value: 268.783935546875
242
+ name: Manhattan F1 Threshold
243
+ - type: manhattan_precision
244
+ value: 0.4448051948051948
245
+ name: Manhattan Precision
246
+ - type: manhattan_recall
247
+ value: 0.791907514450867
248
+ name: Manhattan Recall
249
+ - type: manhattan_ap
250
+ value: 0.5511647333663136
251
+ name: Manhattan Ap
252
+ - type: euclidean_accuracy
253
+ value: 0.71484375
254
+ name: Euclidean Accuracy
255
+ - type: euclidean_accuracy_threshold
256
+ value: 8.915003776550293
257
+ name: Euclidean Accuracy Threshold
258
+ - type: euclidean_f1
259
+ value: 0.574074074074074
260
+ name: Euclidean F1
261
+ - type: euclidean_f1_threshold
262
+ value: 12.812746047973633
263
+ name: Euclidean F1 Threshold
264
+ - type: euclidean_precision
265
+ value: 0.47876447876447875
266
+ name: Euclidean Precision
267
+ - type: euclidean_recall
268
+ value: 0.7167630057803468
269
+ name: Euclidean Recall
270
+ - type: euclidean_ap
271
+ value: 0.5535962824434967
272
+ name: Euclidean Ap
273
+ - type: max_accuracy
274
+ value: 0.71484375
275
+ name: Max Accuracy
276
+ - type: max_accuracy_threshold
277
+ value: 481.2850646972656
278
+ name: Max Accuracy Threshold
279
+ - type: max_f1
280
+ value: 0.5853658536585366
281
+ name: Max F1
282
+ - type: max_f1_threshold
283
+ value: 381.15167236328125
284
+ name: Max F1 Threshold
285
+ - type: max_precision
286
+ value: 0.47876447876447875
287
+ name: Max Precision
288
+ - type: max_recall
289
+ value: 0.8554913294797688
290
+ name: Max Recall
291
+ - type: max_ap
292
+ value: 0.5535962824434967
293
+ name: Max Ap
294
+ - task:
295
+ type: binary-classification
296
+ name: Binary Classification
297
+ dataset:
298
+ name: Qnli dev
299
+ type: Qnli-dev
300
+ metrics:
301
+ - type: cosine_accuracy
302
+ value: 0.681640625
303
+ name: Cosine Accuracy
304
+ - type: cosine_accuracy_threshold
305
+ value: 0.8160840272903442
306
+ name: Cosine Accuracy Threshold
307
+ - type: cosine_f1
308
+ value: 0.6917562724014337
309
+ name: Cosine F1
310
+ - type: cosine_f1_threshold
311
+ value: 0.7854001522064209
312
+ name: Cosine F1 Threshold
313
+ - type: cosine_precision
314
+ value: 0.5993788819875776
315
+ name: Cosine Precision
316
+ - type: cosine_recall
317
+ value: 0.8177966101694916
318
+ name: Cosine Recall
319
+ - type: cosine_ap
320
+ value: 0.7109982147608755
321
+ name: Cosine Ap
322
+ - type: dot_accuracy
323
+ value: 0.6484375
324
+ name: Dot Accuracy
325
+ - type: dot_accuracy_threshold
326
+ value: 392.5464782714844
327
+ name: Dot Accuracy Threshold
328
+ - type: dot_f1
329
+ value: 0.6688311688311689
330
+ name: Dot F1
331
+ - type: dot_f1_threshold
332
+ value: 368.7878723144531
333
+ name: Dot F1 Threshold
334
+ - type: dot_precision
335
+ value: 0.5421052631578948
336
+ name: Dot Precision
337
+ - type: dot_recall
338
+ value: 0.8728813559322034
339
+ name: Dot Recall
340
+ - type: dot_ap
341
+ value: 0.6053421534358263
342
+ name: Dot Ap
343
+ - type: manhattan_accuracy
344
+ value: 0.685546875
345
+ name: Manhattan Accuracy
346
+ - type: manhattan_accuracy_threshold
347
+ value: 244.63809204101562
348
+ name: Manhattan Accuracy Threshold
349
+ - type: manhattan_f1
350
+ value: 0.6938053097345133
351
+ name: Manhattan F1
352
+ - type: manhattan_f1_threshold
353
+ value: 295.4796142578125
354
+ name: Manhattan F1 Threshold
355
+ - type: manhattan_precision
356
+ value: 0.5957446808510638
357
+ name: Manhattan Precision
358
+ - type: manhattan_recall
359
+ value: 0.8305084745762712
360
+ name: Manhattan Recall
361
+ - type: manhattan_ap
362
+ value: 0.7216536349653324
363
+ name: Manhattan Ap
364
+ - type: euclidean_accuracy
365
+ value: 0.6875
366
+ name: Euclidean Accuracy
367
+ - type: euclidean_accuracy_threshold
368
+ value: 13.026724815368652
369
+ name: Euclidean Accuracy Threshold
370
+ - type: euclidean_f1
371
+ value: 0.689407540394973
372
+ name: Euclidean F1
373
+ - type: euclidean_f1_threshold
374
+ value: 14.538017272949219
375
+ name: Euclidean F1 Threshold
376
+ - type: euclidean_precision
377
+ value: 0.5981308411214953
378
+ name: Euclidean Precision
379
+ - type: euclidean_recall
380
+ value: 0.8135593220338984
381
+ name: Euclidean Recall
382
+ - type: euclidean_ap
383
+ value: 0.7181091181717016
384
+ name: Euclidean Ap
385
+ - type: max_accuracy
386
+ value: 0.6875
387
+ name: Max Accuracy
388
+ - type: max_accuracy_threshold
389
+ value: 392.5464782714844
390
+ name: Max Accuracy Threshold
391
+ - type: max_f1
392
+ value: 0.6938053097345133
393
+ name: Max F1
394
+ - type: max_f1_threshold
395
+ value: 368.7878723144531
396
+ name: Max F1 Threshold
397
+ - type: max_precision
398
+ value: 0.5993788819875776
399
+ name: Max Precision
400
+ - type: max_recall
401
+ value: 0.8728813559322034
402
+ name: Max Recall
403
+ - type: max_ap
404
+ value: 0.7216536349653324
405
+ name: Max Ap
406
+ ---
407
+
408
+ # SentenceTransformer based on microsoft/deberta-v3-small
409
+
410
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the bobox/enhanced_nli-50_k dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
411
+
412
+ ## Model Details
413
+
414
+ ### Model Description
415
+ - **Model Type:** Sentence Transformer
416
+ - **Base model:** [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) <!-- at revision a36c739020e01763fe789b4b85e2df55d6180012 -->
417
+ - **Maximum Sequence Length:** 512 tokens
418
+ - **Output Dimensionality:** 768 tokens
419
+ - **Similarity Function:** Cosine Similarity
420
+ - **Training Dataset:**
421
+ - bobox/enhanced_nli-50_k
422
+ <!-- - **Language:** Unknown -->
423
+ <!-- - **License:** Unknown -->
424
+
425
+ ### Model Sources
426
+
427
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
428
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
429
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
430
+
431
+ ### Full Model Architecture
432
+
433
+ ```
434
+ SentenceTransformer(
435
+ (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DebertaV2Model
436
+ (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
437
+ )
438
+ ```
439
+
440
+ ## Usage
441
+
442
+ ### Direct Usage (Sentence Transformers)
443
+
444
+ First install the Sentence Transformers library:
445
+
446
+ ```bash
447
+ pip install -U sentence-transformers
448
+ ```
449
+
450
+ Then you can load this model and run inference.
451
+ ```python
452
+ from sentence_transformers import SentenceTransformer
453
+
454
+ # Download from the 🤗 Hub
455
+ model = SentenceTransformer("bobox/DeBERTa-small-ST-UnifiedDatasets-baseline-checkpoints-tmp")
456
+ # Run inference
457
+ sentences = [
458
+ 'A man wearing a blue bow tie and a fedora hat in a car. ',
459
+ 'A man takes a photo of himself wearing a bowtie and hat',
460
+ 'County of Angus - definition of County of Angus by The Free Dictionary County of Angus - definition of County of Angus by The Free Dictionary http://www.thefreedictionary.com/County+of+Angus \xa0(ăng′gəs) n. Any of a breed of hornless beef cattle that originated in Scotland and are usually black but also occur in a red variety. Also called Black Angus. [After Angus, former county of Scotland.] Angus (ˈæŋɡəs) n (Placename) a council area of E Scotland on the North Sea: the historical county of Angus became part of Tayside region in 1975; reinstated as a unitary authority (excluding City of Dundee) in 1996. Administrative centre: Forfar. Pop: 107 520 (2003 est). Area: 2181 sq km (842 sq miles) An•gus',
461
+ ]
462
+ embeddings = model.encode(sentences)
463
+ print(embeddings.shape)
464
+ # [3, 768]
465
+
466
+ # Get the similarity scores for the embeddings
467
+ similarities = model.similarity(embeddings, embeddings)
468
+ print(similarities.shape)
469
+ # [3, 3]
470
+ ```
471
+
472
+ <!--
473
+ ### Direct Usage (Transformers)
474
+
475
+ <details><summary>Click to see the direct usage in Transformers</summary>
476
+
477
+ </details>
478
+ -->
479
+
480
+ <!--
481
+ ### Downstream Usage (Sentence Transformers)
482
+
483
+ You can finetune this model on your own dataset.
484
+
485
+ <details><summary>Click to expand</summary>
486
+
487
+ </details>
488
+ -->
489
+
490
+ <!--
491
+ ### Out-of-Scope Use
492
+
493
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
494
+ -->
495
+
496
+ ## Evaluation
497
+
498
+ ### Metrics
499
+
500
+ #### Semantic Similarity
501
+ * Dataset: `sts-test`
502
+ * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
503
+
504
+ | Metric | Value |
505
+ |:--------------------|:-----------|
506
+ | pearson_cosine | 0.7489 |
507
+ | **spearman_cosine** | **0.7626** |
508
+ | pearson_manhattan | 0.7592 |
509
+ | spearman_manhattan | 0.7478 |
510
+ | pearson_euclidean | 0.7623 |
511
+ | spearman_euclidean | 0.7539 |
512
+ | pearson_dot | 0.6494 |
513
+ | spearman_dot | 0.6306 |
514
+ | pearson_max | 0.7623 |
515
+ | spearman_max | 0.7626 |
516
+
517
+ #### Binary Classification
518
+ * Dataset: `allNLI-dev`
519
+ * Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
520
+
521
+ | Metric | Value |
522
+ |:-----------------------------|:-----------|
523
+ | cosine_accuracy | 0.7109 |
524
+ | cosine_accuracy_threshold | 0.917 |
525
+ | cosine_f1 | 0.5854 |
526
+ | cosine_f1_threshold | 0.828 |
527
+ | cosine_precision | 0.4748 |
528
+ | cosine_recall | 0.763 |
529
+ | cosine_ap | 0.5496 |
530
+ | dot_accuracy | 0.6719 |
531
+ | dot_accuracy_threshold | 481.2851 |
532
+ | dot_f1 | 0.5492 |
533
+ | dot_f1_threshold | 381.1517 |
534
+ | dot_precision | 0.4044 |
535
+ | dot_recall | 0.8555 |
536
+ | dot_ap | 0.4529 |
537
+ | manhattan_accuracy | 0.7148 |
538
+ | manhattan_accuracy_threshold | 186.7671 |
539
+ | manhattan_f1 | 0.5696 |
540
+ | manhattan_f1_threshold | 268.7839 |
541
+ | manhattan_precision | 0.4448 |
542
+ | manhattan_recall | 0.7919 |
543
+ | manhattan_ap | 0.5512 |
544
+ | euclidean_accuracy | 0.7148 |
545
+ | euclidean_accuracy_threshold | 8.915 |
546
+ | euclidean_f1 | 0.5741 |
547
+ | euclidean_f1_threshold | 12.8127 |
548
+ | euclidean_precision | 0.4788 |
549
+ | euclidean_recall | 0.7168 |
550
+ | euclidean_ap | 0.5536 |
551
+ | max_accuracy | 0.7148 |
552
+ | max_accuracy_threshold | 481.2851 |
553
+ | max_f1 | 0.5854 |
554
+ | max_f1_threshold | 381.1517 |
555
+ | max_precision | 0.4788 |
556
+ | max_recall | 0.8555 |
557
+ | **max_ap** | **0.5536** |
558
+
559
+ #### Binary Classification
560
+ * Dataset: `Qnli-dev`
561
+ * Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
562
+
563
+ | Metric | Value |
564
+ |:-----------------------------|:-----------|
565
+ | cosine_accuracy | 0.6816 |
566
+ | cosine_accuracy_threshold | 0.8161 |
567
+ | cosine_f1 | 0.6918 |
568
+ | cosine_f1_threshold | 0.7854 |
569
+ | cosine_precision | 0.5994 |
570
+ | cosine_recall | 0.8178 |
571
+ | cosine_ap | 0.711 |
572
+ | dot_accuracy | 0.6484 |
573
+ | dot_accuracy_threshold | 392.5465 |
574
+ | dot_f1 | 0.6688 |
575
+ | dot_f1_threshold | 368.7879 |
576
+ | dot_precision | 0.5421 |
577
+ | dot_recall | 0.8729 |
578
+ | dot_ap | 0.6053 |
579
+ | manhattan_accuracy | 0.6855 |
580
+ | manhattan_accuracy_threshold | 244.6381 |
581
+ | manhattan_f1 | 0.6938 |
582
+ | manhattan_f1_threshold | 295.4796 |
583
+ | manhattan_precision | 0.5957 |
584
+ | manhattan_recall | 0.8305 |
585
+ | manhattan_ap | 0.7217 |
586
+ | euclidean_accuracy | 0.6875 |
587
+ | euclidean_accuracy_threshold | 13.0267 |
588
+ | euclidean_f1 | 0.6894 |
589
+ | euclidean_f1_threshold | 14.538 |
590
+ | euclidean_precision | 0.5981 |
591
+ | euclidean_recall | 0.8136 |
592
+ | euclidean_ap | 0.7181 |
593
+ | max_accuracy | 0.6875 |
594
+ | max_accuracy_threshold | 392.5465 |
595
+ | max_f1 | 0.6938 |
596
+ | max_f1_threshold | 368.7879 |
597
+ | max_precision | 0.5994 |
598
+ | max_recall | 0.8729 |
599
+ | **max_ap** | **0.7217** |
600
+
601
+ <!--
602
+ ## Bias, Risks and Limitations
603
+
604
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
605
+ -->
606
+
607
+ <!--
608
+ ### Recommendations
609
+
610
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
611
+ -->
612
+
613
+ ## Training Details
614
+
615
+ ### Training Dataset
616
+
617
+ #### bobox/enhanced_nli-50_k
618
+
619
+ * Dataset: bobox/enhanced_nli-50_k
620
+ * Size: 116,445 training samples
621
+ * Columns: <code>sentence1</code> and <code>sentence2</code>
622
+ * Approximate statistics based on the first 1000 samples:
623
+ | | sentence1 | sentence2 |
624
+ |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
625
+ | type | string | string |
626
+ | details | <ul><li>min: 4 tokens</li><li>mean: 33.67 tokens</li><li>max: 338 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 51.48 tokens</li><li>max: 512 tokens</li></ul> |
627
+ * Samples:
628
+ | sentence1 | sentence2 |
629
+ |:---------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
630
+ | <code>who is darnell from my name is earl</code> | <code>Eddie Steeples Eddie Steeples (born November 25, 1973)[1] is an American actor known for his roles as the "Rubberband Man" in an advertising campaign for OfficeMax, and as Darnell Turner on the NBC sitcom My Name Is Earl.</code> |
631
+ | <code>Ferrell and the Chili Peppers toured together in 2013 .</code> | <code>Ferrell and the Chili Peppers wrapped up I 'm With You World Tour in April 2013 .</code> |
632
+ | <code>Cells have four cycles.</code> | <code>How many cycles do cells have?</code> |
633
+ * Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
634
+ ```json
635
+ {'guide': SentenceTransformer(
636
+ (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
637
+ (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
638
+ (2): Normalize()
639
+ ), 'temperature': 0.025}
640
+ ```
641
+
642
+ ### Evaluation Dataset
643
+
644
+ #### bobox/enhanced_nli-50_k
645
+
646
+ * Dataset: bobox/enhanced_nli-50_k
647
+ * Size: 1,506 evaluation samples
648
+ * Columns: <code>sentence1</code> and <code>sentence2</code>
649
+ * Approximate statistics based on the first 1000 samples:
650
+ | | sentence1 | sentence2 |
651
+ |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
652
+ | type | string | string |
653
+ | details | <ul><li>min: 3 tokens</li><li>mean: 32.36 tokens</li><li>max: 341 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 61.99 tokens</li><li>max: 431 tokens</li></ul> |
654
+ * Samples:
655
+ | sentence1 | sentence2 |
656
+ |:----------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
657
+ | <code>Interestingly, snakes use their forked tongues to smell.</code> | <code>Snakes use their tongue to smell things.</code> |
658
+ | <code>Soil is a renewable resource that can take thousand of years to form.</code> | <code>What is a renewable resource that can take thousand of years to form?</code> |
659
+ | <code>As of March 22 , there were more than 321,000 cases with over 13,600 deaths and more than 96,000 recoveries reported worldwide .</code> | <code>As of 22 March , more than 321,000 cases of COVID-19 have been reported in over 180 countries and territories , resulting in more than 13,600 deaths and 96,000 recoveries .</code> |
660
+ * Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
661
+ ```json
662
+ {'guide': SentenceTransformer(
663
+ (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
664
+ (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
665
+ (2): Normalize()
666
+ ), 'temperature': 0.025}
667
+ ```
668
+
669
+ ### Training Hyperparameters
670
+ #### Non-Default Hyperparameters
671
+
672
+ - `eval_strategy`: steps
673
+ - `per_device_train_batch_size`: 640
674
+ - `per_device_eval_batch_size`: 128
675
+ - `learning_rate`: 3.75e-05
676
+ - `weight_decay`: 0.0005
677
+ - `lr_scheduler_type`: cosine_with_min_lr
678
+ - `lr_scheduler_kwargs`: {'num_cycles': 0.5, 'min_lr': 7.499999999999999e-06}
679
+ - `warmup_ratio`: 0.33
680
+ - `save_safetensors`: False
681
+ - `fp16`: True
682
+ - `push_to_hub`: True
683
+ - `hub_model_id`: bobox/DeBERTa-small-ST-UnifiedDatasets-baseline-checkpoints-tmp
684
+ - `hub_strategy`: all_checkpoints
685
+ - `batch_sampler`: no_duplicates
686
+
687
+ #### All Hyperparameters
688
+ <details><summary>Click to expand</summary>
689
+
690
+ - `overwrite_output_dir`: False
691
+ - `do_predict`: False
692
+ - `eval_strategy`: steps
693
+ - `prediction_loss_only`: True
694
+ - `per_device_train_batch_size`: 640
695
+ - `per_device_eval_batch_size`: 128
696
+ - `per_gpu_train_batch_size`: None
697
+ - `per_gpu_eval_batch_size`: None
698
+ - `gradient_accumulation_steps`: 1
699
+ - `eval_accumulation_steps`: None
700
+ - `torch_empty_cache_steps`: None
701
+ - `learning_rate`: 3.75e-05
702
+ - `weight_decay`: 0.0005
703
+ - `adam_beta1`: 0.9
704
+ - `adam_beta2`: 0.999
705
+ - `adam_epsilon`: 1e-08
706
+ - `max_grad_norm`: 1.0
707
+ - `num_train_epochs`: 3
708
+ - `max_steps`: -1
709
+ - `lr_scheduler_type`: cosine_with_min_lr
710
+ - `lr_scheduler_kwargs`: {'num_cycles': 0.5, 'min_lr': 7.499999999999999e-06}
711
+ - `warmup_ratio`: 0.33
712
+ - `warmup_steps`: 0
713
+ - `log_level`: passive
714
+ - `log_level_replica`: warning
715
+ - `log_on_each_node`: True
716
+ - `logging_nan_inf_filter`: True
717
+ - `save_safetensors`: False
718
+ - `save_on_each_node`: False
719
+ - `save_only_model`: False
720
+ - `restore_callback_states_from_checkpoint`: False
721
+ - `no_cuda`: False
722
+ - `use_cpu`: False
723
+ - `use_mps_device`: False
724
+ - `seed`: 42
725
+ - `data_seed`: None
726
+ - `jit_mode_eval`: False
727
+ - `use_ipex`: False
728
+ - `bf16`: False
729
+ - `fp16`: True
730
+ - `fp16_opt_level`: O1
731
+ - `half_precision_backend`: auto
732
+ - `bf16_full_eval`: False
733
+ - `fp16_full_eval`: False
734
+ - `tf32`: None
735
+ - `local_rank`: 0
736
+ - `ddp_backend`: None
737
+ - `tpu_num_cores`: None
738
+ - `tpu_metrics_debug`: False
739
+ - `debug`: []
740
+ - `dataloader_drop_last`: False
741
+ - `dataloader_num_workers`: 0
742
+ - `dataloader_prefetch_factor`: None
743
+ - `past_index`: -1
744
+ - `disable_tqdm`: False
745
+ - `remove_unused_columns`: True
746
+ - `label_names`: None
747
+ - `load_best_model_at_end`: False
748
+ - `ignore_data_skip`: False
749
+ - `fsdp`: []
750
+ - `fsdp_min_num_params`: 0
751
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
752
+ - `fsdp_transformer_layer_cls_to_wrap`: None
753
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
754
+ - `deepspeed`: None
755
+ - `label_smoothing_factor`: 0.0
756
+ - `optim`: adamw_torch
757
+ - `optim_args`: None
758
+ - `adafactor`: False
759
+ - `group_by_length`: False
760
+ - `length_column_name`: length
761
+ - `ddp_find_unused_parameters`: None
762
+ - `ddp_bucket_cap_mb`: None
763
+ - `ddp_broadcast_buffers`: False
764
+ - `dataloader_pin_memory`: True
765
+ - `dataloader_persistent_workers`: False
766
+ - `skip_memory_metrics`: True
767
+ - `use_legacy_prediction_loop`: False
768
+ - `push_to_hub`: True
769
+ - `resume_from_checkpoint`: None
770
+ - `hub_model_id`: bobox/DeBERTa-small-ST-UnifiedDatasets-baseline-checkpoints-tmp
771
+ - `hub_strategy`: all_checkpoints
772
+ - `hub_private_repo`: False
773
+ - `hub_always_push`: False
774
+ - `gradient_checkpointing`: False
775
+ - `gradient_checkpointing_kwargs`: None
776
+ - `include_inputs_for_metrics`: False
777
+ - `eval_do_concat_batches`: True
778
+ - `fp16_backend`: auto
779
+ - `push_to_hub_model_id`: None
780
+ - `push_to_hub_organization`: None
781
+ - `mp_parameters`:
782
+ - `auto_find_batch_size`: False
783
+ - `full_determinism`: False
784
+ - `torchdynamo`: None
785
+ - `ray_scope`: last
786
+ - `ddp_timeout`: 1800
787
+ - `torch_compile`: False
788
+ - `torch_compile_backend`: None
789
+ - `torch_compile_mode`: None
790
+ - `dispatch_batches`: None
791
+ - `split_batches`: None
792
+ - `include_tokens_per_second`: False
793
+ - `include_num_input_tokens_seen`: False
794
+ - `neftune_noise_alpha`: None
795
+ - `optim_target_modules`: None
796
+ - `batch_eval_metrics`: False
797
+ - `eval_on_start`: False
798
+ - `eval_use_gather_object`: False
799
+ - `batch_sampler`: no_duplicates
800
+ - `multi_dataset_batch_sampler`: proportional
801
+
802
+ </details>
803
+
804
+ ### Training Logs
805
+ <details><summary>Click to expand</summary>
806
+
807
+ | Epoch | Step | Training Loss | loss | Qnli-dev_max_ap | allNLI-dev_max_ap | sts-test_spearman_cosine |
808
+ |:------:|:----:|:-------------:|:------:|:---------------:|:-----------------:|:------------------------:|
809
+ | 0.0055 | 1 | 8.8159 | - | - | - | - |
810
+ | 0.0110 | 2 | 9.1259 | - | - | - | - |
811
+ | 0.0165 | 3 | 8.9017 | - | - | - | - |
812
+ | 0.0220 | 4 | 9.1969 | - | - | - | - |
813
+ | 0.0275 | 5 | 9.3716 | 1.3746 | 0.6067 | 0.3706 | 0.1943 |
814
+ | 0.0330 | 6 | 9.0425 | - | - | - | - |
815
+ | 0.0385 | 7 | 8.7309 | - | - | - | - |
816
+ | 0.0440 | 8 | 9.0123 | - | - | - | - |
817
+ | 0.0495 | 9 | 8.8095 | - | - | - | - |
818
+ | 0.0549 | 10 | 9.3194 | 1.3227 | 0.6089 | 0.3721 | 0.1976 |
819
+ | 0.0604 | 11 | 8.9873 | - | - | - | - |
820
+ | 0.0659 | 12 | 8.5575 | - | - | - | - |
821
+ | 0.0714 | 13 | 8.8096 | - | - | - | - |
822
+ | 0.0769 | 14 | 8.0996 | - | - | - | - |
823
+ | 0.0824 | 15 | 8.1942 | 1.2244 | 0.6140 | 0.3743 | 0.2085 |
824
+ | 0.0879 | 16 | 8.1654 | - | - | - | - |
825
+ | 0.0934 | 17 | 7.7336 | - | - | - | - |
826
+ | 0.0989 | 18 | 7.9535 | - | - | - | - |
827
+ | 0.1044 | 19 | 7.9322 | - | - | - | - |
828
+ | 0.1099 | 20 | 7.6812 | 1.1301 | 0.6199 | 0.3790 | 0.2233 |
829
+ | 0.1154 | 21 | 7.551 | - | - | - | - |
830
+ | 0.1209 | 22 | 7.3788 | - | - | - | - |
831
+ | 0.1264 | 23 | 7.1746 | - | - | - | - |
832
+ | 0.1319 | 24 | 7.1849 | - | - | - | - |
833
+ | 0.1374 | 25 | 7.1085 | 1.0723 | 0.6195 | 0.3852 | 0.2357 |
834
+ | 0.1429 | 26 | 7.3926 | - | - | - | - |
835
+ | 0.1484 | 27 | 7.1817 | - | - | - | - |
836
+ | 0.1538 | 28 | 7.239 | - | - | - | - |
837
+ | 0.1593 | 29 | 7.0023 | - | - | - | - |
838
+ | 0.1648 | 30 | 6.9898 | 1.0282 | 0.6215 | 0.3898 | 0.2477 |
839
+ | 0.1703 | 31 | 6.9776 | - | - | - | - |
840
+ | 0.1758 | 32 | 6.8088 | - | - | - | - |
841
+ | 0.1813 | 33 | 6.8916 | - | - | - | - |
842
+ | 0.1868 | 34 | 6.6931 | - | - | - | - |
843
+ | 0.1923 | 35 | 6.5707 | 0.9846 | 0.6253 | 0.3952 | 0.2608 |
844
+ | 0.1978 | 36 | 6.6231 | - | - | - | - |
845
+ | 0.2033 | 37 | 6.4951 | - | - | - | - |
846
+ | 0.2088 | 38 | 6.4607 | - | - | - | - |
847
+ | 0.2143 | 39 | 6.4504 | - | - | - | - |
848
+ | 0.2198 | 40 | 6.3649 | 0.9314 | 0.6299 | 0.4041 | 0.2738 |
849
+ | 0.2253 | 41 | 6.2244 | - | - | - | - |
850
+ | 0.2308 | 42 | 6.007 | - | - | - | - |
851
+ | 0.2363 | 43 | 5.977 | - | - | - | - |
852
+ | 0.2418 | 44 | 6.0748 | - | - | - | - |
853
+ | 0.2473 | 45 | 5.7946 | 0.8549 | 0.6404 | 0.4116 | 0.2847 |
854
+ | 0.2527 | 46 | 5.8751 | - | - | - | - |
855
+ | 0.2582 | 47 | 5.543 | - | - | - | - |
856
+ | 0.2637 | 48 | 5.5511 | - | - | - | - |
857
+ | 0.2692 | 49 | 5.411 | - | - | - | - |
858
+ | 0.2747 | 50 | 5.378 | 0.7943 | 0.6557 | 0.4159 | 0.2866 |
859
+ | 0.2802 | 51 | 5.3831 | - | - | - | - |
860
+ | 0.2857 | 52 | 4.9729 | - | - | - | - |
861
+ | 0.2912 | 53 | 5.0425 | - | - | - | - |
862
+ | 0.2967 | 54 | 4.9446 | - | - | - | - |
863
+ | 0.3022 | 55 | 4.9288 | 0.7178 | 0.6679 | 0.4273 | 0.3132 |
864
+ | 0.3077 | 56 | 4.8434 | - | - | - | - |
865
+ | 0.3132 | 57 | 4.6914 | - | - | - | - |
866
+ | 0.3187 | 58 | 4.5254 | - | - | - | - |
867
+ | 0.3242 | 59 | 4.6734 | - | - | - | - |
868
+ | 0.3297 | 60 | 4.2421 | 0.6202 | 0.6684 | 0.4423 | 0.3580 |
869
+ | 0.3352 | 61 | 4.2234 | - | - | - | - |
870
+ | 0.3407 | 62 | 4.0225 | - | - | - | - |
871
+ | 0.3462 | 63 | 4.0034 | - | - | - | - |
872
+ | 0.3516 | 64 | 3.994 | - | - | - | - |
873
+ | 0.3571 | 65 | 3.651 | 0.5489 | 0.6750 | 0.4569 | 0.4014 |
874
+ | 0.3626 | 66 | 3.9308 | - | - | - | - |
875
+ | 0.3681 | 67 | 3.8694 | - | - | - | - |
876
+ | 0.3736 | 68 | 3.7159 | - | - | - | - |
877
+ | 0.3791 | 69 | 3.6499 | - | - | - | - |
878
+ | 0.3846 | 70 | 3.4749 | 0.4923 | 0.6734 | 0.4701 | 0.4465 |
879
+ | 0.3901 | 71 | 3.3356 | - | - | - | - |
880
+ | 0.3956 | 72 | 3.4768 | - | - | - | - |
881
+ | 0.4011 | 73 | 3.2748 | - | - | - | - |
882
+ | 0.4066 | 74 | 3.2789 | - | - | - | - |
883
+ | 0.4121 | 75 | 2.9815 | 0.4422 | 0.6759 | 0.4747 | 0.4924 |
884
+ | 0.4176 | 76 | 3.2356 | - | - | - | - |
885
+ | 0.4231 | 77 | 2.946 | - | - | - | - |
886
+ | 0.4286 | 78 | 2.8888 | - | - | - | - |
887
+ | 0.4341 | 79 | 2.8992 | - | - | - | - |
888
+ | 0.4396 | 80 | 2.9901 | 0.4040 | 0.6786 | 0.4781 | 0.5478 |
889
+ | 0.4451 | 81 | 2.6608 | - | - | - | - |
890
+ | 0.4505 | 82 | 2.831 | - | - | - | - |
891
+ | 0.4560 | 83 | 2.5503 | - | - | - | - |
892
+ | 0.4615 | 84 | 2.8576 | - | - | - | - |
893
+ | 0.4670 | 85 | 2.5726 | 0.3711 | 0.6858 | 0.4898 | 0.6134 |
894
+ | 0.4725 | 86 | 2.7197 | - | - | - | - |
895
+ | 0.4780 | 87 | 2.5123 | - | - | - | - |
896
+ | 0.4835 | 88 | 2.553 | - | - | - | - |
897
+ | 0.4890 | 89 | 2.4862 | - | - | - | - |
898
+ | 0.4945 | 90 | 2.491 | 0.3450 | 0.6997 | 0.5077 | 0.6668 |
899
+ | 0.5 | 91 | 2.3648 | - | - | - | - |
900
+ | 0.5055 | 92 | 2.3788 | - | - | - | - |
901
+ | 0.5110 | 93 | 2.3758 | - | - | - | - |
902
+ | 0.5165 | 94 | 2.3319 | - | - | - | - |
903
+ | 0.5220 | 95 | 2.2336 | 0.3238 | 0.7048 | 0.5252 | 0.7018 |
904
+ | 0.5275 | 96 | 2.3036 | - | - | - | - |
905
+ | 0.5330 | 97 | 2.3034 | - | - | - | - |
906
+ | 0.5385 | 98 | 2.207 | - | - | - | - |
907
+ | 0.5440 | 99 | 2.1732 | - | - | - | - |
908
+ | 0.5495 | 100 | 2.1743 | 0.3036 | 0.7091 | 0.5418 | 0.7272 |
909
+ | 0.5549 | 101 | 2.086 | - | - | - | - |
910
+ | 0.5604 | 102 | 2.0223 | - | - | - | - |
911
+ | 0.5659 | 103 | 2.0878 | - | - | - | - |
912
+ | 0.5714 | 104 | 1.9475 | - | - | - | - |
913
+ | 0.5769 | 105 | 2.1524 | 0.2853 | 0.7159 | 0.5499 | 0.7489 |
914
+ | 0.5824 | 106 | 1.9393 | - | - | - | - |
915
+ | 0.5879 | 107 | 2.1308 | - | - | - | - |
916
+ | 0.5934 | 108 | 1.9469 | - | - | - | - |
917
+ | 0.5989 | 109 | 1.8683 | - | - | - | - |
918
+ | 0.6044 | 110 | 1.8167 | 0.2702 | 0.7217 | 0.5536 | 0.7626 |
919
+
920
+ </details>
921
+
922
+ ### Framework Versions
923
+ - Python: 3.10.14
924
+ - Sentence Transformers: 3.0.1
925
+ - Transformers: 4.44.0
926
+ - PyTorch: 2.4.0
927
+ - Accelerate: 0.33.0
928
+ - Datasets: 2.21.0
929
+ - Tokenizers: 0.19.1
930
+
931
+ ## Citation
932
+
933
+ ### BibTeX
934
+
935
+ #### Sentence Transformers
936
+ ```bibtex
937
+ @inproceedings{reimers-2019-sentence-bert,
938
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
939
+ author = "Reimers, Nils and Gurevych, Iryna",
940
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
941
+ month = "11",
942
+ year = "2019",
943
+ publisher = "Association for Computational Linguistics",
944
+ url = "https://arxiv.org/abs/1908.10084",
945
+ }
946
+ ```
947
+
948
+ <!--
949
+ ## Glossary
950
+
951
+ *Clearly define terms in order to be accessible across audiences.*
952
+ -->
953
+
954
+ <!--
955
+ ## Model Card Authors
956
+
957
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
958
+ -->
959
+
960
+ <!--
961
+ ## Model Card Contact
962
+
963
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
964
+ -->
checkpoint-110/added_tokens.json ADDED
@@ -0,0 +1,3 @@
+ {
+   "[MASK]": 128000
+ }
checkpoint-110/config.json ADDED
@@ -0,0 +1,35 @@
+ {
+   "_name_or_path": "microsoft/deberta-v3-small",
+   "architectures": [
+     "DebertaV2Model"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-07,
+   "max_position_embeddings": 512,
+   "max_relative_positions": -1,
+   "model_type": "deberta-v2",
+   "norm_rel_ebd": "layer_norm",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 6,
+   "pad_token_id": 0,
+   "pooler_dropout": 0,
+   "pooler_hidden_act": "gelu",
+   "pooler_hidden_size": 768,
+   "pos_att_type": [
+     "p2c",
+     "c2p"
+   ],
+   "position_biased_input": false,
+   "position_buckets": 256,
+   "relative_attention": true,
+   "share_att_key": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.44.0",
+   "type_vocab_size": 0,
+   "vocab_size": 128100
+ }
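This is the standard `microsoft/deberta-v3-small` backbone configuration: a 6-layer, 12-head DeBERTa-v2 encoder with a 768-dimensional hidden size. As a quick sanity check, the same values can be read back with `transformers` (a sketch that loads the base model's config, which this file mirrors):

```python
from transformers import AutoConfig

# The printed values should match the config.json shown above.
config = AutoConfig.from_pretrained("microsoft/deberta-v3-small")
print(config.model_type, config.num_hidden_layers, config.num_attention_heads, config.hidden_size)
# deberta-v2 6 12 768
```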
checkpoint-110/config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.0.1",
+     "transformers": "4.44.0",
+     "pytorch": "2.4.0"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
checkpoint-110/modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
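`modules.json` is what lets `SentenceTransformer` reassemble the pipeline: module 0 is the transformer encoder stored at the checkpoint root, module 1 the pooling head in `1_Pooling/`. An equivalent manual construction would look roughly like the sketch below (it starts from the base checkpoint rather than these fine-tuned weights):

```python
from sentence_transformers import SentenceTransformer, models

# Module 0: the DeBERTa encoder (stored at the checkpoint root, path "").
word_embedding_model = models.Transformer("microsoft/deberta-v3-small", max_seq_length=512)
# Module 1: the mean-pooling head (stored in 1_Pooling/).
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 768
    pooling_mode="mean",
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
print(model)
```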
checkpoint-110/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b8b58baa1d148c2570e59d52cab7516e156bb31762ea2e676cc136a49116b0af
+ size 1130520122
checkpoint-110/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:738f0a7ea7064dc1fad40f06348ccc1b270737b5df295320877dfeb122ea18a9
+ size 565251810
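These three-line entries (here and for the other binary files below) are Git LFS pointer stubs recording the spec version, SHA-256 object id, and byte size, not the tensors themselves. One way to materialize such a file locally is via `huggingface_hub` (a sketch; the repo id is taken from the model card above):

```python
from huggingface_hub import hf_hub_download

# Downloads the LFS-backed payload for this checkpoint's weights file.
local_path = hf_hub_download(
    repo_id="bobox/DeBERTa-small-ST-UnifiedDatasets-baseline-checkpoints-tmp",
    filename="checkpoint-110/pytorch_model.bin",
)
print(local_path)
```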
checkpoint-110/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:58f811f94539aa733ba4ef861adb95e7c49fb89154fee4002503dcf3153081b7
+ size 14244
checkpoint-110/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6c14f6669285b589459a92ce501e1b7bb3e1c10d97d299ec8dab14ebb69f66e0
+ size 1064
checkpoint-110/sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
checkpoint-110/special_tokens_map.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "bos_token": "[CLS]",
+   "cls_token": "[CLS]",
+   "eos_token": "[SEP]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
checkpoint-110/spm.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c679fbf93643d19aab7ee10c0b99e460bdbc02fedf34b92b05af343b4af586fd
+ size 2464616
checkpoint-110/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-110/tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "128000": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "[CLS]",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": false,
+   "eos_token": "[SEP]",
+   "mask_token": "[MASK]",
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "sp_model_kwargs": {},
+   "split_by_punct": false,
+   "tokenizer_class": "DebertaV2Tokenizer",
+   "unk_token": "[UNK]",
+   "vocab_type": "spm"
+ }
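The tokenizer files ship the base model's SentencePiece tokenizer, with `[MASK]` registered as id 128000 (see `added_tokens.json` above). A quick way to inspect it (a sketch that loads the base tokenizer, which these files mirror):

```python
from transformers import AutoTokenizer

# Loading the base model's tokenizer is a reasonable stand-in for the files in this checkpoint.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-small")
print(tokenizer.tokenize("what is the main purpose of the brain"))
print(tokenizer.mask_token_id)  # 128000, matching added_tokens.json
```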
checkpoint-110/trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-110/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b50a4a92b5eb29f5d9b19f9e1060fdd6af0a02268cb16ba6bb85ab82bb7ddd6b
+ size 5752