Alijeff1214 committed
Commit 912bef7
1 Parent(s): 3be4547

Upload folder using huggingface_hub

All Cluster_tokenizer_plot.png ADDED

Git LFS Details

  • SHA256: 31f51edd1bbf4f8101da48cf351accd6c1a5f262d75dc0ceeec349c99b65c6f3
  • Pointer size: 130 Bytes
  • Size of remote file: 55.1 kB
Final_tokenizer_plot.png ADDED

Git LFS Details

  • SHA256: 4c75fd87a87a5ded76d7c431da6f1c785d33351494f7aa789fb16abe33838ecc
  • Pointer size: 130 Bytes
  • Size of remote file: 56.2 kB
FineTune_GeneralPruning1015899.out ADDED
@@ -0,0 +1,672 @@
1
+ Loading pytorch-gpu/py3/2.1.1
2
+ Loading requirement: cuda/11.8.0 nccl/2.18.5-1-cuda cudnn/8.7.0.84-cuda
3
+ gcc/8.5.0 openmpi/4.1.5-cuda intel-mkl/2020.4 magma/2.7.1-cuda sox/14.4.2
4
+ sparsehash/2.0.3 libjpeg-turbo/2.1.3 ffmpeg/4.4.4
5
+ + HF_DATASETS_OFFLINE=1
6
+ + TRANSFORMERS_OFFLINE=1
7
+ + python3 OnlyGeneralTokenizer.py
8
+
9
+ Checking label assignment:
10
+
11
+ Domain: Mathematics
12
+ Categories: hep-th math-ph math.MP nlin.SI
13
+ Abstract: three new models with vshaped field potentials u are considered a complex scalar field x in dimensio...
14
+
15
+ Domain: Computer Science
16
+ Categories: cs.AR
17
+ Abstract: this special session adresses the problems that designers face when implementing analog and digital ...
18
+
19
+ Domain: Physics
20
+ Categories: physics.plasm-ph
21
+ Abstract: starting from the governing equations for a quantum magnetoplasma including the quantum bohm potenti...
22
+
23
+ Domain: Chemistry
24
+ Categories: nlin.CD
25
+ Abstract: we present recent results on noiseinduced transitions in a nonlinear oscillator with randomly modula...
26
+
27
+ Domain: Statistics
28
+ Categories: stat.AP
29
+ Abstract: in microarray technology a number of critical steps are required to convert the raw measurements int...
30
+
31
+ Domain: Biology
32
+ Categories: q-bio.MN
33
+ Abstract: the architecture of biological networks has been reported to exhibit high level of modularity and to...
34
+ /linkhome/rech/genrug01/uft12cr/.local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:2057: FutureWarning: Calling BertTokenizer.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymore in v5. Use a model identifier or the path to a directory instead.
35
+ warnings.warn(
36
+
37
+ Training with All Cluster tokenizer:
38
+ Vocabulary size: 16005
39
+ Could not load pretrained weights from /linkhome/rech/genrug01/uft12cr/bert_Model. Starting with random weights. Error: It looks like the config file at '/linkhome/rech/genrug01/uft12cr/bert_Model/config.json' is not a valid JSON file.
40
+ Initialized model with vocabulary size: 16005
41
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:172: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
42
+ scaler = amp.GradScaler()
43
+ Batch 0:
44
+ input_ids shape: torch.Size([16, 256])
45
+ attention_mask shape: torch.Size([16, 256])
46
+ labels shape: torch.Size([16])
47
+ input_ids max value: 16003
48
+ Vocab size: 16005
49
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
50
+ with amp.autocast():
51
+ Batch 100:
52
+ input_ids shape: torch.Size([16, 256])
53
+ attention_mask shape: torch.Size([16, 256])
54
+ labels shape: torch.Size([16])
55
+ input_ids max value: 16003
56
+ Vocab size: 16005
57
+ Batch 200:
58
+ input_ids shape: torch.Size([16, 256])
59
+ attention_mask shape: torch.Size([16, 256])
60
+ labels shape: torch.Size([16])
61
+ input_ids max value: 16003
62
+ Vocab size: 16005
63
+ Batch 300:
64
+ input_ids shape: torch.Size([16, 256])
65
+ attention_mask shape: torch.Size([16, 256])
66
+ labels shape: torch.Size([16])
67
+ input_ids max value: 16003
68
+ Vocab size: 16005
69
+ Batch 400:
70
+ input_ids shape: torch.Size([16, 256])
71
+ attention_mask shape: torch.Size([16, 256])
72
+ labels shape: torch.Size([16])
73
+ input_ids max value: 16003
74
+ Vocab size: 16005
75
+ Batch 500:
76
+ input_ids shape: torch.Size([16, 256])
77
+ attention_mask shape: torch.Size([16, 256])
78
+ labels shape: torch.Size([16])
79
+ input_ids max value: 16003
80
+ Vocab size: 16005
81
+ Batch 600:
82
+ input_ids shape: torch.Size([16, 256])
83
+ attention_mask shape: torch.Size([16, 256])
84
+ labels shape: torch.Size([16])
85
+ input_ids max value: 16003
86
+ Vocab size: 16005
87
+ Batch 700:
88
+ input_ids shape: torch.Size([16, 256])
89
+ attention_mask shape: torch.Size([16, 256])
90
+ labels shape: torch.Size([16])
91
+ input_ids max value: 16003
92
+ Vocab size: 16005
93
+ Batch 800:
94
+ input_ids shape: torch.Size([16, 256])
95
+ attention_mask shape: torch.Size([16, 256])
96
+ labels shape: torch.Size([16])
97
+ input_ids max value: 16003
98
+ Vocab size: 16005
99
+ Batch 900:
100
+ input_ids shape: torch.Size([16, 256])
101
+ attention_mask shape: torch.Size([16, 256])
102
+ labels shape: torch.Size([16])
103
+ input_ids max value: 16003
104
+ Vocab size: 16005
105
+ Epoch 1/3:
106
+ Val Accuracy: 0.7549, Val F1: 0.7014
107
+ Batch 0:
108
+ input_ids shape: torch.Size([16, 256])
109
+ attention_mask shape: torch.Size([16, 256])
110
+ labels shape: torch.Size([16])
111
+ input_ids max value: 16003
112
+ Vocab size: 16005
113
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
114
+ with amp.autocast():
115
+ Batch 100:
116
+ input_ids shape: torch.Size([16, 256])
117
+ attention_mask shape: torch.Size([16, 256])
118
+ labels shape: torch.Size([16])
119
+ input_ids max value: 16003
120
+ Vocab size: 16005
121
+ Batch 200:
122
+ input_ids shape: torch.Size([16, 256])
123
+ attention_mask shape: torch.Size([16, 256])
124
+ labels shape: torch.Size([16])
125
+ input_ids max value: 16003
126
+ Vocab size: 16005
127
+ Batch 300:
128
+ input_ids shape: torch.Size([16, 256])
129
+ attention_mask shape: torch.Size([16, 256])
130
+ labels shape: torch.Size([16])
131
+ input_ids max value: 16003
132
+ Vocab size: 16005
133
+ Batch 400:
134
+ input_ids shape: torch.Size([16, 256])
135
+ attention_mask shape: torch.Size([16, 256])
136
+ labels shape: torch.Size([16])
137
+ input_ids max value: 16003
138
+ Vocab size: 16005
139
+ Batch 500:
140
+ input_ids shape: torch.Size([16, 256])
141
+ attention_mask shape: torch.Size([16, 256])
142
+ labels shape: torch.Size([16])
143
+ input_ids max value: 16003
144
+ Vocab size: 16005
145
+ Batch 600:
146
+ input_ids shape: torch.Size([16, 256])
147
+ attention_mask shape: torch.Size([16, 256])
148
+ labels shape: torch.Size([16])
149
+ input_ids max value: 16003
150
+ Vocab size: 16005
151
+ Batch 700:
152
+ input_ids shape: torch.Size([16, 256])
153
+ attention_mask shape: torch.Size([16, 256])
154
+ labels shape: torch.Size([16])
155
+ input_ids max value: 16003
156
+ Vocab size: 16005
157
+ Batch 800:
158
+ input_ids shape: torch.Size([16, 256])
159
+ attention_mask shape: torch.Size([16, 256])
160
+ labels shape: torch.Size([16])
161
+ input_ids max value: 16003
162
+ Vocab size: 16005
163
+ Batch 900:
164
+ input_ids shape: torch.Size([16, 256])
165
+ attention_mask shape: torch.Size([16, 256])
166
+ labels shape: torch.Size([16])
167
+ input_ids max value: 16003
168
+ Vocab size: 16005
169
+ Epoch 2/3:
170
+ Val Accuracy: 0.7937, Val F1: 0.7657
171
+ Batch 0:
172
+ input_ids shape: torch.Size([16, 256])
173
+ attention_mask shape: torch.Size([16, 256])
174
+ labels shape: torch.Size([16])
175
+ input_ids max value: 16003
176
+ Vocab size: 16005
177
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
178
+ with amp.autocast():
179
+ Batch 100:
180
+ input_ids shape: torch.Size([16, 256])
181
+ attention_mask shape: torch.Size([16, 256])
182
+ labels shape: torch.Size([16])
183
+ input_ids max value: 16003
184
+ Vocab size: 16005
185
+ Batch 200:
186
+ input_ids shape: torch.Size([16, 256])
187
+ attention_mask shape: torch.Size([16, 256])
188
+ labels shape: torch.Size([16])
189
+ input_ids max value: 16003
190
+ Vocab size: 16005
191
+ Batch 300:
192
+ input_ids shape: torch.Size([16, 256])
193
+ attention_mask shape: torch.Size([16, 256])
194
+ labels shape: torch.Size([16])
195
+ input_ids max value: 16003
196
+ Vocab size: 16005
197
+ Batch 400:
198
+ input_ids shape: torch.Size([16, 256])
199
+ attention_mask shape: torch.Size([16, 256])
200
+ labels shape: torch.Size([16])
201
+ input_ids max value: 16003
202
+ Vocab size: 16005
203
+ Batch 500:
204
+ input_ids shape: torch.Size([16, 256])
205
+ attention_mask shape: torch.Size([16, 256])
206
+ labels shape: torch.Size([16])
207
+ input_ids max value: 16003
208
+ Vocab size: 16005
209
+ Batch 600:
210
+ input_ids shape: torch.Size([16, 256])
211
+ attention_mask shape: torch.Size([16, 256])
212
+ labels shape: torch.Size([16])
213
+ input_ids max value: 16003
214
+ Vocab size: 16005
215
+ Batch 700:
216
+ input_ids shape: torch.Size([16, 256])
217
+ attention_mask shape: torch.Size([16, 256])
218
+ labels shape: torch.Size([16])
219
+ input_ids max value: 16003
220
+ Vocab size: 16005
221
+ Batch 800:
222
+ input_ids shape: torch.Size([16, 256])
223
+ attention_mask shape: torch.Size([16, 256])
224
+ labels shape: torch.Size([16])
225
+ input_ids max value: 16003
226
+ Vocab size: 16005
227
+ Batch 900:
228
+ input_ids shape: torch.Size([16, 256])
229
+ attention_mask shape: torch.Size([16, 256])
230
+ labels shape: torch.Size([16])
231
+ input_ids max value: 16003
232
+ Vocab size: 16005
233
+ Epoch 3/3:
234
+ Val Accuracy: 0.8065, Val F1: 0.7645
235
+
236
+ Test Results for All Cluster tokenizer:
237
+ Accuracy: 0.8065
238
+ F1 Score: 0.7645
239
+ AUC-ROC: 0.8683
240
+
241
+ Training with Final tokenizer:
242
+ Vocabulary size: 18524
243
+ Could not load pretrained weights from /linkhome/rech/genrug01/uft12cr/bert_Model. Starting with random weights. Error: It looks like the config file at '/linkhome/rech/genrug01/uft12cr/bert_Model/config.json' is not a valid JSON file.
244
+ Initialized model with vocabulary size: 18524
245
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:172: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
246
+ scaler = amp.GradScaler()
247
+ Batch 0:
248
+ input_ids shape: torch.Size([16, 256])
249
+ attention_mask shape: torch.Size([16, 256])
250
+ labels shape: torch.Size([16])
251
+ input_ids max value: 18523
252
+ Vocab size: 18524
253
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
254
+ with amp.autocast():
255
+ Batch 100:
256
+ input_ids shape: torch.Size([16, 256])
257
+ attention_mask shape: torch.Size([16, 256])
258
+ labels shape: torch.Size([16])
259
+ input_ids max value: 18523
260
+ Vocab size: 18524
261
+ Batch 200:
262
+ input_ids shape: torch.Size([16, 256])
263
+ attention_mask shape: torch.Size([16, 256])
264
+ labels shape: torch.Size([16])
265
+ input_ids max value: 18523
266
+ Vocab size: 18524
267
+ Batch 300:
268
+ input_ids shape: torch.Size([16, 256])
269
+ attention_mask shape: torch.Size([16, 256])
270
+ labels shape: torch.Size([16])
271
+ input_ids max value: 18523
272
+ Vocab size: 18524
273
+ Batch 400:
274
+ input_ids shape: torch.Size([16, 256])
275
+ attention_mask shape: torch.Size([16, 256])
276
+ labels shape: torch.Size([16])
277
+ input_ids max value: 18523
278
+ Vocab size: 18524
279
+ Batch 500:
280
+ input_ids shape: torch.Size([16, 256])
281
+ attention_mask shape: torch.Size([16, 256])
282
+ labels shape: torch.Size([16])
283
+ input_ids max value: 18523
284
+ Vocab size: 18524
285
+ Batch 600:
286
+ input_ids shape: torch.Size([16, 256])
287
+ attention_mask shape: torch.Size([16, 256])
288
+ labels shape: torch.Size([16])
289
+ input_ids max value: 18523
290
+ Vocab size: 18524
291
+ Batch 700:
292
+ input_ids shape: torch.Size([16, 256])
293
+ attention_mask shape: torch.Size([16, 256])
294
+ labels shape: torch.Size([16])
295
+ input_ids max value: 18523
296
+ Vocab size: 18524
297
+ Batch 800:
298
+ input_ids shape: torch.Size([16, 256])
299
+ attention_mask shape: torch.Size([16, 256])
300
+ labels shape: torch.Size([16])
301
+ input_ids max value: 18523
302
+ Vocab size: 18524
303
+ Batch 900:
304
+ input_ids shape: torch.Size([16, 256])
305
+ attention_mask shape: torch.Size([16, 256])
306
+ labels shape: torch.Size([16])
307
+ input_ids max value: 18523
308
+ Vocab size: 18524
309
+ Epoch 1/3:
310
+ Val Accuracy: 0.6744, Val F1: 0.6438
311
+ Batch 0:
312
+ input_ids shape: torch.Size([16, 256])
313
+ attention_mask shape: torch.Size([16, 256])
314
+ labels shape: torch.Size([16])
315
+ input_ids max value: 18523
316
+ Vocab size: 18524
317
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
318
+ with amp.autocast():
319
+ Batch 100:
320
+ input_ids shape: torch.Size([16, 256])
321
+ attention_mask shape: torch.Size([16, 256])
322
+ labels shape: torch.Size([16])
323
+ input_ids max value: 18523
324
+ Vocab size: 18524
325
+ Batch 200:
326
+ input_ids shape: torch.Size([16, 256])
327
+ attention_mask shape: torch.Size([16, 256])
328
+ labels shape: torch.Size([16])
329
+ input_ids max value: 18523
330
+ Vocab size: 18524
331
+ Batch 300:
332
+ input_ids shape: torch.Size([16, 256])
333
+ attention_mask shape: torch.Size([16, 256])
334
+ labels shape: torch.Size([16])
335
+ input_ids max value: 18523
336
+ Vocab size: 18524
337
+ Batch 400:
338
+ input_ids shape: torch.Size([16, 256])
339
+ attention_mask shape: torch.Size([16, 256])
340
+ labels shape: torch.Size([16])
341
+ input_ids max value: 18523
342
+ Vocab size: 18524
343
+ Batch 500:
344
+ input_ids shape: torch.Size([16, 256])
345
+ attention_mask shape: torch.Size([16, 256])
346
+ labels shape: torch.Size([16])
347
+ input_ids max value: 18523
348
+ Vocab size: 18524
349
+ Batch 600:
350
+ input_ids shape: torch.Size([16, 256])
351
+ attention_mask shape: torch.Size([16, 256])
352
+ labels shape: torch.Size([16])
353
+ input_ids max value: 18523
354
+ Vocab size: 18524
355
+ Batch 700:
356
+ input_ids shape: torch.Size([16, 256])
357
+ attention_mask shape: torch.Size([16, 256])
358
+ labels shape: torch.Size([16])
359
+ input_ids max value: 18523
360
+ Vocab size: 18524
361
+ Batch 800:
362
+ input_ids shape: torch.Size([16, 256])
363
+ attention_mask shape: torch.Size([16, 256])
364
+ labels shape: torch.Size([16])
365
+ input_ids max value: 18523
366
+ Vocab size: 18524
367
+ Batch 900:
368
+ input_ids shape: torch.Size([16, 256])
369
+ attention_mask shape: torch.Size([16, 256])
370
+ labels shape: torch.Size([16])
371
+ input_ids max value: 18523
372
+ Vocab size: 18524
373
+ Epoch 2/3:
374
+ Val Accuracy: 0.7737, Val F1: 0.7343
375
+ Batch 0:
376
+ input_ids shape: torch.Size([16, 256])
377
+ attention_mask shape: torch.Size([16, 256])
378
+ labels shape: torch.Size([16])
379
+ input_ids max value: 18523
380
+ Vocab size: 18524
381
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
382
+ with amp.autocast():
383
+ Batch 100:
384
+ input_ids shape: torch.Size([16, 256])
385
+ attention_mask shape: torch.Size([16, 256])
386
+ labels shape: torch.Size([16])
387
+ input_ids max value: 18523
388
+ Vocab size: 18524
389
+ Batch 200:
390
+ input_ids shape: torch.Size([16, 256])
391
+ attention_mask shape: torch.Size([16, 256])
392
+ labels shape: torch.Size([16])
393
+ input_ids max value: 18523
394
+ Vocab size: 18524
395
+ Batch 300:
396
+ input_ids shape: torch.Size([16, 256])
397
+ attention_mask shape: torch.Size([16, 256])
398
+ labels shape: torch.Size([16])
399
+ input_ids max value: 18523
400
+ Vocab size: 18524
401
+ Batch 400:
402
+ input_ids shape: torch.Size([16, 256])
403
+ attention_mask shape: torch.Size([16, 256])
404
+ labels shape: torch.Size([16])
405
+ input_ids max value: 18523
406
+ Vocab size: 18524
407
+ Batch 500:
408
+ input_ids shape: torch.Size([16, 256])
409
+ attention_mask shape: torch.Size([16, 256])
410
+ labels shape: torch.Size([16])
411
+ input_ids max value: 18523
412
+ Vocab size: 18524
413
+ Batch 600:
414
+ input_ids shape: torch.Size([16, 256])
415
+ attention_mask shape: torch.Size([16, 256])
416
+ labels shape: torch.Size([16])
417
+ input_ids max value: 18523
418
+ Vocab size: 18524
419
+ Batch 700:
420
+ input_ids shape: torch.Size([16, 256])
421
+ attention_mask shape: torch.Size([16, 256])
422
+ labels shape: torch.Size([16])
423
+ input_ids max value: 18523
424
+ Vocab size: 18524
425
+ Batch 800:
426
+ input_ids shape: torch.Size([16, 256])
427
+ attention_mask shape: torch.Size([16, 256])
428
+ labels shape: torch.Size([16])
429
+ input_ids max value: 18523
430
+ Vocab size: 18524
431
+ Batch 900:
432
+ input_ids shape: torch.Size([16, 256])
433
+ attention_mask shape: torch.Size([16, 256])
434
+ labels shape: torch.Size([16])
435
+ input_ids max value: 18523
436
+ Vocab size: 18524
437
+ Epoch 3/3:
438
+ Val Accuracy: 0.7975, Val F1: 0.7612
439
+
440
+ Test Results for Final tokenizer:
441
+ Accuracy: 0.7978
442
+ F1 Score: 0.7615
443
+ AUC-ROC: 0.8035
444
+
445
+ Training with General tokenizer:
446
+ Vocabulary size: 30522
447
+ Could not load pretrained weights from /linkhome/rech/genrug01/uft12cr/bert_Model. Starting with random weights. Error: It looks like the config file at '/linkhome/rech/genrug01/uft12cr/bert_Model/config.json' is not a valid JSON file.
448
+ Initialized model with vocabulary size: 30522
449
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:172: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
450
+ scaler = amp.GradScaler()
451
+ Batch 0:
452
+ input_ids shape: torch.Size([16, 256])
453
+ attention_mask shape: torch.Size([16, 256])
454
+ labels shape: torch.Size([16])
455
+ input_ids max value: 29454
456
+ Vocab size: 30522
457
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
458
+ with amp.autocast():
459
+ Batch 100:
460
+ input_ids shape: torch.Size([16, 256])
461
+ attention_mask shape: torch.Size([16, 256])
462
+ labels shape: torch.Size([16])
463
+ input_ids max value: 29474
464
+ Vocab size: 30522
465
+ Batch 200:
466
+ input_ids shape: torch.Size([16, 256])
467
+ attention_mask shape: torch.Size([16, 256])
468
+ labels shape: torch.Size([16])
469
+ input_ids max value: 29413
470
+ Vocab size: 30522
471
+ Batch 300:
472
+ input_ids shape: torch.Size([16, 256])
473
+ attention_mask shape: torch.Size([16, 256])
474
+ labels shape: torch.Size([16])
475
+ input_ids max value: 29561
476
+ Vocab size: 30522
477
+ Batch 400:
478
+ input_ids shape: torch.Size([16, 256])
479
+ attention_mask shape: torch.Size([16, 256])
480
+ labels shape: torch.Size([16])
481
+ input_ids max value: 29513
482
+ Vocab size: 30522
483
+ Batch 500:
484
+ input_ids shape: torch.Size([16, 256])
485
+ attention_mask shape: torch.Size([16, 256])
486
+ labels shape: torch.Size([16])
487
+ input_ids max value: 29413
488
+ Vocab size: 30522
489
+ Batch 600:
490
+ input_ids shape: torch.Size([16, 256])
491
+ attention_mask shape: torch.Size([16, 256])
492
+ labels shape: torch.Size([16])
493
+ input_ids max value: 29513
494
+ Vocab size: 30522
495
+ Batch 700:
496
+ input_ids shape: torch.Size([16, 256])
497
+ attention_mask shape: torch.Size([16, 256])
498
+ labels shape: torch.Size([16])
499
+ input_ids max value: 29536
500
+ Vocab size: 30522
501
+ Batch 800:
502
+ input_ids shape: torch.Size([16, 256])
503
+ attention_mask shape: torch.Size([16, 256])
504
+ labels shape: torch.Size([16])
505
+ input_ids max value: 29513
506
+ Vocab size: 30522
507
+ Batch 900:
508
+ input_ids shape: torch.Size([16, 256])
509
+ attention_mask shape: torch.Size([16, 256])
510
+ labels shape: torch.Size([16])
511
+ input_ids max value: 29486
512
+ Vocab size: 30522
513
+ Epoch 1/3:
514
+ Val Accuracy: 0.6932, Val F1: 0.6626
515
+ Batch 0:
516
+ input_ids shape: torch.Size([16, 256])
517
+ attention_mask shape: torch.Size([16, 256])
518
+ labels shape: torch.Size([16])
519
+ input_ids max value: 29513
520
+ Vocab size: 30522
521
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
522
+ with amp.autocast():
523
+ Batch 100:
524
+ input_ids shape: torch.Size([16, 256])
525
+ attention_mask shape: torch.Size([16, 256])
526
+ labels shape: torch.Size([16])
527
+ input_ids max value: 29545
528
+ Vocab size: 30522
529
+ Batch 200:
530
+ input_ids shape: torch.Size([16, 256])
531
+ attention_mask shape: torch.Size([16, 256])
532
+ labels shape: torch.Size([16])
533
+ input_ids max value: 29464
534
+ Vocab size: 30522
535
+ Batch 300:
536
+ input_ids shape: torch.Size([16, 256])
537
+ attention_mask shape: torch.Size([16, 256])
538
+ labels shape: torch.Size([16])
539
+ input_ids max value: 29178
540
+ Vocab size: 30522
541
+ Batch 400:
542
+ input_ids shape: torch.Size([16, 256])
543
+ attention_mask shape: torch.Size([16, 256])
544
+ labels shape: torch.Size([16])
545
+ input_ids max value: 29446
546
+ Vocab size: 30522
547
+ Batch 500:
548
+ input_ids shape: torch.Size([16, 256])
549
+ attention_mask shape: torch.Size([16, 256])
550
+ labels shape: torch.Size([16])
551
+ input_ids max value: 29513
552
+ Vocab size: 30522
553
+ Batch 600:
554
+ input_ids shape: torch.Size([16, 256])
555
+ attention_mask shape: torch.Size([16, 256])
556
+ labels shape: torch.Size([16])
557
+ input_ids max value: 29536
558
+ Vocab size: 30522
559
+ Batch 700:
560
+ input_ids shape: torch.Size([16, 256])
561
+ attention_mask shape: torch.Size([16, 256])
562
+ labels shape: torch.Size([16])
563
+ input_ids max value: 29454
564
+ Vocab size: 30522
565
+ Batch 800:
566
+ input_ids shape: torch.Size([16, 256])
567
+ attention_mask shape: torch.Size([16, 256])
568
+ labels shape: torch.Size([16])
569
+ input_ids max value: 29347
570
+ Vocab size: 30522
571
+ Batch 900:
572
+ input_ids shape: torch.Size([16, 256])
573
+ attention_mask shape: torch.Size([16, 256])
574
+ labels shape: torch.Size([16])
575
+ input_ids max value: 29535
576
+ Vocab size: 30522
577
+ Epoch 2/3:
578
+ Val Accuracy: 0.7860, Val F1: 0.7438
579
+ Batch 0:
580
+ input_ids shape: torch.Size([16, 256])
581
+ attention_mask shape: torch.Size([16, 256])
582
+ labels shape: torch.Size([16])
583
+ input_ids max value: 29536
584
+ Vocab size: 30522
585
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
586
+ with amp.autocast():
587
+ Batch 100:
588
+ input_ids shape: torch.Size([16, 256])
589
+ attention_mask shape: torch.Size([16, 256])
590
+ labels shape: torch.Size([16])
591
+ input_ids max value: 29598
592
+ Vocab size: 30522
593
+ Batch 200:
594
+ input_ids shape: torch.Size([16, 256])
595
+ attention_mask shape: torch.Size([16, 256])
596
+ labels shape: torch.Size([16])
597
+ input_ids max value: 29237
598
+ Vocab size: 30522
599
+ Batch 300:
600
+ input_ids shape: torch.Size([16, 256])
601
+ attention_mask shape: torch.Size([16, 256])
602
+ labels shape: torch.Size([16])
603
+ input_ids max value: 29605
604
+ Vocab size: 30522
605
+ Batch 400:
606
+ input_ids shape: torch.Size([16, 256])
607
+ attention_mask shape: torch.Size([16, 256])
608
+ labels shape: torch.Size([16])
609
+ input_ids max value: 29577
610
+ Vocab size: 30522
611
+ Batch 500:
612
+ input_ids shape: torch.Size([16, 256])
613
+ attention_mask shape: torch.Size([16, 256])
614
+ labels shape: torch.Size([16])
615
+ input_ids max value: 29454
616
+ Vocab size: 30522
617
+ Batch 600:
618
+ input_ids shape: torch.Size([16, 256])
619
+ attention_mask shape: torch.Size([16, 256])
620
+ labels shape: torch.Size([16])
621
+ input_ids max value: 29586
622
+ Vocab size: 30522
623
+ Batch 700:
624
+ input_ids shape: torch.Size([16, 256])
625
+ attention_mask shape: torch.Size([16, 256])
626
+ labels shape: torch.Size([16])
627
+ input_ids max value: 29536
628
+ Vocab size: 30522
629
+ Batch 800:
630
+ input_ids shape: torch.Size([16, 256])
631
+ attention_mask shape: torch.Size([16, 256])
632
+ labels shape: torch.Size([16])
633
+ input_ids max value: 29532
634
+ Vocab size: 30522
635
+ Batch 900:
636
+ input_ids shape: torch.Size([16, 256])
637
+ attention_mask shape: torch.Size([16, 256])
638
+ labels shape: torch.Size([16])
639
+ input_ids max value: 29486
640
+ Vocab size: 30522
641
+ Epoch 3/3:
642
+ Val Accuracy: 0.8062, Val F1: 0.7665
643
+
644
+ Test Results for General tokenizer:
645
+ Accuracy: 0.8062
646
+ F1 Score: 0.7665
647
+ AUC-ROC: 0.8879
648
+
649
+ Summary of Results:
650
+
651
+ All Cluster Tokenizer:
652
+ Accuracy: 0.8065
653
+ F1 Score: 0.7645
654
+ AUC-ROC: 0.8683
655
+
656
+ Final Tokenizer:
657
+ Accuracy: 0.7978
658
+ F1 Score: 0.7615
659
+ AUC-ROC: 0.8035
660
+
661
+ General Tokenizer:
662
+ Accuracy: 0.8062
663
+ F1 Score: 0.7665
664
+ AUC-ROC: 0.8879
665
+
666
+ Class distribution in training set:
667
+ Class Biology: 439 samples
668
+ Class Chemistry: 454 samples
669
+ Class Computer Science: 1358 samples
670
+ Class Mathematics: 9480 samples
671
+ Class Physics: 2733 samples
672
+ Class Statistics: 200 samples
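Note: the FutureWarnings repeated throughout the log above come from the deprecated `torch.cuda.amp` API; the warnings themselves suggest `torch.amp.GradScaler('cuda', ...)` and `torch.amp.autocast('cuda', ...)` as replacements. Below is a minimal, hypothetical sketch of a mixed-precision training step using those non-deprecated calls — it is not the repository's actual training loop (the model, optimizer, and data here are placeholders), and it assumes a CUDA device is available.

import torch
from torch import amp

# Placeholder model, optimizer, and batch; only the amp calls mirror the
# replacements suggested by the FutureWarnings in the log above.
model = torch.nn.Linear(10, 2).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = amp.GradScaler('cuda')          # instead of torch.cuda.amp.GradScaler()

for step in range(3):
    inputs = torch.randn(16, 10, device='cuda')
    labels = torch.randint(0, 2, (16,), device='cuda')
    optimizer.zero_grad()
    with amp.autocast('cuda'):           # instead of torch.cuda.amp.autocast()
        loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    scaler.scale(loss).backward()        # backward pass on the scaled loss
    scaler.step(optimizer)               # unscales gradients, then optimizer step
    scaler.update()                      # adjusts the loss scale for the next step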
FineTune_withPlots1082275.out ADDED
@@ -0,0 +1,1071 @@
1
+ Loading pytorch-gpu/py3/2.1.1
2
+ Loading requirement: cuda/11.8.0 nccl/2.18.5-1-cuda cudnn/8.7.0.84-cuda
3
+ gcc/8.5.0 openmpi/4.1.5-cuda intel-mkl/2020.4 magma/2.7.1-cuda sox/14.4.2
4
+ sparsehash/2.0.3 libjpeg-turbo/2.1.3 ffmpeg/4.4.4
5
+ + HF_DATASETS_OFFLINE=1
6
+ + TRANSFORMERS_OFFLINE=1
7
+ + python3 FIneTune_withPlots.py
8
+
9
+ Checking label assignment:
10
+
11
+ Domain: Mathematics
12
+ Categories: math.KT math.RT
13
+ Abstract: we compute the hochschild cohomology and homology of a class of quantum exterior algebras with coeff...
14
+
15
+ Domain: Computer Science
16
+ Categories: cs.AI cs.LO
17
+ Abstract: this paper presents experiments on common knowledge logic conducted with the help of the proof assis...
18
+
19
+ Domain: Physics
20
+ Categories: physics.ins-det physics.gen-ph
21
+ Abstract: soil bulk density affects water storage water and nutrient movement and plant root activity in the s...
22
+
23
+ Domain: Chemistry
24
+ Categories: nlin.CD
25
+ Abstract: two chaotic systems which interact by mutually exchanging a signal built from their delayed internal...
26
+
27
+ Domain: Statistics
28
+ Categories: stat.ME stat.AP
29
+ Abstract: it is difficult to accurately estimate the rates of rape and domestic violence due to the sensitive ...
30
+
31
+ Domain: Biology
32
+ Categories: q-bio.PE
33
+ Abstract: the distribution of genetic polymorphisms in a population contains information about the mutation ra...
34
+ /linkhome/rech/genrug01/uft12cr/.local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:2057: FutureWarning: Calling BertTokenizer.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymore in v5. Use a model identifier or the path to a directory instead.
35
+ warnings.warn(
36
+
37
+ Training with All Cluster tokenizer:
38
+ Vocabulary size: 16005
39
+ Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
40
+ Initialized model with vocabulary size: 16005
41
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:173: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
42
+ scaler = amp.GradScaler()
43
+ Batch 0:
44
+ input_ids shape: torch.Size([16, 256])
45
+ attention_mask shape: torch.Size([16, 256])
46
+ labels shape: torch.Size([16])
47
+ input_ids max value: 16003
48
+ Vocab size: 16005
49
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
50
+ with amp.autocast():
51
+ Batch 100:
52
+ input_ids shape: torch.Size([16, 256])
53
+ attention_mask shape: torch.Size([16, 256])
54
+ labels shape: torch.Size([16])
55
+ input_ids max value: 16003
56
+ Vocab size: 16005
57
+ Batch 200:
58
+ input_ids shape: torch.Size([16, 256])
59
+ attention_mask shape: torch.Size([16, 256])
60
+ labels shape: torch.Size([16])
61
+ input_ids max value: 16003
62
+ Vocab size: 16005
63
+ Batch 300:
64
+ input_ids shape: torch.Size([16, 256])
65
+ attention_mask shape: torch.Size([16, 256])
66
+ labels shape: torch.Size([16])
67
+ input_ids max value: 16003
68
+ Vocab size: 16005
69
+ Batch 400:
70
+ input_ids shape: torch.Size([16, 256])
71
+ attention_mask shape: torch.Size([16, 256])
72
+ labels shape: torch.Size([16])
73
+ input_ids max value: 16003
74
+ Vocab size: 16005
75
+ Batch 500:
76
+ input_ids shape: torch.Size([16, 256])
77
+ attention_mask shape: torch.Size([16, 256])
78
+ labels shape: torch.Size([16])
79
+ input_ids max value: 16003
80
+ Vocab size: 16005
81
+ Batch 600:
82
+ input_ids shape: torch.Size([16, 256])
83
+ attention_mask shape: torch.Size([16, 256])
84
+ labels shape: torch.Size([16])
85
+ input_ids max value: 16003
86
+ Vocab size: 16005
87
+ Batch 700:
88
+ input_ids shape: torch.Size([16, 256])
89
+ attention_mask shape: torch.Size([16, 256])
90
+ labels shape: torch.Size([16])
91
+ input_ids max value: 16003
92
+ Vocab size: 16005
93
+ Batch 800:
94
+ input_ids shape: torch.Size([16, 256])
95
+ attention_mask shape: torch.Size([16, 256])
96
+ labels shape: torch.Size([16])
97
+ input_ids max value: 16003
98
+ Vocab size: 16005
99
+ Batch 900:
100
+ input_ids shape: torch.Size([16, 256])
101
+ attention_mask shape: torch.Size([16, 256])
102
+ labels shape: torch.Size([16])
103
+ input_ids max value: 16003
104
+ Vocab size: 16005
105
+ Epoch 1/5:
106
+ Train Loss: 0.8860, Train Accuracy: 0.7123
107
+ Val Loss: 0.6624, Val Accuracy: 0.7811, Val F1: 0.7137
108
+ Batch 0:
109
+ input_ids shape: torch.Size([16, 256])
110
+ attention_mask shape: torch.Size([16, 256])
111
+ labels shape: torch.Size([16])
112
+ input_ids max value: 16003
113
+ Vocab size: 16005
114
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
115
+ with amp.autocast():
116
+ Batch 100:
117
+ input_ids shape: torch.Size([16, 256])
118
+ attention_mask shape: torch.Size([16, 256])
119
+ labels shape: torch.Size([16])
120
+ input_ids max value: 16003
121
+ Vocab size: 16005
122
+ Batch 200:
123
+ input_ids shape: torch.Size([16, 256])
124
+ attention_mask shape: torch.Size([16, 256])
125
+ labels shape: torch.Size([16])
126
+ input_ids max value: 16003
127
+ Vocab size: 16005
128
+ Batch 300:
129
+ input_ids shape: torch.Size([16, 256])
130
+ attention_mask shape: torch.Size([16, 256])
131
+ labels shape: torch.Size([16])
132
+ input_ids max value: 16003
133
+ Vocab size: 16005
134
+ Batch 400:
135
+ input_ids shape: torch.Size([16, 256])
136
+ attention_mask shape: torch.Size([16, 256])
137
+ labels shape: torch.Size([16])
138
+ input_ids max value: 16003
139
+ Vocab size: 16005
140
+ Batch 500:
141
+ input_ids shape: torch.Size([16, 256])
142
+ attention_mask shape: torch.Size([16, 256])
143
+ labels shape: torch.Size([16])
144
+ input_ids max value: 16003
145
+ Vocab size: 16005
146
+ Batch 600:
147
+ input_ids shape: torch.Size([16, 256])
148
+ attention_mask shape: torch.Size([16, 256])
149
+ labels shape: torch.Size([16])
150
+ input_ids max value: 16003
151
+ Vocab size: 16005
152
+ Batch 700:
153
+ input_ids shape: torch.Size([16, 256])
154
+ attention_mask shape: torch.Size([16, 256])
155
+ labels shape: torch.Size([16])
156
+ input_ids max value: 16003
157
+ Vocab size: 16005
158
+ Batch 800:
159
+ input_ids shape: torch.Size([16, 256])
160
+ attention_mask shape: torch.Size([16, 256])
161
+ labels shape: torch.Size([16])
162
+ input_ids max value: 16003
163
+ Vocab size: 16005
164
+ Batch 900:
165
+ input_ids shape: torch.Size([16, 256])
166
+ attention_mask shape: torch.Size([16, 256])
167
+ labels shape: torch.Size([16])
168
+ input_ids max value: 16003
169
+ Vocab size: 16005
170
+ Epoch 2/5:
171
+ Train Loss: 0.6292, Train Accuracy: 0.7928
172
+ Val Loss: 0.6377, Val Accuracy: 0.7942, Val F1: 0.7572
173
+ Batch 0:
174
+ input_ids shape: torch.Size([16, 256])
175
+ attention_mask shape: torch.Size([16, 256])
176
+ labels shape: torch.Size([16])
177
+ input_ids max value: 16003
178
+ Vocab size: 16005
179
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
180
+ with amp.autocast():
181
+ Batch 100:
182
+ input_ids shape: torch.Size([16, 256])
183
+ attention_mask shape: torch.Size([16, 256])
184
+ labels shape: torch.Size([16])
185
+ input_ids max value: 16003
186
+ Vocab size: 16005
187
+ Batch 200:
188
+ input_ids shape: torch.Size([16, 256])
189
+ attention_mask shape: torch.Size([16, 256])
190
+ labels shape: torch.Size([16])
191
+ input_ids max value: 16003
192
+ Vocab size: 16005
193
+ Batch 300:
194
+ input_ids shape: torch.Size([16, 256])
195
+ attention_mask shape: torch.Size([16, 256])
196
+ labels shape: torch.Size([16])
197
+ input_ids max value: 16003
198
+ Vocab size: 16005
199
+ Batch 400:
200
+ input_ids shape: torch.Size([16, 256])
201
+ attention_mask shape: torch.Size([16, 256])
202
+ labels shape: torch.Size([16])
203
+ input_ids max value: 16003
204
+ Vocab size: 16005
205
+ Batch 500:
206
+ input_ids shape: torch.Size([16, 256])
207
+ attention_mask shape: torch.Size([16, 256])
208
+ labels shape: torch.Size([16])
209
+ input_ids max value: 16003
210
+ Vocab size: 16005
211
+ Batch 600:
212
+ input_ids shape: torch.Size([16, 256])
213
+ attention_mask shape: torch.Size([16, 256])
214
+ labels shape: torch.Size([16])
215
+ input_ids max value: 16003
216
+ Vocab size: 16005
217
+ Batch 700:
218
+ input_ids shape: torch.Size([16, 256])
219
+ attention_mask shape: torch.Size([16, 256])
220
+ labels shape: torch.Size([16])
221
+ input_ids max value: 16003
222
+ Vocab size: 16005
223
+ Batch 800:
224
+ input_ids shape: torch.Size([16, 256])
225
+ attention_mask shape: torch.Size([16, 256])
226
+ labels shape: torch.Size([16])
227
+ input_ids max value: 16003
228
+ Vocab size: 16005
229
+ Batch 900:
230
+ input_ids shape: torch.Size([16, 256])
231
+ attention_mask shape: torch.Size([16, 256])
232
+ labels shape: torch.Size([16])
233
+ input_ids max value: 16003
234
+ Vocab size: 16005
235
+ Epoch 3/5:
236
+ Train Loss: 0.5420, Train Accuracy: 0.8283
237
+ Val Loss: 0.6224, Val Accuracy: 0.7983, Val F1: 0.7744
238
+ Batch 0:
239
+ input_ids shape: torch.Size([16, 256])
240
+ attention_mask shape: torch.Size([16, 256])
241
+ labels shape: torch.Size([16])
242
+ input_ids max value: 16003
243
+ Vocab size: 16005
244
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
245
+ with amp.autocast():
246
+ Batch 100:
247
+ input_ids shape: torch.Size([16, 256])
248
+ attention_mask shape: torch.Size([16, 256])
249
+ labels shape: torch.Size([16])
250
+ input_ids max value: 16003
251
+ Vocab size: 16005
252
+ Batch 200:
253
+ input_ids shape: torch.Size([16, 256])
254
+ attention_mask shape: torch.Size([16, 256])
255
+ labels shape: torch.Size([16])
256
+ input_ids max value: 16003
257
+ Vocab size: 16005
258
+ Batch 300:
259
+ input_ids shape: torch.Size([16, 256])
260
+ attention_mask shape: torch.Size([16, 256])
261
+ labels shape: torch.Size([16])
262
+ input_ids max value: 16003
263
+ Vocab size: 16005
264
+ Batch 400:
265
+ input_ids shape: torch.Size([16, 256])
266
+ attention_mask shape: torch.Size([16, 256])
267
+ labels shape: torch.Size([16])
268
+ input_ids max value: 16003
269
+ Vocab size: 16005
270
+ Batch 500:
271
+ input_ids shape: torch.Size([16, 256])
272
+ attention_mask shape: torch.Size([16, 256])
273
+ labels shape: torch.Size([16])
274
+ input_ids max value: 16003
275
+ Vocab size: 16005
276
+ Batch 600:
277
+ input_ids shape: torch.Size([16, 256])
278
+ attention_mask shape: torch.Size([16, 256])
279
+ labels shape: torch.Size([16])
280
+ input_ids max value: 16003
281
+ Vocab size: 16005
282
+ Batch 700:
283
+ input_ids shape: torch.Size([16, 256])
284
+ attention_mask shape: torch.Size([16, 256])
285
+ labels shape: torch.Size([16])
286
+ input_ids max value: 16003
287
+ Vocab size: 16005
288
+ Batch 800:
289
+ input_ids shape: torch.Size([16, 256])
290
+ attention_mask shape: torch.Size([16, 256])
291
+ labels shape: torch.Size([16])
292
+ input_ids max value: 16003
293
+ Vocab size: 16005
294
+ Batch 900:
295
+ input_ids shape: torch.Size([16, 256])
296
+ attention_mask shape: torch.Size([16, 256])
297
+ labels shape: torch.Size([16])
298
+ input_ids max value: 16003
299
+ Vocab size: 16005
300
+ Epoch 4/5:
301
+ Train Loss: 0.4496, Train Accuracy: 0.8583
302
+ Val Loss: 0.6285, Val Accuracy: 0.8109, Val F1: 0.7863
303
+ Batch 0:
304
+ input_ids shape: torch.Size([16, 256])
305
+ attention_mask shape: torch.Size([16, 256])
306
+ labels shape: torch.Size([16])
307
+ input_ids max value: 16003
308
+ Vocab size: 16005
309
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
310
+ with amp.autocast():
311
+ Batch 100:
312
+ input_ids shape: torch.Size([16, 256])
313
+ attention_mask shape: torch.Size([16, 256])
314
+ labels shape: torch.Size([16])
315
+ input_ids max value: 16003
316
+ Vocab size: 16005
317
+ Batch 200:
318
+ input_ids shape: torch.Size([16, 256])
319
+ attention_mask shape: torch.Size([16, 256])
320
+ labels shape: torch.Size([16])
321
+ input_ids max value: 16003
322
+ Vocab size: 16005
323
+ Batch 300:
324
+ input_ids shape: torch.Size([16, 256])
325
+ attention_mask shape: torch.Size([16, 256])
326
+ labels shape: torch.Size([16])
327
+ input_ids max value: 16003
328
+ Vocab size: 16005
329
+ Batch 400:
330
+ input_ids shape: torch.Size([16, 256])
331
+ attention_mask shape: torch.Size([16, 256])
332
+ labels shape: torch.Size([16])
333
+ input_ids max value: 16003
334
+ Vocab size: 16005
335
+ Batch 500:
336
+ input_ids shape: torch.Size([16, 256])
337
+ attention_mask shape: torch.Size([16, 256])
338
+ labels shape: torch.Size([16])
339
+ input_ids max value: 16003
340
+ Vocab size: 16005
341
+ Batch 600:
342
+ input_ids shape: torch.Size([16, 256])
343
+ attention_mask shape: torch.Size([16, 256])
344
+ labels shape: torch.Size([16])
345
+ input_ids max value: 16003
346
+ Vocab size: 16005
347
+ Batch 700:
348
+ input_ids shape: torch.Size([16, 256])
349
+ attention_mask shape: torch.Size([16, 256])
350
+ labels shape: torch.Size([16])
351
+ input_ids max value: 16003
352
+ Vocab size: 16005
353
+ Batch 800:
354
+ input_ids shape: torch.Size([16, 256])
355
+ attention_mask shape: torch.Size([16, 256])
356
+ labels shape: torch.Size([16])
357
+ input_ids max value: 16003
358
+ Vocab size: 16005
359
+ Batch 900:
360
+ input_ids shape: torch.Size([16, 256])
361
+ attention_mask shape: torch.Size([16, 256])
362
+ labels shape: torch.Size([16])
363
+ input_ids max value: 16003
364
+ Vocab size: 16005
365
+ Epoch 5/5:
366
+ Train Loss: 0.3687, Train Accuracy: 0.8816
367
+ Val Loss: 0.6460, Val Accuracy: 0.8111, Val F1: 0.7860
368
+
369
+ Test Results for All Cluster tokenizer:
370
+ Accuracy: 0.8111
371
+ F1 Score: 0.7860
372
+ AUC-ROC: 0.8681
373
+
374
+ Training with Final tokenizer:
375
+ Vocabulary size: 18524
376
+ Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
377
+ Initialized model with vocabulary size: 18524
378
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:173: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
379
+ scaler = amp.GradScaler()
380
+ Batch 0:
381
+ input_ids shape: torch.Size([16, 256])
382
+ attention_mask shape: torch.Size([16, 256])
383
+ labels shape: torch.Size([16])
384
+ input_ids max value: 18523
385
+ Vocab size: 18524
386
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
387
+ with amp.autocast():
388
+ Batch 100:
389
+ input_ids shape: torch.Size([16, 256])
390
+ attention_mask shape: torch.Size([16, 256])
391
+ labels shape: torch.Size([16])
392
+ input_ids max value: 18523
393
+ Vocab size: 18524
394
+ Batch 200:
395
+ input_ids shape: torch.Size([16, 256])
396
+ attention_mask shape: torch.Size([16, 256])
397
+ labels shape: torch.Size([16])
398
+ input_ids max value: 18523
399
+ Vocab size: 18524
400
+ Batch 300:
401
+ input_ids shape: torch.Size([16, 256])
402
+ attention_mask shape: torch.Size([16, 256])
403
+ labels shape: torch.Size([16])
404
+ input_ids max value: 18523
405
+ Vocab size: 18524
406
+ Batch 400:
407
+ input_ids shape: torch.Size([16, 256])
408
+ attention_mask shape: torch.Size([16, 256])
409
+ labels shape: torch.Size([16])
410
+ input_ids max value: 18523
411
+ Vocab size: 18524
412
+ Batch 500:
413
+ input_ids shape: torch.Size([16, 256])
414
+ attention_mask shape: torch.Size([16, 256])
415
+ labels shape: torch.Size([16])
416
+ input_ids max value: 18523
417
+ Vocab size: 18524
418
+ Batch 600:
419
+ input_ids shape: torch.Size([16, 256])
420
+ attention_mask shape: torch.Size([16, 256])
421
+ labels shape: torch.Size([16])
422
+ input_ids max value: 18523
423
+ Vocab size: 18524
424
+ Batch 700:
425
+ input_ids shape: torch.Size([16, 256])
426
+ attention_mask shape: torch.Size([16, 256])
427
+ labels shape: torch.Size([16])
428
+ input_ids max value: 18523
429
+ Vocab size: 18524
430
+ Batch 800:
431
+ input_ids shape: torch.Size([16, 256])
432
+ attention_mask shape: torch.Size([16, 256])
433
+ labels shape: torch.Size([16])
434
+ input_ids max value: 18523
435
+ Vocab size: 18524
436
+ Batch 900:
437
+ input_ids shape: torch.Size([16, 256])
438
+ attention_mask shape: torch.Size([16, 256])
439
+ labels shape: torch.Size([16])
440
+ input_ids max value: 18523
441
+ Vocab size: 18524
442
+ Epoch 1/5:
443
+ Train Loss: 0.9291, Train Accuracy: 0.6943
444
+ Val Loss: 0.7526, Val Accuracy: 0.7593, Val F1: 0.6923
445
+ Batch 0:
446
+ input_ids shape: torch.Size([16, 256])
447
+ attention_mask shape: torch.Size([16, 256])
448
+ labels shape: torch.Size([16])
449
+ input_ids max value: 18523
450
+ Vocab size: 18524
451
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
452
+ with amp.autocast():
453
+ Batch 100:
454
+ input_ids shape: torch.Size([16, 256])
455
+ attention_mask shape: torch.Size([16, 256])
456
+ labels shape: torch.Size([16])
457
+ input_ids max value: 18523
458
+ Vocab size: 18524
459
+ Batch 200:
460
+ input_ids shape: torch.Size([16, 256])
461
+ attention_mask shape: torch.Size([16, 256])
462
+ labels shape: torch.Size([16])
463
+ input_ids max value: 18523
464
+ Vocab size: 18524
465
+ Batch 300:
466
+ input_ids shape: torch.Size([16, 256])
467
+ attention_mask shape: torch.Size([16, 256])
468
+ labels shape: torch.Size([16])
469
+ input_ids max value: 18523
470
+ Vocab size: 18524
471
+ Batch 400:
472
+ input_ids shape: torch.Size([16, 256])
473
+ attention_mask shape: torch.Size([16, 256])
474
+ labels shape: torch.Size([16])
475
+ input_ids max value: 18523
476
+ Vocab size: 18524
477
+ Batch 500:
478
+ input_ids shape: torch.Size([16, 256])
479
+ attention_mask shape: torch.Size([16, 256])
480
+ labels shape: torch.Size([16])
481
+ input_ids max value: 18523
482
+ Vocab size: 18524
483
+ Batch 600:
484
+ input_ids shape: torch.Size([16, 256])
485
+ attention_mask shape: torch.Size([16, 256])
486
+ labels shape: torch.Size([16])
487
+ input_ids max value: 18523
488
+ Vocab size: 18524
489
+ Batch 700:
490
+ input_ids shape: torch.Size([16, 256])
491
+ attention_mask shape: torch.Size([16, 256])
492
+ labels shape: torch.Size([16])
493
+ input_ids max value: 18523
494
+ Vocab size: 18524
495
+ Batch 800:
496
+ input_ids shape: torch.Size([16, 256])
497
+ attention_mask shape: torch.Size([16, 256])
498
+ labels shape: torch.Size([16])
499
+ input_ids max value: 18523
500
+ Vocab size: 18524
501
+ Batch 900:
502
+ input_ids shape: torch.Size([16, 256])
503
+ attention_mask shape: torch.Size([16, 256])
504
+ labels shape: torch.Size([16])
505
+ input_ids max value: 18523
506
+ Vocab size: 18524
507
+ Epoch 2/5:
508
+ Train Loss: 0.6952, Train Accuracy: 0.7752
509
+ Val Loss: 0.6884, Val Accuracy: 0.7705, Val F1: 0.7291
510
+ Batch 0:
511
+ input_ids shape: torch.Size([16, 256])
512
+ attention_mask shape: torch.Size([16, 256])
513
+ labels shape: torch.Size([16])
514
+ input_ids max value: 18523
515
+ Vocab size: 18524
516
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
517
+ with amp.autocast():
518
+ Batch 100:
519
+ input_ids shape: torch.Size([16, 256])
520
+ attention_mask shape: torch.Size([16, 256])
521
+ labels shape: torch.Size([16])
522
+ input_ids max value: 18523
523
+ Vocab size: 18524
524
+ Batch 200:
525
+ input_ids shape: torch.Size([16, 256])
526
+ attention_mask shape: torch.Size([16, 256])
527
+ labels shape: torch.Size([16])
528
+ input_ids max value: 18523
529
+ Vocab size: 18524
530
+ Batch 300:
531
+ input_ids shape: torch.Size([16, 256])
532
+ attention_mask shape: torch.Size([16, 256])
533
+ labels shape: torch.Size([16])
534
+ input_ids max value: 18523
535
+ Vocab size: 18524
536
+ Batch 400:
537
+ input_ids shape: torch.Size([16, 256])
538
+ attention_mask shape: torch.Size([16, 256])
539
+ labels shape: torch.Size([16])
540
+ input_ids max value: 18523
541
+ Vocab size: 18524
542
+ Batch 500:
543
+ input_ids shape: torch.Size([16, 256])
544
+ attention_mask shape: torch.Size([16, 256])
545
+ labels shape: torch.Size([16])
546
+ input_ids max value: 18523
547
+ Vocab size: 18524
548
+ Batch 600:
549
+ input_ids shape: torch.Size([16, 256])
550
+ attention_mask shape: torch.Size([16, 256])
551
+ labels shape: torch.Size([16])
552
+ input_ids max value: 18523
553
+ Vocab size: 18524
554
+ Batch 700:
555
+ input_ids shape: torch.Size([16, 256])
556
+ attention_mask shape: torch.Size([16, 256])
557
+ labels shape: torch.Size([16])
558
+ input_ids max value: 18523
559
+ Vocab size: 18524
560
+ Batch 800:
561
+ input_ids shape: torch.Size([16, 256])
562
+ attention_mask shape: torch.Size([16, 256])
563
+ labels shape: torch.Size([16])
564
+ input_ids max value: 18523
565
+ Vocab size: 18524
566
+ Batch 900:
567
+ input_ids shape: torch.Size([16, 256])
568
+ attention_mask shape: torch.Size([16, 256])
569
+ labels shape: torch.Size([16])
570
+ input_ids max value: 18523
571
+ Vocab size: 18524
572
+ Epoch 3/5:
573
+ Train Loss: 0.6147, Train Accuracy: 0.7993
574
+ Val Loss: 0.6780, Val Accuracy: 0.7874, Val F1: 0.7596
575
+ Batch 0:
576
+ input_ids shape: torch.Size([16, 256])
577
+ attention_mask shape: torch.Size([16, 256])
578
+ labels shape: torch.Size([16])
579
+ input_ids max value: 18523
580
+ Vocab size: 18524
581
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
582
+ with amp.autocast():
583
+ Batch 100:
584
+ input_ids shape: torch.Size([16, 256])
585
+ attention_mask shape: torch.Size([16, 256])
586
+ labels shape: torch.Size([16])
587
+ input_ids max value: 18523
588
+ Vocab size: 18524
589
+ Batch 200:
590
+ input_ids shape: torch.Size([16, 256])
591
+ attention_mask shape: torch.Size([16, 256])
592
+ labels shape: torch.Size([16])
593
+ input_ids max value: 18523
594
+ Vocab size: 18524
595
+ Batch 300:
596
+ input_ids shape: torch.Size([16, 256])
597
+ attention_mask shape: torch.Size([16, 256])
598
+ labels shape: torch.Size([16])
599
+ input_ids max value: 18523
600
+ Vocab size: 18524
601
+ Batch 400:
602
+ input_ids shape: torch.Size([16, 256])
603
+ attention_mask shape: torch.Size([16, 256])
604
+ labels shape: torch.Size([16])
605
+ input_ids max value: 18523
606
+ Vocab size: 18524
607
+ Batch 500:
608
+ input_ids shape: torch.Size([16, 256])
609
+ attention_mask shape: torch.Size([16, 256])
610
+ labels shape: torch.Size([16])
611
+ input_ids max value: 18523
612
+ Vocab size: 18524
613
+ Batch 600:
614
+ input_ids shape: torch.Size([16, 256])
615
+ attention_mask shape: torch.Size([16, 256])
616
+ labels shape: torch.Size([16])
617
+ input_ids max value: 18523
618
+ Vocab size: 18524
619
+ Batch 700:
620
+ input_ids shape: torch.Size([16, 256])
621
+ attention_mask shape: torch.Size([16, 256])
622
+ labels shape: torch.Size([16])
623
+ input_ids max value: 18523
624
+ Vocab size: 18524
625
+ Batch 800:
626
+ input_ids shape: torch.Size([16, 256])
627
+ attention_mask shape: torch.Size([16, 256])
628
+ labels shape: torch.Size([16])
629
+ input_ids max value: 18523
630
+ Vocab size: 18524
631
+ Batch 900:
632
+ input_ids shape: torch.Size([16, 256])
633
+ attention_mask shape: torch.Size([16, 256])
634
+ labels shape: torch.Size([16])
635
+ input_ids max value: 18523
636
+ Vocab size: 18524
637
+ Epoch 4/5:
638
+ Train Loss: 0.5494, Train Accuracy: 0.8242
639
+ Val Loss: 0.6878, Val Accuracy: 0.7920, Val F1: 0.7655
640
+ Batch 0:
641
+ input_ids shape: torch.Size([16, 256])
642
+ attention_mask shape: torch.Size([16, 256])
643
+ labels shape: torch.Size([16])
644
+ input_ids max value: 18523
645
+ Vocab size: 18524
646
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
647
+ with amp.autocast():
648
+ Batch 100:
649
+ input_ids shape: torch.Size([16, 256])
650
+ attention_mask shape: torch.Size([16, 256])
651
+ labels shape: torch.Size([16])
652
+ input_ids max value: 18523
653
+ Vocab size: 18524
654
+ Batch 200:
655
+ input_ids shape: torch.Size([16, 256])
656
+ attention_mask shape: torch.Size([16, 256])
657
+ labels shape: torch.Size([16])
658
+ input_ids max value: 18523
659
+ Vocab size: 18524
660
+ Batch 300:
661
+ input_ids shape: torch.Size([16, 256])
662
+ attention_mask shape: torch.Size([16, 256])
663
+ labels shape: torch.Size([16])
664
+ input_ids max value: 18523
665
+ Vocab size: 18524
666
+ Batch 400:
667
+ input_ids shape: torch.Size([16, 256])
668
+ attention_mask shape: torch.Size([16, 256])
669
+ labels shape: torch.Size([16])
670
+ input_ids max value: 18523
671
+ Vocab size: 18524
672
+ Batch 500:
673
+ input_ids shape: torch.Size([16, 256])
674
+ attention_mask shape: torch.Size([16, 256])
675
+ labels shape: torch.Size([16])
676
+ input_ids max value: 18523
677
+ Vocab size: 18524
678
+ Batch 600:
679
+ input_ids shape: torch.Size([16, 256])
680
+ attention_mask shape: torch.Size([16, 256])
681
+ labels shape: torch.Size([16])
682
+ input_ids max value: 18523
683
+ Vocab size: 18524
684
+ Batch 700:
685
+ input_ids shape: torch.Size([16, 256])
686
+ attention_mask shape: torch.Size([16, 256])
687
+ labels shape: torch.Size([16])
688
+ input_ids max value: 18523
689
+ Vocab size: 18524
690
+ Batch 800:
691
+ input_ids shape: torch.Size([16, 256])
692
+ attention_mask shape: torch.Size([16, 256])
693
+ labels shape: torch.Size([16])
694
+ input_ids max value: 18523
695
+ Vocab size: 18524
696
+ Batch 900:
697
+ input_ids shape: torch.Size([16, 256])
698
+ attention_mask shape: torch.Size([16, 256])
699
+ labels shape: torch.Size([16])
700
+ input_ids max value: 18523
701
+ Vocab size: 18524
702
+ Epoch 5/5:
703
+ Train Loss: 0.4703, Train Accuracy: 0.8558
704
+ Val Loss: 0.7217, Val Accuracy: 0.8046, Val F1: 0.7712
705
+
706
+ Test Results for Final tokenizer:
707
+ Accuracy: 0.8043
708
+ F1 Score: 0.7709
709
+ AUC-ROC: 0.8254
710
+
711
+ Training with General tokenizer:
712
+ Vocabulary size: 30522
713
+ Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
714
+ Initialized model with vocabulary size: 30522
715
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:173: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
716
+ scaler = amp.GradScaler()
717
+ Batch 0:
718
+ input_ids shape: torch.Size([16, 256])
719
+ attention_mask shape: torch.Size([16, 256])
720
+ labels shape: torch.Size([16])
721
+ input_ids max value: 29464
722
+ Vocab size: 30522
723
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
724
+ with amp.autocast():
725
+ Batch 100:
726
+ input_ids shape: torch.Size([16, 256])
727
+ attention_mask shape: torch.Size([16, 256])
728
+ labels shape: torch.Size([16])
729
+ input_ids max value: 29521
730
+ Vocab size: 30522
731
+ Batch 200:
732
+ input_ids shape: torch.Size([16, 256])
733
+ attention_mask shape: torch.Size([16, 256])
734
+ labels shape: torch.Size([16])
735
+ input_ids max value: 29446
736
+ Vocab size: 30522
737
+ Batch 300:
738
+ input_ids shape: torch.Size([16, 256])
739
+ attention_mask shape: torch.Size([16, 256])
740
+ labels shape: torch.Size([16])
741
+ input_ids max value: 29320
742
+ Vocab size: 30522
743
+ Batch 400:
744
+ input_ids shape: torch.Size([16, 256])
745
+ attention_mask shape: torch.Size([16, 256])
746
+ labels shape: torch.Size([16])
747
+ input_ids max value: 29336
748
+ Vocab size: 30522
749
+ Batch 500:
750
+ input_ids shape: torch.Size([16, 256])
751
+ attention_mask shape: torch.Size([16, 256])
752
+ labels shape: torch.Size([16])
753
+ input_ids max value: 29280
754
+ Vocab size: 30522
755
+ Batch 600:
756
+ input_ids shape: torch.Size([16, 256])
757
+ attention_mask shape: torch.Size([16, 256])
758
+ labels shape: torch.Size([16])
759
+ input_ids max value: 29130
760
+ Vocab size: 30522
761
+ Batch 700:
762
+ input_ids shape: torch.Size([16, 256])
763
+ attention_mask shape: torch.Size([16, 256])
764
+ labels shape: torch.Size([16])
765
+ input_ids max value: 29536
766
+ Vocab size: 30522
767
+ Batch 800:
768
+ input_ids shape: torch.Size([16, 256])
769
+ attention_mask shape: torch.Size([16, 256])
770
+ labels shape: torch.Size([16])
771
+ input_ids max value: 29445
772
+ Vocab size: 30522
773
+ Batch 900:
774
+ input_ids shape: torch.Size([16, 256])
775
+ attention_mask shape: torch.Size([16, 256])
776
+ labels shape: torch.Size([16])
777
+ input_ids max value: 29469
778
+ Vocab size: 30522
779
+ Epoch 1/5:
780
+ Train Loss: 0.9230, Train Accuracy: 0.6966
781
+ Val Loss: 0.7881, Val Accuracy: 0.7465, Val F1: 0.6718
782
+ Batch 0:
783
+ input_ids shape: torch.Size([16, 256])
784
+ attention_mask shape: torch.Size([16, 256])
785
+ labels shape: torch.Size([16])
786
+ input_ids max value: 29462
787
+ Vocab size: 30522
788
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
789
+ with amp.autocast():
790
+ Batch 100:
791
+ input_ids shape: torch.Size([16, 256])
792
+ attention_mask shape: torch.Size([16, 256])
793
+ labels shape: torch.Size([16])
794
+ input_ids max value: 29464
795
+ Vocab size: 30522
796
+ Batch 200:
797
+ input_ids shape: torch.Size([16, 256])
798
+ attention_mask shape: torch.Size([16, 256])
799
+ labels shape: torch.Size([16])
800
+ input_ids max value: 29477
801
+ Vocab size: 30522
802
+ Batch 300:
803
+ input_ids shape: torch.Size([16, 256])
804
+ attention_mask shape: torch.Size([16, 256])
805
+ labels shape: torch.Size([16])
806
+ input_ids max value: 29464
807
+ Vocab size: 30522
808
+ Batch 400:
809
+ input_ids shape: torch.Size([16, 256])
810
+ attention_mask shape: torch.Size([16, 256])
811
+ labels shape: torch.Size([16])
812
+ input_ids max value: 29402
813
+ Vocab size: 30522
814
+ Batch 500:
815
+ input_ids shape: torch.Size([16, 256])
816
+ attention_mask shape: torch.Size([16, 256])
817
+ labels shape: torch.Size([16])
818
+ input_ids max value: 28993
819
+ Vocab size: 30522
820
+ Batch 600:
821
+ input_ids shape: torch.Size([16, 256])
822
+ attention_mask shape: torch.Size([16, 256])
823
+ labels shape: torch.Size([16])
824
+ input_ids max value: 29238
825
+ Vocab size: 30522
826
+ Batch 700:
827
+ input_ids shape: torch.Size([16, 256])
828
+ attention_mask shape: torch.Size([16, 256])
829
+ labels shape: torch.Size([16])
830
+ input_ids max value: 29558
831
+ Vocab size: 30522
832
+ Batch 800:
833
+ input_ids shape: torch.Size([16, 256])
834
+ attention_mask shape: torch.Size([16, 256])
835
+ labels shape: torch.Size([16])
836
+ input_ids max value: 29433
837
+ Vocab size: 30522
838
+ Batch 900:
839
+ input_ids shape: torch.Size([16, 256])
840
+ attention_mask shape: torch.Size([16, 256])
841
+ labels shape: torch.Size([16])
842
+ input_ids max value: 29339
843
+ Vocab size: 30522
844
+ Epoch 2/5:
845
+ Train Loss: 0.6269, Train Accuracy: 0.7939
846
+ Val Loss: 0.6425, Val Accuracy: 0.7959, Val F1: 0.7705
847
+ Batch 0:
848
+ input_ids shape: torch.Size([16, 256])
849
+ attention_mask shape: torch.Size([16, 256])
850
+ labels shape: torch.Size([16])
851
+ input_ids max value: 29160
852
+ Vocab size: 30522
853
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
854
+ with amp.autocast():
855
+ Batch 100:
856
+ input_ids shape: torch.Size([16, 256])
857
+ attention_mask shape: torch.Size([16, 256])
858
+ labels shape: torch.Size([16])
859
+ input_ids max value: 29464
860
+ Vocab size: 30522
861
+ Batch 200:
862
+ input_ids shape: torch.Size([16, 256])
863
+ attention_mask shape: torch.Size([16, 256])
864
+ labels shape: torch.Size([16])
865
+ input_ids max value: 29535
866
+ Vocab size: 30522
867
+ Batch 300:
868
+ input_ids shape: torch.Size([16, 256])
869
+ attention_mask shape: torch.Size([16, 256])
870
+ labels shape: torch.Size([16])
871
+ input_ids max value: 29160
872
+ Vocab size: 30522
873
+ Batch 400:
874
+ input_ids shape: torch.Size([16, 256])
875
+ attention_mask shape: torch.Size([16, 256])
876
+ labels shape: torch.Size([16])
877
+ input_ids max value: 29536
878
+ Vocab size: 30522
879
+ Batch 500:
880
+ input_ids shape: torch.Size([16, 256])
881
+ attention_mask shape: torch.Size([16, 256])
882
+ labels shape: torch.Size([16])
883
+ input_ids max value: 29458
884
+ Vocab size: 30522
885
+ Batch 600:
886
+ input_ids shape: torch.Size([16, 256])
887
+ attention_mask shape: torch.Size([16, 256])
888
+ labels shape: torch.Size([16])
889
+ input_ids max value: 29560
890
+ Vocab size: 30522
891
+ Batch 700:
892
+ input_ids shape: torch.Size([16, 256])
893
+ attention_mask shape: torch.Size([16, 256])
894
+ labels shape: torch.Size([16])
895
+ input_ids max value: 29605
896
+ Vocab size: 30522
897
+ Batch 800:
898
+ input_ids shape: torch.Size([16, 256])
899
+ attention_mask shape: torch.Size([16, 256])
900
+ labels shape: torch.Size([16])
901
+ input_ids max value: 29513
902
+ Vocab size: 30522
903
+ Batch 900:
904
+ input_ids shape: torch.Size([16, 256])
905
+ attention_mask shape: torch.Size([16, 256])
906
+ labels shape: torch.Size([16])
907
+ input_ids max value: 29532
908
+ Vocab size: 30522
909
+ Epoch 3/5:
910
+ Train Loss: 0.5377, Train Accuracy: 0.8242
911
+ Val Loss: 0.6742, Val Accuracy: 0.7797, Val F1: 0.7674
912
+ Batch 0:
913
+ input_ids shape: torch.Size([16, 256])
914
+ attention_mask shape: torch.Size([16, 256])
915
+ labels shape: torch.Size([16])
916
+ input_ids max value: 29494
917
+ Vocab size: 30522
918
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
919
+ with amp.autocast():
920
+ Batch 100:
921
+ input_ids shape: torch.Size([16, 256])
922
+ attention_mask shape: torch.Size([16, 256])
923
+ labels shape: torch.Size([16])
924
+ input_ids max value: 29461
925
+ Vocab size: 30522
926
+ Batch 200:
927
+ input_ids shape: torch.Size([16, 256])
928
+ attention_mask shape: torch.Size([16, 256])
929
+ labels shape: torch.Size([16])
930
+ input_ids max value: 29454
931
+ Vocab size: 30522
932
+ Batch 300:
933
+ input_ids shape: torch.Size([16, 256])
934
+ attention_mask shape: torch.Size([16, 256])
935
+ labels shape: torch.Size([16])
936
+ input_ids max value: 29536
937
+ Vocab size: 30522
938
+ Batch 400:
939
+ input_ids shape: torch.Size([16, 256])
940
+ attention_mask shape: torch.Size([16, 256])
941
+ labels shape: torch.Size([16])
942
+ input_ids max value: 29602
943
+ Vocab size: 30522
944
+ Batch 500:
945
+ input_ids shape: torch.Size([16, 256])
946
+ attention_mask shape: torch.Size([16, 256])
947
+ labels shape: torch.Size([16])
948
+ input_ids max value: 29238
949
+ Vocab size: 30522
950
+ Batch 600:
951
+ input_ids shape: torch.Size([16, 256])
952
+ attention_mask shape: torch.Size([16, 256])
953
+ labels shape: torch.Size([16])
954
+ input_ids max value: 29536
955
+ Vocab size: 30522
956
+ Batch 700:
957
+ input_ids shape: torch.Size([16, 256])
958
+ attention_mask shape: torch.Size([16, 256])
959
+ labels shape: torch.Size([16])
960
+ input_ids max value: 29292
961
+ Vocab size: 30522
962
+ Batch 800:
963
+ input_ids shape: torch.Size([16, 256])
964
+ attention_mask shape: torch.Size([16, 256])
965
+ labels shape: torch.Size([16])
966
+ input_ids max value: 29390
967
+ Vocab size: 30522
968
+ Batch 900:
969
+ input_ids shape: torch.Size([16, 256])
970
+ attention_mask shape: torch.Size([16, 256])
971
+ labels shape: torch.Size([16])
972
+ input_ids max value: 29464
973
+ Vocab size: 30522
974
+ Epoch 4/5:
975
+ Train Loss: 0.4776, Train Accuracy: 0.8478
976
+ Val Loss: 0.5951, Val Accuracy: 0.8095, Val F1: 0.7732
977
+ Batch 0:
978
+ input_ids shape: torch.Size([16, 256])
979
+ attention_mask shape: torch.Size([16, 256])
980
+ labels shape: torch.Size([16])
981
+ input_ids max value: 28987
982
+ Vocab size: 30522
983
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
984
+ with amp.autocast():
985
+ Batch 100:
986
+ input_ids shape: torch.Size([16, 256])
987
+ attention_mask shape: torch.Size([16, 256])
988
+ labels shape: torch.Size([16])
989
+ input_ids max value: 29605
990
+ Vocab size: 30522
991
+ Batch 200:
992
+ input_ids shape: torch.Size([16, 256])
993
+ attention_mask shape: torch.Size([16, 256])
994
+ labels shape: torch.Size([16])
995
+ input_ids max value: 29083
996
+ Vocab size: 30522
997
+ Batch 300:
998
+ input_ids shape: torch.Size([16, 256])
999
+ attention_mask shape: torch.Size([16, 256])
1000
+ labels shape: torch.Size([16])
1001
+ input_ids max value: 29532
1002
+ Vocab size: 30522
1003
+ Batch 400:
1004
+ input_ids shape: torch.Size([16, 256])
1005
+ attention_mask shape: torch.Size([16, 256])
1006
+ labels shape: torch.Size([16])
1007
+ input_ids max value: 29605
1008
+ Vocab size: 30522
1009
+ Batch 500:
1010
+ input_ids shape: torch.Size([16, 256])
1011
+ attention_mask shape: torch.Size([16, 256])
1012
+ labels shape: torch.Size([16])
1013
+ input_ids max value: 29417
1014
+ Vocab size: 30522
1015
+ Batch 600:
1016
+ input_ids shape: torch.Size([16, 256])
1017
+ attention_mask shape: torch.Size([16, 256])
1018
+ labels shape: torch.Size([16])
1019
+ input_ids max value: 29280
1020
+ Vocab size: 30522
1021
+ Batch 700:
1022
+ input_ids shape: torch.Size([16, 256])
1023
+ attention_mask shape: torch.Size([16, 256])
1024
+ labels shape: torch.Size([16])
1025
+ input_ids max value: 29464
1026
+ Vocab size: 30522
1027
+ Batch 800:
1028
+ input_ids shape: torch.Size([16, 256])
1029
+ attention_mask shape: torch.Size([16, 256])
1030
+ labels shape: torch.Size([16])
1031
+ input_ids max value: 29390
1032
+ Vocab size: 30522
1033
+ Batch 900:
1034
+ input_ids shape: torch.Size([16, 256])
1035
+ attention_mask shape: torch.Size([16, 256])
1036
+ labels shape: torch.Size([16])
1037
+ input_ids max value: 29441
1038
+ Vocab size: 30522
1039
+ Epoch 5/5:
1040
+ Train Loss: 0.3833, Train Accuracy: 0.8814
1041
+ Val Loss: 0.6523, Val Accuracy: 0.7882, Val F1: 0.7792
1042
+
1043
+ Test Results for General tokenizer:
1044
+ Accuracy: 0.7885
1045
+ F1 Score: 0.7796
1046
+ AUC-ROC: 0.8664
1047
+
1048
+ Summary of Results:
1049
+
1050
+ All Cluster Tokenizer:
1051
+ Accuracy: 0.8111
1052
+ F1 Score: 0.7860
1053
+ AUC-ROC: 0.8681
1054
+
1055
+ Final Tokenizer:
1056
+ Accuracy: 0.8043
1057
+ F1 Score: 0.7709
1058
+ AUC-ROC: 0.8254
1059
+
1060
+ General Tokenizer:
1061
+ Accuracy: 0.7885
1062
+ F1 Score: 0.7796
1063
+ AUC-ROC: 0.8664
1064
+
1065
+ Class distribution in training set:
1066
+ Class Biology: 439 samples
1067
+ Class Chemistry: 454 samples
1068
+ Class Computer Science: 1358 samples
1069
+ Class Mathematics: 9480 samples
1070
+ Class Physics: 2733 samples
1071
+ Class Statistics: 200 samples
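
The log above repeatedly emits FutureWarnings for `torch.cuda.amp.GradScaler()` and `torch.cuda.amp.autocast()`, pointing to the newer `torch.amp` entry points. Below is a minimal sketch of the suggested migration, assuming a PyTorch version recent enough that `torch.amp` accepts an explicit device string; the model, optimizer, and tensor shapes are hypothetical stand-ins (chosen to mirror the logged batch shape [16, 256] and the six domain labels), not the actual FIneTune_withPlots.py training loop.

# Sketch of the torch.amp calls the FutureWarnings recommend.
import torch
from torch import amp, nn

model = nn.Linear(256, 6).cuda()                  # hypothetical classifier head
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
criterion = nn.CrossEntropyLoss()
scaler = amp.GradScaler('cuda')                   # replaces torch.cuda.amp.GradScaler()

features = torch.randn(16, 256, device='cuda')    # stand-in batch: [batch=16, dim=256]
labels = torch.randint(0, 6, (16,), device='cuda')

optimizer.zero_grad()
with amp.autocast('cuda'):                        # replaces torch.cuda.amp.autocast()
    loss = criterion(model(features), labels)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()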
General_tokenizer_plot.png ADDED

Git LFS Details

  • SHA256: 77883c674d2b28deb2ce0d61ea6ce87bc67db4c3882bf2fd89997f8caaae4475
  • Pointer size: 130 Bytes
  • Size of remote file: 62.2 kB