Alijeff1214 committed on
Commit 3be4547 · verified · 1 parent: 51921f3

Upload folder using huggingface_hub

FineTune_GeneralOnly933282.out ADDED
@@ -0,0 +1,247 @@
+ Loading pytorch-gpu/py3/2.1.1
+ Loading requirement: cuda/11.8.0 nccl/2.18.5-1-cuda cudnn/8.7.0.84-cuda
+ gcc/8.5.0 openmpi/4.1.5-cuda intel-mkl/2020.4 magma/2.7.1-cuda sox/14.4.2
+ sparsehash/2.0.3 libjpeg-turbo/2.1.3 ffmpeg/4.4.4
+ + HF_DATASETS_OFFLINE=1
+ + TRANSFORMERS_OFFLINE=1
+ + python3 OnlyGeneralTokenizer.py
+
+ Checking label assignment:
+
+ Domain: Mathematics
+ Categories: math.DS math.CA
+ Abstract: we prove an inequality for holder continuous differential forms on compact manifolds in which the in...
+
+ Domain: Computer Science
+ Categories: cs.NE
+ Abstract: when looking for a solution deterministic methods have the enormous advantage that they do find glob...
+
+ Domain: Physics
+ Categories: physics.hist-ph quant-ph
+ Abstract: maxwells demon was born in and still thrives in modern physics he plays important roles in clarifyin...
+
+ Domain: Chemistry
+ Categories: nlin.PS
+ Abstract: the modulational instability of two interacting waves in a nonlocal kerrtype medium is considered an...
+
+ Domain: Statistics
+ Categories: astro-ph stat.ME
+ Abstract: the identification of increasingly smaller signal from objects observed with a nonperfect instrument...
+
+ Domain: Biology
+ Categories: q-bio.MN cond-mat.stat-mech
+ Abstract: we find that discrete noise of inhibiting signal molecules can greatly delay the extinction of plasm...
+ /linkhome/rech/genrug01/uft12cr/.local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:2057: FutureWarning: Calling BertTokenizer.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymore in v5. Use a model identifier or the path to a directory instead.
+ warnings.warn(
+
+ Training with General tokenizer:
+ Vocabulary size: 30522
+ Could not load pretrained weights from /linkhome/rech/genrug01/uft12cr/bert_Model. Starting with random weights. Error: It looks like the config file at '/linkhome/rech/genrug01/uft12cr/bert_Model/config.json' is not a valid JSON file.
+ Initialized model with vocabulary size: 30522
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:172: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
+ scaler = amp.GradScaler()
+ Batch 0:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29464
+ Vocab size: 30522
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+ with amp.autocast():
+ Batch 100:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29536
+ Vocab size: 30522
+ Batch 200:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29536
+ Vocab size: 30522
+ Batch 300:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29536
+ Vocab size: 30522
+ Batch 400:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29513
+ Vocab size: 30522
+ Batch 500:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29413
+ Vocab size: 30522
+ Batch 600:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29237
+ Vocab size: 30522
+ Batch 700:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29586
+ Vocab size: 30522
+ Batch 800:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29221
+ Vocab size: 30522
+ Batch 900:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29514
+ Vocab size: 30522
+ Epoch 1/3:
+ Val Accuracy: 0.7306, Val F1: 0.6541
+ Batch 0:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29602
+ Vocab size: 30522
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+ with amp.autocast():
+ Batch 100:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29374
+ Vocab size: 30522
+ Batch 200:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29601
+ Vocab size: 30522
+ Batch 300:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29464
+ Vocab size: 30522
+ Batch 400:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29535
+ Vocab size: 30522
+ Batch 500:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29464
+ Vocab size: 30522
+ Batch 600:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29602
+ Vocab size: 30522
+ Batch 700:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29454
+ Vocab size: 30522
+ Batch 800:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29280
+ Vocab size: 30522
+ Batch 900:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29417
+ Vocab size: 30522
+ Epoch 2/3:
+ Val Accuracy: 0.7961, Val F1: 0.7582
+ Batch 0:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29299
+ Vocab size: 30522
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+ with amp.autocast():
+ Batch 100:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29577
+ Vocab size: 30522
+ Batch 200:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29536
+ Vocab size: 30522
+ Batch 300:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29451
+ Vocab size: 30522
+ Batch 400:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29454
+ Vocab size: 30522
+ Batch 500:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29532
+ Vocab size: 30522
+ Batch 600:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29413
+ Vocab size: 30522
+ Batch 700:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29586
+ Vocab size: 30522
+ Batch 800:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29280
+ Vocab size: 30522
+ Batch 900:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29494
+ Vocab size: 30522
+ Epoch 3/3:
+ Val Accuracy: 0.8204, Val F1: 0.7894
+
+ Test Results for General tokenizer:
+ Accuracy: 0.8204
+ F1 Score: 0.7893
+ AUC-ROC: 0.8693
+
+ Class distribution in training set:
+ Class Biology: 439 samples
+ Class Chemistry: 454 samples
+ Class Computer Science: 1358 samples
+ Class Mathematics: 9480 samples
+ Class Physics: 2733 samples
+ Class Statistics: 200 samples
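
This run repeats two FutureWarnings from OnlyGeneralTokenizer.py (lines 172 and 192) about the deprecated `torch.cuda.amp` entry points. Below is a minimal sketch of the migration the warnings themselves suggest, assuming a PyTorch release in which `torch.amp.GradScaler` accepts a device argument; the `train_step` helper and variable names are illustrative, not taken from the actual script.

```python
import torch
from torch import amp

# Replaces torch.cuda.amp.GradScaler(), per the FutureWarning at line 172.
scaler = amp.GradScaler('cuda')

def train_step(model, batch, optimizer):
    # Illustrative helper; the real training loop lives in OnlyGeneralTokenizer.py.
    optimizer.zero_grad(set_to_none=True)
    # Replaces torch.cuda.amp.autocast(), per the FutureWarning at line 192.
    with amp.autocast('cuda'):
        outputs = model(input_ids=batch['input_ids'],
                        attention_mask=batch['attention_mask'],
                        labels=batch['labels'])
        loss = outputs.loss
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```
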
FineTune_GeneralOnly933928.out ADDED
@@ -0,0 +1,672 @@
+ Loading pytorch-gpu/py3/2.1.1
+ Loading requirement: cuda/11.8.0 nccl/2.18.5-1-cuda cudnn/8.7.0.84-cuda
+ gcc/8.5.0 openmpi/4.1.5-cuda intel-mkl/2020.4 magma/2.7.1-cuda sox/14.4.2
+ sparsehash/2.0.3 libjpeg-turbo/2.1.3 ffmpeg/4.4.4
+ + HF_DATASETS_OFFLINE=1
+ + TRANSFORMERS_OFFLINE=1
+ + python3 OnlyGeneralTokenizer.py
+
+ Checking label assignment:
+
+ Domain: Mathematics
+ Categories: math.OA
+ Abstract: a result of akemann anderson and pedersen states that if a sequence of pure states of a calgebra a a...
+
+ Domain: Computer Science
+ Categories: cs.PL
+ Abstract: a rigid loop is a forloop with a counter not accessible to the loop body or any other part of a prog...
+
+ Domain: Physics
+ Categories: physics.gen-ph
+ Abstract: fractional calculus and qdeformed lie algebras are closely related both concepts expand the scope of...
+
+ Domain: Chemistry
+ Categories: quant-ph nlin.CD
+ Abstract: we study scarring phenomena in open quantum systems we show numerical evidence that individual reson...
+
+ Domain: Statistics
+ Categories: stat.ME
+ Abstract: chess and chance are seemingly strange bedfellows luck andor randomness have no apparent role in mov...
+
+ Domain: Biology
+ Categories: q-bio.MN
+ Abstract: in the simplest view of transcriptional regulation the expression of a gene is turned on or off by c...
+ /linkhome/rech/genrug01/uft12cr/.local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:2057: FutureWarning: Calling BertTokenizer.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymore in v5. Use a model identifier or the path to a directory instead.
+ warnings.warn(
+
+ Training with All Cluster tokenizer:
+ Vocabulary size: 16005
+ Could not load pretrained weights from /linkhome/rech/genrug01/uft12cr/bert_Model. Starting with random weights. Error: It looks like the config file at '/linkhome/rech/genrug01/uft12cr/bert_Model/config.json' is not a valid JSON file.
+ Initialized model with vocabulary size: 16005
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:172: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
+ scaler = amp.GradScaler()
+ Batch 0:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+ with amp.autocast():
+ Batch 100:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 200:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 300:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 400:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 500:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 600:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 700:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 800:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 900:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Epoch 1/3:
+ Val Accuracy: 0.7549, Val F1: 0.6896
+ Batch 0:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+ with amp.autocast():
+ Batch 100:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 200:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 300:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 400:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 500:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 600:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 700:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 800:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 900:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Epoch 2/3:
+ Val Accuracy: 0.7473, Val F1: 0.7221
+ Batch 0:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+ with amp.autocast():
+ Batch 100:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 200:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 300:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 400:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 500:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 600:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 700:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 800:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Batch 900:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 16003
+ Vocab size: 16005
+ Epoch 3/3:
+ Val Accuracy: 0.8081, Val F1: 0.7870
+
+ Test Results for All Cluster tokenizer:
+ Accuracy: 0.8084
+ F1 Score: 0.7874
+ AUC-ROC: 0.8421
+
+ Training with Final tokenizer:
+ Vocabulary size: 15253
+ Could not load pretrained weights from /linkhome/rech/genrug01/uft12cr/bert_Model. Starting with random weights. Error: It looks like the config file at '/linkhome/rech/genrug01/uft12cr/bert_Model/config.json' is not a valid JSON file.
+ Initialized model with vocabulary size: 15253
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:172: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
+ scaler = amp.GradScaler()
+ Batch 0:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+ with amp.autocast():
+ Batch 100:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 200:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 300:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 400:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 500:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 600:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 700:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 800:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 900:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Epoch 1/3:
+ Val Accuracy: 0.7096, Val F1: 0.6564
+ Batch 0:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+ with amp.autocast():
+ Batch 100:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 200:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 300:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 400:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 500:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 600:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 700:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 800:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 900:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Epoch 2/3:
+ Val Accuracy: 0.7246, Val F1: 0.6799
+ Batch 0:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+ with amp.autocast():
+ Batch 100:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 200:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 300:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 400:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 500:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 600:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 700:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 800:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Batch 900:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 15252
+ Vocab size: 15253
+ Epoch 3/3:
+ Val Accuracy: 0.7661, Val F1: 0.7440
+
+ Test Results for Final tokenizer:
+ Accuracy: 0.7661
+ F1 Score: 0.7441
+ AUC-ROC: 0.8256
+
+ Training with General tokenizer:
+ Vocabulary size: 30522
+ Could not load pretrained weights from /linkhome/rech/genrug01/uft12cr/bert_Model. Starting with random weights. Error: It looks like the config file at '/linkhome/rech/genrug01/uft12cr/bert_Model/config.json' is not a valid JSON file.
+ Initialized model with vocabulary size: 30522
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:172: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
+ scaler = amp.GradScaler()
+ Batch 0:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29464
+ Vocab size: 30522
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+ with amp.autocast():
+ Batch 100:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29536
+ Vocab size: 30522
+ Batch 200:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29464
+ Vocab size: 30522
+ Batch 300:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29402
+ Vocab size: 30522
+ Batch 400:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29535
+ Vocab size: 30522
+ Batch 500:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29494
+ Vocab size: 30522
+ Batch 600:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29454
+ Vocab size: 30522
+ Batch 700:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29413
+ Vocab size: 30522
+ Batch 800:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 28993
+ Vocab size: 30522
+ Batch 900:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29602
+ Vocab size: 30522
+ Epoch 1/3:
+ Val Accuracy: 0.7601, Val F1: 0.7079
+ Batch 0:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29413
+ Vocab size: 30522
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+ with amp.autocast():
+ Batch 100:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29413
+ Vocab size: 30522
+ Batch 200:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29464
+ Vocab size: 30522
+ Batch 300:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29098
+ Vocab size: 30522
+ Batch 400:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29339
+ Vocab size: 30522
+ Batch 500:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29560
+ Vocab size: 30522
+ Batch 600:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29464
+ Vocab size: 30522
+ Batch 700:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29536
+ Vocab size: 30522
+ Batch 800:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29458
+ Vocab size: 30522
+ Batch 900:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29413
+ Vocab size: 30522
+ Epoch 2/3:
+ Val Accuracy: 0.8002, Val F1: 0.7716
+ Batch 0:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29536
+ Vocab size: 30522
+ /gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
+ with amp.autocast():
+ Batch 100:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29413
+ Vocab size: 30522
+ Batch 200:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29605
+ Vocab size: 30522
+ Batch 300:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29464
+ Vocab size: 30522
+ Batch 400:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29237
+ Vocab size: 30522
+ Batch 500:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29292
+ Vocab size: 30522
+ Batch 600:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29461
+ Vocab size: 30522
+ Batch 700:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29536
+ Vocab size: 30522
+ Batch 800:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29536
+ Vocab size: 30522
+ Batch 900:
+ input_ids shape: torch.Size([16, 256])
+ attention_mask shape: torch.Size([16, 256])
+ labels shape: torch.Size([16])
+ input_ids max value: 29566
+ Vocab size: 30522
+ Epoch 3/3:
+ Val Accuracy: 0.8160, Val F1: 0.7785
+
+ Test Results for General tokenizer:
+ Accuracy: 0.8160
+ F1 Score: 0.7785
+ AUC-ROC: 0.8630
+
+ Summary of Results:
+
+ All Cluster Tokenizer:
+ Accuracy: 0.8084
+ F1 Score: 0.7874
+ AUC-ROC: 0.8421
+
+ Final Tokenizer:
+ Accuracy: 0.7661
+ F1 Score: 0.7441
+ AUC-ROC: 0.8256
+
+ General Tokenizer:
+ Accuracy: 0.8160
+ F1 Score: 0.7785
+ AUC-ROC: 0.8630
+
+ Class distribution in training set:
+ Class Biology: 439 samples
+ Class Chemistry: 454 samples
+ Class Computer Science: 1358 samples
+ Class Mathematics: 9480 samples
+ Class Physics: 2733 samples
+ Class Statistics: 200 samples
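
Two recoverable issues appear at the start of every training block in both logs: `BertTokenizer.from_pretrained()` is called with the path to a single vocab file (deprecated and slated for removal in v5 per the warning), and the checkpoint at /linkhome/rech/genrug01/uft12cr/bert_Model has a config.json that is not valid JSON, so every model silently falls back to random weights. A hedged sketch of how both could be caught up front, assuming a standard `transformers` checkpoint directory layout; the `check_checkpoint` helper is illustrative, and the model class plus `num_labels=6` are assumptions based on the six domains in the log, not on OnlyGeneralTokenizer.py.

```python
import json
from pathlib import Path
from transformers import BertTokenizer, BertForSequenceClassification

MODEL_DIR = Path('/linkhome/rech/genrug01/uft12cr/bert_Model')

def check_checkpoint(model_dir: Path) -> None:
    # Fail early if config.json is malformed instead of silently
    # training from random weights, as happened in the runs above.
    with open(model_dir / 'config.json') as f:
        json.load(f)  # raises json.JSONDecodeError pointing at the bad line/column

check_checkpoint(MODEL_DIR)

# Per the FutureWarning: pass a directory (or a Hub model id), not a single
# vocab file, so the tokenizer config is picked up alongside the vocab.
tokenizer = BertTokenizer.from_pretrained(MODEL_DIR)
model = BertForSequenceClassification.from_pretrained(MODEL_DIR, num_labels=6)
```
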
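The training set is heavily skewed toward Mathematics (9480 of 14664 samples), which is consistent with F1 trailing accuracy in every run. One common mitigation, shown here only as a hedged sketch (this loss wiring is not from OnlyGeneralTokenizer.py), is to weight the cross-entropy loss by inverse class frequency:

```python
import torch
from torch import nn

# Counts copied from the "Class distribution in training set" section above.
counts = {
    'Biology': 439, 'Chemistry': 454, 'Computer Science': 1358,
    'Mathematics': 9480, 'Physics': 2733, 'Statistics': 200,
}
freq = torch.tensor(list(counts.values()), dtype=torch.float)

# Inverse-frequency weights, normalised so the average weight is 1.
weights = freq.sum() / (len(freq) * freq)

criterion = nn.CrossEntropyLoss(weight=weights.to('cuda'))
# loss = criterion(logits, targets)  # logits: [batch, 6], targets: [batch]
```
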