DILA_DATASET / FineTune_withPlots32k1115474.out
Loading pytorch-gpu/py3/2.1.1
Loading requirement: cuda/11.8.0 nccl/2.18.5-1-cuda cudnn/8.7.0.84-cuda
gcc/8.5.0 openmpi/4.1.5-cuda intel-mkl/2020.4 magma/2.7.1-cuda sox/14.4.2
sparsehash/2.0.3 libjpeg-turbo/2.1.3 ffmpeg/4.4.4
+ HF_DATASETS_OFFLINE=1
+ TRANSFORMERS_OFFLINE=1
+ python3 FIneTune_withPlots.py
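
The two exported variables force the Hugging Face libraries to read only from local caches, which is required on compute nodes without internet access. The same can be done from Python, provided it happens before the libraries are imported; a minimal sketch:

import os

# Must be set before importing `datasets` or `transformers`.
os.environ["HF_DATASETS_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"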
Checking label assignment:
Domain: Mathematics
Categories: math.OA math.PR
Abstract: we study the distributional behavior for products and for sums of boolean independent random variabl...
Domain: Computer Science
Categories: cs.CL physics.soc-ph
Abstract: zipfs law states that if words of language are ranked in the order of decreasing frequency in texts ...
Domain: Physics
Categories: physics.atom-ph
Abstract: the effects of parity and time reversal violating potential in particular the tensorpseudotensor ele...
Domain: Chemistry
Categories: nlin.AO
Abstract: over a period of approximately five years pankaj ghemawat of harvard business school and daniel levi...
Domain: Statistics
Categories: stat.AP
Abstract: we consider data consisting of photon counts of diffracted xray radiation as a function of the angle...
Domain: Biology
Categories: q-bio.PE q-bio.GN
Abstract: this paper develops simplified mathematical models describing the mutationselection balance for the ...
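
The domain labels above appear to be derived from the arXiv category strings. A minimal sketch of such a prefix-based mapping (the actual logic in FIneTune_withPlots.py is not shown in this log, so the names and the mapping itself are assumptions):

PREFIX_TO_DOMAIN = {
    "math": "Mathematics",
    "cs": "Computer Science",
    "physics": "Physics",
    "stat": "Statistics",
    "q-bio": "Biology",
}

def domain_for(categories: str) -> str:
    # "math.OA math.PR" -> primary category "math.OA" -> prefix "math"
    prefix = categories.split()[0].split(".")[0]
    return PREFIX_TO_DOMAIN.get(prefix, "Unknown")

Note that the Chemistry sample above carries the category nlin.AO, so whatever mapping the script actually uses also covers categories that are less obviously tied to a domain.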
/linkhome/rech/genrug01/uft12cr/.local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:2057: FutureWarning: Calling BertTokenizer.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymore in v5. Use a model identifier or the path to a directory instead.
warnings.warn(
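
The FutureWarning above is triggered by passing a single vocabulary file to BertTokenizer.from_pretrained(). The v5-safe form points at a directory or a model identifier instead; a sketch with an assumed path:

from transformers import BertTokenizer

# Deprecated: BertTokenizer.from_pretrained(".../vocab.txt")
# Preferred: pass the directory containing vocab.txt (path assumed here).
tokenizer = BertTokenizer.from_pretrained("/gpfswork/rech/fmr/uft12cr/finetuneAli/all_cluster_tokenizer")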
Training with All Cluster tokenizer:
Vocabulary size: 29376
Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
Initialized model with vocabulary size: 29376
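
"HeaderTooLarge" is what safetensors deserialization raises on a corrupt or mismatched weights file, after which the run falls back to random initialization. A plausible reconstruction of that fallback (the script itself is not shown; config values are taken from this run):

from transformers import BertConfig, BertForSequenceClassification

model_path = "/gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model"
config = BertConfig(vocab_size=29376, num_labels=6)  # 6 domain classes
try:
    model = BertForSequenceClassification.from_pretrained(model_path, config=config)
except Exception as e:
    print(f"Could not load pretrained weights from {model_path}. "
          f"Starting with random weights. Error: {e}")
    model = BertForSequenceClassification(config)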
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:173: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
scaler = amp.GradScaler()
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29374
Vocab size: 29376
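
A max input_id of 29374 against a vocabulary of 29376 confirms the token ids stay inside the embedding table. A helper of roughly this shape (hypothetical) would produce the block above every 100 batches:

def log_batch_debug(batch_idx, input_ids, attention_mask, labels, vocab_size):
    # An input_ids max value >= vocab_size would mean a tokenizer/model
    # vocabulary mismatch and an out-of-range embedding lookup.
    if batch_idx % 100 == 0:
        print(f"Batch {batch_idx}:")
        print(f"input_ids shape: {input_ids.shape}")
        print(f"attention_mask shape: {attention_mask.shape}")
        print(f"labels shape: {labels.shape}")
        print(f"input_ids max value: {input_ids.max().item()}")
        print(f"Vocab size: {vocab_size}")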
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/FIneTune_withPlots.py:202: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
with amp.autocast():
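
Both FutureWarnings ask for the unified torch.amp namespace, exactly as the warning text suggests. A minimal sketch of the non-deprecated mixed-precision loop (model, optimizer, and loader names are assumed):

import torch
from torch import amp

scaler = amp.GradScaler("cuda")          # replaces torch.cuda.amp.GradScaler()

for input_ids, attention_mask, labels in train_loader:
    optimizer.zero_grad()
    with amp.autocast("cuda"):           # replaces torch.cuda.amp.autocast()
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
    scaler.scale(out.loss).backward()    # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()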
Batches 100-900: diagnostics unchanged (input_ids/attention_mask [16, 256], labels [16], input_ids max value 29374, Vocab size 29376).
Epoch 1/5:
Train Loss: 0.8540, Train Accuracy: 0.7226
Val Loss: 0.6542, Val Accuracy: 0.7833, Val F1: 0.7250
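
Accuracy and F1 above are presumably computed with scikit-learn over predictions collected during the epoch. The averaging mode behind "Val F1" is not shown in the log, so macro averaging is assumed in this sketch (it would explain F1 sitting below accuracy under class imbalance):

from sklearn.metrics import accuracy_score, f1_score

# preds / targets: lists of class indices gathered over the val loader
val_acc = accuracy_score(targets, preds)
val_f1 = f1_score(targets, preds, average="macro")  # averaging assumed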
Batches 0-900: diagnostics unchanged (input_ids/attention_mask [16, 256], labels [16], input_ids max value 29374, Vocab size 29376).
Epoch 2/5:
Train Loss: 0.6120, Train Accuracy: 0.8040
Val Loss: 0.6541, Val Accuracy: 0.7765, Val F1: 0.7610
Batches 0-900: diagnostics unchanged (input_ids/attention_mask [16, 256], labels [16], input_ids max value 29374, Vocab size 29376).
Epoch 3/5:
Train Loss: 0.5221, Train Accuracy: 0.8347
Val Loss: 0.6959, Val Accuracy: 0.7582, Val F1: 0.7540
Batches 0-900: diagnostics unchanged (input_ids/attention_mask [16, 256], labels [16], input_ids max value 29374, Vocab size 29376).
Epoch 4/5:
Train Loss: 0.4214, Train Accuracy: 0.8676
Val Loss: 0.5618, Val Accuracy: 0.8204, Val F1: 0.7935
Batches 0-900: diagnostics unchanged (input_ids/attention_mask [16, 256], labels [16], input_ids max value 29374, Vocab size 29376).
Epoch 5/5:
Train Loss: 0.3263, Train Accuracy: 0.8953
Val Loss: 0.5990, Val Accuracy: 0.8125, Val F1: 0.8073
Test Results for All Cluster tokenizer:
Accuracy: 0.8125
F1 Score: 0.8071
AUC-ROC: 0.8733
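
AUC-ROC for this six-class problem needs per-class probabilities rather than hard predictions; a sketch assuming a one-vs-rest reduction (the script's actual choice is not shown in the log):

import torch.nn.functional as F
from sklearn.metrics import roc_auc_score

# logits: [num_test_samples, 6] tensor of model outputs
probs = F.softmax(logits, dim=-1).cpu().numpy()
auc = roc_auc_score(targets, probs, multi_class="ovr")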
Training with Final tokenizer:
Vocabulary size: 27998
Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
Initialized model with vocabulary size: 27998
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 27997
Vocab size: 27998
Batches 100-900: diagnostics unchanged (input_ids/attention_mask [16, 256], labels [16], input_ids max value 27997, Vocab size 27998).
Epoch 1/5:
Train Loss: 0.8917, Train Accuracy: 0.7102
Val Loss: 0.7550, Val Accuracy: 0.7533, Val F1: 0.7130
Batches 0-900: diagnostics unchanged (input_ids/attention_mask [16, 256], labels [16], input_ids max value 27997, Vocab size 27998).
Epoch 2/5:
Train Loss: 0.6483, Train Accuracy: 0.7855
Val Loss: 0.6702, Val Accuracy: 0.7822, Val F1: 0.7506
Batches 0-900: diagnostics unchanged (input_ids/attention_mask [16, 256], labels [16], input_ids max value 27997, Vocab size 27998).
Epoch 3/5:
Train Loss: 0.5660, Train Accuracy: 0.8135
Val Loss: 0.6397, Val Accuracy: 0.7983, Val F1: 0.7548
Batches 0-900: diagnostics unchanged (input_ids/attention_mask [16, 256], labels [16], input_ids max value 27997, Vocab size 27998).
Epoch 4/5:
Train Loss: 0.4725, Train Accuracy: 0.8545
Val Loss: 0.7259, Val Accuracy: 0.7707, Val F1: 0.7672
Batches 0-900: diagnostics unchanged (input_ids/attention_mask [16, 256], labels [16], input_ids max value 27997, Vocab size 27998).
Epoch 5/5:
Train Loss: 0.3889, Train Accuracy: 0.8792
Val Loss: 0.5967, Val Accuracy: 0.8174, Val F1: 0.7926
Test Results for Final tokenizer:
Accuracy: 0.8174
F1 Score: 0.7925
AUC-ROC: 0.8663
Training with General tokenizer:
Vocabulary size: 30522
Could not load pretrained weights from /gpfswork/rech/fmr/uft12cr/finetuneAli/Bert_Model. Starting with random weights. Error: Error while deserializing header: HeaderTooLarge
Initialized model with vocabulary size: 30522
Batch 0:
input_ids shape: torch.Size([16, 256])
attention_mask shape: torch.Size([16, 256])
labels shape: torch.Size([16])
input_ids max value: 29605
Vocab size: 30522
Batches 100-900: shapes unchanged (input_ids/attention_mask [16, 256], labels [16]); input_ids max value ranged from 29300 to 29494; Vocab size 30522.
Epoch 1/5:
Train Loss: 0.8557, Train Accuracy: 0.7257
Val Loss: 0.6864, Val Accuracy: 0.7724, Val F1: 0.7309
Batches 0-900: shapes unchanged (input_ids/attention_mask [16, 256], labels [16]); input_ids max value ranged from 29160 to 29605; Vocab size 30522.
Epoch 2/5:
Train Loss: 0.5995, Train Accuracy: 0.8029
Val Loss: 0.6449, Val Accuracy: 0.7882, Val F1: 0.7366
Batches 0-900: shapes unchanged (input_ids/attention_mask [16, 256], labels [16]); input_ids max value ranged from 29280 to 29536; Vocab size 30522.
Epoch 3/5:
Train Loss: 0.5332, Train Accuracy: 0.8291
Val Loss: 0.6577, Val Accuracy: 0.7942, Val F1: 0.7687
Batches 0-900: shapes unchanged (input_ids/attention_mask [16, 256], labels [16]); input_ids max value ranged from 29300 to 29536; Vocab size 30522.
Epoch 4/5:
Train Loss: 0.4665, Train Accuracy: 0.8555
Val Loss: 0.6495, Val Accuracy: 0.7931, Val F1: 0.7709
Batches 0-900: shapes unchanged (input_ids/attention_mask [16, 256], labels [16]); input_ids max value ranged from 29336 to 29602; Vocab size 30522.
Epoch 5/5:
Train Loss: 0.3991, Train Accuracy: 0.8781
Val Loss: 0.6572, Val Accuracy: 0.7948, Val F1: 0.7804
Test Results for General tokenizer:
Accuracy: 0.7945
F1 Score: 0.7802
AUC-ROC: 0.8825
Summary of Results:
All Cluster Tokenizer:
Accuracy: 0.8125
F1 Score: 0.8071
AUC-ROC: 0.8733
Final Tokenizer:
Accuracy: 0.8174
F1 Score: 0.7925
AUC-ROC: 0.8663
General Tokenizer:
Accuracy: 0.7945
F1 Score: 0.7802
AUC-ROC: 0.8825
Class distribution in training set:
Class Biology: 439 samples
Class Chemistry: 454 samples
Class Computer Science: 1358 samples
Class Mathematics: 9480 samples
Class Physics: 2733 samples
Class Statistics: 200 samples
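
The split is heavily imbalanced: Mathematics alone accounts for 9480 of the 14664 training samples (about 65%), while Statistics has 200 (under 1.5%), which is consistent with the F1 scores trailing accuracy throughout. If one wanted to counter this with a class-weighted loss (nothing in this log indicates the script does), a standard inverse-frequency sketch:

import torch

# Per-class sample counts, in the order listed above
counts = torch.tensor([439., 454., 1358., 9480., 2733., 200.])
weights = counts.sum() / (len(counts) * counts)   # inverse-frequency weights
loss_fn = torch.nn.CrossEntropyLoss(weight=weights.to("cuda"))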