|
Loading pytorch-gpu/py3/2.1.1 |
|
Loading requirement: cuda/11.8.0 nccl/2.18.5-1-cuda cudnn/8.7.0.84-cuda gcc/8.5.0 openmpi/4.1.5-cuda intel-mkl/2020.4 magma/2.7.1-cuda sox/14.4.2 sparsehash/2.0.3 libjpeg-turbo/2.1.3 ffmpeg/4.4.4
|
+ HF_DATASETS_OFFLINE=1 |
|
+ TRANSFORMERS_OFFLINE=1 |
|
+ python3 OnlyGeneralTokenizer.py |
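
Editor's note: the `+` lines are the shell's xtrace of the job script. The two environment variables force the Hugging Face libraries to resolve datasets and models from the local cache only, which is what a compute node without internet access requires. A minimal sketch of the same setup done from inside Python; the variable names are the real ones, and setting them before the library imports is the one assumption here:

    import os

    # Must be set before importing datasets/transformers so the offline
    # flags are read at import time.
    os.environ["HF_DATASETS_OFFLINE"] = "1"
    os.environ["TRANSFORMERS_OFFLINE"] = "1"

    from transformers import BertTokenizer  # now resolves from local cache only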
|
|
|
Checking label assignment: |
|
|
|
Domain: Mathematics |
|
Categories: hep-th math-ph math.MP nlin.SI |
|
Abstract: three new models with vshaped field potentials u are considered a complex scalar field x in dimensio... |
|
|
|
Domain: Computer Science |
|
Categories: cs.AR |
|
Abstract: this special session adresses the problems that designers face when implementing analog and digital ... |
|
|
|
Domain: Physics |
|
Categories: physics.plasm-ph |
|
Abstract: starting from the governing equations for a quantum magnetoplasma including the quantum bohm potenti... |
|
|
|
Domain: Chemistry |
|
Categories: nlin.CD |
|
Abstract: we present recent results on noiseinduced transitions in a nonlinear oscillator with randomly modula... |
|
|
|
Domain: Statistics |
|
Categories: stat.AP |
|
Abstract: in microarray technology a number of critical steps are required to convert the raw measurements int... |
|
|
|
Domain: Biology |
|
Categories: q-bio.MN |
|
Abstract: the architecture of biological networks has been reported to exhibit high level of modularity and to... |
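
Editor's note: this check prints one sample per domain so the category-to-domain assignment can be eyeballed; note that the "Chemistry" sample actually carries the nonlinear-dynamics category nlin.CD, which is worth double-checking. A hypothetical sketch of such a check; the mapping, field names, and helper are assumptions, not the script's actual code:

    # Hypothetical prefix-to-domain mapping for arXiv categories.
    DOMAIN_BY_PREFIX = {
        "math": "Mathematics", "cs": "Computer Science", "physics": "Physics",
        "stat": "Statistics", "q-bio": "Biology",
    }

    def check_label_assignment(dataset):
        """Print one (categories, abstract) sample per assigned domain."""
        seen = set()
        for ex in dataset:
            if ex["domain"] not in seen:
                seen.add(ex["domain"])
                print(f"Domain: {ex['domain']}")
                print(f"Categories: {ex['categories']}")
                print(f"Abstract: {ex['abstract'][:100]}...")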
|
/linkhome/rech/genrug01/uft12cr/.local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:2057: FutureWarning: Calling BertTokenizer.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymore in v5. Use a model identifier or the path to a directory instead.
|
warnings.warn( |
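
Editor's note: the warning fires because BertTokenizer.from_pretrained() was given the path to a single file (for example a bare vocab.txt). The replacement the message asks for is a model identifier or a directory; a sketch with placeholder paths:

    from transformers import BertTokenizer

    # Deprecated: pointing at a single vocabulary file.
    # tokenizer = BertTokenizer.from_pretrained("/path/to/vocab.txt")

    # Preferred: point at the directory that holds the tokenizer files.
    tokenizer = BertTokenizer.from_pretrained("/path/to/tokenizer_dir")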
|
|
|
Training with All Cluster tokenizer: |
|
Vocabulary size: 16005 |
|
Could not load pretrained weights from /linkhome/rech/genrug01/uft12cr/bert_Model. Starting with random weights. Error: It looks like the config file at |
|
Initialized model with vocabulary size: 16005 |
|
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:172: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
|
scaler = amp.GradScaler() |
|
Batch 0: |
|
input_ids shape: torch.Size([16, 256]) |
|
attention_mask shape: torch.Size([16, 256]) |
|
labels shape: torch.Size([16]) |
|
input_ids max value: 16003 |
|
Vocab size: 16005 |
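
Editor's note: these per-batch diagnostics guard the invariant input_ids.max() < vocab_size; an out-of-range id would index past the embedding table and typically surfaces as an opaque CUDA device-side assert. Here 16003 < 16005, so the batch is safe. A minimal sketch of the check, with names taken from the log:

    def check_batch(input_ids, attention_mask, labels, vocab_size):
        print(f"input_ids shape: {input_ids.shape}")
        print(f"attention_mask shape: {attention_mask.shape}")
        print(f"labels shape: {labels.shape}")
        print(f"input_ids max value: {input_ids.max().item()}")
        print(f"Vocab size: {vocab_size}")
        # Any id >= vocab_size would index past the embedding matrix.
        assert input_ids.max().item() < vocab_size, "token id out of range"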
|
/gpfsdswork/projects/rech/fmr/uft12cr/finetuneAli/OnlyGeneralTokenizer.py:192: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
|
with amp.autocast(): |
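
Editor's note: both FutureWarnings come from the old torch.cuda.amp namespace; PyTorch now routes mixed precision through the device-agnostic torch.amp API. A sketch of the replacement the messages point to; the GradScaler/autocast calls are the documented API, while the shape of the training step is an assumption:

    import torch

    def train_epoch(model, loader, optimizer):
        scaler = torch.amp.GradScaler("cuda")      # was: torch.cuda.amp.GradScaler()
        for batch in loader:
            optimizer.zero_grad()
            with torch.amp.autocast("cuda"):       # was: torch.cuda.amp.autocast()
                loss = model(**batch).loss
            scaler.scale(loss).backward()          # scale to avoid fp16 underflow
            scaler.step(optimizer)                 # unscales grads, then steps
            scaler.update()                        # adapt the scale factor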
|
[Batches 100-900: diagnostics identical every 100 batches; input_ids and attention_mask torch.Size([16, 256]), labels torch.Size([16]), input_ids max value 16003, Vocab size 16005.]
|
Epoch 1/3: |
|
Val Accuracy: 0.7549, Val F1: 0.7014 |
|
[Epoch 2 training: batch diagnostics at batches 0-900 identical to epoch 1 (max value 16003, Vocab size 16005); the torch.cuda.amp.autocast FutureWarning repeated.]
|
Epoch 2/3: |
|
Val Accuracy: 0.7937, Val F1: 0.7657 |
|
[Epoch 3 training: batch diagnostics at batches 0-900 identical to epoch 1; the autocast FutureWarning repeated.]
|
Epoch 3/3: |
|
Val Accuracy: 0.8065, Val F1: 0.7645 |
|
|
|
Test Results for All Cluster tokenizer: |
|
Accuracy: 0.8065 |
|
F1 Score: 0.7645 |
|
AUC-ROC: 0.8683 |
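
Editor's note: the log does not say how F1 is averaged or how the multiclass AUC-ROC is computed. A plausible sketch with scikit-learn, assuming macro-averaged F1 and one-vs-rest AUC over softmax probabilities:

    from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

    def test_metrics(y_true, y_pred, y_prob):
        """y_prob has shape (n_samples, n_classes), rows summing to 1."""
        return {
            "Accuracy": accuracy_score(y_true, y_pred),
            "F1 Score": f1_score(y_true, y_pred, average="macro"),
            "AUC-ROC": roc_auc_score(y_true, y_prob, multi_class="ovr"),
        }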
|
|
|
Training with Final tokenizer: |
|
Vocabulary size: 18524 |
|
Could not load pretrained weights from /linkhome/rech/genrug01/uft12cr/bert_Model. Starting with random weights. Error: It looks like the config file at |
|
Initialized model with vocabulary size: 18524 |
|
[The torch.cuda.amp GradScaler and autocast FutureWarnings repeated for this run; omitted here and below.]
|
Batch 0: |
|
input_ids shape: torch.Size([16, 256]) |
|
attention_mask shape: torch.Size([16, 256]) |
|
labels shape: torch.Size([16]) |
|
input_ids max value: 18523 |
|
Vocab size: 18524 |
|
|
[Batches 100-900: diagnostics identical every 100 batches; input_ids max value 18523, Vocab size 18524.]
|
Epoch 1/3: |
|
Val Accuracy: 0.6744, Val F1: 0.6438 |
|
[Epoch 2 training: batch diagnostics at batches 0-900 identical to epoch 1 (max value 18523, Vocab size 18524).]
|
Epoch 2/3: |
|
Val Accuracy: 0.7737, Val F1: 0.7343 |
|
[Epoch 3 training: batch diagnostics at batches 0-900 identical to epoch 1.]
|
Epoch 3/3: |
|
Val Accuracy: 0.7975, Val F1: 0.7612 |
|
|
|
Test Results for Final tokenizer: |
|
Accuracy: 0.7978 |
|
F1 Score: 0.7615 |
|
AUC-ROC: 0.8035 |
|
|
|
Training with General tokenizer: |
|
Vocabulary size: 30522 |
|
Could not load pretrained weights from /linkhome/rech/genrug01/uft12cr/bert_Model. Starting with random weights. Error: It looks like the config file at |
|
Initialized model with vocabulary size: 30522 |
|
[The torch.cuda.amp GradScaler and autocast FutureWarnings repeated for this run; omitted here and below.]
|
Batch 0: |
|
input_ids shape: torch.Size([16, 256]) |
|
attention_mask shape: torch.Size([16, 256]) |
|
labels shape: torch.Size([16]) |
|
input_ids max value: 29454 |
|
Vocab size: 30522 |
|
|
[Batches 100-900: shapes unchanged (16x256 inputs, 16 labels); input_ids max value varied between 29413 and 29561; Vocab size 30522.]
|
Epoch 1/3: |
|
Val Accuracy: 0.6932, Val F1: 0.6626 |
|
[Epoch 2 training: batch diagnostics at batches 0-900 unchanged in shape; input_ids max value varied between 29178 and 29545; Vocab size 30522; the autocast FutureWarning repeated.]
|
Epoch 2/3: |
|
Val Accuracy: 0.7860, Val F1: 0.7438 |
|
[Epoch 3 training: batch diagnostics at batches 0-900 unchanged in shape; input_ids max value varied between 29237 and 29605; Vocab size 30522; the autocast FutureWarning repeated.]
|
Epoch 3/3: |
|
Val Accuracy: 0.8062, Val F1: 0.7665 |
|
|
|
Test Results for General tokenizer: |
|
Accuracy: 0.8062 |
|
F1 Score: 0.7665 |
|
AUC-ROC: 0.8879 |
|
|
|
Summary of Results: |
|
|
|
All Cluster Tokenizer: |
|
Accuracy: 0.8065 |
|
F1 Score: 0.7645 |
|
AUC-ROC: 0.8683 |
|
|
|
Final Tokenizer: |
|
Accuracy: 0.7978 |
|
F1 Score: 0.7615 |
|
AUC-ROC: 0.8035 |
|
|
|
General Tokenizer: |
|
Accuracy: 0.8062 |
|
F1 Score: 0.7665 |
|
AUC-ROC: 0.8879 |
|
|
|
Class distribution in training set: |
|
Class Biology: 439 samples |
|
Class Chemistry: 454 samples |
|
Class Computer Science: 1358 samples |
|
Class Mathematics: 9480 samples |
|
Class Physics: 2733 samples |
|
Class Statistics: 200 samples |
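
Editor's note: the training set is heavily imbalanced (Mathematics has roughly 47x the samples of Statistics), which plausibly explains the macro F1 trailing accuracy throughout. One standard mitigation, shown here as a suggestion rather than something the script does, is inverse-frequency class weighting in the loss:

    import torch

    # Counts from the log; the order must match the label indices used in
    # training (alphabetical order is assumed here).
    counts = torch.tensor([439, 454, 1358, 9480, 2733, 200], dtype=torch.float)

    # Inverse-frequency weights, normalized so a balanced split gives 1.0.
    weights = counts.sum() / (len(counts) * counts)
    criterion = torch.nn.CrossEntropyLoss(weight=weights)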
|
|