yoda-ner / training.log
2022-10-04 14:07:15,489 ----------------------------------------------------------------------------------------------------
2022-10-04 14:07:15,492 Model: "SequenceTagger(
  (embeddings): TransformerWordEmbeddings(
    (model): BertModel(
      (embeddings): BertEmbeddings(
        (word_embeddings): Embedding(119547, 768, padding_idx=0)
        (position_embeddings): Embedding(512, 768)
        (token_type_embeddings): Embedding(2, 768)
        (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
        (dropout): Dropout(p=0.1, inplace=False)
      )
      (encoder): BertEncoder(
        (layer): ModuleList(
          (0-11): 12 x BertLayer(
            (attention): BertAttention(
              (self): BertSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): BertSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): BertIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): BertOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
        )
      )
      (pooler): BertPooler(
        (dense): Linear(in_features=768, out_features=768, bias=True)
        (activation): Tanh()
      )
    )
  )
  (dropout): Dropout(p=0.3, inplace=False)
  (word_dropout): WordDropout(p=0.05)
  (locked_dropout): LockedDropout(p=0.5)
  (linear): Linear(in_features=768, out_features=13, bias=True)
  (loss_function): CrossEntropyLoss()
)"
2022-10-04 14:07:15,510 ----------------------------------------------------------------------------------------------------
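The module tree above is a transformer-based Flair SequenceTagger with no RNN and no CRF (a single Linear layer over 768-dim embeddings plus CrossEntropyLoss, with the extra dropout variants Flair applies on top). A minimal reconstruction sketch, assuming the checkpoint is bert-base-multilingual-cased (inferred from the 119547-entry word-embedding table; the log does not name the model) and a label type of "ner":

from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger

# Checkpoint name is an assumption; the log only shows the module shapes.
embeddings = TransformerWordEmbeddings("bert-base-multilingual-cased")

tagger = SequenceTagger(
    hidden_size=256,                # unused when use_rnn=False
    embeddings=embeddings,
    tag_dictionary=tag_dictionary,  # 13 BIOES tags over size/brand/color; built from the corpus (see sketch below)
    tag_type="ner",                 # label type name is an assumption
    use_rnn=False,                  # no LSTM appears in the printed module tree
    use_crf=False,                  # plain CrossEntropyLoss, no CRF layer
    dropout=0.3,                    # matches (dropout): Dropout(p=0.3)
    word_dropout=0.05,              # matches WordDropout(p=0.05)
    locked_dropout=0.5,             # matches LockedDropout(p=0.5)
)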
2022-10-04 14:07:15,510 Corpus: "Corpus: 70000 train + 15000 dev + 15000 test sentences"
2022-10-04 14:07:15,510 ----------------------------------------------------------------------------------------------------
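The corpus itself is not described in the log beyond its split sizes. A typical Flair setup for a split of this shape is a ColumnCorpus over token-per-line files; the folder, file names, and column layout below are hypothetical:

from flair.datasets import ColumnCorpus

# Hypothetical file layout; only the split sizes are known from the log.
corpus = ColumnCorpus(
    "data/",
    column_format={0: "text", 1: "ner"},
    train_file="train.txt",  # 70000 sentences
    dev_file="dev.txt",      # 15000 sentences
    test_file="test.txt",    # 15000 sentences
)
tag_dictionary = corpus.make_label_dictionary(label_type="ner")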
2022-10-04 14:07:15,511 Parameters:
2022-10-04 14:07:15,511 - learning_rate: "0.010000"
2022-10-04 14:07:15,511 - mini_batch_size: "8"
2022-10-04 14:07:15,511 - patience: "3"
2022-10-04 14:07:15,512 - anneal_factor: "0.5"
2022-10-04 14:07:15,512 - max_epochs: "2"
2022-10-04 14:07:15,512 - shuffle: "True"
2022-10-04 14:07:15,512 - train_with_dev: "False"
2022-10-04 14:07:15,513 - batch_growth_annealing: "False"
2022-10-04 14:07:15,513 ----------------------------------------------------------------------------------------------------
2022-10-04 14:07:15,513 Model training base path: "c:\Users\Ivan\Documents\Projects\Yoda\NER\model\flair\src\..\models\trans_sm_flair"
2022-10-04 14:07:15,513 ----------------------------------------------------------------------------------------------------
2022-10-04 14:07:15,513 Device: cuda:0
2022-10-04 14:07:15,514 ----------------------------------------------------------------------------------------------------
2022-10-04 14:07:15,514 Embeddings storage mode: cpu
2022-10-04 14:07:15,514 ----------------------------------------------------------------------------------------------------
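Taken together, the hyperparameters above map onto a standard ModelTrainer.train() call. A sketch assuming the classic SGD-based trainer API (the actual training script is not part of this log):

from flair.trainers import ModelTrainer

trainer = ModelTrainer(tagger, corpus)
trainer.train(
    "models/trans_sm_flair",        # base path as in the log (machine-local prefix elided)
    learning_rate=0.01,
    mini_batch_size=8,
    max_epochs=2,
    patience=3,
    anneal_factor=0.5,
    shuffle=True,
    train_with_dev=False,
    embeddings_storage_mode="cpu",  # "Embeddings storage mode: cpu" above
)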
2022-10-04 14:08:50,056 epoch 1 - iter 875/8750 - loss 0.77736243 - samples/sec: 74.10 - lr: 0.010000
2022-10-04 14:10:25,613 epoch 1 - iter 1750/8750 - loss 0.58654474 - samples/sec: 73.31 - lr: 0.010000
2022-10-04 14:12:00,221 epoch 1 - iter 2625/8750 - loss 0.49473747 - samples/sec: 74.05 - lr: 0.010000
2022-10-04 14:13:35,035 epoch 1 - iter 3500/8750 - loss 0.43711232 - samples/sec: 73.87 - lr: 0.010000
2022-10-04 14:15:08,344 epoch 1 - iter 4375/8750 - loss 0.39713865 - samples/sec: 75.06 - lr: 0.010000
2022-10-04 14:16:41,989 epoch 1 - iter 5250/8750 - loss 0.36731971 - samples/sec: 74.80 - lr: 0.010000
2022-10-04 14:18:17,847 epoch 1 - iter 6125/8750 - loss 0.34209381 - samples/sec: 73.07 - lr: 0.010000
2022-10-04 14:19:52,115 epoch 1 - iter 7000/8750 - loss 0.32256861 - samples/sec: 74.30 - lr: 0.010000
2022-10-04 14:21:26,066 epoch 1 - iter 7875/8750 - loss 0.30596431 - samples/sec: 74.55 - lr: 0.010000
2022-10-04 14:23:00,059 epoch 1 - iter 8750/8750 - loss 0.29124524 - samples/sec: 74.51 - lr: 0.010000
2022-10-04 14:23:00,061 ----------------------------------------------------------------------------------------------------
2022-10-04 14:23:00,062 EPOCH 1 done: loss 0.2912 - lr 0.010000
2022-10-04 14:24:52,210 Evaluating as a multi-label problem: False
2022-10-04 14:24:52,424 DEV : loss 0.06397613137960434 - f1-score (micro avg) 0.973
2022-10-04 14:24:53,223 BAD EPOCHS (no improvement): 0
2022-10-04 14:24:54,431 saving best model
2022-10-04 14:24:55,749 ----------------------------------------------------------------------------------------------------
2022-10-04 14:26:31,875 epoch 2 - iter 875/8750 - loss 0.15239591 - samples/sec: 72.88 - lr: 0.010000
2022-10-04 14:28:12,311 epoch 2 - iter 1750/8750 - loss 0.15109719 - samples/sec: 69.74 - lr: 0.010000
2022-10-04 14:29:49,414 epoch 2 - iter 2625/8750 - loss 0.15017726 - samples/sec: 72.14 - lr: 0.010000
2022-10-04 14:31:22,789 epoch 2 - iter 3500/8750 - loss 0.14709937 - samples/sec: 75.01 - lr: 0.010000
2022-10-04 14:32:56,365 epoch 2 - iter 4375/8750 - loss 0.14490590 - samples/sec: 74.87 - lr: 0.010000
2022-10-04 14:34:29,769 epoch 2 - iter 5250/8750 - loss 0.14379219 - samples/sec: 75.00 - lr: 0.010000
2022-10-04 14:36:04,122 epoch 2 - iter 6125/8750 - loss 0.14272196 - samples/sec: 74.24 - lr: 0.010000
2022-10-04 14:37:40,084 epoch 2 - iter 7000/8750 - loss 0.14024151 - samples/sec: 73.00 - lr: 0.010000
2022-10-04 14:39:15,077 epoch 2 - iter 7875/8750 - loss 0.13892120 - samples/sec: 73.73 - lr: 0.010000
2022-10-04 14:40:48,611 epoch 2 - iter 8750/8750 - loss 0.13731836 - samples/sec: 74.89 - lr: 0.010000
2022-10-04 14:40:48,617 ----------------------------------------------------------------------------------------------------
2022-10-04 14:40:48,617 EPOCH 2 done: loss 0.1373 - lr 0.010000
2022-10-04 14:42:50,048 Evaluating as a multi-label problem: False
2022-10-04 14:42:50,277 DEV : loss 0.05747831612825394 - f1-score (micro avg) 0.9844
2022-10-04 14:42:51,053 BAD EPOCHS (no improvement): 0
2022-10-04 14:42:52,333 saving best model
2022-10-04 14:42:54,576 ----------------------------------------------------------------------------------------------------
2022-10-04 14:42:54,600 loading file c:\Users\Ivan\Documents\Projects\Yoda\NER\model\flair\src\..\models\trans_sm_flair\best-model.pt
2022-10-04 14:42:57,086 SequenceTagger predicts: Dictionary with 13 tags: O, S-size, B-size, E-size, I-size, S-brand, B-brand, E-brand, I-brand, S-color, B-color, E-color, I-color
2022-10-04 14:44:29,459 Evaluating as a multi-label problem: False
2022-10-04 14:44:29,668 0.9816 0.9857 0.9837 0.9679
2022-10-04 14:44:29,669
Results:
- F-score (micro) 0.9837
- F-score (macro) 0.9843
- Accuracy 0.9679
By class:
              precision    recall  f1-score   support

        size     0.9820    0.9859    0.9839     17988
       brand     0.9773    0.9860    0.9817     11674
       color     0.9905    0.9840    0.9872      5070

   micro avg     0.9816    0.9857    0.9837     34732
   macro avg     0.9833    0.9853    0.9843     34732
weighted avg     0.9816    0.9857    0.9837     34732
2022-10-04 14:44:29,670 ----------------------------------------------------------------------------------------------------
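For completeness, loading the saved best-model.pt and tagging text follows the usual Flair pattern; the example sentence and its entities are invented:

from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("models/trans_sm_flair/best-model.pt")

sentence = Sentence("Nike running shoes, size 42, dark blue")
tagger.predict(sentence)
for span in sentence.get_spans("ner"):  # "ner" label type is an assumption
    print(span)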