This model is a fine-tuned version of bert-base-uncased with 4 layers, trained on the NLT dataset. It achieves the following results on the evaluation set:
- Precision: 0.9804255319148936
- Recall: 0.9888412017167382
- F1: 0.9846153846153847
- Accuracy: 0.9849498327759197
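The per-token precision/recall/F1 metrics suggest a token-classification head; below is a minimal inference sketch using the transformers pipeline, assuming the checkpoint is published under the hypothetical Hub id `your-username/bert-base-uncased-4layer-nlt` (replace with the actual path):

```python
from transformers import pipeline

# Hypothetical model id; substitute the real checkpoint path or Hub id.
classifier = pipeline(
    "token-classification",
    model="your-username/bert-base-uncased-4layer-nlt",
    aggregation_strategy="simple",  # merge sub-word tokens into word-level predictions
)

print(classifier("The quick brown fox jumps over the lazy dog."))
```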
Training hyperparameters:

The following hyperparameters were used during training:
- learning_rate: 1e-4
- train_batch_size: 8
- eval_batch_size: 8
- optimizer: AdamW with betas=(0.9, 0.999) and epsilon=1e-08
- weight_decay: 0.01
- lr_scheduler_type: linear
- num_epochs: 3
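A sketch of how these hyperparameters map onto transformers `TrainingArguments`; AdamW with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default, so it does not need to be set explicitly. The output directory, model, and datasets are placeholders:

```python
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="bert-base-uncased-4layer-nlt",  # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    weight_decay=0.01,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```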
The model achieves the following results on the test sets:
| Test set | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|
| Incorrect UD Padded | 0.6703910614525139 | 0.32498307379823965 | 0.43775649794801635 | 0.4777636594663278 |
| Incorrect UD Unigram | 0.6460176991150443 | 0.3953960731211916 | 0.4905501889962201 | 0.4862346463362982 |
| Incorrect UD Bigram | 0.6617647058823529 | 0.33513879485443465 | 0.44494382022471907 | 0.4769165607793308 |
| Incorrect UD All | 0.5714285714285714 | 0.002708192281651997 | 0.005390835579514824 | 0.3748411689961881 |
| Incorrect Sentence | 0.6274149034038639 | 0.923493568043331 | 0.7471925499863051 | 0.6090639559508683 |
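Metrics in this form (overall precision, recall, F1, and accuracy) can be computed with the seqeval wrapper from the evaluate library, which returns them under matching keys. A minimal sketch; the gold and predicted tag sequences are illustrative placeholders, not the NLT data:

```python
import evaluate

seqeval = evaluate.load("seqeval")

# Placeholder gold and predicted tag sequences in IOB format.
references = [["B-PER", "O", "B-LOC"]]
predictions = [["B-PER", "O", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
print({
    "precision": results["overall_precision"],
    "recall": results["overall_recall"],
    "f1": results["overall_f1"],
    "accuracy": results["overall_accuracy"],
})
```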