bert-base-german-cased-defakts-fake-binary
This model is a fine-tuned version of bert-base-german-cased for sequence classification (binary fake-news detection) on the German DeFaktS dataset. It achieves the following results on the evaluation set:
- Loss: 0.3608
- Accuracy: 0.8531
- F1: 0.8392
- Precision: 0.8636
- Recall: 0.8283
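For reference, a minimal sketch of how such metrics can be computed during evaluation with scikit-learn; the "macro" averaging mode is an assumption, since the card does not state which mode was used:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Compute accuracy, F1, precision, and recall from model predictions."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # "macro" averaging is an assumption; the card does not state the mode.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```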
Model description
This model is bert-base-german-cased fine-tuned for binary sequence classification (fake vs. neutral) on German text.
Dataset
Trained on the DeFaktS dataset (https://github.com/caisa-lab/DeFaktS-Dataset-Disinformaton-Detection), using the catposfake/catneutral annotations as the binary fake-news target.
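A hypothetical sketch of deriving the binary target from those annotations; the file and column names here are assumptions, so check the linked repository for the actual layout:

```python
import pandas as pd

# Hypothetical file/column names; see the DeFaktS repository for the real layout.
df = pd.read_csv("defakts.csv")

# Binary target: catposfake -> 1 (fake), catneutral -> 0 (neutral).
df["label"] = (df["category"] == "catposfake").astype(int)
texts, labels = df["text"].tolist(), df["label"].tolist()
```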
Intended uses & limitations
Fake-news classification of German-language text.
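A minimal usage sketch with the transformers pipeline API; the hub id below is assumed from the model name above (prepend the owning namespace if needed), and the returned label names depend on the id2label mapping stored with the checkpoint:

```python
from transformers import pipeline

# Hub id assumed from the model name above; prepend the owning namespace.
classifier = pipeline(
    "text-classification",
    model="bert-base-german-cased-defakts-fake-binary",
)

# Example input; label names (e.g. LABEL_0/LABEL_1) depend on the
# id2label mapping saved with the checkpoint.
print(classifier("Die Regierung soll die wahren Zahlen verheimlicht haben."))
```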
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
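These hyperparameters map onto TrainingArguments roughly as follows; this is a sketch, not the exact training script: dataset preparation is omitted, and the 50-step evaluation interval is inferred from the results table below.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-german-cased", num_labels=2
)
tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")

training_args = TrainingArguments(
    output_dir="bert-base-german-cased-defakts-fake-binary",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    # Adam with betas=(0.9, 0.999) and eps=1e-8 is the Trainer default.
    lr_scheduler_type="linear",
    num_train_epochs=6,
    eval_strategy="steps",  # evaluate every 50 steps, as in the table below
    eval_steps=50,
)

trainer = Trainer(
    model=model,
    args=training_args,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,  # e.g. the sketch shown earlier
    train_dataset=train_dataset,      # assumed: tokenized DeFaktS train split
    eval_dataset=eval_dataset,        # assumed: tokenized DeFaktS eval split
)
trainer.train()
```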
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy@de | F1@de | Precision@de | Recall@de | Loss@de |
|---|---|---|---|---|---|---|---|---|
| 0.5531 | 0.0888 | 50 | 0.4191 | 0.8071 | 0.7958 | 0.7993 | 0.7931 | 0.4192 |
| 0.4341 | 0.1776 | 100 | 0.3987 | 0.8186 | 0.8132 | 0.8100 | 0.8198 | 0.3988 |
| 0.3854 | 0.2664 | 150 | 0.3816 | 0.8206 | 0.8100 | 0.8138 | 0.8071 | 0.3817 |
| 0.4028 | 0.3552 | 200 | 0.3703 | 0.8301 | 0.8221 | 0.8218 | 0.8223 | 0.3704 |
| 0.3784 | 0.4440 | 250 | 0.3750 | 0.8276 | 0.8065 | 0.8491 | 0.7933 | 0.3752 |
| 0.3622 | 0.5329 | 300 | 0.3465 | 0.8461 | 0.8389 | 0.8384 | 0.8393 | 0.3465 |
| 0.3945 | 0.6217 | 350 | 0.4596 | 0.7706 | 0.7704 | 0.7934 | 0.8006 | 0.4595 |
| 0.4073 | 0.7105 | 400 | 0.3360 | 0.8531 | 0.8419 | 0.8549 | 0.8343 | 0.3361 |
| 0.3779 | 0.7993 | 450 | 0.3440 | 0.8451 | 0.8399 | 0.8366 | 0.8455 | 0.3441 |
| 0.3596 | 0.8881 | 500 | 0.3608 | 0.8531 | 0.8392 | 0.8636 | 0.8283 | 0.3610 |
| 0.3588 | 0.9769 | 550 | 0.3468 | 0.8516 | 0.8375 | 0.8620 | 0.8266 | 0.3468 |
| 0.287 | 1.0657 | 600 | 0.3416 | 0.8591 | 0.8527 | 0.8517 | 0.8539 | 0.3416 |
| 0.2395 | 1.1545 | 650 | 0.3976 | 0.8531 | 0.8419 | 0.8547 | 0.8345 | 0.3977 |
| 0.2278 | 1.2433 | 700 | 0.3635 | 0.8441 | 0.8387 | 0.8355 | 0.8438 | 0.3635 |
| 0.2495 | 1.3321 | 750 | 0.3294 | 0.8581 | 0.8518 | 0.8506 | 0.8530 | 0.3294 |
| 0.2455 | 1.4210 | 800 | 0.3448 | 0.8581 | 0.8516 | 0.8507 | 0.8526 | 0.3448 |
| 0.2472 | 1.5098 | 850 | 0.3743 | 0.8626 | 0.8527 | 0.8635 | 0.8460 | 0.3745 |
Framework versions
- Transformers 4.45.2
- PyTorch 2.3.1+cu121
- Tokenizers 0.20.3