Update README.md

---
license: mit
tags:
- text-classification
- PyTorch
- Transformers
---

# fakeBert

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on a [news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset) from Kaggle.

## Model description

Fine-tuning BERT for text classification of fake vs. real news.
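
For inference, a minimal sketch along these lines should work, assuming the fine-tuned weights are published on the Hub and that `transformers` and PyTorch are installed; the repo id `your-username/fakeBert` below is a placeholder, not the actual location of the weights:

```python
# Inference sketch: "your-username/fakeBert" is a hypothetical repo id.
from transformers import pipeline

# The text-classification pipeline handles tokenization, the forward pass,
# and softmax over the two classes.
classifier = pipeline("text-classification", model="your-username/fakeBert")

print(classifier("Scientists confirm that drinking coffee makes you immortal."))
# e.g. [{'label': 'LABEL_1', 'score': 0.97}] -- the label-to-class mapping depends on training
```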

## Training and evaluation data

Training & Validation: [Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)

Testing: [Fake News Detection Challenge KDD 2020](https://www.kaggle.com/competitions/fakenewskdd2020/overview)
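
One possible way to turn the Kaggle CSVs into labeled training text is sketched below; the file names (`Fake.csv`, `True.csv`) and column names (`title`, `text`) are assumptions about the dataset's layout, so adjust them to the actual download:

```python
# Data-prep sketch: builds (text, label) pairs from the Kaggle CSVs.
# "Fake.csv"/"True.csv" and the "title"/"text" columns are assumed names.
import pandas as pd

fake = pd.read_csv("Fake.csv")
real = pd.read_csv("True.csv")

fake["label"] = 0   # fake news
real["label"] = 1   # real news

# Concatenate and shuffle before splitting into train/validation sets.
df = pd.concat([fake, real], ignore_index=True).sample(frac=1.0, random_state=42)
texts = (df["title"] + " " + df["text"]).tolist()
labels = df["label"].tolist()
```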

### Training hyperparameters

The following hyperparameters were used during training (a fine-tuning sketch using them follows the list):
- learning_rate: 1e-5
- train_batch_size: 16
- eval_batch_size: 16
- optimizer: AdamW
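
As a rough illustration of how these values plug into a PyTorch fine-tuning loop (not the exact training script; the placeholder data, label mapping, and single-epoch loop are assumptions):

```python
# Fine-tuning sketch using the listed hyperparameters.
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["a real news headline", "a fake news headline"]   # placeholder data
labels = torch.tensor([1, 0])                               # assumed mapping: real=1, fake=0

enc = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"], labels),
                    batch_size=16, shuffle=True)             # train_batch_size: 16

optimizer = AdamW(model.parameters(), lr=1e-5)               # optimizer: AdamW, learning_rate: 1e-5

model.train()
for input_ids, attention_mask, y in loader:
    optimizer.zero_grad()
    out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
    out.loss.backward()
    optimizer.step()
```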