Datasets:
dainis-boumber
committed
Commit • 941f894
Parent(s): 38e9858
Delete twitter_rumours/README.md
twitter_rumours/README.md
DELETED
@@ -1,22 +0,0 @@
# Rumors dataset

This deception dataset was created from the PHEME dataset of rumours and non-rumours:

https://figshare.com/articles/dataset/PHEME_dataset_of_rumours_and_non-rumours/4010619/1

We took source tweets only and ignored the replies to them. Each source tweet's rumour or non-rumour label was used to label it as deceptive or non-deceptive, respectively.
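A minimal sketch of that mapping, assuming the PHEME archive is extracted into a `pheme-rnr-dataset/` directory with its usual `<event>/{rumours,non-rumours}/<tweet_id>/source-tweet/` layout; the paths and field names here are illustrative assumptions, not the exact script used to build this dataset:

```python
import json
from pathlib import Path

# Assumed location and layout of the extracted PHEME archive.
PHEME_ROOT = Path("pheme-rnr-dataset")

samples = []
for label_dir, is_deceptive in [("rumours", 1), ("non-rumours", 0)]:
    # Only source tweets are used; reply directories are ignored.
    for tweet_file in PHEME_ROOT.glob(f"*/{label_dir}/*/source-tweet/*.json"):
        tweet = json.loads(tweet_file.read_text(encoding="utf-8"))
        samples.append({"text": tweet["text"], "is_deceptive": is_deceptive})
```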
## Cleaning

The dataset was cleaned using cleanlab, with visual inspection of the problems it flagged; no label issues were identified. Duplicate entries, as well as entries shorter than 2 characters or longer than 1,000,000 characters, were removed.
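A sketch of the de-duplication and length filtering described above (the cleanlab pass additionally needs out-of-sample predicted probabilities from a model and is omitted; the thresholds match the text, the rest is illustrative):

```python
def basic_clean(samples):
    """Drop exact duplicates and texts outside the 2..1,000,000 character range."""
    seen = set()
    cleaned = []
    for s in samples:
        text = s["text"]
        if not (2 <= len(text) <= 1_000_000):
            continue  # too short or too long
        if text in seen:
            continue  # exact duplicate
        seen.add(text)
        cleaned.append(s)
    return cleaned
```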
## Preprocessing

Whitespace, quotes, bullet points, and Unicode are normalized.
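The exact normalization rules are not documented, so the following is only a sketch of the kind of preprocessing meant here, using Python's standard library:

```python
import re
import unicodedata

def normalize(text: str) -> str:
    # Unicode normalization folds compatibility characters (e.g. fullwidth forms).
    text = unicodedata.normalize("NFKC", text)
    # Map curly quotes to plain ASCII quotes.
    text = text.translate(str.maketrans({"\u2018": "'", "\u2019": "'",
                                         "\u201c": '"', "\u201d": '"'}))
    # Replace bullet points with a plain dash.
    text = text.replace("\u2022", "-")
    # Collapse runs of whitespace.
    return re.sub(r"\s+", " ", text).strip()
```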
## Data

The dataset consists of "text" (string) and "is_deceptive" (1 or 0). 1 means the text is deceptive, 0 indicates otherwise.
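Each line of the JSONL files is one sample with exactly these two fields; for example:

```python
import json

with open("tweeter_rumours.jsonl", encoding="utf-8") as f:
    data = [json.loads(line) for line in f]

assert set(data[0]) == {"text", "is_deceptive"}
print(len(data), "samples,", sum(d["is_deceptive"] for d in data), "deceptive")
```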
There are 5789 samples in the dataset, contained in `tweeter_rumours.jsonl`. For reproducibility, the data is also split into training, test, and validation sets in an 80/10/10 ratio, named `train.jsonl`, `test.jsonl`, and `valid.jsonl`. The sampling was stratified by label. The training set contains 4631 samples; the validation and test sets have 579 samples each.
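These sizes are consistent with a stratified 80/10/10 split. A sketch of how such a split could be reproduced with scikit-learn; the seed used for the released files is not recorded, so the exact membership may differ:

```python
import json
from sklearn.model_selection import train_test_split

with open("tweeter_rumours.jsonl", encoding="utf-8") as f:
    data = [json.loads(line) for line in f]

labels = [d["is_deceptive"] for d in data]
# Hold out 20%, stratified by label, for validation + test.
train, rest = train_test_split(data, test_size=0.2, stratify=labels, random_state=0)
# Split the held-out 20% evenly into validation and test sets.
rest_labels = [d["is_deceptive"] for d in rest]
valid, test = train_test_split(rest, test_size=0.5, stratify=rest_labels, random_state=0)
print(len(train), len(valid), len(test))  # expected: 4631 579 579
```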