# Rumors dataset

This deception dataset was created from the PHEME dataset of rumours and non-rumours:

https://figshare.com/articles/dataset/PHEME_dataset_of_rumours_and_non-rumours/4010619/1

We took only the source tweets and ignored the replies to them. Each source tweet's rumour/non-rumour label was used to label it as deceptive or non-deceptive.
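
The extraction code itself is not included here. The snippet below is a minimal sketch of how the source tweets can be pulled out of the figshare archive, assuming its usual layout of per-event folders with `rumours/` and `non-rumours/` subdirectories, each containing one folder per thread with a `source-tweet/` JSON file; verify the folder names against the downloaded archive.

```python
import json
from pathlib import Path

def extract_source_tweets(pheme_root: str):
    """Walk the PHEME archive and collect (text, is_deceptive) samples.

    Assumes the layout <event>/<rumours|non-rumours>/<thread_id>/source-tweet/<id>.json;
    check the actual archive structure before relying on this.
    """
    samples = []
    for label_dir, is_deceptive in (("rumours", 1), ("non-rumours", 0)):
        for src in Path(pheme_root).glob(f"*/{label_dir}/*/source-tweet/*.json"):
            with open(src, encoding="utf-8") as f:
                tweet = json.load(f)
            samples.append({"text": tweet["text"], "is_deceptive": is_deceptive})
    return samples
```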

## Cleaning

The dataset has been cleaned using cleanlab, with visual inspection of the problems it flagged; no issues were identified. Duplicate entries, as well as entries shorter than 2 characters or longer than 1,000,000 characters, were removed.
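
As an illustration of these rules, the sketch below deduplicates on the raw text, applies the length bounds, and runs a cleanlab label-quality check via `cleanlab.filter.find_label_issues`. The TF-IDF/logistic-regression model used to produce the predicted probabilities is an assumption for the sketch, not necessarily the model used for the original check.

```python
import numpy as np
from cleanlab.filter import find_label_issues
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

def filter_samples(samples):
    """Drop exact duplicates and texts shorter than 2 or longer than 1,000,000 characters."""
    seen, kept = set(), []
    for s in samples:
        text = s["text"]
        if text in seen or not (2 <= len(text) <= 1_000_000):
            continue
        seen.add(text)
        kept.append(s)
    return kept

def label_issue_mask(samples):
    """Flag samples whose labels look suspicious, for visual inspection."""
    texts = [s["text"] for s in samples]
    labels = np.array([s["is_deceptive"] for s in samples])
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    pred_probs = cross_val_predict(clf, texts, labels, cv=5, method="predict_proba")
    return find_label_issues(labels=labels, pred_probs=pred_probs)
```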

## Preprocessing

Whitespace, quotes, bullet points, and Unicode are normalized.
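
The exact normalization rules are not spelled out above; the function below is a sketch of what such a pass typically looks like (NFKC Unicode folding, ASCII-fied quotes, stripped leading bullet points, collapsed whitespace) and may not match the original preprocessing exactly.

```python
import re
import unicodedata

def normalize_text(text: str) -> str:
    """Sketch of whitespace, quote, bullet-point, and Unicode normalization."""
    text = unicodedata.normalize("NFKC", text)              # fold compatibility characters
    text = re.sub("[\u2018\u2019]", "'", text)               # curly single quotes -> ASCII
    text = re.sub("[\u201c\u201d]", '"', text)               # curly double quotes -> ASCII
    text = re.sub(r"^[\u2022\u2023\u25e6*-]\s*", "", text,   # drop leading bullet markers
                  flags=re.MULTILINE)
    text = re.sub(r"\s+", " ", text).strip()                 # collapse whitespace
    return text
```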

## Data

Each sample consists of "text" (string) and "is_deceptive" (1 or 0), where 1 means the text is deceptive and 0 indicates otherwise.

There are 5789 samples in the dataset, contained in `tweeter_rumours.jsonl`. For reproducibility, the data is also split into training, test, and validation sets in an 80/10/10 ratio, named `train.jsonl`, `test.jsonl`, and `valid.jsonl`. The split was stratified by label. The training set contains 4631 samples; the validation and test sets have 579 samples each.
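
For reference, an equivalent stratified 80/10/10 split can be produced with scikit-learn as sketched below; the `random_state` is illustrative, so this will not reproduce the exact files shipped with the dataset.

```python
import json
from sklearn.model_selection import train_test_split

with open("tweeter_rumours.jsonl", encoding="utf-8") as f:
    samples = [json.loads(line) for line in f]
labels = [s["is_deceptive"] for s in samples]

# 80% train, 20% held out; then split the held-out half into validation and test.
train, rest = train_test_split(samples, test_size=0.2, stratify=labels, random_state=0)
rest_labels = [s["is_deceptive"] for s in rest]
valid, test = train_test_split(rest, test_size=0.5, stratify=rest_labels, random_state=0)

for name, part in (("train", train), ("valid", valid), ("test", test)):
    with open(f"{name}.jsonl", "w", encoding="utf-8") as f:
        for s in part:
            f.write(json.dumps(s) + "\n")
```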