# Fake News

We post-process and split the Fake News dataset to ensure uniformity with Political Statements 2.0 and Twitter Rumours, as all three go into forming GDDS-2.0.

## Cleaning

Each dataset has been cleaned using Cleanlab. Non-English entries, erroneous (parser-error) entries, empty entries, duplicate entries, and entries shorter than 2 characters or longer than 1,000,000 characters were all removed.
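
For illustration, the rule-based part of this filtering (length, emptiness, and duplicate checks) can be sketched as follows. This is not the actual cleaning script: `records` is assumed to be a list of dicts with a `"text"` field, and the non-English and Cleanlab steps are not shown.

```python
# Minimal sketch of the rule-based filters described above (an illustration,
# not the authors' script). Language filtering and Cleanlab-based cleaning
# are separate steps and are not shown here.

def basic_filters(records):
    """Keep records whose "text" passes the length, emptiness, and duplicate checks."""
    seen = set()
    kept = []
    for r in records:
        text = r.get("text")
        if not isinstance(text, str):                # erroneous / parser-error entry
            continue
        if len(text) < 2 or len(text) > 1_000_000:   # empty, too short, or too long
            continue
        if text in seen:                             # duplicate entry
            continue
        seen.add(text)
        kept.append(r)
    return kept
```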

## Preprocessing

Whitespace, quotes, bullet points, and Unicode are normalized.
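
A rough sketch of such normalization is shown below; the specific replacement rules are assumptions for illustration, since the exact rules used are not listed here.

```python
import re
import unicodedata

def normalize(text: str) -> str:
    """Illustrative normalization; the exact rules used for GDDS-2.0 may differ."""
    text = unicodedata.normalize("NFKC", text)                            # Unicode normalization
    text = text.translate(str.maketrans({"\u201c": '"', "\u201d": '"',
                                         "\u2018": "'", "\u2019": "'"}))  # curly quotes
    text = re.sub(r"^\s*[\u2022\u25cf*-]+\s*", "", text, flags=re.M)      # leading bullet points
    text = re.sub(r"\s+", " ", text).strip()                              # collapse whitespace
    return text
```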

## Data

Each sample consists of "text" (string) and "is_deceptive" (1 or 0), where 1 means the text is deceptive and 0 indicates otherwise.

There are 20456 samples in the dataset, contained in phishing.jsonl. For reproducibility, the data is also split into training, test, and validation sets in an 80/10/10 ratio, named train.jsonl, test.jsonl, and valid.jsonl. The sampling process was stratified. The training set contains 16364 samples; the validation and test sets have 2046 and 2046 samples, respectively.
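
For example, the splits can be read with nothing but the standard library (a small sketch, assuming the JSONL files are in the current directory):

```python
import json

def load_jsonl(path):
    """Read one split (e.g. train.jsonl) into a list of {"text", "is_deceptive"} dicts."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

train, valid, test = (load_jsonl(p) for p in ("train.jsonl", "valid.jsonl", "test.jsonl"))
print(len(train), len(valid), len(test))
print(train[0]["is_deceptive"], train[0]["text"][:80])
```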