# LIAR to Political Statements 2.0
This notebook contains the code for alternative conversion(s) of the LIAR dataset to the Political Statements dataset.
## Labeling
The primary difference is the change in the re-labeling scheme when converting the task from multiclass to binary.
### Old scheme
We use the claim field as the text and map the labels “pants-fire,” “false,”
and “barely-true” to deceptive, and “half-true,” “mostly-true,” and “true”
to non-deceptive, resulting in 5,669 deceptive and 7,167 non-deceptive
statements.
### New scheme
Following
*Upadhayay, B., Behzadan, V.: "Sentimental liar: Extended corpus and deep learning models for fake claim classification" (2020)*
and
*Shahriar, Sadat, Arjun Mukherjee, and Omprakash Gnawali. "Deception Detection with Feature-Augmentation by Soft Domain Transfer." International Conference on Social Informatics. Cham: Springer International Publishing, 2022.*
we map the labels “pants-fire,” “false,”
“barely-true,” **and “half-true”** to deceptive; the labels "mostly-true" and "true" are mapped to non-deceptive. Statements that are only half-true are now considered deceptive, making the criterion for a statement being non-deceptive stricter: now 2 out of 6 labels map to non-deceptive and 4 map to deceptive.
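As a minimal sketch of the new scheme (the function name, set names, and error handling here are illustrative, not the actual conversion code):

```python
# Sketch of the new binary re-labeling scheme described above.
# The raw strings are the six LIAR labels; the mapping follows the
# new scheme: four labels -> deceptive, two -> non-deceptive.
DECEPTIVE = {"pants-fire", "false", "barely-true", "half-true"}
NON_DECEPTIVE = {"mostly-true", "true"}

def to_binary(liar_label: str) -> int:
    """Map a six-way LIAR label to the binary is_deceptive label."""
    label = liar_label.strip().lower()
    if label in DECEPTIVE:
        return 1  # deceptive
    if label in NON_DECEPTIVE:
        return 0  # non-deceptive
    raise ValueError(f"unknown LIAR label: {liar_label!r}")
```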
## Cleaning
The dataset has been cleaned using cleanlab, with visual inspection of the problems it found. Partial sentences, such as "On Iran nuclear deal" or "On inflation", were removed. Texts with a large number of parser-induced errors were also removed, as were statements in a language other than English (namely, Spanish). Sequences with Unicode errors, sequences containing fewer than one character, and sequences over one million characters were removed.
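The mechanical part of these criteria (length and Unicode-error checks) can be sketched as below; this is an assumed reconstruction, and the cleanlab-based removal of partial sentences, parser errors, and non-English text required model predictions plus visual inspection and is not reproduced here.

```python
# U+FFFD commonly appears in text where a decoding error occurred upstream.
UNICODE_REPLACEMENT = "\ufffd"

def passes_basic_filters(text: str) -> bool:
    """Illustrative filter for the length and Unicode-error criteria above."""
    if UNICODE_REPLACEMENT in text:
        return False  # likely a Unicode error
    # keep at least one character and at most one million characters
    return 1 <= len(text) <= 1_000_000
```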
## Preprocessing
Whitespace, quotes, bullet points, and Unicode are normalized.
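A sketch of such normalization, assuming NFKC Unicode normalization, straightening of curly quotes, bullet removal, and whitespace collapsing; the exact rules applied to the dataset may differ:

```python
import re
import unicodedata

def normalize(text: str) -> str:
    """Illustrative text normalization for the preprocessing step above."""
    text = unicodedata.normalize("NFKC", text)          # normalize Unicode
    text = text.replace("\u201c", '"').replace("\u201d", '"')  # curly double quotes
    text = text.replace("\u2018", "'").replace("\u2019", "'")  # curly single quotes
    text = text.replace("\u2022", " ")                  # bullet point
    text = re.sub(r"\s+", " ", text).strip()            # collapse whitespace
    return text
```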
## Data
The dataset consists of "text" (string) and "is_deceptive" (1 or 0), where 1 means the text is deceptive and 0 indicates otherwise.
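For illustration, a single JSONL line in this format can be parsed as follows; the statement text here is invented, not an actual sample from the dataset:

```python
import json

# Hypothetical record in the format described above (invented text).
line = '{"text": "Says the state budget doubled over the last decade.", "is_deceptive": 1}'
record = json.loads(line)
assert set(record) == {"text", "is_deceptive"}
```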
There are 12,497 samples in the dataset, contained in `political_statements.jsonl`. For reproducibility, the data is also split into training, validation, and test sets in an 80/10/10 ratio, named `train.jsonl`, `valid.jsonl`, and `test.jsonl`. The sampling process was stratified. The training set contains 9,997 samples; the validation and test sets have 1,250 samples each.
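The stratified 80/10/10 design can be made concrete with the following sketch; this is an assumed reimplementation (the original split may have used a library routine such as scikit-learn's stratified splitters), not the actual split code:

```python
import random
from collections import defaultdict

def stratified_split(records, ratios=(0.8, 0.1, 0.1), seed=0):
    """Split records into train/valid/test, preserving the is_deceptive
    label ratio within each split (illustrative reimplementation)."""
    by_label = defaultdict(list)
    for r in records:
        by_label[r["is_deceptive"]].append(r)
    rng = random.Random(seed)
    train, valid, test = [], [], []
    for group in by_label.values():
        rng.shuffle(group)
        n_train = int(len(group) * ratios[0])
        n_valid = int(len(group) * ratios[1])
        train.extend(group[:n_train])
        valid.extend(group[n_train:n_train + n_valid])
        test.extend(group[n_train + n_valid:])
    return train, valid, test

# Example with balanced toy records.
records = [{"is_deceptive": i % 2} for i in range(100)]
train, valid, test = stratified_split(records)
```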