---
configs:
- config_name: fake_news
data_files:
- split: train
path: "fake_news/train.jsonl"
- split: test
path: "fake_news/test.jsonl"
- split: validation
path: "fake_news/validation.jsonl"
- config_name: job_scams
data_files:
- split: train
path: "job_scams/train.jsonl"
- split: test
path: "job_scams/test.jsonl"
- split: validation
path: "job_scams/validation.jsonl"
- config_name: phishing
data_files:
- split: train
path: "phishing/train.jsonl"
- split: test
path: "phishing/test.jsonl"
- split: validation
path: "phishing/validation.jsonl"
- config_name: political_statements
data_files:
- split: train
path: "political_statements/train.jsonl"
- split: test
path: "political_statements/test.jsonl"
- split: validation
path: "political_statements/validation.jsonl"
- config_name: product_reviews
data_files:
- split: train
path: "product_reviews/train.jsonl"
- split: test
path: "product_reviews/test.jsonl"
- split: validation
path: "product_reviews/validation.jsonl"
- config_name: sms
data_files:
- split: train
path: "sms/train.jsonl"
- split: test
path: "sms/test.jsonl"
- split: validation
path: "sms/validation.jsonl"
- config_name: twitter_rumours
data_files:
- split: train
path: "twitter_rumours/train.jsonl"
- split: test
path: "twitter_rumours/test.jsonl"
- split: validation
path: "twitter_rumours/validation.jsonl"
---
# GDDs-2.0
The Generalized Deception Dataset version 2.0 is a labeled corpus containing over 95,000 samples of
deceptive and truthful texts from seven independent domains and tasks.
## Authors
Dainis Boumber and Rakesh Verma
ReDAS Lab, University of Houston, 2023. See https://www2.cs.uh.edu/~rmverma/ for contact information.
## DATASET
The entire dataset contains 95854 samples, of which 37282 are deceptive and 58572 non-deceptive.
There are 7 independent domains in the dataset.
Each task is (or has been converted to) a binary classification problem where `y` is an indicator of deception.
1) **Phishing** (2020 Email phishing benchmark with manually labeled emails)
*- total: 15272 deceptive: 6074 non-deceptive: 9198*
2) **Fake News** (News Articles)
*- total: 20456 deceptive: 8832 non-deceptive: 11624*
3) **Political Statements** (Claims and statements by politicians and other entities, created from PolitiFact data by relabeling LIAR)
*- total: 12497 deceptive: 8042 non-deceptive: 4455*
4) **Product Reviews** (Amazon product reviews)
*- total: 20971 deceptive: 10492 non-deceptive: 10479*
5) **Job Scams** (Job postings on an online board)
*- total: 14295 deceptive: 599 non-deceptive: 13696*
6) **SMS** (combination of SMS Spam from UCI repository and SMS Phishing datasets)
*- total: 6574 deceptive: 1274 non-deceptive: 5300*
7) **Twitter Rumours** (Collection of rumours from PHEME dataset, covers multiple topics)
*- total: 5789 deceptive: 1969 non-deceptive: 3820*
Each domain was constructed from one or more source datasets. Some tasks were not originally binary and had to be relabeled.
The inputs vary widely, both stylistically and syntactically, as well as in the goal of the deception
(or the absence thereof) in the context of each dataset. Nonetheless, all seven datasets contain a significant
fraction of texts that are meant to deceive the reader in one way or another.
Each subdirectory/config contains the domain/individual dataset split into three files:
`train.jsonl`, `test.jsonl`, and `validation.jsonl`,
which contain the train, test, and validation sets, respectively.
The splits are:
- train: 80%
- test: 10%
- validation: 10%
The sampling process was random with seed=42. It was stratified with respect to `y` (label) for each domain.
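A split of this shape can be reproduced roughly as follows; this is a minimal sketch, not the exact code used to build the release (`pandas` and `scikit-learn` are assumptions here, and the input path is illustrative):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Full data for one domain (single-file `data/` layout, described under CHANGES).
df = pd.read_json("sms/data/sms.jsonl", lines=True)

# 80% train / 20% holdout, stratified on the label, seed 42 as stated above.
train, holdout = train_test_split(
    df, test_size=0.2, stratify=df["label"], random_state=42
)
# Split the holdout evenly into 10% test and 10% validation.
test, valid = train_test_split(
    holdout, test_size=0.5, stratify=holdout["label"], random_state=42
)

print(len(train), len(test), len(valid))  # roughly 80/10/10 of the domain
```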
### Fields
Each `jsonl` file has two fields (columns): `text` (string) and `label` (integer).
`text` contains a statement or a claim that is either deceptive or truthful.
It is guaranteed to be valid Unicode, shorter than 1 million characters, and contains no empty entries or non-values.
`label` answers whether the text is deceptive: `1` means yes, it is deceptive; `0` means no,
the text is not deceptive (it is truthful).
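To load a single domain through the configs above (a sketch; the Hub repo id is a placeholder for wherever this dataset is hosted):

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hub path of GDDs-2.0.
ds = load_dataset("redaslab/GDDs-2.0", "sms")

print(ds["train"][0])  # {'text': '...', 'label': 0 or 1}

# Spot-check the field guarantees stated above.
for split in ("train", "test", "validation"):
    for ex in ds[split]:
        assert ex["label"] in (0, 1)
        assert isinstance(ex["text"], str)
        assert 0 < len(ex["text"]) < 1_000_000
```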
### Layout
The directory layout of gdds is as follows:
```
gdds/
    fake_news/
        train.jsonl
        test.jsonl
        validation.jsonl
        README.md
    ...
    sms/
        train.jsonl
        test.jsonl
        validation.jsonl
        README.md
    README.md
    LICENSE.txt
```
### Documentation
Primary documentation is this README file. Each dataset's directory contains a `README.md` file with additional details.
The contents of these files are also included at the end of this document in the Appendix.
LICENSE.txt contains the MIT license this dataset is distributed under.
## CHANGES
This dataset is a successor of [the GDD dataset](https://zenodo.org/record/6512468).
Notable changes from GDD are:
1) Addition of the SMS and Twitter Rumours datasets, bringing the total to 7 deception datasets from different domains.
2) Re-labeling of the Political Statements dataset using a scheme that better fits prior published work and is stricter about which statements qualify as non-deceptive (see the README file in that dataset's directory).
3) The Job Scams dataset's labels were previously inverted, with ~13500 samples labeled as deceptive (is_deceptive=True) and ~600 as non-deceptive. This could skew metrics such as F1-score, which for binary classification is computed for the class considered positive. This has been addressed: deceptive texts are now labeled 1 (i.e., positive or True) and non-deceptive texts 0 (i.e., negative or False).
4) All datasets have been processed using Cleanlab, with problematic samples manually examined and issues addressed where needed. See the individual datasets' README files for details.
5) All datasets now come in 2 formats: the entire data in a single jsonl file located in the `data/` subdirectory of each dataset, and a standard stratified 80-10-10 train-test-validation split in 3 separate jsonl files.
6) All datasets have two fields: "text" (string) and "label" (integer, 0 or 1; 0 indicates that the text is non-deceptive, 1 that it is deceptive).
7) '\n' has been normalized to ' ' in all datasets, since it causes issues with BERT's tokenizer in some cases (and to be in line with general whitespace normalization). Broken Unicode has been fixed. Whitespace, quotations, and bullet points were normalized. Text is limited to 1,000,000 characters in length and guaranteed to be non-empty. Duplicates within the same dataset (even on text alone) were dropped, as were empty and None values. A sketch of this normalization appears below.
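The normalization in item 7 could look roughly like this sketch; `ftfy` is an assumption standing in for whatever Unicode repair was actually used, and quote/bullet-point normalization is elided:

```python
import ftfy          # assumption: stands in for the actual Unicode-repair step
import pandas as pd

def normalize(text: str) -> str:
    text = ftfy.fix_text(text)      # fix broken / mis-decoded Unicode
    text = text.replace("\n", " ")  # newlines -> spaces (BERT tokenizer issue)
    return " ".join(text.split())   # collapse runs of whitespace

df = pd.read_json("fake_news/data/fake_news.jsonl", lines=True)  # illustrative path
df["text"] = df["text"].astype(str).map(normalize)
df = df.dropna(subset=["text", "label"])             # drop None values
df = df[df["text"].str.len().between(1, 1_000_000)]  # non-empty, at most 1M chars
df = df.drop_duplicates(subset="text")               # dedup on text alone
```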
## LICENSE
This dataset is published under the MIT license and can be used and modified by anyone free of charge.
See LICENSE.txt file for details.
## CITING
If you found this dataset useful in your research, please consider citing it as:
TODO: ADD our paper reference
## REFERENCES
Original GDD paper:
```bibtex
@inproceedings{10.1145/3508398.3519358,
  author = {Zeng, Victor and Liu, Xuting and Verma, Rakesh M.},
  title = {Does Deception Leave a Content Independent Stylistic Trace?},
  year = {2022},
  isbn = {9781450392204},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3508398.3519358},
  doi = {10.1145/3508398.3519358},
  abstract = {A recent survey claims that there are {\em no} general linguistic cues for deception. Since Internet societies are plagued with deceptive attacks such as phishing and fake news, this claim means that we must build individual datasets and detectors for each kind of attack. It also implies that when a new scam (e.g., Covid) arrives, we must start the whole process of data collection, annotation, and model building from scratch. In this paper, we put this claim to the test by building a quality domain-independent deception dataset and investigating whether a model can perform well on more than one form of deception.},
  booktitle = {Proceedings of the Twelfth ACM Conference on Data and Application Security and Privacy},
  pages = {349--351},
  numpages = {3},
  keywords = {domain-independent deception detection, dataset quality/cleaning},
  location = {Baltimore, MD, USA},
  series = {CODASPY '22}
}
```
## APPENDIX: Dataset and Domain Details
This section describes each domain/dataset in greater detail.
### Fake News
We post-process and split the Fake News dataset to ensure uniformity with Political Statements 2.0 and Twitter Rumours, as together they form GDDS-2.0.
#### Cleaning
Each dataset has been cleaned using Cleanlab. Non-English entries, erroneous (parser error) entries, empty entries, duplicate entries, and entries of length less than 2 characters or exceeding 1000000 characters were all removed.
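As an illustration, a Cleanlab pass of this kind could look like the sketch below; the TF-IDF featurizer and logistic-regression baseline are assumptions, not the actual models used:

```python
import pandas as pd
from cleanlab.filter import find_label_issues
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

df = pd.read_json("fake_news/data/fake_news.jsonl", lines=True)  # illustrative path

# Out-of-sample predicted probabilities from a simple baseline classifier.
X = TfidfVectorizer(max_features=20_000).fit_transform(df["text"])
pred_probs = cross_val_predict(
    LogisticRegression(max_iter=1000), X, df["label"],
    cv=5, method="predict_proba",
)

# Indices of likely label issues, ranked for manual inspection.
issue_idx = find_label_issues(
    labels=df["label"].to_numpy(), pred_probs=pred_probs,
    return_indices_ranked_by="self_confidence",
)
print(f"{len(issue_idx)} samples flagged for manual review")
```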
#### Preprocessing
Whitespace, quotes, bullet points, and Unicode are normalized.
#### Data
The dataset consists of "text" (string) and "label" (1 or 0): 1 means the text is deceptive, 0 indicates otherwise.
There are 20456 samples in the dataset, contained in `fake_news.jsonl`. For reproducibility, the data is also split into training, test, and validation sets in an 80/10/10 ratio, named `train.jsonl`, `test.jsonl`, and `validation.jsonl`. The sampling process was stratified. The training set contains 16364 samples; the validation and test sets have 2046 samples each.
### Job Scams
We post-process and split the Job Scams dataset to ensure uniformity with Political Statements 2.0 and Twitter Rumours, as together they form GDDS-2.0.
#### Cleaning
Each dataset has been cleaned using Cleanlab. Non-English entries, erroneous (parser error) entries, empty entries, duplicate entries, and entries of length less than 2 characters or exceeding 1000000 characters were all removed.
#### Preprocessing
Whitespace, quotes, bullet points, and Unicode are normalized.
#### Data
The dataset consists of "text" (string) and "label" (1 or 0): 1 means the text is deceptive, 0 indicates otherwise.
There are 14295 samples in the dataset, contained in `job_scams.jsonl`. For reproducibility, the data is also split into training, test, and validation sets in an 80/10/10 ratio, named `train.jsonl`, `test.jsonl`, and `validation.jsonl`. The sampling process was stratified. The training set contains 11436 samples; the validation and test sets have 1429 and 1430 samples, respectively.
### Phishing
This dataset consists of various phishing attacks as well as benign emails collected from real users.
#### Cleaning
Each dataset has been cleaned using Cleanlab. Non-English entries, erroneous (parser error) entries, empty entries, duplicate entries, and entries of length less than 2 characters or exceeding 1000000 characters were all removed.
#### Preprocessing
Whitespace, quotes, bullet points, and Unicode are normalized.
#### Data
The dataset consists of "text" (string) and "label" (1 or 0): 1 means the text is deceptive, 0 indicates otherwise.
There are 15272 samples in the dataset, contained in `phishing.jsonl`. For reproducibility, the data is also split into training, test, and validation sets in an 80/10/10 ratio, named `train.jsonl`, `test.jsonl`, and `validation.jsonl`. The sampling process was stratified. The training set contains 12217 samples; the validation and test sets have 1527 and 1528 samples, respectively.
### Political Statements
The Political Statements dataset was created from the LIAR corpus.
#### Labeling
The primary difference from the version used in GDD is the re-labeling scheme applied when converting the task from multiclass to binary.
#### Old scheme
We use the claim field as the text and map the labels “pants-fire,” “false,” and
“barely-true” to deceptive, and “half-true,” “mostly-true,” and “true”
to non-deceptive, resulting in 5,669 deceptive and 7,167 truthful
statements.
#### New scheme
Following
*Upadhayay, B., Behzadan, V.: "Sentimental liar: Extended corpus and deep learning models for fake claim classification" (2020)*
and
*Shahriar, Sadat, Arjun Mukherjee, and Omprakash Gnawali. "Deception Detection with Feature-Augmentation by Soft Domain Transfer." International Conference on Social Informatics. Cham: Springer International Publishing, 2022.*
we map the labels “pants-fire,” “false,”
“barely-true,” **and “half-true”** to deceptive; the labels “mostly-true” and “true” are mapped to non-deceptive. Statements that are only half-true are now considered deceptive, making the criterion for a statement being non-deceptive stricter: now 2 out of 6 labels map to non-deceptive and 4 map to deceptive.
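In code, the new scheme amounts to this mapping (label strings as they appear in LIAR):

```python
# New GDDS-2.0 scheme: 4 of LIAR's 6 labels map to deceptive, 2 to non-deceptive.
NEW_SCHEME = {
    "pants-fire":  1,  # deceptive
    "false":       1,
    "barely-true": 1,
    "half-true":   1,  # deceptive under the new, stricter scheme
    "mostly-true": 0,  # non-deceptive
    "true":        0,
}

def relabel(liar_label: str) -> int:
    """Map a LIAR truthfulness rating to the binary deception label."""
    return NEW_SCHEME[liar_label.strip().lower()]
```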
#### Cleaning
The dataset has been cleaned using Cleanlab, with visual inspection of the problems found. Partial sentences, such as "On Iran nuclear deal" or "On inflation", were removed. Texts with a large number of parser-induced errors were also removed, as were statements in languages other than English (namely, Spanish). Sequences with Unicode errors, or containing fewer than 2 characters or over 1 million characters, were removed.
#### Preprocessing
Whitespace, quotes, bullet points, and Unicode are normalized.
#### Data
The dataset consists of "text" (string) and "label" (1 or 0): 1 means the text is deceptive, 0 indicates otherwise.
There are 12497 samples in the dataset, contained in `political_statements.jsonl`. For reproducibility, the data is also split into training, test, and validation sets in an 80/10/10 ratio, named `train.jsonl`, `test.jsonl`, and `validation.jsonl`. The sampling process was stratified. The training set contains 9997 samples; the validation and test sets have 1250 samples each.
### Product Reviews
We post-process and split the Product Reviews dataset to ensure uniformity with Political Statements 2.0 and Twitter Rumours, as together they form GDDS-2.0.
#### Cleaning
Each dataset has been cleaned using Cleanlab. Non-English entries, erroneous (parser error) entries, empty entries, duplicate entries, and entries of length less than 2 characters or exceeding 1000000 characters were all removed.
#### Preprocessing
Whitespace, quotes, bullet points, and Unicode are normalized.
#### Data
The dataset consists of "text" (string) and "label" (1 or 0): 1 means the text is deceptive, 0 indicates otherwise.
There are 20971 samples in the dataset, contained in `product_reviews.jsonl`. For reproducibility, the data is also split into training, test, and validation sets in an 80/10/10 ratio, named `train.jsonl`, `test.jsonl`, and `validation.jsonl`. The sampling process was stratified. The training set contains 16776 samples; the validation and test sets have 2097 and 2098 samples, respectively.
### SMS
This dataset was created from the SMS Spam Collection and SMS Phishing Dataset for Machine Learning and Pattern Recognition, which contained 5,574 and 5,971 real English SMS messages, respectively. As these two datasets overlap, after de-duplication, the final dataset is made up of 6574 texts released by a private UK-based wireless operator; 1274 of them are deceptive, and the remaining 5300 are not.
#### Cleaning
Each dataset has been cleaned using Cleanlab. Non-English entries, erroneous (parser error) entries, empty entries, duplicate entries, and entries of length less than 2 characters or exceeding 1000000 characters were all removed.
#### Preprocessing
Whitespace, quotes, bullet points, and Unicode are normalized.
#### Data
The dataset consists of "text" (string) and "label" (1 or 0): 1 means the text is deceptive, 0 indicates otherwise.
There are 6574 samples in the dataset, contained in `sms.jsonl`. For reproducibility, the data is also split into training, test, and validation sets in an 80/10/10 ratio, named `train.jsonl`, `test.jsonl`, and `validation.jsonl`. The sampling process was stratified. The training set contains 5259 samples; the validation and test sets have 657 and 658 samples, respectively.
### Twitter Rumours
This deception dataset was created from the PHEME dataset of rumours and non-rumours:
https://figshare.com/articles/dataset/PHEME_dataset_of_rumours_and_non-rumours/4010619/1
We took source tweets only and ignored the replies to them. Each source tweet's rumour/non-rumour label was used to mark it as deceptive or non-deceptive.
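A sketch of that extraction, assuming the directory layout of the figshare release (event folders with `rumours/` and `non-rumours/` subfolders, each thread holding a `source-tweets/` directory):

```python
import json
from pathlib import Path

# Root folder name is an assumption based on the figshare PHEME release.
root = Path("all-rnr-annotated-threads")
rows = []
for subdir, label in (("rumours", 1), ("non-rumours", 0)):
    # Source tweets only; reactions/ (replies) are deliberately ignored.
    for src in root.glob(f"*/{subdir}/*/source-tweets/*.json"):
        tweet = json.loads(src.read_text(encoding="utf-8"))
        rows.append({"text": tweet["text"], "label": label})

print(f"collected {len(rows)} source tweets")
```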
#### Cleaning
The dataset has been cleaned using Cleanlab, with visual inspection of the problems found; no issues were identified. Duplicate entries and entries of length less than 2 characters or exceeding 1000000 characters were removed.
#### Preprocessing
Whitespace, quotes, bullet points, and Unicode are normalized.
#### Data
The dataset consists of "text" (string) and "label" (1 or 0): 1 means the text is deceptive, 0 indicates otherwise.
There are 5789 samples in the dataset, contained in `twitter_rumours.jsonl`. For reproducibility, the data is also split into training, test, and validation sets in an 80/10/10 ratio, named `train.jsonl`, `test.jsonl`, and `validation.jsonl`. The sampling process was stratified. The training set contains 4631 samples; the validation and test sets have 579 samples each.