Datasets: dainis-boumber
Committed commit f95872a (1 parent: 04fbdc7): "Updated readme"

README.md CHANGED
@@ -69,33 +69,67 @@ Dainis Boumber and Rakesh Verma
ReDAS Lab, University of Houston, 2023. See https://www2.cs.uh.edu/~rmverma/ for contact information.
The directory layout of gdds is like so:
@@ -118,28 +152,13 @@ gdds
LICENSE.txt
``

- Each subdirectory/config contains the domain/individual dataset.
- `train.jsonl`, `test.jsonl`, and `valid.jsonl` contain train, test, and validation sets, respectively.
-
- The splits are train=80%, test=10%, valid=10%
- The sampling process was random with seed=42, and stratified with respect to `y` (label) for each domain.
-
- ### Fields
-
- Each `jsonl` file has two fields (columns): `text` and `label`
-
- `label` answers the question whether text is deceptive: `1` means yes, it is deceptive, `0` means no, the text is not deceptive (it is truthful).
-
- `text` is guaranteed to be valid unicode, less than 1 million characters, and contains no empty entries or non-values.
-
### Documentation

Primary documentation is this README file. Each dataset's directory contains a `README.md` file with additional details.
The contents of these files are also included at the end of this document in the Appendix.
LICENSE.txt contains the MIT license this dataset is distributed under.

- ## Changes and Additions

This dataset is a successor of [the GDD dataset](https://zenodo.org/record/6512468).
@@ -159,26 +178,6 @@ Noteable changes from GDD are:
7) '\n' has been normalized to ' ' for all datasets as it causes issues with BERT's tokenizer in some cases (and to be in line with general whitespace normalization). Broken unicode has been fixed. Whitespace, quotations, and bullet points were normalized. Text is limited to 1,000,000 characters in length and guaranteed to be non-empty. Duplicates within the same dataset (even in text only) were dropped, as were empty and None values.

- ## Statistics
-
- The entire dataset contains 95854 samples, 37282 are deceptive and 58572 non-deceptive.
-
- **The split of data within the individual datasets/domains:**
-
- fake_news total: 20456 deceptive: 8832 non-deceptive: 11624
- job_scams total: 14295 deceptive: 599 non-deceptive: 13696
- phishing total: 15272 deceptive: 6074 non-deceptive: 9198
- political_statements total: 12497 deceptive: 8042 non-deceptive: 4455
- product_reviews total: 20971 deceptive: 10492 non-deceptive: 10479
- sms total: 6574 deceptive: 1274 non-deceptive: 5300
- twitter_rumours total: 5789 deceptive: 1969 non-deceptive: 3820

## LICENSE

This dataset is published under the MIT license and can be used and modified by anyone free of charge.
ReDAS Lab, University of Houston, 2023. See https://www2.cs.uh.edu/~rmverma/ for contact information.
+ ## DATASET
+
+ The entire dataset contains 95854 samples; 37282 are deceptive and 58572 non-deceptive.
+
+ There are 7 independent domains in the dataset.
+
+ Each task is (or has been converted to) a binary classification problem where `y` is an indicator of deception.
+
+ 1) **Phishing** (2020 email phishing benchmark with manually labeled emails)
+ *- total: 15272 deceptive: 6074 non-deceptive: 9198*
+
+ 2) **Fake News** (news articles)
+ *- total: 20456 deceptive: 8832 non-deceptive: 11624*
+
+ 3) **Political Statements** (claims and statements by politicians and other entities, created from Politifact by relabeling LIAR)
+ *- total: 12497 deceptive: 8042 non-deceptive: 4455*
+
+ 4) **Product Reviews** (Amazon product reviews)
+ *- total: 20971 deceptive: 10492 non-deceptive: 10479*
+
+ 5) **Job Scams** (job postings on an online board)
+ *- total: 14295 deceptive: 599 non-deceptive: 13696*
+
+ 6) **SMS** (combination of the SMS Spam dataset from the UCI repository and the SMS Phishing dataset)
+ *- total: 6574 deceptive: 1274 non-deceptive: 5300*
+
+ 7) **Twitter Rumours** (collection of rumours from the PHEME dataset; covers multiple topics)
+ *- total: 5789 deceptive: 1969 non-deceptive: 3820*
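As a sanity check (an editorial sketch, not part of the dataset itself), the per-domain counts above add up exactly to the stated overall totals:

```python
# Per-domain (total, deceptive) counts as stated above in this README.
DOMAINS = {
    "phishing": (15272, 6074),
    "fake_news": (20456, 8832),
    "political_statements": (12497, 8042),
    "product_reviews": (20971, 10492),
    "job_scams": (14295, 599),
    "sms": (6574, 1274),
    "twitter_rumours": (5789, 1969),
}

total = sum(t for t, _ in DOMAINS.values())
deceptive = sum(d for _, d in DOMAINS.values())
non_deceptive = total - deceptive

assert total == 95854 and deceptive == 37282 and non_deceptive == 58572

# Class balance varies widely per domain: job_scams is heavily skewed
# toward non-deceptive, while political_statements leans deceptive.
for name, (t, d) in sorted(DOMAINS.items(), key=lambda kv: kv[1][1] / kv[1][0]):
    print(f"{name:22s} {d / t:6.1%} deceptive")
```

Note the strong per-domain imbalance (roughly 4% deceptive in job_scams versus roughly 64% in political_statements), which matters when choosing evaluation metrics.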
+ Each one was constructed from one or more datasets. Some tasks were not initially binary and had to be relabeled.
+ The inputs vary wildly both stylistically and syntactically, as well as in the goal of the deception
+ (or absence thereof) in the context of each dataset. Nonetheless, all seven datasets contain a significant
+ fraction of texts that are meant to deceive the person reading them one way or another.
+ Each subdirectory/config contains the domain/individual dataset split into three files:
+
+ `train.jsonl`, `test.jsonl`, and `valid.jsonl`
+
+ that contain train, test, and validation sets, respectively.
+
+ The splits are:
+
+ -- train=80%
+ -- test=10%
+ -- valid=10%
+
+ The sampling process was random with seed=42. It was stratified with respect to `y` (label) for each domain.
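The authors' actual splitting code is not shown here, but the description above (random with seed=42, stratified by label, 80/10/10 per domain) can be reproduced with a stdlib-only sketch; `stratified_split` and the toy `rows` below are illustrative, not taken from the dataset:

```python
import random

def stratified_split(rows, seed=42, frac=(0.8, 0.1, 0.1)):
    """Split rows into train/test/valid, stratified by row['label']."""
    rng = random.Random(seed)
    by_label = {}
    for row in rows:
        by_label.setdefault(row["label"], []).append(row)
    train, test, valid = [], [], []
    for group in by_label.values():
        rng.shuffle(group)                 # random order within each class
        n = len(group)
        n_train = int(n * frac[0])
        n_test = int(n * frac[1])
        train += group[:n_train]
        test += group[n_train:n_train + n_test]
        valid += group[n_train + n_test:]  # remainder goes to validation
    return train, test, valid

# Toy example: 100 rows, two balanced classes.
rows = [{"text": f"t{i}", "label": i % 2} for i in range(100)]
tr, te, va = stratified_split(rows)
# The 80/10/10 proportions hold within each class as well as overall.
```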
+ ### Fields
+
+ Each `jsonl` file has two fields (columns): `text` (string) and `label` (integer).
+
+ `text` contains a statement or a claim that is either deceptive or truthful.
+ It is guaranteed to be valid unicode, less than 1 million characters, and contains no empty entries or non-values.
+
+ `label` answers the question of whether the text is deceptive: `1` means yes, it is deceptive; `0` means no, the text is not deceptive (it is truthful).
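Because each split is plain JSON Lines, a file can be read with the standard library alone; the helper and the example path below are illustrative, not part of the dataset:

```python
import json

def load_jsonl(path):
    """Read one split file; every line is a JSON object with 'text' and 'label'."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Hypothetical usage, assuming configs live in subdirectories such as sms/:
# rows = load_jsonl("sms/train.jsonl")
# deceptive_texts = [r["text"] for r in rows if r["label"] == 1]
```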
+ ### Layout

The directory layout of gdds is like so:

LICENSE.txt
``
### Documentation

Primary documentation is this README file. Each dataset's directory contains a `README.md` file with additional details.
The contents of these files are also included at the end of this document in the Appendix.
LICENSE.txt contains the MIT license this dataset is distributed under.

+ ## CHANGES
This dataset is a successor of [the GDD dataset](https://zenodo.org/record/6512468).

7) '\n' has been normalized to ' ' for all datasets as it causes issues with BERT's tokenizer in some cases (and to be in line with general whitespace normalization). Broken unicode has been fixed. Whitespace, quotations, and bullet points were normalized. Text is limited to 1,000,000 characters in length and guaranteed to be non-empty. Duplicates within the same dataset (even in text only) were dropped, as were empty and None values.
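The cleanup in point 7 can be approximated like this (a sketch of equivalent normalization; the authors' exact rules may differ):

```python
import re
import unicodedata

def normalize(text: str) -> str:
    """Approximate the cleanup described above: fix unicode, drop newlines,
    normalize quotes and whitespace, and cap length at 1,000,000 characters."""
    text = unicodedata.normalize("NFKC", text)   # normalize unicode forms
    text = text.replace("\n", " ")               # '\n' -> ' ' (BERT tokenizer issue)
    text = re.sub(r"[\u2018\u2019]", "'", text)  # curly single quotes -> '
    text = re.sub(r"[\u201C\u201D]", '"', text)  # curly double quotes -> "
    text = re.sub(r"\s+", " ", text).strip()     # collapse runs of whitespace
    return text[:1_000_000]                      # enforce the length limit
```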
## LICENSE

This dataset is published under the MIT license and can be used and modified by anyone free of charge.