dainis-boumber committed 3b77f86 (parent: 45bc599): yaml fix

README.md CHANGED
@@ -3,59 +3,66 @@ configs:
 - config_name: fake_news
   data_files:
   - split: train
-    path:
+    path: fake_news/train.jsonl
   - split: test
-    path:
+    path: fake_news/test.jsonl
   - split: validation
-    path:
+    path: fake_news/validation.jsonl
 - config_name: job_scams
   data_files:
   - split: train
-    path:
+    path: job_scams/train.jsonl
   - split: test
-    path:
+    path: job_scams/test.jsonl
   - split: validation
-    path:
+    path: job_scams/validation.jsonl
 - config_name: phishing
   data_files:
   - split: train
-    path:
+    path: phishing/train.jsonl
   - split: test
-    path:
+    path: phishing/test.jsonl
   - split: validation
-    path:
+    path: phishing/validation.jsonl
 - config_name: political_statements
   data_files:
   - split: train
-    path:
+    path: political_statements/train.jsonl
   - split: test
-    path:
+    path: political_statements/test.jsonl
   - split: validation
-    path:
+    path: political_statements/validation.jsonl
 - config_name: product_reviews
   data_files:
   - split: train
-    path:
+    path: product_reviews/train.jsonl
   - split: test
-    path:
+    path: product_reviews/test.jsonl
   - split: validation
-    path:
+    path: product_reviews/validation.jsonl
 - config_name: sms
   data_files:
   - split: train
-    path:
+    path: sms/train.jsonl
   - split: test
-    path:
+    path: sms/test.jsonl
   - split: validation
-    path:
+    path: sms/validation.jsonl
 - config_name: twitter_rumours
   data_files:
   - split: train
-    path:
+    path: twitter_rumours/train.jsonl
   - split: test
-    path:
+    path: twitter_rumours/test.jsonl
   - split: validation
-    path:
+    path: twitter_rumours/validation.jsonl
+license: mit
+task_categories:
+- text-classification
+language:
+- en
+size_categories:
+- 10K<n<100K
 ---
 
 # GDDs-2.0
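With every `path` filled in, each domain now loads as its own configuration. A minimal sketch of what loading looks like after this fix, assuming the hub id is `dainis-boumber/GDDs-2.0` (the id is an assumption, not confirmed by the diff):

```python
# Minimal sketch: load one GDDs-2.0 config after the YAML fix.
# NOTE: the repo id below is an assumption; substitute the dataset's actual hub id.
from datasets import load_dataset

fake_news = load_dataset("dainis-boumber/GDDs-2.0", name="fake_news")
print(fake_news)              # DatasetDict with train/test/validation splits
print(fake_news["train"][0])  # e.g. {"text": "...", "is_deceptive": 1}
```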
@@ -227,7 +234,6 @@ location = {Baltimore, MD, USA},
 series = {CODASPY '22}
 }
 
-
 ## APPENDIX: Dataset and Domain Details
 
 This section describes each domain/dataset in greater detail.
@@ -240,13 +246,6 @@ often reputable sources, such as "[claim] (Reuters)". It contains 35,028 real ne
 We found a number of out-of-domain statements that are clearly not relevant to news, such as "Cool", which is a potential
 problem for transfer learning as well as classification.
 
-
-#### Data
-
-The dataset consists of "text" (string) and "is_deceptive" (1,0). 1 means the text is deceptive, 0 indicates otherwise.
-
-There are 20456 samples in the dataset, contained in `phishing.jsonl`. For reproduceability, the data is also split into training, test,
-and validation sets in 80/10/10 ratio. They are named `train.jsonl`, `test.jsonl`, `valid.jsonl`. The sampling process was stratified.
 The training set contains 16364 samples, the validation and the test sets have 2064 and 2064 samples, respectively.
 
 ### JOB SCAMS
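The removed `#### Data` paragraph above described an 80/10/10 stratified split into `train.jsonl`, `test.jsonl`, and `valid.jsonl`. A sketch of how such a split can be reproduced; this is illustrative only, not the authors' exact procedure, and the input file name is hypothetical:

```python
# Illustrative 80/10/10 stratified split like the one described in the removed text.
import json
from sklearn.model_selection import train_test_split

with open("fake_news.jsonl") as f:  # hypothetical input file
    rows = [json.loads(line) for line in f]
labels = [r["is_deceptive"] for r in rows]

# Carve off 20%, then halve it into test and validation (10% each).
train, rest = train_test_split(rows, test_size=0.2, stratify=labels, random_state=0)
rest_labels = [r["is_deceptive"] for r in rest]
test, valid = train_test_split(rest, test_size=0.5, stratify=rest_labels, random_state=0)
print(len(train), len(test), len(valid))
```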
@@ -260,18 +259,13 @@ The original Job Labels dataset had the labels inverted when released. The probl
 
 #### Cleaning
 
-HTML tags
-
-#### Data
-
-T**With just under 600 deceptive texts, this dataset is heavily imbalanced.**
+It was cleaned by removing all HTML tags, empty descriptions, and duplicates.
+The final dataset is heavily imbalanced, with 599 deceptive and 13696 non-deceptive samples out of the 14295 total.
 
 ### PHISHING
 
 This dataset consists of various phishing attacks as well as benign emails collected from real users.
 
-#### Data
-
 The training set contains 12217 samples, the validation and the test sets have 1527 and 1528 samples, respectively.
 
 ### POLITICAL STATEMENTS
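The cleaning line added above for JOB SCAMS (stripping HTML tags, empty descriptions, and duplicates) is easy to approximate with pandas; a sketch in which the file and column names are assumptions for illustration:

```python
# Sketch of a JOB SCAMS-style cleaning pass: strip HTML, drop empties and duplicates.
# The file and column names are assumptions, not taken from the repository.
import pandas as pd

df = pd.read_json("job_scams_raw.jsonl", lines=True)
df["text"] = df["text"].str.replace(r"<[^>]+>", " ", regex=True).str.strip()
df = df[df["text"] != ""].drop_duplicates(subset="text")
```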
@@ -298,7 +292,8 @@ Following
 
 and
 
-*Shahriar, Sadat, Arjun Mukherjee, and Omprakash Gnawali. "Deception Detection with Feature-Augmentation by Soft Domain Transfer."
+*Shahriar, Sadat, Arjun Mukherjee, and Omprakash Gnawali. "Deception Detection with Feature-Augmentation by Soft Domain Transfer."
+International Conference on Social Informatics. Cham: Springer International Publishing, 2022.*
 
 we map the labels “pants-fire,” “false,”
 “barely-true,” **and “half-true,”** to deceptive; the labels "mostly-true" and "true" are mapped to non-deceptive.
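The relabeling just described is a simple six-way-to-binary mapping; a sketch using exactly the label names quoted in the hunk:

```python
# Sketch of the LIAR-style label mapping described above.
DECEPTIVE = {"pants-fire", "false", "barely-true", "half-true"}
NON_DECEPTIVE = {"mostly-true", "true"}

def to_is_deceptive(label: str) -> int:
    """Map a six-way truthfulness label to the binary is_deceptive field."""
    if label in DECEPTIVE:
        return 1
    if label in NON_DECEPTIVE:
        return 0
    raise ValueError(f"unexpected label: {label!r}")
```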
@@ -311,8 +306,6 @@ The dataset has been cleaned using cleanlab with visual inspection of problems f
 "On inflation", were removed. Texts with a large number of errors induced by a parser were also removed.
 Statements in languages other than English (namely, Spanish) were also removed.
 
-#### Data
-
 The training set contains 9997 samples, the validation and the test sets have 1250 samples each.
 
 ### PRODUCT REVIEWS
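Both this section and PRODUCT REVIEWS below rely on cleanlab-style label-issue detection followed by visual inspection. A sketch of how such issues are typically surfaced with cleanlab 2.x; the toy arrays stand in for real out-of-sample predicted probabilities:

```python
# Sketch: flag likely label issues for visual inspection, as the README describes.
import numpy as np
from cleanlab.filter import find_label_issues

labels = np.array([1, 0, 1, 0])        # toy is_deceptive labels
pred_probs = np.array([[0.1, 0.9],     # per-sample [P(0), P(1)] from a
                       [0.8, 0.2],     # cross-validated classifier
                       [0.7, 0.3],     # label says 1, model prefers 0 -> flagged
                       [0.6, 0.4]])
issues = find_label_issues(labels, pred_probs,
                           return_indices_ranked_by="self_confidence")
print(issues)  # indices worth inspecting, most suspicious first
```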
@@ -320,7 +313,13 @@ The training set contains 9997 samples, the validation and the test sets have 12
 We post-process and split the Product Reviews dataset to ensure uniformity with Political Statements 2.0 and Twitter Rumours,
 as they all go into GDDS-2.0.
 
-
+The dataset is produced from English Amazon Reviews labeled as either real or fake, relabeled as deceptive and non-deceptive respectively.
+The reviews cover a variety of products with no particular product dominating the dataset. Although the dataset authors filtered out
+non-English reviews, through outlier detection we found that the dataset still contains reviews in Spanish and other languages.
+Problematic label detection shows that over 6713 samples are potentially mislabeled; since this technique is error-prone,
+we visually examined the 67 reviews flagged as the largest potential sources of error (the top percentile) and confirmed that
+most of them appear to be mislabeled. The final dataset of 20,971 reviews is evenly balanced with 10,492 deceptive and 10,479
+non-deceptive samples.
 
 The training set contains 16776 samples, the validation and the test sets have 2097 and 2098 samples, respectively.
 
@@ -331,8 +330,6 @@ which contained 5,574 and 5,971 real English SMS messages, respectively. As thes
 the final dataset is made up of 6574 texts released by a private UK-based wireless operator; 1274 of them are deceptive,
 and the remaining 5300 are not.
 
-#### Data
-
 The training set contains 5259 samples, the validation and the test sets have 657 and 658 samples,
 respectively.
 
@@ -345,21 +342,4 @@ https://figshare.com/articles/dataset/PHEME_dataset_of_rumours_and_non-rumours/4
 was used in the creation of this dataset. We took source tweets only, and ignored replies to them.
 We used the source tweet's rumour or non-rumour label to mark it as deceptive or non-deceptive.
 
-
-
-The training set contains 4631 samples, the validation and the test sets have 579 samples each.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+The training set contains 4631 samples, the validation and the test sets have 579 samples each.
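As a final check, the per-split counts quoted throughout this appendix can be verified programmatically once the fixed YAML paths resolve; same hypothetical repo id as in the earlier sketch:

```python
# Sketch: print the split sizes for every GDDs-2.0 config.
from datasets import load_dataset

CONFIGS = ["fake_news", "job_scams", "phishing", "political_statements",
           "product_reviews", "sms", "twitter_rumours"]
for name in CONFIGS:
    ds = load_dataset("dainis-boumber/GDDs-2.0", name=name)  # assumed repo id
    print(name, {split: ds[split].num_rows for split in ds})
```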