---
license: apache-2.0
annotations_creators:
- expert-generated
language_creators:
- found
task_categories:
- text-classification
language:
- en
multilinguality:
- monolingual
source_datasets:
- Opensources https://github.com/BigMcLargeHuge/opensources
- FakeNews Corpus https://github.com/several27/FakeNewsCorpus
tags:
- fake-news-detection
- fake news
- english
- nlp
task_ids:
- topic-classification
- fact-checking
pretty_name: Fake News Opensources
size_categories:
- 1M<n<10M
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: type
    dtype: string
  - name: domain
    dtype: string
  - name: scraped_at
    dtype: string
  - name: url
    dtype: string
  - name: authors
    dtype: string
  - name: title
    dtype: string
  - name: content
    dtype: string
---

# Dataset Card for "Fake News Opensources"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description
<!--
- **Paper:** Fake News Opensources
-->
- **Homepage:** [https://github.com/AndyTheFactory/FakeNewsDataset](https://github.com/AndyTheFactory/FakeNewsDataset)
- **Repository:** [https://github.com/AndyTheFactory/FakeNewsDataset](https://github.com/AndyTheFactory/FakeNewsDataset)
- **Point of Contact:** [Andrei Paraschiv](https://github.com/AndyTheFactory)
### Dataset Summary
A consolidated and cleaned-up version of the opensources fake news dataset.

The Fake News Corpus comprises 8,529,090 individual articles classified into 12 classes: reliable, unreliable, political, bias, fake, conspiracy,
rumor, clickbait, junk science, satire, hate, and unknown. The articles were scraped between the end of 2017 and the beginning of 2018 from 647 distinct
news sources, covering articles from the years leading up to the 2016 US elections and the year after.
Documents were labeled based on their source, using the curated website list provided by opensources.co, which leads to a
highly imbalanced class distribution. The proposed source classification method was based on six criteria:
- title and domain name analysis,
- “About Us” analysis,
- mentioning of sources or studies,
- writing style analysis,
- aesthetic analysis, and
- social media analysis.

After extensive data cleaning and duplicate removal, we retain **5,915,569** records.
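
For quick experimentation, the corpus can be loaded with the 🤗 `datasets` library. A minimal sketch, assuming the dataset is hosted on the Hugging Face Hub under a repository ID such as `AndyTheFactory/fake-news-opensources` (hypothetical; substitute the actual Hub ID):

```
from datasets import load_dataset

# NOTE: hypothetical Hub ID -- replace with the actual repository name.
ds = load_dataset("AndyTheFactory/fake-news-opensources", split="train")

print(ds)  # features: id, type, domain, scraped_at, url, authors, title, content
print(ds[0]["type"], ds[0]["domain"])
```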

### Languages

English

## Dataset Structure


### Data Instances


An example record looks as follows:

```
{
  'id': 4059480,
  'type': 'political',
  'domain': 'dailycaller.com',
  'scraped_at': '2017-11-27',
  'url': 'http://dailycaller.com/buzz/massachusettsunited-states/page/2/',
  'authors': 'Jeff Winkler, Jonathan Strong, Ken Blackwell, Pat Mcmahon, Julia Mcclatchy, Admin, Matt Purple',
  'title': 'The Daily Caller',
  'content': 'New Hampshire is the state with the highest median income in the nation, according to the U.S. Census Bureau’s report on income, poverty and health insurance',
}
```

### Data Fields

- `id`: the unique article ID
- `type`: the label of the record (one of: reliable, unreliable, political, bias, fake, conspiracy, rumor, clickbait, junksci, satire, hate, unknown)
- `domain`: the source website the article was scraped from
- `scraped_at`: date of the original scrape run
- `url`: original article URL
- `authors`: comma-separated list of scraped authors
- `title`: original scraped article title
- `content`: full article text
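
Because `type` is a plain string, a common preprocessing step is to collapse the twelve source-level labels into a coarser scheme before training. A minimal sketch (the grouping below is one possible choice, not part of the dataset; `ds` is the dataset loaded as in the example above):

```
# One possible coarse grouping of the source-level labels; adjust to your task.
FAKE_LABELS = {"fake", "conspiracy", "rumor", "junksci", "unreliable", "hate"}

def to_binary(example):
    # 0 = reliable, 1 = fake-like, -1 = other labels (dropped below).
    if example["type"] == "reliable":
        return {"label": 0}
    if example["type"] in FAKE_LABELS:
        return {"label": 1}
    return {"label": -1}

binary_ds = ds.map(to_binary).filter(lambda ex: ex["label"] >= 0)
```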

### Data Splits


Label | Nr Records
:--- | ---:
reliable | 1,807,323
political | 968,205
bias | 769,874
fake | 762,178
conspiracy | 494,184
rumor | 375,963
unknown | 230,532
clickbait | 174,176
unreliable | 104,537
satire | 84,735
junksci | 79,099
hate | 64,763
**total** | **5,915,569**
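
The counts above can be re-derived from the loaded data to sanity-check a download; a small sketch using `collections.Counter` (assuming `ds` from the loading example, and noting that materializing the `type` column pulls ~5.9M strings into memory):

```
from collections import Counter

# Tally records per label; should match the table above for the full corpus.
counts = Counter(ds["type"])
for label, n in counts.most_common():
    print(f"{label:12s}{n:>10,d}")
```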

## Dataset Creation

### Source Data

News articles scraped from 647 distinct sources across the web.

#### Who are the source language producers?

News outlets and blogs.


### Annotations

#### Who are the annotators?

Journalists

### Other Known Limitations

The dataset was not manually filtered; therefore, some labels may be incorrect, and some URLs may point to other pages on the source website rather than to the actual article. However, because the corpus is intended for training machine learning algorithms, these problems should not pose a practical issue.

Additionally, once the dataset is finalised (at the moment only about 80% has been cleaned and published), I do not intend to update it, so it may quickly become outdated for purposes other than content-based algorithms. However, any contributions are welcome!

### Licensing Information

This data is available and distributed under the Apache-2.0 license.

### Citation Information

```
tbd
```