Datasets:
Tasks: Text Classification
Modalities: Text
Formats: parquet
Sub-tasks: sentiment-classification
Languages: English
Size: 10K - 100K
License:
Sasha Luccioni committed · Commit cd89d3c
1 Parent(s): 3f276e9
Eval metadata Batch 4: Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full (#4338)
* Eval metadata for Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full
* Update README.md (oops, duplicate)
* Update README.md (adding header)
* Update README.md (adding header)
* Update datasets/xsum/README.md
Co-authored-by: Quentin Lhoest <[email protected]>
* Update README.md (removing ROUGE args)
* Update README.md (removing F1 params)
* Update README.md
* Update README.md (updating to binary F1)
* Update README.md (oops typo)
* Update README.md (F1 binary)
Co-authored-by: sashavor <[email protected]>
Co-authored-by: Quentin Lhoest <[email protected]>
Commit from https://github.com/huggingface/datasets/commit/bce8e6541af98cb0e06b24f9c96a6431cf813a38
README.md
CHANGED
@@ -19,6 +19,46 @@ task_ids:
 - sentiment-classification
 paperswithcode_id: null
 pretty_name: Tweets Hate Speech Detection
+train-eval-index:
+- config: default
+  task: text-classification
+  task_id: binary_classification
+  splits:
+    train_split: train
+  col_mapping:
+    tweet: text
+    label: target
+  metrics:
+  - type: accuracy
+    name: Accuracy
+  - type: f1
+    name: F1 binary
+    args:
+      average: binary
+  - type: precision
+    name: Precision macro
+    args:
+      average: macro
+  - type: precision
+    name: Precision micro
+    args:
+      average: micro
+  - type: precision
+    name: Precision weighted
+    args:
+      average: weighted
+  - type: recall
+    name: Recall macro
+    args:
+      average: macro
+  - type: recall
+    name: Recall micro
+    args:
+      average: micro
+  - type: recall
+    name: Recall weighted
+    args:
+      average: weighted
 ---
 
 # Dataset Card for Tweets Hate Speech Detection
@@ -86,7 +126,7 @@ The dataset contains a label denoting is the tweet a hate speech or not
 * tweet: content of the tweet as a string.
 
 ### Data Splits
-
+
 The data contains training data with :31962 entries
 
 ## Dataset Creation
@@ -99,7 +139,7 @@ The data contains training data with :31962 entries
 
 #### Initial Data Collection and Normalization
 
-Crowdsourced from tweets of users
+Crowdsourced from tweets of users
 
 #### Who are the source language producers?
 
@@ -109,11 +149,11 @@ Cwodsourced from twitter
 
 #### Annotation process
 
-The data has been precprocessed and a model has been trained to assign the relevant label to the tweet
+The data has been precprocessed and a model has been trained to assign the relevant label to the tweet
 
 #### Who are the annotators?
 
-The data has been provided by Roshan Sharma
+The data has been provided by Roshan Sharma
 
 ### Personal and Sensitive Information
 
@@ -123,7 +163,7 @@ The data has been provided by Roshan Sharma
 
 ### Social Impact of Dataset
 
-With the help of this dataset, one can understand more about the human sentiments and also analye the situations when a particular person intends to make use of hatred/racist comments
+With the help of this dataset, one can understand more about the human sentiments and also analye the situations when a particular person intends to make use of hatred/racist comments
 
 ### Discussion of Biases
 
@@ -140,7 +180,7 @@ The data could be cleaned up further for additional purposes such as applying a
 
 ### Dataset Curators
 
-Roshan Sharma
+Roshan Sharma
 
 ### Licensing Information
 
@@ -152,4 +192,4 @@ Roshan Sharma
 
 ### Contributions
 
-Thanks to [@darshan-gandhi](https://github.com/darshan-gandhi) for adding this dataset.
+Thanks to [@darshan-gandhi](https://github.com/darshan-gandhi) for adding this dataset.
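To make the new train-eval-index block concrete, here is a minimal sketch (not part of the commit) of how an evaluation could follow it by hand: load the train split, apply the declared col_mapping (tweet to text, label to target), and report the listed metrics with scikit-learn. The dataset ID "tweets_hate_speech_detection", the use of rename_columns, and the placeholder predictions are illustrative assumptions, not an official evaluation script.

```python
# Illustrative sketch only: exercises the train-eval-index configuration above.
# Assumes the `datasets` and `scikit-learn` libraries and the dataset ID
# "tweets_hate_speech_detection"; predictions below are a placeholder.
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# splits.train_split: train
ds = load_dataset("tweets_hate_speech_detection", split="train")

# col_mapping: tweet -> text, label -> target
ds = ds.rename_columns({"tweet": "text", "label": "target"})

y_true = ds["target"]
y_pred = y_true  # placeholder; substitute a real model's predictions

results = {
    "accuracy": accuracy_score(y_true, y_pred),
    "f1_binary": f1_score(y_true, y_pred, average="binary"),  # positive (hate) class only
}
for avg in ("macro", "micro", "weighted"):
    results[f"precision_{avg}"] = precision_score(y_true, y_pred, average=avg)
    results[f"recall_{avg}"] = recall_score(y_true, y_pred, average=avg)

print(results)
```

The "updating to binary F1" step in the commit history matters here because average="binary" reports F1 for the positive class only, whereas the macro, micro, and weighted variants aggregate over both classes; for an imbalanced hate-speech label the binary score is usually the more informative number.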