---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
pretty_name: Jigsaw Unintended Bias in Toxicity Classification
tags:
- toxicity-prediction
dataset_info:
features:
- name: target
dtype: float32
- name: comment_text
dtype: string
- name: severe_toxicity
dtype: float32
- name: obscene
dtype: float32
- name: identity_attack
dtype: float32
- name: insult
dtype: float32
- name: threat
dtype: float32
- name: asian
dtype: float32
- name: atheist
dtype: float32
- name: bisexual
dtype: float32
- name: black
dtype: float32
- name: buddhist
dtype: float32
- name: christian
dtype: float32
- name: female
dtype: float32
- name: heterosexual
dtype: float32
- name: hindu
dtype: float32
- name: homosexual_gay_or_lesbian
dtype: float32
- name: intellectual_or_learning_disability
dtype: float32
- name: jewish
dtype: float32
- name: latino
dtype: float32
- name: male
dtype: float32
- name: muslim
dtype: float32
- name: other_disability
dtype: float32
- name: other_gender
dtype: float32
- name: other_race_or_ethnicity
dtype: float32
- name: other_religion
dtype: float32
- name: other_sexual_orientation
dtype: float32
- name: physical_disability
dtype: float32
- name: psychiatric_or_mental_illness
dtype: float32
- name: transgender
dtype: float32
- name: white
dtype: float32
- name: created_date
dtype: string
- name: publication_id
dtype: int32
- name: parent_id
dtype: float32
- name: article_id
dtype: int32
- name: rating
dtype:
class_label:
names:
'0': rejected
'1': approved
- name: funny
dtype: int32
- name: wow
dtype: int32
- name: sad
dtype: int32
- name: likes
dtype: int32
- name: disagree
dtype: int32
- name: sexual_explicit
dtype: float32
- name: identity_annotator_count
dtype: int32
- name: toxicity_annotator_count
dtype: int32
splits:
- name: train
num_bytes: 914264058
num_examples: 1804874
- name: test_private_leaderboard
num_bytes: 49188921
num_examples: 97320
- name: test_public_leaderboard
num_bytes: 49442360
num_examples: 97320
download_size: 0
dataset_size: 1012895339
---
Dataset Card for Jigsaw Unintended Bias in Toxicity Classification
Table of Contents
- Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification
- Repository:
- Paper:
- Leaderboard: https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/leaderboard
- Point of Contact:
Dataset Summary
The Jigsaw Unintended Bias in Toxicity Classification dataset comes from the eponymous Kaggle competition.
Please see the original data description for more information.
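The data is hosted on Kaggle, so loading it through the datasets library requires a manual download. Below is a minimal loading sketch, assuming the competition files have been downloaded from Kaggle (after accepting the competition rules) and unpacked into a local directory; the path shown is a placeholder.

```python
from datasets import load_dataset

# Point data_dir at the directory holding the manually downloaded
# Kaggle competition files (placeholder path, adjust to your setup).
dataset = load_dataset(
    "jigsaw_unintended_bias",
    data_dir="/path/to/jigsaw-unintended-bias-in-toxicity-classification",
)
print(dataset["train"][0]["comment_text"])
```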
Supported Tasks and Leaderboards
The main target for this dataset is toxicity prediction. Several toxicity subtypes are also available, so the dataset can be used for multi-attribute prediction.
See the original leaderboard for reference.
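One possible multi-attribute setup, as a sketch only (the column selection below is illustrative, not prescribed by the competition):

```python
# Stack the main toxicity score and the subtype scores into a single
# label vector per comment; this particular column choice is an
# assumption, not part of the official task definition.
SUBTYPE_COLUMNS = [
    "severe_toxicity", "obscene", "identity_attack",
    "insult", "threat", "sexual_explicit",
]

def to_multi_target(example):
    example["labels"] = [example["target"]] + [example[c] for c in SUBTYPE_COLUMNS]
    return example

# `dataset` comes from the loading sketch above.
train = dataset["train"].map(to_multi_target)
```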
Languages
English
Dataset Structure
Data Instances
A data point consists of an id, a comment, the main toxicity target, the other toxicity subtypes, and identity attributes.
For instance, here's the first train example.
{
"article_id": 2006,
"asian": NaN,
"atheist": NaN,
"bisexual": NaN,
"black": NaN,
"buddhist": NaN,
"christian": NaN,
"comment_text": "This is so cool. It's like, 'would you want your mother to read this??' Really great idea, well done!",
"created_date": "2015-09-29 10:50:41.987077+00",
"disagree": 0,
"female": NaN,
"funny": 0,
"heterosexual": NaN,
"hindu": NaN,
"homosexual_gay_or_lesbian": NaN,
"identity_annotator_count": 0,
"identity_attack": 0.0,
"insult": 0.0,
"intellectual_or_learning_disability": NaN,
"jewish": NaN,
"latino": NaN,
"likes": 0,
"male": NaN,
"muslim": NaN,
"obscene": 0.0,
"other_disability": NaN,
"other_gender": NaN,
"other_race_or_ethnicity": NaN,
"other_religion": NaN,
"other_sexual_orientation": NaN,
"parent_id": NaN,
"physical_disability": NaN,
"psychiatric_or_mental_illness": NaN,
"publication_id": 2,
"rating": 0,
"sad": 0,
"severe_toxicity": 0.0,
"sexual_explicit": 0.0,
"target": 0.0,
"threat": 0.0,
"toxicity_annotator_count": 4,
"transgender": NaN,
"white": NaN,
"wow": 0
}
Data Fields
- id: identifier of the comment
- target: value between 0 (non-toxic) and 1 (toxic) classifying the comment
- comment_text: the text of the comment
- severe_toxicity: value between 0 (not severely toxic) and 1 (severely toxic) classifying the comment
- obscene: value between 0 (non-obscene) and 1 (obscene) classifying the comment
- identity_attack: value between 0 (non-identity_hate) and 1 (identity_hate) classifying the comment
- insult: value between 0 (non-insult) and 1 (insult) classifying the comment
- threat: value between 0 (non-threat) and 1 (threat) classifying the comment
- For a subset of rows, columns indicating whether the comment mentions each of the following identities (they may contain NaNs; see the filtering sketch after these field lists):
male
female
transgender
other_gender
heterosexual
homosexual_gay_or_lesbian
bisexual
other_sexual_orientation
christian
jewish
muslim
hindu
buddhist
atheist
other_religion
black
white
asian
latino
other_race_or_ethnicity
physical_disability
intellectual_or_learning_disability
psychiatric_or_mental_illness
other_disability
- Other metadata related to the source of the comment, such as creation date, publication id, number of likes,
number of annotators, etc:
created_date
publication_id
parent_id
article_id
rating
funny
wow
sad
likes
disagree
sexual_explicit
identity_annotator_count
toxicity_annotator_count
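As noted above, the identity columns are populated only for comments that were sent to identity annotators. One way to select those rows, as a minimal sketch assuming the `dataset` object from the loading example above:

```python
# Keep only rows that received identity annotations; for these rows the
# identity columns hold fractions of annotators in [0, 1] rather than NaN.
identity_labeled = dataset["train"].filter(
    lambda example: example["identity_annotator_count"] > 0
)
```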
Data Splits
There are three splits:
- train: the training set as released during the competition. Contains labels for all rows, and identity information for a subset of rows.
- test_private_leaderboard: the private leaderboard test set, including toxicity labels and identity subgroups. The competition target was a binarized version of the toxicity score, which can be reconstructed by applying a >= 0.5 threshold (see the sketch below).
- test_public_leaderboard: the public leaderboard test set, likewise including toxicity labels and identity subgroups; the same >= 0.5 binarization applies.

The unlabeled test set distributed during the competition is not included as a separate split.
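A minimal sketch of that reconstruction, assuming the `dataset` object from the loading example above:

```python
# Rebuild the binary competition label from the continuous toxicity
# score using the >= 0.5 threshold described above.
def binarize(example):
    example["label"] = int(example["target"] >= 0.5)
    return example

test_private = dataset["test_private_leaderboard"].map(binarize)
```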
Dataset Creation
Curation Rationale
The dataset was created to help in efforts to identify and curb instances of toxicity online.
Source Data
Initial Data Collection and Normalization
[More Information Needed]
Who are the source language producers?
[More Information Needed]
Annotations
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
[More Information Needed]
Licensing Information
This dataset is released under CC0, as is the underlying comment text.
Citation Information
No citation is available for this dataset; you may link to the Kaggle competition instead.
Contributions
Thanks to @iwontbecreative for adding this dataset.