---
license: apache-2.0
task_categories:
  - text-classification
language:
  - en
size_categories:
  - 1K<n<200K
---

# Dataset info

## Training Dataset

The dataset consists of a large number of Wikipedia comments that have been labeled by human raters for toxic behavior. The types of toxicity are:

- toxic
- severe_toxic
- obscene
- threat
- insult
- identity_hate

The original dataset can be found here: [jigsaw_toxic_classification](https://kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge)
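
Each of the six labels is a separate 0/1 column, and a single comment can carry several labels at once. As a minimal sketch (assuming `train.csv` from the original Kaggle release is available locally), the per-label counts can be inspected like this:

```python
import pandas as pd

# train.csv from the original Jigsaw Kaggle release (assumed local path)
data = pd.read_csv('train.csv')

label_columns = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']

# Each label is a 0/1 flag, and a comment can carry several labels at once
print(data[label_columns].sum())                      # comments per toxicity type
print((data[label_columns].sum(axis=1) == 0).sum())   # number of clean comments
```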

Our training dataset is a sampled version of the original dataset, containing an equal number of samples for the clean and toxic classes.

## Dataset creation

```python
import pandas as pd
from datasets import Dataset

data = pd.read_csv('train.csv')  # train.csv from the original dataset
column_names = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
column_labels = data[column_names][2:-1]  # label slice from the original snippet (not used below)

# A comment is toxic if it carries at least one of the six labels, clean otherwise
train_toxic = data[data[column_names].sum(axis=1) > 0]
train_clean = data[data[column_names].sum(axis=1) == 0]
# Down-sample the clean class to match the number of toxic comments
train_clean_sampled = train_clean.sample(n=16225, random_state=42)

# Combine, shuffle, and convert to a Hugging Face Dataset
dataframe = pd.concat([train_toxic, train_clean_sampled], axis=0)
dataframe = dataframe.sample(frac=1, random_state=42)
dataset = Dataset.from_pandas(dataframe)

# Split once so the train and validation sets do not overlap
split = dataset.train_test_split(test_size=0.2)
train_dataset = split['train']
val_dataset = split['test']
```
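
Since the card lists `text-classification` as the task, downstream users may want a single binary label rather than the six separate flags. The snippet below is a minimal sketch, not part of the original creation code, and the `label` column name is just an illustrative choice; it shows one way to derive such a label from the columns defined above:

```python
# Collapse the six 0/1 flags into one binary label: 1 = toxic, 0 = clean.
# 'label' is an assumed/illustrative column name, not part of the original card.
def to_binary(example):
    example['label'] = int(any(example[c] for c in column_names))
    return example

train_dataset = train_dataset.map(to_binary)
val_dataset = val_dataset.map(to_binary)
```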

## Caution

This dataset contains comments that are toxic in nature. Please use it appropriately.

## Citation

```bibtex
@misc{jigsaw-toxic-comment-classification-challenge,
  author = {cjadams, Jeffrey Sorensen, Julia Elliott, Lucas Dixon, Mark McDonald, nithum, Will Cukierski},
  title = {Toxic Comment Classification Challenge},
  publisher = {Kaggle},
  year = {2017},
  url = {https://kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge}
}
```