---
dataset_info:
  features:
    - name: comment_id
      dtype: int64
    - name: comment_text
      dtype: string
    - name: Sub1_Toxic
      dtype: int64
    - name: Sub2_Engaging
      dtype: int64
    - name: Sub3_FactClaiming
      dtype: int64
  splits:
    - name: train
      num_bytes: 733617
      num_examples: 3244
    - name: test
      num_bytes: 229587
      num_examples: 944
  download_size: 564666
  dataset_size: 963204
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
license: mit
task_categories:
  - text-classification
language:
  - de
pretty_name: 'DeTox at GermEval 2021: Fine-grained Comment Classification'
size_categories:
  - 1K<n<10K
---

# Dataset for DeTox at GermEval 2021: Fine-grained Comment Classification

The dataset has a train/test split and three binary labels for each comment: `Sub1_Toxic`, `Sub2_Engaging`, and `Sub3_FactClaiming`.

```python
DatasetDict({
    train: Dataset({
        features: ['comment_id', 'comment_text', 'Sub1_Toxic', 'Sub2_Engaging', 'Sub3_FactClaiming'],
        num_rows: 3244
    })
    test: Dataset({
        features: ['comment_id', 'comment_text', 'Sub1_Toxic', 'Sub2_Engaging', 'Sub3_FactClaiming'],
        num_rows: 944
    })
})
```

## Citation information

This dataset is based on the following work by Schütz et al.:

```bibtex
@inproceedings{schutz-etal-2021-detox,
  title = {{{DeTox}} at {{GermEval}} 2021: {{Toxic}} Comment Classification},
  booktitle = {Proceedings of the {{GermEval}} 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments},
  author = {Schütz, Mina and Demus, Christoph and Pitz, Jonas and Probol, Nadine and Siegel, Melanie and Labudde, Dirk},
  editor = {Risch, Julian and Stoll, Anke and Wilms, Lena and Wiegand, Michael},
  date = {2021-09},
  pages = {54--61},
  publisher = {Association for Computational Linguistics},
  location = {Duesseldorf, Germany},
  url = {https://aclanthology.org/2021.germeval-1.8},
  abstract = {In this work, we present our approaches on the toxic comment classification task (subtask 1) of the GermEval 2021 Shared Task. For this binary task, we propose three models: a German BERT transformer model; a multilayer perceptron, which was first trained in parallel on textual input and 14 additional linguistic features and then concatenated in an additional layer; and a multilayer perceptron with both feature types as input. We enhanced our pre-trained transformer model by re-training it with over 1 million tweets and fine-tuned it on two additional German datasets of similar tasks. The embeddings of the final fine-tuned German BERT were taken as the textual input features for our neural networks. Our best models on the validation data were both neural networks, however our enhanced German BERT gained with a F1-score = 0.5895 a higher prediction on the test data.},
}
```