---
license: openrail++
task_categories:
- text-generation
language:
- ru
size_categories:
- 1K<n<10K
tags:
- textdetox
---
# ParaDetox: Text Detoxification with Parallel Data (Russian)
This repository describes the Russian ParaDetox dataset -- the first parallel corpus for the detoxification task -- as well as models for the detoxification of Russian texts.
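The parallel pairs can be loaded with the 🤗 `datasets` library. A minimal sketch: the dataset id and the column names below are assumptions for illustration -- use this repository's actual Hub id and check the dataset preview for the exact schema.

```python
def load_ru_paradetox(split: str = "train"):
    """Load the toxic/neutral parallel pairs (downloads on first call).

    The dataset id "s-nlp/ru_paradetox" is an assumption -- replace it
    with this repository's actual Hub id if it differs.
    """
    # Lazy import keeps this sketch importable without `datasets` installed.
    from datasets import load_dataset

    return load_dataset("s-nlp/ru_paradetox", split=split)

# Each row pairs a toxic source sentence with a neutral paraphrase;
# illustrative (assumed) column names:
# {"toxic_comment": "...", "neutral_comment": "..."}
```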
## 📰 Updates
- **[2025]** NOW OPEN: the TextDetox CLEF 2025 shared task, covering even more -- 15 languages! website 🤗Starter Kit
- **[2025]** COLING 2025: Daryna Dementieva, Nikolay Babakov, Amit Ronen, Abinew Ali Ayele, Naquee Rizwan, Florian Schneider, Xintong Wang, Seid Muhie Yimam, Daniil Alekhseevich Moskovskiy, Elisei Stakovskii, Eran Kaufman, Ashraf Elnagar, Animesh Mukherjee, and Alexander Panchenko. 2025. Multilingual and Explainable Text Detoxification with Parallel Corpora. In Proceedings of the 31st International Conference on Computational Linguistics, pages 7998–8025, Abu Dhabi, UAE. Association for Computational Linguistics. pdf
- **[2024]** The multilingual TextDetox shared task at CLEF 2024 covers 9 languages!
- **[2022]** The first ParaDetox work, for English, was presented at ACL 2022!
## ParaDetox Collection Pipeline

The ParaDetox dataset was collected via the Toloka.ai crowdsourcing platform in three steps:
- Task 1: Generation of Paraphrases: The first crowdsourcing task asks users to eliminate toxicity in a given sentence while keeping the content.
- Task 2: Content Preservation Check: We show users the generated paraphrases along with their original variants and ask them to indicate if they have close meanings.
- Task 3: Toxicity Check: Finally, we check if the workers succeeded in removing toxicity.
All these steps ensure high data quality while keeping the collection process automated. For more details, please refer to the original paper.
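The three-stage filtering above can be sketched as a simple acceptance function. This is a minimal illustration with hypothetical majority-vote aggregation, not the exact Toloka quality-control setup:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    toxic: str                  # original toxic sentence (Task 1 input)
    paraphrase: str             # worker-written detoxified paraphrase
    content_votes: list[bool]   # Task 2: "do the texts have close meanings?"
    toxicity_votes: list[bool]  # Task 3: "is the paraphrase still toxic?"

def majority(votes: list[bool]) -> bool:
    """True if strictly more than half of the annotators voted yes."""
    return sum(votes) * 2 > len(votes)

def accept(c: Candidate) -> bool:
    """Keep a pair only if content is preserved and toxicity is removed."""
    return majority(c.content_votes) and not majority(c.toxicity_votes)

pairs = [
    Candidate("toxic text", "neutral text", [True, True, False], [False, False, False]),
    Candidate("toxic text", "still rude", [True, True, True], [True, True, False]),
]
kept = [c for c in pairs if accept(c)]
# Only the first pair survives: it preserves content and is judged non-toxic.
```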
## Detoxification model
The first seq2seq SOTA for the detoxification task in Russian -- a ruT5 (base) model fine-tuned on the Russian ParaDetox dataset -- is released in a HuggingFace🤗 repository here.
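Since the released checkpoint is a fine-tuned ruT5 seq2seq model, it can be used with the standard `transformers` generation API. A minimal sketch: the model id below is an assumption -- check the Hub page linked above for the exact identifier.

```python
def detoxify(text: str, model_name: str = "s-nlp/ruT5-base-detox") -> str:
    """Rewrite a toxic Russian sentence into a neutral paraphrase.

    `model_name` is an assumption -- substitute the exact id of the
    released checkpoint from the Hub page.
    """
    # Lazy imports keep this sketch importable without transformers installed.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```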
We also release the toxicity classifier 🤗here.
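The classifier can be called through a standard `transformers` text-classification pipeline. A minimal sketch: the model id and the label name `"toxic"` are assumptions -- verify both on the classifier's Hub page.

```python
def toxicity_score(
    text: str, model_name: str = "s-nlp/russian_toxicity_classifier"
) -> float:
    """Return the probability that `text` is toxic.

    Both `model_name` and the "toxic" label name are assumptions --
    check the classifier's model card for the exact values.
    """
    # Lazy import keeps this sketch importable without transformers installed.
    from transformers import pipeline

    clf = pipeline("text-classification", model=model_name, top_k=None)
    scores = {d["label"]: d["score"] for d in clf(text)[0]}
    return scores.get("toxic", 0.0)
```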
**[Old versions]** You can also check out our demo and Telegram bot.
## Citation
```bibtex
@article{Dementieva2022RUSSE2022FO,
  title = {{RUSSE-2022: Findings of the First Russian Detoxification Shared Task Based on Parallel Corpora}},
  author = {Daryna Dementieva and Varvara Logacheva and Irina Nikishina and Alena Fenogenova and David Dale and I. Krotova and Nikita Semenov and Tatiana Shavrina and Alexander Panchenko},
  year = 2022,
  journal = {COMPUTATIONAL LINGUISTICS AND INTELLECTUAL TECHNOLOGIES},
  url = {https://api.semanticscholar.org/CorpusID:253169495}
}
```
and
```bibtex
@inproceedings{dementieva-etal-2024-multiparadetox,
  title = "{M}ulti{P}ara{D}etox: Extending Text Detoxification with Parallel Data to New Languages",
  author = "Dementieva, Daryna and
    Babakov, Nikolay and
    Panchenko, Alexander",
  editor = "Duh, Kevin and
    Gomez, Helena and
    Bethard, Steven",
  booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)",
  month = jun,
  year = "2024",
  address = "Mexico City, Mexico",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2024.naacl-short.12",
  pages = "124--140",
  abstract = "Text detoxification is a textual style transfer (TST) task where a text is paraphrased from a toxic surface form, e.g. featuring rude words, to the neutral register. Recently, text detoxification methods found their applications in various task such as detoxification of Large Language Models (LLMs) (Leong et al., 2023; He et al., 2024; Tang et al., 2023) and toxic speech combating in social networks (Deng et al., 2023; Mun et al., 2023; Agarwal et al., 2023). All these applications are extremely important to ensure safe communication in modern digital worlds. However, the previous approaches for parallel text detoxification corpora collection{---}ParaDetox (Logacheva et al., 2022) and APPADIA (Atwell et al., 2022){---}were explored only in monolingual setup. In this work, we aim to extend ParaDetox pipeline to multiple languages presenting MultiParaDetox to automate parallel detoxification corpus collection for potentially any language. Then, we experiment with different text detoxification models{---}from unsupervised baselines to LLMs and fine-tuned models on the presented parallel corpora{---}showing the great benefit of parallel corpus presence to obtain state-of-the-art text detoxification models for any language.",
}
```
## Dataset Card and Paper Contact
If you find any issue, do not hesitate to report it via GitHub Issues.
For any questions, please contact: Daryna Dementieva