---
title: README
emoji: πŸ‘
colorFrom: red
colorTo: red
sdk: static
pinned: false
---

This repository contains the best-performing sexism and hate speech detection models from the following paper:

Sen, I., Assenmacher, D., Samory, M., Augenstein, I., van der Aalst, W., & Wagner, C. (2023). People Make Better Edits: Measuring the Efficacy of LLM-Generated Counterfactually Augmented Data for Harmful Language Detection. arXiv preprint arXiv:2311.01270. To appear at EMNLP 2023.
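
Below is a minimal sketch of how one of the released classifiers could be loaded with the Hugging Face `transformers` pipeline. The model identifier is a placeholder (an assumption, not taken from this README); substitute the actual Hub repository id of the sexism or hate speech model you want to use.

```python
# Minimal usage sketch (assumes the models are hosted on the Hugging Face Hub).
# "<org>/<model-name>" is a placeholder id, not the actual model repository.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="<org>/<model-name>",  # replace with the released model's Hub id
)

# Classify a single input text; returns a list of {label, score} dicts.
print(classifier("Example input text to classify."))
```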