---
language:
- zh
license: cc-by-nc-4.0
size_categories:
- 10K<n<100K
task_categories:
- text-classification
---
# ChineseHarm-bench

**A Chinese Harmful Content Detection Benchmark**

⚠️ **WARNING**: This project and the associated data contain content that may be toxic, offensive, or disturbing. Use responsibly and with discretion.
Project • Paper • Hugging Face
## 🌟 Benchmark
This folder contains the ChineseHarm-Bench.

- `bench.json` is the full benchmark combining all categories; see the loading sketch after this list.
- The other files (e.g., `低俗色情.json`, "vulgar and pornographic"; `欺诈.json`, "fraud") are category-specific subsets.
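If you use the Hugging Face `datasets` library, a minimal loading sketch follows. The file names come from the list above; treating each JSON file as its own data file is an assumption about the release layout.

```python
from datasets import load_dataset

# Load the full benchmark from the raw JSON file; load_dataset
# places raw files under a default "train" split.
bench = load_dataset("json", data_files="bench.json")["train"]

# Load a single category subset, e.g. the fraud file (欺诈.json).
fraud = load_dataset("json", data_files="欺诈.json")["train"]

print(len(bench), len(fraud))
```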
Each file is a list of examples with:

- `"文本"` ("text"): the input Chinese text
- `"标签"` ("label"): the ground-truth label
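For quick inspection without extra dependencies, a plain-Python sketch like the following should work, assuming each file is a top-level JSON array of objects with exactly the two fields above (the label tally is only illustrative):

```python
import json
from collections import Counter

# Load the full benchmark (assumed to be a top-level JSON array;
# see the field descriptions above).
with open("bench.json", encoding="utf-8") as f:
    examples = json.load(f)

print(f"Loaded {len(examples)} examples")

# Tally the ground-truth labels ("标签"); the exact label set is
# not specified here, so check it against the data itself.
label_counts = Counter(ex["标签"] for ex in examples)
for label, count in label_counts.most_common():
    print(f"{label}: {count}")

# Inspect a single example: "文本" holds the input text.
first = examples[0]
print(first["文本"], "->", first["标签"])
```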
## 🚩 Ethics Statement
We obtained all data with proper authorization from the respective data-owning organizations and signed the necessary agreements.
The benchmark is released under the CC BY-NC 4.0 license. All datasets have been anonymized and reviewed by the Institutional Review Board (IRB) of the data provider to ensure privacy protection.
Moreover, we categorically denounce any malicious misuse of this benchmark and are committed to ensuring that its development and use consistently align with human ethical principles.
## Acknowledgement
We gratefully acknowledge Tencent for providing the dataset and LLaMA-Factory for the training codebase.
## 🚩 Citation
Please cite our repository if you use ChineseHarm-bench in your work. Thanks!
```bibtex
@misc{liu2025chineseharmbenchchineseharmfulcontent,
  title={ChineseHarm-Bench: A Chinese Harmful Content Detection Benchmark},
  author={Kangwei Liu and Siyuan Cheng and Bozhong Tian and Xiaozhuan Liang and Yuyang Yin and Meng Han and Ningyu Zhang and Bryan Hooi and Xi Chen and Shumin Deng},
  year={2025},
  eprint={2506.10960},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2506.10960},
}
```