INTRODUCTION_TEXT = """
Welcome to the ChineseSafe Leaderboard!
On this leaderboard, we share the evaluation results of LLMs obtained by developing a brand new content moderation benchmark for Chinese. 🎉🎉🎉
# Dataset
To evaluate the safety risks of large language models, we present ChineseSafe, a Chinese safety benchmark to facilitate research
on the content safety of LLMs for Chinese (Mandarin).
To align with the regulations for Chinese Internet content moderation, ChineseSafe contains 205,034 examples
across 4 classes and 10 sub-classes of safety issues. For Chinese contexts, we add several special types of illegal content: political sensitivity, pornography,
and variant/homophonic words. In particular, the benchmark is constructed as a balanced dataset, containing safe and unsafe data collected from internet resources and public datasets [1,2,3].
We hope the evaluation can provide a guideline for developers and researchers to facilitate the safety of LLMs.
The leaderboard is under construction and maintained by Hongxin Wei's research group at SUSTech.
Comments, issues, contributions, and collaborations are all welcomed!
Email: weihx@sustech.edu.cn
""" # noqa
# We will release the technical report in the near future.
METRICS_TEXT = """
# Metrics
We report the results with five metrics: overall accuracy, plus precision and recall for both safe and unsafe content.
In particular, the results are shown in metric/std format in the table,
where std is the standard deviation of the results across runs with different random seeds.
""" # noqa
EVALUTION_TEXT = """
# Evaluation
We evaluate the models using two methods: perplexity (multiple choice) and generation.
For perplexity, we select the label with the lowest perplexity as the prediction.
For generation, we use the content generated by the model to make the prediction.
The following are the results of the evaluation. 👇👇👇
""" # noqa
REFERENCE_TEXT = """
# References
[1] Sun H, Zhang Z, Deng J, et al. Safety assessment of chinese large language models[J]. arXiv preprint arXiv:2304.10436, 2023.
[2] https://github.com/konsheng/Sensitive-lexicon
[3] https://www.cluebenchmarks.com/static/pclue.html
"""
ACKNOWLEDGEMENTS_TEXT = """
# Acknowledgements
This research is supported by the Shenzhen Fundamental Research Program (Grant No.
JCYJ20230807091809020). We gratefully acknowledge the support of "Data+AI" Data Intelligent Laboratory, a joint lab constructed by Deepexi and the Department of Statistics and Data Science
at Southern University of Science and Technology.
"""
CONTACT_TEXT = """
# Contact
The leaderboard is under construction and maintained by Hongxin Wei's research group at SUSTech.
We will release the technical report in the near future.
Comments, issues, contributions, and collaborations are all welcomed!
Email: weihx@sustech.edu.cn
"""