Modified page info.
utils.py CHANGED
@@ -39,7 +39,7 @@ For detailed information about the dataset, visit our page on Hugging Face: htt
 
 If you are interested in replicating these results or wish to evaluate your models using our dataset, access our evaluation scripts available on GitHub: https://github.com/dnaihao/Chumor-dataset.
 
-If you would like to learn more details about our dataset, please check out our paper: https://arxiv.org/
+If you would like to learn more details about our dataset, please check out our paper: https://arxiv.org/pdf/2406.12754; https://arxiv.org/pdf/2412.17729.
 
 Below you can find the accuracies of different models tested on this dataset.
 
@@ -69,6 +69,16 @@ CITATION_BUTTON_TEXT = r"""
 journal={arXiv preprint arXiv:2406.12754},
 year={2024}
 }
+
+@misc{he2024chumor20benchmarkingchinese,
+title={Chumor 2.0: Towards Benchmarking Chinese Humor Understanding},
+author={Ruiqi He and Yushu He and Longju Bai and Jiarui Liu and Zhenjie Sun and Zenghao Tang and He Wang and Hanchen Xia and Rada Mihalcea and Naihao Deng},
+year={2024},
+eprint={2412.17729},
+archivePrefix={arXiv},
+primaryClass={cs.CL},
+url={https://arxiv.org/abs/2412.17729},
+}
 """
 
 SUBMIT_INTRODUCTION = """# Submit on MMLU-Pro Leaderboard Introduction
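For context, constants such as CITATION_BUTTON_TEXT and SUBMIT_INTRODUCTION defined in utils.py are typically rendered by the Space's Gradio app. The sketch below is an assumption for illustration only, not code from this repository: the layout, component choices, and the standalone inlined constants are hypothetical, and the real Space imports these strings from utils.py.

# Minimal sketch (assumed, not taken from this Space) of how the utils.py
# string constants might be surfaced in a Gradio leaderboard app.
import gradio as gr

# Inlined placeholders so the snippet runs standalone; the real values live in utils.py.
SUBMIT_INTRODUCTION = "# Submit on MMLU-Pro Leaderboard Introduction"
CITATION_BUTTON_TEXT = r"""@misc{he2024chumor20benchmarkingchinese,
title={Chumor 2.0: Towards Benchmarking Chinese Humor Understanding},
year={2024},
eprint={2412.17729},
archivePrefix={arXiv},
url={https://arxiv.org/abs/2412.17729},
}"""

with gr.Blocks() as demo:
    # Introduction text shown at the top of the submission section.
    gr.Markdown(SUBMIT_INTRODUCTION)
    # Citation kept in a collapsible accordion; users copy the BibTeX
    # from a read-only textbox with a copy button.
    with gr.Accordion("Citation", open=False):
        gr.Textbox(
            value=CITATION_BUTTON_TEXT,
            label="Copy the BibTeX to cite this dataset",
            lines=10,
            show_copy_button=True,
        )

if __name__ == "__main__":
    demo.launch()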