from dataclasses import dataclass
from enum import Enum


@dataclass
class Task:
    benchmark: str
    metric: str
    col_name: str


# Select your tasks here
# ---------------------------------------------------
class Tasks(Enum):
    # task_key in the json file, metric_key in the json file, name to display in the leaderboard
    # task0 = Task("realtoxicityprompts", "perspective_api_toxicity_score", "Toxicity")
    task0 = Task("toxigen", "acc_norm", "Synthetic Toxicity")
    # task2 = Task("logiqa", "acc_norm", "LogiQA")


NUM_FEWSHOT = 0  # Change with your few shot
# ---------------------------------------------------
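# Illustrative sketch (not part of the template's logic): how a harness results
# JSON would map onto the Task fields above. The file layout shown here is an
# assumption; check an actual results file for the real structure.
if __name__ == "__main__":
    _example = {"results": {"toxigen": {"acc_norm": 0.43}}}
    for _task in Tasks:
        _t = _task.value
        print(f"{_t.col_name}: {_example['results'][_t.benchmark][_t.metric]}")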
# Your leaderboard name
TITLE = """<h1 align="center" id="space-title">Toxicity leaderboard</h1>"""

# What does your leaderboard evaluate?
INTRODUCTION_TEXT = """
# How "toxic" is the language that might be generated from an LLM?
## This leaderboard addresses this question by applying well-known toxicity evaluation approaches:

**Toxicity:** Uses Allen AI's [Real Toxicity Prompts](https://huggingface.co/datasets/allenai/real-toxicity-prompts) to prompt sentence generations and Google's [Perspective API](https://www.perspectiveapi.com) to score their toxicity. [[Source](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/realtoxicityprompts)]

**Synthetic Toxicity:** Uses Microsoft's machine-generated ("synthetic") [dataset for hate speech detection, ToxiGen](https://github.com/microsoft/TOXIGEN), and its corresponding classifier to score the toxicity of generations. [[Source](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/toxigen)]
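
For illustration, here is roughly how a single generation can be scored with the Perspective API (a sketch only; `API_KEY` is a placeholder, and the harness handles this step internally):
```python
import requests

API_KEY = "<your Perspective API key>"  # placeholder
url = f"https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key={API_KEY}"
payload = {
    "comment": {"text": "a model generation to score"},
    "requestedAttributes": {"TOXICITY": {}},
}
response = requests.post(url, json=payload).json()
print(response["attributeScores"]["TOXICITY"]["summaryScore"]["value"])
```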
""" | |
# Which evaluations are you running? How can people reproduce what you have?
LLM_BENCHMARKS_TEXT = f"""
## How it works

## Reproducibility
To reproduce our results, here are the commands you can run:
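
For example, with the EleutherAI lm-evaluation-harness installed (`pip install lm-eval`), something along these lines should reproduce the Synthetic Toxicity score (a sketch for harness v0.4+; argument names can differ between versions):
```python
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=<your model name>,revision=main",
    tasks=["toxigen"],
    num_fewshot=0,
)
print(results["results"]["toxigen"])
```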
""" | |
EVALUATION_QUEUE_TEXT = """
## Some good practices before submitting a model

### 1) Make sure you can load your model and tokenizer using AutoClasses:
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

revision = "main"  # or the specific commit hash you want evaluated
config = AutoConfig.from_pretrained("your model name", revision=revision)
model = AutoModel.from_pretrained("your model name", revision=revision)
tokenizer = AutoTokenizer.from_pretrained("your model name", revision=revision)
```
If this step fails, follow the error messages to debug your model before submitting it. It's likely your model has been improperly uploaded.

Note: make sure your model is public!

Note: if your model needs `trust_remote_code=True`, we do not support this option yet, but we are working on adding it. Stay posted!

### 2) Convert your model weights to [safetensors](https://huggingface.co/docs/safetensors/index)
It's a new format for storing weights which is safer and faster to load and use. It will also allow us to add the number of parameters of your model to the `Extended Viewer`!
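
One possible conversion path, assuming your model already loads with AutoClasses (a sketch; the output directory name is arbitrary):
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("your model name")
# save_pretrained with safe_serialization=True writes .safetensors weights
model.save_pretrained("your-model-safetensors", safe_serialization=True)
```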

### 3) Make sure your model has an open license!
This is a leaderboard for Open LLMs, and we'd love for as many people as possible to know they can use your model 🤗

### 4) Fill up your model card
When we add extra information about models to the leaderboard, it will be automatically taken from the model card.

## In case of model failure
If your model is displayed in the `FAILED` category, its execution stopped.
Make sure you have followed the above steps first.
If everything is done, check that you can launch the EleutherAI Harness on your model locally, using the command above without modification (you can add `--limit` to limit the number of examples per task).
"""
CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
CITATION_BUTTON_TEXT = r"""@misc{toxicity-leaderboard,
  author = {Margaret Mitchell and Clémentine Fourrier},
  title = {Toxicity Leaderboard},
  year = {2024},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard}},
}
@software{eval-harness,
  author = {Gao, Leo and
            Tow, Jonathan and
            Biderman, Stella and
            Black, Sid and
            DiPofi, Anthony and
            Foster, Charles and
            Golding, Laurence and
            Hsu, Jeffrey and
            McDonell, Kyle and
            Muennighoff, Niklas and
            Phang, Jason and
            Reynolds, Laria and
            Tang, Eric and
            Thite, Anish and
            Wang, Ben and
            Wang, Kevin and
            Zou, Andy},
  title = {A framework for few-shot language model evaluation},
  month = sep,
  year = 2021,
  publisher = {Zenodo},
  version = {v0.0.1},
  doi = {10.5281/zenodo.5371628},
  url = {https://doi.org/10.5281/zenodo.5371628},
}
@article{gehman2020realtoxicityprompts,
  title = {RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models},
  author = {Gehman, Samuel and Gururangan, Suchin and Sap, Maarten and Choi, Yejin and Smith, Noah A.},
  journal = {arXiv preprint arXiv:2009.11462},
  year = {2020}
}
@inproceedings{hartvigsen2022toxigen,
  title = {{T}oxi{G}en: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection},
  author = {Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece},
  booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
  year = {2022}
}
"""