TITLE = """<h1 align="center" id="space-title">🤗 Open LLM-Perf Leaderboard 🏋️</h1>"""
INTRODUCTION_TEXT = f"""
The 🤗 Open LLM-Perf Leaderboard 🏋️ benchmarks the performance (latency & throughput) of Large Language Models (LLMs) on different hardware and backends using [Optimum-Benchmark](https://github.com/huggingface/optimum-benchmark) and [Optimum](https://github.com/huggingface/optimum) flavors.
Anyone from the community can submit a model or a hardware+backend configuration for automated benchmarking:
- Model submissions should be made in the [🤗 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard); they will be added to the 🤗 Open LLM-Perf Leaderboard 🏋️ once they are publicly available.
- Hardware+backend submissions should be made in the 🤗 Open LLM-Perf Leaderboard 🏋️ [community discussions](https://huggingface.co/spaces/optimum/llm-perf-leaderboard/discussions); an automated process for direct submissions will be set up soon.
"""
CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
CITATION_BUTTON_TEXT = """@misc{open-llm-perf-leaderboard,
  author = {Ilyas Moutawwakil},
  title = {Open LLM-Perf Leaderboard},
  year = {2023},
  publisher = {Hugging Face},
  howpublished = "\\url{https://huggingface.co/spaces/optimum/open-llm-perf-leaderboard}",
}
@software{optimum-benchmark,
  author = {Ilyas Moutawwakil},
  title = {A framework for benchmarking the performance of Transformers models},
  url = {https://github.com/huggingface/optimum-benchmark},
}
"""