jwilles committed
Commit 50ce699 · 1 Parent(s): 84e21ef

Remove placeholder text

Files changed (2)
  1. app.py +2 -2
  2. src/about.py +21 -23
app.py CHANGED
@@ -113,8 +113,8 @@ with demo:
         with gr.TabItem("About", elem_classes="llm-benchmark-tab-table", id=2):
             gr.Markdown(ABOUT_TEXT, elem_classes="markdown-text", sanitize_html=False)
 
-        with gr.TabItem("Reproducibility", elem_classes="llm-benchmark-tab-table", id=3):
-            gr.Markdown(REPRODUCIBILITY_TEXT, elem_classes="markdown-text", sanitize_html=False)
+        # with gr.TabItem("Reproducibility", elem_classes="llm-benchmark-tab-table", id=3):
+        #     gr.Markdown(REPRODUCIBILITY_TEXT, elem_classes="markdown-text", sanitize_html=False)
 
 assets = [black_logo_path, white_logo_path]
 demo.launch(allowed_paths=assets)
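For context, a minimal sketch of how the tabs touched by this hunk fit into app.py; the `gr.Tabs` wrapper, the `src.about` import, and the logo paths are assumptions, not part of the diff:

```python
# Minimal sketch of the surrounding app.py structure (assumed; only the
# "About" tab and the launch call appear in the diff above).
import gradio as gr

from src.about import ABOUT_TEXT  # assumed import; REPRODUCIBILITY_TEXT is no longer rendered

demo = gr.Blocks()

with demo:
    with gr.Tabs():  # hypothetical wrapper around the TabItems
        with gr.TabItem("About", elem_classes="llm-benchmark-tab-table", id=2):
            gr.Markdown(ABOUT_TEXT, elem_classes="markdown-text", sanitize_html=False)

# Logo files are whitelisted so Gradio can serve them as static assets.
black_logo_path = "assets/logo-black.png"  # hypothetical paths
white_logo_path = "assets/logo-white.png"
assets = [black_logo_path, white_logo_path]
demo.launch(allowed_paths=assets)
```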
src/about.py CHANGED
@@ -1,4 +1,3 @@
-
 # Your leaderboard name
 TITLE = """<h1 align="center" id="space-title">Evaluation Leaderboard</h1>"""
 
@@ -33,20 +32,20 @@ These benchmarks assess fundamental reasoning and knowledge capabilities of mode
 
 <div class="benchmark-table-container">
 
-| Benchmark | Description | Domain |
-|--------------------|----------------------------------------------------------------------------------|-----------------------------------------------|
-| **ARC-Easy** / **ARC-Challenge** | Multiple-choice science questions measuring scientific & commonsense reasoning. | Example |
-| **DROP** | Reading comprehension benchmark emphasizing discrete reasoning steps. | Example |
-| **WinoGrande** | Commonsense reasoning challenge focused on co-reference resolution. | Example |
-| **GSM8K** | Grade-school math word problems testing arithmetic & multi-step reasoning. | Example |
-| **HellaSwag** | Commonsense inference task centered on action completion. | Example |
-| **HumanEval** | Evaluates code generation and reasoning in a programming context. | Example |
-| **IFEval** | Specialized benchmark for incremental formal reasoning. | Example |
-| **IFEval** | Specialized benchmark for incremental formal reasoning. | Example |
-| **MATH** | High school-level math questions requiring detailed solutions. | Example |
-| **MMLU** / **MMLU-Pro**| Multi-subject multiple-choice tests of advanced knowledge. | Example |
-| **GPQA-Diamond** | Question-answering benchmark assessing deeper reasoning & knowledge linking. | Example |
-| **MMMU** (Multi-Choice / Open-Ended) | Multilingual & multi-domain tasks testing structured & open responses. | Example |
+| Benchmark | Description |
+|--------------------|----------------------------------------------------------------------------------|
+| **ARC-Easy** / **ARC-Challenge** | Multiple-choice science questions measuring scientific & commonsense reasoning. |
+| **DROP** | Reading comprehension benchmark emphasizing discrete reasoning steps. |
+| **WinoGrande** | Commonsense reasoning challenge focused on co-reference resolution. |
+| **GSM8K** | Grade-school math word problems testing arithmetic & multi-step reasoning. |
+| **HellaSwag** | Commonsense inference task centered on action completion. |
+| **HumanEval** | Evaluates code generation and reasoning in a programming context. |
+| **IFEval** | Specialized benchmark for incremental formal reasoning. |
+| **IFEval** | Specialized benchmark for incremental formal reasoning. |
+| **MATH** | High school-level math questions requiring detailed solutions. |
+| **MMLU** / **MMLU-Pro**| Multi-subject multiple-choice tests of advanced knowledge. |
+| **GPQA-Diamond** | Question-answering benchmark assessing deeper reasoning & knowledge linking. |
+| **MMMU** (Multi-Choice / Open-Ended) | Multilingual & multi-domain tasks testing structured & open responses. |
 </div>
 
 ### 🚀 Agentic Benchmarks
@@ -55,14 +54,13 @@ These benchmarks go beyond basic reasoning and evaluate more advanced, autonomou
 
 <div class="benchmark-table-container">
 
-| Benchmark | Description | Key Skills |
-|-----------------------|-----------------------------------------------------------------------------|-------------------------------------------------|
-| **GAIA** | Evaluates autonomous reasoning, planning, problem-solving, & multi-turn interactions. | Example |
-| [**InterCode-CTF**](https://ukgovernmentbeis.github.io/inspect_evals/evals/cybersecurity/in_house_ctf/) | Capture-the-flag challenge focused on code interpretation & debugging. | Example |
-| **GDM-In-House-CTF** | Capture-the-flag challenge testing web application security skills. | Example |
-| **AgentHarm** / **AgentHarm-Benign** | Measures harmfulness of LLM agents (and benign behavior baseline). | Example |
-| **SWE-Bench** | Tests AI agent ability to solve software engineering tasks. | Example |
-
+| Benchmark | Description |
+|-----------------------|----------------------------------------------------------------------------|
+| **GAIA** | Evaluates autonomous reasoning, planning, problem-solving, & multi-turn interactions. |
+| [**InterCode-CTF**](https://ukgovernmentbeis.github.io/inspect_evals/evals/cybersecurity/in_house_ctf/) | Capture-the-flag challenge focused on code interpretation & debugging. |
+| **GDM-In-House-CTF** | Capture-the-flag challenge testing web application security skills. |
+| **AgentHarm** / **AgentHarm-Benign** | Measures harmfulness of LLM agents (and benign behavior baseline). |
+| **SWE-Bench** | Tests AI agent ability to solve software engineering tasks. |
 </div>
 """
 
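The hunks above only show the table portions of ABOUT_TEXT. A rough sketch of how src/about.py lays out these module-level string constants after this commit; the first section heading and the prose between sections are abbreviated assumptions, not taken from the diff:

```python
# Rough, abbreviated sketch of src/about.py after this commit.
# The real ABOUT_TEXT contains additional prose and the full benchmark tables.

# Your leaderboard name
TITLE = """<h1 align="center" id="space-title">Evaluation Leaderboard</h1>"""

ABOUT_TEXT = """
### Core Benchmarks

<div class="benchmark-table-container">

| Benchmark | Description |
|-----------|-------------|
| **ARC-Easy** / **ARC-Challenge** | Multiple-choice science questions measuring scientific & commonsense reasoning. |

</div>

### 🚀 Agentic Benchmarks

<div class="benchmark-table-container">

| Benchmark | Description |
|-----------|-------------|
| **GAIA** | Evaluates autonomous reasoning, planning, problem-solving, & multi-turn interactions. |

</div>
"""
```

Both constants are plain strings, so app.py can render them directly with `gr.Markdown(...)` as shown in the app.py hunk.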