Yotam-Perlitz committed
Commit d3905ad
Parent(s): f1c3da2
improve motivation
Signed-off-by: Yotam-Perlitz <[email protected]>
app.py
CHANGED
@@ -53,7 +53,7 @@ st.markdown(
    """
    The BenchBench leaderboard ranks benchmarks based on their agreement with the *Aggregate Benchmark* – a comprehensive, combined measure of existing benchmark results.
    \n
-    To achive
+    To achieve this, we scraped results from multiple benchmarks (citations below), allowing benchmark agreement testing against a wide range of benchmarks using a large set of models.
    \n
    BenchBench is for you if:
    """
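For intuition, the agreement being measured here is a correlation between the scores two benchmarks assign to the same set of models. A minimal sketch with hypothetical scores, using scipy; the leaderboard's own computation lives in the bat package and is not shown in this hunk, and the 'Per.' label in a later hunk presumably stands for Pearson:

```python
# Hypothetical scores for the same five models on two benchmarks.
from scipy.stats import kendalltau, pearsonr

bench_a = [0.71, 0.65, 0.80, 0.55, 0.60]
bench_b = [0.68, 0.66, 0.79, 0.50, 0.59]

tau, _ = kendalltau(bench_a, bench_b)  # rank agreement ('kendall' in the app)
r, _ = pearsonr(bench_a, bench_b)      # linear agreement (presumably 'pearson')
print(f"Kendall tau: {tau:.2f}, Pearson r: {r:.2f}")
```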
@@ -68,11 +68,11 @@ st.markdown(

st.markdown(
    """
-    In our work
+    In our work - [Benchmark Agreement Testing Done Right](https://arxiv.org/abs/2407.13696) and its [open-source repo](https://github.com/IBM/benchbench) -
    we standardize BAT and show the importance of its configurations, notably,
-    the benchmarks we compare to, and the models we use to compare with
+    the benchmarks we compare to, and the models we use to compare with (see sidebar).
    \n
-    We show that agreements are best
+    We also show that agreements are best represented by the relative agreement (Z Score) of each benchmark with the Aggregate benchmark, as presented in the leaderboard below.
    """
)

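The configurations mentioned here (which reference benchmarks to compare against and which models to include) map onto knobs of the bat package's `Config`. A minimal sketch using the parameter names quoted in the package walkthrough removed later in this diff; the values are illustrative, not the leaderboard's actual settings:

```python
from bat import Config

# Parameter names taken from the walkthrough removed further down in this commit;
# the values are placeholders for illustration only.
cfg = Config(
    exp_to_run="example",
    n_models_taken_list=[0],                # presumably: how many models enter each comparison
    model_select_strategy_list=["random"],  # presumably: how those models are selected
    n_exps=10,                              # number of repeated experiments
)
```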
@@ -325,7 +325,7 @@ reporter = Reporter()
z_scores = reporter.get_all_z_scores(agreements=agreements, aggragate_name="aggregate")
z_scores.drop(columns=["n_models_of_corr_with_agg"], inplace=True)

-corr_name = f"{'Kendall Tau' if corr_type=='kendall' else 'Per.'} Corr."
+corr_name = f"{'Kendall Tau' if corr_type=='kendall' else 'Per.'} Corr. w/ Agg"

z_scores["z_score"] = z_scores["z_score"].round(2)
z_scores["corr_with_agg"] = z_scores["corr_with_agg"].round(2)
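For context, a small self-contained sketch of how the rounded table from this hunk might be prepared for display. The data is hypothetical, and the final rename to `corr_name` is an assumption; the diff only shows the label string being built:

```python
import pandas as pd

corr_type = "kendall"  # toggled in the app's sidebar (assumed)
corr_name = f"{'Kendall Tau' if corr_type == 'kendall' else 'Per.'} Corr. w/ Agg"

# Hypothetical stand-in for the DataFrame returned by reporter.get_all_z_scores().
z_scores = pd.DataFrame({
    "scenario": ["bench_a", "bench_b", "bench_c"],
    "z_score": [1.2345, -0.4567, 0.0321],
    "corr_with_agg": [0.8123, 0.5678, 0.6543],
})

z_scores["z_score"] = z_scores["z_score"].round(2)
z_scores["corr_with_agg"] = z_scores["corr_with_agg"].round(2)

# Assumed display step: surface the correlation under its human-readable name.
print(z_scores.rename(columns={"corr_with_agg": corr_name}))
```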
@@ -699,6 +699,7 @@ with st.expander(label="Citations"):

st.subheader("Benchmark Report Card")

+st.markdown("Choose the benchmark for which you want to get a report.")

benchmarks = allbench.df["scenario"].unique().tolist()
index_to_use = 1

@@ -742,11 +743,6 @@ fig = px.scatter(
)
st.plotly_chart(fig, use_container_width=True)

-st.markdown(
-    "BenchBench-Leaderboard complements our study, where we analyzed over 40 prominent benchmarks and introduced standardized practices to enhance the robustness and validity of benchmark evaluations through the [BenchBench Python package](#). "
-    "The BenchBench-Leaderboard serves as a dynamic platform for benchmark comparison and is an essential tool for researchers and practitioners in the language model field aiming to select and utilize benchmarks effectively. "
-)
-
st.subheader("How did we get the Z Scores?", divider=True)

st.write(r"""
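The hunk above ends right where the app explains its Z scores, and that explanation is not shown in this diff. The standard relative-agreement reading of such a score (treat the app's exact definition as an assumption) is:

```latex
% z-score of benchmark i, relative to all N benchmarks' correlation with the Aggregate:
z_i = \frac{r_i - \mu}{\sigma},
\qquad
\mu = \frac{1}{N}\sum_{j=1}^{N} r_j,
\qquad
\sigma = \sqrt{\frac{1}{N}\sum_{j=1}^{N} \left(r_j - \mu\right)^2}
```

Here r_i would be benchmark i's correlation with the Aggregate benchmark (`corr_with_agg` in the hunk above), so a positive z score marks a benchmark that agrees with the Aggregate more than the average benchmark does.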
@@ -779,151 +775,65 @@ fig.update_layout(
# # Plot!
st.plotly_chart(fig, use_container_width=True)

+import streamlit as st
+
st.subheader("Why should you use the BenchBench Leaderboard?")

st.markdown(
    """
-
-
-
+    Benchmark Agreement Testing (BAT) is crucial for validating new benchmarks and understanding the relationships between existing ones.
+    However, current BAT practices often lack standardization and transparency, leading to inconsistent results and hindering reliable comparisons.
+    The BenchBench Leaderboard addresses these challenges by offering a **principled and data-driven approach to benchmark evaluation**.
+    Let's explore some of the key issues with current BAT practices:
    """
)

st.markdown(
    """
-    - **Lack of Standard Methodologies:**
+    - **Lack of Standard Methodologies:** BAT lacks standardized procedures for benchmark and model selection, hindering reproducibility and comparability across studies.
+    Researchers often make arbitrary choices, leading to results that are difficult to interpret and build upon.
    """
)

st.image(
    "images/motivation.png",
-    caption="
+    caption="**Example: Model Selection Impacts BAT Conclusions.** Kendall-tau correlations between the LMSys Arena benchmark and three others demonstrate how agreement varies significantly depending on the subset of models considered. This highlights the need for standardized model selection in BAT.",
    use_column_width=True,
)

st.markdown(
    """
-    - **Arbitrary Selection of Reference Benchmarks:**
+    - **Arbitrary Selection of Reference Benchmarks:** The choice of reference benchmarks in BAT is often subjective and lacks a clear rationale. Using different reference benchmarks can lead to widely varying agreement scores, making it difficult to draw robust conclusions about a target benchmark's validity.
    """
)
st.markdown(
    """
-    - **Inadequate Model Representation:** BAT
+    - **Inadequate Model Representation:** BAT often relies on a limited set of models that may not adequately represent the diversity of modern language models. This can lead to biased agreement scores that favor certain model types and fail to provide a comprehensive view of benchmark performance.
    """
)

st.image(
    "images/pointplot_granularity_matters.png",
-    caption="
+    caption="**Example: Agreement Varies with Model Range.** Mean correlation between benchmarks shows that agreement tends to increase with the number of models considered and is generally lower for closely ranked models (blue lines). This highlights the importance of considering multiple granularities in BAT.",
    use_column_width=True,
)

st.markdown(
    """
-    - **Overemphasis on Correlation Metrics:**
+    - **Overemphasis on Correlation Metrics:** BAT often relies heavily on correlation metrics without fully considering their limitations or the context of their application. While correlation can be informative, it's crucial to remember that high correlation doesn't automatically imply that benchmarks measure the same underlying construct.
    """
)

st.markdown(
    """
-
+    The BenchBench Leaderboard tackles these challenges by implementing a standardized and transparent approach to BAT, promoting consistency and facilitating meaningful comparisons between benchmarks.
+    By adopting the best practices embedded in the leaderboard, the research community can enhance the reliability and utility of benchmarks for evaluating and advancing language models.
    """
)


st.image(
    "images/ablations.png",
-    caption="
+    caption="**BenchBench's Standardized Approach Reduces Variance.** This ablation study demonstrates that following the best practices implemented in BenchBench significantly reduces the variance of BAT results, leading to more robust and reliable conclusions.",
    use_column_width=True,
)
-
-
-st.header("The BenchBench package")
-
-st.markdown("""
-### Overview
-
-The BAT package is designed to facilitate benchmark agreement testing for NLP models. It allows users to easily compare multiple models against various benchmarks and generate comprehensive reports on their agreement.
-
-### Installation
-
-To install the BAT package, you can use pip:
-
-```
-pip install bat-package
-```
-
-### Usage Example
-
-Below is a step-by-step example of how to use the BAT package to perform agreement testing.
-
-#### Step 1: Configuration
-
-First, set up the configuration for the tests:
-
-```python
-import pandas as pd
-from bat import Tester, Config, Benchmark, Reporter
-from bat.utils import get_holistic_benchmark
-
-cfg = Config(
-    exp_to_run="example",
-    n_models_taken_list=[0],
-    model_select_strategy_list=["random"],
-    n_exps=10
-)
-```
-
-#### Step 2: Fetch Model Names
-
-Fetch the names of the reference models to be used for scoring:
-
-```python
-tester = Tester(cfg=cfg)
-models_for_benchmark_scoring = tester.fetch_reference_models_names(
-    reference_benchmark=get_holistic_benchmark(), n_models=20
-)
-print(models_for_benchmark_scoring)
-```
-
-#### Step 3: Load and Prepare Benchmark
-
-Load a new benchmark and add an aggregate column:
-
-```python
-newbench_name = "fakebench"
-newbench = Benchmark(
-    pd.read_csv(f"src/bat/assets/{newbench_name}.csv"),
-    data_source=newbench_name,
-)
-newbench.add_aggregate(new_col_name=f"{newbench_name}_mwr")
-```
-
-#### Step 4: Agreement Testing
-
-Perform all-vs-all agreement testing on the new benchmark:
-
-```python
-newbench_agreements = tester.all_vs_all_agreement_testing(newbench)
-reporter = Reporter()
-reporter.draw_agreements(newbench_agreements)
-```
-
-#### Step 5: Extend and Clean Benchmark
-
-Extend the new benchmark with holistic data and clear repeated scenarios:
-
-```python
-allbench = newbench.extend(get_holistic_benchmark())
-allbench.clear_repeated_scenarios(source_to_keep=newbench_name)
-```
-
-#### Step 6: Comprehensive Agreement Testing
-
-Perform comprehensive agreement testing and visualize:
-
-```python
-all_agreements = tester.all_vs_all_agreement_testing(allbench)
-reporter.draw_agreements(all_agreements)
-```
-""")
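This commit drops the in-app package walkthrough quoted above. For convenience, the removed steps concatenate into a single end-to-end script; this is simply the quoted snippets joined in order (the "fakebench" CSV path is the walkthrough's own placeholder), not a verified run:

```python
# Condensed from the walkthrough removed in this commit (Steps 1-6).
import pandas as pd
from bat import Tester, Config, Benchmark, Reporter
from bat.utils import get_holistic_benchmark

# Step 1: configuration
cfg = Config(
    exp_to_run="example",
    n_models_taken_list=[0],
    model_select_strategy_list=["random"],
    n_exps=10,
)

# Step 2: reference models used for benchmark scoring
tester = Tester(cfg=cfg)
models_for_benchmark_scoring = tester.fetch_reference_models_names(
    reference_benchmark=get_holistic_benchmark(), n_models=20
)
print(models_for_benchmark_scoring)

# Step 3: load a new benchmark and add an aggregate column
newbench_name = "fakebench"
newbench = Benchmark(
    pd.read_csv(f"src/bat/assets/{newbench_name}.csv"),
    data_source=newbench_name,
)
newbench.add_aggregate(new_col_name=f"{newbench_name}_mwr")

# Step 4: all-vs-all agreement testing on the new benchmark
newbench_agreements = tester.all_vs_all_agreement_testing(newbench)
reporter = Reporter()
reporter.draw_agreements(newbench_agreements)

# Step 5: extend with the holistic benchmark and drop repeated scenarios
allbench = newbench.extend(get_holistic_benchmark())
allbench.clear_repeated_scenarios(source_to_keep=newbench_name)

# Step 6: comprehensive agreement testing and visualization
all_agreements = tester.all_vs_all_agreement_testing(allbench)
reporter.draw_agreements(all_agreements)
```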