Spaces:
Running
Running
File size: 16,670 Bytes
aeb6d58 cc8a66b aeb6d58 4161029 aeb6d58 86e1422 aeb6d58 86e1422 aeb6d58 ed92462 86e1422 aeb6d58 cc8a66b aeb6d58 cc8a66b aeb6d58 cc8a66b aeb6d58 cc8a66b 54696b7 cc8a66b ed92462 cc8a66b ed92462 cc8a66b aeb6d58 ae66088 aeb6d58 ae66088 aeb6d58 ae66088 aeb6d58 7cf92c8 aeb6d58 cc8a66b aeb6d58 cc8a66b aeb6d58 cc8a66b aeb6d58 cc8a66b aeb6d58 fbb1c3f aeb6d58 cc8a66b aeb6d58 cc8a66b aeb6d58 cc8a66b aeb6d58 cc8a66b aeb6d58 cc8a66b aeb6d58 |
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 |
import pandas as pd
import gradio as gr
import os
import re
import requests
import huggingface_hub
from huggingface_hub.utils._errors import EntryNotFoundError, RepositoryNotFoundError
from dotenv import load_dotenv
from matplotlib.colors import LinearSegmentedColormap
load_dotenv()
webhook_url = os.environ.get("WEBHOOK_URL")
file_name_list = [
'7b',
'3b',
'1b5',
]
sheet_name_list = [
'cr',
'bpc',
'bpb',
]
metric_list = [
'Compression Rate (%)',
'Bits Per Character (BPC)',
'Bits Per Byte (BPB)',
]
model_size_list = [
'~7B',
'~3B',
'~1.5B',
]
metric_to_sheet = {
'Compression Rate (%)': 'cr',
'Bits Per Character (BPC)': 'bpc',
'Bits Per Byte (BPB)': 'bpb',
}
model_size_to_file_name = {
'~7B': '7b',
'~3B': '3b',
'~1.5B': '1b5',
}
css = """
.gr-dataframe table {
table-layout: fixed;
width: 100%; /* Ensures the table fills its container */
}
.gr-dataframe th, .gr-dataframe td {
width: 100px; /* Set the exact width of each cell */
overflow: hidden; /* Ensures the content doesn't overflow */
text-overflow: ellipsis; /* Adds an ellipsis (...) if the text overflows */
white-space: nowrap; /* Keeps the content on a single line */
}
"""
about_md = """
# Uncheatable Eval
GitHub page: [https://github.com/Jellyfish042/uncheatable_eval](https://github.com/Jellyfish042/uncheatable_eval)
## Introduction
Traditional LLM benchmarks are easily compromised by unintentional or intentional data leakage, making many benchmarks unreliable and unable to truly reflect the capabilities of LLMs.
Uncheatable Eval addresses this issue by testing LLMs on real-time, newly generated data from the internet,
ensuring that the evaluation is immune to data leaks and cannot be gamed.
## How?
Uncheatable Eval assesses the language modeling capabilities of LLMs on new data from various sources such as recent papers on arXiv, new projects on GitHub, news articles, and more. Since this data is brand new (e.g., from the past 1-2 weeks), it is impossible for these data to be included in the training sets of publicly released models, thus avoiding the impact of unintentional or intentional data leaks.
Specifically, we calculate the sum of negative log probabilities of the models on these texts. In other words, models that are more likely to generate these texts are considered better.
*Note* : Uncheatable Eval only tests base models.
## Q&A
### Why Calculate the Sum of Negative Log Probabilities?
First, the goal of language models, at least today's language models, is to generate text that is as realistic as possible, maximizing the probability of real text. They are trained and designed to do exactly this. Calculating the sum of negative log probabilities on real text is the most direct way to test this capability.
Second, from the perspective of "compression is intelligence," a good way to test a language model would be to use the model with an entropy coding algorithm for compression and test the model's compression rate [[1]](https://arxiv.org/abs/2309.10668)[[2]](https://arxiv.org/abs/2402.00861). A model with a lower compression rate is considered better. Using a language model + arithmetic coding as an example, it is easy to prove that a model's ability to compress a piece of text is proportional to the sum of its negative log probabilities on that text (see [proof](#proof-of-the-equivalence-between-compression-capability-and-negative-log-probability-sum)).
Therefore, the compression rate of a model can be directly calculated through the sum of negative log probabilities, and the method for this has been provided in `show_results_v2.ipynb`.
### Can Models Using Different Tokenizers Be Directly Compared?
Yes. When calculating the sum of negative log probabilities, we essentially treat the model + tokenizer as a single entity or system. As long as this system has a high probability of generating real text, we consider it better. From the perspective of compression, you can choose any tokenizer. From the compression rate perspective, we don't care; we only care about whether your system can compress the text more effectively.
### Is It Really Uncheatable? Can't I train my model on a large number of arXiv papers to improve its test performance on arXiv papers?
Uncheatable Eval's data sources currently include new arXiv papers, new GitHub projects, BBC news, AO3 fanfictions, and new Wikipedia entries, with more sources to be added in the future. If you genuinely achieve excellent results across these data by training extensively on these sources, I would consider you to have developed a genuinely good language model rather than cheating.
From my test results, accurately modeling these data is very challenging. I believe Uncheatable Eval more accurately reflects the value of every bit of data and computing you invest compared to other benchmarks. Models trained with more data and computing are almost always better, and there are no shortcuts. This is a key strength of Uncheatable Eval.
### Is This Too "Random"? Why Consider Random Texts from the Internet as Ground Truth?
This is why we choose rigorous and verified texts such as arXiv papers and news reports, which typically have better quality. Additionally, a round of Uncheatable Eval evaluates a model over millions of tokens, increasing the reliability of the results.
In fact, the model rankings obtained through Uncheatable Eval are very stable. For instance, the model ranked first in January's data is highly likely to remain first in February, March, April, May, and June, indicating that the data obtained through this method is sufficiently representative.
"""
def rename_columns(df):
df.columns = [col.rsplit('_', maxsplit=1)[0] for col in df.columns]
return df
def get_folders_matching_format(directory):
pattern = re.compile(r'^\d{4}-\d{2}$')
folders = []
if not os.path.exists(directory):
return folders
for item in os.listdir(directory):
full_path = os.path.join(directory, item)
if os.path.isdir(full_path) and pattern.match(item):
folders.append(full_path)
return folders
def get_unique_column_names(all_data):
# column_names = {}
#
# for folder_name, files in all_data.items():
# for file_name, sheets in files.items():
# for sheet_name, dataframe in sheets.items():
# for column in dataframe.columns:
# if column not in ['Name', 'Average (The lower the better)', 'Parameters Count (B)']:
# column_names[column] = None
#
# return list(column_names.keys())
return ['ao3_\u200benglish', 'bbc_\u200bnews', 'wikipedia_\u200benglish', 'arxiv_\u200bcomputer_\u200bscience', 'arxiv_\u200bphysics', 'github_\u200bcpp', 'github_\u200bpython']
def update_table(period: str,
models: list,
metric: str,
visible_columns: list,
color_columns: list,
sort_by: str = 'Average (The lower the better)',
ascending: bool = True):
target_data = all_data[period]
target_metric = metric_to_sheet[metric]
if models:
target_model_size = [model_size_to_file_name[model] for model in models]
combined_data = pd.concat([target_data[model][target_metric] for model in target_model_size], axis=0)
combined_data['Name'] = combined_data['Name'].apply(lambda x: x.replace('.pth', ''))
combined_data.reset_index(drop=True, inplace=True)
if 'Average (The lower the better)' in combined_data.columns:
relevant_columns = [col for col in visible_columns if
col not in ['Name', 'Parameters Count (B)', 'Average (The lower the better)']]
combined_data['Average (The lower the better)'] = round(combined_data[relevant_columns].mean(axis=1), 3)
sorted_data = combined_data.sort_values(by=sort_by, ascending=ascending)
sorted_data = sorted_data.rename(columns={'Average (The lower the better)': 'Average (lower=better)'})
visible_columns = ['Name', 'Parameters Count (B)', 'Average (lower=better)'] + visible_columns
filtered_data = sorted_data[visible_columns]
filtered_data.columns = [col.replace('_', ' ') for col in filtered_data.columns]
formatter = {col: "{:.3f}" for col in filtered_data.columns if
filtered_data[col].dtype in ['float64', 'float32']}
def color_column(s):
return ['background-color: #fffdd0' if pd.notna(x) else 'default' for x in s]
# color gradient
colors = ["#63be7b", "#ffffff", "#f8696b"]
cmap = LinearSegmentedColormap.from_list("custom_cmap", colors)
target_color_columns = []
if 'Average' in color_columns:
target_color_columns.append('Average (lower=better)')
if 'Individual Tests' in color_columns:
target_color_columns.extend([col for col in filtered_data.columns if col not in ['Name', 'Parameters Count (B)', 'Average (lower=better)']])
# styler = filtered_data.style.format(formatter).background_gradient(
# cmap=cmap,
# subset=target_color_columns,
# vmin=min_value,
# vmax=max_value
# ).apply(color_column, subset=['Parameters Count (B)'])
# for better visualization
second_largest = filtered_data['Average (lower=better)'].nlargest(2).iloc[-1]
min_value = filtered_data['Average (lower=better)'].min().min()
if len(models) == 1 and '~7B' in models:
max_value = second_largest
else:
max_value = filtered_data['Average (lower=better)'].max().max()
specific_column = 'Average (lower=better)'
styler_specific = filtered_data.style.background_gradient(
cmap=cmap,
subset=[specific_column],
vmin=min_value,
vmax=max_value
)
styler_other = filtered_data.style.background_gradient(
cmap=cmap,
subset=[col for col in target_color_columns if col != specific_column]
)
styler = styler_specific.use(styler_other.export()).format(formatter).apply(color_column, subset=['Parameters Count (B)'])
return styler
else:
return pd.DataFrame()
def check_model_exists(name):
try:
huggingface_hub.hf_hub_download(repo_id=name, filename="config.json")
return True
except (EntryNotFoundError, RepositoryNotFoundError):
print(f'Model {name} does not exist Error. 001')
return False
except requests.exceptions.HTTPError:
print('Network error while checking model existence. 002')
return False
except Exception as e:
print(e, '003')
return False
def submit_model(name):
if not check_model_exists(name):
return f"# ERROR: Model {name} does not exist on Hugging Face!"
try:
response = requests.post(webhook_url, json={"content": name})
if response.status_code == 200:
response_data = response.json()
if response_data.get('status') == 'success':
return "# SUCCESS: We will check the model as soon as possible. Thank you for your submission!"
else:
return f"# ERROR: {response_data.get('message', 'Unknown error')}"
else:
return f"# ERROR: Failed to submit model {name}. Server returned status code {response.status_code}."
except requests.exceptions.HTTPError:
return "# ERROR: Network error while contacting queue. Please try again in a few minutes."
except Exception as e:
print(e)
return "ERROR: Unexpected error. Please try again later."
all_data = {}
time_list = []
for folder in get_folders_matching_format('data'):
folder_name = os.path.basename(folder)
time_list.append(folder_name)
if all_data.get(folder) is None:
all_data[folder_name] = {}
for file_name in file_name_list:
if all_data.get(file_name) is None:
all_data[folder_name][file_name] = {}
for sheet_name in sheet_name_list:
final_file_name = os.path.join(folder, file_name)
all_data[folder_name][file_name][sheet_name] = rename_columns(
pd.read_excel(final_file_name + '.xlsx', sheet_name=sheet_name))
initial_period = time_list[-1]
initial_models = model_size_list[:1]
initial_metric = metric_list[0]
initial_columns = get_unique_column_names(all_data)
initial_colors = ['Average']
initial_data = update_table(initial_period, initial_models, initial_metric, initial_columns, initial_colors)
css = '''
.gradio-container {
max-width: 95% !important;
}
.tab-buttons button {
font-size: 1.3em;
}
.gr-dataframe th {
white-space: normal;
word-break: break-word;
}
'''
with gr.Blocks(css=css) as demo:
gr.HTML('<h1 style="text-align:center"><span style="font-size:1.3em">π Uncheatable Eval Leaderboard</span></h1>')
gr.HTML(
"<h1 style='text-align:center'><span style='font-size:0.8em'>Welcome to Uncheatable Eval, where fancy fine-tuning and cheating wonβt work π«; only compute π», data π, and real innovation π₯ can prevail!</span></h1>")
with gr.Tabs() as tabs:
with gr.Tab("π Leaderboard"):
with gr.Row():
with gr.Column():
period_selector = gr.Dropdown(label="Period", choices=time_list, value=time_list[0])
model_selector = gr.CheckboxGroup(label="Model", choices=model_size_list, value=model_size_list[0])
metric_selector = gr.Dropdown(label="Metric", choices=metric_list, value=metric_list[0])
with gr.Column():
color_selector = gr.CheckboxGroup(label="Colored Columns",
choices=['Average', 'Individual Tests'],
value=['Average'])
colfilter = gr.CheckboxGroup(label="Data Source",
choices=get_unique_column_names(all_data),
value=get_unique_column_names(all_data))
table = gr.Dataframe(initial_data, column_widths=[125, 50, 50, 35, 35, 35, 35, 35, 35, 35], wrap=True)
period_selector.change(update_table,
inputs=[period_selector, model_selector, metric_selector, colfilter, color_selector],
outputs=table)
model_selector.change(update_table,
inputs=[period_selector, model_selector, metric_selector, colfilter, color_selector],
outputs=table)
metric_selector.change(update_table,
inputs=[period_selector, model_selector, metric_selector, colfilter, color_selector],
outputs=table)
colfilter.change(update_table,
inputs=[period_selector, model_selector, metric_selector, colfilter, color_selector],
outputs=table)
color_selector.change(update_table,
inputs=[period_selector, model_selector, metric_selector, colfilter, color_selector],
outputs=table)
with gr.Tab("π MultiLang"):
gr.Markdown("## Coming soon...")
with gr.Tab("βΉοΈ About"):
gr.Markdown(about_md)
with gr.Tab("π Submit"):
with gr.Group():
with gr.Row():
model_name = gr.Textbox(max_lines=1,
placeholder="Enter model name...",
show_label=False,
scale=4)
submit = gr.Button("Submit", variant="primary", scale=0)
output = gr.Markdown(
"# Enter a public HF repo id, then hit Submit to add it to the evaluation queue.")
submit.click(fn=submit_model, inputs=model_name, outputs=output)
demo.launch()
|