wzxii committed on
Commit 24f40c4
1 Parent(s): 7bc3058

Upload index.html

Files changed (1): index.html +4 -4
index.html CHANGED
@@ -222,14 +222,14 @@
   understand LLM coding ability through a diverse set of benchmarks and leaderboards, such as:
   </p>
   <ul>
-    <li><a href="https://evalplus.github.io/leaderboard.html" target="_blank">EvalPlus Leaderboard</a></li>
     <li><a href="https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard" target="_blank">Big Code Models Leaderboard</a></li>
-    <li><a href="https://github.com/amazon-science/cceval" target="_blank">CrossCodeEval</a></li>
-    <li><a href="https://evo-eval.github.io" target="_blank">Evo-Eval</a></li>
-    <li><a href="https://crux-eval.github.io/leaderboard.html" target="_blank">CRUXEval</a></li>
     <li><a href="https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard" target="_blank">Chatbot Arena Leaderboard</a></li>
     <li><a href="https://fudanselab-classeval.github.io/leaderboard.html" target="_blank">ClassEval</a></li>
     <li><a href="https://bigcode-bench.github.io" target="_blank">Code Lingua</a></li>
+    <li><a href="https://github.com/amazon-science/cceval" target="_blank">CrossCodeEval</a></li>
+    <li><a href="https://crux-eval.github.io/leaderboard.html" target="_blank">CRUXEval</a></li>
+    <li><a href="https://evalplus.github.io/leaderboard.html" target="_blank">EvalPlus Leaderboard</a></li>
+    <li><a href="https://evo-eval.github.io" target="_blank">Evo-Eval</a></li>
     <li><a href="https://github.com/01-ai/HumanEval.jl" target="_blank">HumanEval.jl - Julia version HumanEval with EvalPlus test cases</a></li>
     <li><a href="https://infi-coder.github.io/infibench/" target="_blank">InfiBench</a></li>
     <li><a href="https://livecodebench.github.io/leaderboard.html" target="_blank">LiveCodeBench</a></li>