---
title: WikiRacing Language Models
emoji: π
colorFrom: purple
colorTo: gray
sdk: docker
app_port: 7860
hf_oauth: true
hf_oauth_scopes:
  - inference-api
  - email
---
# Can you wikirace faster than an LLM? π

Go head-to-head with Qwen, Gemma, and DeepSeek on the Hugging Face Space.

Or run hundreds of agents in parallel on any model for efficient evaluations; see the README.