Inference Providers as backend
Lighteval allows you to use Hugging Face’s Inference Providers to evaluate LLMs on supported providers such as Black Forest Labs, Cerebras, Fireworks AI, Nebius, Together AI, and many more.
Quick use
Do not forget to set your Hugging Face API key. You can set it with the HF_TOKEN environment variable or by logging in with the huggingface-cli command.
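For example, either of the following works (the token value below is a placeholder):

export HF_TOKEN=hf_xxxxxxxxxxxx
# or, interactively:
huggingface-cli login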
lighteval endpoint inference-providers \
"model_name=deepseek-ai/DeepSeek-R1,provider=hf-inference" \
"lighteval|gsm8k|0|0"
Using a config file
You can use a config file to define the model and the provider:
lighteval endpoint inference-providers \
examples/model_configs/inference_providers.yaml \
"lighteval|gsm8k|0|0"
with the following config file:
model_parameters:
  model_name: "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
  provider: "novita"
  timeout: null
  proxies: null
  parallel_calls_count: 10
  generation_parameters:
    temperature: 0.8
    top_k: 10
    max_new_tokens: 10000
By default, inference requests are billed to your personal account.
Optionally, you can charge them to an organization by setting org_to_bill="<your_org_name>"
(requires being a member of that organization).
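For example, in the config file above you could add it next to the other model parameters (a minimal sketch, assuming org_to_bill is accepted alongside model_name and provider; the organization name is a placeholder):

model_parameters:
  model_name: "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
  provider: "novita"
  org_to_bill: "my-org-name"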