Litellm as backend
Lighteval allows you to use LiteLLM, a backend that lets you call all LLM APIs using the OpenAI format (Bedrock, Hugging Face, VertexAI, TogetherAI, Azure, OpenAI, Groq, etc.).
Documentation for the available APIs and compatible endpoints can be found in the LiteLLM documentation (https://docs.litellm.ai/docs/providers).
Quick use
lighteval endpoint litellm \
"gpt-3.5-turbo" \
"lighteval|gsm8k|0|0"
Using a config file
LiteLLM enables generation with any OpenAI-compatible endpoint; for example, you can evaluate a model running on a local vllm server (see the server-launch sketch after the config).
To do so, you will need a config file like the following:
model:
  base_params:
    model_name: "openai/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
    base_url: "URL OF THE ENDPOINT YOU WANT TO USE"
    api_key: "" # remove or keep empty as needed
  generation:
    temperature: 0.5
    max_new_tokens: 256
    stop_tokens: [""]
    top_p: 0.9
    seed: 0
    repetition_penalty: 1.0
    frequency_penalty: 0.0
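For the local vllm case, a minimal sketch of exposing the model behind an OpenAI-compatible endpoint (assuming vllm is installed and the model fits on your hardware; the port is arbitrary):
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --port 8000
With this server running, base_url would be "http://localhost:8000/v1". The openai/ prefix in model_name tells LiteLLM to use the OpenAI protocol when talking to the endpoint.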
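You can then pass the config file to the CLI in place of a model name. A sketch, assuming the file above is saved as litellm_model.yaml:
lighteval endpoint litellm \
"litellm_model.yaml" \
"lighteval|gsm8k|0|0"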