---
license: apache-2.0
tags:
- reward-model
- rlhf
- principle-following
- qwen
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-8B
language:
- en
- zh
---
<div align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/zhuohaoyu/RewardAnything/main/assets/rewardanything-logo-horizontal-dark-mode.png">
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/zhuohaoyu/RewardAnything/main/assets/rewardanything-logo-horizontal.png">
<img alt="RewardAnything" src="https://raw.githubusercontent.com/zhuohaoyu/RewardAnything/main/assets/rewardanything-logo-horizontal.png" width="400">
</picture>
<p>
<a href="https://zhuohaoyu.github.io/RewardAnything"><img alt="Website" src="https://img.shields.io/badge/π_Project-Website-A593C2?style=flat-square&labelColor=8A7AA8"></a>
<a href="https://huggingface.co/WisdomShell/RewardAnything-8B-v1"><img alt="Model Weights" src="https://img.shields.io/badge/π€_HuggingFace-Model_Weights-D4A574?style=flat-square&labelColor=B8956A"></a>
<a href="https://arxiv.org/abs/2506.03637"><img alt="Paper" src="https://img.shields.io/badge/π_arXiv-Paper-C7969C?style=flat-square&labelColor=A8798A"></a>
<a href="https://pypi.org/project/rewardanything/"><img alt="PyPI" src="https://img.shields.io/pypi/v/rewardanything.svg?style=flat-square&color=7B9BB3&labelColor=5A7A94"></a>
</p>
<h1> RewardAnything: Generalizable Principle-Following Reward Models </h1>
<a>Zhuohao Yu<sup>1,§</sup></a> 
<a>Jiali Zeng<sup>2</sup></a> 
<a>Weizheng Gu<sup>1</sup></a> 
<a>Yidong Wang<sup>1</sup></a> 
<a>Jindong Wang<sup>3</sup></a> 
<a>Fandong Meng<sup>2</sup></a> 
<a>Jie Zhou<sup>2</sup></a> 
<a>Yue Zhang<sup>4</sup></a> 
<a>Shikun Zhang<sup>1</sup></a> 
<a>Wei Ye<sup>1,†</sup></a>
<div>
<p>
<sup>1</sup>Peking University 
<sup>2</sup>WeChat AI 
<sup>3</sup>William & Mary 
<sup>4</sup>Westlake University
</p>
<p><sup>§</sup>Work done during Zhuohao's internship at Pattern Recognition Center, WeChat AI, Tencent Inc; <sup>†</sup>Corresponding author.</p>
</div>
</div>
Traditional reward models learn **implicit preferences** from fixed datasets, leading to static judgments that struggle with the **nuanced and multifaceted nature of human values**.
We believe that, much like Large Language Models follow diverse instructions, reward models must be able to understand and follow **explicitly specified principles**.
**RewardAnything** embodies this new paradigm. Our models are designed to interpret natural language principles at inference time, enabling **dynamic adaptation** to a wide array of evaluation criteria **without costly retraining**. This approach shifts from fitting a single preference distribution to achieving true principle-following generalization.
## Key Features
- **Principle-Following**: Directly interprets and applies reward criteria specified in natural language
- **Dynamic Adaptability**: Generalizes to new, unseen principles at inference time without retraining
- **Resource Efficient**: Eliminates costly cycles of collecting preference data and retraining RMs
- **State-of-the-Art Performance**: Achieves SOTA on RM-Bench and excels on our RABench benchmark
- **Easy Integration**: Works seamlessly with existing RLHF pipelines (PPO, GRPO)
- **Interpretable**: Provides transparent reasoning for evaluation decisions
## Quick Start
### Installation
```bash
pip install rewardanything
```
RewardAnything offers three flexible deployment options to fit your workflow:
## 1. Local Inference (Recommended for Quick Testing)
- **Best for**: Quick experimentation, small-scale evaluation, research
- **Pros**: Simple setup, no external dependencies
- **Cons**: Requires local GPU, slower for batch processing
```python
import rewardanything
# Load model locally (similar to HuggingFace)
reward_model = rewardanything.from_pretrained(
    "WisdomShell/RewardAnything-8B-v1",  # Model path/name
    device="cuda",                       # Device placement
    torch_dtype="auto"                   # Automatic dtype selection
)
# Define your evaluation principle
principle = "I prefer clear, concise and helpful responses over long and detailed ones."
# Your evaluation data
prompt = "How do I learn Python programming effectively?"
responses = {
    "response_a": "Start with Python.org's tutorial, practice daily with small projects, and join r/learnpython for help. Focus on fundamentals first.",
    "response_b": "Here's a comprehensive approach: 1) Start with Python basics including variables, data types, operators, control structures like if-statements, for-loops, while-loops, and functions, 2) Practice with small projects like calculators, text games, and data manipulation scripts, 3) Use interactive platforms like Codecademy, Python.org's official tutorial, edX courses, Coursera specializations, and YouTube channels, 4) Join communities like r/learnpython, Stack Overflow, Python Discord servers, and local meetups for support and networking, 5) Build progressively complex projects including web scrapers, APIs, data analysis tools, and web applications, 6) Read books like 'Automate the Boring Stuff', 'Python Crash Course', and 'Effective Python', 7) Dedicate 1-2 hours daily for consistent progress and track your learning journey.",
    "response_c": "Learn Python by coding."
}
# Get comprehensive evaluation
result = reward_model.judge(
    principle=principle,
    prompt=prompt,
    responses=responses
)
print(f"Scores: {result.scores}")
print(f"Best to worst: {result.ranking}")
print(f"Reasoning: {result.reasoning}")
```
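`result.ranking` lists the response keys from best to worst, so picking the preferred response is a one-liner (using the `responses` dict defined above):
```python
# The ranking orders response keys from best to worst under the given principle.
best_key = result.ranking[0]
print(f"Preferred response ({best_key}): {responses[best_key]}")
```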
## 2. vLLM Deployment (Recommended for Production & RL Training)
- **Best for**: High-throughput batch inference, RLHF training, production workloads
- **Pros**: Fast batch processing, optimized inference, scalable
- **Cons**: Requires vLLM setup
### Step 1: Set Up vLLM Server
First, install and start a vLLM server. See the [vLLM quickstart guide](https://docs.vllm.ai/en/latest/getting_started/quickstart.html#openai-compatible-server) for detailed instructions:
```bash
# Install vLLM
pip install vllm
# Start vLLM server with RewardAnything model
vllm serve WisdomShell/RewardAnything-8B-v1 \
    --host 0.0.0.0 \
    --port 8000 \
    --max-model-len 8192 \
    --tensor-parallel-size 1
```
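Optionally, confirm the vLLM server is reachable before moving on. A minimal check, assuming the default host/port used above and that the `requests` package is installed (vLLM's OpenAI-compatible server exposes `GET /v1/models`):
```python
import requests

# A 200 response listing the model id means the server is up and the weights are loaded.
resp = requests.get("http://localhost:8000/v1/models", timeout=10)
resp.raise_for_status()
print([m["id"] for m in resp.json()["data"]])
```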
### Step 2: Configure RewardAnything Server
Create a config file `config.json`:
```json
{
  "api_key": ["dummy-key-for-vllm"],
  "api_model": "WisdomShell/RewardAnything-8B-v1",
  "api_base": ["http://localhost:8000/v1"],
  "api_timeout": 120.0,
  "generation_config": {
    "temperature": 0.0,
    "max_tokens": 4096
  },
  "num_workers": 8,
  "request_limit": 500,
  "request_limit_period": 60
}
```
### Step 3: Start RewardAnything Server
```bash
# Start the RewardAnything API server
rewardanything serve -c config.json --port 8001
```
### Step 4: Use in Your Code
```python
import rewardanything
# Connect to the RewardAnything server
client = rewardanything.Client("http://localhost:8001")
# Process batch requests efficiently
requests = [
    {
        "principle": "Prefer clear, concise and helpful responses over long and detailed ones.",
        "prompt": "How to learn programming?",
        "responses": {
            "assistant_a": "Start with Python, practice daily, build projects.",
            "assistant_b": "Read books and hope for the best.",
            "assistant_c": "Start with Python.org's tutorial, practice daily with small projects, and join r/learnpython for help. Focus on fundamentals first."
        }
    },
    # ... more requests
]
results = client.judge_batch(requests)
for result in results:
    print(f"Winner: {result.ranking[0]}")
```
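To keep track of which result belongs to which request, pair them up. This sketch assumes `judge_batch` returns results in the same order as the input list and that each result exposes the same `scores` and `ranking` fields shown above:
```python
# Pair each request with its result and pull per-response scores,
# e.g. for logging or for use as reward signals downstream.
for req, res in zip(requests, results):
    print(req["prompt"])
    print(f"  scores:  {res.scores}")
    print(f"  ranking: {res.ranking}")
```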
## 3. Direct HuggingFace Integration
- **Best for**: Custom workflows, advanced users, integration with existing HF pipelines
- **Pros**: Full control, custom processing
- **Cons**: Manual parsing required
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from rewardanything.processing import prepare_chat_messages, parse_rewardanything_output
# Load model and tokenizer directly
model = AutoModelForCausalLM.from_pretrained(
    "WisdomShell/RewardAnything-8B-v1",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("WisdomShell/RewardAnything-8B-v1")
# Prepare evaluation data
principle = "Judge responses based on helpfulness and accuracy"
prompt = "What is the capital of France?"
responses = {
    "model_a": "Paris is the capital of France.",
    "model_b": "I think it might be Lyon or Paris."
}
# Prepare chat messages (handles masking automatically)
messages, masked2real = prepare_chat_messages(principle, prompt, responses)
# Format with chat template
formatted_input = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
# Generate response
inputs = tokenizer(formatted_input, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=4096,
        temperature=0.1,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id
    )
# Decode output
generated_tokens = outputs[0][inputs.input_ids.shape[1]:]
output_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)
# Parse structured results (handles JSON parsing robustly)
result = parse_rewardanything_output(output_text, masked2real)
print(f"Raw output: {output_text}")
print(f"Parsed scores: {result.scores}")
print(f"Ranking: {result.ranking}")
print(f"Reasoning: {result.reasoning}")
```
## When to Use Each Method
| Use Case | Method | Why |
|----------|--------|-----|
| Quick testing | Local Inference | Simplest setup |
| Research & development | Local Inference | Full control, easy debugging |
| RLHF training | vLLM Deployment | High throughput, optimized for batches |
| Production evaluation | vLLM Deployment | Scalable, reliable |
| Large-scale evaluation | vLLM Deployment | Best performance |
| Custom integration | Direct HuggingFace | Maximum flexibility |
## Advanced Usage
### Custom Principles
RewardAnything excels with sophisticated, multi-criteria principles:
```python
complex_principle = """
Evaluate responses using these criteria:
1. **Technical Accuracy** (40%): Factual correctness and up-to-date information
2. **Clarity** (30%): Clear explanations and logical structure
3. **Practical Value** (20%): Actionable advice and real-world applicability
4. **Safety** (10%): No harmful content, appropriate disclaimers
For conflicting criteria, prioritize: safety > accuracy > clarity > practical value.
"""
result = reward_model.judge(complex_principle, prompt, responses)
```
### Integration with RLHF
```python
# Example: Use in PPO training loop
def reward_function(principle, prompt, response):
    result = reward_model.judge(
        principle=principle,
        prompt=prompt,
        responses={"generated": response, "reference": "baseline response"}
    )
    return result.scores["generated"]
# Use in your RLHF training
rewards = [reward_function(principle, prompt, resp) for resp in generated_responses]
```
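For training-scale throughput, the same idea can run through the vLLM-backed client from option 2 instead of per-sample local calls. A sketch, assuming `client` is the `rewardanything.Client` created earlier (the `batch_reward_function` name and arguments are illustrative):
```python
# Batched variant: score a whole rollout at once via the RewardAnything server.
def batch_reward_function(principle, prompts, generated_responses):
    requests = [
        {
            "principle": principle,
            "prompt": prompt,
            "responses": {"generated": response, "reference": "baseline response"},
        }
        for prompt, response in zip(prompts, generated_responses)
    ]
    results = client.judge_batch(requests)
    return [res.scores["generated"] for res in results]
```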
### Response Masking
RewardAnything automatically masks model names to prevent bias:
```python
result = reward_model.judge(
    principle="Judge based on helpfulness",
    prompt="How to cook pasta?",
    responses={
        "gpt4": "Boil water, add pasta...",
        "claude": "Start by bringing water to boil..."
    },
    mask_responses=True  # Default: True, model sees "model-1", "model-2"
)
```
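With masking enabled, scores and rankings should still come back keyed by your original names (the masked-to-real mapping is resolved before results are returned, as in the direct HuggingFace example above), so downstream code never sees the placeholder names:
```python
print(result.scores)      # keyed by "gpt4" / "claude", not "model-1" / "model-2"
print(result.ranking[0])  # best response under the principle
```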
## Performance & Benchmarks
Please refer to our paper for performance metrics and comparisons.
## Documentation
- [Full Documentation](docs/PROJECT_DOCS.md)
- [API Reference](docs/api.md)
- [Examples](examples/)
- [Configuration Guide](docs/configuration.md)
## Contributing
We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
## Citation
```bibtex
@article{yu2025rewardanything,
  title={RewardAnything: Generalizable Principle-Following Reward Models},
  author={Yu, Zhuohao and Zeng, Jiali and Gu, Weizheng and Wang, Yidong and Wang, Jindong and Meng, Fandong and Zhou, Jie and Zhang, Yue and Zhang, Shikun and Ye, Wei},
  journal={arXiv preprint arXiv:2506.03637},
  year={2025}
}
```
## License
This project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.
## Acknowledgments
Special thanks to the open-source community and all contributors who made this project possible.