|
--- |
|
license: mit |
|
configs: |
|
- config_name: Difficulty Score |
|
data_files: Qwen2.5-Math-7B--deepscaler--difficulty.csv |
|
- config_name: Response |
|
data_files: Qwen2.5-Math-7B--deepscaler.csv |
|
task_categories: |
|
- reinforcement-learning |
|
--- |
|
|
|
# Difficulty Estimation on DeepScaleR |
|
|
|
We annotate the entire [**DeepScaleR**](https://huggingface.co/agentica-org/DeepScaleR-1.5B-Preview) dataset with a **difficulty score** based on the performance of the [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) model. This provides an adaptive signal for curriculum construction and model evaluation.
|
|
|
**DeepScaleR** is a curated dataset of 40,000 reasoning-intensive problems used to train and evaluate reinforcement learning-based methods for large language models. |
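
Both configurations can be loaded directly with the `datasets` library. A minimal sketch, assuming a placeholder repository ID (`REPO_ID` below is illustrative; replace it with this dataset's actual path on the Hub):

```python
from datasets import load_dataset

# Hypothetical repository ID -- replace with this dataset's actual Hub path.
REPO_ID = "lime-nlp/deepscaler_difficulty"

# Config names come from this card's YAML header.
difficulty = load_dataset(REPO_ID, "Difficulty Score", split="train")
responses = load_dataset(REPO_ID, "Response", split="train")

print(difficulty[0])  # inspect one annotated problem
```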
|
|
|
## Difficulty Scoring Method |
|
|
|
Difficulty scores are estimated using the **Qwen2.5-Math-7B** model with the following generation settings (a vLLM sketch follows the list):
|
|
|
- `temperature = 0.6` |
|
- `top_p = 0.9` |
|
- `max_tokens = 4096` |
|
- Inference performed using [vLLM](https://github.com/vllm-project/vllm) |
|
- Each problem is attempted **128 times** |
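
A minimal sketch of how these settings map onto a vLLM generation call; the prompt format and example problem below are illustrative placeholders, not the exact pipeline used to produce the scores:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-Math-7B")

# Settings listed above; n=128 draws all attempts for a problem in one call.
sampling_params = SamplingParams(
    temperature=0.6,
    top_p=0.9,
    max_tokens=4096,
    n=128,
)

prompts = ["Problem: Compute 1 + 1. Answer:"]  # placeholder prompt format
outputs = llm.generate(prompts, sampling_params)

for request in outputs:
    completions = [c.text for c in request.outputs]  # 128 sampled solutions
```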
|
|
|
The difficulty score `d_i` for each problem is computed as: |
|
|
|
$$
d_i = 100 \times \left(1 - \frac{\text{number of successes}}{128}\right)
$$
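
For example, a problem solved in 32 of 128 attempts receives a score of 75, and one solved in every attempt receives 0. A small helper implementing the formula:

```python
def difficulty_score(num_successes: int, num_attempts: int = 128) -> float:
    """Difficulty as the failure rate, scaled to 0-100."""
    return 100.0 * (1.0 - num_successes / num_attempts)

assert difficulty_score(128) == 0.0   # always solved  -> easiest
assert difficulty_score(32) == 75.0   # solved 32/128  -> fairly hard
assert difficulty_score(0) == 100.0   # never solved   -> hardest
```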
|
|
|
This approach balances the evaluation signal:

- A **strong model** would solve most problems, compressing difficulty scores toward 0.

- A **weak model** would fail almost uniformly, compressing scores toward 100 and offering poor resolution.

- Qwen2.5-Math-7B was selected for its **mid-range capability**, which spreads scores across the full scale and yields a meaningful difficulty gradient over a wide spectrum of problems.
|
|
|
## Difficulty Estimation on Other Datasets |
|
|
|
We also apply the same difficulty estimation procedure to the following datasets: |
|
|
|
- [Open Reasoner Zero](https://huggingface.co/datasets/lime-nlp/orz_math_difficulty) |
|
- [MATH](https://huggingface.co/datasets/lime-nlp/MATH_difficulty) |
|
- [GSM8K](https://huggingface.co/datasets/lime-nlp/GSM8K_difficulty) |
|
|
|
## 📬 Contact |
|
|
|
For questions or feedback, feel free to reach out to [**Taiwei Shi**](https://maksimstw.github.io/) at [[email protected]](mailto:[email protected]). |
|
|
|
## 📚 Citations |
|
|
|
GitHub: [uscnlp-lime/verl](https://github.com/uscnlp-lime/verl)
|
|
|
If you find our dataset useful, please cite [Efficient Reinforcement Finetuning via Adaptive Curriculum Learning](https://huggingface.co/papers/2504.05520): |
|
|
|
```bibtex |
|
@misc{shi2025efficientreinforcementfinetuningadaptive, |
|
title={Efficient Reinforcement Finetuning via Adaptive Curriculum Learning}, |
|
author={Taiwei Shi and Yiyang Wu and Linxin Song and Tianyi Zhou and Jieyu Zhao}, |
|
year={2025}, |
|
eprint={2504.05520}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.LG}, |
|
url={https://arxiv.org/abs/2504.05520}, |
|
} |
|
``` |