---
license: mit
---

# Difficulty Estimation on DeepScaleR

We annotate the entire [**DeepScaleR**](https://huggingface.co/agentica-org/DeepScaleR-1.5B-Preview) dataset with a **difficulty score** based on the performance of the [Qwen 2.5-MATH-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) model. This provides an adaptive signal for curriculum construction and model evaluation.

**DeepScaleR** is a curated dataset of 40,000 reasoning-intensive problems used to train and evaluate reinforcement learning-based methods for large language models.

## Difficulty Scoring Method

Difficulty scores are estimated using the **Qwen 2.5-MATH-7B** model with the following generation settings:

- `temperature = 0.6`
- `top_p = 0.9`
- `max_tokens = 4096`
- Inference performed using [vLLM](https://github.com/vllm-project/vllm)
- Each problem is attempted **128 times**

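The settings above map directly onto vLLM's `SamplingParams`. A minimal configuration sketch, assuming the 128 attempts are drawn as `n=128` samples per prompt (the card does not state how sampling was batched; model loading and prompt handling are omitted):

```python
from vllm import SamplingParams

# Generation settings from the card; n controls samples drawn per prompt.
sampling = SamplingParams(
    n=128,            # 128 attempts per problem (assumed batching)
    temperature=0.6,
    top_p=0.9,
    max_tokens=4096,
)
```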
The difficulty score `d_i` for each problem is computed as:

`d_i = 100 × (1 - (# successes / 128))`

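The formula can be sketched as a small helper (the function name is hypothetical; only the arithmetic comes from the card):

```python
def difficulty_score(num_successes: int, num_attempts: int = 128) -> float:
    """Difficulty as the failure rate over repeated attempts, scaled to 0-100."""
    return 100 * (1 - num_successes / num_attempts)

print(difficulty_score(128))  # solved on every attempt   -> 0.0
print(difficulty_score(64))   # solved half the time      -> 50.0
print(difficulty_score(0))    # never solved              -> 100.0
```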
This approach balances the evaluation signal:

- A **strong model** would trivially solve easy problems, compressing the difficulty scale.
- A **weak model** would fail uniformly, providing poor resolution.
- Qwen 2.5-MATH-7B was selected for its **mid-range capabilities**, offering meaningful gradients across a wide spectrum of problems.

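The resolution argument can be made concrete with hypothetical success counts (the numbers below are illustrative, not taken from the dataset): a model that is too strong or too weak compresses all scores toward one end of the scale, while a mid-range model spreads them out.

```python
def difficulty_score(num_successes: int, num_attempts: int = 128) -> float:
    return 100 * (1 - num_successes / num_attempts)

def score_spread(success_counts):
    """Range of difficulty scores across a set of problems."""
    scores = [difficulty_score(s) for s in success_counts]
    return max(scores) - min(scores)

# Hypothetical success counts on the same three problems:
strong_model = [128, 127, 122]  # scores cluster near 0
weak_model   = [0, 1, 4]        # scores cluster near 100
mid_model    = [120, 64, 8]     # scores spread across the scale
```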
## Difficulty Estimation on Other Datasets

We also apply the same difficulty estimation procedure to the following datasets:

- [Open Reasoner Zero](https://huggingface.co/datasets/lime-nlp/orz_math_difficulty)
- [MATH](https://huggingface.co/datasets/lime-nlp/MATH_difficulty)
- [GSM8K](https://huggingface.co/datasets/lime-nlp/GSM8K_difficulty)

## 📬 Contact

For questions or feedback, feel free to reach out to **Taiwei Shi** at [[email protected]](mailto:[email protected]).