---
base_model:
- Qwen/Qwen2.5-7B
datasets:
- MegaScience/MegaScience
language:
- en
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-generation
library_name: transformers
---

# [MegaScience: Pushing the Frontiers of Post-Training Datasets for Science Reasoning](https://arxiv.org/abs/2507.16812)

Scientific reasoning is critical for developing AI scientists and supporting human researchers in advancing the frontiers of natural science discovery. This work introduces **TextbookReasoning**, an open dataset of 650k reasoning questions with truthful reference answers extracted from 12k university-level scientific textbooks. It further presents **MegaScience**, a large-scale mixture of high-quality open-source datasets totaling 1.25 million instances, built through systematic ablation studies. Models trained on MegaScience achieve superior performance and training efficiency, significantly outperforming the corresponding official instruct models, especially when starting from larger and stronger base models.

Find the code and more details on the [MegaScience GitHub repository](https://github.com/GAIR-NLP/lm-open-science-evaluation).
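
The MegaScience mixture listed in this card's metadata can be loaded directly with the `datasets` library. Below is a minimal sketch for inspecting the data; the split name and column layout are assumptions, so check the dataset card for the actual schema:

```python
from datasets import load_dataset

# Dataset ID taken from this model card's metadata; the split and field names are assumptions.
megascience = load_dataset("MegaScience/MegaScience", split="train")

print(megascience)      # number of instances and column names
print(megascience[0])   # one example instance
```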

## Qwen2.5-7B-MegaScience

### Training Recipe

- **LR**: 5e-6
- **LR Schedule**: Cosine
- **Batch Size**: 512
- **Max Length**: 4,096
- **Warm Up Ratio**: 0.05
- **Epochs**: 3
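
For reference, these settings map onto Hugging Face `TrainingArguments` roughly as sketched below. The per-device batch size, gradient-accumulation split, and the SFT framework itself are illustrative assumptions, not details taken from this card:

```python
from transformers import TrainingArguments

# Global batch size 512 = per_device_batch * grad_accum * num_gpus;
# the 8 x 8 x 8 split below is an assumption for illustration only.
training_args = TrainingArguments(
    output_dir="qwen2.5-7b-megascience-sft",
    learning_rate=5e-6,              # LR
    lr_scheduler_type="cosine",      # LR schedule
    warmup_ratio=0.05,               # warm-up ratio
    num_train_epochs=3,              # epochs
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,
    bf16=True,
)
# The 4,096 max length is enforced when tokenizing/packing the data, not via TrainingArguments.
```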

### Evaluation Results

<div style="display: flex; justify-content: left; gap: 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/616bfc2b40e2f69baa1c7add/abIVZ2XB9D-o-TCyvOkDE.png" alt="Evaluation Results" style="width:80%;">
</div>

<div style="display: flex; justify-content: left; gap: 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/616bfc2b40e2f69baa1c7add/xFTJ7nevc3S4UYJxUS7ue.png" alt="Evaluation Results" style="width:80%;">
</div>

## Quickstart

You can use this model directly with the `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "MegaScience/Qwen2.5-7B-MegaScience"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16, # or torch.float16 if bfloat16 is not supported
    device_map="auto"
)

messages = [
    {"role": "user", "content": "What is the capital of France?"},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer(text, return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,  # pass input_ids together with the attention mask
    max_new_tokens=256
)
# Drop the prompt tokens so only the newly generated reply is decoded
generated_ids = generated_ids[:, model_inputs.input_ids.shape[1]:]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
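
Note that `device_map="auto"` requires the `accelerate` package (`pip install accelerate`), and `torch.bfloat16` needs hardware with native bfloat16 support (e.g., NVIDIA Ampere or newer); otherwise fall back to `torch.float16` as noted in the comment above.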

### More about MegaScience

<div style="display: flex; justify-content: left; gap: 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/616bfc2b40e2f69baa1c7add/VogIpBbjfNxXFP9DfVMms.png" alt="Data Pipeline" style="width:100%;">
</div>

## Citation

Check out our [paper](https://arxiv.org/abs/2507.16812) for more details. If you use our dataset or find our work useful, please cite:

```bibtex
@article{fan2025megascience,
  title={MegaScience: Pushing the Frontiers of Post-Training Datasets for Science Reasoning},
  author={Fan, Run-Ze and Wang, Zengzhi and Liu, Pengfei},
  year={2025},
  journal={arXiv preprint arXiv:2507.16812},
  url={https://arxiv.org/abs/2507.16812}
}
```