---
library_name: transformers
---

# STILL-3-TOOL-32B

<p align="center">
  <img src="STILL-3-TOOL-32B.png" width="666"/>
</p>

We propose [**STILL-3-Tool-32B**](https://huggingface.co/RUC-AIBOX/STILL-3-TOOL-32B), which leverages Python code to assist its reasoning process.

During evaluation, **STILL-3-Tool-32B** achieves **81.70%** accuracy on AIME 2024, matching the performance of **o3-mini** and outperforming **o1** and **DeepSeek-R1**.

**We open-source our [code](https://github.com/RUCAIBox/Slow_Thinking_with_LLMs), [model](https://huggingface.co/RUC-AIBOX/STILL-3-TOOL-32B), and [data](https://huggingface.co/datasets/RUC-AIBOX/STILL-3-TOOL-32B-Data).**

For more details, please refer to our [**Notion page**](https://lake-bayberry-173.notion.site/Empowering-Reasoning-Models-with-Wings-Tool-Manipulation-Significantly-Enhances-the-Reasoning-Abili-1a6ab1cf72428023a105c16eec90968e).
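## Quick Start

The model can be loaded with the standard `transformers` causal-LM API. The snippet below is a minimal sketch: the chat-template usage, example prompt, and generation settings are illustrative assumptions rather than official recommendations.

```python
# Minimal inference sketch using the standard transformers causal-LM API.
# The chat template, prompt, and generation settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RUC-AIBOX/STILL-3-TOOL-32B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # choose a dtype suitable for your hardware
    device_map="auto",    # requires the `accelerate` package
)

# Example math question; the model is expected to interleave natural-language
# reasoning with Python code in its response.
messages = [
    {"role": "user", "content": "Find the number of positive integers n <= 100 such that n^2 + 1 is divisible by 5."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```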

## Citation

Please kindly cite our report if it is helpful for your research.

```bibtex
@article{Slow_Thinking_with_LLMs_3_Tool,
  title={Tool Manipulation Significantly Enhances the Reasoning Ability of O1- and R1-like LLMs},
  author={RUCAIBox STILL Team},
  url={https://github.com/RUCAIBox/Slow_Thinking_with_LLMs},
  year={2025}
}
```