---
language:
- en
license: apache-2.0
task_categories:
- other
library_name:
- datasets
configs:
- config_name: oTTT
data_files:
- split: test
path: ttt_puzzles_test.json
- config_name: dTTT
data_files:
- split: test
path: dttt_puzzles_test.json
- config_name: cTTT
data_files:
- split: test
path: cube_puzzles_test.json
- config_name: sTTT
data_files:
- split: test
path: stones_puzzles_test.json
---
# TTT-Bench: A Benchmark for Evaluating Reasoning Ability with Simple and Novel Tic-Tac-Toe-style Games
📄 [Paper](https://arxiv.org/abs/2506.10209) | 🤗 [Dataset](https://huggingface.co/datasets/amd/TTT-Bench) | 🌐 [Website](https://prakamya-mishra.github.io/TTTBench/)
We introduce **TTT-Bench**, a new benchmark specifically created to evaluate the reasoning capability of LRMs through a suite of **simple and novel two-player Tic-Tac-Toe-style games**.
Although trivial for humans, these games require basic strategic reasoning, including predicting an opponent's intentions and understanding spatial configurations.
TTT-Bench is a simple and diverse benchmark consisting of four types of two-player games, namely **oTTT** (ordinary TTT), **dTTT** (double TTT), **cTTT** (TTT in cube), and **sTTT** (TTT with squares) (see the figure below for examples). The questions in the benchmark test the reasoning capability of LRMs by asking them to predict the best next move for a player.
These games are not only simple in their objectives but are also (except oTTT) introduced here for the first time, with no documented prior literature available online, making our benchmark uncontaminated and highly reliable.
<div align="center">
<img src="TTTBench_hf.png" style="object-fit: contain;"/>
<em><b>Figure 1:</b> Visualization of four game types in TTT-Bench (oTTT, dTTT, cTTT, and sTTT). For each game, the illustration consists of the board position annotations (above), the current game state with Alice's (white) and Bob's (black) moves, and the correct next move (represented by a check mark).</em>
</div>
Briefly, in each TTT-Bench task, two players play a strategic game on a task-dependent board configuration, aiming to reach a winning state that satisfies a task-dependent winning constraint; the player who reaches a winning state first wins the game. All problems in this benchmark are formulated as text-based board game questions that start with a description of the game and the current game state, listing all moves played by both players. Each question then asks for the next best move for the current player, e.g., `Where should Bob play next?`.
<div align="center">
<img src="TTTBench_test_set_dist_hf.png" style="object-fit: contain;" width="50%"/>
<em><b>Figure 2:</b> Distribution of TTT-Bench question solution types. <b>Above:</b> Distribution of questions and their game verdicts (Win, Blocked, or Fork) based on how the game concludes after the corresponding solution move is played. <b>Below:</b> Distribution of questions and their solution types (single answer or multiple next-best moves).</em>
</div>
The final TTT-Bench test set consists of ~100 samples from each game. As shown in Figure 2 above, not only is this dataset diverse across different game verdicts, but it also consists of questions with both single and multiple next-best-move solutions. Overall, the final TTT-Bench benchmark consists of 412 questions from the four simple two-player TTT games.
# Dataset Details
The benchmark dataset consists of four splits corresponding to the four types of games, where in each split the dataset columns represent:
- `problem`: Prompt question for the game.
- `solution`: Ground truth next best move solution(s).
- `is_unique_solution`: Boolean indicating whether there exists a single or multiple next best move solution(s).
- `verdict`: Game verdict for the ground truth solution.
- `currect_board_visualization`: Game board ASCII visualization with played moves.
- `board_visualization`: Game board ASCII visualization with position labels.
- `problem_number`: Problem ID.
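A record with the columns above can be scored by checking whether a model's predicted move matches any of the ground-truth solution(s), since `solution` may hold multiple equally good moves when `is_unique_solution` is false. The sketch below illustrates this; the example record and the position label `"B2"` are invented for illustration, while real records would come from loading a config such as `datasets.load_dataset("amd/TTT-Bench", "oTTT", split="test")`.

```python
# Minimal scoring sketch for TTT-Bench-style records.
# The record below is invented for illustration; real records come from
# datasets.load_dataset("amd/TTT-Bench", "oTTT", split="test").

def is_correct(predicted_move, record):
    """Return True if the predicted move matches any ground-truth solution.

    `solution` may contain one or several next-best moves (see the
    `is_unique_solution` column), so membership is checked against all.
    """
    solutions = record["solution"]
    if not isinstance(solutions, (list, tuple)):
        solutions = [solutions]
    return predicted_move in solutions

# Invented example record mirroring the columns described above.
record = {
    "problem": "... Where should Bob play next?",
    "solution": ["B2"],           # hypothetical position label
    "is_unique_solution": True,
    "verdict": "Win",
    "problem_number": 1,
}

print(is_correct("B2", record))   # True
print(is_correct("A1", record))   # False
```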
# License
To foster future research on evaluating the reasoning capability of LRMs, we release this benchmark under the `Apache 2.0 License`.
Copyright (c) 2025 Advanced Micro Devices, Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Citation
```
@misc{mishra2025tttbenchbenchmarkevaluatingreasoning,
title={TTT-Bench: A Benchmark for Evaluating Reasoning Ability with Simple and Novel Tic-Tac-Toe-style Games},
author={Prakamya Mishra and Jiang Liu and Jialian Wu and Xiaodong Yu and Zicheng Liu and Emad Barsoum},
year={2025},
eprint={2506.10209},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.10209},
}
```