HoangHa committed
Commit 001ef8b · verified · 1 Parent(s): 4d8529b

Update README.md

Files changed (1): README.md +133 -6
README.md CHANGED
@@ -11,12 +11,139 @@ language:
  - en
  ---
 
- # Uploaded model
 
- - **Developed by:** Homebrew Research
- - **License:** apache-2.0
- - **Finetuned from model :** deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
 
- This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+ [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+
+ <div align="center">
+
+ # AlphaMaze: Teaching LLMs to Think Visually
+ <!---
+ <a href='https://homebrew.ltd/blog/alpha-maze'><img src='https://img.shields.io/badge/Project-Blog-Green'></a>
+ <a href='https://huggingface.co/homebrewltd/AlphaMaze-v0.2-1.5B'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue'></a>
+ <a href='https://huggingface.co/datasets/homebrewltd/Maze-Reasoning-v0.1'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Data-green'></a>
+ <a href='https://alphamaze.menlo.ai/'><img src='https://img.shields.io/badge/Project-Demo-violet'></a>
+ <a href=''><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>
+ -->
+
+ [**About**](#about) | [**Demo**](#demo) | [**Models and Datasets**](#models-and-datasets) | [**Benchmarks**](#benchmarks) | [**How to Run Locally**](#run-locally)
+
+ <img src="./alphamaze.gif" width="400"/>
+ </div>
+
+ ## About
+ Welcome to **AlphaMaze**, a novel approach to enhancing visual reasoning in large language models (LLMs). Forget complex image processing – AlphaMaze challenges models with a deceptively simple task: solving mazes presented *entirely* in text. AlphaMaze isn't just about finding the exit; it's about understanding the maze's structure and strategically deciding when to reset after encountering a dead end.
+
+ Prior research, like [Microsoft's "Multimodal Visualization-of-Thought (MVoT)"](https://arxiv.org/abs/2501.07542), explored visual reasoning through image generation. But AlphaMaze takes a different, more focused path. We believe that if a model can internally reconstruct a maze from a text description and use that *mental map* to plan its moves, it demonstrates a genuine capacity for visual reasoning – even without generating a single image. AlphaMaze moves beyond the limitations of multiple-choice evaluations, providing a richer, more nuanced assessment of a model's spatial understanding. We're not just testing whether a model *can* solve a maze; we're revealing *how* it thinks about space.
+
+ ## Demo
+
+ Watch AlphaMaze tackle a text-based maze! See how it interprets the maze, plans its moves, and strategically resets when it encounters a dead end.
+
+ [![Watch the AlphaMaze Demo](https://img.youtube.com/vi/dUS9wR03on8/0.jpg)](https://www.youtube.com/watch?v=dUS9wR03on8)
+
+ Alternatively, you can explore it on our [demo website](https://alphamaze.menlo.ai/).
+
+ ## Models and Datasets
+
+ ### Models
+
+ You can find our AlphaMaze models on Hugging Face 🤗! We're committed to open source and easy access for the research community.
+
+ | Model | Backbone | Size | Link |
+ |-------|----------|------|------|
+ | AlphaMaze-v0.1 | [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) | 1.5B | [🤗 AlphaMaze-v0.1](https://huggingface.co/homebrewltd/AlphaMaze-v0.2-1.5B) |
+
+ ### Datasets
+
+ We've released our datasets on Hugging Face 🤗 to support reproducibility and further research.
+
+ | Dataset | Description | Size | Link |
+ |---------|-------------|------|------|
+ | Maze-Reasoning-v0.1 | Training set used for Supervised Fine-Tuning (SFT) | 420k | [🤗 Maze-Reasoning-v0.1](https://huggingface.co/datasets/homebrewltd/Maze-Reasoning-v0.1) |
+ | Maze-Reasoning-Reset-v0.1 | Training set for SFT, including reset actions | 50k | [🤗 Maze-Reasoning-Reset-v0.1](https://huggingface.co/datasets/homebrewltd/Maze-Reasoning-Reset-v0.1) |
+ | Maze-Reasoning-GRPO-v0.1 | Training set used for GRPO | 180k | [🤗 Maze-Reasoning-GRPO-v0.1](https://huggingface.co/datasets/homebrewltd/Maze-Reasoning-GRPO-v0.1) |
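
For quick experimentation, the training sets can be pulled straight from the Hub with the `datasets` library. A minimal sketch (the `train` split name is an assumption; check the dataset card for the exact splits):

```python
from datasets import load_dataset

# Load the SFT maze-reasoning data from the Hugging Face Hub.
# The "train" split name is assumed; see the dataset card for details.
sft_data = load_dataset("homebrewltd/Maze-Reasoning-v0.1", split="train")
print(sft_data[0])  # inspect one maze/solution example
```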
+
+ ## Benchmarks
+
+ ### Supervised Fine-Tuning (SFT)
+
+ We used [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) for Supervised Fine-Tuning (SFT) of our AlphaMaze model. Here's a summary of our key training runs:
+
+ | Run ID | Model Config | Dataset | Steps | Final Loss | Hardware | Key Findings |
+ |--------|--------------|---------|-------|------------|----------|--------------|
+ | exp-1 | [Full Finetune (Qwen-1.5B)](https://github.com/janhq/visual-thinker/blob/main/training/Llama-factory-config/Qwen2.5_1.5B_distil.yaml) | [Maze Reasoning](https://huggingface.co/datasets/jan-hq/Maze-Reasoning) | 3125 | 0.01 | ~1.5 hours on 6xA6000 | Initial run with new maze tokens. Observed lower performance. |
+ | exp-2 | [Full Finetune (Qwen-1.5B)](https://github.com/janhq/visual-thinker/blob/main/training/Llama-factory-config/Qwen2.5_1.5B_distil.yaml) | [Maze Reasoning](https://huggingface.co/datasets/jan-hq/Maze-Reasoning) | 3125 | 0.01 | ~1.5 hours on 6xA6000 | Trained on pure text descriptions (no new tokens). Surprisingly strong performance. |
+ | exp-3 | [Full Finetune (Qwen-7B)](https://github.com/janhq/visual-thinker/blob/main/training/Llama-factory-config/Qwen2.5_7B_distil.yaml) | [Maze Reasoning](https://huggingface.co/datasets/jan-hq/Maze-Reasoning) | 2344 | 0.0077 | ~12 hours on 6xA6000 | Extended training with pure text descriptions (larger model). |
+ | exp-4 | [Full Finetune (Qwen-1.5B)](https://github.com/janhq/visual-thinker/blob/main/training/Llama-factory-config/Qwen2.5_1.5B_distil.yaml) | [Maze Reasoning](https://huggingface.co/datasets/jan-hq/Maze-Reasoning) | 2344 | ~0 | ~1.5 hours on 6xA6000 | Further extended training with pure text descriptions. Near-zero loss. |
+ | exp-5 | [Full Finetune (Qwen-1.5B)](https://github.com/janhq/visual-thinker/blob/main/training/Llama-factory-config/Qwen2.5_1.5B_distil.yaml) | [Maze Reasoning](https://huggingface.co/datasets/jan-hq/Maze-Reasoning) | 3681 | ~0.02 | ~1 hour on 8xH200 | Experiment with new maze tokens and different hardware. |
+
+ **Key Observations from SFT:**
+
+ * Adding new maze-specific tokens did *not* improve performance, and in some cases led to worse results (a sketch of what adding such tokens involves follows below).
+ * Surprisingly, the model performed well even with *pure text descriptions*, suggesting a strong ability to learn spatial relationships from text alone.
+ * A final loss of (near) zero is concerning, as it may indicate the model is memorizing the training data rather than generalizing.
+
+ **Note:** These results suggest that reducing token complexity (i.e., not introducing new maze-specific tokens) can lead to improved performance in translating spatial information into language.
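
For context, "adding new maze tokens" in exp-1 and exp-5 refers to extending the tokenizer with dedicated special tokens for the maze vocabulary. A minimal sketch of what that involves with Transformers (the token list is abbreviated and illustrative, not the exact set used in training):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative only: register a few maze tokens as special tokens and
# resize the embedding matrix so the model can learn them during SFT.
base = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

maze_tokens = ["<|origin|>", "<|target|>", "<|up|>", "<|down|>", "<|left|>", "<|right|>"]  # abbreviated
tokenizer.add_special_tokens({"additional_special_tokens": maze_tokens})
model.resize_token_embeddings(len(tokenizer))
```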
+
+ ### Group Relative Policy Optimization (GRPO)
+
+ We employed [Unsloth](https://unsloth.ai/) for Group Relative Policy Optimization (GRPO) to further refine the model's maze-solving policy.
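
GRPO optimizes the policy against a reward signal rather than per-token labels. As a rough illustration of the kind of rule-based reward that suits this task (a hypothetical sketch, not the exact reward used in our runs), a completion can be scored by whether its movement tokens walk legally from the origin to the target:

```python
# Hypothetical reward sketch: parse the predicted movement tokens and check
# whether they reach the target without crossing a wall.
MOVES = {"<|up|>": (-1, 0), "<|down|>": (1, 0), "<|left|>": (0, -1), "<|right|>": (0, 1)}

def maze_reward(completion, origin, target, open_edges):
    """1.0 if the moves reach the target legally, 0.1 for a legal but
    unfinished path, 0.0 as soon as a move crosses a wall."""
    pos = origin
    tokens = [t for t in completion.split() if t in MOVES]
    for tok in tokens:
        dr, dc = MOVES[tok]
        nxt = (pos[0] + dr, pos[1] + dc)
        if (pos, nxt) not in open_edges:  # the move crosses a wall
            return 0.0
        pos = nxt
        if pos == target:
            return 1.0
    return 0.1 if tokens else 0.0

# Toy usage on a 1x3 corridor: origin (0,0), target (0,2), both inner edges open.
edges = {((0, 0), (0, 1)), ((0, 1), (0, 2))}
print(maze_reward("<|right|> <|right|>", (0, 0), (0, 2), edges))  # 1.0
```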
+
+ The plot below shows the MazeBench scores (blue crosses) achieved during GRPO training, along with a linear regression trendline (red dashed line). The upward trend demonstrates that GRPO effectively guides the model towards improved maze-solving strategies.
+
+ ![GRPO Training Progress](./grpo_progress.png)
+ _GRPO training progress, showing MazeBench scores over training steps._
+
+ ## Run Locally
+
+ Here is an example of using AlphaMaze with Hugging Face Transformers:
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import flash_attn  # flash-attn must be installed for attn_implementation="flash_attention_2"
+
+ model_path = "homebrewltd/AlphaMaze-v0.2-1.5B"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+
+ # Load the model in fp16 with FlashAttention-2 and place it on the available GPUs.
+ model = AutoModelForCausalLM.from_pretrained(
+     model_path,
+     torch_dtype=torch.float16,
+     device_map="auto",
+     attn_implementation="flash_attention_2",
+ )
+
+ # Maze prompt: coordinates, wall tokens, origin/target markers, and the instruction
+ # to output only movement tokens.
+ maze = """You are a helpful assistant that solves mazes. You will be given a maze represented by a series of tokens. The tokens represent: - Coordinates: <|row-col|> (e.g., <|0-0|>, <|2-4|>) - Walls: <|no_wall|>, <|up_wall|>, <|down_wall|>, <|left_wall|>, <|right_wall|>, <|up_down_wall|>, etc. - Origin: <|origin|> - Target: <|target|> - Movement: <|up|>, <|down|>, <|left|>, <|right|>, <|blank|> Your task is to output the sequence of movements (<|up|>, <|down|>, <|left|>, <|right|>) required to navigate from the origin to the target, based on the provided maze representation. Think step by step. At each step, predict only the next movement token. Output only the move tokens, separated by spaces. MAZE: <|0-0|><|up_left_wall|><|blank|><|0-1|><|up_down_wall|><|blank|><|0-2|><|up_down_wall|><|blank|><|0-3|><|up_right_wall|><|blank|><|0-4|><|up_left_right_wall|><|blank|> <|1-0|><|down_left_wall|><|blank|><|1-1|><|up_right_wall|><|blank|><|1-2|><|up_left_wall|><|blank|><|1-3|><|down_right_wall|><|blank|><|1-4|><|left_right_wall|><|blank|> <|2-0|><|up_left_wall|><|blank|><|2-1|><|down_right_wall|><|blank|><|2-2|><|down_left_wall|><|blank|><|2-3|><|up_down_wall|><|blank|><|2-4|><|down_right_wall|><|target|> <|3-0|><|left_right_wall|><|blank|><|3-1|><|up_left_wall|><|origin|><|3-2|><|up_right_wall|><|blank|><|3-3|><|up_down_left_wall|><|blank|><|3-4|><|up_right_wall|><|blank|> <|4-0|><|down_left_wall|><|blank|><|4-1|><|down_right_wall|><|blank|><|4-2|><|down_left_wall|><|blank|><|4-3|><|up_down_wall|><|blank|><|4-4|><|down_right_wall|><|blank|>"""
+
+ messages = [
+     {
+         "role": "user",
+         "content": maze
+     }
+ ]
+
+ # Apply the chat template and sample the sequence of movement tokens.
+ input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors='pt').to("cuda")
+ generated_ids = model.generate(input_ids, max_new_tokens=2500, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
+ response = tokenizer.decode(generated_ids[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
+ print(f"Solving maze: {response}")
+ ```
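
Since the prompt asks the model to emit only movement tokens separated by spaces, a small post-processing step can recover the predicted path from the decoded text. A hypothetical sketch (the sample `response` string is a stand-in for real model output):

```python
import re

# Pull the movement tokens out of the decoded response, in generation order.
response = "<|up|> <|up|> <|right|> <|right|>"  # stand-in for the model output above
moves = re.findall(r"<\|(?:up|down|left|right)\|>", response)
print(moves)  # ['<|up|>', '<|up|>', '<|right|>', '<|right|>']
```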
+
+ ## Next Steps
+ We are exploring further GRPO enhancements to boost maze-solving capabilities. Stay tuned for more updates on how GRPO is paving the way for improved spatial reasoning in LLMs!
+
+ ## Join Us
+
+ We're looking for collaborators and plan to expand the model's capabilities to additional spatial tasks in the future.
+
+ ## References
+
+ ```bibtex
+ ```
+
+ ## Acknowledgement
+
+ - [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)
+ - [Unsloth](https://unsloth.ai/)
+ - [DeepSeek](https://github.com/deepseek-ai/DeepSeek-R1)
+ - [Multimodal Visualization-of-Thought (MVoT)](https://arxiv.org/abs/2501.07542)