maohaos2 committed on
Commit fce4773 · verified · 1 Parent(s): 2c1a095

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -8,7 +8,7 @@ pinned: false
 ---
 
 # **Introduction**
-We aim to advance LLM reasoning with autoregressive search capabilities, i.e., a single LLM performs an extended reasoning process with self-reflection and self-exploration of new strategies.
+We aim to advance LLM reasoning to enable LLMs with autoregressive search capabilities, where a single LLM performs an extended reasoning process with self-reflection and self-exploration of new strategies.
 We achieve this through our proposed Chain-of-Action-Thought (COAT) reasoning and a new post-training paradigm: 1) a small-scale format tuning (FT) stage to internalize the COAT reasoning format and 2) a large-scale self-improvement
 stage leveraging reinforcement learning (RL). Our approach results in Satori, a 7B LLM trained on open-source model (Qwen-2.5-Math-7B) and open-source data (OpenMathInstruct-2 and NuminaMath). Key features of Satori include:
 - Capable of self-reflection and self-exploration without external guidance.