Sniper committed · Commit d383c6e · verified · 1 Parent(s): 424bd24

Update README.md

Files changed (1):
  1. README.md +2 -0
README.md CHANGED
@@ -33,6 +33,8 @@ library_name: transformers
 
  Currently, Marco-o1 Large Language Model (LLM) is powered by _Chain-of-Thought (CoT) fine-tuning_, _Monte Carlo Tree Search (MCTS)_, _reflection mechanisms_, and _innovative reasoning strategies_—optimized for complex real-world problem-solving tasks.
 
+ 💌 <ins>We would like to emphasize that this research work is inspired by OpenAI's o1 (from which the name is also derived). Given the unclear technical roadmap for large reasoning models, this work explores multiple experimental approaches. Our focus is on open-ended questions, and we have observed interesting phenomena in multilingual applications. However, we must acknowledge that the current performance and completeness still fall short of a fully realized "o1" model. This is not a one-time effort, and we remain committed to continuous optimization and ongoing improvement.</ins>
+
  ![img.png](assets/img.png)
 
  ## 🚀 Highlights