Can you provide code for inference with MCTS?
The examples I see in the github repo (talk_with_model.py, talk_with_model_vllm.py) don't seem to use that.
Hello, we will continue to release our code, models, and datasets in the future, along with updates to our Marco-o1-RL (Reinforcement Learning) versions.
Use MCTS in the prompt. I am using llama.cpp with the Q8 quantization.
which is bigger 9.11 or 9.9, use a MCTS (step) /
Certainly! Let's analyze whether 9.11 is bigger than 9.9 using the Monte Carlo Tree Search (MCTS) approach. MCTS is typically used for decision-making in complex environments such as game trees, so for this simple comparison I'll adapt the concept to a more straightforward method that resembles MCTS steps.
**Step 1: Define the Problem**
We need to determine whether 9.11 is greater than 9.9.
**Step 2: Initialize the Search Tree**
Imagine we have a tree where each node represents a comparison between two numbers. The root node is our initial comparison between 9.11 and 9.9.
**Step 3: Expand the Tree**
At the root node, we expand by considering different aspects to compare 9.11 and 9.9:
- **First Digit Comparison:** Compare the whole number parts.
- **Second Digit Comparison:** If the whole numbers are equal, compare the tenths place.
- **Third Digit Comparison:** If necessary, compare the hundredths place.
**Step 4: Simulate Multiple Outcomes**
Since MCTS relies on simulations, we'll simulate multiple scenarios:
1. **Scenario 1:** 9.11 vs. 9.9
- Compare whole numbers: 9 vs. 9 → Equal.
- Compare tenths place: 1 vs. 9 → 1 < 9.
- Conclusion: 9.11 < 9.9.
2. **Scenario 2:** 9.9 vs. 9.11
- Compare whole numbers: 9 vs. 9 → Equal.
- Compare tenths place: 9 vs. 1 → 9 > 1.
- Conclusion: 9.9 > 9.11.
3. **Scenario 3:** 9.11 vs. 9.9
- Same as Scenario 1, leading to the same conclusion.
**Step 5: Backpropagation**
After simulating multiple scenarios, we update the tree to reflect the outcomes:
- The path leading to 9.11 being less than 9.9 is confirmed in all simulations.
- The path leading to 9.9 being greater than 9.11 is also confirmed.
**Step 6: Decision Making**
Based on the simulations, it's clear that 9.11 is not greater than 9.9.
**Conclusion**
Through this adapted MCTS approach, we've determined that 9.11 is less than 9.9 by comparing their whole numbers and decimal places step by step.
**Alternative Approach: Direct Comparison**
For clarity, let's compare the two numbers directly:
- 9.11 has a value of nine and eleven hundredths, i.e. a 1 in the tenths place.
- 9.9 has a value of nine and nine tenths.
Since nine tenths (0.90) is greater than eleven hundredths (0.11), it follows that 9.9 > 9.11.
**Final Answer:**
9.9 is bigger than 9.11.
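(Side note: the reply above borrows MCTS vocabulary without actually searching a tree. For reference, a bare-bones version of the selection / expansion / simulation / backpropagation loop it names is sketched below; `expand` and `rollout` are placeholder callbacks you would have to supply, not anything from the Marco-o1 repo.)

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

def uct(node, c=1.4):
    # Standard UCT score: average reward plus an exploration bonus.
    return node.value / node.visits + c * math.sqrt(math.log(node.parent.visits) / node.visits)

def mcts(root_state, expand, rollout, iterations=100):
    # expand(state) -> list of child states; rollout(state) -> reward in [0, 1].
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: follow UCT while every child has already been visited.
        while node.children and all(child.visits > 0 for child in node.children):
            node = max(node.children, key=uct)
        # 2. Expansion: create children for a not-yet-expanded node.
        if not node.children:
            node.children = [Node(s, parent=node) for s in expand(node.state)]
        if node.children:
            unvisited = [child for child in node.children if child.visits == 0]
            node = random.choice(unvisited) if unvisited else node
        # 3. Simulation: estimate the value of the chosen node.
        reward = rollout(node.state)
        # 4. Backpropagation: push the reward back up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    best = max(root.children, key=lambda child: child.visits) if root.children else root
    return best.state
```

With only two candidate answers and a deterministic comparison the loop collapses to a single rollout, so the tree machinery adds nothing here; the value of MCTS comes from scoring many candidate reasoning paths.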
@mirek190
Interesting approach, but I don't think this is the way it's supposed to work. According to the paper, MCTS should be guided by scores calculated from token probabilities.
Also I found this: 'We introduced a reflection mechanism by adding the phrase “Wait! Maybe I made some mistakes! I need to rethink from scratch.” at the end of each thought process.'
This is not exactly rocket science, but it would be nice to have a reference implementation.
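In case it helps anyone experimenting, here is a rough sketch of that scoring idea as I read it: each generated token's probability is normalized against its top-k alternatives (top-5 in the paper) and the average over the rollout is used as the node value, with the quoted reflection phrase appended for a second pass. The `generate` callback, its return format, and the top-k choice are my assumptions, not the authors' code.

```python
import math

REFLECTION = "Wait! Maybe I made some mistakes! I need to rethink from scratch."

def confidence_score(token_logprobs, topk_logprobs):
    # For each generated token, normalize its probability against the top-k
    # alternatives at that step, then average over the whole rollout.
    scores = []
    for lp, topk in zip(token_logprobs, topk_logprobs):
        denom = sum(math.exp(x) for x in topk)
        scores.append(math.exp(lp) / denom)
    return sum(scores) / len(scores) if scores else 0.0

def rollout_with_reflection(prompt, generate):
    # `generate(text)` is a placeholder for your backend (llama.cpp, vLLM,
    # transformers, ...) returning (completion, token_logprobs, topk_logprobs).
    completion, lps, topks = generate(prompt)
    value = confidence_score(lps, topks)
    # Reflection mechanism from the paper: append the phrase and generate again.
    rethought, _, _ = generate(prompt + completion + "\n" + REFLECTION + "\n")
    return completion, rethought, value
```

The value returned by `confidence_score` is what an MCTS driver would backpropagate after each rollout; the prompt-only trick above never computes it.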
I find that it does not infer in an MCTS way when I ask it to correct code based on the error message.
Here is a snippet of my code:
...
messages = [
    {
        "role": "system",
        "content": (
            "you are a helpful assistant in coding, specialized in python "
            "and LBM (CFD method). You would carefully output the program without bug. "
            "now take deep breath"
        ),
    },
    {
        "role": "user",
        "content": correct_prompt + "\n Solve the problem and use a MCTS (step) /",
    },
]
...
The variable correct_prompt contains the original code and the corresponding compiler error. I append "\n Solve the problem and use a MCTS (step) /" at the end; other comments here suggested that this prompts the model to reply in an MCTS way.
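For what it's worth, the suffix only nudges the wording of the reply; no tree search happens unless the serving code runs one. Below is how I would structure the same request against an OpenAI-compatible endpoint (for example vLLM serving the released checkpoint); the base_url, model name, and temperature are assumptions on my side, and correct_prompt is built exactly as you describe.

```python
from openai import OpenAI  # assumes an OpenAI-compatible server, e.g. started with `vllm serve`

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # local endpoint, placeholder key

# Prompt-level hint only; the server does not run any search because of it.
MCTS_SUFFIX = "\n Solve the problem and use a MCTS (step) /"

def request_fix(correct_prompt: str) -> str:
    # correct_prompt = original code + the compiler error, as described above.
    messages = [
        {
            "role": "system",
            "content": (
                "You are a helpful assistant in coding, specialized in Python "
                "and LBM (a CFD method). Carefully output the program without bugs."
            ),
        },
        {"role": "user", "content": correct_prompt + MCTS_SUFFIX},
    ]
    response = client.chat.completions.create(
        model="AIDC-AI/Marco-o1",  # adjust to whatever name your server exposes
        messages=messages,
        temperature=0.2,
    )
    return response.choices[0].message.content
```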