---
license: llama3.2
datasets:
- O1-OPEN/OpenO1-SFT
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
library_name: transformers
tags:
- reasoning
- o1
---

# Model Card: mkurman/llama-3.2-MEDIT-3B-o1

This model is an **o1-style reasoning** variant fine-tuned from **MedIT Solutions Llama 3.2 3B Instruct** (itself a variant of **Meta Llama 3.2 3B Instruct**). It introduces dedicated tags (`<Thought>` and `<Output>`) for chain-of-thought style text generation, with a focus on instruct-style reasoning tasks.
This model was fine-tuned for exact matching rather than generating a diverse distribution. Therefore, I recommend testing it with `do_sample=False` or setting `temperature=0.0` for deterministic outputs.

---

## Model Details

**Model name**: `mkurman/llama-3.2-MEDIT-3B-o1`  
**Type**: Small Language Model (SLM)  
**Base model**: [MedIT Solutions](https://huggingface.co/meditsolutions) Llama 3.2 3B Instruct (derived from Meta Llama 3.2 3B Instruct)  
**Architecture**: Llama 3.2, 3 billion parameters  
**License**: llama3.2

**Intended Use Cases**:
- General question answering
- Instruction-based generation
- Reasoning and chain-of-thought exploration

**Not Recommended For**:
- Sensitive, real-world medical diagnosis without expert verification
- Highly domain-specific or regulated fields outside the model’s training scope

---

## Usage

### Important Notes on Usage

1. **Stop strings**:  
   Because the model uses `<Thought>` and `<Output>` tags to separate internal reasoning from the final answer, you must supply `</Output>` as a stop sequence (or multiple stop sequences, if your framework allows it) to prevent the model from generating indefinitely.  

2. **Preventing `<|python_tag|>` bug**:  
   Sometimes the model starts with `<|python_tag|>` instead of the intended `<Thought>`. As a workaround, add `"<Thought>\n\n"` to the **end of your generation prompt** (in your chat template) to ensure it starts correctly.

3. **Libraries/Tools**:  
   - **Ollama** and **LM Studio**: Via GGUF file.  
   - **Jupyter Notebook** (or similar): Using the [Transformers library](https://github.com/huggingface/transformers).  

---

### In Ollama or LM Studio

If you are loading the GGUF file, follow the instructions provided by Ollama or LM Studio. Typically, this involves placing the model file in the appropriate directory (or pulling it directly from Hugging Face) and selecting it in the interface.

> **Example** (in Ollama CLI):
> ```bash
> ollama run hf.co/mkurman/llama-3.2-MEDIT-3B-o1
> ```

You can then issue prompts. Make sure to set stop sequences to `</Output>` (and possibly `</Thought>` if your environment supports multiple stops).
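
If you prefer to script against Ollama, a minimal sketch with the `ollama` Python client is shown below; this is an assumption on my part, not part of the model card's tooling (the `options` keys mirror Ollama's Modelfile parameters, and exact return types can vary between client versions):

```python
# Minimal sketch: querying the model through the `ollama` Python client.
# Assumes the model has been pulled as shown above and that your client
# version accepts an `options` dictionary for stop sequences and temperature.
import ollama

response = ollama.chat(
    model='hf.co/mkurman/llama-3.2-MEDIT-3B-o1',
    messages=[{'role': 'user', 'content': 'Explain why regular exercise helps the heart.'}],
    options={
        'stop': ['</Output>'],  # cut generation at the closing output tag
        'temperature': 0,       # deterministic output, as recommended above
    },
)
print(response['message']['content'])
```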

---

### In a Jupyter Notebook or Python Script (Transformers)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# 1. Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("mkurman/llama-3.2-MEDIT-3B-o1")
model = AutoModelForCausalLM.from_pretrained("mkurman/llama-3.2-MEDIT-3B-o1")

# 2. Define and encode your prompt
#    Append '<Thought>\n\n' to the generation prompt to ensure
#    the model starts with the correct reasoning tag.
prompt = [{'role': 'user', 'content': 'Write a short instagram post about hypertension in children. Finish with 3 hashtags'}]
prompt_text = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True) + '<Thought>\n\n'
inputs = tokenizer(prompt_text, return_tensors='pt')

# 3. Generate the response deterministically (greedy decoding).
#    If your generation method doesn't support stop sequences,
#    you can manually slice the model's output at '</Output>'.
output = model.generate(
    **inputs,  # passes input_ids and attention_mask
    max_new_tokens=256,
    do_sample=False,  # greedy decoding; equivalent to temperature 0.0
    # Recent transformers versions also accept stop sequences, e.g.
    # stop_strings=["</Output>"], tokenizer=tokenizer
)

# 4. Decode the output
decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
print(decoded_output)
```

**Note**: If your generation library does not allow direct stop sequences, you can manually parse and remove any tokens that appear after `</Output>`.
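
For example, a minimal post-processing sketch (plain Python string handling; the helper name is illustrative):

```python
# Fallback when stop sequences are unsupported: truncate the decoded text
# at the closing tag so nothing generated after '</Output>' is kept.
def truncate_at_output(decoded_text: str) -> str:
    end_tag = '</Output>'
    idx = decoded_text.find(end_tag)
    # Keep everything up to and including '</Output>'; if the tag never
    # appeared (e.g. generation hit max_new_tokens), return the text as-is.
    return decoded_text[:idx + len(end_tag)] if idx != -1 else decoded_text

print(truncate_at_output(decoded_output))
```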

---

## Example Prompt/Response

**Prompt**:
```
<Talk about the impact of regular exercise on cardiovascular health>
<Thought>
```
*(Remember to add `<Thought>\n\n` at the end if you see the `<|python_tag|>` bug.)*

**Model’s Reasoning** (`<Thought>` block):
```
Exercise improves heart function by ...
```

**Model’s Final Answer** (`<Output>` block):
```
Regular exercise has been shown to ...
</Output>
```

You would display the `<Output>` portion as the final user-facing answer.  
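
As a sketch, one way to separate the two blocks in code (the regex and helper name are illustrative, not part of the model's tooling):

```python
import re

# Split decoded text into the <Thought> reasoning block and the <Output>
# answer. Assumes well-formed tags; falls back to the raw text otherwise.
def split_reasoning_and_answer(decoded_text: str):
    thought = re.search(r'<Thought>(.*?)</Thought>', decoded_text, re.DOTALL)
    answer = re.search(r'<Output>(.*?)</Output>', decoded_text, re.DOTALL)
    reasoning = thought.group(1).strip() if thought else ''
    final = answer.group(1).strip() if answer else decoded_text
    return reasoning, final

reasoning, final_answer = split_reasoning_and_answer(decoded_output)
print(final_answer)  # show only the <Output> portion to the end user
```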

---

## Limitations and Bias

- **Hallucination**: The model may generate plausible-sounding but incorrect or nonsensical answers.  
- **Medical Information**: Never rely on this model as a source of truth; **it is not a certified medical professional**. Always verify with qualified experts before acting on medical advice.  
- **Biases**: The model’s outputs may reflect biases present in the training data. Users should evaluate content for fairness and accuracy.

---

## License and Citation

Please refer to the base model’s [Llama 3.2 Community License Agreement](LICENSE.txt) and any additional licenses from MedIT Solutions. If you use this model in your work, please cite:

```
@misc{mkurman2025llama3medit3bo1,
  title={{mkurman/llama-3.2-MEDIT-3B-o1}: A fine-tuned Llama 3.2 3B Instruct model for reasoning tasks},
  author={Kurman, Mariusz},
  year={2025},
  howpublished={\url{https://huggingface.co/mkurman/llama-3.2-MEDIT-3B-o1}}
}
```

---

## Contact

For questions, comments, or issues related to `mkurman/llama-3.2-MEDIT-3B-o1`, please open an [issue on the model repository](https://huggingface.co/mkurman/llama-3.2-MEDIT-3B-o1) or contact [mkurman](https://huggingface.co/mkurman).