---
license: llama3.2
datasets:
- O1-OPEN/OpenO1-SFT
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
library_name: transformers
tags:
- reasoning
- o1
---

# Model Card: mkurman/llama-3.2-MEDIT-3B-o1

This is an **o1-like reasoning** model fine-tuned from **MedIT Solutions Llama 3.2 3B Instruct** (itself a variant of **Meta Llama 3.2 3B Instruct**). The model introduces specific tags (`<Thought>` and `<Output>`) for chain-of-thought style text generation, with a focus on instruct-style reasoning tasks.

This model was fine-tuned for exact matching rather than generating a diverse distribution. I therefore recommend testing it with `do_sample=False` (or your framework's equivalent of `temperature=0.0`) for deterministic outputs.

---

## Model Details

**Model name**: `mkurman/llama-3.2-MEDIT-3B-o1`

**Type**: Small Language Model (SLM)

**Base model**: [MedIT Solutions](https://huggingface.co/meditsolutions) Llama 3.2 3B Instruct (derived from Meta Llama 3.2 3B Instruct)

**Size**: 3 billion parameters

**License**: llama3.2

**Intended Use Cases**:

- General question answering
- Instruction-based generation
- Reasoning and chain-of-thought exploration

**Not Recommended For**:

- Sensitive, real-world medical diagnosis without expert verification
- Highly domain-specific or regulated fields outside the model’s training scope

---

## Usage

### Important Notes on Usage

1. **Stop strings**: Because the model uses `<Thought>` and `<Output>` tags to separate internal reasoning from the final answer, you must supply `</Output>` as a stop sequence (or multiple stop sequences, if your framework allows) to keep the model from generating indefinitely.
2. **Preventing the `<|python_tag|>` bug**: The model sometimes starts with `<|python_tag|>` instead of the intended `<Thought>`. As a workaround, append `"<Thought>\n\n"` to the **end of your generation prompt** (in your chat template) so it starts correctly.
3. **Libraries/Tools**:
   - **Ollama** and **LM Studio**: via the GGUF file.
   - **Jupyter Notebook** (or similar): via the [Transformers library](https://github.com/huggingface/transformers).

---

### In Ollama or LM Studio

If you are loading the GGUF file, follow the instructions provided by Ollama or LM Studio. Typically this involves placing the model file in the appropriate directory and selecting it within the interface.

> **Example** (in the Ollama CLI):
> ```bash
> ollama run hf.co/mkurman/llama-3.2-MEDIT-3B-o1
> ```

You can then issue prompts. Make sure to set `</Output>` as a stop sequence (and possibly others, if your environment supports multiple stops).

---

### In a Jupyter Notebook or Python Script (Transformers)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# 1. Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("mkurman/llama-3.2-MEDIT-3B-o1")
model = AutoModelForCausalLM.from_pretrained("mkurman/llama-3.2-MEDIT-3B-o1")

# 2. Define and encode your prompt.
#    Append '<Thought>\n\n' at the end to ensure
#    the model opens with the correct reasoning tag.
prompt = [{'role': 'user', 'content': 'Write a short instagram post about hypertension in children. Finish with 3 hashtags'}]

inputs = tokenizer(
    tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True) + '<Thought>\n\n',
    return_tensors='pt',
)

# 3. Generate a response with stop sequences (if your generation method supports them).
#    If it doesn't, you can manually slice the model's output at '</Output>'.
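# If your installed transformers is recent enough (`stop_strings` was added
# around v4.39), generate() can stop on strings directly; you must also pass
# the tokenizer so the strings can be matched. Check your version before
# relying on this sketch:
#
#   output = model.generate(**inputs, max_new_tokens=256, do_sample=False,
#                           stop_strings=['</Output>'], tokenizer=tokenizer)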
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,  # greedy decoding; temperature is ignored when sampling is off
    # Some generation methods or libraries also let you specify stop sequences
    # directly, e.g. if your environment supports it:
    # stop=["</Output>"]
)

# 4. Decode the output
decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
print(decoded_output)
```

**Note**: If your generation library does not allow direct stop sequences, you can manually parse the text and remove anything that appears after `</Output>` (see the sketch at the end of this card).

---

## Example Prompt/Response

**Prompt**:

```
```

*(Remember to append `<Thought>\n\n` at the end if you see the `<|python_tag|>` bug.)*

**Model’s Reasoning** (`<Thought>` block):

```
Exercise improves heart function by ...
```

**Model’s Final Answer** (`<Output>` block):

```
Regular exercise has been shown to ...
```

You would display the `<Output>` portion as the final user-facing answer.

---

## Limitations and Bias

- **Hallucination**: The model may generate plausible-sounding but incorrect or nonsensical answers.
- **Medical Information**: Never rely on this model as a source of truth; **it is not a certified medical professional**. Always verify with qualified experts before acting on medical advice.
- **Biases**: The model’s outputs may reflect biases present in the training data. Users should evaluate content for fairness and accuracy.

---

## License and Citation

Please refer to the base model’s [Llama 3.2 Community License Agreement](LICENSE.txt) and any additional licenses from MedIT Solutions.

If you use this model in your work, please cite:

```
@misc{mkurman2025llama3medit3bo1,
  title={{mkurman/llama-3.2-MEDIT-3B-o1}: A fine-tuned Llama 3.2 3B Instruct model for reasoning tasks},
  author={Kurman, Mariusz},
  year={2025},
  howpublished={\url{https://huggingface.co/mkurman/llama-3.2-MEDIT-3B-o1}}
}
```

---

## Contact

For questions, comments, or issues related to `mkurman/llama-3.2-MEDIT-3B-o1`, please open an [issue on the model repository](https://huggingface.co/mkurman/llama-3.2-MEDIT-3B-o1) or contact [mkurman](https://huggingface.co/mkurman).
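---

## Appendix: Trimming at `</Output>` Manually

If your stack has no stop-sequence support, you can slice the decoded text in post-processing. The sketch below is illustrative, not part of this repository: the helper name `extract_output` is hypothetical, and it assumes the generation follows the `<Thought>...</Thought>` / `<Output>...</Output>` format described above.

```python
def extract_output(decoded: str) -> str:
    """Return the user-facing answer from a decoded generation.

    Falls back to the raw text when the expected tags are missing.
    """
    start = decoded.find('<Output>')
    if start == -1:
        return decoded.strip()  # no <Output> tag: return everything as-is
    start += len('<Output>')
    end = decoded.find('</Output>', start)  # -1 if generation was cut off early
    return (decoded[start:] if end == -1 else decoded[start:end]).strip()


# Example: show only the final answer from the Transformers snippet above
# print(extract_output(decoded_output))
```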