mkurman committed
Commit 0365640 · verified · 1 Parent(s): 8e0d9ee

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -82,7 +82,7 @@ model = AutoModelForCausalLM.from_pretrained("mkurman/llama-3.2-MEDIT-3B-o1")
 prompt = [{'role': 'user', 'content': 'Write a short instagram post about hypertension in children. Finish with 3 hashtags'}]
 input_ids = tokenizer(tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True) + '<Thought>\n\n', return_tensors='pt')
 
-# 4. Generate response with stop sequences (if your generation method supports them)
+# 3. Generate response with stop sequences (if your generation method supports them)
 # If your method doesn't support stop sequences directly,
 # you can manually slice the model's output at '</Output>'.
 output = model.generate(
@@ -95,7 +95,7 @@ output = model.generate(
 # stop=["</Output>"]
 )
 
-# 5. Decode the output
+# 4. Decode the output
 decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
 print(decoded_output)
 ```
@@ -138,7 +138,7 @@ You would display the `<Output>` portion as the final user-facing answer.
 
 ## License and Citation
 
-Please refer to the base model’s **Llama 3.2 license from Meta** and any additional licenses from MedIT Solutions. If you use this model in your work, please cite:
+Please refer to the base model’s [Llama 3.2 Community License Agreement](LICENSE.txt) and any additional licenses from MedIT Solutions. If you use this model in your work, please cite:
 
 ```
 @misc{mkurman2025llama3medit3bo1,
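
A note for readers applying this change: the renumbered comments describe either stopping generation at '</Output>' (when supported) or manually slicing the decoded text. A minimal sketch of both under the `transformers` API, assuming `model`, `tokenizer`, and `input_ids` are defined as in the README snippet; the `StopOnSubstring` class and the generation settings are illustrative, not part of the repo:

```python
import torch
from transformers import StoppingCriteria, StoppingCriteriaList

# Illustrative (not from the repo): stop generation once the text decoded
# after the prompt contains '</Output>'.
class StopOnSubstring(StoppingCriteria):
    def __init__(self, tokenizer, stop: str, prompt_len: int):
        self.tokenizer = tokenizer
        self.stop = stop
        self.prompt_len = prompt_len

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        # Only inspect tokens generated after the prompt.
        new_text = self.tokenizer.decode(input_ids[0, self.prompt_len:], skip_special_tokens=True)
        return self.stop in new_text

prompt_len = input_ids['input_ids'].shape[1]
output = model.generate(
    input_ids['input_ids'],
    max_new_tokens=2048,  # assumed budget; the README elides the generate arguments
    stopping_criteria=StoppingCriteriaList(
        [StopOnSubstring(tokenizer, '</Output>', prompt_len)]
    ),
)

# Fallback from the comments: slice the decoded text at '</Output>'.
decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
final_answer = decoded_output.split('</Output>')[0]
print(final_answer)
```

Stopping either way keeps everything up to the closing tag intact, which matches the README's advice to display only the `<Output>` portion as the final user-facing answer.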