Matt committed
Commit 8872e45
1 Parent(s): eeb56b9
Update README

README.md CHANGED
@@ -205,6 +205,21 @@ Hello, who are you?<|im_end|>
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by a man named Teknium, who designed me to assist and support users with their needs and requests.<|im_end|>
```

+This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
+`tokenizer.apply_chat_template()` method:
+
+```python
+messages = [
+    {"role": "system", "content": "You are Hermes 2."},
+    {"role": "user", "content": "Hello, who are you?"}
+]
+gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
+model.generate(gen_input)
+```
|
219 |
+
|
220 |
+
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
|
221 |
+
that the model continues with an assistant response.
|
222 |
+
|
To utilize the prompt format without a system prompt, simply leave the line out.

Currently, I recommend using LM Studio for chatting with Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
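
As a rough sketch of what the added instructions describe, assuming a `tokenizer` and `model` already loaded as in the example above and a standard ChatML chat template: the effect of `add_generation_prompt=True` can be inspected by rendering the prompt as text, and the system message can simply be left out of the list if you don't want one.

```python
# Minimal sketch (assumes `tokenizer` and `model` are loaded as in the README
# example and that the model ships a standard ChatML chat template).
messages = [
    # The system message is optional; omit this entry to prompt without one.
    {"role": "system", "content": "You are Hermes 2."},
    {"role": "user", "content": "Hello, who are you?"},
]

# Render the prompt as text to see what add_generation_prompt does:
# without it, the string ends after the user turn's <|im_end|>;
# with it, "<|im_start|>assistant\n" is appended so the model replies as the assistant.
without_header = tokenizer.apply_chat_template(messages, tokenize=False)
with_header = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# For generation, tokenize with the assistant header in place.
gen_input = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output_ids[0]))
```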