Add chat template to README
README.md CHANGED
@@ -49,11 +49,17 @@ from transformers import pipeline
 
 pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-alpha", torch_dtype=torch.bfloat16, device_map="auto")
 
-# We use
-
-prompt =
+# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
+messages = [{"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate"}, {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}]
+prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
 outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
-
+print(outputs[0]["generated_text"])
+# <|system|>
+# You are a friendly chatbot who always responds in the style of a pirate.</s>
+# <|user|>
+# How many helicopters can a human eat in one sitting?</s>
+# <|assistant|>
+# Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food!
 ```
 
 ## Bias, Risks, and Limitations
@@ -83,6 +89,7 @@ Zephyr 7B Alpha achieves the following results on the evaluation set:
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
+
 - learning_rate: 5e-07
 - train_batch_size: 2
 - eval_batch_size: 4
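For readers of the diff: the added snippet leans entirely on the tokenizer's chat template, so the prompt formatting can be inspected without loading the 7B weights. A minimal sketch, assuming transformers v4.34 or newer (the release where `apply_chat_template` was introduced); only the tokenizer is downloaded:

```python
# Sketch: inspect what the chat template produces for zephyr-7b-alpha,
# without loading the model itself. Assumes transformers >= 4.34.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-alpha")

messages = [
    {"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate"},
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]

# tokenize=False returns the formatted string instead of token IDs;
# add_generation_prompt=True appends the assistant header so that
# generation continues in the assistant role.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```

`add_generation_prompt=True` is what makes the rendered prompt end with the `<|assistant|>` marker visible in the commented example output above; without it the model would be free to continue the user turn instead of answering.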
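The hyperparameter bullets in the second hunk correspond to standard `transformers.TrainingArguments` fields. A hedged mapping, shown only for orientation; the field names are the usual transformers ones, `output_dir` is a made-up placeholder, and nothing here is taken from this repo's actual training script:

```python
from transformers import TrainingArguments

# Only the three values visible in this hunk; every other argument is left
# at its default, and output_dir is a hypothetical placeholder.
args = TrainingArguments(
    output_dir="zephyr-sft-output",   # hypothetical
    learning_rate=5e-7,               # learning_rate: 5e-07
    per_device_train_batch_size=2,    # train_batch_size: 2
    per_device_eval_batch_size=4,     # eval_batch_size: 4
)
```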