bleysg and Rocketknight1 committed
Commit acef37e
1 Parent(s): 953ffc4

Add chat template (#8)


- Add chat template (777d488ed5aca08804670593ab475e5c8de7868c)
- Fix template typo (f82a8e1570c6156c3a1d2a02c31e2ccac6471393)
- Explain chat template in README (50a0c3c092a6deea0e36272d4e6509cdd9676219)


Co-authored-by: Matthew Carrigan <[email protected]>

Files changed (2)
  1. README.md +28 -1
  2. tokenizer_config.json +1 -0
README.md CHANGED

````diff
@@ -64,6 +64,33 @@ We used [OpenAI's Chat Markup Language (ChatML)](https://github.com/openai/opena
 
 This means that, e.g., in [oobabooga](https://github.com/oobabooga/text-generation-webui/) the "`MPT-Chat`" instruction template should work, as it also uses ChatML.
 
+This formatting has also been set as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating),
+which means that lists of messages can be formatted for you with the `apply_chat_template()` method:
+
+```python
+chat = [
+  {"role": "user", "content": "Hello, how are you?"},
+  {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
+  {"role": "user", "content": "I'd like to show off how chat templating works!"},
+]
+tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
+```
+
+which will yield:
+
+```
+<|im_start|>user
+Hello, how are you?<|im_end|>
+<|im_start|>assistant
+I'm doing great. How can I help you today?<|im_end|>
+<|im_start|>user
+I'd like to show off how chat templating works!<|im_end|>
+<|im_start|>assistant
+```
+
+If you use `tokenize=True` and `return_tensors="pt"` instead, then you will get a tokenized
+and formatted conversation ready to pass to `model.generate()`.
+
 ## Example Prompt Exchange
 
 ```
@@ -173,4 +200,4 @@ Commodity cost was ~$400.
   archivePrefix={arXiv},
   primaryClass={cs.AI}
 }
-```
+```
````
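The last paragraph of the new README section mentions the `tokenize=True` path without showing it end to end. A minimal sketch of that flow, assuming a ChatML-tuned checkpoint; the repo ID and generation settings below are placeholders, not part of this commit:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo ID -- substitute the model this commit belongs to.
repo_id = "your-org/your-chatml-model"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

chat = [
    {"role": "user", "content": "Hello, how are you?"},
]

# tokenize=True + return_tensors="pt" returns input IDs instead of a string;
# add_generation_prompt=True appends the trailing "<|im_start|>assistant\n".
input_ids = tokenizer.apply_chat_template(
    chat, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)

# eos_token is "<|im_end|>" in this config, so generation stops at the turn end.
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```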
tokenizer_config.json CHANGED

```diff
@@ -45,6 +45,7 @@
   },
   "additional_special_tokens": [],
   "bos_token": "<s>",
+  "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
   "clean_up_tokenization_spaces": false,
   "eos_token": "<|im_end|>",
   "legacy": true,
```