Multi-turn chat? #1
by mukundtibrewala · opened
Thanks so much for releasing this! Looking forward to fine-tuning it and running inference via vLLM.
Quick question: what prompt template should I use for multi-turn chat? Also, I noticed that the model card only shows the use of `[INST]` tokens, not `<s>` tokens. Is this intentional?
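For context, the multi-turn format I had assumed is the common Mistral-style instruct template, where `<s>` appears once at the start and each completed assistant turn ends with `</s>`. This is only my assumption about the convention (the `build_prompt` helper below is hypothetical, not from the model card), so please correct me if this model expects something different:

```python
def build_prompt(turns):
    """Build a Mistral-style multi-turn prompt.

    turns: list of (user, assistant) pairs; the assistant entry of the
    final turn may be None when we want the model to generate the reply.
    Assumes the common Mistral-7B-Instruct convention: a single <s> BOS,
    [INST] ... [/INST] around user messages, </s> after assistant replies.
    """
    prompt = "<s>"
    for user, assistant in turns:
        prompt += f"[INST] {user} [/INST]"
        if assistant is not None:
            prompt += f" {assistant}</s>"
    return prompt

# Example: two turns, waiting on the model's second reply.
print(build_prompt([
    ("Hello!", "Hi, how can I help?"),
    ("What is the capital of France?", None),
]))
```

If the tokenizer ships a chat template, `tokenizer.apply_chat_template(...)` would settle this definitively; is that available here?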