---
license: agpl-3.0
language:
  - en
size_categories:
  - 1K<n<10K
---

A single channel's conversations from Discord-Data, converted into ShareGPT format. The dataset has also been optimized for the Llama 3.1 tokenizer, with each conversation capped at a maximum of 8192 tokens.
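For illustration, here is a minimal sketch (not the exact conversion script) of how messages could be mapped into ShareGPT-style entries and length-checked with the Llama 3.1 tokenizer. The tokenizer repo name, the alternating role assignment, and the helper names are assumptions, not details from this dataset's pipeline.

```python
from transformers import AutoTokenizer

# Llama 3.1 tokenizer; the exact repo id is an assumption for this sketch.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

def to_sharegpt(messages):
    """Illustrative: alternate raw messages between 'human' and 'gpt' turns."""
    roles = ["human", "gpt"]
    return {"conversations": [
        {"from": roles[i % 2], "value": text} for i, text in enumerate(messages)
    ]}

def conversation_tokens(entry):
    """Rough token count of a conversation's text under the Llama 3.1 tokenizer."""
    text = "\n".join(turn["value"] for turn in entry["conversations"])
    return len(tokenizer(text, add_special_tokens=False)["input_ids"])
```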

Since I'm using Unsloth, I had to make another adjustment: for some reason it adds a 28-token system prompt to each conversation, so I also need to account for that in this dataset. The Llama 3.1 chat format additionally uses 7 tokens per message just for the formatting.
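As a rough sketch of the per-conversation budget described above (the 28-token Unsloth system prompt and 7 formatting tokens per message come from the numbers in this README; the constants and helper name are hypothetical):

```python
MAX_TOKENS = 8192          # hard cap per conversation
SYSTEM_PROMPT_TOKENS = 28  # system prompt Unsloth adds to every conversation
PER_MESSAGE_OVERHEAD = 7   # Llama 3.1 chat-format tokens per message

def fits_budget(message_token_counts):
    """True if a conversation stays within 8192 tokens once overhead is included."""
    total = (SYSTEM_PROMPT_TOKENS
             + sum(message_token_counts)
             + PER_MESSAGE_OVERHEAD * len(message_token_counts))
    return total <= MAX_TOKENS

# Example: 100 messages of ~50 tokens each ->
# 28 + 100*50 + 100*7 = 5728 tokens, which fits under the 8192-token cap.
print(fits_budget([50] * 100))  # True
```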