Update README.md
README.md CHANGED
@@ -18,4 +18,6 @@ dataset_info:
---

# Dataset Card for "santacoder-token-usage"

Token usage counts per language when tokenizing the `"bigcode/stack-dedup-alt-comments"` dataset with the `santacoder` tokenizer. There are fewer tokens than in the tokenizer's vocabulary because of a mismatch between the datasets used to train the tokenizer and those that ended up being used to train the model.
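
Below is a minimal sketch of how such per-language counts could be computed. The tokenizer checkpoint (`bigcode/santacoder`), the `train` split, and the `content`/`lang` column names are assumptions for illustration and may not match the exact pipeline used to build this dataset:

```python
from collections import Counter

from datasets import load_dataset
from transformers import AutoTokenizer

# Assumed identifiers: tokenizer checkpoint, split name, and column names
# ("content", "lang") are illustrative and may differ from the actual setup.
tokenizer = AutoTokenizer.from_pretrained("bigcode/santacoder")
ds = load_dataset("bigcode/stack-dedup-alt-comments", split="train", streaming=True)

usage = {}  # language -> Counter mapping token id to number of occurrences
for example in ds:
    lang = example["lang"]
    token_ids = tokenizer(example["content"]).input_ids
    usage.setdefault(lang, Counter()).update(token_ids)

# Tokens that never occur in the data are absent from the counters, which is
# why the per-language counts cover fewer tokens than the tokenizer vocabulary.
for lang, counts in usage.items():
    print(lang, "distinct tokens used:", len(counts))
```
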
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)