---
language:
- en
---
This tokenizer was trained on a small corpus of concatenated ARPAbet pronunciation tokens plus punctuation, produced with the Python `g2p_en` library over the entire `synthbot/pony-speech` dataset and 240k lines of `generics_kb_best` from `community-datasets/generics_kb`. For example, `But one on one, let's clean it.` becomes `BAH1T WAH1N AA1N WAH1N , LEH1TS KLIY1N IH1T .`
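The preprocessing script isn't included here, but a minimal sketch of the conversion, assuming the `g2p_en` package, might look like the following. The per-word joining logic (including how contractions such as `let's` are grouped) is an assumption and may differ from the original pipeline:

```python
# Sketch: convert text to the concatenated-ARPAbet format described above.
# Assumes `pip install g2p_en`; joining of phonemes per word is an assumption.
from g2p_en import G2p

g2p = G2p()

def to_arpabet(text: str) -> str:
    """Concatenate each word's phonemes; separate words/punctuation with spaces."""
    words, current = [], []
    # g2p_en returns a flat list of phonemes and punctuation,
    # with ' ' tokens marking word boundaries.
    for tok in g2p(text):
        if tok == " ":
            if current:
                words.append("".join(current))
                current = []
        else:
            current.append(tok)
    if current:
        words.append("".join(current))
    return " ".join(words)

print(to_arpabet("But one on one, let's clean it."))
```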
The tokenizer itself uses a Unigram model with a vocabulary size of 1024.
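For reference, a hedged sketch of training a comparable tokenizer with the Hugging Face `tokenizers` library is shown below. Only the Unigram model and the 1024 vocabulary size come from this card; the pre-tokenizer, special tokens, and corpus filename are assumptions:

```python
# Sketch: train a Unigram tokenizer with vocab_size=1024 on the ARPAbet corpus.
from tokenizers import Tokenizer
from tokenizers.models import Unigram
from tokenizers.pre_tokenizers import WhitespaceSplit
from tokenizers.trainers import UnigramTrainer

tokenizer = Tokenizer(Unigram())
# The corpus is already space-delimited, so split on whitespace only (assumption).
tokenizer.pre_tokenizer = WhitespaceSplit()

trainer = UnigramTrainer(
    vocab_size=1024,
    special_tokens=["<unk>"],  # assumed special token
    unk_token="<unk>",
)
# "arpabet_corpus.txt" is a hypothetical file with one ARPAbet line per utterance.
tokenizer.train(files=["arpabet_corpus.txt"], trainer=trainer)
tokenizer.save("tokenizer.json")
```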