therealvul committed
Commit 3862f25 · verified · 1 Parent(s): a6f5175

Create README.md

Files changed (1)
1. README.md +6 -0
README.md ADDED
@@ -0,0 +1,6 @@
+ ---
+ language:
+ - en
+ ---
+ This tokenizer was trained on a small corpus of concatenated ARPAbet pronunciation tokens plus punctuation, generated with the Python `g2p_en` library over the entire `synthbot/pony-speech` dataset and 240k lines from the `generics_kb_best` subset of `community-datasets/generics_kb`.
+ For example, `But one on one, let's clean it.` -> `BAH1T WAH1N AA1N WAH1N , LEH1TS KLIY1N IH1T .` The tokenizer uses the Unigram model with a vocabulary size of 1024.