---
license: cc-by-sa-4.0
language:
- en
size_categories:
- 10M<n<100M
---

This is the dataset presented in my [ASRU-2023 paper](https://arxiv.org/abs/2309.17267).

It consists of multiple files:

Keys2Paragraphs.txt (internal name in scripts: yago_wiki.txt):
4.3 million unique words/phrases (English Wikipedia titles or their parts) occurring in 33.8 million English Wikipedia paragraphs.

Keys2Corruptions.txt (internal name in scripts: sub_misspells.txt):
26 million phrase pairs in the corrupted phrase inventory (original phrases paired with corrupted variants as recognized by different ASR models).

Keys2Related.txt (internal name in scripts: related_phrases.txt):
62.7 million phrase pairs in the related phrase inventory.

FalsePositives.txt (internal name in scripts: false_positives.txt):
449 thousand phrase pairs in the false positive phrase inventory.

NgramMappings.txt (internal name in scripts: replacement_vocab_filt.txt):
a dictionary of 5.5 million character n-gram mappings.

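The exact column layout of the .txt inventories is not documented in this card, so the snippet below is only a minimal loading sketch: it assumes one pair per line, with the first two tab-separated columns holding the key phrase and its paired phrase (corruption, related phrase, or false positive), and it ignores any extra columns such as counts. Check the actual files before relying on it.

```python
from collections import defaultdict


def load_phrase_pairs(path, sep="\t"):
    """Load a two-column phrase-pair file (e.g. Keys2Corruptions.txt) into a dict.

    Assumption: one pair per line; the first two columns are the key phrase
    and its paired phrase. Any additional columns are ignored.
    """
    pairs = defaultdict(list)
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split(sep)
            if len(parts) < 2:
                continue  # skip empty or malformed lines
            pairs[parts[0]].append(parts[1])
    return pairs


# Hypothetical usage:
# corruptions = load_phrase_pairs("Keys2Corruptions.txt")
# print(corruptions.get("abigail williams", []))
```
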
asr:
outputs of the G2P + TTS + ASR pipeline using 4 different ASR systems (Conformer-CTC was used twice); contains pairs of an initial phrase and its recognition result.
The .wav files are not included, but they can be reproduced by feeding the G2P outputs to TTS.

giza:
raw outputs of GIZA++ alignments for each corpus; NgramMappings.txt and Keys2Corruptions.txt are derived from these.

This [example code](https://github.com/bene-ges/nemo_compatible/blob/spellmapper_new_false_positive_sampling/scripts/nlp/en_spellmapper/dataset_preparation/build_training_data_from_wiki_en_asr_adapt.sh) shows how to generate training data from this dataset.
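
The linked script implements the actual pipeline. Purely to illustrate how the inventories fit together, the toy sketch below substitutes a sampled corruption for a key phrase inside a paragraph to simulate an ASR-style error; the function name, the reuse of load_phrase_pairs() from the sketch above, and the assumption that keys appear verbatim in their paragraphs are illustrative, not part of the real procedure.

```python
import random


def corrupt_paragraph(paragraph, key, corruptions, rng=random):
    """Toy illustration only (not the actual training-data pipeline):
    replace a known key phrase in a paragraph with one of its ASR
    corruptions to simulate a misrecognized text."""
    variants = corruptions.get(key, [])
    if not variants or key not in paragraph:
        return paragraph
    return paragraph.replace(key, rng.choice(variants), 1)


# Hypothetical usage, reusing load_phrase_pairs() from the sketch above:
# corruptions = load_phrase_pairs("Keys2Corruptions.txt")
# print(corrupt_paragraph("she met abigail williams in salem",
#                         "abigail williams", corruptions))
```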