---
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: transcription
    dtype: string
  - name: speech_status
    dtype: string
  - name: gender
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 1741001462.72
    num_examples: 16552
  download_size: 1564871166
  dataset_size: 1741001462.72
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- automatic-speech-recognition
- audio-classification
language:
- en
tags:
- medical
- dysarthria
- TORGO
- dysarthric speech
size_categories:
- n<1B
---

## Short words
These are useful for studying speech acoustics without the need for word boundary detection. This category includes the following:

- Repetitions of the English digits, 'yes', 'no', 'up', 'down', 'left', 'right', 'forward', 'back', 'select', 'menu', and the international radio alphabet (e.g., 'alpha', 'bravo', 'charlie'). These words are useful for hypothetical command software for accessibility.
- 50 words from the word intelligibility section of the Frenchay Dysarthria Assessment (Enderby, 1983).
- 360 words from the word intelligibility section of the Yorkston-Beukelman Assessment of Intelligibility of Dysarthric Speech (Yorkston and Beukelman, 1981).
- The 10 most common words in the British National Corpus.

## Restricted sentences
In order to utilize lexical, syntactic, and semantic processing in ASR, full and syntactically correct sentences were recorded. These include the following:

- Preselected phoneme-rich sentences such as "The quick brown fox jumps over the lazy dog", "She had your dark suit in greasy wash water all year", and "Don't ask me to carry an oily rag like that."
- The Grandfather passage.
- 162 sentences from the sentence intelligibility section of the Yorkston-Beukelman Assessment of Intelligibility of Dysarthric Speech (Yorkston and Beukelman, 1981).
- The 460 TIMIT-derived sentences used as prompts in the MOCHA-TIMIT database (Wrench, 1999; Zue et al., 1989).

## Dataset Structure

- Each data point comprises the path to the audio file and its transcription.
- Additional fields include gender, speech status (dysarthria or healthy), and duration.
- No dev/test split is provided, as there is no standard split for this dataset.
- Filenames follow the pattern `speakerNumber_sessionNumber_micType_utteranceNumber.wav` (a minimal parsing sketch appears at the end of this card).
- The speaker code has the format gender-speechStatus-speakerNumber (e.g., FC01 = female control #1, M04 = male dysarthric #4).

```python
from datasets import load_dataset

dataset = load_dataset("abnerh/TORGO-database")
print(dataset)
```

```
DatasetDict({
    train: Dataset({
        features: ['audio', 'transcription', 'speech_status', 'gender', 'duration'],
        num_rows: 16552
    })
})
```

```python
print(dataset['train'][0])
```

```
{'audio': {'path': 'FC01_1_arrayMic_0066.wav',
           'array': array([ 0.00125122,  0.00387573,  0.00115967, ...,  0.00149536, -0.00326538,  0.00027466]),
           'sampling_rate': 16000},
 'transcription': 'alpha',
 'speech_status': 'healthy',
 'gender': 'female',
 'duration': 3.3}
```

```python
print(dataset['train'][12200])
```

```
{'audio': {'path': 'M02_1_headMic_0066.wav',
           'array': array([ 0.00115967,  0.00106812,  0.00091553, ..., -0.00073242, -0.00082397, -0.00054932]),
           'sampling_rate': 16000},
 'transcription': 'yet he still thinks as swiftly as ever',
 'speech_status': 'dysarthria',
 'gender': 'male',
 'duration': 7.605}
```

## Citation

Use of this database is free for academic (non-profit) purposes. If you use these data in any publication, you must reference at least one of the following papers:

- Rudzicz, F., Hirst, G., Van Lieshout, P. (2012) Vocal tract representation in the recognition of cerebral palsied speech. The Journal of Speech, Language, and Hearing Research, 55(4):1190–1207, August.
- Rudzicz, F., Namasivayam, A.K., Wolff, T. (2012) The TORGO database of acoustic and articulatory speech from speakers with dysarthria. Language Resources and Evaluation, 46(4), pages 523–541. This is likely the most informative reference about the database itself.
- Rudzicz, F. (2012) Using articulatory likelihoods in the recognition of dysarthric speech. Speech Communication, 54(3), March, pages 430–444.
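As referenced in the Dataset Structure section, the filename convention can be unpacked programmatically. The sketch below is illustrative rather than part of the dataset tooling: the helper name `parse_torgo_filename` is invented here, and it assumes every path strictly follows the `speakerNumber_sessionNumber_micType_utteranceNumber.wav` pattern with a numeric session token, as in the examples above.

```python
import re

# Hypothetical helper: recover speaker metadata from a TORGO filename.
# Assumes the pattern speakerNumber_sessionNumber_micType_utteranceNumber.wav,
# e.g. "FC01_1_arrayMic_0066.wav" or "M02_1_headMic_0066.wav".
FILENAME_RE = re.compile(
    r"^(?P<gender>[FM])(?P<control>C?)(?P<num>\d+)_(?P<session>\d+)_(?P<mic>\w+)_(?P<utterance>\d+)\.wav$"
)

def parse_torgo_filename(path):
    match = FILENAME_RE.match(path)
    if match is None:
        raise ValueError(f"Unexpected filename: {path}")
    return {
        "speaker": match["gender"] + match["control"] + match["num"],
        "gender": "female" if match["gender"] == "F" else "male",
        "speech_status": "healthy" if match["control"] == "C" else "dysarthria",
        "session": int(match["session"]),
        "mic": match["mic"],
        "utterance": int(match["utterance"]),
    }

print(parse_torgo_filename("FC01_1_arrayMic_0066.wav"))
# {'speaker': 'FC01', 'gender': 'female', 'speech_status': 'healthy',
#  'session': 1, 'mic': 'arrayMic', 'utterance': 66}
```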
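Similarly, the `speech_status` field documented above can be used to split the corpus into dysarthric and control subsets, e.g. for the audio-classification task. This is a minimal sketch using only fields shown on this card; the variable names and the printed hour totals are illustrative, not reported statistics.

```python
from datasets import load_dataset

dataset = load_dataset("abnerh/TORGO-database", split="train")

# Split into dysarthric and control subsets via the speech_status field.
# input_columns avoids decoding the audio column during filtering.
dysarthric = dataset.filter(lambda status: status == "dysarthria", input_columns="speech_status")
healthy = dataset.filter(lambda status: status == "healthy", input_columns="speech_status")

# Approximate size of each subset in hours, from the per-utterance duration field.
print(f"{len(dysarthric)} dysarthric utterances, {sum(dysarthric['duration']) / 3600:.1f} h")
print(f"{len(healthy)} control utterances, {sum(healthy['duration']) / 3600:.1f} h")
```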