---
license: apache-2.0
language:
- en
---

# Synthesized English Speech with Grammatical Errors Dataset (SESGE)

A dataset containing English speech with grammatical errors, along with the corresponding transcriptions. Utterances are synthesized using a text-to-speech model, while the grammatically incorrect texts come from the [C4_200M](https://aclanthology.org/2021.bea-1.4) synthetic dataset.

## Introduction

The Synthesized English Speech with Grammatical Errors (SESGE) dataset was created to support the [DeMINT](https://github.com/transducens/demint) project, developed at Universitat d'Alacant, Spain. The objective of DeMINT was to build an intelligent tutoring system that helps non-native English speakers improve their language skills by analyzing, and providing feedback on, the transcripts of their online meetings. As part of this, a system able to transcribe spoken English while keeping the original grammatical errors intact was essential. Existing speech-to-text (STT) models such as Whisper tend to correct grammatical errors due to their strong internal language models, making them unsuitable for this task. SESGE was therefore created to train a custom STT model that accurately transcribes spoken English with the grammatical errors preserved.

## Dataset description

Given the absence of a suitable dataset for training an error-preserving STT system, DeMINT fine-tuned a Whisper model with data from two primary sources:

- [COREFL](https://www.peterlang.com/document/1049094) (dataset available [here](http://corefl.learnercorpora.com)). The COREFL dataset consists of essays written by non-native English students at various proficiency levels. While some of these essays have associated audio recordings, most do not. To expand the audio data, we used the [StyleTTS2](https://arxiv.org/abs/2306.07691) text-to-speech model to generate synthetic audio for the remaining texts. Multiple voices were used for synthesis to increase the diversity of the dataset.
- [C4_200M](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction). The C4_200M dataset contains synthetically generated English sentences with grammatical errors, produced using a corruption model. As with COREFL, StyleTTS2 was used to synthesize audio from these texts, again with multiple voices to enhance the training set. This source primarily contributes varied sentence structures and error types, albeit with a limited number of distinct voices.

Due to licensing restrictions associated with the COREFL dataset, only the portion derived from C4_200M is publicly available as part of the SESGE dataset. That is, while COREFL data was used during training, only the C4_200M-based data is included in this release. The training set comprises 28,592 utterances from C4_200M; the validation and test sets contain 700 samples each.

## Derived models

Two models were trained on the SESGE dataset by fine-tuning Whisper, enabling error-preserving STT. They are available on the Hugging Face Hub:

- [Error-Preserving Whisper model](https://huggingface.co/Transducens/error-preserving-whisper)
- [Error-Preserving Whisper distilled model](https://huggingface.co/Transducens/error-preserving-whisper-distilled)

Both models have been optimized to transcribe spoken English while retaining grammatical errors, making them suitable for language-learning applications where fidelity to the speaker's errors is essential.

## How to cite this work

If you use the SESGE dataset, please cite the following paper:

```bibtex
@inproceedings{demint2024,
  author    = {Pérez-Ortiz, Juan Antonio and Esplà-Gomis, Miquel and Sánchez-Cartagena, Víctor M.
               and Sánchez-Martínez, Felipe and Chernysh, Roman and Mora-Rodríguez, Gabriel and Berezhnoy, Lev},
  title     = {{DeMINT}: Automated Language Debriefing for English Learners via {AI} Chatbot Analysis of Meeting Transcripts},
  booktitle = {Proceedings of the 13th Workshop on NLP for Computer Assisted Language Learning},
  month     = oct,
  year      = {2024},
  url       = {https://aclanthology.org/volumes/2024.nlp4call-1/},
}
```
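
## Example usage

As a minimal sketch of how the derived models can be used for error-preserving transcription (assuming the `transformers` library is installed; the audio file name below is a hypothetical placeholder, not part of this dataset):

```python
# Sketch: transcribe learner speech while preserving grammatical errors,
# using the distilled error-preserving Whisper model from the Hub.
from transformers import pipeline


def transcribe(audio_path: str) -> str:
    """Return a transcription that keeps the speaker's grammatical errors."""
    asr = pipeline(
        "automatic-speech-recognition",
        model="Transducens/error-preserving-whisper-distilled",
    )
    return asr(audio_path)["text"]


if __name__ == "__main__":
    # "learner_utterance.wav" is a placeholder for your own audio file.
    print(transcribe("learner_utterance.wav"))
```

The non-distilled model can be swapped in by changing the `model` argument; the distilled variant trades some accuracy for faster inference.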