japerez committed 6401f48 (verified) · 1 parent: b7e8587

Update README.md

Files changed (1): README.md (+9 -9)
README.md (updated)

## Introduction

The Synthesized English Speech with Grammatical Errors (SESGE) dataset was developed to support the [DeMINT](https://github.com/transducens/demint) project, carried out at the Universitat d'Alacant, Spain. The objective of DeMINT was to develop an intelligent tutoring system that helps non-native English speakers improve their language skills by analyzing and providing feedback on the transcripts of their online meetings. As part of this, a system able to transcribe spoken English while keeping the original grammatical errors intact was essential. Existing speech-to-text (STT) models like Whisper tend to correct grammatical errors due to their strong internal language models, making them unsuitable for this task.
 
Given the absence of a suitable dataset for training an error-preserving STT system, DeMINT fine-tuned a Whisper model with data from two primary sources:

- [COREFL](https://www.peterlang.com/document/1049094) (dataset available [here](http://corefl.learnercorpora.com)). The COREFL dataset consists of essays written by non-native English students with various levels of proficiency. While some of these essays have associated audio recordings, the majority do not. To expand the audio dataset, we used the [StyleTTS2](https://arxiv.org/abs/2306.07691) text-to-speech model to generate synthetic audio for the remaining texts (a sketch of this synthesis step follows the list below). Multiple voices were used for synthesis to increase the diversity of the dataset.

- [C4_200M](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction). The C4_200M dataset contains synthetically generated English sentences with grammatical errors, produced using a corruption model. As with COREFL, StyleTTS2 was employed to synthesize audio from these texts, diversifying the voices to enhance the training set. This dataset primarily provides varied sentence structures and error types, although with a limited number of distinct voices.
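
The synthesis step can be approximated with any StyleTTS2 inference wrapper. The sketch below uses the unofficial `styletts2` Python package and a handful of illustrative inputs; the package, its API, the voice-clip folder, and the example sentences are assumptions, not the exact pipeline used to build SESGE.

```python
# Minimal sketch: synthesize speech for error-containing sentences while
# rotating over several reference voices to diversify the audio.
# Assumes the unofficial `styletts2` PyPI package (pip install styletts2);
# the actual SESGE pipeline may have used the original StyleTTS2 code base.
from pathlib import Path

from styletts2 import tts

sentences = [
    "She go to school every days.",   # illustrative error-containing texts
    "I am agree with this opinion.",
]
voice_clips = sorted(Path("reference_voices").glob("*.wav"))  # assumed folder of speaker WAVs

engine = tts.StyleTTS2()  # downloads default checkpoints on first use

out_dir = Path("synthesized_audio")
out_dir.mkdir(exist_ok=True)

for i, text in enumerate(sentences):
    voice = voice_clips[i % len(voice_clips)]  # cycle through voices for diversity
    engine.inference(
        text,
        target_voice_path=str(voice),
        output_wav_file=str(out_dir / f"utt_{i:05d}.wav"),
    )
```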
 
Due to licensing restrictions associated with the COREFL dataset, only the portion derived from the C4_200M dataset is publicly available as part of the SESGE dataset. This means that while COREFL data was used during our training, only the C4_200M-based data is included in this dataset.

The training set comprises 28,592 utterances from C4_200M; the validation and test sets contain 700 samples each.
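
The released portion can be inspected with the Hugging Face `datasets` library. The repository ID, split names, and column names below are assumptions based on this card; check the dataset viewer for the exact values.

```python
# Sketch: load the public (C4_200M-derived) portion of SESGE and check the splits.
from datasets import load_dataset

sesge = load_dataset("Transducens/SESGE")  # assumed repository ID

for split in sesge:
    # Expected roughly: train 28,592; validation 700; test 700.
    print(split, len(sesge[split]))

print(sesge["train"].column_names)  # e.g. an audio column plus the error-preserving transcript
```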
 
## Models

Two models were trained on the SESGE dataset by fine-tuning Whisper, enabling error-preserving STT; a rough sketch of this kind of fine-tuning is shown after the list. These models are available on the Hugging Face Hub:

- [Error-Preserving Whisper model](https://huggingface.co/Transducens/error-preserving-whisper)
- [Error-Preserving Whisper distilled model](https://huggingface.co/Transducens/error-preserving-whisper-distilled)
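
For context, the snippet below compresses the standard Hugging Face `transformers` recipe for fine-tuning a Whisper checkpoint on (audio, transcript) pairs such as SESGE. It is not the authors' training setup: the base checkpoint, hyperparameters, dataset repository ID, and column names (`audio`, `text`) are all assumptions.

```python
# Sketch of fine-tuning Whisper on SESGE-style data; not the authors' recipe.
from dataclasses import dataclass
from typing import Any, Dict, List

import torch
from datasets import Audio, load_dataset
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="english", task="transcribe"  # assumed base checkpoint
)
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

sesge = load_dataset("Transducens/SESGE")                        # assumed repository ID
sesge = sesge.cast_column("audio", Audio(sampling_rate=16_000))  # Whisper expects 16 kHz input


def prepare(example):
    audio = example["audio"]
    example["input_features"] = processor.feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    # The label is the verbatim transcript, grammatical errors included.
    example["labels"] = processor.tokenizer(example["text"]).input_ids
    return example


sesge = sesge.map(prepare, remove_columns=sesge["train"].column_names)


@dataclass
class SpeechCollator:
    """Pads log-mel features and label ids separately, masking label padding."""

    processor: Any

    def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, torch.Tensor]:
        batch = self.processor.feature_extractor.pad(
            [{"input_features": f["input_features"]} for f in features], return_tensors="pt"
        )
        labels = self.processor.tokenizer.pad(
            [{"input_ids": f["labels"]} for f in features], return_tensors="pt"
        )
        # Padding positions are ignored by the loss.
        batch["labels"] = labels["input_ids"].masked_fill(labels["attention_mask"].ne(1), -100)
        return batch


args = Seq2SeqTrainingArguments(
    output_dir="whisper-sesge",          # illustrative values only
    per_device_train_batch_size=16,
    learning_rate=1e-5,
    num_train_epochs=3,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=sesge["train"],
    eval_dataset=sesge["validation"],
    data_collator=SpeechCollator(processor),
)
trainer.train()
```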

Both models have been optimized to transcribe spoken English while retaining grammatical errors, making them suitable for language-learning applications where fidelity to spoken errors is essential.
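
A quick way to try either model is the `transformers` automatic-speech-recognition pipeline. The snippet assumes the checkpoints load as standard Whisper models and that `speech.wav` is a local recording of spoken English.

```python
# Sketch: transcribe a recording while keeping the speaker's grammatical errors.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Transducens/error-preserving-whisper",  # or Transducens/error-preserving-whisper-distilled
)

result = asr("speech.wav")  # assumed local audio file
print(result["text"])       # transcript with the original errors preserved
```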