zbrunner committed f1ba3ad (verified), parent: 24a783e

Update README.md

Files changed (1): README.md (+38, -1)

README.md (after the change):

task_categories:
- automatic-speech-recognition
language:
- en
---

# MultiSeg Dataset

## Description
MultiSeg is a perturbed version of the TEDLIUM3 dataset, created specifically for evaluating the robustness of Automatic Speech Recognition (ASR) systems. It is derived from the 'speakeroverlap' subset, which consists of held-back training data from TEDLIUM3.

## Purpose
The primary purpose of the MultiSeg dataset is to:

- Elicit hallucinations from ASR systems
- Evaluate ASR performance under various perturbation conditions
- Assess the impact of speaker-dependent factors on ASR accuracy

## Dataset Creation
The MultiSeg dataset was created by applying the following modifications to the original TEDLIUM3 'speakeroverlap' subset (a minimal sketch of the perturbation pipeline follows the list):

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65ce04eb0b263c5a5977cc13/ob1mloh3xtKUsLPDbkn-O.png)

- Concatenation of two speech segments
- Injection of silence between the concatenated segments
- Application of a variable Signal-to-Noise Ratio (SNR)
- Addition of reverberation effects

These modifications aim to simulate challenging real-world conditions for ASR systems.
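
Below is a minimal, illustrative sketch of such a perturbation pipeline in Python with NumPy. It is not the script used to build MultiSeg: the function names, silence duration, and SNR value are assumptions made for illustration, and reverberation is only indicated as a comment.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise power ratio equals snr_db, then mix."""
    noise = np.resize(noise, speech.shape)            # loop or trim the noise to the speech length
    p_speech = np.mean(speech ** 2) + 1e-12           # average speech power
    p_noise = np.mean(noise ** 2) + 1e-12             # average noise power
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + scale * noise

def build_multiseg_example(seg_a, seg_b, sample_rate=16000, silence_s=2.0,
                           snr_db=10.0, noise=None):
    """Concatenate two speech segments with injected silence, then optionally mix in
    noise at the requested SNR. Reverberation could be added afterwards, e.g. by
    convolving the result with a room impulse response: np.convolve(mixture, rir)."""
    silence = np.zeros(int(silence_s * sample_rate), dtype=seg_a.dtype)
    mixture = np.concatenate([seg_a, silence, seg_b])
    if noise is not None:
        mixture = mix_at_snr(mixture, noise, snr_db)
    return mixture
```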

## Usage
To use this dataset:

- Download the dataset from the Hugging Face repository (a loading sketch follows this list)
- Load the audio files and corresponding transcriptions
- Use the dataset to evaluate the hallucinatory tendencies of your ASR system
- A hallucination measurement algorithm will follow shortly on my GitHub
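
As a starting point, here is a minimal loading sketch, assuming the audio and transcriptions are exposed through the Hugging Face `datasets` library; the repository id, split name, and column names are placeholders, and the Whisper checkpoint is just one example of an ASR system to probe.

```python
from datasets import load_dataset
from transformers import pipeline

# Placeholder repository id and split; substitute the actual MultiSeg repo on the Hub.
ds = load_dataset("<user>/MultiSeg", split="test")

# One example ASR system under evaluation; chunked decoding handles segments longer than 30 s.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small", chunk_length_s=30)

example = ds[0]
audio = example["audio"]          # assumed column: dict with "array" and "sampling_rate"
reference = example["text"]       # assumed column holding the reference transcription

hypothesis = asr({"raw": audio["array"], "sampling_rate": audio["sampling_rate"]})["text"]
print("REF:", reference)
print("HYP:", hypothesis)         # hallucinations tend to appear as long insertions absent from REF
```

Comparing hypothesis and reference transcripts per segment is a reasonable interim check until the hallucination measurement algorithm mentioned above is published.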

## Original Dataset Information
This dataset is derived from TEDLIUM3, which is released under the Creative Commons BY-NC-ND 3.0 license.

TEDLIUM3 release 1:
François Hernandez, Vincent Nguyen, Sahar Ghannay, Natalia Tomashenko, and Yannick Estève, "TED-LIUM 3: twice as much data and corpus repartition for experiments on speaker adaptation", submitted to SPECOM 2018.