---
license: cc
pretty_name: M-AILABS Speech Dataset (French)
languages:
- fr
task_categories:
- speech-processing
task_ids:
- automatic-speech-recognition
size_categories:
  fr:
  - 1K<n<10K
---

## Dataset Description
- **Homepage:** https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/

### Dataset Summary

The M-AILABS Speech Dataset is the first large dataset that we are providing free-of-charge, freely usable as training data for speech recognition and speech synthesis.

Most of the data is based on LibriVox and Project Gutenberg. The training data consists of nearly a thousand hours of audio and the corresponding text files in a prepared format.

A transcription is provided for each clip. Clips vary in length from 1 to 20 seconds; the total duration for each language is given in the respective info.txt files.

The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded by the LibriVox project and is also in the public domain, except for Ukrainian.

Ukrainian audio was kindly provided either by Nash Format or Gwara Media for machine learning purposes only (please check the data info.txt files for details).

### Languages

French

## Dataset Structure

### Data Instances

A typical data point comprises the path to the audio file, called `audio`, and its `sentence`.
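
A minimal illustrative instance is sketched below; the file path, waveform values, and transcription are placeholders, and the 16 kHz sampling rate is an assumption (check `dataset.features["audio"].sampling_rate` for the actual value):

```python
{
    "audio": {
        "path": "fr_FR/by_book/female/speaker/book/wavs/sample_0001.wav",  # placeholder path
        "array": [0.0012, -0.0034, 0.0051],                                 # decoded waveform (truncated)
        "sampling_rate": 16000,                                             # assumed sampling rate
    },
    "sentence": "Bonjour, comment allez-vous ?",                            # placeholder transcription
}
```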

### Data Fields

- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`) the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. A loading sketch is shown after this list.
- sentence: The sentence read by the speaker in the audio clip.
### Data Splits

The speech material has been subdivided into portions for train and test.
The train split consists of [TODO] audio clips and the related sentences.
The test split consists of [TODO] audio clips and the related sentences.
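
Until the exact counts are filled in, the split sizes can be inspected directly; this sketch assumes the same hypothetical repository id as above:

```python
from datasets import load_dataset

# Hypothetical repository id used for illustration.
dataset = load_dataset("gigant/m_ailabs_speech_fr")

# Print the name and number of clips of each available split.
for split_name, split in dataset.items():
    print(split_name, split.num_rows)
```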
### Contributions
[@gigant](https://huggingface.co/gigant) added this dataset.