---
annotations_creators:
- expert-generated
license: cdla-permissive-1.0
language_creators:
- expert-generated
size_categories:
- 100M<n<100G
language:
- en
task_categories:
- automatic-speech-recognition
- voice-activity-detection
multilinguality:
- monolingual
task_ids: []
pretty_name: dipco
tags:
- speech-recognition
---
# DipCo - Dinner Party Corpus, Interspeech 2019

[Zenodo Link](https://zenodo.org/record/8122551)

- Contact person(s): Maas, Roland; Hoffmeister, Björn
- Distributor(s): Yang, Chao-Han Huck
The ‘DipCo’ data set was publicly released by Amazon to help speech scientists address the difficult problem of separating speech signals in reverberant rooms with multiple speakers.

The corpus was created with the assistance of Amazon volunteers, who simulated the dinner-party scenario in the lab. We conducted multiple sessions, each involving four participants. At the beginning of each session, participants served themselves food from a buffet table. Most of the session took place at a dining table, and at fixed points in several sessions we piped music into the room to reproduce a noise source that is common in real-world environments.

Each participant was outfitted with a headset microphone, which captured a clear, speaker-specific signal. Five devices with seven microphones each were also dispersed around the room and fed audio signals directly to an administrator’s laptop. In each session, music playback started at a given time mark. The close-talk recordings were segmented and separately transcribed.
## Sessions

Each session contains the close-talk recordings of 4 participants and the far-field recordings from the 5 devices. The following naming conventions are used:

* sessions have a ```<session_id>``` label denoted by ```S01, S02, S03, ...```
* participants have a ```<speaker_id>``` label denoted by ```P01, P02, P03, P04, ...```
* devices have a ```<device_id>``` label denoted by ```U01, U02, U03, U04, U05```
* array microphones have a ```<channel_id>``` label denoted by ```CH1, CH2, CH3, CH4, CH5, CH6, CH7```
We currently have the following sessions:

| **Session** | **Participants**               | **Hours [hh:mm]** | **#Utts** | **Music start [hh:mm:ss]** |
| ----------- | ------------------------------ | ----------------- | --------- | -------------------------- |
| S01         | P01, **P02**, **P03**, P04     | 00:47             | 903       | 00:38:52                   |
| S02         | **P05**, **P06**, **P07**, P08 | 00:30             | 448       | 00:19:30                   |
| S03         | **P09**, **P10**, **P11**, P12 | 00:46             | 1128      | 00:33:45                   |
| S04         | **P13**, P14, **P15**, P16     | 00:45             | 1294      | 00:23:25                   |
| S05         | **P17**, **P18**, **P19**, P20 | 00:45             | 1012      | 00:31:15                   |
| S06         | **P21**, P22, **P23**, **P24** | 00:20             | 604       | 00:06:17                   |
| S07         | **P21**, P22, **P23**, **P24** | 00:26             | 632       | 00:10:05                   |
| S08         | **P25**, P26, P27, P28         | 00:15             | 352       | 00:01:02                   |
| S09         | P29, **P30**, P31, **P32**     | 00:22             | 505       | 00:12:18                   |
| S10         | P29, **P30**, P31, **P32**     | 00:20             | 432       | 00:07:10                   |

The sessions have been split into a development and an evaluation set as follows:

| **Dataset** | **Sessions**            | **Hours [hh:mm]** | **#Utts** |
| ----------- | ----------------------- | ----------------- | --------- |
| Dev         | S02, S04, S05, S09, S10 | 02:43             | 3691      |
| Eval        | S01, S03, S06, S07, S08 | 02:36             | 3619      |
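For scripting against the corpus, the split above can be written down directly in code. The ```SPLITS``` mapping below is only an illustrative constant and is not shipped with the data:

```python
# Session-to-split assignment, copied from the table above.
SPLITS = {
    "dev":  ["S02", "S04", "S05", "S09", "S10"],
    "eval": ["S01", "S03", "S06", "S07", "S08"],
}
```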
The DiPCo data set has the following directory structure:

```bash
DiPCo/
├── audio
│   ├── dev
│   └── eval
└── transcriptions
    ├── dev
    └── eval
```
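As a quick sanity check after downloading, the layout can be traversed with the Python standard library. This is a minimal sketch that assumes the archive has been unpacked into a local ```DiPCo/``` directory as shown above:

```python
from pathlib import Path

# Assumed local path to the unpacked corpus; adjust to your setup.
corpus_root = Path("DiPCo")

for split in ("dev", "eval"):
    wavs = sorted((corpus_root / "audio" / split).glob("*.wav"))
    jsons = sorted((corpus_root / "transcriptions" / split).glob("*.json"))
    print(f"{split}: {len(wavs)} WAV files, {len(jsons)} transcription files")
```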
## Audio

The audio data is converted into WAV format with a sample rate of 16 kHz and 16-bit precision. The close-talk recordings were made with a monaural microphone and contain a single channel. The far-field recordings of all 5 devices are microphone array recordings and contain 7 raw audio channels each.
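A minimal sketch for checking the stated format with the standard-library ```wave``` module (the file path is only an example and follows the naming convention described below):

```python
import wave

# Example close-talk file from session S02; adjust the path to your setup.
with wave.open("DiPCo/audio/dev/S02_P05.wav", "rb") as wav:
    print("sample rate:", wav.getframerate())   # expected: 16000 Hz
    print("sample width:", wav.getsampwidth())  # expected: 2 bytes, i.e. 16-bit
    print("channels:", wav.getnchannels())
```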
The WAV file name convention is as follows:

* close-talk recording of session ```<session_id>``` and participant ```<speaker_id>```
  * ```<session_id>_<speaker_id>.wav```, e.g. ```S01_P03.wav```
* far-field recording of microphone ```<channel_id>``` of session ```<session_id>``` and device ```<device_id>```
  * ```<session_id>_<device_id>.<channel_id>.wav```, e.g. ```S02_U03.CH1.wav```
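A minimal sketch that classifies file names according to these conventions; the helper name and regular expressions are illustrative and not part of any corpus tooling:

```python
import re
from pathlib import Path

CLOSE_TALK = re.compile(r"^(S\d{2})_(P\d{2})\.wav$")
FAR_FIELD = re.compile(r"^(S\d{2})_(U\d{2})\.(CH\d)\.wav$")

def parse_wav_name(path):
    """Return the IDs encoded in a DiPCo WAV file name, or None if it does not match."""
    name = Path(path).name
    if (m := CLOSE_TALK.match(name)):
        return {"session_id": m.group(1), "speaker_id": m.group(2), "type": "close-talk"}
    if (m := FAR_FIELD.match(name)):
        return {"session_id": m.group(1), "device_id": m.group(2),
                "channel_id": m.group(3), "type": "far-field"}
    return None

print(parse_wav_name("S01_P03.wav"))
print(parse_wav_name("S02_U03.CH1.wav"))
```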
## Transcriptions

Per session, a transcription file ```<session_id>.json``` in JSON format has been provided. For each transcribed utterance, the JSON file contains the following metadata:

* Session ID ("session_id")
* Speaker ID ("speaker_id")
* Gender ("gender")
* Mother Tongue ("mother_tongue")
* Nativeness ("nativeness")
* Transcription ("words")
* Start time of utterance ("start_time"), given per reference signal:
  * the close-talk microphone recording of the speaker (```close-talk```)
  * the far-field microphone array recordings of the devices, keyed by their ```<device_id>``` label
* End time ("end_time")
* Reference signal that was used for transcribing the audio ("ref")

The following is an example annotation of one utterance in a JSON file:
```json
{
    "start_time": {
        "U01": "00:02:12.79",
        "U02": "00:02:12.79",
        "U03": "00:02:12.79",
        "U04": "00:02:12.79",
        "U05": "00:02:12.79",
        "close-talk": "00:02:12.79"
    },
    "end_time": {
        "U01": "00:02:14.84",
        "U02": "00:02:14.84",
        "U03": "00:02:14.84",
        "U04": "00:02:14.84",
        "U05": "00:02:14.84",
        "close-talk": "00:02:14.84"
    },
    "gender": "male",
    "mother_tongue": "U.S. English",
    "nativeness": "native",
    "ref": "close-talk",
    "session_id": "S02",
    "speaker_id": "P05",
    "words": "[noise] how do you like the food"
}
```
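A minimal sketch for reading such a file and converting the timestamps into seconds, assuming each ```<session_id>.json``` holds a JSON array of utterance entries like the one above (the path is only an example):

```python
import json

def to_seconds(timestamp):
    # Convert an "hh:mm:ss.xx" string into seconds.
    hours, minutes, seconds = timestamp.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

# Example path; any transcription file under transcriptions/dev or transcriptions/eval works.
with open("DiPCo/transcriptions/dev/S02.json") as f:
    utterances = json.load(f)

for utt in utterances:
    ref = utt["ref"]  # signal that was used for transcribing, e.g. "close-talk"
    duration = to_seconds(utt["end_time"][ref]) - to_seconds(utt["start_time"][ref])
    print(utt["session_id"], utt["speaker_id"], f"{duration:.2f}s", utt["words"])
```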
Transcriptions include the following tags:

- ```[noise]```: noise made by the speaker (coughing, lip smacking, clearing throat, breathing, etc.)
- ```[unintelligible]```: speech that was not well understood by the transcriber
- ```[laugh]```: participant laughing
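When scoring ASR output against these transcriptions, the tags are often stripped from the reference text first; a minimal, illustrative sketch:

```python
import re

TAGS = ("[noise]", "[unintelligible]", "[laugh]")
TAG_PATTERN = re.compile("|".join(re.escape(tag) for tag in TAGS))

def strip_tags(words):
    # Drop annotation tags and collapse the remaining whitespace.
    return " ".join(TAG_PATTERN.sub(" ", words).split())

print(strip_tags("[noise] how do you like the food"))  # -> how do you like the food
```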
## License Summary

The DiPCo data set has been released under the CDLA-Permissive license. See the LICENSE file.