---
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: speaker_id
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': '"Applause"'
          '1': '"Breathing"'
          '2': '"Chatter"'
          '3': '"Clapping"'
          '4': '"Clicking"'
          '5': '"Conversation"'
          '6': '"Cough"'
          '7': '"Crowd"'
          '8': '"Door"'
          '9': '"Female speech"'
          '10': '"Hubbub"'
          '11': '"Inside"'
          '12': '"Knock"'
          '13': '"Laughter"'
          '14': '"Male speech"'
          '15': '"None of the above"'
          '16': '"Pink noise"'
          '17': '"Silence"'
          '18': '"Speech"'
          '19': '"Television"'
          '20': '"Throat clearing"'
          '21': '"Typing"'
          '22': '"Walk"'
          '23': '"White noise"'
  - name: start
    dtype: string
  - name: id
    dtype:
      class_label:
        names:
          '0': /m/01b_21
          '1': /m/01h8n0
          '2': /m/01j3sz
          '3': /m/028ght
          '4': /m/028v0c
          '5': /m/02dgv
          '6': /m/02zsn
          '7': /m/0316dw
          '8': /m/03qtwd
          '9': /m/05zppz
          '10': /m/07c52
          '11': /m/07pbtc8
          '12': /m/07qc9xj
          '13': /m/07qfr4h
          '14': /m/07r4wb8
          '15': /m/07rkbfh
          '16': /m/09x0r
          '17': /m/0chx_
          '18': /m/0cj0r
          '19': /m/0dl9sf8
          '20': /m/0l15bq
          '21': /m/0lyf6
          '22': /t/dd00125
          '23': /t/dd00126
          '24': none
  splits:
  - name: train
    num_bytes: 2193371354.8
    num_examples: 70254
  download_size: 2135840263
  dataset_size: 2193371354.8
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: unknown
task_categories:
- audio-classification
size_categories:
- 10K<n<100K
---
# Dataset Card for Ambient Acoustic Context
The Ambient Acoustic Context dataset contains 1-second audio segments of activities that occur in a workplace setting. Each segment is associated with a `speaker_id`.
## Dataset Details
Crowd workers on Amazon Mechanical Turk were asked to listen to the 1-second segments and choose the most appropriate label. To ensure annotation quality, segments that did not reach majority agreement among the workers were excluded.
### Dataset Sources
- **Paper:** https://dl.acm.org/doi/10.1145/3379503.3403535
- **Website:** https://www.esense.io/datasets/ambientacousticcontext/index.html
## Uses
To prepare the dataset for federated learning (FL) settings, we recommend using [Flower Datasets](https://flower.ai/docs/datasets/) (flwr-datasets) to download and partition the dataset, and [Flower](https://flower.ai/docs/framework/) (flwr) to conduct FL experiments.
To partition the dataset, do the following.
1. Install the package.
```bash
pip install "flwr-datasets[audio]"
```
2. Use Flower Datasets, which loads the HF dataset under the hood, to partition it by `speaker_id`:
```python
from flwr_datasets import FederatedDataset
from flwr_datasets.partitioner import NaturalIdPartitioner

# Create one natural partition per unique speaker_id.
fds = FederatedDataset(
    dataset="flwrlabs/ambient-acoustic-context",
    partitioners={"train": NaturalIdPartitioner(partition_by="speaker_id")},
)
partition = fds.load_partition(partition_id=0)
```
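As a minimal sketch (reusing the `fds` object from the snippet above), you can check how many natural partitions were created, one per unique `speaker_id`, and inspect their sizes:
```python
# One partition is created per unique speaker_id.
partitioner = fds.partitioners["train"]
print(partitioner.num_partitions)

# Inspect the sizes of the first few speaker partitions.
for pid in range(3):
    print(pid, fds.load_partition(partition_id=pid).num_rows)
```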
## Dataset Structure
### Data Instances
The first instance of the train split is presented below:
```
{'audio': {'path': 'id_-kyuX8l4VWY_30_40_05.wav',
           'array': array([-0.09686279, -0.00747681, -0.0149231 , ...,  0.12243652,
                            0.15652466,  0.0710144 ]),
           'sampling_rate': 16000},
 'speaker_id': 'id_-kyuX8l4VWY_30_40',
 'label': 7,
 'start': '69',
 'id': 8}
```
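Note that `label` and `id` are stored as integer class indices. As a minimal sketch using plain Hugging Face Datasets (no Flower required), you can map them back to their string names:
```python
from datasets import load_dataset

ds = load_dataset("flwrlabs/ambient-acoustic-context", split="train")
example = ds[0]

# ClassLabel features expose int2str to recover the original names.
print(ds.features["label"].int2str(example["label"]))  # '"Crowd"' for label 7
print(ds.features["id"].int2str(example["id"]))        # '/m/03qtwd' for id 8
```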
### Data Split
```
DatasetDict({
    train: Dataset({
        features: ['audio', 'speaker_id', 'label', 'start', 'id'],
        num_rows: 70254
    })
})
```
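Each partition returned by Flower Datasets is a regular `datasets.Dataset`, so the usual methods apply. For example, a minimal sketch of creating a local train/test split within a client partition (the 80/20 ratio and seed are arbitrary choices for illustration):
```python
# Assuming `partition` was obtained via fds.load_partition(...) as above.
partition_splits = partition.train_test_split(test_size=0.2, seed=42)
print(partition_splits["train"].num_rows, partition_splits["test"].num_rows)
```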
## Citation
When working with the Ambient Acoustic Context dataset, please cite the original paper.
If you use this dataset with Flower Datasets and Flower, please also cite Flower.
**BibTeX:**
Original paper:
```
@inproceedings{10.1145/3379503.3403535,
author = {Park, Chunjong and Min, Chulhong and Bhattacharya, Sourav and Kawsar, Fahim},
title = {Augmenting Conversational Agents with Ambient Acoustic Contexts},
year = {2020},
isbn = {9781450375160},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3379503.3403535},
doi = {10.1145/3379503.3403535},
abstract = {Conversational agents are rich in content today. However, they are entirely oblivious to users’ situational context, limiting their ability to adapt their response and interaction style. To this end, we explore the design space for a context augmented conversational agent, including analysis of input segment dynamics and computational alternatives. Building on these, we propose a solution that redesigns the input segment intelligently for ambient context recognition, achieved in a two-step inference pipeline. We first separate the non-speech segment from acoustic signals and then use a neural network to infer diverse ambient contexts. To build the network, we curated a public audio dataset through crowdsourcing. Our experimental results demonstrate that the proposed network can distinguish between 9 ambient contexts with an average F1 score of 0.80 with a computational latency of 3 milliseconds. We also build a compressed neural network for on-device processing, optimised for both accuracy and latency. Finally, we present a concrete manifestation of our solution in designing a context-aware conversational agent and demonstrate use cases.},
booktitle = {22nd International Conference on Human-Computer Interaction with Mobile Devices and Services},
articleno = {33},
numpages = {9},
keywords = {Acoustic ambient context, Conversational agents},
location = {Oldenburg, Germany},
series = {MobileHCI '20}
}
```
Flower:
```
@article{DBLP:journals/corr/abs-2007-14390,
author = {Daniel J. Beutel and
Taner Topal and
Akhil Mathur and
Xinchi Qiu and
Titouan Parcollet and
Nicholas D. Lane},
title = {Flower: {A} Friendly Federated Learning Research Framework},
journal = {CoRR},
volume = {abs/2007.14390},
year = {2020},
url = {https://arxiv.org/abs/2007.14390},
eprinttype = {arXiv},
eprint = {2007.14390},
timestamp = {Mon, 03 Aug 2020 14:32:13 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-14390.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
## Dataset Card Contact
If you have any questions about the dataset preprocessing and preparation, please contact [Flower Labs](https://flower.ai/).