---
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: speaker_id
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': '"Applause"'
          '1': '"Breathing"'
          '2': '"Chatter"'
          '3': '"Clapping"'
          '4': '"Clicking"'
          '5': '"Conversation"'
          '6': '"Cough"'
          '7': '"Crowd"'
          '8': '"Door"'
          '9': '"Female speech'
          '10': '"Hubbub'
          '11': '"Inside'
          '12': '"Knock"'
          '13': '"Laughter"'
          '14': '"Male speech'
          '15': '"None of the above"'
          '16': '"Pink noise"'
          '17': '"Silence"'
          '18': '"Speech"'
          '19': '"Television"'
          '20': '"Throat clearing"'
          '21': '"Typing"'
          '22': '"Walk'
          '23': '"White noise"'
  - name: start
    dtype: string
  - name: id
    dtype:
      class_label:
        names:
          '0': /m/01b_21
          '1': /m/01h8n0
          '2': /m/01j3sz
          '3': /m/028ght
          '4': /m/028v0c
          '5': /m/02dgv
          '6': /m/02zsn
          '7': /m/0316dw
          '8': /m/03qtwd
          '9': /m/05zppz
          '10': /m/07c52
          '11': /m/07pbtc8
          '12': /m/07qc9xj
          '13': /m/07qfr4h
          '14': /m/07r4wb8
          '15': /m/07rkbfh
          '16': /m/09x0r
          '17': /m/0chx_
          '18': /m/0cj0r
          '19': /m/0dl9sf8
          '20': /m/0l15bq
          '21': /m/0lyf6
          '22': /t/dd00125
          '23': /t/dd00126
          '24': none
  splits:
  - name: train
    num_bytes: 2193371354.8
    num_examples: 70254
  download_size: 2135840263
  dataset_size: 2193371354.8
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: unknown
task_categories:
- audio-classification
size_categories:
- 10K<n<100K
---
# Dataset Card for Ambient Acoustic Context
The Ambient Acoustic Context dataset contains 1-second audio segments of activities that occur in a workplace setting. Each segment is associated with a `speaker_id`.
## Dataset Details
Using Amazon Mechanical Turk, crowd workers were asked to listen to the 1-second segments and choose the most fitting label. To ensure annotation quality, audio segments that did not reach majority agreement among the workers were excluded.
### Dataset Sources
- **Paper:** https://dl.acm.org/doi/10.1145/3379503.3403535
- **Website:** https://www.esense.io/datasets/ambientacousticcontext/index.html
## Uses
To prepare the dataset for federated learning (FL) settings, we recommend using [Flower Datasets](https://flower.ai/docs/datasets/) (`flwr-datasets`) to download and partition the dataset, and [Flower](https://flower.ai/docs/framework/) (`flwr`) to conduct FL experiments.
To partition the dataset, do the following.
1. Install the package.
```bash
pip install "flwr-datasets[audio]"
```
2. Use Flower Datasets (which uses 🤗 Datasets under the hood) to load and partition the dataset.
```python
from flwr_datasets import FederatedDataset
from flwr_datasets.partitioner import IidPartitioner

# Download the dataset and split the train set into 10 IID partitions
fds = FederatedDataset(
    dataset="flwrlabs/ambient-acoustic-context",
    partitioners={"train": IidPartitioner(num_partitions=10)},
)
# Load the partition assigned to client 0 (a regular 🤗 Datasets `Dataset`)
partition = fds.load_partition(partition_id=0)
```
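Because every segment carries a `speaker_id`, a natural non-IID alternative is to give each speaker its own partition. A minimal sketch using `NaturalIdPartitioner` from `flwr-datasets` (this partitioner choice is an illustration, not a recommendation from the original paper):
```python
from flwr_datasets import FederatedDataset
from flwr_datasets.partitioner import NaturalIdPartitioner

# One partition per speaker: a realistic, non-IID federated split
fds = FederatedDataset(
    dataset="flwrlabs/ambient-acoustic-context",
    partitioners={"train": NaturalIdPartitioner(partition_by="speaker_id")},
)
partition = fds.load_partition(partition_id=0)
```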
## Dataset Structure
### Data Instances
The first instance of the train split is presented below:
```
{'audio': {'path': 'id_-kyuX8l4VWY_30_40_05.wav',
           'array': array([-0.09686279, -0.00747681, -0.0149231 , ...,  0.12243652,
                            0.15652466,  0.0710144 ]),
           'sampling_rate': 16000},
 'speaker_id': 'id_-kyuX8l4VWY_30_40',
 'label': 7,
 'start': '69',
 'id': 8}
```
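The `label` and `id` fields are stored as integer class labels; they can be mapped back to their string names with the standard 🤗 Datasets `ClassLabel.int2str` helper. A minimal sketch, reusing the `partition` object loaded above:
```python
# Map the integer class labels back to their string names
label_feature = partition.features["label"]
id_feature = partition.features["id"]

example = partition[0]
print(label_feature.int2str(example["label"]))  # e.g. 7 -> '"Crowd"'
print(id_feature.int2str(example["id"]))        # e.g. 8 -> /m/03qtwd
```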
### Data Split
```
DatasetDict({
train: Dataset({
features: ['audio', 'speaker_id', 'label', 'start', 'id'],
num_rows: 70254
})
})
```
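If you only need the centralized (unpartitioned) data, the single train split can also be loaded directly with 🤗 Datasets:
```python
from datasets import load_dataset

# Load the full train split without any federated partitioning
dataset = load_dataset("flwrlabs/ambient-acoustic-context", split="train")
print(dataset.num_rows)  # 70254
```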
## Citation
When working with the Ambient Acoustic Context dataset, please cite the original paper.
If you use this dataset with Flower Datasets and Flower, please also cite Flower.
**BibTeX:**
Original paper:
```
@inproceedings{10.1145/3379503.3403535,
author = {Park, Chunjong and Min, Chulhong and Bhattacharya, Sourav and Kawsar, Fahim},
title = {Augmenting Conversational Agents with Ambient Acoustic Contexts},
year = {2020},
isbn = {9781450375160},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3379503.3403535},
doi = {10.1145/3379503.3403535},
abstract = {Conversational agents are rich in content today. However, they are entirely oblivious to users’ situational context, limiting their ability to adapt their response and interaction style. To this end, we explore the design space for a context augmented conversational agent, including analysis of input segment dynamics and computational alternatives. Building on these, we propose a solution that redesigns the input segment intelligently for ambient context recognition, achieved in a two-step inference pipeline. We first separate the non-speech segment from acoustic signals and then use a neural network to infer diverse ambient contexts. To build the network, we curated a public audio dataset through crowdsourcing. Our experimental results demonstrate that the proposed network can distinguish between 9 ambient contexts with an average F1 score of 0.80 with a computational latency of 3 milliseconds. We also build a compressed neural network for on-device processing, optimised for both accuracy and latency. Finally, we present a concrete manifestation of our solution in designing a context-aware conversational agent and demonstrate use cases.},
booktitle = {22nd International Conference on Human-Computer Interaction with Mobile Devices and Services},
articleno = {33},
numpages = {9},
keywords = {Acoustic ambient context, Conversational agents},
location = {Oldenburg, Germany},
series = {MobileHCI '20}
}
```
Flower:
```
@article{DBLP:journals/corr/abs-2007-14390,
author = {Daniel J. Beutel and
Taner Topal and
Akhil Mathur and
Xinchi Qiu and
Titouan Parcollet and
Nicholas D. Lane},
title = {Flower: {A} Friendly Federated Learning Research Framework},
journal = {CoRR},
volume = {abs/2007.14390},
year = {2020},
url = {https://arxiv.org/abs/2007.14390},
eprinttype = {arXiv},
eprint = {2007.14390},
timestamp = {Mon, 03 Aug 2020 14:32:13 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-14390.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
## Dataset Card Contact
If you have any questions about the dataset preprocessing and preparation, please contact [Flower Labs](https://flower.ai/).