
Dataset Card for Ambient Acoustic Context

The Ambient Acoustic Context dataset contains 1-second audio segments of activities that occur in a workplace setting. Each segment is associated with a speaker_id.

Dataset Details

Using Amazon Mechanical Turk, crowd workers were asked to listen to the 1-second segments and choose the most appropriate label. To ensure annotation quality, audio segments that did not reach majority agreement among the workers were excluded.

Dataset Sources

Paper: Augmenting Conversational Agents with Ambient Acoustic Contexts (MobileHCI '20), https://doi.org/10.1145/3379503.3403535

Uses

To prepare the dataset for federated learning (FL), we recommend using Flower Datasets (flwr-datasets) to download and partition the dataset, and Flower (flwr) to run the FL experiments.

To partition the dataset, do the following.

  1. Install the package.
pip install flwr-datasets[audio]
  2. Use the HF Dataset under the hood in Flower Datasets.
from flwr_datasets import FederatedDataset
from flwr_datasets.partitioner import NaturalIdPartitioner

# Partition the train split so that each unique speaker_id becomes one partition (client).
fds = FederatedDataset(
    dataset="flwrlabs/ambient-acoustic-context",
    partitioners={"train": NaturalIdPartitioner(partition_by="speaker_id")},
)
# Load the data of the first partition.
partition = fds.load_partition(partition_id=0)
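
The object returned by load_partition is a regular Hugging Face Dataset, so the usual datasets operations apply. A minimal sketch for inspecting the first partition (the printed values are illustrative, not guaranteed outputs):

# Continues from the snippet above; `partition` holds all segments of one speaker_id.
print(partition.num_rows)                       # number of 1-second segments for this client
example = partition[0]                          # decodes the first segment, including its audio
print(example["speaker_id"], example["label"])  # e.g. 'id_-kyuX8l4VWY_30_40', 7
print(example["audio"]["sampling_rate"])        # 16000, as shown in Data Instances below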

Dataset Structure

Data Instances

The first instance of the train split is presented below:

{'audio': {'path': 'id_-kyuX8l4VWY_30_40_05.wav',
  'array': array([-0.09686279, -0.00747681, -0.0149231 , ...,  0.12243652,
          0.15652466,  0.0710144 ]),
  'sampling_rate': 16000},
 'speaker_id': 'id_-kyuX8l4VWY_30_40',
 'label': 7,
 'start': '69',
 'id': 8}
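
If you only need the raw data without federated partitioning, the dataset can also be loaded directly with the Hugging Face datasets library. A minimal sketch, assuming the default configuration:

from datasets import load_dataset

# Load the full train split straight from the Hub (no partitioning).
ds = load_dataset("flwrlabs/ambient-acoustic-context", split="train")
sample = ds[0]  # the instance shown above
print(sample["speaker_id"], sample["label"], sample["audio"]["sampling_rate"])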

Data Split

DatasetDict({
    train: Dataset({
        features: ['audio', 'speaker_id', 'label', 'start', 'id'],
        num_rows: 70254
    })
})
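
Because partitioning is done by speaker_id, the number of FL clients equals the number of unique speaker_id values in this single train split. A small sketch to compute that count (it is not listed on this card, so treat the result as whatever the data yields):

from datasets import load_dataset

# Each unique speaker_id becomes one partition/client under the
# NaturalIdPartitioner shown in the Uses section.
ds = load_dataset("flwrlabs/ambient-acoustic-context", split="train")
num_clients = len(set(ds["speaker_id"]))
print(num_clients)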

Citation

When working with the Ambient Acoustic Context dataset, please cite the original paper. If you're using this dataset with Flower Datasets and Flower, cite Flower.

BibTeX:

Original paper:

@inproceedings{10.1145/3379503.3403535,
author = {Park, Chunjong and Min, Chulhong and Bhattacharya, Sourav and Kawsar, Fahim},
title = {Augmenting Conversational Agents with Ambient Acoustic Contexts},
year = {2020},
isbn = {9781450375160},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3379503.3403535},
doi = {10.1145/3379503.3403535},
abstract = {Conversational agents are rich in content today. However, they are entirely oblivious to users’ situational context, limiting their ability to adapt their response and interaction style. To this end, we explore the design space for a context augmented conversational agent, including analysis of input segment dynamics and computational alternatives. Building on these, we propose a solution that redesigns the input segment intelligently for ambient context recognition, achieved in a two-step inference pipeline. We first separate the non-speech segment from acoustic signals and then use a neural network to infer diverse ambient contexts. To build the network, we curated a public audio dataset through crowdsourcing. Our experimental results demonstrate that the proposed network can distinguish between 9 ambient contexts with an average F1 score of 0.80 with a computational latency of 3 milliseconds. We also build a compressed neural network for on-device processing, optimised for both accuracy and latency. Finally, we present a concrete manifestation of our solution in designing a context-aware conversational agent and demonstrate use cases.},
booktitle = {22nd International Conference on Human-Computer Interaction with Mobile Devices and Services},
articleno = {33},
numpages = {9},
keywords = {Acoustic ambient context, Conversational agents},
location = {Oldenburg, Germany},
series = {MobileHCI '20}
}

Flower:

@article{DBLP:journals/corr/abs-2007-14390,
  author       = {Daniel J. Beutel and
                  Taner Topal and
                  Akhil Mathur and
                  Xinchi Qiu and
                  Titouan Parcollet and
                  Nicholas D. Lane},
  title        = {Flower: {A} Friendly Federated Learning Research Framework},
  journal      = {CoRR},
  volume       = {abs/2007.14390},
  year         = {2020},
  url          = {https://arxiv.org/abs/2007.14390},
  eprinttype    = {arXiv},
  eprint       = {2007.14390},
  timestamp    = {Mon, 03 Aug 2020 14:32:13 +0200},
  biburl       = {https://dblp.org/rec/journals/corr/abs-2007-14390.bib},
  bibsource    = {dblp computer science bibliography, https://dblp.org}
}

Dataset Card Contact

If you have any questions about the dataset preprocessing and preparation, please contact Flower Labs.
