Dataset Card for DreamCatcher

DreamCatcher is the first open-source sleep event dataset targeting the ubiquitous sensors found on commercial devices. By including data from non-wearers, DreamCatcher facilitates the development and evaluation of wearer-aware sleep event monitoring.

Dataset Details

Dataset Description

This comprehensive dataset was collected from 12 pairs of participants (24 people in total) over 420 person-hours, providing a substantial and diverse resource for sleep monitoring research.

  • Key characteristics of DreamCatcher include:

    • Multi-modal data: The dataset combines audio and motion data to provide a more complete picture of sleep events.
    • Dual-channel audio: Two microphones make it possible to distinguish wearer-generated sounds from those of the environment or other individuals.
    • Fine-grained labeling: The data is meticulously annotated with detailed labels for each sleep event, allowing for accurate training and evaluation of algorithms.
    • Real-world data: The data was collected in natural, non-restrictive environments rather than a laboratory, making it representative of real-life sleep scenarios.
    • Large scale: With 420 person-hours of recordings from 24 participants, the dataset provides ample data for training and testing robust algorithms.
  • Curated by: Pervasive Human Computer Interaction Laboratory

  • Language(s) (NLP): Multiple (primarily English)

  • License: cc-by-4.0

Uses

This dataset could be used for:

  • Developing and evaluating sleep event detection algorithms on earables: The dataset covers a variety of sleep events, such as snoring, teeth grinding, sleep talking, swallowing, coughing, and body movement, along with synchronized dual-channel audio and motion data. This makes it well suited for training and testing algorithms that identify and classify these events on ear-worn devices.
  • Exploring solutions for wearer-aware sleep monitoring in multi-sleeper scenarios: DreamCatcher data was collected in real-world environments where individuals share sleeping spaces with others. This makes it valuable for developing algorithms that can distinguish between the wearer’s sleep events and those generated by others in the room.
  • Assisting in the diagnosis of sleep disorders: By analyzing sleep events, researchers and clinicians can gain insights into potential sleep disorders, such as sleep apnea, bruxism, and insomnia. This dataset can help develop tools that assist in the diagnosis and monitoring of these conditions.
  • Inferring sleep stages: The types and frequencies of sleep events can provide information about sleep stages, such as light sleep, deep sleep, and REM sleep. This dataset can be used to develop algorithms that infer sleep stages based on sleep event patterns.
  • Improving the design and functionality of smart earbuds: DreamCatcher data can be used to develop smart earbuds that can monitor sleep events, provide personalized sleep recommendations, and enhance overall sleep quality.

Dataset Structure

The dataset contains the following fields for each sample:

  • id: A unique identifier for the sample.
  • user_id: A unique identifier for the participant.
  • start_time: A timestamp indicating the start time of the sample.
  • end_time: A timestamp indicating the end time of the sample.
  • label: A label indicating the type of sleep event. Possible labels include:
    • noise: Acoustic events emitted by non-wearers or background noise.
    • bruxism: Grinding or clenching teeth.
    • swallow: Reflexively or intentionally swallowing saliva.
    • somniloquy: Talking aloud, murmuring, or shouting while asleep.
    • breathe: One inhalation + one exhalation.
    • cough: Coughing, throat clearing, or sniffling.
    • snore: One inhalation + one exhalation with prominent vibrations or whistling.
    • movement: Shifts in body position or gestures.
  • audio_data: The path to the dual-channel audio data.
  • motion_data: The path to the motion data.
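
For illustration, a single record might look like the following sketch; all values, paths, and timestamp formats below are hypothetical, not taken from the released files:

    # Hypothetical example of one sample record; actual field values,
    # path layouts, and timestamp formats may differ.
    sample = {
        "id": "000123",
        "user_id": "P07",
        "start_time": "2023-11-02T01:14:05.250",
        "end_time": "2023-11-02T01:14:07.750",
        "label": "snore",
        "audio_data": "audio/P07/000123.wav",
        "motion_data": "motion/P07/000123.csv",
    }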

Audio Data: The audio data is stored in WAV format and includes signals from the feedback and feedforward microphones. The sampling rate is 24 kHz, and each channel uses 16-bit PCM encoding.
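
As a minimal sketch, the dual-channel recordings can be read with the soundfile library; the path and channel ordering below are assumptions to verify against the data:

    import soundfile as sf

    # Read one dual-channel recording; data has shape (num_frames, 2).
    data, sr = sf.read("audio/P07/000123.wav")  # hypothetical path
    assert sr == 24000  # 24 kHz per the dataset card

    # Channel order is an assumption; check it against the actual files.
    feedback = data[:, 0]
    feedforward = data[:, 1]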

Motion Data: The motion data is stored in CSV format and includes six-axis data from the accelerometer and gyroscope. The sampling rate is approximately 94 Hz.
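
A corresponding sketch for loading the motion files with pandas; the column names are assumptions and should be checked against the actual CSV headers:

    import pandas as pd

    motion = pd.read_csv("motion/P07/000123.csv")  # hypothetical path

    # Assumed columns: a timestamp plus six IMU axes.
    axes = ["acc_x", "acc_y", "acc_z", "gyro_x", "gyro_y", "gyro_z"]
    imu = motion[axes].to_numpy()

    # Estimate the effective sampling rate from consecutive timestamps
    # (the card states roughly 94 Hz). Assumes timestamps are in seconds.
    dt = motion["timestamp"].diff().dropna()
    print(f"estimated rate: {1.0 / dt.mean():.1f} Hz")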

Data Splitting: The dataset is split into training, validation, and test sets by participant ID to evaluate how well models generalize to unseen users. The splitting strategy aims to balance the label distribution across splits while ensuring that no participant appears in more than one set.
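
A participant-disjoint split of this kind can be reproduced with scikit-learn's GroupShuffleSplit, grouping on user_id; this is a generic sketch, not the exact procedure behind the official splits:

    from sklearn.model_selection import GroupShuffleSplit

    # Dummy stand-ins for the real sample metadata (see the record sketch above).
    samples = [{"id": i, "user_id": f"P{i // 10:02d}"} for i in range(60)]
    labels = ["snore"] * 60
    user_ids = [s["user_id"] for s in samples]

    # Hold out a fraction of participants (not a fraction of samples), so
    # no user appears in both the training and test sets.
    gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
    train_idx, test_idx = next(gss.split(samples, labels, groups=user_ids))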

Citation

BibTeX:

    @inproceedings{wang2024dreamcatcher,
      title     = {DreamCatcher: A Wearer-aware Multi-modal Sleep Event Dataset Based on Earables in Non-restrictive Environments},
      author    = {Zeyu Wang and Xiyuxing Zhang and Ruotong Yu and Yuntao Wang and Kenneth Christofferson and Jingru Zhang and Alex Mariakakis and Yuanchun Shi},
      booktitle = {The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
      year      = {2024},
      url       = {https://openreview.net/forum?id=PcbSZwVVc5}
    }

APA: Wang, Z., Zhang, X., Yu, R., Wang, Y., Christofferson, K., Zhang, J., ... & Shi, Y. (2024). DreamCatcher: A Wearer-aware Multi-modal Sleep Event Dataset Based on Earables in Non-restrictive Environments. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
