Getting Started with WavePulse Radio Transcripts Dataset
This tutorial will help you get started with using the WavePulse Radio Transcripts dataset from Hugging Face.
Prerequisites
Before starting, make sure you have the required packages installed:
pip install datasets
pip install huggingface-hub
Basic Setup
First, let's set up our environment with some helpful configurations:
from datasets import load_dataset
import huggingface_hub
# Increase timeout for large downloads
huggingface_hub.constants.HF_HUB_DOWNLOAD_TIMEOUT = 60
# Set up cache directory (optional)
cache_dir = "wavepulse_dataset"
Loading Strategies
1. Loading a Specific State (Recommended for Beginners)
Instead of loading the entire dataset, start with one state:
# Load data for just New York
ny_dataset = load_dataset("nyu-dice-lab/wavepulse-radio-raw-transcripts",
                          "NY",
                          cache_dir=cache_dir)
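Once loaded, it is worth checking the dataset's structure before filtering anything. A minimal sketch (the split and field names follow those used later in this tutorial):
# Inspect splits, row counts, and column schema
print(ny_dataset)
print(ny_dataset["train"].features)
# Peek at the first segment
print(ny_dataset["train"][0])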
2. Streaming Mode (Memory Efficient)
If you're working with limited RAM:
# Stream the dataset
stream_dataset = load_dataset("nyu-dice-lab/wavepulse-radio-raw-transcripts",
                              streaming=True,
                              cache_dir=cache_dir)
# Access data in a streaming fashion
for example in stream_dataset["train"].take(5):
    print(example["text"])
Common Tasks
The examples below assume a regular (non-streaming) dataset loaded as dataset, for example dataset = ny_dataset from the state-specific load above.
1. Filtering by Date Range
# Filter for August 2024
# Use an exclusive upper bound so segments from 31 August are included
filtered_ds = dataset.filter(
    lambda x: "2024-08-01" <= x['datetime'] < "2024-09-01"
)
2. Finding Specific Stations
# Get unique stations
stations = set(dataset["train"]["station"])
# Filter for a specific station
station_ds = dataset.filter(lambda x: x['station'] == 'KENI')
3. Analyzing Transcripts
# Get all segments from a specific transcript
transcript_ds = dataset.filter(
    lambda x: x['transcript_id'] == 'AK_KAGV_2024_08_25_13_00'
)
# Sort segments by their index to maintain order
sorted_segments = sorted(transcript_ds["train"], key=lambda x: x['segment_index'])
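Once the segments are in order, you can stitch them back into a single readable transcript. A small sketch using the speaker and text fields referenced elsewhere in this tutorial:
# Join ordered segments into one readable transcript
full_text = "\n".join(f"[{seg['speaker']}] {seg['text']}" for seg in sorted_segments)
print(full_text[:500])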
Best Practices
Memory Management:
- Start with a single state or small sample
- Use streaming mode for large-scale processing
- Clear the cache when needed:
ny_dataset.cleanup_cache_files()  # removes cache files created by map/filter
Disk Space:
- Ensure at least 75-80 GB of free space for the full dataset
- Use state-specific loading to reduce space requirements
- Clean up the cache regularly (see the cache-inspection sketch below)
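One way to monitor cache usage is huggingface_hub's scan_cache_dir, which reports how much disk space cached Hub files occupy. A sketch (it scans the default Hugging Face cache location, not the custom cache_dir set above):
from huggingface_hub import scan_cache_dir

# Summarize disk usage of the default Hugging Face cache
cache_info = scan_cache_dir()
print(f"Total cache size: {cache_info.size_on_disk / 2**30:.1f} GB")
for repo in cache_info.repos:
    print(repo.repo_id, f"{repo.size_on_disk / 2**30:.2f} GB")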
Error Handling:
- Always configure a download timeout (as in Basic Setup above)
- Implement retry logic for large downloads (see the sketch below)
- Handle connection errors gracefully
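For retry logic, a simple wrapper around load_dataset is usually enough. A minimal sketch (the retry count and wait time are arbitrary choices, not values recommended by the dataset authors):
import time
from datasets import load_dataset

def load_with_retries(config, retries=3, wait_seconds=30):
    # Try the download a few times before giving up
    for attempt in range(1, retries + 1):
        try:
            return load_dataset("nyu-dice-lab/wavepulse-radio-raw-transcripts",
                                config,
                                cache_dir=cache_dir)
        except Exception as err:
            if attempt == retries:
                raise
            print(f"Attempt {attempt} failed ({err}); retrying in {wait_seconds}s")
            time.sleep(wait_seconds)

ny_dataset = load_with_retries("NY")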
Example Use Cases
1. Basic Content Analysis
# Count segments per station
from collections import Counter
station_counts = Counter(dataset["train"]["station"])
print("Most common stations:", station_counts.most_common(5))
2. Time-based Analysis
# Get distribution of segments across hours
import datetime
hour_distribution = Counter(
    datetime.datetime.fromisoformat(dt).hour
    for dt in dataset["train"]["datetime"]
)
3. Speaker Analysis
# Analyze speaker patterns in a transcript
def analyze_speakers(transcript_id):
    # Filter on the split so iterating yields one dict per segment
    segments = dataset["train"].filter(
        lambda x: x['transcript_id'] == transcript_id
    )
    speakers = [seg['speaker'] for seg in segments]
    return Counter(speakers)
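For example, reusing the transcript id from the filtering example above:
speaker_counts = analyze_speakers('AK_KAGV_2024_08_25_13_00')
print(speaker_counts.most_common(3))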
Common Issues and Solutions
Timeout Errors:
# Increase timeout duration
huggingface_hub.constants.HF_HUB_DOWNLOAD_TIMEOUT = 120
Memory Errors:
# Iterate in batches to keep memory usage bounded
for batch in dataset["train"].iter(batch_size=1000):
    process_batch(batch)  # replace with your own processing function
Disk Space Issues:
# Check available space before downloading
import shutil
total, used, free = shutil.disk_usage("/")
print(f"Free disk space: {free // (2**30)} GB")
Need Help?
- Dataset documentation: https://huggingface.co/datasets/nyu-dice-lab/wavepulse-radio-raw-transcripts
- Project website: https://wave-pulse.io
- Report issues: https://github.com/nyu-dice-lab/wavepulse/issues
Remember to cite the dataset in your work:
@article{mittal2024wavepulse,
  title={WavePulse: Real-time Content Analytics of Radio Livestreams},
  author={Mittal, Govind and Gupta, Sarthak and Wagle, Shruti and Chopra, Chirag
          and DeMattee, Anthony J and Memon, Nasir and Ahamad, Mustaque
          and Hegde, Chinmay},
  journal={arXiv preprint arXiv:2412.17998},
  year={2024}
}