---
license: cc-by-nc-sa-4.0
language:
  - en
tags:
  - text segmentation
  - smart chaptering
  - segmentation
  - youtube
  - asr
pretty_name: YTSeg
size_categories:
  - 10K<n<100K
task_categories:
  - token-classification
  - automatic-speech-recognition
---

# From Text Segmentation to Smart Chaptering: A Novel Benchmark for Structuring Video Transcriptions

We present YTSeg, a topically and structurally diverse benchmark for the text segmentation task based on YouTube transcriptions. The dataset comprises 19,299 videos from 393 channels, amounting to 6,533 content hours. The topics are wide-ranging, covering domains such as science, lifestyle, politics, health, economy, and technology. The videos come from various content formats, such as podcasts, lectures, news, corporate events & promotional content, and, more broadly, videos from individual content creators. We refer to the paper (acl | arXiv) for further information. We provide both text and audio data, as well as a download script for the video data.

## Data Overview

### YTSeg

Each video is represented as a JSON object with the following fields:

| Field | Description |
| --- | --- |
| `text` | A flat list of sentences. |
| `targets` | The target segmentation as a string of binary values (e.g., `000100000010`). |
| `channel_id` | The YouTube channel ID which this video belongs to. |
| `video_id` | The YouTube video ID. |
| `audio_path` | Path to the `.mp3` file of the video. |
| `chapters` | A list of chapter titles, one corresponding to each segment. |
| Partition | # Examples |
| --- | --- |
| Training | 16,404 (85%) |
| Validation | 1,447 (7.5%) |
| Testing | 1,448 (7.5%) |
| Total | 19,299 |
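
To illustrate how `text`, `targets`, and `chapters` fit together, here is a minimal sketch that reconstructs segments from a single record. It assumes that `targets` holds one binary character per sentence in `text` and that a `1` marks the sentence closing a segment; please verify the exact boundary convention against the paper before relying on it.

```python
# Minimal sketch: reconstruct chapter segments from one parsed YTSeg record.
# Assumption (not stated in this README): `targets` has one character per
# sentence, and "1" marks the sentence that ends a segment.

def split_into_segments(record: dict) -> list[list[str]]:
    sentences = record["text"]
    labels = record["targets"]
    segments, current = [], []
    for sentence, label in zip(sentences, labels):
        current.append(sentence)
        if label == "1":          # segment boundary reached
            segments.append(current)
            current = []
    if current:                   # sentences after the last marked boundary
        segments.append(current)
    return segments

# Each reconstructed segment should then line up with one title in `chapters`.
```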

### YTSeg[Titles]

Each chapter of a video is represented as a JSON object with the following fields:

| Field | Description |
| --- | --- |
| `input` | The complete chapter/section text. |
| `input_with_chapters` | The complete chapter/section text with the previous section titles prepended. |
| `target` | The target chapter title. |
| `channel_id` | The YouTube channel ID of the video this chapter belongs to. |
| `video_id` | The YouTube video ID of the video this chapter belongs to. |
| `chapter_idx` | The index of the chapter within the video (e.g., the first chapter has index 0). |
| Partition | # Examples |
| --- | --- |
| Training | 146,907 (84.8%) |
| Validation | 13,206 (7.6%) |
| Testing | 13,082 (7.6%) |
| Total | 173,195 |
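
As an illustration of how these fields might be combined for title generation, the following sketch builds `(input, target)` pairs and groups chapters back into their source videos. It assumes the records are available as a list of parsed JSON objects whose keys follow the table above; none of the helper names below come from the repository.

```python
from collections import defaultdict

def group_chapters_by_video(records: list[dict]) -> dict[str, list[dict]]:
    """Collect chapters per video and order them by `chapter_idx`."""
    videos = defaultdict(list)
    for record in records:
        videos[record["video_id"]].append(record)
    for chapters in videos.values():
        chapters.sort(key=lambda r: r["chapter_idx"])
    return dict(videos)

def make_title_pairs(records: list[dict]) -> list[tuple[str, str]]:
    """(input, target) pairs for a sequence-to-sequence title generator."""
    return [(r["input_with_chapters"], r["target"]) for r in records]
```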

## Audio Data

We provide audio files for all examples in the dataset, preprocessed into the `.mp3` format with a standardized sample rate of 16,000 Hz and a single channel (mono). These files are organized within the following directory structure: `data/audio/<channel_id>/<video_id>.mp3`.
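
Since this README does not prescribe a loading library, the sketch below uses `librosa` purely as an example of reading one file from the documented layout; any mp3-capable audio library works equally well.

```python
import os

import librosa  # example choice, not part of this repository

def load_audio(data_root: str, channel_id: str, video_id: str):
    """Load one clip following the data/audio/<channel_id>/<video_id>.mp3 layout."""
    path = os.path.join(data_root, "audio", channel_id, f"{video_id}.mp3")
    # Files are already 16 kHz mono, so sr=None keeps the stored sample rate.
    waveform, sample_rate = librosa.load(path, sr=None, mono=True)
    return waveform, sample_rate
```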

## Video Data

A download script for the video and audio data is provided:

```bash
python download_videos.py
```

In the script, you can further specify a target folder (default is `./video`) and target formats in a priority list.

## Loading Text Data

This repository comes with a simple, exemplary script to read in the text data with pandas.

```python
from load_data import get_partition
test_data = get_partition('test')
```

Equivalently, to read in YTSeg[Titles]:

```python
from load_data import get_title_partition
test_data = get_title_partition('test')
```
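
Assuming the returned object is a pandas DataFrame whose columns mirror the JSON fields documented above (an assumption, not something this README guarantees), a quick way to inspect a record looks like this:

```python
from load_data import get_partition

test_data = get_partition('test')

# Assumption: rows expose the documented JSON fields as columns.
first = test_data.iloc[0]
print(len(first["text"]), "sentences")
print(first["targets"].count("1"), "segment boundaries")
print(first["chapters"])
```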

## Citing

If you use our dataset, we kindly request that you cite our corresponding EACL 2024 paper.

```bibtex
@inproceedings{retkowski-waibel-2024-text,
    title = "From Text Segmentation to Smart Chaptering: A Novel Benchmark for Structuring Video Transcriptions",
    author = "Retkowski, Fabian  and Waibel, Alexander",
    editor = "Graham, Yvette  and Purver, Matthew",
    booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = mar,
    year = "2024",
    address = "St. Julian{'}s, Malta",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.eacl-long.25",
    pages = "406--419",
    abstract = "Text segmentation is a fundamental task in natural language processing, where documents are split into contiguous sections. However, prior research in this area has been constrained by limited datasets, which are either small in scale, synthesized, or only contain well-structured documents. In this paper, we address these limitations by introducing a novel benchmark YTSeg focusing on spoken content that is inherently more unstructured and both topically and structurally diverse. As part of this work, we introduce an efficient hierarchical segmentation model MiniSeg, that outperforms state-of-the-art baselines. Lastly, we expand the notion of text segmentation to a more practical {``}smart chaptering{''} task that involves the segmentation of unstructured content, the generation of meaningful segment titles, and a potential real-time application of the models.",
}
```

## Changelog

- 25.07.2024 -- Added complete list of chapter titles to YTSeg (YTSeg[Titles] is a filtered subset)
- 09.04.2024 -- Added audio data
- 27.02.2024 -- Initial release

## License

The dataset is available under the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) 4.0 license. We note that we do not own the copyright of the videos and have therefore opted to release the dataset under a non-commercial license, with the intended use being research and education.