---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- text segmentation
- smart chaptering
- segmentation
- youtube
- asr
pretty_name: YTSeg
size_categories:
- 10K<n<100K
task_categories:
- token-classification
- automatic-speech-recognition
---
# From Text Segmentation to Smart Chaptering: A Novel Benchmark for Structuring Video Transcriptions
We present <span style="font-variant:small-caps; font-weight:700;">YTSeg</span>, a topically and structurally diverse benchmark for the text segmentation task based on YouTube transcriptions. The dataset comprises 19,299 videos from 393 channels, amounting to 6,533 content hours. The topics are wide-ranging, covering domains such as science, lifestyle, politics, health, economy, and technology. The videos span various content formats, such as podcasts, lectures, news, corporate events & promotional content, and, more broadly, videos from individual content creators. We refer to the **paper** ([acl](https://aclanthology.org/2024.eacl-long.25/) | [arXiv](https://arxiv.org/abs/2402.17633)) for further information. We provide both text and audio data, as well as a download script for the video data.
## Data Overview
### <span style="font-variant:small-caps;">YTSeg</span>
Each video is represented as a JSON object with the following fields:
| Field | Description |
|--------------|------------------------------------------------|
| `text` | A flat list of sentences. |
| `targets` | The target segmentation as a string of binary values (e.g., `000100000010`). |
| `channel_id` | The ID of the YouTube channel to which this video belongs. |
| `video_id` | The YouTube video ID. |
| `audio_path` | Path to the video's .mp3 file. |
| `chapters` | A list of chapter titles, one per segment. |

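The `targets` string can be used to recover segment boundaries from the flat sentence list. A minimal sketch in Python, assuming a `1` at position *i* marks sentence *i* as the start of a new segment (the exact boundary convention should be verified against the paper; the function name is ours):

```python
def split_into_segments(sentences, targets):
    """Group a flat sentence list into segments using a binary target string."""
    # Assumption: targets[i] == "1" marks sentence i as the start of a new segment.
    segments, current = [], []
    for sentence, flag in zip(sentences, targets):
        if flag == "1" and current:
            segments.append(current)
            current = []
        current.append(sentence)
    if current:
        segments.append(current)
    return segments
```

Under this convention, each list in the result corresponds to one title in `chapters`.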
| Partition | # Examples |
|------------|--------------|
| Training | 16,404 (85%) |
| Validation | 1,447 (7.5%) |
| Testing | 1,448 (7.5%) |
| Total      | 19,299 |
### <span style="font-variant:small-caps;">YTSeg[Titles]</span>
Each chapter of a video is represented as a JSON object with the following fields:
| Field | Description |
|--------------|------------------------------------------------|
| `input` | The complete chapter/section text. |
| `input_with_chapters` | The complete chapter/section text with the titles of preceding sections prepended. |
| `target` | The target chapter title. |
| `channel_id` | The ID of the YouTube channel to which this chapter's video belongs. |
| `video_id` | The ID of the YouTube video to which this chapter belongs. |
| `chapter_idx` | The zero-based index of the chapter within the video (e.g., the first chapter has index `0`). |

| Partition | # Examples |
|------------|--------------|
| Training | 146,907 (84.8%)|
| Validation | 13,206 (7.6%) |
| Testing | 13,082 (7.6%) |
| Total | 173,195 |
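Since each example in <span style="font-variant:small-caps;">YTSeg[Titles]</span> is a single chapter, per-video title lists can be reassembled from `video_id` and `chapter_idx`. A small sketch, assuming the JSON objects are loaded as a list of dicts with the fields described above (the function name is ours):

```python
from collections import defaultdict

def titles_by_video(records):
    """Group chapter records into ordered per-video lists of target titles.

    Assumes `records` is a list of dicts with the fields `video_id`,
    `chapter_idx`, and `target` as described in the table above.
    """
    grouped = defaultdict(list)
    for record in records:
        grouped[record["video_id"]].append(record)
    return {
        video_id: [r["target"] for r in sorted(chapters, key=lambda r: r["chapter_idx"])]
        for video_id, chapters in grouped.items()
    }
```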
### Audio Data
We provide audio files for all examples in the dataset, preprocessed into the .mp3 format with a standardized sample rate of 16,000 Hz and a single channel (mono). These files are organized within the directory structure as follows: `data/audio/<channel_id>/<video_id>.mp3`.
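The audio path for a given example can be derived from its `channel_id` and `video_id`. A minimal sketch following the layout above (the function name and default root are our assumptions):

```python
from pathlib import Path

def audio_path(channel_id: str, video_id: str, root: str = "data/audio") -> Path:
    """Build the expected .mp3 path for one example: <root>/<channel_id>/<video_id>.mp3"""
    return Path(root) / channel_id / f"{video_id}.mp3"
```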
### Video Data
A download script for the video and audio data is provided.
```sh
python download_videos.py
```
In the script, you can further specify a target folder (default is `./video`) and target formats in a priority list.
## Loading Text Data
This repository comes with a simple example script to read in the text data with `pandas`.
```py
from load_data import get_partition
test_data = get_partition('test')
```
Equivalently, to read in <span style="font-variant:small-caps;">YTSeg[Titles]</span>:
```py
from load_data import get_title_partition
test_data = get_title_partition('test')
```
## Citing
If you use our dataset, we kindly ask that you cite our corresponding EACL 2024 paper.
```bibtex
@inproceedings{retkowski-waibel-2024-text,
title = "From Text Segmentation to Smart Chaptering: A Novel Benchmark for Structuring Video Transcriptions",
author = "Retkowski, Fabian and Waibel, Alexander",
editor = "Graham, Yvette and Purver, Matthew",
booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = mar,
year = "2024",
address = "St. Julian{'}s, Malta",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.eacl-long.25",
pages = "406--419",
abstract = "Text segmentation is a fundamental task in natural language processing, where documents are split into contiguous sections. However, prior research in this area has been constrained by limited datasets, which are either small in scale, synthesized, or only contain well-structured documents. In this paper, we address these limitations by introducing a novel benchmark YTSeg focusing on spoken content that is inherently more unstructured and both topically and structurally diverse. As part of this work, we introduce an efficient hierarchical segmentation model MiniSeg, that outperforms state-of-the-art baselines. Lastly, we expand the notion of text segmentation to a more practical {``}smart chaptering{''} task that involves the segmentation of unstructured content, the generation of meaningful segment titles, and a potential real-time application of the models.",
}
```
## Changelog
- 25.07.2024 -- Added complete list of chapter titles to `YTSeg` (`YTSeg[Titles]` is a filtered subset)
- 09.04.2024 -- Added audio data
- 27.02.2024 -- Initial release
## License
The dataset is available under the **Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) 4.0** license. Note that we do not own the copyright of the videos; we therefore release the dataset under a non-commercial license, with the intended use being research and education.