retkowski committed
Commit 366df31
Parent: c3a00ef

Minor description edits

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -12,13 +12,13 @@ size_categories:
 - 10K<n<100K
 ---
 
-# YTSeg
+# From Text Segmentation to Smart Chaptering: A Novel Benchmark for Structuring Video Transcriptions
 
-We present `YTSeg`, a topically and structurally diverse benchmark for the text segmentation task based on YouTube transcriptions. The dataset comprises 19,299 videos from 393 channels, amounting to 6,533 content hours. The topics are wide-ranging, covering domains such as science, lifestyle, politics, health, economy, and technology. The videos are from various types of content formats, such as podcasts, lectures, news, corporate events \& promotional content, and, more broadly, videos from individual content creators. We refer to the [paper](https://arxiv.org/abs/2402.17633) for further information.
+We present <span style="font-variant:small-caps; font-weight:700;">YTSeg</span>, a topically and structurally diverse benchmark for the text segmentation task based on YouTube transcriptions. The dataset comprises 19,299 videos from 393 channels, amounting to 6,533 content hours. The topics are wide-ranging, covering domains such as science, lifestyle, politics, health, economy, and technology. The videos are from various types of content formats, such as podcasts, lectures, news, corporate events \& promotional content, and, more broadly, videos from individual content creators. We refer to the [paper](https://arxiv.org/abs/2402.17633) for further information.
 
 ## Data Overview
 
-### YTSeg
+### <span style="font-variant:small-caps;">YTSeg</span>
 
 Each video is represented as a JSON object with the following fields:
 
@@ -37,7 +37,7 @@ Each video is represented as a JSON object with the following fields:
 | Testing | 1,448 (7.5%) |
 | Total | 19,229 |
 
-### YTSeg[Titles]
+### <span style="font-variant:small-caps;">YTSeg[Titles]</span>
 
 Each chapter of a video is represented as a JSON object with the following fields:
 
@@ -69,14 +69,14 @@ In the script, you can further specify a target folder (default is `./video`) an
 
 ## Loading Data
 
-This repository comes with a simple, examplary script to read in the data with `pandas`.
+This repository comes with a simple, exemplary script to read in the data with `pandas`.
 
 ```py
 from load_data import get_partition
 test_data = get_partition('test')
 ```
 
-Equivalently, to read in `YTSeg[Titles]`:
+Equivalently, to read in <span style="font-variant:small-caps;">YTSeg[Titles]</span>:
 
 ```py
 from load_data import get_title_partition
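
Beyond the snippets visible in the diff, here is a minimal usage sketch. It assumes that `get_title_partition` accepts the same split argument as `get_partition` and that both return `pandas` objects; neither detail is confirmed by the hunks above.

```py
from load_data import get_partition, get_title_partition

# YTSeg test split; assumed to come back as a pandas DataFrame
test_data = get_partition('test')
print(len(test_data))

# YTSeg[Titles] test split; assumed to mirror get_partition's interface
test_titles = get_title_partition('test')
print(len(test_titles))
```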