---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
size_categories:
- 100K<n<1M
---
# Librispeech-PC (punctuation and capitalization restored)


### The original dataset can be found at <https://www.openslr.org/145/>

- This dataset was prepared for personal use, so the code may not be perfect.



### How to Use

- Usage is almost the same as the [Librispeech](https://huggingface.co/datasets/openslr/librispeech_asr) dataset module, since this loader was adapted from its [source code](https://huggingface.co/datasets/openslr/librispeech_asr/blob/main/librispeech_asr.py).

- Three types of transcripts are provided:
  - `text_normalized` : the transcript from Librispeech ASR
  - `text`, `text_raw` : the transcripts from Librispeech-PC


```python
from datasets import load_dataset
from pprint import pprint
from IPython.display import display, Audio
import aiohttp
import os

# Set the Hugging Face cache directory for the extracted raw files and the huggingface-cli token.
# Without this, the raw tar.gz files may be downloaded to the default home cache
# even if you pass the `cache_dir` parameter. (Note: a shell `export` from a notebook
# runs in a subshell and does not affect this process; set it via os.environ instead.)
os.environ['HF_HOME'] = "/data_dir/to/download"


# Download the dataset.
# If librispeech_asr is already present in cache_dir, the same audio files are reused.
libripc = load_dataset("yoom618/librispeech_pc",
                       "all",  # all, clean, other
                       cache_dir="/data_dir/to/download",
                       trust_remote_code=True,
                       # storage_options={
                       #     # add if you need to increase the timeout for openslr download
                       #     'client_kwargs': {'timeout': aiohttp.ClientTimeout(total=7200)}
                       # },
                       )


# check dataset info
print(libripc)

display(Audio(libripc['train.clean.100'][0]['audio']['array'], 
              rate=libripc['train.clean.100'][0]['audio']['sampling_rate'],
              autoplay=False))
pprint(libripc['train.clean.100'][0]['audio'])

```


- Data Sample

  ```raw
  {'audio': {'array': array([ 7.01904297e-04,  7.32421875e-04,  7.32421875e-04, ...,
                             -2.74658203e-04, -1.83105469e-04, -3.05175781e-05]),
             'path': '/data_dir/to/download/downloads/extracted/.../374-180298-0000.flac',
             'sampling_rate': 16000},
   'chapter_id': 180298,
   'duration': 14.529999732971191,
   'file': '/data_dir/to/download/downloads/extracted/.../374-180298-0000.flac',
   'id': '374-180298-0000',
   'speaker_id': 374,
   'text': 'Chapter sixteen I might have told you of the beginning of this '
           'liaison in a few lines, but I wanted you to see every step by which '
           'we came, I to agree to whatever Marguerite wished,',
   'text_normalized': 'CHAPTER SIXTEEN I MIGHT HAVE TOLD YOU OF THE BEGINNING OF '
                      'THIS LIAISON IN A FEW LINES BUT I WANTED YOU TO SEE EVERY '
                      'STEP BY WHICH WE CAME I TO AGREE TO WHATEVER MARGUERITE '
                      'WISHED',
   'text_raw': 'Chapter sixteen I might have told you of the beginning of this '
               'liaison in a few lines, but I wanted you to see every step by '
               'which we came, I to agree to whatever Marguerite wished,'}
  ```
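  The relationship between the fields can be seen by normalizing `text` yourself. Below is a minimal sketch that strips punctuation and uppercases, which reproduces `text_normalized` for the sample above; the actual normalization used by the dataset authors is more involved, so treat this only as an illustration.

  ```python
  import string

  def naive_normalize(text: str) -> str:
      """Approximate Librispeech-style normalization: strip punctuation,
      uppercase, and collapse whitespace. Illustrative only."""
      stripped = text.translate(str.maketrans('', '', string.punctuation))
      return ' '.join(stripped.upper().split())

  sample_text = ('Chapter sixteen I might have told you of the beginning of this '
                 'liaison in a few lines, but I wanted you to see every step by which '
                 'we came, I to agree to whatever Marguerite wished,')
  print(naive_normalize(sample_text))
  # → CHAPTER SIXTEEN I MIGHT HAVE TOLD YOU OF THE BEGINNING OF THIS LIAISON ...
  ```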


### The number of samples in Librispeech-PC

- train
  - `train.clean.100` :  26,041  (  2,498 out of  28,539 were dropped )
  - `train.clean.360` :  95,404  (  8,610 out of 104,014 were dropped )
  - `train.other.500` : 134,679  ( 14,009 out of 148,688 were dropped )
- dev (validation)
  - `dev.clean`  : 2,530  ( 173 out of 2,703 were dropped )
  - `dev.other`  : 2,728  ( 136 out of 2,864 were dropped )
- test
  - `test.clean` : 2,417  ( 203 out of 2,620 were dropped )
  - `test.other` : 2,856  (  83 out of 2,939 were dropped )
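The split sizes above can be sanity-checked programmatically: kept plus dropped should equal the original Librispeech split size, and the drop rate follows directly. A small sketch using the numbers from this table:

```python
# Split name -> (kept, dropped, original Librispeech size), from the table above.
splits = {
    'train.clean.100': (26_041, 2_498, 28_539),
    'train.clean.360': (95_404, 8_610, 104_014),
    'train.other.500': (134_679, 14_009, 148_688),
    'dev.clean':  (2_530, 173, 2_703),
    'dev.other':  (2_728, 136, 2_864),
    'test.clean': (2_417, 203, 2_620),
    'test.other': (2_856, 83, 2_939),
}

for name, (kept, dropped, original) in splits.items():
    assert kept + dropped == original, name  # counts are internally consistent
    print(f'{name}: {dropped / original:.1%} dropped')
```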



### Citation Information

```
@article{meister2023librispeechpc,
  title={LibriSpeech-PC: Benchmark for Evaluation of Punctuation and Capitalization Capabilities of end-to-end ASR Models},
  author={A. Meister and M. Novikov and N. Karpov and E. Bakhturina and V. Lavrukhin and B. Ginsburg},
  journal={arXiv preprint arXiv:2310.02943},
  year={2023}
}
```