---
license: apache-2.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- en
pretty_name: Technical Indian English
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train_0
path: data/train_0-*
- split: train_1
path: data/train_1-*
- split: train_2
path: data/train_2-*
- split: train_3
path: data/train_3-*
- split: train_4
path: data/train_4-*
- split: train_5
path: data/train_5-*
- split: train_6
path: data/train_6-*
- split: train_7
path: data/train_7-*
- split: train_8
path: data/train_8-*
- split: train_9
path: data/train_9-*
- split: train_10
path: data/train_10-*
- split: train_11
path: data/train_11-*
- split: train_12
path: data/train_12-*
- split: train_13
path: data/train_13-*
- split: train_14
path: data/train_14-*
- split: train_15
path: data/train_15-*
- split: train_16
path: data/train_16-*
- split: train_17
path: data/train_17-*
- split: train_18
path: data/train_18-*
- split: train_19
path: data/train_19-*
- split: train_20
path: data/train_20-*
- split: train_21
path: data/train_21-*
- split: train_22
path: data/train_22-*
- split: train_23
path: data/train_23-*
- split: train_24
path: data/train_24-*
- split: train_25
path: data/train_25-*
- split: train_26
path: data/train_26-*
- split: train_27
path: data/train_27-*
- split: train_28
path: data/train_28-*
- split: train_29
path: data/train_29-*
- split: train_30
path: data/train_30-*
- split: train_31
path: data/train_31-*
- split: train_32
path: data/train_32-*
- split: train_33
path: data/train_33-*
- split: train_34
path: data/train_34-*
- split: train_35
path: data/train_35-*
- split: train_36
path: data/train_36-*
- split: train_37
path: data/train_37-*
- split: train_38
path: data/train_38-*
- split: train_39
path: data/train_39-*
- split: train_40
path: data/train_40-*
- split: train_41
path: data/train_41-*
- split: train_42
path: data/train_42-*
- split: train_43
path: data/train_43-*
- split: train_44
path: data/train_44-*
- split: train_45
path: data/train_45-*
- split: train_46
path: data/train_46-*
- split: train_47
path: data/train_47-*
- split: train_48
path: data/train_48-*
- split: train_49
path: data/train_49-*
- split: train_50
path: data/train_50-*
- split: train_51
path: data/train_51-*
- split: train_52
path: data/train_52-*
- split: train_53
path: data/train_53-*
- split: train_54
path: data/train_54-*
- split: train_55
path: data/train_55-*
- split: train_56
path: data/train_56-*
- split: train_57
path: data/train_57-*
- split: train_58
path: data/train_58-*
- split: train_59
path: data/train_59-*
- split: train_60
path: data/train_60-*
- split: train_61
path: data/train_61-*
- split: train_62
path: data/train_62-*
- split: train_63
path: data/train_63-*
- split: train_64
path: data/train_64-*
- split: train_65
path: data/train_65-*
- split: train_66
path: data/train_66-*
- split: train_67
path: data/train_67-*
- split: train_68
path: data/train_68-*
- split: train_69
path: data/train_69-*
- split: train_70
path: data/train_70-*
- split: train_71
path: data/train_71-*
- split: train_72
path: data/train_72-*
- split: train_73
path: data/train_73-*
- split: train_74
path: data/train_74-*
- split: train_75
path: data/train_75-*
- split: train_76
path: data/train_76-*
- split: train_77
path: data/train_77-*
- split: train_78
path: data/train_78-*
- split: test_0
path: data/test_0-*
- split: test_1
path: data/test_1-*
- split: test_2
path: data/test_2-*
- split: test_3
path: data/test_3-*
- split: test_4
path: data/test_4-*
- split: test_5
path: data/test_5-*
dataset_info:
features:
- name: audio
struct:
- name: array
sequence:
sequence: float32
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: split
dtype: string
- name: ID
dtype: string
- name: Transcript
dtype: string
- name: Normalised_Transcript
dtype: string
- name: Speech_Duration_seconds
dtype: float64
- name: Speaker_ID
dtype: int64
- name: Gender
dtype: string
- name: Caste
dtype: string
- name: Year_Class
dtype: string
- name: Speech_Class
dtype: string
- name: Discipline_Group
dtype: string
- name: Native_Region
dtype: string
- name: Topic
dtype: string
splits:
- name: train_0
num_bytes: 159596908
num_examples: 100
- name: train_1
num_bytes: 154466417
num_examples: 100
- name: train_2
num_bytes: 164830755
num_examples: 100
- name: train_3
num_bytes: 163846670
num_examples: 100
- name: train_4
num_bytes: 158878351
num_examples: 100
- name: train_5
num_bytes: 161562786
num_examples: 100
- name: train_6
num_bytes: 168529715
num_examples: 100
- name: train_7
num_bytes: 163769246
num_examples: 100
- name: train_8
num_bytes: 152866617
num_examples: 100
- name: train_9
num_bytes: 171234967
num_examples: 100
- name: train_10
num_bytes: 155676874
num_examples: 100
- name: train_11
num_bytes: 166546675
num_examples: 100
- name: train_12
num_bytes: 154204346
num_examples: 100
- name: train_13
num_bytes: 161604831
num_examples: 100
- name: train_14
num_bytes: 163285492
num_examples: 100
- name: train_15
num_bytes: 156010091
num_examples: 100
- name: train_16
num_bytes: 155817421
num_examples: 100
- name: train_17
num_bytes: 165098083
num_examples: 100
- name: train_18
num_bytes: 170197491
num_examples: 100
- name: train_19
num_bytes: 155464475
num_examples: 100
- name: train_20
num_bytes: 155351724
num_examples: 100
- name: train_21
num_bytes: 159715260
num_examples: 100
- name: train_22
num_bytes: 158236240
num_examples: 100
- name: train_23
num_bytes: 159682266
num_examples: 100
- name: train_24
num_bytes: 166115920
num_examples: 100
- name: train_25
num_bytes: 157975696
num_examples: 100
- name: train_26
num_bytes: 163387926
num_examples: 100
- name: train_27
num_bytes: 156164315
num_examples: 100
- name: train_28
num_bytes: 163665051
num_examples: 100
- name: train_29
num_bytes: 161448207
num_examples: 100
- name: train_30
num_bytes: 152968507
num_examples: 100
- name: train_31
num_bytes: 158547084
num_examples: 100
- name: train_32
num_bytes: 159756851
num_examples: 100
- name: train_33
num_bytes: 162052446
num_examples: 100
- name: train_34
num_bytes: 169312452
num_examples: 100
- name: train_35
num_bytes: 170415545
num_examples: 100
- name: train_36
num_bytes: 159185426
num_examples: 100
- name: train_37
num_bytes: 155372992
num_examples: 100
- name: train_38
num_bytes: 156961021
num_examples: 100
- name: train_39
num_bytes: 155754650
num_examples: 100
- name: train_40
num_bytes: 164206647
num_examples: 100
- name: train_41
num_bytes: 153346275
num_examples: 100
- name: train_42
num_bytes: 152080502
num_examples: 100
- name: train_43
num_bytes: 158419068
num_examples: 100
- name: train_44
num_bytes: 158057125
num_examples: 100
- name: train_45
num_bytes: 165164816
num_examples: 100
- name: train_46
num_bytes: 157659132
num_examples: 100
- name: train_47
num_bytes: 158897047
num_examples: 100
- name: train_48
num_bytes: 168559462
num_examples: 100
- name: train_49
num_bytes: 167699018
num_examples: 100
- name: train_50
num_bytes: 159117923
num_examples: 100
- name: train_51
num_bytes: 157182317
num_examples: 100
- name: train_52
num_bytes: 159672528
num_examples: 100
- name: train_53
num_bytes: 152821680
num_examples: 100
- name: train_54
num_bytes: 164752542
num_examples: 100
- name: train_55
num_bytes: 165649574
num_examples: 100
- name: train_56
num_bytes: 164706387
num_examples: 100
- name: train_57
num_bytes: 154830453
num_examples: 100
- name: train_58
num_bytes: 161133030
num_examples: 100
- name: train_59
num_bytes: 154735208
num_examples: 100
- name: train_60
num_bytes: 164090726
num_examples: 100
- name: train_61
num_bytes: 156685845
num_examples: 100
- name: train_62
num_bytes: 159936561
num_examples: 100
- name: train_63
num_bytes: 160654183
num_examples: 100
- name: train_64
num_bytes: 161032032
num_examples: 100
- name: train_65
num_bytes: 155268183
num_examples: 100
- name: train_66
num_bytes: 164158067
num_examples: 100
- name: train_67
num_bytes: 168308047
num_examples: 100
- name: train_68
num_bytes: 168014390
num_examples: 100
- name: train_69
num_bytes: 161971102
num_examples: 100
- name: train_70
num_bytes: 156137089
num_examples: 100
- name: train_71
num_bytes: 148956376
num_examples: 100
- name: train_72
num_bytes: 155518828
num_examples: 100
- name: train_73
num_bytes: 166295901
num_examples: 100
- name: train_74
num_bytes: 151141940
num_examples: 100
- name: train_75
num_bytes: 158780014
num_examples: 100
- name: train_76
num_bytes: 158061024
num_examples: 100
- name: train_77
num_bytes: 155858659
num_examples: 100
- name: train_78
num_bytes: 131617110
num_examples: 84
- name: test_0
num_bytes: 152436572
num_examples: 100
- name: test_1
num_bytes: 161351141
num_examples: 100
- name: test_2
num_bytes: 160833508
num_examples: 100
- name: test_3
num_bytes: 154454493
num_examples: 100
- name: test_4
num_bytes: 164697965
num_examples: 100
- name: test_5
num_bytes: 152846641
num_examples: 100
download_size: 13602878681
dataset_size: 13573354921
---
# Dataset Card for TIE_Shorts
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/raianand1991/TIE
- **Paper:** https://arxiv.org/abs/2307.10587
- **Point of Contact:** [[email protected]](mailto:[email protected])
### Dataset Summary
TIE_shorts is a derived version of the [Technical Indian English (TIE)](https://github.com/raianand1991/TIE) dataset, a large-scale speech dataset (~8K hours) originally consisting of approximately 750 GB of content
sourced from the [NPTEL](https://nptel.ac.in/) platform. The original TIE dataset contains around 9.8K technical lectures in English delivered by instructors from various regions across India,
with each lecture averaging about 50 minutes. These lectures cover a wide range of technical subjects and capture diverse linguistic features characteristic of Indian
English.
The TIE_shorts version (~70 hours of audio and 600K ground-truth tokens) was created to facilitate efficient training and usage in speech processing tasks by providing shorter audio samples. In TIE_shorts,
consecutive audio snippets from the original dataset were merged based on timestamps, with a condition that the final merged audio should not exceed 30 seconds in duration.
This process results in 25–30 second audio clips, each accompanied by a corresponding ground-truth transcript. This approach retains the linguistic diversity of the original
dataset while significantly reducing the size and complexity, making TIE_shorts ideal for Automatic Speech Recognition (ASR) and other speech-to-text applications.
Since the dataset consists of approximately 9.8K files spoken by 331 speakers from diverse demographics across the Indian population, it is also well suited for speaker identification and text-to-speech (TTS) training applications.
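The timestamp-based merging described above can be sketched roughly as follows. This is a minimal illustration, not the actual TIE_shorts pipeline; it assumes each source snippet is a dict carrying hypothetical `start`, `end` (in seconds) and `text` fields:
```python
# Minimal sketch of the snippet-merging rule: join consecutive snippets as long as
# the combined clip stays at or under 30 seconds (assumed snippet fields: start, end, text).
MAX_DURATION = 30.0

def merge_snippets(snippets):
    merged, current = [], None
    for snip in snippets:  # snippets assumed sorted by start time
        if current is not None and snip["end"] - current["start"] <= MAX_DURATION:
            current["end"] = snip["end"]
            current["text"] += " " + snip["text"]
        else:
            if current is not None:
                merged.append(current)
            current = dict(snip)
    if current is not None:
        merged.append(current)
    return merged
```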
### Example usage
TIE_shorts ships as a single `default` config whose data is sharded into named splits (`train_0` through `train_78` and `test_0` through `test_5`). A single shard can be loaded with the `datasets` library; replace `<repo_id>` below with the actual Hugging Face Hub ID of this dataset:
```python
from datasets import load_dataset

# "<repo_id>" is a placeholder for this dataset's Hub repository ID.
tie_shard = load_dataset("<repo_id>", split="train_0")
```
**Note that each `train_*` and `test_*` split holds 100 examples, except `train_78`, which holds 84.**
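Because the training data is spread across many `train_*` shards, it is often convenient to join them into a single training set. A minimal sketch, again assuming `<repo_id>` is this dataset's Hub repository ID:
```python
from datasets import load_dataset, concatenate_datasets

# Load all shards, then join the train_* and test_* splits into two datasets.
shards = load_dataset("<repo_id>")  # DatasetDict with train_0 ... train_78 and test_0 ... test_5
train = concatenate_datasets([shards[name] for name in shards if name.startswith("train_")])
test = concatenate_datasets([shards[name] for name in shards if name.startswith("test_")])
print(len(train), len(test))
```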
### Supported Tasks and Leaderboards
* automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER); see the sketch after this list.
* text-to-speech: With per-clip speaker metadata for 331 speakers from diverse demographics, the dataset can also be used for text-to-speech (TTS) synthesis and speaker identification research on Indian-accented English.
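For reference, WER against the provided transcripts can be computed with the third-party `jiwer` package. The snippet below is a minimal, self-contained sketch using toy strings; it is not part of the dataset tooling:
```python
from jiwer import wer

# Toy example: compare a hypothetical ASR output against a ground-truth transcript.
reference = "the gradient descent algorithm updates the weights iteratively"
hypothesis = "the gradient descent algorithm update the weights iteratively"
print(f"WER: {wer(reference, hypothesis):.2%}")  # one substitution over eight reference words -> 12.50%
```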
### Languages
The dataset contains speech and transcripts in English only (`en`), specifically Indian English as spoken by technical lecture instructors from various regions across India.
## Dataset Structure
### Data Instances
Each example is a dictionary following the schema below. Field names and types come from the dataset metadata; the values shown are illustrative placeholders, not a real record:
```python
{
    'audio': {
        'array': [[-0.0143, -0.0106, 0.0011, ...]],  # float32 waveform samples
        'path': '...',
        'sampling_rate': 16000  # illustrative value
    },
    'split': 'train_0',
    'ID': '...',
    'Transcript': '...',
    'Normalised_Transcript': '...',
    'Speech_Duration_seconds': 29.4,
    'Speaker_ID': 42,
    'Gender': '...',
    'Caste': '...',
    'Year_Class': '...',
    'Speech_Class': '...',
    'Discipline_Group': '...',
    'Native_Region': '...',
    'Topic': '...'
}
```
### Data Fields
* `audio` (struct) - a dictionary with the waveform `array` (float32 samples), the original file `path`, and the `sampling_rate`; see the access sketch after this list
* `split` (string) - name of the shard the example belongs to (e.g. `train_0`)
* `ID` (string) - identifier of the audio clip
* `Transcript` (string) - original (orthographic) transcript of the clip
* `Normalised_Transcript` (string) - normalised transcript of the clip
* `Speech_Duration_seconds` (float64) - duration of the clip in seconds
* `Speaker_ID` (int64) - numerical identifier of the speaker
* `Gender` (string) - gender of the speaker
* `Caste` (string) - caste category of the speaker
* `Year_Class` (string) - [More Information Needed]
* `Speech_Class` (string) - [More Information Needed]
* `Discipline_Group` (string) - discipline group of the lecture
* `Native_Region` (string) - native region of the speaker
* `Topic` (string) - topic of the lecture
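Since `audio` is stored as a plain struct rather than a `datasets.Audio` feature, the waveform can be pulled out as a NumPy array directly. A minimal sketch, reusing the hypothetical `tie_shard` from the loading example above and assuming mono audio:
```python
import numpy as np

example = tie_shard[0]
# Flatten the nested float32 list into a 1-D waveform (assumes a single channel).
waveform = np.asarray(example["audio"]["array"], dtype=np.float32).reshape(-1)
sr = example["audio"]["sampling_rate"]
print(waveform.shape, sr, example["Speech_Duration_seconds"])
```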
### Data Splits
The single `default` config is sharded into 79 training splits (`train_0`–`train_78`, 100 examples each except for 84 in `train_78`, 7,884 examples in total) and 6 test splits (`test_0`–`test_5`, 100 examples each, 600 in total).
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
The raw data is sourced from technical lecture recordings published on the [NPTEL](https://nptel.ac.in/) platform, delivered in English by instructors from various regions across India.
#### Initial Data Collection and Normalization
For TIE_shorts, consecutive transcribed audio snippets from the original TIE lectures were merged on the basis of their timestamps, under the constraint that a merged clip must not exceed 30 seconds. This yields 25–30 second clips, each paired with a ground-truth transcript (`Transcript`) and a normalised version (`Normalised_Transcript`). For details of the original collection and transcription pipeline, see the [TIE paper](https://arxiv.org/abs/2307.10587) and the [TIE repository](https://github.com/raianand1991/TIE).
#### Who are the source language producers?
Speakers are instructors delivering NPTEL technical lectures, drawn from various regions across India.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
Per-speaker demographic metadata (`Gender`, `Caste`, `Native_Region`, `Discipline_Group`) is provided and can be used to examine the demographic balance of the dataset. [More Information Needed] on the actual speaker distributions.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is distributed under the Apache 2.0 license.
### Citation Information
If you use this dataset, please cite the TIE paper: [https://arxiv.org/abs/2307.10587](https://arxiv.org/abs/2307.10587).
### Contributions
[More Information Needed]