---
language: "en"
thumbnail:
tags:
- embeddings
- Speaker
- Verification
- Identification
- pytorch
- xvectors
- TDNN
license: "apache-2.0"
datasets:
- voxceleb
metrics:
- EER
- minDCF
---

# Speaker Verification with xvector embeddings on Voxceleb

This repository provides all the necessary tools to extract speaker embeddings with a pretrained TDNN model using SpeechBrain. 
The system is trained on the Voxceleb1 + Voxceleb2 training data. 

For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The model's performance on the Voxceleb1 test set is:

| Release | EER(%) | minDCF | 
|:-------------:|:--------------:|:--------------:|
| 05-03-21 | -.- | -.- | 
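
EER is the operating point at which the false-acceptance and false-rejection rates are equal, while minDCF is a cost-weighted detection metric. As a rough illustration only (the score arrays below are made up and are not outputs of this model), EER can be estimated from same-speaker and different-speaker trial scores like this:

```python
import numpy as np

def compute_eer(genuine_scores, impostor_scores):
    """Estimate the Equal Error Rate from verification scores.

    genuine_scores:  similarity scores for same-speaker trials
    impostor_scores: similarity scores for different-speaker trials
    """
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    eer, best_gap = 1.0, np.inf
    for t in thresholds:
        far = np.mean(impostor_scores >= t)  # false acceptance rate
        frr = np.mean(genuine_scores < t)    # false rejection rate
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Hypothetical scores, only to show the call:
genuine = np.array([0.82, 0.91, 0.75, 0.88])
impostor = np.array([0.12, 0.40, 0.33, 0.25])
print(f"EER = {compute_eer(genuine, impostor):.3f}")
```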


## Pipeline description
This system is composed of a TDNN model coupled with statistical pooling and is trained with Categorical Cross-Entropy Loss.
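
As a minimal sketch of this idea (not the actual SpeechBrain implementation; layer sizes and dilations below are illustrative assumptions), a TDNN with statistics pooling can be written in PyTorch as follows:

```python
import torch
import torch.nn as nn

class TinyXVector(nn.Module):
    """Illustrative TDNN + statistics pooling block (not the SpeechBrain model)."""

    def __init__(self, feat_dim=24, channels=64, embed_dim=128, n_speakers=100):
        super().__init__()
        # Frame-level TDNN layers = 1-D convolutions over time with increasing dilation
        self.tdnn = nn.Sequential(
            nn.Conv1d(feat_dim, channels, kernel_size=5, dilation=1), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, dilation=2), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, dilation=3), nn.ReLU(),
        )
        # Segment-level layers applied after pooling the frame-level statistics
        self.embedding = nn.Linear(2 * channels, embed_dim)
        self.classifier = nn.Linear(embed_dim, n_speakers)

    def forward(self, feats):                         # feats: (batch, time, feat_dim)
        x = self.tdnn(feats.transpose(1, 2))          # (batch, channels, time')
        # Statistics pooling: concatenate mean and std over the time axis
        stats = torch.cat([x.mean(dim=2), x.std(dim=2)], dim=1)
        emb = self.embedding(stats)                   # fixed-size speaker embedding
        return emb, self.classifier(emb)              # logits for cross-entropy training

# Example: 2 utterances, 300 frames of 24-dim features
emb, logits = TinyXVector()(torch.randn(2, 300, 24))
print(emb.shape, logits.shape)  # torch.Size([2, 128]) torch.Size([2, 100])
```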

## Install SpeechBrain

First of all, please install SpeechBrain with the following command:

```
pip install speechbrain
```

Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).

### Compute your speaker embeddings

```python
import torchaudio
from speechbrain.pretrained import SpeakerRecognition

# Load the pretrained x-vector speaker embedding model
verification = SpeakerRecognition.from_hparams(source="speechbrain/spkrec-xvect-voxceleb")

# Load an audio file and extract its speaker embedding
signal, fs = torchaudio.load('samples/audio_samples/example1.wav')
embeddings = verification.encode_batch(signal)
```
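
For a quick verification check, the embeddings of two recordings can be compared with cosine similarity. This is only a sketch built on the `encode_batch` call above: the second file path is a placeholder, and the 0.25 threshold is an arbitrary illustrative value, not a calibrated decision threshold.

```python
import torch
import torchaudio
from speechbrain.pretrained import SpeakerRecognition

verification = SpeakerRecognition.from_hparams(source="speechbrain/spkrec-xvect-voxceleb")

def embed(path):
    # Load a waveform and return its fixed-size speaker embedding
    signal, fs = torchaudio.load(path)
    return verification.encode_batch(signal).squeeze()

# Placeholder paths: replace with your own recordings
emb1 = embed('samples/audio_samples/example1.wav')
emb2 = embed('samples/audio_samples/example2.wav')

score = torch.nn.functional.cosine_similarity(emb1, emb2, dim=0)
print(f"cosine similarity: {score.item():.3f}")
same_speaker = score > 0.25  # arbitrary threshold, for illustration only
```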

#### Referencing xvectors
```
@inproceedings{DBLP:conf/odyssey/SnyderGMSPK18,
  author    = {David Snyder and
               Daniel Garcia{-}Romero and
               Alan McCree and
               Gregory Sell and
               Daniel Povey and
               Sanjeev Khudanpur},
  title     = {Spoken Language Recognition using X-vectors},
  booktitle = {Odyssey 2018},
  pages     = {105--111},
  year      = {2018},
}
```


#### Referencing SpeechBrain

```
@misc{SB2021,
    author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
    title = {SpeechBrain},
    year = {2021},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/speechbrain/speechbrain}},
  }
```