nianlong committed on
Commit c385977
1 Parent(s): 89140e0

Update README.md

Files changed (1): README.md (+20 -27)

README.md CHANGED
@@ -2,14 +2,14 @@
 license: apache-2.0
 ---
 # Positive Transfer Of The Whisper Speech Transformer To Human And Animal Voice Activity Detection
-We proposed WhisperSeg, utilizing the Whisper Transformer pre-trained for Automatic Speech Recognition (ASR) for both human and animal Voice Activity Detection (VAD). For more details, please refer to our paper
-
-
+We propose **WhisperSeg**, which utilizes the Whisper Transformer, pre-trained for Automatic Speech Recognition (ASR), for both human and animal Voice Activity Detection (VAD). For more details, please refer to our paper:
 > [**Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection**](https://doi.org/10.1101/2023.09.30.560270)
 >
 > Nianlong Gu, Kanghwi Lee, Maris Basha, Sumit Kumar Ram, Guanghao You, Richard H. R. Hahnloser <br>
 > University of Zurich and ETH Zurich
 
+*Accepted to the 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2024)*
+
 
 The model "nccratliri/whisperseg-large-ms" is the checkpoint of the multi-species WhisperSeg-large that was finetuned on the vocal segmentation datasets of five species.
 
@@ -27,19 +27,13 @@ import librosa
 import json
 segmenter = WhisperSegmenter( "nccratliri/whisperseg-large-ms", device="cuda" )
 
-sr = 32000
-min_frequency = 0
-spec_time_step = 0.0025
-min_segment_length = 0.01
-eps = 0.02
-num_trials = 3
+sr = 32000
+spec_time_step = 0.0025
 
-audio, _ = librosa.load( "data/example_subset/Zebra_finch/test_adults/zebra_finch_g17y2U-f00007.wav",
+audio, _ = librosa.load( "data/example_subset/Zebra_finch/test_adults/zebra_finch_g17y2U-f00007.wav",
                          sr = sr )
-
-prediction = segmenter.segment( audio, sr = sr, min_frequency = min_frequency, spec_time_step = spec_time_step,
-                                min_segment_length = min_segment_length, eps = eps, num_trials = num_trials )
-
+## Note: if spec_time_step is not provided, a default value will be used by the model.
+prediction = segmenter.segment( audio, sr = sr, spec_time_step = spec_time_step )
 print(prediction)
 ```
 {'onset': [0.01, 0.38, 0.603, 0.758, 0.912, 1.813, 1.967, 2.073, 2.838, 2.982, 3.112, 3.668, 3.828, 3.953, 5.158, 5.323, 5.467], 'offset': [0.073, 0.447, 0.673, 0.83, 1.483, 1.882, 2.037, 2.643, 2.893, 3.063, 3.283, 3.742, 3.898, 4.523, 5.223, 5.393, 6.043], 'cluster': ['zebra_finch_0', 'zebra_finch_0', 'zebra_finch_0', 'zebra_finch_0', 'zebra_finch_0', 'zebra_finch_0', 'zebra_finch_0', 'zebra_finch_0', 'zebra_finch_0', 'zebra_finch_0', 'zebra_finch_0', 'zebra_finch_0', 'zebra_finch_0', 'zebra_finch_0', 'zebra_finch_0', 'zebra_finch_0', 'zebra_finch_0']}
@@ -54,24 +48,23 @@ spec_viewer.visualize( audio = audio, sr = sr, min_frequency= min_frequency, pre
 ```
 ![vis](https://github.com/nianlonggu/WhisperSeg/blob/master/assets/res_zebra_finch_adults_prediction_only.png?raw=true)
 
-Run it in Google Colab: <a href="https://colab.research.google.com/github/nianlonggu/WhisperSeg/blob/master/docs/WhisperSeg_Voice_Activity_Detection_Demo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
+Run it in Google Colab:
+<a href="https://colab.research.google.com/github/nianlonggu/WhisperSeg/blob/master/docs/WhisperSeg_Voice_Activity_Detection_Demo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
 For more details, please refer to the GitHub repository: https://github.com/nianlonggu/WhisperSeg
 
 ## Citation
 When using our code or models for your work, please cite the following paper:
 ```
-@article{Gu2023.09.30.560270,
-  author = {Nianlong Gu and Kanghwi Lee and Maris Basha and Sumit Kumar Ram and Guanghao You and Richard Hahnloser},
-  title = {Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection},
-  elocation-id = {2023.09.30.560270},
-  year = {2023},
-  doi = {10.1101/2023.09.30.560270},
-  publisher = {Cold Spring Harbor Laboratory},
-  URL = {https://www.biorxiv.org/content/early/2023/10/02/2023.09.30.560270},
-  eprint = {https://www.biorxiv.org/content/early/2023/10/02/2023.09.30.560270.full.pdf},
-  journal = {bioRxiv}
-}
+@INPROCEEDINGS{10447620,
+  author={Gu, Nianlong and Lee, Kanghwi and Basha, Maris and Kumar Ram, Sumit and You, Guanghao and Hahnloser, Richard H. R.},
+  booktitle={ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
+  title={Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection},
+  year={2024},
+  pages={7505-7509},
+  keywords={Voice activity detection;Adaptation models;Animals;Transformers;Acoustics;Human voice;Spectrogram;Voice activity detection;audio segmentation;Transformer;Whisper},
+  doi={10.1109/ICASSP48485.2024.10447620}
+}
 ```
 
 ## Contact
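
The simplified `segmenter.segment` call returns a plain dict of parallel `onset` / `offset` / `cluster` lists, as the example output above shows, so downstream processing needs no extra tooling. A minimal sketch of flattening that output into per-segment rows (the dict literal reproduces the first three segments of the example output; the loop itself is our own illustration, not part of the WhisperSeg API):

```python
# Flatten WhisperSeg's prediction dict into per-segment rows.
# The dict below copies the first three segments of the example output above;
# the loop is illustrative only, not part of the WhisperSeg API.
prediction = {
    "onset":   [0.01, 0.38, 0.603],
    "offset":  [0.073, 0.447, 0.673],
    "cluster": ["zebra_finch_0", "zebra_finch_0", "zebra_finch_0"],
}

for onset, offset, cluster in zip(
    prediction["onset"], prediction["offset"], prediction["cluster"]
):
    # Segment duration (seconds) falls straight out of the onset/offset pair.
    print(f"{cluster}: {onset:.3f}-{offset:.3f} s ({(offset - onset) * 1000:.0f} ms)")
```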
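Because the updated call needs only the audio, `sr`, and optionally `spec_time_step`, segmenting a whole directory reduces to a short loop. A sketch under stated assumptions: the `from model import WhisperSegmenter` import follows the WhisperSeg repository's examples, and the glob pattern and JSON output path are hypothetical additions of ours:

```python
# Hypothetical batch-segmentation loop built on the usage example above.
# Assumptions: the WhisperSegmenter import path matches the WhisperSeg
# repository's examples; the glob pattern and JSON output are our own.
import glob
import json

import librosa
from model import WhisperSegmenter

segmenter = WhisperSegmenter("nccratliri/whisperseg-large-ms", device="cuda")

sr = 32000               # sampling rate used in the usage example
spec_time_step = 0.0025  # optional; omit to let the model choose its default

for wav_path in glob.glob("data/example_subset/Zebra_finch/test_adults/*.wav"):
    audio, _ = librosa.load(wav_path, sr=sr)  # librosa resamples to sr
    prediction = segmenter.segment(audio, sr=sr, spec_time_step=spec_time_step)
    # Store the onset/offset/cluster lists next to the audio file.
    with open(wav_path[:-4] + ".json", "w") as f:
        json.dump(prediction, f, indent=2)
```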