# Frame-VAD Multilingual MarbleNet v2.0

## Description

Frame-VAD Multilingual MarbleNet v2.0 is a convolutional neural network for voice activity detection (VAD) that serves as the first step for speech recognition and speaker diarization. It is a frame-based model that outputs a speech probability for each 20 millisecond frame of the input audio. <br>
To reduce false positive errors (cases where the model incorrectly detects speech when none is present), the model was trained with white noise and real-world noise perturbations, and the volume of the training audio was also varied. The training data additionally includes non-speech audio samples to help the model distinguish between speech and non-speech sounds such as coughing, laughter, and breathing. <br>
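
The 20 ms frame rate fixes the relationship between clip length and output length. A quick sketch (the exact rounding at clip boundaries is an assumption):

```python
# Rough estimate of how many frame-level probabilities to expect for a clip,
# assuming one output per 20 ms frame; boundary rounding is an assumption.
def expected_num_frames(duration_seconds: float, frame_shift_s: float = 0.02) -> int:
    return round(duration_seconds / frame_shift_s)

print(expected_num_frames(3.0))  # roughly 150 frames for a 3-second clip
```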

This model is ready for commercial use. <br>

### License/Terms of Use
GOVERNING TERMS: Your use of this model is governed by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license).

Deployment Geography: Global <br>

Use Case: Developers, speech processing engineers, and AI researchers will use it as the first processing step ahead of other speech models, such as speech recognition and speaker diarization. <br>

## References
[1] Jia, Fei, Somshubra Majumdar, and Boris Ginsburg. "MarbleNet: Deep 1D Time-Channel Separable Convolutional Neural Network for Voice Activity Detection." ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. <br>
[2] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
<br>

## Model Architecture

**Architecture Type:** Convolutional Neural Network (CNN) <br>
**Network Architecture:** MarbleNet <br>

**This model has 91.5K model parameters.** <br>

### Input
**Input Type(s):** Audio <br>
**Input Format:** .wav files <br>
**Input Parameters:** 1D <br>
**Other Properties Related to Input:** 16000 Hz Mono-channel Audio, Pre-Processing Not Needed <br>
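
The model itself needs no additional pre-processing, but the input audio must already be 16 kHz mono WAV. A minimal conversion sketch is shown below; `librosa` and `soundfile` are convenience choices here, not requirements of the model, and the file names are hypothetical.

```python
# Convert an arbitrary audio file to the 16 kHz mono WAV format the model expects.
# librosa/soundfile are assumptions (any resampling tool works); file names are hypothetical.
import librosa
import soundfile as sf

def to_16k_mono_wav(in_path: str, out_path: str) -> None:
    audio, _ = librosa.load(in_path, sr=16000, mono=True)  # decode, downmix, resample
    sf.write(out_path, audio, 16000)

to_16k_mono_wav("meeting.flac", "meeting_16k.wav")
```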

### Output
**Output Type(s):** Sequence of speech probabilities for each 20 millisecond frame <br>
**Output Format:** Float Array <br>
**Output Parameters:** 1D <br>
**Other Properties Related to Output:** The output may need post-processing, such as smoothing (reducing sudden fluctuations in the detected speech probability for more natural transitions) and thresholding (applying a cutoff value to decide whether a frame contains speech, e.g., classifying frames with probability above 0.5 as speech and the rest as silence or noise). <br>
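
As an illustration of the post-processing described above, here is a minimal sketch of moving-average smoothing followed by thresholding; the window size and the 0.5 cutoff are arbitrary choices, and the probability array stands in for real model output.

```python
import numpy as np

def smooth_and_threshold(frame_probs: np.ndarray, window: int = 5, threshold: float = 0.5) -> np.ndarray:
    """Smooth per-frame speech probabilities, then apply a speech/non-speech cutoff.

    frame_probs: 1D array with one probability per 20 ms frame.
    Returns a boolean array that is True where a frame is classified as speech.
    """
    kernel = np.ones(window) / window
    smoothed = np.convolve(frame_probs, kernel, mode="same")  # damp sudden fluctuations
    return smoothed > threshold

probs = np.random.rand(150)               # stand-in for ~3 s of model output
speech_mask = smooth_and_threshold(probs)
```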

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

## How to Use the Model
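
This card does not yet document an inference recipe. The sketch below shows one plausible way to run frame-level inference through the NeMo toolkit; the class name `EncDecFrameClassificationModel`, the pretrained model identifier, the `forward()` signature, and the two-class output layout are assumptions that should be checked against the NeMo documentation.

```python
# Hedged sketch of frame-VAD inference with NeMo. The class name, model id,
# forward() signature, and (non-speech, speech) output layout are assumptions.
import torch
import soundfile as sf
from nemo.collections.asr.models import EncDecFrameClassificationModel  # assumed class

model = EncDecFrameClassificationModel.from_pretrained(
    "nvidia/frame_vad_multilingual_marblenet_v2.0"  # assumed model id
)
model.eval()

audio, sr = sf.read("meeting_16k.wav", dtype="float32")  # hypothetical input file
assert sr == 16000, "the model expects 16 kHz mono audio"

signal = torch.tensor(audio).unsqueeze(0)  # shape [1, num_samples]
length = torch.tensor([signal.shape[1]])

with torch.no_grad():
    logits = model(input_signal=signal, input_signal_length=length)
    # Assumed layout: [batch, frames, 2]; take the probability of the "speech" class.
    speech_probs = torch.softmax(logits, dim=-1)[0, :, 1]

print(speech_probs.shape)  # one probability per 20 ms frame
```

For batch processing, the NeMo repository also ships a frame-VAD inference script under its speech classification examples (the exact path varies by NeMo version) that reads a manifest of audio files and writes per-frame speech probabilities.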

## Software Integration:
**Runtime Engine(s):**
* NeMo-2.0.0 <br>

**Supported Hardware Microarchitecture Compatibility:** <br>
* [NVIDIA Ampere] <br>
* [NVIDIA Blackwell] <br>
* [NVIDIA Jetson] <br>
* [NVIDIA Hopper] <br>
* [NVIDIA Lovelace] <br>
* [NVIDIA Pascal] <br>
* [NVIDIA Turing] <br>
* [NVIDIA Volta] <br>

## Preferred/Supported Operating System(s):
* [Linux] <br>

## Model Version(s):
Frame-VAD Multilingual MarbleNet v2.0 <br>

## Training, Testing, and Evaluation Datasets:

### Training Dataset:
**Link:**
1. [ICSI (en)](https://groups.inf.ed.ac.uk/ami/icsi/download/)
2. [AMI (en)](https://groups.inf.ed.ac.uk/ami/corpus/)
3. [MLS (fr, es)](https://www.openslr.org/94/)
4. [MCV7 (de, ru)](https://commonvoice.mozilla.org/en/datasets)
5. [RULS (ru)](https://www.openslr.org/96/)
6. [SOVA (ru)](https://github.com/sovaai/sova-dataset)
7. [Aishell2 (zh)](https://www.aishelltech.com/)
8. [Librispeech (en)](https://www.openslr.org/12)
9. [Fisher (en)](https://www.ldc.upenn.edu/)
10. [MUSAN (noise)](https://www.openslr.org/17/)
11. [Freesound (noise)](https://freesound.org/)
12. [Vocalsound (noise)](https://github.com/YuanGongND/vocalsound)
13. [ICBHI (noise)](https://bhichallenge.med.auth.gr/ICBHI_2017_Challenge)
14. [Coswara (noise)](https://github.com/iiscleap/Coswara-Data) <br>

Data Collection Method by dataset: <br>
* Hybrid: Human, Annotated, Synthetic <br>

Labeling Method by dataset: <br>
* Hybrid: Human, Annotated, Synthetic <br>

**Properties:**
2600 hours of real-world data, 1000 hours of synthetic data, and 330 hours of noise data
<br>

### Testing Dataset:

**Link:**
1. [Freesound (noise)](https://freesound.org/)
2. [MUSAN (noise)](https://www.openslr.org/17/)
3. [Librispeech (en)](https://www.openslr.org/12)
4. [Fisher (en)](https://www.ldc.upenn.edu/)
5. [MLS (fr, es)](https://www.openslr.org/94/)
6. [MCV7 (de, ru)](https://commonvoice.mozilla.org/en/datasets)
7. [AMI (en)](https://groups.inf.ed.ac.uk/ami/corpus/)
8. [Aishell2 (zh)](https://www.aishelltech.com/)
9. [CH109 (en)](https://catalog.ldc.upenn.edu/LDC97S42) <br>

Data Collection Method by dataset: <br>
* Hybrid: Human, Annotated <br>

Labeling Method by dataset: <br>
* Hybrid: Human, Annotated <br>

**Properties:**
Around 100 hours of multilingual (Chinese, German, Russian, English, Spanish) audio data <br>

### Evaluation Dataset:

**Link:**
1. [VoxConverse-test](https://github.com/joonson/voxconverse/tree/master)
2. [VoxConverse-dev](https://github.com/joonson/voxconverse/tree/master)
3. [AMI-test](https://github.com/BUTSpeechFIT/AMI-diarization-setup/tree/main/only_words/rttms)
4. [Earnings21](https://github.com/revdotcom/speech-datasets/tree/main/earnings21)
5. [AISHELL4-test](https://www.openslr.org/111/)
6. [CH109](https://catalog.ldc.upenn.edu/LDC97S42)
7. [AVA-SPEECH](https://github.com/rafaelgreca/ava-speech-downloader) <br>

Data Collection Method by dataset: <br>
* Hybrid: Human, Annotated <br>

Labeling Method by dataset: <br>
* Hybrid: Human, Annotated <br>

**Properties:**
Around 182 hours of multilingual (Chinese, English) audio data <br>

## Inference:
**Engine:** NVIDIA NeMo <br>
**Test Hardware:** <br>
* RTX 5000 <br>
* A100 <br>
* V100 <br>

## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](https://gitlab-master.nvidia.com/dkaramyan/nv-modelcard/-/blob/main/frame_vad_multilingual_v2/nv-modelcard++/explainability-example.md?ref_type=heads), [Bias](https://gitlab-master.nvidia.com/dkaramyan/nv-modelcard/-/blob/main/frame_vad_multilingual_v2/nv-modelcard++/bias-example.md?ref_type=heads), [Safety & Security](https://gitlab-master.nvidia.com/dkaramyan/nv-modelcard/-/blob/main/frame_vad_multilingual_v2/nv-modelcard++/safety-example.md?ref_type=heads), and [Privacy](https://gitlab-master.nvidia.com/dkaramyan/nv-modelcard/-/blob/main/frame_vad_multilingual_v2/nv-modelcard++/privacy-example.md?ref_type=heads) Subcards [here](https://gitlab-master.nvidia.com/dkaramyan/nv-modelcard/-/tree/main/frame_vad_multilingual_v2/nv-modelcard++?ref_type=heads). Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).