crystal-technologies committed
Commit c1bb68d · 1 Parent(s): 6e73cd3

Delete SoundScribe

This view is limited to 50 files because the commit contains too many changes. See the raw diff for the complete change set.
Files changed (50)
  1. SoundScribe/SpeakerID/CITATION.cff +0 -41
  2. SoundScribe/SpeakerID/CONTRIBUTING.md +0 -79
  3. SoundScribe/SpeakerID/Dockerfile +0 -140
  4. SoundScribe/SpeakerID/Jenkinsfile +0 -0
  5. SoundScribe/SpeakerID/LICENSE +0 -201
  6. SoundScribe/SpeakerID/README.rst +0 -387
  7. SoundScribe/SpeakerID/ci.groovy +0 -119
  8. SoundScribe/SpeakerID/docs/Makefile +0 -216
  9. SoundScribe/SpeakerID/docs/source/_static/css/custom.css +0 -366
  10. SoundScribe/SpeakerID/docs/source/_static/js/pk_scripts.js +0 -19
  11. SoundScribe/SpeakerID/docs/source/_templates/layout.html +0 -14
  12. SoundScribe/SpeakerID/docs/source/asr/api.rst +0 -322
  13. SoundScribe/SpeakerID/docs/source/asr/asr_all.bib +0 -1043
  14. SoundScribe/SpeakerID/docs/source/asr/asr_language_modeling.rst +0 -548
  15. SoundScribe/SpeakerID/docs/source/asr/configs.rst +0 -1110
  16. SoundScribe/SpeakerID/docs/source/asr/data/asrlm_results.csv +0 -2
  17. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_by.csv +0 -2
  18. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_ca.csv +0 -4
  19. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_code_switching.csv +0 -3
  20. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_de.csv +0 -7
  21. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_en.csv +0 -41
  22. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_es.csv +0 -8
  23. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_fr.csv +0 -9
  24. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_hi.csv +0 -2
  25. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_hr.csv +0 -4
  26. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_it.csv +0 -3
  27. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_kab.csv +0 -2
  28. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_mr.csv +0 -3
  29. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_multilingual.csv +0 -5
  30. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_pl.csv +0 -3
  31. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_ru.csv +0 -4
  32. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_rw.csv +0 -3
  33. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_ua.csv +0 -2
  34. SoundScribe/SpeakerID/docs/source/asr/data/benchmark_zh.csv +0 -4
  35. SoundScribe/SpeakerID/docs/source/asr/data/scores/be/conformer_be.csv +0 -3
  36. SoundScribe/SpeakerID/docs/source/asr/data/scores/by/fastconformer_by.csv +0 -2
  37. SoundScribe/SpeakerID/docs/source/asr/data/scores/ca/conformer_ca.csv +0 -3
  38. SoundScribe/SpeakerID/docs/source/asr/data/scores/ca/quartznet15x5_ca.csv +0 -2
  39. SoundScribe/SpeakerID/docs/source/asr/data/scores/de/citrinet_de.csv +0 -2
  40. SoundScribe/SpeakerID/docs/source/asr/data/scores/de/conformer_de.csv +0 -3
  41. SoundScribe/SpeakerID/docs/source/asr/data/scores/de/contextnet_de.csv +0 -2
  42. SoundScribe/SpeakerID/docs/source/asr/data/scores/de/fastconformer_de.csv +0 -2
  43. SoundScribe/SpeakerID/docs/source/asr/data/scores/de/quartznet15x5_de.csv +0 -2
  44. SoundScribe/SpeakerID/docs/source/asr/data/scores/en/citrinet_en.csv +0 -7
  45. SoundScribe/SpeakerID/docs/source/asr/data/scores/en/conformer_en.csv +0 -28
  46. SoundScribe/SpeakerID/docs/source/asr/data/scores/en/contextnet_en.csv +0 -7
  47. SoundScribe/SpeakerID/docs/source/asr/data/scores/en/fastconformer_en.csv +0 -4
  48. SoundScribe/SpeakerID/docs/source/asr/data/scores/en/jasper10x5dr_en.csv +0 -2
  49. SoundScribe/SpeakerID/docs/source/asr/data/scores/en/quartznet15x5_en.csv +0 -2
  50. SoundScribe/SpeakerID/docs/source/asr/data/scores/en/squeezeformer_en.csv +0 -7
SoundScribe/SpeakerID/CITATION.cff DELETED
@@ -1,41 +0,0 @@
1
- cff-version: 1.2.0
2
- message: "If you use this software, please cite it as below."
3
- title: "NeMo: a toolkit for Conversational AI and Large Language Models"
4
- url: https://nvidia.github.io/NeMo/
5
- repository-code: https://github.com/NVIDIA/NeMo
6
- authors:
7
- - family-names: Harper
8
- given-names: Eric
9
- - family-names: Majumdar
10
- given-names: Somshubra
11
- - family-names: Kuchaiev
12
- given-names: Oleksii
13
- - family-names: Jason
14
- given-names: Li
15
- - family-names: Zhang
16
- given-names: Yang
17
- - family-names: Bakhturina
18
- given-names: Evelina
19
- - family-names: Noroozi
20
- given-names: Vahid
21
- - family-names: Subramanian
22
- given-names: Sandeep
23
- - family-names: Nithin
24
- given-names: Koluguri
25
- - family-names: Jocelyn
26
- given-names: Huang
27
- - family-names: Jia
28
- given-names: Fei
29
- - family-names: Balam
30
- given-names: Jagadeesh
31
- - family-names: Yang
32
- given-names: Xuesong
33
- - family-names: Livne
34
- given-names: Micha
35
- - family-names: Dong
36
- given-names: Yi
37
- - family-names: Naren
38
- given-names: Sean
39
- - family-names: Ginsburg
40
- given-names: Boris
41
-
 
SoundScribe/SpeakerID/CONTRIBUTING.md DELETED
@@ -1,79 +0,0 @@
1
- # Contributions are welcome!
2
-
3
- We do all of NeMo's development in the open. Contributions from the NeMo community are welcome.
4
-
5
-
6
- # Pull Requests (PR) Guidelines
7
-
8
- **Send your PRs to the `main` branch**
9
-
10
- 1) Make sure your PR does one thing. Have a clear answer to "What does this PR do?".
11
- 2) Read General Principles and style guide below
12
- 3) Make sure you sign your commits, e.g. use ``git commit -s`` when you commit
13
- 4) Make sure all unit tests finish successfully before sending the PR: run ``pytest`` (or ``pytest --cpu`` if your dev box does not have a GPU) from NeMo's root folder
14
- 5) Send your PR and request a review
15
-
16
- ## Unit tests
17
- Quick tests (locally, while developing)
18
- ```
19
- pytest
20
- # If you don't have NVIDIA GPU do:
21
- # pytest --cpu
22
- ```
23
- Full tests, including pre-trained model downloads
24
- ```
25
- pytest --with_downloads
26
- ```
27
-
28
- ## Whom should you ask for review:
29
- 1. For changes to NeMo's core: @ericharper, @titu1994, @blisc, or @okuchaiev
30
- 1. For changes to NeMo's ASR collection: @titu1994, @redoctopus, @jbalam-nv, or @okuchaiev
31
- 1. For changes to NeMo's NLP collection: @MaximumEntropy, @ericharper, @ekmb, @yzhang123, @VahidooX, @vladgets, or @okuchaiev
32
- 1. For changes to NeMo's TTS collection: @blisc, or @okuchaiev
33
-
34
- Note that some people may self-assign to review your PR - in which case, please wait for them to add a review.
35
-
36
- Your pull requests must pass all checks and peer-review before they can be merged.
37
-
38
- # General principles
39
- 1. **User-oriented**: make it easy for end users, even at the cost of writing more code in the background
40
- 1. **Robust**: make it hard for users to make mistakes.
41
- 1. **Well-tested**: please add simple, fast unittests. Consider adding CI tests for end-to-end functionality.
42
- 1. **Reusable**: for every piece of code, think about how it can be reused in the future and make it easy to be reused.
43
- 1. **Readable**: code should be easy to read.
44
- 1. **Legal**: if you copy even one line of code from the Internet, make sure its license is compatible with NeMo's license. Give credit and link back to the code.
45
- 1. **Sensible**: code should make sense. If you think a piece of code might be confusing, write comments.
46
-
47
- ## Class naming conventions
48
- * No “I”, “Interface”, “NM”, or “NeMo” prefixes/postfixes anywhere
49
- * Core interfaces have simple names: Typing, Cloud, Serialization, FileIO*
50
- * Core classes have the simplest names ever: NeuralModule, Model, Graph, Dataset, Loss, Module*
51
- * Abstract classes in the Model hierarchy have Model postfix
52
- * A config class for MyModel should be called MyModelConfig
53
- * Leaf Neural Module classes have simple names without any postfixes (e.g. AudioPreprocess)
54
- * Leaf Datasets have Dataset postfix (e.g. AudioToSpeechLabelDataset)
55
- * Leaf Losses have Loss postfix (e.g. CTCLoss)
56
- * Leaf Models do not have any postfix, just name (e.g. QuartzNet)
57
-
58
- ## Python style
59
- We use ``black`` as our style guide. To check whether your code will pass the style check, run (from NeMo's repo folder):
60
- ``python setup.py style`` and if it does not pass run ``python setup.py style --fix``.
61
-
62
- 1. Include docstrings for every class and method exposed to the user.
63
- 1. Use Python 3 type hints for every class and method exposed to the user.
64
- 1. Avoid wild import: ``from X import *`` unless in ``X.py``, ``__all__`` is defined.
65
- 1. Minimize the use of ``**kwargs``.
66
- 1. Raising errors is preferred to ``assert``. Write ``if not X: raise Error`` instead of ``assert X``.
67
- 1. Classes are preferred to standalone methods.
68
- 1. Methods should be atomic. A method shouldn't be longer than 75 lines, i.e. it should fit on the screen without scrolling.
69
- 1. If a method's arguments don't fit on one line, put each argument on its own line for readability.
70
- 1. Add ``__init__.py`` for every folder.
71
- 1. F-strings are preferred to format strings.
72
- 1. Loggers are preferred to ``print``. In NeMo, you can get a logger with ``from nemo.utils import logging``.
73
- 1. Private functions (functions starting with ``_``) shouldn't be called outside their host file.
74
- 1. If a comment spans multiple lines, use ``'''`` instead of ``#``.
75
-
76
- # Collections
77
- A collection is a logical grouping of related Neural Modules that share a domain area or semantics.
78
- When contributing a module to a collection, please make sure it belongs to that category.
79
- If you would like to start a new collection and contribute it back to the platform, you are very welcome to do so.
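The deleted CONTRIBUTING.md above states the naming and style conventions only in prose, so here is a minimal, hypothetical Python sketch of how several of them combine. ``MyModel``/``MyModelConfig`` come from the examples in the text; treating the config as a dataclass, the specific fields, and the helper method are assumptions made here for illustration. It assumes ``nemo_toolkit`` is installed so that ``from nemo.utils import logging`` resolves.

```python
# Hypothetical illustration of the conventions above -- not part of the deleted files.
from dataclasses import dataclass

from nemo.utils import logging  # loggers are preferred to print


@dataclass
class MyModelConfig:
    """Config class for MyModel, named <ModelName>Config per the convention (dataclass assumed)."""

    hidden_size: int = 256
    num_layers: int = 4


class MyModel:
    """Leaf model class: plain name, no pre/postfixes."""

    def __init__(self, cfg: MyModelConfig):
        if cfg.num_layers <= 0:
            # raising an error is preferred to ``assert``
            raise ValueError(f"num_layers must be positive, got {cfg.num_layers}")
        self._cfg = cfg
        logging.info(f"Created MyModel with {cfg.num_layers} layers")  # f-strings preferred

    def _build_layers(self) -> list:
        # private helper (leading underscore): not meant to be called outside this file
        return [object() for _ in range(self._cfg.num_layers)]
```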
 
SoundScribe/SpeakerID/Dockerfile DELETED
@@ -1,140 +0,0 @@
1
- # syntax=docker/dockerfile:experimental
2
-
3
- # Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
4
- #
5
- # Licensed under the Apache License, Version 2.0 (the "License");
6
- # you may not use this file except in compliance with the License.
7
- # You may obtain a copy of the License at
8
- #
9
- # http://www.apache.org/licenses/LICENSE-2.0
10
- #
11
- # Unless required by applicable law or agreed to in writing, software
12
- # distributed under the License is distributed on an "AS IS" BASIS,
13
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
- # See the License for the specific language governing permissions and
15
- # limitations under the License.
16
-
17
- ARG BASE_IMAGE=nvcr.io/nvidia/pytorch:23.08-py3
18
-
19
- # build an image that includes only the nemo dependencies, ensures that dependencies
20
- # are included first for optimal caching, and useful for building a development
21
- # image (by specifying build target as `nemo-deps`)
22
- FROM ${BASE_IMAGE} as nemo-deps
23
-
24
- # dependency flags; should be declared after FROM
25
- # torchaudio: not required by default
26
- ARG REQUIRE_TORCHAUDIO=false
27
- # k2: not required by default
28
- ARG REQUIRE_K2=false
29
- # ais cli: not required by default, install only if required
30
- ARG REQUIRE_AIS_CLI=false
31
-
32
- # Ensure apt-get won't prompt for selecting options
33
- ENV DEBIAN_FRONTEND=noninteractive
34
- # libavdevice-dev required for latest torchaudio
35
- RUN apt-get update && \
36
- apt-get upgrade -y && \
37
- apt-get install -y \
38
- libsndfile1 sox \
39
- libfreetype6 \
40
- swig \
41
- ffmpeg \
42
- libavdevice-dev && \
43
- rm -rf /var/lib/apt/lists/*
44
-
45
- WORKDIR /workspace/
46
- # install megatron core, this can be removed once 0.3 pip package is released
47
- RUN git clone https://github.com/NVIDIA/Megatron-LM.git && \
48
- cd Megatron-LM && \
49
- git checkout ab0336a5c8eab77aa74ae604ba1e73decbf6d560 && \
50
- pip install -e .
51
-
52
- WORKDIR /tmp/
53
-
54
- # Distributed Adam support for multiple dtypes
55
- RUN git clone https://github.com/NVIDIA/apex.git && \
56
- cd apex && \
57
- git checkout 52e18c894223800cb611682dce27d88050edf1de && \
58
- pip3 install -v --no-build-isolation --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
59
-
60
- # uninstall stuff from base container
61
- RUN pip3 uninstall -y sacrebleu torchtext
62
-
63
- # build torchaudio
64
- WORKDIR /tmp/torchaudio_build
65
- COPY scripts/installers /tmp/torchaudio_build/scripts/installers/
66
- RUN INSTALL_MSG=$(/bin/bash /tmp/torchaudio_build/scripts/installers/install_torchaudio_latest.sh); INSTALL_CODE=$?; \
67
- echo ${INSTALL_MSG}; \
68
- if [ ${INSTALL_CODE} -ne 0 ]; then \
69
- echo "torchaudio installation failed"; \
70
- if [ "${REQUIRE_TORCHAUDIO}" = true ]; then \
71
- exit ${INSTALL_CODE}; \
72
- else echo "Skipping failed torchaudio installation"; fi \
73
- else echo "torchaudio installed successfully"; fi
74
-
75
- # install nemo dependencies
76
- WORKDIR /tmp/nemo
77
- COPY requirements .
78
- RUN for f in $(ls requirements*.txt); do pip3 install --disable-pip-version-check --no-cache-dir -r $f; done
79
-
80
- # install flash attention dependencies
81
- RUN pip install flash-attn
82
- # pinned triton version for flash-attention https://github.com/HazyResearch/flash-attention/blob/main/flash_attn/flash_attn_triton.py#L3
83
- RUN pip install triton==2.0.0.dev20221202
84
- # install numba for latest containers
85
- RUN pip install numba>=0.57.1
86
-
87
- # install k2, skip if installation fails
88
- COPY scripts /tmp/nemo/scripts/
89
- RUN INSTALL_MSG=$(/bin/bash /tmp/nemo/scripts/speech_recognition/k2/setup.sh); INSTALL_CODE=$?; \
90
- echo ${INSTALL_MSG}; \
91
- if [ ${INSTALL_CODE} -ne 0 ]; then \
92
- echo "k2 installation failed"; \
93
- if [ "${REQUIRE_K2}" = true ]; then \
94
- exit ${INSTALL_CODE}; \
95
- else echo "Skipping failed k2 installation"; fi \
96
- else echo "k2 installed successfully"; fi
97
-
98
- # copy nemo source into a scratch image
99
- FROM scratch as nemo-src
100
- COPY . .
101
-
102
- # start building the final container
103
- FROM nemo-deps as nemo
104
- ARG NEMO_VERSION=1.21.0
105
-
106
- # Check that NEMO_VERSION is set. Build will fail without this. Expose NEMO and base container
107
- # version information as runtime environment variable for introspection purposes
108
- RUN /usr/bin/test -n "$NEMO_VERSION" && \
109
- /bin/echo "export NEMO_VERSION=${NEMO_VERSION}" >> /root/.bashrc && \
110
- /bin/echo "export BASE_IMAGE=${BASE_IMAGE}" >> /root/.bashrc
111
-
112
- # Install NeMo
113
- RUN --mount=from=nemo-src,target=/tmp/nemo,rw cd /tmp/nemo && pip install ".[all]"
114
-
115
- # Check install
116
- RUN python -c "import nemo.collections.nlp as nemo_nlp" && \
117
- python -c "import nemo.collections.tts as nemo_tts" && \
118
- python -c "import nemo_text_processing.text_normalization as text_normalization"
119
-
120
-
121
- # copy scripts/examples/tests into container for end user
122
- WORKDIR /workspace/nemo
123
- COPY scripts /workspace/nemo/scripts
124
- COPY examples /workspace/nemo/examples
125
- COPY tests /workspace/nemo/tests
126
- COPY tutorials /workspace/nemo/tutorials
127
- # COPY README.rst LICENSE /workspace/nemo/
128
-
129
- RUN printf "#!/bin/bash\njupyter lab --no-browser --allow-root --ip=0.0.0.0" >> start-jupyter.sh && \
130
- chmod +x start-jupyter.sh
131
-
132
- # If required, install AIS CLI
133
- RUN if [ "${REQUIRE_AIS_CLI}" = true ]; then \
134
- INSTALL_MSG=$(/bin/bash scripts/installers/install_ais_cli_latest.sh); INSTALL_CODE=$?; \
135
- echo ${INSTALL_MSG}; \
136
- if [ ${INSTALL_CODE} -ne 0 ]; then \
137
- echo "AIS CLI installation failed"; \
138
- exit ${INSTALL_CODE}; \
139
- else echo "AIS CLI installed successfully"; fi \
140
- else echo "Skipping AIS CLI installation"; fi
 
SoundScribe/SpeakerID/Jenkinsfile DELETED
The diff for this file is too large to render. See the raw diff.
 
SoundScribe/SpeakerID/LICENSE DELETED
@@ -1,201 +0,0 @@
1
- Apache License
2
- Version 2.0, January 2004
3
- http://www.apache.org/licenses/
4
-
5
- TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6
-
7
- 1. Definitions.
8
-
9
- "License" shall mean the terms and conditions for use, reproduction,
10
- and distribution as defined by Sections 1 through 9 of this document.
11
-
12
- "Licensor" shall mean the copyright owner or entity authorized by
13
- the copyright owner that is granting the License.
14
-
15
- "Legal Entity" shall mean the union of the acting entity and all
16
- other entities that control, are controlled by, or are under common
17
- control with that entity. For the purposes of this definition,
18
- "control" means (i) the power, direct or indirect, to cause the
19
- direction or management of such entity, whether by contract or
20
- otherwise, or (ii) ownership of fifty percent (50%) or more of the
21
- outstanding shares, or (iii) beneficial ownership of such entity.
22
-
23
- "You" (or "Your") shall mean an individual or Legal Entity
24
- exercising permissions granted by this License.
25
-
26
- "Source" form shall mean the preferred form for making modifications,
27
- including but not limited to software source code, documentation
28
- source, and configuration files.
29
-
30
- "Object" form shall mean any form resulting from mechanical
31
- transformation or translation of a Source form, including but
32
- not limited to compiled object code, generated documentation,
33
- and conversions to other media types.
34
-
35
- "Work" shall mean the work of authorship, whether in Source or
36
- Object form, made available under the License, as indicated by a
37
- copyright notice that is included in or attached to the work
38
- (an example is provided in the Appendix below).
39
-
40
- "Derivative Works" shall mean any work, whether in Source or Object
41
- form, that is based on (or derived from) the Work and for which the
42
- editorial revisions, annotations, elaborations, or other modifications
43
- represent, as a whole, an original work of authorship. For the purposes
44
- of this License, Derivative Works shall not include works that remain
45
- separable from, or merely link (or bind by name) to the interfaces of,
46
- the Work and Derivative Works thereof.
47
-
48
- "Contribution" shall mean any work of authorship, including
49
- the original version of the Work and any modifications or additions
50
- to that Work or Derivative Works thereof, that is intentionally
51
- submitted to Licensor for inclusion in the Work by the copyright owner
52
- or by an individual or Legal Entity authorized to submit on behalf of
53
- the copyright owner. For the purposes of this definition, "submitted"
54
- means any form of electronic, verbal, or written communication sent
55
- to the Licensor or its representatives, including but not limited to
56
- communication on electronic mailing lists, source code control systems,
57
- and issue tracking systems that are managed by, or on behalf of, the
58
- Licensor for the purpose of discussing and improving the Work, but
59
- excluding communication that is conspicuously marked or otherwise
60
- designated in writing by the copyright owner as "Not a Contribution."
61
-
62
- "Contributor" shall mean Licensor and any individual or Legal Entity
63
- on behalf of whom a Contribution has been received by Licensor and
64
- subsequently incorporated within the Work.
65
-
66
- 2. Grant of Copyright License. Subject to the terms and conditions of
67
- this License, each Contributor hereby grants to You a perpetual,
68
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69
- copyright license to reproduce, prepare Derivative Works of,
70
- publicly display, publicly perform, sublicense, and distribute the
71
- Work and such Derivative Works in Source or Object form.
72
-
73
- 3. Grant of Patent License. Subject to the terms and conditions of
74
- this License, each Contributor hereby grants to You a perpetual,
75
- worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76
- (except as stated in this section) patent license to make, have made,
77
- use, offer to sell, sell, import, and otherwise transfer the Work,
78
- where such license applies only to those patent claims licensable
79
- by such Contributor that are necessarily infringed by their
80
- Contribution(s) alone or by combination of their Contribution(s)
81
- with the Work to which such Contribution(s) was submitted. If You
82
- institute patent litigation against any entity (including a
83
- cross-claim or counterclaim in a lawsuit) alleging that the Work
84
- or a Contribution incorporated within the Work constitutes direct
85
- or contributory patent infringement, then any patent licenses
86
- granted to You under this License for that Work shall terminate
87
- as of the date such litigation is filed.
88
-
89
- 4. Redistribution. You may reproduce and distribute copies of the
90
- Work or Derivative Works thereof in any medium, with or without
91
- modifications, and in Source or Object form, provided that You
92
- meet the following conditions:
93
-
94
- (a) You must give any other recipients of the Work or
95
- Derivative Works a copy of this License; and
96
-
97
- (b) You must cause any modified files to carry prominent notices
98
- stating that You changed the files; and
99
-
100
- (c) You must retain, in the Source form of any Derivative Works
101
- that You distribute, all copyright, patent, trademark, and
102
- attribution notices from the Source form of the Work,
103
- excluding those notices that do not pertain to any part of
104
- the Derivative Works; and
105
-
106
- (d) If the Work includes a "NOTICE" text file as part of its
107
- distribution, then any Derivative Works that You distribute must
108
- include a readable copy of the attribution notices contained
109
- within such NOTICE file, excluding those notices that do not
110
- pertain to any part of the Derivative Works, in at least one
111
- of the following places: within a NOTICE text file distributed
112
- as part of the Derivative Works; within the Source form or
113
- documentation, if provided along with the Derivative Works; or,
114
- within a display generated by the Derivative Works, if and
115
- wherever such third-party notices normally appear. The contents
116
- of the NOTICE file are for informational purposes only and
117
- do not modify the License. You may add Your own attribution
118
- notices within Derivative Works that You distribute, alongside
119
- or as an addendum to the NOTICE text from the Work, provided
120
- that such additional attribution notices cannot be construed
121
- as modifying the License.
122
-
123
- You may add Your own copyright statement to Your modifications and
124
- may provide additional or different license terms and conditions
125
- for use, reproduction, or distribution of Your modifications, or
126
- for any such Derivative Works as a whole, provided Your use,
127
- reproduction, and distribution of the Work otherwise complies with
128
- the conditions stated in this License.
129
-
130
- 5. Submission of Contributions. Unless You explicitly state otherwise,
131
- any Contribution intentionally submitted for inclusion in the Work
132
- by You to the Licensor shall be under the terms and conditions of
133
- this License, without any additional terms or conditions.
134
- Notwithstanding the above, nothing herein shall supersede or modify
135
- the terms of any separate license agreement you may have executed
136
- with Licensor regarding such Contributions.
137
-
138
- 6. Trademarks. This License does not grant permission to use the trade
139
- names, trademarks, service marks, or product names of the Licensor,
140
- except as required for reasonable and customary use in describing the
141
- origin of the Work and reproducing the content of the NOTICE file.
142
-
143
- 7. Disclaimer of Warranty. Unless required by applicable law or
144
- agreed to in writing, Licensor provides the Work (and each
145
- Contributor provides its Contributions) on an "AS IS" BASIS,
146
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147
- implied, including, without limitation, any warranties or conditions
148
- of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149
- PARTICULAR PURPOSE. You are solely responsible for determining the
150
- appropriateness of using or redistributing the Work and assume any
151
- risks associated with Your exercise of permissions under this License.
152
-
153
- 8. Limitation of Liability. In no event and under no legal theory,
154
- whether in tort (including negligence), contract, or otherwise,
155
- unless required by applicable law (such as deliberate and grossly
156
- negligent acts) or agreed to in writing, shall any Contributor be
157
- liable to You for damages, including any direct, indirect, special,
158
- incidental, or consequential damages of any character arising as a
159
- result of this License or out of the use or inability to use the
160
- Work (including but not limited to damages for loss of goodwill,
161
- work stoppage, computer failure or malfunction, or any and all
162
- other commercial damages or losses), even if such Contributor
163
- has been advised of the possibility of such damages.
164
-
165
- 9. Accepting Warranty or Additional Liability. While redistributing
166
- the Work or Derivative Works thereof, You may choose to offer,
167
- and charge a fee for, acceptance of support, warranty, indemnity,
168
- or other liability obligations and/or rights consistent with this
169
- License. However, in accepting such obligations, You may act only
170
- on Your own behalf and on Your sole responsibility, not on behalf
171
- of any other Contributor, and only if You agree to indemnify,
172
- defend, and hold each Contributor harmless for any liability
173
- incurred by, or claims asserted against, such Contributor by reason
174
- of your accepting any such warranty or additional liability.
175
-
176
- END OF TERMS AND CONDITIONS
177
-
178
- APPENDIX: How to apply the Apache License to your work.
179
-
180
- To apply the Apache License to your work, attach the following
181
- boilerplate notice, with the fields enclosed by brackets "[]"
182
- replaced with your own identifying information. (Don't include
183
- the brackets!) The text should be enclosed in the appropriate
184
- comment syntax for the file format. We also recommend that a
185
- file or class name and description of purpose be included on the
186
- same "printed page" as the copyright notice for easier
187
- identification within third-party archives.
188
-
189
- Copyright [yyyy] [name of copyright owner]
190
-
191
- Licensed under the Apache License, Version 2.0 (the "License");
192
- you may not use this file except in compliance with the License.
193
- You may obtain a copy of the License at
194
-
195
- http://www.apache.org/licenses/LICENSE-2.0
196
-
197
- Unless required by applicable law or agreed to in writing, software
198
- distributed under the License is distributed on an "AS IS" BASIS,
199
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200
- See the License for the specific language governing permissions and
201
- limitations under the License.
 
SoundScribe/SpeakerID/README.rst DELETED
@@ -1,387 +0,0 @@
1
-
2
- |status| |documentation| |codeql| |license| |pypi| |pyversion| |downloads| |black|
3
-
4
- .. |status| image:: http://www.repostatus.org/badges/latest/active.svg
5
- :target: http://www.repostatus.org/#active
6
- :alt: Project Status: Active – The project has reached a stable, usable state and is being actively developed.
7
-
8
- .. |documentation| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
9
- :alt: Documentation
10
- :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
11
-
12
- .. |license| image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
13
- :target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
14
- :alt: NeMo core license and license for collections in this repo
15
-
16
- .. |pypi| image:: https://badge.fury.io/py/nemo-toolkit.svg
17
- :target: https://badge.fury.io/py/nemo-toolkit
18
- :alt: Release version
19
-
20
- .. |pyversion| image:: https://img.shields.io/pypi/pyversions/nemo-toolkit.svg
21
- :target: https://badge.fury.io/py/nemo-toolkit
22
- :alt: Python version
23
-
24
- .. |downloads| image:: https://static.pepy.tech/personalized-badge/nemo-toolkit?period=total&units=international_system&left_color=grey&right_color=brightgreen&left_text=downloads
25
- :target: https://pepy.tech/project/nemo-toolkit
26
- :alt: PyPi total downloads
27
-
28
- .. |codeql| image:: https://github.com/nvidia/nemo/actions/workflows/codeql.yml/badge.svg?branch=main&event=push
29
- :target: https://github.com/nvidia/nemo/actions/workflows/codeql.yml
30
- :alt: CodeQL
31
-
32
- .. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
33
- :target: https://github.com/psf/black
34
- :alt: Code style: black
35
-
36
- .. _main-readme:
37
-
38
- **NVIDIA NeMo**
39
- ===============
40
-
41
- Introduction
42
- ------------
43
-
44
- NVIDIA NeMo is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR),
45
- text-to-speech synthesis (TTS), large language models (LLMs), and
46
- natural language processing (NLP).
47
- The primary objective of NeMo is to help researchers from industry and academia to reuse prior work (code and pretrained models)
48
- and make it easier to create new `conversational AI models <https://developer.nvidia.com/conversational-ai#started>`_.
49
-
50
- All NeMo models are trained with `Lightning <https://github.com/Lightning-AI/lightning>`_ and
51
- training is automatically scalable to 1000s of GPUs.
52
- Additionally, NeMo Megatron LLM models can be trained up to 1 trillion parameters using tensor and pipeline model parallelism.
53
- NeMo models can be optimized for inference and deployed for production use-cases with `NVIDIA Riva <https://developer.nvidia.com/riva>`_.
54
-
55
- Getting started with NeMo is simple.
56
- State of the Art pretrained NeMo models are freely available on `HuggingFace Hub <https://huggingface.co/models?library=nemo&sort=downloads&search=nvidia>`_ and
57
- `NVIDIA NGC <https://catalog.ngc.nvidia.com/models?query=nemo&orderBy=weightPopularDESC>`_.
58
- These models can be used to transcribe audio, synthesize speech, or translate text in just a few lines of code.
59
-
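As a concrete illustration of the "few lines of code" claim above, here is a minimal sketch (not part of the deleted README) that loads a pretrained ASR checkpoint and transcribes a local file. It assumes ``nemo_toolkit['all']`` is installed; the checkpoint name and audio path are illustrative placeholders.

```python
# Minimal sketch: transcribe a local WAV file with a pretrained NeMo ASR model.
# Assumes nemo_toolkit['all'] is installed; checkpoint name and path are placeholders.
import nemo.collections.asr as nemo_asr

# Downloads the named checkpoint on first use.
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained("QuartzNet15x5Base-En")

# Returns one transcript per input file.
transcripts = asr_model.transcribe(["sample.wav"])
print(transcripts[0])
```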
60
- We have extensive `tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_ that
61
- can be run on `Google Colab <https://colab.research.google.com>`_.
62
-
63
- For advanced users that want to train NeMo models from scratch or finetune existing NeMo models
64
- we have a full suite of `example scripts <https://github.com/NVIDIA/NeMo/tree/main/examples>`_ that support multi-GPU/multi-node training.
65
-
66
- For scaling NeMo LLM training on Slurm clusters or public clouds, please see the `NVIDIA NeMo Megatron Launcher <https://github.com/NVIDIA/NeMo-Megatron-Launcher>`_.
67
- The NM launcher has extensive recipes, scripts, utilities, and documentation for training NeMo LLMs and also has an `Autoconfigurator <https://github.com/NVIDIA/NeMo-Megatron-Launcher#53-using-autoconfigurator-to-find-the-optimal-configuration>`_
68
- which can be used to find the optimal model parallel configuration for training on a specific cluster.
69
-
70
- Key Features
71
- ------------
72
-
73
- * Speech processing
74
- * `HuggingFace Space for Audio Transcription (File, Microphone and YouTube) <https://huggingface.co/spaces/smajumdar/nemo_multilingual_language_id>`_
75
- * `Pretrained models <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_ available in 14+ languages
76
- * `Automatic Speech Recognition (ASR) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/intro.html>`_
77
- * Supported ASR `models <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html>`_:
78
- * Jasper, QuartzNet, CitriNet, ContextNet
79
- * Conformer-CTC, Conformer-Transducer, FastConformer-CTC, FastConformer-Transducer
80
- * Squeezeformer-CTC and Squeezeformer-Transducer
81
- * LSTM-Transducer (RNNT) and LSTM-CTC
82
- * Supports the following decoders/losses:
83
- * CTC
84
- * Transducer/RNNT
85
- * Hybrid Transducer/CTC
86
- * NeMo Original `Multi-blank Transducers <https://arxiv.org/abs/2211.03541>`_ and `Token-and-Duration Transducers (TDT) <https://arxiv.org/abs/2304.06795>`_
87
- * Streaming/Buffered ASR (CTC/Transducer) - `Chunked Inference Examples <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_chunked_inference>`_
88
- * `Cache-aware Streaming Conformer <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html#cache-aware-streaming-conformer>`_ with multiple lookaheads.
89
- * Beam Search decoding
90
- * `Language Modelling for ASR (CTC and RNNT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html>`_: N-gram LM in fusion with Beam Search decoding, Neural Rescoring with Transformer
91
- * `Support of long audios for Conformer with memory efficient local attention <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/results.html#inference-on-long-audio>`_
92
- * `Speech Classification, Speech Command Recognition and Language Identification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_classification/intro.html>`_: MatchboxNet (Command Recognition), AmberNet (LangID)
93
- * `Voice activity Detection (VAD) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/speech_classification/models.html#marblenet-vad>`_: MarbleNet
94
- * ASR with VAD Inference - `Example <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_vad>`_
95
- * `Speaker Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
96
- * `Speaker Diarization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_diarization/intro.html>`_
97
- * Clustering Diarizer: TitaNet, ECAPA_TDNN, SpeakerNet
98
- * Neural Diarizer: MSDD (Multi-scale Diarization Decoder)
99
- * `Speech Intent Detection and Slot Filling <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_intent_slot/intro.html>`_: Conformer-Transformer
100
- * Natural Language Processing
101
- * `NeMo Megatron pre-training of Large Language Models <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html>`_
102
- * `Neural Machine Translation (NMT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/machine_translation/machine_translation.html>`_
103
- * `Punctuation and Capitalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/punctuation_and_capitalization.html>`_
104
- * `Token classification (named entity recognition) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/token_classification.html>`_
105
- * `Text classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_classification.html>`_
106
- * `Joint Intent and Slot Classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/joint_intent_slot.html>`_
107
- * `Question answering <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/question_answering.html>`_
108
- * `GLUE benchmark <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/glue_benchmark.html>`_
109
- * `Information retrieval <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/information_retrieval.html>`_
110
- * `Entity Linking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/entity_linking.html>`_
111
- * `Dialogue State Tracking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/sgd_qa.html>`_
112
- * `Prompt Learning <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/nemo_megatron/prompt_learning.html>`_
113
- * `NGC collection of pre-trained NLP models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_nlp>`_
114
- * `Synthetic Tabular Data Generation <https://developer.nvidia.com/blog/generating-synthetic-data-with-transformers-a-solution-for-enterprise-data-challenges/>`_
115
- * Text-to-Speech Synthesis (TTS):
116
- * `Documentation <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tts/intro.html#>`_
117
- * Mel-Spectrogram generators: FastPitch, SSL FastPitch, Mixer-TTS/Mixer-TTS-X, RAD-TTS, Tacotron2
118
- * Vocoders: HiFiGAN, UnivNet, WaveGlow
119
- * End-to-End Models: VITS
120
- * `Pre-trained Model Checkpoints in NVIDIA GPU Cloud (NGC) <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_tts>`_
121
- * `Tools <https://github.com/NVIDIA/NeMo/tree/stable/tools>`_
122
- * `Text Processing (text normalization and inverse text normalization) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_normalization/intro.html>`_
123
- * `NeMo Forced Aligner <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/nemo_forced_aligner.html>`_
124
- * `CTC-Segmentation tool <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/ctc_segmentation.html>`_
125
- * `Speech Data Explorer <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/speech_data_explorer.html>`_: a dash-based tool for interactive exploration of ASR/TTS datasets
126
- * `Speech Data Processor <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/tools/speech_data_processor.html>`_
127
-
128
-
129
- Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
130
-
131
- Requirements
132
- ------------
133
-
134
- 1) Python 3.10 or above
135
- 2) Pytorch 1.13.1 or above
136
- 3) NVIDIA GPU, if you intend to do model training
137
-
138
- Documentation
139
- -------------
140
-
141
- .. |main| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
142
- :alt: Documentation Status
143
- :scale: 100%
144
- :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
145
-
146
- .. |stable| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=stable
147
- :alt: Documentation Status
148
- :scale: 100%
149
- :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/
150
-
151
- +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
152
- | Version | Status | Description |
153
- +=========+=============+==========================================================================================================================================+
154
- | Latest | |main| | `Documentation of the latest (i.e. main) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/>`_ |
155
- +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
156
- | Stable | |stable| | `Documentation of the stable (i.e. most recent release) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/>`_ |
157
- +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
158
-
159
- Tutorials
160
- ---------
161
- A great way to start with NeMo is by checking `one of our tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_.
162
-
163
- You can also get a high-level overview of NeMo by watching the talk *NVIDIA NeMo: Toolkit for Conversational AI*, presented at PyData Yerevan 2022:
164
-
165
- |pydata|
166
-
167
- .. |pydata| image:: https://img.youtube.com/vi/J-P6Sczmas8/maxres3.jpg
168
- :target: https://www.youtube.com/embed/J-P6Sczmas8?mute=0&start=14&autoplay=0
169
- :width: 600
170
- :alt: NeMo presentation at PyData@Yerevan 2022
171
-
172
- Getting help with NeMo
173
- ----------------------
174
- FAQ can be found on NeMo's `Discussions board <https://github.com/NVIDIA/NeMo/discussions>`_. You are welcome to ask questions or start discussions there.
175
-
176
-
177
- Installation
178
- ------------
179
- Conda
180
- ~~~~~
181
-
182
- We recommend installing NeMo in a fresh Conda environment.
183
-
184
- .. code-block:: bash
185
-
186
- conda create --name nemo python==3.10.12
187
- conda activate nemo
188
-
189
- Install PyTorch using their `configurator <https://pytorch.org/get-started/locally/>`_.
190
-
191
- .. code-block:: bash
192
-
193
- conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
194
-
195
- The command used to install PyTorch may depend on your system. Please use the configurator linked above to find the right command for your system.
196
-
197
- Pip
198
- ~~~
199
- Use this installation mode if you want the latest released version.
200
-
201
- .. code-block:: bash
202
-
203
- apt-get update && apt-get install -y libsndfile1 ffmpeg
204
- pip install Cython
205
- pip install nemo_toolkit['all']
206
-
207
- Depending on the shell used, you may need to use ``"nemo_toolkit[all]"`` instead in the above command.
208
-
209
- Pip from source
210
- ~~~~~~~~~~~~~~~
211
- Use this installation mode if you want the version from a particular GitHub branch (e.g. main).
212
-
213
- .. code-block:: bash
214
-
215
- apt-get update && apt-get install -y libsndfile1 ffmpeg
216
- pip install Cython
217
- python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[all]
218
-
219
-
220
- From source
221
- ~~~~~~~~~~~
222
- Use this installation mode if you are contributing to NeMo.
223
-
224
- .. code-block:: bash
225
-
226
- apt-get update && apt-get install -y libsndfile1 ffmpeg
227
- git clone https://github.com/NVIDIA/NeMo
228
- cd NeMo
229
- ./reinstall.sh
230
-
231
- If you only want the toolkit without additional conda-based dependencies, you may replace ``reinstall.sh``
232
- with ``pip install -e .`` when your PWD is the root of the NeMo repository.
233
-
234
- Mac computers with Apple silicon
235
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
236
- To install NeMo on Mac with Apple M-Series GPU:
237
-
238
- - create a new Conda environment
239
-
240
- - install PyTorch 2.0 or higher
241
-
242
- - run the following code:
243
-
244
- .. code-block:: shell
245
-
246
- # [optional] install mecab using Homebrew, to use sacrebleu for NLP collection
247
- # you can install Homebrew here: https://brew.sh
248
- brew install mecab
249
-
250
- # [optional] install pynini using Conda, to use text normalization
251
- conda install -c conda-forge pynini
252
-
253
- # install Cython manually
254
- pip install cython
255
-
256
- # clone the repo and install in development mode
257
- git clone https://github.com/NVIDIA/NeMo
258
- cd NeMo
259
- ./reinstall.sh
260
-
261
- RNNT
262
- ~~~~
263
- Note that RNNT requires numba to be installed from conda.
264
-
265
- .. code-block:: bash
266
-
267
- conda remove numba
268
- pip uninstall numba
269
- conda install -c conda-forge numba
270
-
271
- NeMo Megatron
272
- ~~~~~~~~~~~~~
273
- NeMo Megatron training requires NVIDIA Apex to be installed.
274
- Install it manually if not using the NVIDIA PyTorch container.
275
-
276
- To install Apex, run
277
-
278
- .. code-block:: bash
279
-
280
- git clone https://github.com/NVIDIA/apex.git
281
- cd apex
282
- git checkout 52e18c894223800cb611682dce27d88050edf1de
283
- pip install -v --no-build-isolation --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
284
-
285
- It is highly recommended to use the NVIDIA PyTorch or NeMo container if having issues installing Apex or any other dependencies.
286
-
287
- While installing Apex, it may raise an error if the CUDA version on your system does not match the CUDA version torch was compiled with.
288
- This error can be avoided by commenting out the check here: https://github.com/NVIDIA/apex/blob/master/setup.py#L32
289
-
290
- cuda-nvprof is needed to install Apex. The version should match the CUDA version that you are using:
291
-
292
- .. code-block:: bash
293
-
294
- conda install -c nvidia cuda-nvprof=11.8
295
-
296
- packaging is also needed:
297
-
298
- .. code-block:: bash
299
-
300
- pip install packaging
301
-
302
- With the latest versions of Apex, the `pyproject.toml` file in Apex may need to be deleted in order to install locally.
303
-
304
-
305
- Transformer Engine
306
- ~~~~~~~~~~~~~~~~~~
307
- NeMo Megatron GPT has been integrated with `NVIDIA Transformer Engine <https://github.com/NVIDIA/TransformerEngine>`_.
308
- Transformer Engine enables FP8 training on NVIDIA Hopper GPUs.
309
- `Install <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/installation.html>`_ it manually if not using the NVIDIA PyTorch container.
310
-
311
- .. code-block:: bash
312
-
313
- pip install --upgrade git+https://github.com/NVIDIA/TransformerEngine.git@stable
314
-
315
- It is highly recommended to use the NVIDIA PyTorch or NeMo container if having issues installing Transformer Engine or any other dependencies.
316
-
317
- Transformer Engine requires PyTorch to be built with CUDA 11.8.
318
-
319
-
320
- Flash Attention
321
- ~~~~~~~~~~~~~~~~~~~~
322
- Transformer Engine already supports Flash Attention for GPT models. If you want to use Flash Attention for non-causal models or with attention bias (introduced by position encoding, e.g. ALiBi), please install `flash-attn <https://github.com/HazyResearch/flash-attention>`_.
323
-
324
- .. code-block:: bash
325
-
326
- pip install flash-attn
327
- pip install triton==2.0.0.dev20221202
328
-
329
- NLP inference UI
330
- ~~~~~~~~~~~~~~~~~~~~
331
- To launch the inference web UI server, please install `gradio <https://gradio.app/>`_.
332
-
333
- .. code-block:: bash
334
-
335
- pip install gradio==3.34.0
336
-
337
- NeMo Text Processing
338
- ~~~~~~~~~~~~~~~~~~~~
339
- NeMo Text Processing, specifically (Inverse) Text Normalization, is now a separate repository `https://github.com/NVIDIA/NeMo-text-processing <https://github.com/NVIDIA/NeMo-text-processing>`_.
340
-
341
- Docker containers:
342
- ~~~~~~~~~~~~~~~~~~
343
- We release NeMo containers alongside NeMo releases. For example, NeMo ``r1.20.0`` comes with container ``nemo:23.06``; you can find more details about released containers on the `releases page <https://github.com/NVIDIA/NeMo/releases>`_.
344
-
345
- To use a prebuilt container, please run
346
-
347
- .. code-block:: bash
348
-
349
- docker pull nvcr.io/nvidia/nemo:23.06
350
-
351
- To build a NeMo container from a branch's Dockerfile, please run
352
-
353
- .. code-block:: bash
354
-
355
- DOCKER_BUILDKIT=1 docker build -f Dockerfile -t nemo:latest .
356
-
357
-
358
- If you choose to work with the main branch, we recommend using NVIDIA's PyTorch container version 23.06-py3 and then installing from GitHub.
359
-
360
- .. code-block:: bash
361
-
362
- docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
363
- -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
364
- stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:23.06-py3
365
-
366
- Examples
367
- --------
368
-
369
- Many examples can be found under the `"Examples" <https://github.com/NVIDIA/NeMo/tree/stable/examples>`_ folder.
370
-
371
-
372
- Contributing
373
- ------------
374
-
375
- We welcome community contributions! Please refer to `CONTRIBUTING.md <https://github.com/NVIDIA/NeMo/blob/stable/CONTRIBUTING.md>`_ for the process.
376
-
377
- Publications
378
- ------------
379
-
380
- We provide an ever-growing list of `publications <https://nvidia.github.io/NeMo/publications/>`_ that utilize the NeMo framework.
381
-
382
- If you would like to add your own article to the list, you are welcome to do so via a pull request to this repository's ``gh-pages-src`` branch.
383
- Please refer to the instructions in the `README of that branch <https://github.com/NVIDIA/NeMo/tree/gh-pages-src#readme>`_.
384
-
385
- License
386
- -------
387
- NeMo is released under an `Apache 2.0 license <https://github.com/NVIDIA/NeMo/blob/stable/LICENSE>`_.
 
SoundScribe/SpeakerID/ci.groovy DELETED
@@ -1,119 +0,0 @@
1
- @Library('blossom-github-lib@master')
2
- import ipp.blossom.*
3
-
4
- podTemplate(cloud:'sc-ipp-blossom-prod', yaml : """
5
- apiVersion: v1
6
- kind: Pod
7
- metadata:
8
- labels:
9
- some-label: some-label-value
10
- spec:
11
- volumes:
12
- - name: scratch
13
- nfs:
14
- server: ipp1-cdot01-col01
15
- path: /vol/scratch1/scratch.okuchaiev_blossom
16
- containers:
17
- - name: latestdlfw
18
- image: nvcr.io/nvidia/pytorch:23.02-py3
19
- command:
20
- - cat
21
- volumeMounts:
22
- - name: scratch
23
- mountPath: /testdata
24
- resources:
25
- limits:
26
- nvidia.com/gpu: 2
27
- restartPolicy: Never
28
- backoffLimit: 4
29
- tty: true
30
- shm-size: 32g
31
- nodeSelector:
32
- kubernetes.io/os: linux
33
- nvidia.com/gpu_type: "Tesla_T4x4"
34
- nvidia.com/node_type: gpu_tester
35
- nvidia.com/driver_version: "510.20"
36
- """
37
- ) {
38
- node(POD_LABEL) {
39
- def githubHelper
40
- stage('Get Token') {
41
- withCredentials([usernamePassword(credentialsId: 'GHAtoken', passwordVariable: 'GIT_PASSWORD', usernameVariable: 'GIT_USERNAME')]) {
42
- // create new instance of helper object
43
- githubHelper = GithubHelper.getInstance("${GIT_PASSWORD}", githubData)
44
- }
45
-
46
- }
47
- def stageName = ''
48
- try {
49
- currentBuild.description = githubHelper.getBuildDescription()
50
- container('latestdlfw') {
51
- stage('Code checkout') {
52
- // update status on github
53
- githubHelper.updateCommitStatus("$BUILD_URL", "$stageName Running", GitHubCommitState.PENDING)
54
- checkout changelog: true, poll: true, scm: [$class: 'GitSCM', branches: [[name: "pr/"+githubHelper.getPRNumber()]],
55
- doGenerateSubmoduleConfigurations: false,
56
- submoduleCfg: [],
57
- userRemoteConfigs: [[credentialsId: 'github-token', url: githubHelper.getCloneUrl(), refspec: '+refs/pull/*/head:refs/remotes/origin/pr/*']]]
58
- }
59
-
60
- stage('Code Style') {
61
- sh "apt-get update && \
62
- apt-get install -y bc && \
63
- nvidia-smi && \
64
- pip install -r requirements/requirements_test.txt && \
65
- python setup.py style && ls -l /testdata/TestData && ln -s /testdata/TestData /home/TestData && \
66
- ls -l /home && ls -l /home/TestData"
67
- }
68
-
69
- stage('Installation') {
70
- sh "git config --global --add safe.directory '*' && nvidia-smi && ./reinstall.sh release"
71
- }
72
-
73
- stage('L0: GPU unit tests') {
74
- sh "NEMO_NUMBA_MINVER=0.53 pytest -m 'not pleasefixme'"
75
- }
76
-
77
- parallel( //USE CUDA_VISIBLE_DEVICES to execute 2 single GPU tests in parallel here
78
- [
79
- "L1: NMT Training Pre-LN": { sh 'CUDA_VISIBLE_DEVICES=0 python examples/nlp/machine_translation/enc_dec_nmt.py \
80
- --config-path=conf \
81
- --config-name=aayn_base \
82
- do_testing=true \
83
- model.train_ds.src_file_name=/testdata/TestData/nlp/nmt/toy_data/wmt14-de-en.src \
84
- model.train_ds.tgt_file_name=/testdata/TestData/nlp/nmt/toy_data/wmt14-de-en.ref \
85
- model.validation_ds.src_file_name=/testdata/TestData/nlp/nmt/toy_data/wmt14-de-en.src \
86
- model.validation_ds.tgt_file_name=/testdata/TestData/nlp/nmt/toy_data/wmt14-de-en.src \
87
- model.test_ds.src_file_name=/testdata/TestData/nlp/nmt/toy_data/wmt14-de-en.src \
88
- model.test_ds.tgt_file_name=/testdata/TestData/nlp/nmt/toy_data/wmt14-de-en.src \
89
- model.encoder_tokenizer.tokenizer_model=/testdata/TestData/nlp/nmt/toy_data/tt_tokenizer.BPE.4096.model \
90
- model.decoder_tokenizer.tokenizer_model=/testdata/TestData/nlp/nmt/toy_data/tt_tokenizer.BPE.4096.model \
91
- model.encoder.pre_ln=true \
92
- model.decoder.pre_ln=true \
93
- trainer.devices=[0] \
94
- trainer.accelerator="gpu" \
95
- +trainer.fast_dev_run=true \
96
- +trainer.limit_test_batches=2 \
97
- exp_manager=null \
98
- '},
99
- "L1: Speech to text": { sh 'CUDA_VISIBLE_DEVICES=1 python examples/asr/asr_ctc/speech_to_text_ctc.py \
100
- model.train_ds.manifest_filepath=/testdata/TestData/an4_dataset/an4_train.json \
101
- model.validation_ds.manifest_filepath=/testdata/TestData/an4_dataset/an4_val.json \
102
- trainer.devices=[0] \
103
- trainer.accelerator="gpu" \
104
- +trainer.fast_dev_run=True \
105
- exp_manager=null \
106
- '}
107
- ]
108
- )//end of parallel
109
- }
110
- githubHelper.updateCommitStatus("$BUILD_URL", "Complete", GitHubCommitState.SUCCESS)
111
- }
112
- catch (Exception ex){
113
- currentBuild.result = 'FAILURE'
114
- println ex
115
- githubHelper.updateCommitStatus("$BUILD_URL", "$stageName Failed", GitHubCommitState.FAILURE)
116
- }
117
-
118
- }
119
- }
 
SoundScribe/SpeakerID/docs/Makefile DELETED
@@ -1,216 +0,0 @@
1
- # Makefile for Sphinx documentation
2
- #
3
-
4
- # You can set these variables from the command line.
5
- SPHINXOPTS =
6
- SPHINXBUILD = sphinx-build
7
- PAPER =
8
- BUILDDIR = build
9
-
10
- # User-friendly check for sphinx-build
11
- ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
12
- $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
13
- endif
14
-
15
- # Internal variables.
16
- PAPEROPT_a4 = -D latex_paper_size=a4
17
- PAPEROPT_letter = -D latex_paper_size=letter
18
- ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
19
- # the i18n builder cannot share the environment and doctrees with the others
20
- I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
21
-
22
- .PHONY: help
23
- help:
24
- @echo "Please use \`make <target>' where <target> is one of"
25
- @echo " html to make standalone HTML files"
26
- @echo " dirhtml to make HTML files named index.html in directories"
27
- @echo " singlehtml to make a single large HTML file"
28
- @echo " pickle to make pickle files"
29
- @echo " json to make JSON files"
30
- @echo " htmlhelp to make HTML files and a HTML help project"
31
- @echo " qthelp to make HTML files and a qthelp project"
32
- @echo " applehelp to make an Apple Help Book"
33
- @echo " devhelp to make HTML files and a Devhelp project"
34
- @echo " epub to make an epub"
35
- @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
36
- @echo " latexpdf to make LaTeX files and run them through pdflatex"
37
- @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
38
- @echo " text to make text files"
39
- @echo " man to make manual pages"
40
- @echo " texinfo to make Texinfo files"
41
- @echo " info to make Texinfo files and run them through makeinfo"
42
- @echo " gettext to make PO message catalogs"
43
- @echo " changes to make an overview of all changed/added/deprecated items"
44
- @echo " xml to make Docutils-native XML files"
45
- @echo " pseudoxml to make pseudoxml-XML files for display purposes"
46
- @echo " linkcheck to check all external links for integrity"
47
- @echo " doctest to run all doctests embedded in the documentation (if enabled)"
48
- @echo " coverage to run coverage check of the documentation (if enabled)"
49
-
50
- .PHONY: clean
51
- clean:
52
- rm -rf $(BUILDDIR)/*
53
-
54
- .PHONY: html
55
- html:
56
- $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
57
- @echo
58
- @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
59
-
60
- .PHONY: dirhtml
61
- dirhtml:
62
- $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
63
- @echo
64
- @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
65
-
66
- .PHONY: singlehtml
67
- singlehtml:
68
- $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
69
- @echo
70
- @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
71
-
72
- .PHONY: pickle
73
- pickle:
74
- $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
75
- @echo
76
- @echo "Build finished; now you can process the pickle files."
77
-
78
- .PHONY: json
79
- json:
80
- $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
81
- @echo
82
- @echo "Build finished; now you can process the JSON files."
83
-
84
- .PHONY: htmlhelp
85
- htmlhelp:
86
- $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
87
- @echo
88
- @echo "Build finished; now you can run HTML Help Workshop with the" \
89
- ".hhp project file in $(BUILDDIR)/htmlhelp."
90
-
91
- .PHONY: qthelp
92
- qthelp:
93
- $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
94
- @echo
95
- @echo "Build finished; now you can run "qcollectiongenerator" with the" \
96
- ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
97
- @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/OpenSeq2Seq.qhcp"
98
- @echo "To view the help file:"
99
- @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/OpenSeq2Seq.qhc"
100
-
101
- .PHONY: applehelp
102
- applehelp:
103
- $(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
104
- @echo
105
- @echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
106
- @echo "N.B. You won't be able to view it unless you put it in" \
107
- "~/Library/Documentation/Help or install it in your application" \
108
- "bundle."
109
-
110
- .PHONY: devhelp
111
- devhelp:
112
- $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
113
- @echo
114
- @echo "Build finished."
115
- @echo "To view the help file:"
116
- @echo "# mkdir -p $$HOME/.local/share/devhelp/OpenSeq2Seq"
117
- @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/OpenSeq2Seq"
118
- @echo "# devhelp"
119
-
120
- .PHONY: epub
121
- epub:
122
- $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
123
- @echo
124
- @echo "Build finished. The epub file is in $(BUILDDIR)/epub."
125
-
126
- .PHONY: latex
127
- latex:
128
- $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
129
- @echo
130
- @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
131
- @echo "Run \`make' in that directory to run these through (pdf)latex" \
132
- "(use \`make latexpdf' here to do that automatically)."
133
-
134
- .PHONY: latexpdf
135
- latexpdf:
136
- $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
137
- @echo "Running LaTeX files through pdflatex..."
138
- $(MAKE) -C $(BUILDDIR)/latex all-pdf
139
- @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
140
-
141
- .PHONY: latexpdfja
142
- latexpdfja:
143
- $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
144
- @echo "Running LaTeX files through platex and dvipdfmx..."
145
- $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
146
- @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
147
-
148
- .PHONY: text
149
- text:
150
- $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
151
- @echo
152
- @echo "Build finished. The text files are in $(BUILDDIR)/text."
153
-
154
- .PHONY: man
155
- man:
156
- $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
157
- @echo
158
- @echo "Build finished. The manual pages are in $(BUILDDIR)/man."
159
-
160
- .PHONY: texinfo
161
- texinfo:
162
- $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
163
- @echo
164
- @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
165
- @echo "Run \`make' in that directory to run these through makeinfo" \
166
- "(use \`make info' here to do that automatically)."
167
-
168
- .PHONY: info
169
- info:
170
- $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
171
- @echo "Running Texinfo files through makeinfo..."
172
- make -C $(BUILDDIR)/texinfo info
173
- @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
174
-
175
- .PHONY: gettext
176
- gettext:
177
- $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
178
- @echo
179
- @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
180
-
181
- .PHONY: changes
182
- changes:
183
- $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
184
- @echo
185
- @echo "The overview file is in $(BUILDDIR)/changes."
186
-
187
- .PHONY: linkcheck
188
- linkcheck:
189
- $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
190
- @echo
191
- @echo "Link check complete; look for any errors in the above output " \
192
- "or in $(BUILDDIR)/linkcheck/output.txt."
193
-
194
- .PHONY: doctest
195
- doctest:
196
- $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
197
- @echo "Testing of doctests in the sources finished, look at the " \
198
- "results in $(BUILDDIR)/doctest/output.txt."
199
-
200
- .PHONY: coverage
201
- coverage:
202
- $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
203
- @echo "Testing of coverage in the sources finished, look at the " \
204
- "results in $(BUILDDIR)/coverage/python.txt."
205
-
206
- .PHONY: xml
207
- xml:
208
- $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
209
- @echo
210
- @echo "Build finished. The XML files are in $(BUILDDIR)/xml."
211
-
212
- .PHONY: pseudoxml
213
- pseudoxml:
214
- $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
215
- @echo
216
- @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
 
 
 
 
SoundScribe/SpeakerID/docs/source/_static/css/custom.css DELETED
@@ -1,366 +0,0 @@
1
- body {
2
- font-size: 100%;
3
- font-family: 'NVIDIA Sans', sans-serif;
4
- }
5
-
6
-
7
- /* Width of template */
8
-
9
- .wy-nav-content {
10
- max-width: 1200px !important;
11
- }
12
-
13
-
14
-
15
- /* Standard Text Formatting */
16
-
17
- h1 {
18
- color: #76b900;
19
- text-align: center;
20
- /* background-color: #ffffff; */
21
- }
22
-
23
- h2 {
24
- color: #ffffff;
25
- /* background-color: #ffffff; */
26
- /* #76b900 */
27
- padding: 5px;
28
- }
29
-
30
- h3 {
31
- padding-top: 0px;
32
- border-top: solid 3px #000000;
33
- /* #76b900 */
34
- border-bottom: solid 3px #000000;
35
- /* #76b900 */
36
- }
37
-
38
- p {
39
- margin-bottom: 24px;
40
- }
41
-
42
- /* Link Colors */
43
- a {
44
- color: #76b900;
45
- }
46
-
47
- a:visited {
48
- color: #218219;
49
- }
50
-
51
- .container-xl {
52
- margin-right: unset;
53
- margin-left: unset;
54
- }
55
-
56
- section {
57
- overflow-x: auto;
58
- }
59
-
60
- /* ----------------------------------------------TABLES--------------------------------------- */
61
- section table {
62
- overflow-x: auto;
63
- display: block;
64
- }
65
-
66
- table {
67
- font-size: small;
68
- }
69
-
70
- /* Table head Color */
71
- thead td {
72
- background-color: #333333 !important;
73
- }
74
-
75
- .row-odd p {
76
- /*padding-bottom: 0px;*/
77
- /*margin-bottom: 0px;*/
78
- }
79
-
80
- /* even rows*/
81
-
82
- .row-even tr {
83
- background-color: #e5f1e6 !important;
84
- }
85
-
86
- /* odd rows*/
87
-
88
-
89
- .wy-table-responsive table tr {
90
- background-color: #ffffff !important;
91
- }
92
-
93
-
94
-
95
- .wy-table-responsive table td {
96
- white-space: normal;
97
- }
98
-
99
-
100
- /* Removes bottom margin in tables*/
101
-
102
- .rst-content .line-block {
103
- margin-bottom: 0px;
104
- }
105
-
106
- .wy-table-responsive {
107
- overflow: visible !important;
108
- }
109
-
110
- /* reduces the size of text in multiline table columns. */
111
-
112
- .rst-content table.docutils td {
113
- font-size: 80%;
114
- }
115
-
116
- .rst-content dl:not(.docutils) dt {
117
-
118
- background-color: inherit;
119
- color: #000000;
120
- border-top: solid 0px #000000;
121
-
122
- }
123
-
124
- .rst-content dl:not(.docutils) dt:before {
125
- color: #333333;
126
- }
127
-
128
- .rst-content .line-block {
129
- margin-bottom: 0px;
130
- }
131
-
132
- .wy-side-nav-search,
133
- .wy-nav-top {
134
- background-color: #000000;
135
- padding: 0;
136
- }
137
-
138
- .wy-side-nav-search img {
139
- padding: 0px;
140
- padding: 0px 0px;
141
- margin-bottom: 0;
142
- }
143
-
144
- .wy-side-nav-search input[type=text] {
145
- border-radius: 0px;
146
- }
147
-
148
-
149
- .wy-menu-vertical p.caption {
150
- color: #76b900;
151
- }
152
-
153
-
154
- .wy-side-nav-search>a img.logo,
155
- .wy-side-nav-search .wy-dropdown>a img.logo {
156
- margin: 0px 0px 0px 0px;
157
- }
158
-
159
- .wy-nav-content {
160
- margin: 0;
161
- min-height: 100%;
162
- height: 100%;
163
- background: #ffffff;
164
- }
165
-
166
- /* List (numbered, bulleted) padding Fix */
167
-
168
-
169
- .wy-plain-list-decimal li {
170
- margin-top: -6px;
171
- margin-bottom: -6px;
172
- }
173
-
174
- .rst-content .section ol.loweralpha {
175
- margin-top: -6px;
176
- margin-bottom: 12px;
177
- }
178
-
179
- .wy-plain-list-disc,
180
- .rst-content .toctree-wrapper ul,
181
- article ul {
182
- margin-top: 0px !important;
183
- margin-bottom: 12px;
184
- }
185
-
186
- /* Alert Boxes */
187
- /* Background color of Alert Box Title */
188
-
189
- .rst-content .section ul {
190
- margin-top: -12px;
191
- margin-bottom: 16px;
192
- }
193
-
194
- .wy-alert.wy-alert-info .wy-alert-title,
195
- .rst-content .note .wy-alert-title,
196
- .rst-content .wy-alert-info.attention .wy-alert-title,
197
- .rst-content .wy-alert-info.caution .wy-alert-title,
198
- .rst-content .wy-alert-info.danger .wy-alert-title,
199
- .rst-content .wy-alert-info.error .wy-alert-title,
200
- .rst-content .wy-alert-info.hint .wy-alert-title,
201
- .rst-content .wy-alert-info.important .wy-alert-title,
202
- .rst-content .wy-alert-info.tip .wy-alert-title,
203
- .rst-content .wy-alert-info.warning .wy-alert-title,
204
- .rst-content .seealso .wy-alert-title,
205
- .rst-content .wy-alert-info.admonition-todo .wy-alert-title,
206
- .rst-content .wy-alert-info.admonition .wy-alert-title,
207
- .wy-alert.wy-alert-info .rst-content .admonition-title,
208
- .rst-content .wy-alert.wy-alert-info .admonition-title,
209
- .rst-content .note .admonition-title,
210
- .rst-content .wy-alert-info.attention .admonition-title,
211
- .rst-content .wy-alert-info.caution .admonition-title,
212
- .rst-content .wy-alert-info.danger .admonition-title,
213
- .rst-content .wy-alert-info.error .admonition-title,
214
- .rst-content .wy-alert-info.hint .admonition-title,
215
- .rst-content .wy-alert-info.important .admonition-title,
216
- .rst-content .wy-alert-info.tip .admonition-title,
217
- .rst-content .wy-alert-info.warning .admonition-title,
218
- .rst-content .seealso .admonition-title,
219
- .rst-content .wy-alert-info.admonition-todo .admonition-title,
220
- .rst-content .wy-alert-info.admonition .admonition-title {
221
- background: #76b900;
222
- }
223
-
224
- /* Background and Font Color of Alert Box Main Body*/
225
- .wy-alert.wy-alert-info,
226
- .rst-content .note,
227
- .rst-content .wy-alert-info.attention,
228
- .rst-content .wy-alert-info.caution,
229
- .rst-content .wy-alert-info.danger,
230
- .rst-content .wy-alert-info.error,
231
- .rst-content .wy-alert-info.hint,
232
- .rst-content .wy-alert-info.important,
233
- .rst-content .wy-alert-info.tip,
234
- .rst-content .wy-alert-info.warning,
235
- .rst-content .seealso,
236
- .rst-content .wy-alert-info.admonition-todo,
237
- .rst-content .wy-alert-info.admonition {
238
- background: #333333;
239
- color: #999999;
240
- }
241
-
242
- .section {
243
- margin-top: 50px;
244
- }
245
-
246
- /* Logo */
247
- .navbar-brand-box {
248
- background-color: #ffffff;
249
- }
250
-
251
- /* ---------------------------------------------- Media Queries --------------------------------------- */
252
- @media (min-width: 1200px) {
253
- .container-xl {
254
- max-width: 100%;
255
- }
256
- }
257
-
258
- @media (min-width: none) {
259
- body {
260
- font-size: 18px;
261
- }
262
-
263
- #site-navigation nav ul.nav {
264
- font-size: 18px;
265
- }
266
-
267
- #site-navigation nav.bd-links p {
268
- font-size: 18px;
269
- }
270
-
271
- #site-navigation {
272
- width: 350px;
273
- }
274
-
275
- .toc-h2 {
276
- font-size: 18px;
277
- }
278
-
279
- .toc-h3 {
280
- font-size: 1rem;
281
- }
282
-
283
- .toc-h4 {
284
- font-size: 0.85rem;
285
- }
286
-
287
- .header-article .bd-toc {
288
- font-size: 18px;
289
- }
290
-
291
- #main-content>div {
292
- margin-left: 10%;
293
- margin-right: 10%;
294
- }
295
- }
296
-
297
- /* ---------------------------------------------- NVIDIA Sans --------------------------------------- */
298
-
299
- :root {
300
- --md-text-font: "NVIDIA Sans";
301
- /* --md-code-font: "NVIDIA Sans"; */
302
- }
303
-
304
- @font-face {
305
- font-family: "NVIDIA Sans";
306
- src: url(https://aws1.discourse-cdn.com/nvidia/original/3X/5/2/52891dda673228d54e5d57bf1e4a3880d4b22405.woff2) format("woff2"),
307
- url(https://aws1.discourse-cdn.com/nvidia/original/3X/e/0/e090b7dda7a582522c7f9045c6ce949cce60134f.woff) format("woff");
308
- font-weight: 300;
309
- font-style: normal;
310
- }
311
-
312
- @font-face {
313
- font-family: "NVIDIA Sans";
314
- src: url(https://aws1.discourse-cdn.com/nvidia/original/3X/a/1/a107baabcbf6b241099122336bce7429bcfd377a.woff2) format("woff2"),
315
- url(https://aws1.discourse-cdn.com/nvidia/original/3X/3/a/3a6060a4e3bce70e5552ba0de8af4b22c6cf9144.woff) format("woff");
316
- font-weight: 300;
317
- font-style: italic;
318
- }
319
-
320
- @font-face {
321
- font-family: "NVIDIA Sans";
322
- src: url(https://aws1.discourse-cdn.com/nvidia/original/3X/9/9/9920d2b172b01d92fc9c1c0e521dcf45b59c47c3.woff2) format("woff2"),
323
- url(https://aws1.discourse-cdn.com/nvidia/original/3X/6/c/6c7d947928a7e4ef3e80ed409bef6c243f2148cb.woff) format("woff");
324
- font-weight: 400;
325
- font-style: normal;
326
- }
327
-
328
- @font-face {
329
- font-family: "NVIDIA Sans";
330
- src: url(https://aws1.discourse-cdn.com/nvidia/original/3X/e/8/e8e63fe1244372cd942d957f44a5616a1eba0644.woff2) format("woff2"),
331
- url(https://aws1.discourse-cdn.com/nvidia/original/3X/0/f/0f1fb2af0283ab09d36e7097bb07d895c3228f12.woff) format("woff");
332
- font-weight: 400;
333
- font-style: italic;
334
- }
335
-
336
- @font-face {
337
- font-family: "NVIDIA Sans";
338
- src: url(https://aws1.discourse-cdn.com/nvidia/original/3X/7/9/79d3c513a9cd72c59f65354f39f89ca52dc17dd2.woff2) format("woff2"),
339
- url(https://aws1.discourse-cdn.com/nvidia/original/3X/2/5/2581ac533f5d01f4985d8a7245b0766b4630ced8.woff) format("woff");
340
- font-weight: 500;
341
- font-style: normal;
342
- }
343
-
344
- @font-face {
345
- font-family: "NVIDIA Sans";
346
- src: url(https://aws1.discourse-cdn.com/nvidia/original/3X/3/9/39d9ef1ee9770dd503f19bb2ace2fdb4eff3bb50.woff2) format("woff2"),
347
- url(https://aws1.discourse-cdn.com/nvidia/original/3X/7/b/7bb5d5e2e71b2e13c8098b2e67c0a0ed9258e6c7.woff) format("woff");
348
- font-weight: 500;
349
- font-style: italic;
350
- }
351
-
352
- @font-face {
353
- font-family: "NVIDIA Sans";
354
- src: url(https://aws1.discourse-cdn.com/nvidia/original/3X/0/5/05276a55a43eb3f74981ec1e93252727afcd9d16.woff2) format("woff2"),
355
- url(https://aws1.discourse-cdn.com/nvidia/original/3X/9/c/9cfec7ed941b06564aa4d5ca14610e81542d070f.woff) format("woff");
356
- font-weight: 700;
357
- font-style: normal;
358
- }
359
-
360
- @font-face {
361
- font-family: "NVIDIA Sans";
362
- src: url(https://aws1.discourse-cdn.com/nvidia/original/3X/a/e/aebd14d09ba56f541e1b8735fb051e33710f9ae7.woff2) format("woff2"),
363
- url(https://aws1.discourse-cdn.com/nvidia/original/3X/e/d/edbdabef43acc5c12e84a94baaa5542c9404cfeb.woff) format("woff");
364
- font-weight: 700;
365
- font-style: italic;
366
- }
 
 
 
 
 
SoundScribe/SpeakerID/docs/source/_static/js/pk_scripts.js DELETED
@@ -1,19 +0,0 @@
1
- document.addEventListener("DOMContentLoaded", function () {
2
- var params = window.location.search.substring(1).split("&").reduce(function (params, param) {
3
- if (!param) {
4
- return params;
5
- }
6
-
7
- var values = param.split("=");
8
- var name = values[0];
9
- var value = values[1];
10
- params[name] = value;
11
- return params;
12
- }, {});
13
-
14
- var form = document.getElementById("feedback-form");
15
- for (var name in params) {
16
- var input = form.querySelector("[name=" + name + "]");
17
- input.value = params[name];
18
- }
19
- });
 
 
 
 
SoundScribe/SpeakerID/docs/source/_templates/layout.html DELETED
@@ -1,14 +0,0 @@
1
- {% extends "!layout.html" %}
2
-
3
- {% block extrahead %}
4
-
5
- <script type="text/javascript"
6
- src="//assets.adobedtm.com/b92787824f2e0e9b68dc2e993f9bd995339fe417/satelliteLib-7ba51e58dc61bcb0e9311aadd02a0108ab24cc6c.js"></script>
7
-
8
- {% endblock %}
9
-
10
- {% block footer %}
11
-
12
- <script type="text/javascript">_satellite.pageBottom();</script>
13
-
14
- {% endblock %}
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/api.rst DELETED
@@ -1,322 +0,0 @@
1
- NeMo ASR collection API
2
- =======================
3
-
4
-
5
- Model Classes
6
- -------------
7
-
8
- .. autoclass:: nemo.collections.asr.models.EncDecCTCModel
9
- :show-inheritance:
10
- :members: transcribe, change_vocabulary, setup_training_data, setup_optimization, setup_validation_data, setup_test_data, register_artifact
11
-
12
-
13
- .. autoclass:: nemo.collections.asr.models.EncDecCTCModelBPE
14
- :show-inheritance:
15
- :members: transcribe, change_vocabulary, setup_training_data, setup_optimization, setup_validation_data, setup_test_data, register_artifact
16
-
17
-
18
- .. autoclass:: nemo.collections.asr.models.EncDecRNNTModel
19
- :show-inheritance:
20
- :members: transcribe, change_vocabulary, setup_training_data, setup_optimization, setup_validation_data, setup_test_data, register_artifact
21
-
22
-
23
- .. autoclass:: nemo.collections.asr.models.EncDecRNNTBPEModel
24
- :show-inheritance:
25
- :members: transcribe, change_vocabulary, setup_training_data, setup_optimization, setup_validation_data, setup_test_data, register_artifact
26
-
27
-
28
- .. autoclass:: nemo.collections.asr.models.EncDecClassificationModel
29
- :show-inheritance:
30
- :members: setup_training_data, setup_optimization, setup_validation_data, setup_test_data, register_artifact
31
-
32
-
33
- .. autoclass:: nemo.collections.asr.models.EncDecSpeakerLabelModel
34
- :show-inheritance:
35
- :members: setup_training_data, setup_optimization, setup_validation_data, setup_test_data, register_artifact
36
-
37
-
38
- .. autoclass:: nemo.collections.asr.models.hybrid_asr_tts_models.ASRWithTTSModel
39
- :show-inheritance:
40
- :members: from_asr_config, from_pretrained_models, save_asr_model_to, setup_training_data
41
-
42
- .. _confidence-ensembles-api:
43
-
44
- .. autoclass:: nemo.collections.asr.models.confidence_ensembles.ConfidenceEnsembleModel
45
- :show-inheritance:
46
- :members: transcribe
47
-
48
- Modules
49
- -------
50
-
51
- .. autoclass:: nemo.collections.asr.modules.ConvASREncoder
52
- :show-inheritance:
53
- :members:
54
-
55
- .. autoclass:: nemo.collections.asr.modules.ConvASRDecoder
56
- :show-inheritance:
57
- :members:
58
-
59
- .. autoclass:: nemo.collections.asr.modules.ConvASRDecoderClassification
60
- :show-inheritance:
61
- :members:
62
-
63
- .. autoclass:: nemo.collections.asr.modules.SpeakerDecoder
64
- :show-inheritance:
65
- :members:
66
-
67
- .. _conformer-encoder-api:
68
-
69
- .. autoclass:: nemo.collections.asr.modules.ConformerEncoder
70
- :show-inheritance:
71
- :members:
72
-
73
- .. _squeezeformer-encoder-api:
74
-
75
- .. autoclass:: nemo.collections.asr.modules.SqueezeformerEncoder
76
- :show-inheritance:
77
- :members:
78
-
79
- .. _rnn-encoder-api:
80
-
81
- .. autoclass:: nemo.collections.asr.modules.RNNEncoder
82
- :show-inheritance:
83
- :members:
84
-
85
- .. _rnnt-decoder-api:
86
-
87
- .. autoclass:: nemo.collections.asr.modules.RNNTDecoder
88
- :show-inheritance:
89
- :members:
90
-
91
- .. autoclass:: nemo.collections.asr.modules.StatelessTransducerDecoder
92
- :show-inheritance:
93
- :members:
94
-
95
- .. _rnnt-joint-api:
96
-
97
- .. autoclass:: nemo.collections.asr.modules.RNNTJoint
98
- :show-inheritance:
99
- :members:
100
-
101
- .. autoclass:: nemo.collections.asr.modules.SampledRNNTJoint
102
- :show-inheritance:
103
- :members:
104
-
105
-
106
-
107
- Parts
108
- -----
109
-
110
- .. autoclass:: nemo.collections.asr.parts.submodules.jasper.JasperBlock
111
- :show-inheritance:
112
- :members:
113
-
114
-
115
- Mixins
116
- ------
117
-
118
- .. autoclass:: nemo.collections.asr.parts.mixins.mixins.ASRBPEMixin
119
- :show-inheritance:
120
- :members:
121
-
122
- .. autoclass:: nemo.collections.asr.parts.mixins.mixins.ASRModuleMixin
123
- :show-inheritance:
124
- :members:
125
-
126
- .. autoclass:: nemo.collections.asr.parts.mixins.interctc_mixin.InterCTCMixin
127
- :show-inheritance:
128
- :members:
129
-
130
- Datasets
131
- --------
132
-
133
- Character Encoding Datasets
134
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~
135
-
136
- .. autoclass:: nemo.collections.asr.data.audio_to_text.AudioToCharDataset
137
- :show-inheritance:
138
- :members:
139
-
140
- .. autoclass:: nemo.collections.asr.data.audio_to_text.TarredAudioToCharDataset
141
- :show-inheritance:
142
- :members:
143
-
144
-
145
- Text-to-Text Datasets for Hybrid ASR-TTS models
146
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
147
-
148
- .. autoclass:: nemo.collections.asr.data.text_to_text.TextToTextDataset
149
- :show-inheritance:
150
- :members:
151
-
152
- .. autoclass:: nemo.collections.asr.data.text_to_text.TextToTextIterableDataset
153
- :show-inheritance:
154
- :members:
155
-
156
-
157
- Subword Encoding Datasets
158
- ~~~~~~~~~~~~~~~~~~~~~~~~~
159
-
160
- .. autoclass:: nemo.collections.asr.data.audio_to_text.AudioToBPEDataset
161
- :show-inheritance:
162
- :members:
163
-
164
- .. autoclass:: nemo.collections.asr.data.audio_to_text.TarredAudioToBPEDataset
165
- :show-inheritance:
166
- :members:
167
-
168
- Audio Preprocessors
169
- -------------------
170
-
171
- .. autoclass:: nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor
172
- :show-inheritance:
173
- :members:
174
-
175
- .. autoclass:: nemo.collections.asr.modules.AudioToMFCCPreprocessor
176
- :show-inheritance:
177
- :members:
178
-
179
- Audio Augmentors
180
- ----------------
181
-
182
- .. autoclass:: nemo.collections.asr.modules.SpectrogramAugmentation
183
- :show-inheritance:
184
- :members:
185
-
186
- .. autoclass:: nemo.collections.asr.modules.CropOrPadSpectrogramAugmentation
187
- :show-inheritance:
188
- :members:
189
-
190
- .. autoclass:: nemo.collections.asr.parts.preprocessing.perturb.SpeedPerturbation
191
- :show-inheritance:
192
- :members:
193
-
194
- .. autoclass:: nemo.collections.asr.parts.preprocessing.perturb.TimeStretchPerturbation
195
- :show-inheritance:
196
- :members:
197
-
198
- .. autoclass:: nemo.collections.asr.parts.preprocessing.perturb.GainPerturbation
199
- :show-inheritance:
200
- :members:
201
-
202
- .. autoclass:: nemo.collections.asr.parts.preprocessing.perturb.ImpulsePerturbation
203
- :show-inheritance:
204
- :members:
205
-
206
- .. autoclass:: nemo.collections.asr.parts.preprocessing.perturb.ShiftPerturbation
207
- :show-inheritance:
208
- :members:
209
-
210
- .. autoclass:: nemo.collections.asr.parts.preprocessing.perturb.NoisePerturbation
211
- :show-inheritance:
212
- :members:
213
-
214
- .. autoclass:: nemo.collections.asr.parts.preprocessing.perturb.WhiteNoisePerturbation
215
- :show-inheritance:
216
- :members:
217
-
218
- .. autoclass:: nemo.collections.asr.parts.preprocessing.perturb.RirAndNoisePerturbation
219
- :show-inheritance:
220
- :members:
221
-
222
- .. autoclass:: nemo.collections.asr.parts.preprocessing.perturb.TranscodePerturbation
223
- :show-inheritance:
224
- :members:
225
-
226
- Miscellaneous Classes
227
- ---------------------
228
-
229
- CTC Decoding
230
- ~~~~~~~~~~~~
231
-
232
- .. autoclass:: nemo.collections.asr.metrics.wer.CTCDecoding
233
- :show-inheritance:
234
- :members:
235
-
236
- .. autoclass:: nemo.collections.asr.metrics.wer_bpe.CTCBPEDecoding
237
- :show-inheritance:
238
- :members:
239
-
240
- .. autoclass:: nemo.collections.asr.parts.submodules.ctc_greedy_decoding.GreedyCTCInfer
241
- :show-inheritance:
242
- :members:
243
-
244
- .. autoclass:: nemo.collections.asr.parts.submodules.ctc_beam_decoding.BeamCTCInfer
245
- :show-inheritance:
246
- :members:
247
-
248
- RNNT Decoding
249
- ~~~~~~~~~~~~~
250
-
251
- .. autoclass:: nemo.collections.asr.metrics.rnnt_wer.RNNTDecoding
252
- :show-inheritance:
253
- :members:
254
-
255
- .. autoclass:: nemo.collections.asr.metrics.rnnt_wer_bpe.RNNTBPEDecoding
256
- :show-inheritance:
257
- :members:
258
-
259
- .. autoclass:: nemo.collections.asr.parts.submodules.rnnt_greedy_decoding.GreedyRNNTInfer
260
- :show-inheritance:
261
- :members:
262
-
263
- .. autoclass:: nemo.collections.asr.parts.submodules.rnnt_greedy_decoding.GreedyBatchedRNNTInfer
264
- :show-inheritance:
265
- :members:
266
-
267
- .. autoclass:: nemo.collections.asr.parts.submodules.rnnt_beam_decoding.BeamRNNTInfer
268
- :show-inheritance:
269
- :members:
270
-
271
- Hypotheses
272
- ~~~~~~~~~~
273
-
274
- .. autoclass:: nemo.collections.asr.parts.utils.rnnt_utils.Hypothesis
275
- :show-inheritance:
276
- :no-members:
277
-
278
- .. autoclass:: nemo.collections.asr.parts.utils.rnnt_utils.NBestHypotheses
279
- :show-inheritance:
280
- :no-members:
281
-
282
- Adapter Networks
283
- ~~~~~~~~~~~~~~~~
284
-
285
- .. autoclass:: nemo.collections.asr.parts.submodules.adapters.multi_head_attention_adapter_module.MultiHeadAttentionAdapter
286
- :show-inheritance:
287
- :members:
288
- :member-order: bysource
289
-
290
- -----
291
-
292
- .. autoclass:: nemo.collections.asr.parts.submodules.adapters.multi_head_attention_adapter_module.RelPositionMultiHeadAttentionAdapter
293
- :show-inheritance:
294
- :members:
295
- :member-order: bysource
296
-
297
- -----
298
-
299
- .. autoclass:: nemo.collections.asr.parts.submodules.adapters.multi_head_attention_adapter_module.PositionalEncodingAdapter
300
- :show-inheritance:
301
- :members:
302
- :member-order: bysource
303
-
304
- -----
305
-
306
- .. autoclass:: nemo.collections.asr.parts.submodules.adapters.multi_head_attention_adapter_module.RelPositionalEncodingAdapter
307
- :show-inheritance:
308
- :members:
309
- :member-order: bysource
310
-
311
-
312
- Adapter Strategies
313
- ~~~~~~~~~~~~~~~~~~
314
-
315
- .. autoclass:: nemo.collections.asr.parts.submodules.adapters.multi_head_attention_adapter_module.MHAResidualAddAdapterStrategy
316
- :show-inheritance:
317
- :members:
318
- :member-order: bysource
319
- :undoc-members: adapter_module_names
320
-
321
- -----
322
-
 
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/asr_all.bib DELETED
@@ -1,1043 +0,0 @@
1
- @article{matchboxnet,
2
- title={{MatchboxNet}: 1D Time-Channel Separable Convolutional Neural Network Architecture for Speech Commands Recognition},
3
- author={Majumdar, Somshubra and Ginsburg, Boris},
4
- journal={Proc. Interspeech 2020},
5
- year={2020}
6
- }
7
-
8
- @article{marblenet,
9
- title={MarbleNet: Deep 1D Time-Channel Separable Convolutional Neural Network for Voice Activity Detection},
10
- author={Jia, Fei and Majumdar, Somshubra and Ginsburg, Boris},
11
- journal={arXiv preprint arXiv:2010.13886},
12
- year={2020}
13
- }
14
-
15
- @inproceedings{panayotov2015librispeech,
16
- title={Librispeech: an ASR corpus based on public domain audio books},
17
- author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
18
- booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
19
- pages={5206--5210},
20
- year={2015},
21
- organization={IEEE}
22
- }
23
-
24
- @article{luong17,
25
- author = {Minh{-}Thang Luong and Eugene Brevdo and Rui Zhao},
26
- title = {Neural Machine Translation (seq2seq) Tutorial},
27
- journal = {https://github.com/tensorflow/nmt},
28
- year = {2017},
29
- }
30
-
31
- @INPROCEEDINGS{LaurentSeqWiseBN,
32
- author={C. {Laurent} and G. {Pereyra} and P. {Brakel} and Y. {Zhang} and Y. {Bengio}},
33
- booktitle={2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
34
- title={Batch normalized recurrent neural networks},
35
- year={2016},
36
- volume={},
37
- number={},
38
- pages={2657-2661},
39
- keywords={feedforward neural nets;learning (artificial intelligence);recurrent neural nets;speech recognition;batch normalized recurrent neural networks;RNN;sequential data;long-term dependency learning;convergence rate improvement;intermediate representation normalization;feedforward neural networks;speech recognition task;language modeling;training criterion;Training;Recurrent neural networks;Convergence;Speech recognition;Computer architecture;Speech;batch normalization;RNN;LSTM;optimization},
40
- doi={10.1109/ICASSP.2016.7472159},
41
- ISSN={2379-190X},
42
- month={March},}
43
-
44
- @article{graves2005,
45
- author = {Alex Graves and J{\"u}rgen Schmidhuber},
46
- title = {Framewise phoneme classification with bidirectional LSTM and other neural network architectures},
47
- journal = {Neural Networks, vol. 18},
48
- pages={602--610},
49
- year = {2005},
50
- }
51
-
52
- @inproceedings{graves2006,
53
- title={Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks},
54
- author={Graves, Alex and Fern{\'a}ndez, Santiago and Gomez, Faustino and Schmidhuber, J{\"u}rgen},
55
- booktitle={Proceedings of the 23rd international conference on Machine learning},
56
- pages={369--376},
57
- year={2006},
58
- organization={ACM}
59
- }
60
-
61
- @article{li2019jasper,
62
- title={Jasper: An End-to-End Convolutional Neural Acoustic Model},
63
- author={Li, Jason and Lavrukhin, Vitaly and Ginsburg, Boris and Leary, Ryan and Kuchaiev, Oleksii and Cohen, Jonathan M and Nguyen, Huyen and Gadde, Ravi Teja},
64
- journal={arXiv preprint arXiv:1904.03288},
65
- year={2019}
66
- }
67
-
68
- @misc{ardila2019common,
69
- title={Common Voice: A Massively-Multilingual Speech Corpus},
70
- author={Rosana Ardila and Megan Branson and Kelly Davis and Michael Henretty and Michael Kohler and Josh Meyer and Reuben Morais and Lindsay Saunders and Francis M. Tyers and Gregor Weber},
71
- year={2019},
72
- eprint={1912.06670},
73
- archivePrefix={arXiv},
74
- primaryClass={cs.CL}
75
- }
76
-
77
- @article{graves2012,
78
- title={Sequence Transduction with Recurrent Neural Networks},
79
- author={Graves, Alex},
80
- journal={arXiv preprint arXiv:1211.3711},
81
- year={2012}
82
- }
83
-
84
-
85
- @article{graves2013,
86
- title={Generating sequences with recurrent neural networks},
87
- author={Graves, Alex},
88
- journal={arXiv preprint arXiv:1308.0850},
89
- year={2013}
90
- }
91
-
92
- @article{sergeev2018horovod,
93
- title={Horovod: fast and easy distributed deep learning in TensorFlow},
94
- author={Sergeev, Alexander and Del Balso, Mike},
95
- journal={arXiv preprint arXiv:1802.05799},
96
- year={2018}
97
- }
98
-
99
- @misc{NVVolta,
100
- title = {NVIDIA TESLA V100 GPU ARCHITECTURE},
101
- howpublished = {\url{http://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf}},
102
- note = {Accessed: 2018-10-09}
103
- }
104
-
105
- @article{NVTuring,
106
- title = {NVIDIA TURING GPU ARCHITECTURE},
107
- howpublished = {\url{https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf}},
108
- author = {NVIDIA},
109
- year = {2018},
110
- note = {Accessed: 2018-10-09}
111
- }
112
-
113
- @misc{Rygaard2015,
114
- title = {Using Synthesized Speech to Improve Speech Recognition for Low-Resource Languages},
115
- author = {Luise Valentin Rygaard},
116
- howpublished = {\url{https://parasol.tamu.edu/dreu2015/Rygaard/report.pdf}},
117
- year = {2015},
118
- }
119
-
120
- @misc{OpenSeq2Seq,
121
- title = {OpenSeq2Seq: extensible toolkit for distributed and mixed precision training of sequence-to-sequence models},
122
- author = {Kuchaiev, Oleksii and Ginsburg, Boris and Gitman, Igor and Lavrukhin,Vitaly and Case, Carl and Micikevicius, Paulius},
123
- howpublished = {\url{https://arxiv.org/abs/1805.10387}},
124
- year = {2018},
125
- }
126
-
127
- @misc{MPGuide,
128
- title = {Training with Mixed Precision},
129
- howpublished = {\url{http://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/}},
130
- note = {Accessed: 2018-04-06},
131
- }
132
-
133
- @misc{Mozilla,
134
- title = {Mozilla: A Journey to less than 10\% Word Error Rate},
135
- howpublished = {\url{https://hacks.mozilla.org/2017/11/a-journey-to-10-word-error-rate/}},
136
- note = {Accessed: 2018-04-06},
137
- }
138
-
139
- @article{Waibel1989,
140
- title={A time-delay neural network architecture for isolated word recognition},
141
- author={Waibel, Alexander and Hanazawa, Toshiyuki and Hinton, Geoffrey and Shikano, Kiyohiro and Lang, Kevin},
142
- journal={IEEE Trans. on Acoustics, Speech and Signal Processing},
143
- year={1989}
144
- }
145
-
146
- @article{Lang1990,
147
- title={A time-delay neural network architecture for isolated word recognition},
148
- author={Lang, Kevin and Waibel, Alexander and Hinton, Geoffrey},
149
- journal={Neural Networks},
150
- year={1990}
151
- }
152
-
153
- @book{Bengio1996,
154
- Author = {Bengio, Y.},
155
- Publisher = {International Thomson Computer Press},
156
- Title = {Neural Networks for Speech and Sequence Recognition},
157
- Year = {1996}
158
- }
159
-
160
- @article{Bengio1992,
161
- title={Global optimization of a neural network-hidden Markov model hybrid},
162
- author={Bengio, Y. and De Mori, R. and Flammia, G. and Kompe, R.},
163
- journal={IEEE Transactions on Neural Networks, 3(2), 252–259},
164
- year={1992}
165
- }
166
-
167
- @article{Bourlard1994,
168
- title={Connectionist speech recognition: a hybrid approach},
169
- author={Bourlard, H. A. and Morgan, N.},
170
- journal={volume 247 Springer },
171
- year={1994}
172
- }
173
-
174
- @article{srivastava14a,
175
- author = {Nitish Srivastava and Geoffrey Hinton and Alex Krizhevsky and Ilya Sutskever and Ruslan Salakhutdinov},
176
- title = {Dropout: A Simple Way to Prevent Neural Networks from Overfitting},
177
- journal = {Journal of Machine Learning Research},
178
- year = {2014},
179
- volume = {15},
180
- pages = {1929-1958},
181
- url = {http://jmlr.org/papers/v15/srivastava14a.html}
182
- }
183
-
184
-
185
- @article{Hinton2012,
186
- title={Deep Neural Networks for Acoustic Modeling in Speech Recognition},
187
- author={Hinton, Geoffrey and Deng, Li and Yu, Dong and Dahl, George and Mohamed, Abdel-rahman and Jaitly, Navdeep and Senior, Andrew and Vanhoucke, Vincent and Nguyen, Patrick and Kingsbury, Brian and Sainath, Tara},
188
- journal={IEEE Signal Processing Magazine},
189
- year={2012}
190
- }
191
-
192
- @article{Graves2014,
193
- title={Towards End-to-End Speech Recognition with Recurrent Neural Networks},
194
- author={Graves, Alex and Jaitly, Navdeep},
195
- journal={International Conference on Machine Learning},
196
- year={2014}
197
- }
198
-
199
- @article{Chorowski2014,
200
- title={End-to-end Continuous Speech Recognition using Attention-based Recurrent NN: First Results},
201
- author={Chorowski, Jan and Bahdanau, Dzmitry and Cho, Kyunghyun and Bengio, Yoshua},
202
- journal={Neural Information Processing Systems: Workshop Deep Learning and Representation Learning Workshop },
203
- year={2014}
204
- }
205
-
206
- @article{Sak2014,
207
- title={Long short-term memory recurrent neural network architectures for large scale acoustic modeling},
208
- author={Sak, Hasim and Senior, Andrew and Beaufays, Francoise },
209
- journal={Interspeech 2014},
210
- year={2014}
211
- }
212
-
213
- @article{Ko2015,
214
- title={Audio Augmentation for Speech Recognition},
215
- author={Ko, Tom and Peddinti, Vijayaditya and Povey, Daniel
216
- and Khudanpur, Sanjeev},
217
- journal={Interspeech 2015},
218
- year={2015}
219
- }
220
-
221
- @article{Tjandra2017,
222
- title={Listening while Speaking: Speech Chain by Deep Learning},
223
- author={Andros, Tjandra and Sakriani, Sakti and Satoshi, Nakamura },
224
- journal={ASRU 2017},
225
- year={2017}
226
- }
227
-
228
- @article{Tjandra2018,
229
- title={Machine Speech Chain with One-shot Speaker Adaptation},
230
- author={Andros, Tjandra and Sakriani, Sakti and Satoshi, Nakamura },
231
- journal={Interspeech 2018},
232
- year={2018}
233
- }
234
-
235
- @article{bahdanau2014neural,
236
- title={Neural machine translation by jointly learning to align and translate},
237
- author={Bahdanau, Dzmitry and Cho, Kyunghyun and Bengio, Yoshua},
238
- journal={arXiv preprint arXiv:1409.0473},
239
- year={2014}
240
- }
241
-
242
- @article{cho2014learning,
243
- title={Learning phrase representations using RNN encoder-decoder for statistical machine translation},
244
- author={Cho, Kyunghyun and Van Merri{\"e}nboer, Bart and Gulcehre, Caglar and Bahdanau, Dzmitry and Bougares, Fethi and Schwenk, Holger and Bengio, Yoshua},
245
- journal={arXiv preprint arXiv:1406.1078},
246
- year={2014}
247
- }
248
-
249
- @article{rush2015neural,
250
- title={A neural attention model for abstractive sentence summarization},
251
- author={Rush, Alexander M and Chopra, Sumit and Weston, Jason},
252
- journal={arXiv preprint arXiv:1509.00685},
253
- year={2015}
254
- }
255
-
256
- @article{micikevicius2017mixed,
257
- title={Mixed precision training},
258
- author={Micikevicius, Paulius and Narang, Sharan and Alben, Jonah and Diamos, Gregory and Elsen, Erich and Garcia, David and Ginsburg, Boris and Houston, Michael and Kuchaev, Oleksii and Venkatesh, Ganesh and others},
259
- journal={arXiv preprint arXiv:1710.03740},
260
- year={2017}
261
- }
262
-
263
- @ARTICLE{Britz:2017,
264
- author = {{Britz}, Denny and {Goldie}, Anna and {Luong}, Thang and {Le}, Quoc},
265
- title = {Massive Exploration of Neural Machine Translation Architectures},
266
- journal = {ArXiv e-prints arXiv:1703.03906},
267
- archivePrefix = "arXiv",
268
- eprinttype = {arxiv},
269
- eprint = {1703.03906},
270
- primaryClass = "cs.CL",
271
- keywords = {Computer Science - Computation and Language},
272
- year = 2017,
273
- month = mar
274
- }
275
-
276
- @inproceedings{abadi2016tensorflow,
277
- title={TensorFlow: A System for Large-Scale Machine Learning.},
278
- author={Abadi, Mart{\'\i}n and Barham, Paul and Chen, Jianmin and Chen, Zhifeng and Davis, Andy and Dean, Jeffrey and Devin, Matthieu and Ghemawat, Sanjay and Irving, Geoffrey and Isard, Michael and others},
279
- booktitle={OSDI},
280
- volume={16},
281
- pages={265--283},
282
- year={2016}
283
- }
284
-
285
- @article{tensor2tensor,
286
- author = {Ashish Vaswani and Samy Bengio and Eugene Brevdo and Francois Chollet and Aidan N. Gomez and Stephan Gouws and Llion Jones and \L{}ukasz Kaiser and Nal Kalchbrenner and Niki Parmar and Ryan Sepassi and
287
- Noam Shazeer and Jakob Uszkoreit},
288
- title = {Tensor2Tensor for Neural Machine Translation},
289
- journal = {CoRR},
290
- volume = {abs/1803.07416},
291
- year = {2018},
292
- url = {http://arxiv.org/abs/1803.07416},
293
- }
294
-
295
- @article{gehring2017convs2s,
296
- author = {Gehring, Jonas and Auli, Michael and Grangier, David and Yarats, Denis and Dauphin, Yann N},
297
- title = "{Convolutional Sequence to Sequence Learning}",
298
- journal = {ArXiv e-prints arXiv:1705.03122},
299
- archivePrefix = "arXiv",
300
- eprinttype = {arxiv},
301
- eprint = {1705.03122},
302
- primaryClass = "cs.CL",
303
- keywords = {Computer Science - Computation and Language},
304
- year = 2017,
305
- month = May,
306
- }
307
-
308
- @inproceedings{chan2015,
309
- title={Listen, attend and spell},
310
- author={Chan, William and Jaitly, Navdeep and Le, Quoc V and Vinyals, Oriol},
311
- booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on},
312
- pages={5206--5210},
313
- year={2016},
314
- organization={IEEE}
315
- }
316
-
317
- @inproceedings{xu2015show,
318
- title={Show, attend and tell: Neural image caption generation with visual attention},
319
- author={Xu, Kelvin and Ba, Jimmy and Kiros, Ryan and Cho, Kyunghyun and Courville, Aaron and Salakhudinov, Ruslan and Zemel, Rich and Bengio, Yoshua},
320
- booktitle={International Conference on Machine Learning},
321
- pages={2048--2057},
322
- year={2015}
323
- }
324
-
325
- @incollection{Sutskever2014,
326
- title = {Sequence to Sequence Learning with Neural Networks},
327
- author = {Sutskever, Ilya and Vinyals, Oriol and Le, Quoc V},
328
- booktitle = {Advances in Neural Information Processing Systems 27},
329
- editor = {Z. Ghahramani and M. Welling and C. Cortes and N. D. Lawrence and K. Q. Weinberger},
330
- pages = {3104--3112},
331
- year = {2014},
332
- publisher = {Curran Associates, Inc.},
333
- url = {http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf}
334
- }
335
-
336
- @article{DeepSpeech2014,
337
- title = {Deep Speech: Scaling up end-to-end speech recognition},
338
- author = {Awni Y. Hannun and Carl Case and Jared Casper and Bryan Catanzaro and Greg Diamos and Erich Elsen and Ryan Prenger and Sanjeev Satheesh and Shubho Sengupta and Adam Coates and Andrew Y. Ng},
339
- journal = {CoRR},
340
- volume = {abs/1412.5567},
341
- year = {2014},
342
- url = {http://arxiv.org/abs/1412.5567},
343
- archivePrefix = {arXiv},
344
- eprint = {1412.5567},
345
- timestamp = {Mon, 13 Aug 2018 16:48:07 +0200},
346
- biburl = {https://dblp.org/rec/bib/journals/corr/HannunCCCDEPSSCN14},
347
- bibsource = {dblp computer science bibliography, https://dblp.org}
348
- }
349
-
350
- @inproceedings{DeepSpeech2,
351
- author = {Amodei, Dario and Ananthanarayanan, Sundaram and Anubhai, Rishita and Bai, Jingliang and Battenberg, Eric and Case, Carl and Casper, Jared and Catanzaro, Bryan and Cheng, Qiang and Chen, Guoliang and Chen, Jie and Chen, Jingdong and Chen, Zhijie and Chrzanowski, Mike and Coates, Adam and Diamos, Greg and Ding, Ke and Du, Niandong and Elsen, Erich and Engel, Jesse and Fang, Weiwei and Fan, Linxi and Fougner, Christopher and Gao, Liang and Gong, Caixia and Hannun, Awni and Han, Tony and Johannes, Lappi Vaino and Jiang, Bing and Ju, Cai and Jun, Billy and LeGresley, Patrick and Lin, Libby and Liu, Junjie and Liu, Yang and Li, Weigao and Li, Xiangang and Ma, Dongpeng and Narang, Sharan and Ng, Andrew and Ozair, Sherjil and Peng, Yiping and Prenger, Ryan and Qian, Sheng and Quan, Zongfeng and Raiman, Jonathan and Rao, Vinay and Satheesh, Sanjeev and Seetapun, David and Sengupta, Shubho and Srinet, Kavya and Sriram, Anuroop and Tang, Haiyuan and Tang, Liliang and Wang, Chong and Wang, Jidong and Wang, Kaifu and Wang, Yi and Wang, Zhijian and Wang, Zhiqian and Wu, Shuang and Wei, Likai and Xiao, Bo and Xie, Wen and Xie, Yan and Yogatama, Dani and Yuan, Bin and Zhan, Jun and Zhu, Zhenyao},
352
- title = {Deep Speech 2: End-to-end Speech Recognition in English and Mandarin},
353
- booktitle = {Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48},
354
- series = {ICML'16},
355
- year = {2016},
356
- location = {New York, NY, USA},
357
- pages = {173--182},
358
- numpages = {10},
359
- url = {http://dl.acm.org/citation.cfm?id=3045390.3045410},
360
- acmid = {3045410},
361
- publisher = {JMLR.org},
362
- }
363
-
364
- @inproceedings{prabhavalkar2017comparison,
365
- title={A comparison of sequence-to-sequence models for speech recognition},
366
- author={Prabhavalkar, Rohit and Rao, Kanishka and Sainath, Tara N and Li, Bo and Johnson, Leif and Jaitly, Navdeep},
367
- booktitle={Proc. Interspeech},
368
- pages={939--943},
369
- year={2017}
370
- }
371
-
372
- @article{chiu2017state,
373
- title={State-of-the-art speech recognition with sequence-to-sequence models},
374
- author={Chiu, Chung-Cheng and Sainath, Tara N and Wu, Yonghui and Prabhavalkar, Rohit and Nguyen, Patrick and Chen, Zhifeng and Kannan, Anjuli and Weiss, Ron J and Rao, Kanishka and Gonina, Katya and others},
375
- journal={arXiv preprint arXiv:1712.01769},
376
- year={2017}
377
- }
378
-
379
- @misc{NVMixed,
380
- title = {{NVIDA's Mixed-Precision Training - TensorFlow example}},
381
- howpublished = {\url{https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/#example_tensorflow}},
382
- author={NVIDIA},
383
- note = {Accessed: 2018-10-09},
384
- year={2018}
385
- }
386
-
387
- @article{gehring2017,
388
- title={Convolutional sequence to sequence learning},
389
- author={Gehring, Jonas and Auli, Michael and Grangier, David and Yarats, Denis and Dauphin, Yann N},
390
- journal={arXiv preprint arXiv:1705.03122},
391
- year={2017}
392
- }
393
-
394
- @article{collobert2016,
395
- title={Wav2letter: an end-to-end convnet-based speech recognition system},
396
- author={Collobert, Ronan and Puhrsch, Christian and Synnaeve, Gabriel},
397
- journal={arXiv preprint arXiv:1609.03193},
398
- year={2016}
399
- }
400
-
401
- @inproceedings{Zhang2016,
402
- author={Ying Zhang and Mohammad Pezeshki and Philémon Brakel and Saizheng Zhang and César Laurent and Yoshua Bengio and Aaron Courville},
403
- title={Towards End-to-End Speech Recognition with Deep Convolutional Neural Networks},
404
- year=2016,
405
- booktitle={Interspeech 2016},
406
- doi={10.21437/Interspeech.2016-1446},
407
- url={http://dx.doi.org/10.21437/Interspeech.2016-1446},
408
- pages={410--414}
409
- }
410
-
411
- @inproceedings{Zhang2017,
412
- title={Very deep convolutional networks for end-to-end speech recognition},
413
- author={Zhang, Yu and Chan, William and Jaitly, Navdeep},
414
- booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on},
415
- year={2017},
416
- organization={IEEE}
417
- }
418
-
419
-
420
- @article{Wang2017,
421
- title={Tacotron: Towards End-to-End Speech Synthesis},
422
- author={Wang, Yuxuan and Skerry-Ryan, RJ and Stanton, Daisy and Wu, Yonghui and Weiss, Ron and Jaitly, Navdeep and Yang, Zongheng and Xiao, Ying and Chen, Zhifeng and Bengio, Samy and Le, Quoc and Agiomyrgiannakis, Yannis and Clark, Rob and Saurous, Rif A.},
423
- journal={arXiv preprint arXiv:1703.10135},
424
- year={2017}
425
- }
426
-
427
- @article{griffin1984signal,
428
- title={Signal estimation from modified short-time Fourier transform},
429
- author={Griffin, Daniel and Lim, Jae},
430
- journal={IEEE Transactions on Acoustics, Speech, and Signal Processing},
431
- volume={32},
432
- number={2},
433
- pages={236--243},
434
- year={1984},
435
- publisher={IEEE}
436
- }
437
-
438
- @misc{ito2017lj,
439
- title={The LJ speech dataset},
440
- author={Ito, Keith and others},
441
- year={2017}
442
- }
443
-
444
- @misc{mailabs,
445
- title = {{The M-AILABS Speech Dataset}},
446
- howpublished = {\url{http://www.m-ailabs.bayern/en/the-mailabs-speech-dataset/}},
447
- author={M-AILABS},
448
- note = {Accessed: 2018-10-09},
449
- year={2018}
450
- }
451
-
452
- @article{merity2016pointer,
453
- title={Pointer sentinel mixture models},
454
- author={Merity, Stephen and Xiong, Caiming and Bradbury, James and Socher, Richard},
455
- journal={arXiv preprint arXiv:1609.07843},
456
- year={2016}
457
- }
458
-
459
- @inproceedings{socher2013recursive,
460
- title={Recursive deep models for semantic compositionality over a sentiment treebank},
461
- author={Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D and Ng, Andrew and Potts, Christopher},
462
- booktitle={Proceedings of the 2013 conference on empirical methods in natural language processing},
463
- pages={1631--1642},
464
- year={2013}
465
- }
466
-
467
- @InProceedings{maas-EtAl:2011:ACL-HLT2011,
468
- author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
469
- title = {Learning Word Vectors for Sentiment Analysis},
470
- booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
471
- month = {June},
472
- year = {2011},
473
- address = {Portland, Oregon, USA},
474
- publisher = {Association for Computational Linguistics},
475
- pages = {142--150},
476
- url = {http://www.aclweb.org/anthology/P11-1015}
477
- }
478
-
479
- @inproceedings{Povey2018SemiOrthogonalLM,
480
- title={Semi-Orthogonal Low-Rank Matrix Factorization for Deep Neural Networks},
481
- author={Daniel Povey and Gaofeng Cheng and Yiming Wang and Ke Li and Hainan Xu and Mahsa Yarmohammadi and Sanjeev Khudanpur},
482
- booktitle={Interspeech},
483
- year={2018}
484
- }
485
-
486
- @article{CAPIO2017,
487
- author = {Kyu J. Han and Akshay Chandrashekaran and Jungsuk Kim and Ian R. Lane},
488
- title = {The {CAPIO} 2017 Conversational Speech Recognition System},
489
- journal = {CoRR},
490
- volume = {abs/1801.00059},
491
- year = {2018},
492
- url = {http://arxiv.org/abs/1801.00059},
493
- archivePrefix = {arXiv},
494
- eprint = {1801.00059},
495
- timestamp = {Mon, 13 Aug 2018 16:49:10 +0200},
496
- biburl = {https://dblp.org/rec/bib/journals/corr/abs-1801-00059},
497
- bibsource = {dblp computer science bibliography, https://dblp.org}
498
- }
499
-
500
- @article{WaveNet,
501
- author = {A{\"{a}}ron van den Oord and Sander Dieleman and Heiga Zen and Karen Simonyan and Oriol Vinyals and Alex Graves and Nal Kalchbrenner and Andrew W. Senior and Koray Kavukcuoglu},
502
- title = {WaveNet: {A} Generative Model for Raw Audio},
503
- journal = {CoRR},
504
- volume = {abs/1609.03499},
505
- year = {2016},
506
- url = {http://arxiv.org/abs/1609.03499},
507
- archivePrefix = {arXiv},
508
- eprint = {1609.03499},
509
- timestamp = {Mon, 13 Aug 2018 16:49:15 +0200},
510
- biburl = {https://dblp.org/rec/bib/journals/corr/OordDZSVGKSK16},
511
- bibsource = {dblp computer science bibliography, https://dblp.org}
512
- }
513
-
514
- @article{FacebookGERENGBackTranslation,
515
- author = {Rico Sennrich and Barry Haddow and Alexandra Birch},
516
- title = {Improving Neural Machine Translation Models with Monolingual Data},
517
- journal = {CoRR},
518
- volume = {abs/1511.06709},
519
- year = {2015},
520
- url = {http://arxiv.org/abs/1511.06709},
521
- archivePrefix = {arXiv},
522
- eprint = {1511.06709},
523
- timestamp = {Mon, 13 Aug 2018 16:47:05 +0200},
524
- biburl = {https://dblp.org/rec/bib/journals/corr/SennrichHB15a},
525
- bibsource = {dblp computer science bibliography, https://dblp.org}
526
- }
527
-
528
- @article{GlobalStyleTokens,
529
- author = {Yuxuan Wang and Daisy Stanton and Yu Zhang and R. J. Skerry{-}Ryan and Eric Battenberg and Joel Shor and Ying Xiao and Fei Ren and Ye Jia and Rif A. Saurous},
530
- title = {Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis},
531
- journal = {CoRR},
532
- volume = {abs/1803.09017},
533
- year = {2018},
534
- url = {http://arxiv.org/abs/1803.09017},
535
- archivePrefix = {arXiv},
536
- eprint = {1803.09017},
537
- timestamp = {Mon, 13 Aug 2018 16:46:53 +0200},
538
- biburl = {https://dblp.org/rec/bib/journals/corr/abs-1803-09017},
539
- bibsource = {dblp computer science bibliography, https://dblp.org}
540
- }
541
-
542
- @article{IoffeS15BatchNorm,
543
- author = {Sergey Ioffe and Christian Szegedy},
544
- title = {Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift},
545
- journal = {CoRR},
546
- volume = {abs/1502.03167},
547
- year = {2015},
548
- url = {http://arxiv.org/abs/1502.03167},
549
- archivePrefix = {arXiv},
550
- eprint = {1502.03167},
551
- timestamp = {Mon, 13 Aug 2018 16:47:06 +0200},
552
- biburl = {https://dblp.org/rec/bib/journals/corr/IoffeS15},
553
- bibsource = {dblp computer science bibliography, https://dblp.org}
554
- }
555
-
556
- @article{kingma,
557
- author = {Diederik P. Kingma and
558
- Jimmy Ba},
559
- title = {Adam: {A} Method for Stochastic Optimization},
560
- journal = {CoRR},
561
- volume = {abs/1412.6980},
562
- year = {2014},
563
- url = {http://arxiv.org/abs/1412.6980},
564
- archivePrefix = {arXiv},
565
- eprint = {1412.6980},
566
- timestamp = {Mon, 13 Aug 2018 01:00:00 +0200},
567
- biburl = {https://dblp.org/rec/bib/journals/corr/KingmaB14},
568
- bibsource = {dblp computer science bibliography, https://dblp.org}
569
- }
570
-
571
- @incollection{Salimans2016WeightNorm,
572
- title = {Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks},
573
- author = {Salimans, Tim and Kingma, Durk P},
574
- booktitle = {Advances in Neural Information Processing Systems 29},
575
- editor = {D. D. Lee and M. Sugiyama and U. V. Luxburg and I. Guyon and R. Garnett},
576
- pages = {901--909},
577
- year = {2016},
578
- publisher = {Curran Associates, Inc.},
579
- url = {http://papers.nips.cc/paper/6114-weight-normalization-a-simple-reparameterization-to-accelerate-training-of-deep-neural-networks.pdf}
580
- }
581
-
582
- @article{wu2016google,
583
- title={Google's neural machine translation system: Bridging the gap between human and machine translation},
584
- author={Wu, Yonghui and Schuster, Mike and Chen, Zhifeng and Le, Quoc V and Norouzi, Mohammad and Macherey, Wolfgang and Krikun, Maxim and Cao, Yuan and Gao, Qin and Macherey, Klaus and others},
585
- journal={arXiv preprint arXiv:1609.08144},
586
- year={2016}
587
- }
588
-
589
- @inproceedings{opennmt,
590
- author = {Guillaume Klein and Yoon Kim and Yuntian Deng and Jean Senellart and Alexander M. Rush},
591
- title = {OpenNMT: Open-Source Toolkit for Neural Machine Translation},
592
- booktitle = {Proc. ACL},
593
- year = {2017},
594
- url = {https://doi.org/10.18653/v1/P17-4012},
595
- doi = {10.18653/v1/P17-4012}
596
- }
597
-
598
- @article{paszke2017automatic,
599
- title={Automatic differentiation in PyTorch},
600
- author={Paszke, Adam and Gross, Sam and Chintala, Soumith and Chanan, Gregory and Yang, Edward and DeVito, Zachary and Lin, Zeming and Desmaison, Alban and Antiga, Luca and Lerer, Adam},
601
- year={2017}
602
- }
603
-
604
- @article{yu2014introduction,
605
- title={An introduction to computational networks and the computational network toolkit},
606
- author={Yu, Dong and Eversole, Adam and Seltzer, Mike and Yao, Kaisheng and Huang, Zhiheng and Guenter, Brian and Kuchaiev, Oleksii and Zhang, Yu and Seide, Frank and Wang, Huaming and others},
607
- journal={Microsoft Technical Report MSR-TR-2014-112},
608
- year={2014}
609
- }
610
-
611
- @article{nvidia2017v100,
612
- title={V100 GPU architecture. The world’s most advanced data center GPU. Version WP-08608-001\_v1.1},
613
- author={NVIDIA, Tesla},
614
- journal={NVIDIA. Aug},
615
- pages={108},
616
- year={2017}
617
- }
618
-
619
- @article{Ba2016LayerNorm,
620
- author = {Jimmy Lei Ba and Jamie Ryan Kiros and Geoffrey E Hinton},
621
- title = {Layer normalization},
622
- journal = {CoRR},
623
- volume = {abs/1607.06450},
624
- year = {2016},
625
- url = {http://arxiv.org/abs/1607.06450},
626
- archivePrefix = {arXiv},
627
- }
628
-
629
- @inproceedings{Dauphin2017GLU,
630
- author = {Dauphin, Yann N. and Fan, Angela and Auli, Michael and Grangier, David},
631
- title = {Language Modeling with Gated Convolutional Networks},
632
- booktitle = {Proceedings of the 34th International Conference on Machine Learning - Volume 70},
633
- series = {ICML'17},
634
- year = {2017},
635
- location = {Sydney, NSW, Australia},
636
- pages = {933--941},
637
- numpages = {9},
638
- url = {http://dl.acm.org/citation.cfm?id=3305381.3305478},
639
- acmid = {3305478},
640
- publisher = {JMLR.org},
641
- }
642
-
643
- @incollection{Oord2016PixelCNN,
644
- title = {Conditional Image Generation with PixelCNN Decoders},
645
- author = {van den Oord, Aaron and Kalchbrenner, Nal and Espeholt, Lasse and kavukcuoglu, koray and Vinyals, Oriol and Graves, Alex},
646
- booktitle = {Advances in Neural Information Processing Systems 29},
647
- editor = {D. D. Lee and M. Sugiyama and U. V. Luxburg and I. Guyon and R. Garnett},
648
- pages = {4790--4798},
649
- year = {2016},
650
- publisher = {Curran Associates, Inc.},
651
- url = {http://papers.nips.cc/paper/6527-conditional-image-generation-with-pixelcnn-decoders.pdf}
652
- }
653
-
654
- @article{he2015,
655
- title={Deep residual learning for image recognition},
656
- author={K. He and X. Zhang and S. Ren and J. Sun},
657
- journal={arXiv preprint arXiv:1512.03385},
658
- year={2015}
659
- }
660
-
661
- @article{huang2016,
662
- title={Densely Connected Convolutional Networks},
663
- author={Gao Huang and Zhuang Liu and Laurens van der Maaten and Kilian Q. Weinberger},
664
- journal={arXiv preprint arXiv:1608.06993},
665
- year={2016}
666
- }
667
-
668
- @inproceedings{heafield2011kenlm,
669
- title={KenLM: Faster and smaller language model queries},
670
- author={Heafield, Kenneth},
671
- booktitle={Proceedings of the sixth workshop on statistical machine translation},
672
- pages={187--197},
673
- year={2011},
674
- organization={Association for Computational Linguistics}
675
- }
676
-
677
- @article{dai2018transformer,
678
- title={Transformer-XL: Language Modeling with Longer-Term Dependency},
679
- author={Dai, Zihang and Yang, Zhilin and Yang, Yiming and Cohen, William W and Carbonell, Jaime and Le, Quoc V and Salakhutdinov, Ruslan},
680
- year={2018},
681
- journal = {CoRR},
682
- volume = {abs/1901.02860},
683
- url = {http://arxiv.org/abs/1901.02860},
684
- archivePrefix = {arXiv},
685
- eprint = {1901.02860},
686
- timestamp = {Fri, 01 Feb 2019 13:39:59 +0100},
687
- biburl = {https://dblp.org/rec/bib/journals/corr/abs-1901-02860},
688
- bibsource = {dblp computer science bibliography, https://dblp.org}
689
- }
690
-
691
- @inproceedings{Saon+2016,
692
- author={George Saon and Tom Sercu and Steven Rennie and Hong-Kwang J. Kuo},
693
- title={The IBM 2016 English Conversational Telephone Speech Recognition System},
694
- year=2016,
695
- booktitle={Interspeech 2016},
696
- doi={10.21437/Interspeech.2016-1460},
697
- url={http://dx.doi.org/10.21437/Interspeech.2016-1460},
698
- pages={7--11}
699
- }
700
-
701
- @INPROCEEDINGS{Sercu-2016,
702
- author={T. {Sercu} and C. {Puhrsch} and B. {Kingsbury} and Y. {LeCun}},
703
- booktitle={2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
704
- title={Very deep multilingual convolutional neural networks for LVCSR},
705
- year={2016},
706
- volume={},
707
- number={},
708
- pages={4955-4959},
709
- keywords={natural language processing;neural nets;speech recognition;very deep multilingual convolutional neural networks;LVCSR;CNN;large vocabulary continuous speech recognition systems;word error rate;Training;Context;Hidden Markov models;Neural networks;Computer architecture;Kernel;Training data;Convolutional Networks;Multilingual;Acoustic Modeling;Speech Recognition;Neural Networks},
710
- doi={10.1109/ICASSP.2016.7472620},
711
- ISSN={2379-190X},
712
- month={March},}
713
-
714
-
715
- @inproceedings{Sercu+2016,
716
- author={Tom Sercu and Vaibhava Goel},
717
- title={Advances in Very Deep Convolutional Neural Networks for LVCSR},
718
- year=2016,
719
- booktitle={Interspeech 2016},
720
- doi={10.21437/Interspeech.2016-1033},
721
- url={http://dx.doi.org/10.21437/Interspeech.2016-1033},
722
- pages={3429--3433}
723
- }
724
-
725
- @INPROCEEDINGS{Xiong-2018,
726
- author={W. {Xiong} and L. {Wu} and F. {Alleva} and J. {Droppo} and X. {Huang} and A. {Stolcke}},
727
- booktitle={2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
728
- title={The Microsoft 2017 Conversational Speech Recognition System},
729
- year={2018},
730
- volume={},
731
- number={},
732
- pages={5934-5938},
733
- keywords={convolution;feedforward neural nets;natural language processing;speaker recognition;speech processing;language model rescoring step;senone level;switchboard domains;character-based LSTM language models;NIST 2000 switchboard test set;frame level;word-level voting;acoustic model posteriors;dialog session aware LSTM language models;CNN-BLSTM acoustic model;Microsoft 2017 conversational speech recognition system;Acoustics;Error analysis;Training;Speech recognition;Switches;Computational modeling;Context modeling;Conversational speech recognition;CNN;LACE;BLSTM;LSTM-LM;system combination;human parity},
734
- doi={10.1109/ICASSP.2018.8461870},
735
- ISSN={2379-190X},
736
- month={April},}
737
-
738
- @inproceedings{zeyer2018improved,
739
- author={Albert Zeyer and Kazuki Irie and Ralf Schlüter and Hermann Ney},
740
- title={Improved Training of End-to-end Attention Models for Speech Recognition},
741
- year=2018,
742
- booktitle={Proc. Interspeech 2018},
743
- pages={7--11},
744
- doi={10.21437/Interspeech.2018-1616},
745
- url={http://dx.doi.org/10.21437/Interspeech.2018-1616}
746
- }
747
-
748
- @article{Wav2LetterV2,
749
- author = {Vitaliy Liptchinsky and
750
- Gabriel Synnaeve and
751
- Ronan Collobert},
752
- title = {Letter-Based Speech Recognition with Gated ConvNets},
753
- journal = {CoRR},
754
- volume = {abs/1712.09444},
755
- year = {2017},
756
- url = {http://arxiv.org/abs/1712.09444},
757
- archivePrefix = {arXiv},
758
- eprint = {1712.09444},
759
- timestamp = {Mon, 13 Aug 2018 16:46:33 +0200},
760
- biburl = {https://dblp.org/rec/bib/journals/corr/abs-1712-09444},
761
- bibsource = {dblp computer science bibliography, https://dblp.org}
762
- }
763
-
764
- @article{zeghidour2018,
765
- author = {Neil Zeghidour and
766
- Qiantong Xu and
767
- Vitaliy Liptchinsky and
768
- Nicolas Usunier and
769
- Gabriel Synnaeve and
770
- Ronan Collobert},
771
- title = {Fully Convolutional Speech Recognition},
772
- journal = {CoRR},
773
- volume = {abs/1812.06864},
774
- year = {2018},
775
- url = {http://arxiv.org/abs/1812.06864},
776
- archivePrefix = {arXiv},
777
- eprint = {1812.06864},
778
- timestamp = {Tue, 01 Jan 2019 15:01:25 +0100},
779
- biburl = {https://dblp.org/rec/bib/journals/corr/abs-1812-06864},
780
- bibsource = {dblp computer science bibliography, https://dblp.org}
781
- }
782
-
783
- @inproceedings{Hadian2018,
784
- author={Hossein Hadian and Hossein Sameti and Daniel Povey and Sanjeev Khudanpur},
785
- title={End-to-end Speech Recognition Using Lattice-free MMI},
786
- year=2018,
787
- booktitle={Proc. Interspeech 2018},
788
- pages={12--16},
789
- doi={10.21437/Interspeech.2018-1423},
790
- url={http://dx.doi.org/10.21437/Interspeech.2018-1423}
791
- }
792
-
793
- @inproceedings{Tang2018,
794
- author={Jian Tang and Yan Song and Lirong Dai and Ian McLoughlin},
795
- title={Acoustic Modeling with Densely Connected Residual Network for Multichannel Speech Recognition},
796
- year=2018,
797
- booktitle={Proc. Interspeech 2018},
798
- pages={1783--1787},
799
- doi={10.21437/Interspeech.2018-1089},
800
- url={http://dx.doi.org/10.21437/Interspeech.2018-1089}
801
- }
802
-
803
- @article{Kurata2017LanguageMW,
804
- title={Language modeling with highway LSTM},
805
- author={Gakuto Kurata and Bhuvana Ramabhadran and George Saon and Abhinav Sethy},
806
- journal={2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
807
- year={2017},
808
- pages={244-251}
809
- }
810
-
811
- @inproceedings{Saon2017,
812
- author={George Saon and Gakuto Kurata and Tom Sercu and Kartik Audhkhasi and Samuel Thomas and Dimitrios Dimitriadis and Xiaodong Cui and Bhuvana Ramabhadran and Michael Picheny and Lynn-Li Lim and Bergul Roomi and Phil Hall},
813
- title={English Conversational Telephone Speech Recognition by Humans and Machines},
814
- year=2017,
815
- booktitle={Proc. Interspeech 2017},
816
- pages={132--136},
817
- doi={10.21437/Interspeech.2017-405},
818
- url={http://dx.doi.org/10.21437/Interspeech.2017-405}
819
- }
820
-
821
- @inproceedings{Povey+2016,
822
- author={Daniel Povey and Vijayaditya Peddinti and Daniel Galvez and Pegah Ghahremani and Vimal Manohar and Xingyu Na and Yiming Wang and Sanjeev Khudanpur},
823
- title={Purely Sequence-Trained Neural Networks for ASR Based on Lattice-Free MMI},
824
- year=2016,
825
- booktitle={Interspeech 2016},
826
- doi={10.21437/Interspeech.2016-595},
827
- url={http://dx.doi.org/10.21437/Interspeech.2016-595},
828
- pages={2751--2755}
829
- }
830
-
831
- @article{Yang2018,
832
- author = {Xuerui Yang and
833
- Jiwei Li and
834
- Xi Zhou},
835
- title = {A novel pyramidal-FSMN architecture with lattice-free {MMI} for speech
836
- recognition},
837
- journal = {CoRR},
838
- volume = {abs/1810.11352},
839
- year = {2018},
840
- url = {http://arxiv.org/abs/1810.11352},
841
- archivePrefix = {arXiv},
842
- eprint = {1810.11352},
843
- timestamp = {Wed, 31 Oct 2018 14:24:29 +0100},
844
- biburl = {https://dblp.org/rec/bib/journals/corr/abs-1810-11352},
845
- bibsource = {dblp computer science bibliography, https://dblp.org}
846
- }
847
-
848
- @article{liptchinsky2017based,
849
- title={Letter-Based Speech Recognition with Gated ConvNets},
850
- author={Liptchinsky, Vitaliy and Synnaeve, Gabriel and Collobert, Ronan},
851
- journal={arXiv preprint arXiv:1712.09444},
852
- year={2017}
853
- }
854
-
855
- @inproceedings{Weng2018,
856
- author={Chao Weng and Jia Cui and Guangsen Wang and Jun Wang and Chengzhu Yu and Dan Su and Dong Yu},
857
- title={Improving Attention Based Sequence-to-Sequence Models for End-to-End English Conversational Speech Recognition},
858
- year=2018,
859
- booktitle={Proc. Interspeech 2018},
860
- pages={761--765},
861
- doi={10.21437/Interspeech.2018-1030},
862
- url={http://dx.doi.org/10.21437/Interspeech.2018-1030}
863
- }
864
-
865
- @INPROCEEDINGS{Battenberg2017,
866
- author={E. {Battenberg} and J. {Chen} and R. {Child} and A. {Coates} and Y. G. Y. {Li} and H. {Liu} and S. {Satheesh} and A. {Sriram} and Z. {Zhu}},
867
- booktitle={2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
868
- title={Exploring neural transducers for end-to-end speech recognition},
869
- year={2017},
870
- volume={},
871
- number={},
872
- pages={206-213},
873
- keywords={recurrent neural nets;speech recognition;Hub500 benchmark;CTC models;speech recognition pipeline;RNN-Transducer models;language model;Seq2Seq models;end-to-end speech recognition;neural transducers;Decoding;Hidden Markov models;Transducers;Task analysis;Speech;Mathematical model;Neural networks},
874
- doi={10.1109/ASRU.2017.8268937},
875
- ISSN={},
876
- month={Dec},
877
- }
878
-
879
- @inproceedings{
880
- loshchilov2018,
881
- title={Decoupled Weight Decay Regularization},
882
- author={Ilya Loshchilov and Frank Hutter},
883
- booktitle={International Conference on Learning Representations},
884
- year={2019},
885
- url={https://openreview.net/forum?id=Bkg6RiCqY7},
886
- }
887
-
888
- @article{zhang2017ndadam,
889
- author = {Zijun Zhang and Lin Ma and Zongpeng Li and Chuan Wu},
890
- title = {Normalized Direction-preserving Adam},
891
- journal = {arXiv e-prints arXiv:1709.04546},
892
- year = {2017},
893
- }
894
-
895
- @article{park2019,
896
- author = {{Park}, Daniel S. and {Chan}, William and {Zhang}, Yu and
897
- {Chiu}, Chung-Cheng and {Zoph}, Barret and {Cubuk}, Ekin D. and
898
- {Le}, Quoc V.},
899
- title = "{SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition}",
900
- journal = {arXiv e-prints},
901
- year = "2019",
902
- eid = {arXiv:1904.08779},
903
- eprint = {1904.08779},
904
- }
905
-
906
- @article{novograd2019,
907
- author = {{Ginsburg}, Boris and {Castonguay}, Patrice and {Hrinchuk}, Oleksii and
908
- {Kuchaiev}, Oleksii and {Lavrukhin}, Vitaly and {Leary}, Ryan and
909
- {Li}, Jason and {Nguyen}, Huyen and {Cohen}, Jonathan M.},
910
- title = "{Stochastic Gradient Methods with Layer-wise Adaptive Moments for Training of Deep Networks}",
911
- journal = {arXiv e-prints},
912
- year = "2019",
913
- eid = {arXiv:1905.11286},
914
- eprint = {1905.11286},
915
- }
916
-
917
- @article{kriman2019quartznet,
918
- title={Quartznet: {Deep} automatic speech recognition with 1d time-channel separable convolutions},
919
- author={Kriman, Samuel and Beliaev, Stanislav and Ginsburg, Boris and Huang, Jocelyn and Kuchaiev, Oleksii and Lavrukhin, Vitaly and Leary, Ryan and Li, Jason and Zhang, Yang},
920
- journal={arXiv preprint arXiv:1910.10261},
921
- year={2019}
922
- }
923
-
924
- @misc{itu1988g711,
925
- title={{ITU-T} {G.711} - {Pulse} code modulation ({PCM}) of voice frequencies},
926
- author={ITU-T Geneva Switzerland},
927
- year={1988},
928
- }
929
-
930
- @article{han2020contextnet,
931
- title={ContextNet: Improving convolutional neural networks for automatic speech recognition with global context},
932
- author={Han, Wei and Zhang, Zhengdong and Zhang, Yu and Yu, Jiahui and Chiu, Chung-Cheng and Qin, James and Gulati, Anmol and Pang, Ruoming and Wu, Yonghui},
933
- journal={arXiv:2005.03191},
934
- year={2020}
935
- }
936
-
937
- @inproceedings{hu2018squeeze,
938
- title={Squeeze-and-excitation networks},
939
- author={Hu, Jie and Shen, Li and Sun, Gang},
940
- booktitle={CVPR},
941
- year={2018}
942
- }
943
-
944
- @article{koluguri2020speakernet,
945
- title={SpeakerNet: 1D Depth-wise Separable Convolutional Network for Text-Independent Speaker Recognition and Verification},
946
- author={Koluguri, Nithin Rao and Li, Jason and Lavrukhin, Vitaly and Ginsburg, Boris},
947
- journal={arXiv preprint arXiv:2010.12653},
948
- year={2020}
949
- }
950
-
951
- @article{gulati2020conformer,
952
- title={Conformer: Convolution-augmented transformer for speech recognition},
953
- author={Gulati, Anmol and Qin, James and Chiu, Chung-Cheng and Parmar, Niki and Zhang, Yu and Yu, Jiahui and Han, Wei and Wang, Shibo and Zhang, Zhengdong and Wu, Yonghui and others},
954
- journal={arXiv preprint arXiv:2005.08100},
955
- year={2020}
956
- }
957
-
958
- @article{koluguri2021titanet,
959
- title={TitaNet: Neural Model for speaker representation with 1D Depth-wise separable convolutions and global context},
960
- author={Koluguri, Nithin Rao and Park, Taejin and Ginsburg, Boris},
961
- journal={arXiv preprint arXiv:2110.04410},
962
- year={2021}
963
- }
964
-
965
- @article{Dawalatabad_2021,
966
- title={ECAPA-TDNN Embeddings for Speaker Diarization},
967
- url={http://dx.doi.org/10.21437/Interspeech.2021-941},
968
- DOI={10.21437/interspeech.2021-941},
969
- journal={Interspeech 2021},
970
- publisher={ISCA},
971
- author={Dawalatabad, Nauman and Ravanelli, Mirco and Grondin, François and Thienpondt, Jenthe and Desplanques, Brecht and Na, Hwidong},
972
- year={2021},
973
- month={Aug}
974
- }
975
-
976
- @article{park2022multi,
977
- title = {Multi-scale Speaker Diarization with Dynamic Scale Weighting},
978
- author = {Park, Tae Jin and Koluguri, Nithin Rao and Balam, Jagadeesh and Ginsburg, Boris},
979
- journal = {arXiv preprint arXiv:2203.15974},
980
- year = {2022}
981
- }
982
-
983
-
984
- @inproceedings{he2019streaming,
985
- title={Streaming end-to-end speech recognition for mobile devices},
986
- author={He, Yanzhang and Sainath, Tara N and Prabhavalkar, Rohit and McGraw, Ian and Alvarez, Raziel and Zhao, Ding and Rybach, David and Kannan, Anjuli and Wu, Yonghui and Pang, Ruoming and others},
987
- booktitle={ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
988
- pages={6381--6385},
989
- year={2019},
990
- organization={IEEE}
991
- }
992
-
993
- @misc{wav2vec2,
994
- doi = {10.48550/ARXIV.2006.11477},
995
- url = {https://arxiv.org/abs/2006.11477},
996
- author = {Baevski, Alexei and Zhou, Henry and Mohamed, Abdelrahman and Auli, Michael},
997
- title = {wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations},
998
- publisher = {arXiv},
999
- year = {2020},
1000
- copyright = {arXiv.org perpetual, non-exclusive license}
1001
- }
1002
-
1003
- @misc{w2v_bert,
1004
- doi = {10.48550/ARXIV.2108.06209},
1005
- url = {https://arxiv.org/abs/2108.06209},
1006
- author = {Chung, Yu-An and Zhang, Yu and Han, Wei and Chiu, Chung-Cheng and Qin, James and Pang, Ruoming and Wu, Yonghui},
1007
- title = {W2v-BERT: Combining Contrastive Learning and Masked Language Modeling for Self-Supervised Speech Pre-Training},
1008
- publisher = {arXiv},
1009
- year = {2021},
1010
- copyright = {arXiv.org perpetual, non-exclusive license}
1011
- }
1012
-
1013
- @misc{ssl_inter,
1014
- doi = {10.48550/ARXIV.2112.08778},
1015
- url = {https://arxiv.org/abs/2112.08778},
1016
- author = {Wang, Chengyi and Wu, Yu and Chen, Sanyuan and Liu, Shujie and Li, Jinyu and Qian, Yao and Yang, Zhenglu},
1017
- title = {Self-Supervised Learning for speech recognition with Intermediate layer supervision},
1018
- publisher = {arXiv},
1019
- year = {2021},
1020
- copyright = {arXiv.org perpetual, non-exclusive license}
1021
- }
1022
-
1023
- @misc{kim2022squeezeformer,
1024
- doi = {10.48550/ARXIV.2206.00888},
1025
- url = {https://arxiv.org/abs/2206.00888},
1026
- author = {Kim, Sehoon and Gholami, Amir and Shaw, Albert and Lee, Nicholas and Mangalam, Karttikeya and Malik, Jitendra and Mahoney, Michael W. and Keutzer, Kurt},
1027
- keywords = {Audio and Speech Processing (eess.AS), Computation and Language (cs.CL), Sound (cs.SD), FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Computer and information sciences, FOS: Computer and information sciences},
1028
- title = {Squeezeformer: An Efficient Transformer for Automatic Speech Recognition},
1029
- publisher = {arXiv},
1030
- year = {2022},
1031
- copyright = {arXiv.org perpetual, non-exclusive license}
1032
- }
1033
-
1034
- @misc{park2022multi,
1035
- doi = {10.48550/ARXIV.2203.15974},
1036
- url = {https://arxiv.org/abs/2203.15974},
1037
- author = {Park, Tae Jin and Koluguri, Nithin Rao and Balam, Jagadeesh and Ginsburg, Boris},
1038
- keywords = {Audio and Speech Processing (eess.AS), Computation and Language (cs.CL), FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Computer and information sciences, FOS: Computer and information sciences},
1039
- title = {Multi-scale Speaker Diarization with Dynamic Scale Weighting},
1040
- publisher = {arXiv},
1041
- year = {2022},
1042
- copyright = {Creative Commons Attribution 4.0 International}
1043
- }
 
SoundScribe/SpeakerID/docs/source/asr/asr_language_modeling.rst DELETED
@@ -1,548 +0,0 @@
1
- #####################
2
- ASR Language Modeling
3
- #####################
4
-
5
- Language models have been shown to improve the accuracy of ASR models. NeMo supports the following two approaches to incorporating language models into ASR models:
6
-
7
- * :ref:`ngram_modeling`
8
- * :ref:`neural_rescoring`
9
-
10
- It is possible to use both approaches on the same ASR model.
11
-
12
-
13
- .. _ngram_modeling:
14
-
15
- ************************
16
- N-gram Language Modeling
17
- ************************
18
-
19
- In this approach, an N-gram LM is trained on text data, then it is used in fusion with beam search decoding to find the
20
- best candidates. The beam search decoders in NeMo support language models trained with KenLM library (
21
- `https://github.com/kpu/kenlm <https://github.com/kpu/kenlm>`__).
22
- The beam search decoders and KenLM library are not installed by default in NeMo, and you need to install them to be
23
- able to use beam search decoding and N-gram LM.
24
- Please refer to `scripts/asr_language_modeling/ngram_lm/install_beamsearch_decoders.sh <https://github.com/NVIDIA/NeMo/blob/stable/scripts/asr_language_modeling/ngram_lm/install_beamsearch_decoders.sh>`__
25
- on how to install them. Alternatively, you can build Docker image
26
- `scripts/installers/Dockerfile.ngramtools <https://github.com/NVIDIA/NeMo/blob/stable/scripts/installers/Dockerfile.ngramtools>`__ with all the necessary dependencies.
27
-
28
- NeMo supports both character-based and BPE-based models for N-gram LMs. An N-gram LM can be used with beam search
29
- decoders on top of the ASR models to produce more accurate candidates. The beam search decoder would incorporate
30
- the scores produced by the N-gram LM into its score calculations as follows:
31
-
32
- .. code-block::
33
-
34
- final_score = acoustic_score + beam_alpha*lm_score + beam_beta*seq_length
35
-
36
- where acoustic_score is the score predicted by the acoustic encoder and lm_score is the one estimated by the LM.
37
- The parameter 'beam_alpha' specifies the amount of importance to place on the N-gram language model, and 'beam_beta' is a
38
- penalty term to consider the sequence length in the scores. Larger alpha means more importance on the LM and less
39
- importance on the acoustic model. Negative values of beta penalize longer sequences and make the decoder
40
- prefer shorter predictions, while positive values encourage longer candidates.
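-
- For illustration only, here is a minimal Python sketch of the fused score above for a single candidate
- (the variable names simply mirror the formula; this is not the decoder's internal implementation):
-
- .. code-block:: python
-
-     def fused_score(acoustic_score, lm_score, seq_length, beam_alpha, beam_beta):
-         """Combine the acoustic and N-gram LM scores as in the formula above."""
-         return acoustic_score + beam_alpha * lm_score + beam_beta * seq_length
-
-     # A larger beam_alpha trusts the LM more; a negative beam_beta penalizes longer candidates.
-     print(fused_score(acoustic_score=-12.3, lm_score=-4.1, seq_length=8,
-                       beam_alpha=1.0, beam_beta=-0.5))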
41
-
42
- .. _train-ngram-lm:
43
-
44
- Train N-gram LM
45
- ===============
46
-
47
- The script to train an N-gram language model with KenLM can be found at
48
- `scripts/asr_language_modeling/ngram_lm/train_kenlm.py <https://github.com/NVIDIA/NeMo/blob/stable/scripts/asr_language_modeling/ngram_lm/train_kenlm.py>`__.
49
-
50
- This script trains an N-gram language model with the KenLM library, which can then be used with the beam search decoders
51
- on top of ASR models. The script supports both character-level and BPE-level encodings; the encoding type is
52
- detected automatically from the type of the model.
53
-
54
-
55
- You may train the N-gram model as the following:
56
-
57
- .. code-block::
58
-
59
- python train_kenlm.py nemo_model_file=<path to the .nemo file of the model> \
60
- train_paths=<list of paths to the training text or JSON manifest files> \
61
- kenlm_bin_path=<path to the bin folder of KenLM library> \
62
- kenlm_model_file=<path to store the binary KenLM model> \
63
- ngram_length=<order of N-gram model> \
64
- preserve_arpa=true
65
-
66
- The `train_paths` parameter allows for various input types, such as a list of text files, JSON manifests, or directories, to be used as the training data.
67
- If a file's extension is anything other than `.json`, the data is assumed to be plain text. In the plain text format, each line should contain one
68
- sample. A JSON manifest file must contain one JSON-formatted sample per line, like this:
69
-
70
- .. code-block::
71
-
72
- {"audio_filepath": "/data_path/file1.wav", "text": "The transcript of the audio file."}
73
-
74
- The script simply extracts the `text` field from each line to create the training text file. After the N-gram model is trained,
75
- it is stored at the path specified by `kenlm_model_file`.
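-
- As a rough sketch of what this preprocessing amounts to (the file names here are hypothetical, and the actual
- `train_kenlm.py` script additionally handles tokenization and other details):
-
- .. code-block:: python
-
-     import json
-
-     def collect_training_text(train_paths, output_file):
-         """Gather LM training text: plain-text files are copied line by line,
-         while `.json` manifests contribute only their `text` field."""
-         with open(output_file, "w", encoding="utf-8") as out:
-             for path in train_paths:
-                 with open(path, "r", encoding="utf-8") as f:
-                     for line in f:
-                         line = line.strip()
-                         if not line:
-                             continue
-                         if path.endswith(".json"):
-                             out.write(json.loads(line)["text"] + "\n")
-                         else:
-                             out.write(line + "\n")
-
-     collect_training_text(["manifest.json", "extra_text.txt"], "lm_train.txt")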
76
-
77
- The following is the list of the arguments for the training script:
78
-
79
- +------------------+----------+-------------+-------------------------------------------------------------------------------------------------+
80
- | **Argument** | **Type** | **Default** | **Description** |
81
- +------------------+----------+-------------+-------------------------------------------------------------------------------------------------+
82
- | nemo_model_file | str | Required | The path to `.nemo` file of the ASR model, or name of a pretrained NeMo model to extract a tokenizer. |
83
- +------------------+----------+-------------+-------------------------------------------------------------------------------------------------+
84
- | train_paths | List[str] | Required | List of training files or folders. Files can be a plain text file or ".json" manifest or ".json.gz". |
85
- +------------------+----------+-------------+-------------------------------------------------------------------------------------------------+
86
- | kenlm_model_file | str | Required | The path to store the KenLM binary model file. |
87
- +------------------+----------+-------------+-------------------------------------------------------------------------------------------------+
88
- | kenlm_bin_path | str | Required | The path to the bin folder of KenLM. It is a folder named `bin` under where KenLM is installed. |
89
- +------------------+----------+-------------+-------------------------------------------------------------------------------------------------+
90
- | ngram_length** | int | Required | Specifies order of N-gram LM. |
91
- +------------------+----------+-------------+-------------------------------------------------------------------------------------------------+
92
- | ngram_prune | List[int] | [0] | List of thresholds to prune N-grams. Example: [0,0,1]. See Pruning section on the https://kheafield.com/code/kenlm/estimation |
93
- +------------------+----------+-------------+-------------------------------------------------------------------------------------------------+
94
- | cache_path | str | "" | Cache path to save tokenized files. |
95
- +------------------+----------+-------------+-------------------------------------------------------------------------------------------------+
96
- | preserve_arpa | bool | ``False`` | Whether to preserve the intermediate ARPA file after construction of the BIN file. |
97
- +------------------+----------+-------------+-------------------------------------------------------------------------------------------------+
98
- | verbose | int | 1 | Verbose level. |
99
- +------------------+----------+-------------+-------------------------------------------------------------------------------------------------+
100
-
101
- ** Note: An order of 6 is recommended for the N-gram model with BPE-based models. Higher orders may require recompiling KenLM to support them.
102
-
103
- Evaluate by Beam Search Decoding and N-gram LM
104
- ==============================================
105
-
106
- NeMo's beam search decoders are capable of using the KenLM's N-gram models to find the best candidates.
107
- The script to evaluate an ASR model with beam search decoding and N-gram models can be found at
108
- `scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py <https://github.com/NVIDIA/NeMo/blob/stable/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py>`__.
109
-
110
- This script has a large number of possible argument overrides; therefore, it is advised to run ``python eval_beamsearch_ngram.py --help`` to see the full list of arguments.
111
-
112
- You may evaluate an ASR model as the following:
113
-
114
- .. code-block::
115
-
116
- python eval_beamsearch_ngram.py nemo_model_file=<path to the .nemo file of the model> \
117
- input_manifest=<path to the evaluation JSON manifest file> \
118
- kenlm_model_file=<path to the binary KenLM model> \
119
- beam_width=[<list of the beam widths, separated with commas>] \
120
- beam_alpha=[<list of the beam alphas, separated with commas>] \
121
- beam_beta=[<list of the beam betas, separated with commas>] \
122
- preds_output_folder=<optional folder to store the predictions> \
123
- probs_cache_file=null \
124
- decoding_mode=beamsearch_ngram \
125
- decoding_strategy="<Beam library such as beam, pyctcdecode or flashlight>"
126
-
127
- It can evaluate a model in the following three modes by setting the argument `--decoding_mode`:
128
-
129
- * greedy: Just greedy decoding is done, and no beam search decoding is performed.
130
- * beamsearch: Beam search decoding is done without the N-gram language model; the final results are equivalent to setting the LM weight (beam_alpha) to zero.
131
- * beamsearch_ngram: The beam search decoding is done with N-gram LM.
132
-
133
- The `beamsearch` mode evaluates by beam search decoding without any language model.
134
- It reports the performance in terms of Word Error Rate (WER) and Character Error Rate (CER). Moreover,
135
- the WER/CER obtained when the best candidate among the beams is selected is also reported as the best WER/CER.
136
- It can be an indicator of how good the predicted candidates are.
137
-
138
- The script first loads the ASR model and computes the outputs of the model's encoder as log probabilities.
139
- This part is computed in batches on a device selected by `--device`, which can be the CPU (`--device=cpu`) or a
140
- single GPU (`--device=cuda:0`). The batch size of this part can be specified by `--acoustic_batch_size`. You may use
141
- the largest batch size feasible to speed up the step of calculating the log probabilities. You may also use `--use_amp`
142
- to speed up the calculation of log probabilities and make it possible to use larger values for `--acoustic_batch_size`.
143
- Currently, multi-GPU is not supported for calculating the log probabilities, but using `--probs_cache_file` can help.
144
- It stores the log probabilities produced by the model's encoder in a pickle file so that the first step
145
- can be skipped the next time.
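-
- The caching idea can be pictured with a small, hypothetical sketch (the cache file argument and the
- `compute_fn` callable are placeholders; the real script manages this through `--probs_cache_file`):
-
- .. code-block:: python
-
-     import os
-     import pickle
-
-     def get_log_probs(cache_file, compute_fn):
-         """Load cached encoder log-probabilities if present; otherwise compute and
-         store them so the expensive first step can be skipped on the next run."""
-         if cache_file and os.path.exists(cache_file):
-             with open(cache_file, "rb") as f:
-                 return pickle.load(f)
-         log_probs = compute_fn()  # e.g. run the ASR encoder over the manifest in batches
-         if cache_file:
-             with open(cache_file, "wb") as f:
-                 pickle.dump(log_probs, f)
-         return log_probs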
146
-
147
- The following is the list of the important arguments for the evaluation script:
148
-
149
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
150
- | **Argument** | **Type** | **Default** | **Description** |
151
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
152
- | nemo_model_file | str | Required | The path of the `.nemo` file of the ASR model to extract the tokenizer. |
153
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
154
- | input_manifest | str | Required | Path to the training file, it can be a text file or JSON manifest. |
155
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
156
- | kenlm_model_file | str | Required | The path to store the KenLM binary model file. |
157
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
158
- | preds_output_folder | str | None | The path to an optional folder to store the predictions. |
159
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
160
- | probs_cache_file | str | None | The cache file for storing the outputs of the model. |
161
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
162
- | acoustic_batch_size | int | 16 | The batch size to calculate log probabilities. |
163
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
164
- | use_amp | bool | False | Whether to use AMP if available to calculate log probabilities. |
165
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
166
- | device | str | cuda | The device to load the model onto to calculate log probabilities. |
167
- | | | | It can `cpu`, `cuda`, `cuda:0`, `cuda:1`, ... |
168
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
169
- | decoding_mode | str | beamsearch_ngram | The decoding scheme to be used for evaluation. |
170
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
171
- | beam_width | float | Required | List of the width or list of the widths of the beam search decoding. |
172
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
173
- | beam_alpha | float | Required | List of the alpha parameter for the beam search decoding. |
174
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
175
- | beam_beta | float | Required | List of the beta parameter for the beam search decoding. |
176
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
177
- | beam_batch_size | int | 128 | The batch size to be used for beam search decoding. |
178
- | | | | Larger batch size can be a little faster, but uses larger memory. |
179
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
180
- | decoding_strategy | str | beam | String argument for type of decoding strategy for the model. |
181
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
182
- | decoding | Dict | BeamCTC | Subdict of beam search configs. Values found via |
183
- | | Config | InferConfig | python eval_beamsearch_ngram.py --help |
184
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
185
- | text_processing.do_lowercase | bool | ``False`` | Whether to make the training text all lower case. |
186
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
187
- | text_processing.punctuation_marks | str | "" | String with punctuation marks to process. Example: ".\,?" |
188
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
189
- | text_processing.rm_punctuation | bool | ``False``| Whether to remove punctuation marks from text. |
190
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
191
- | text_processing.separate_punctuation | bool |``True``| Whether to separate punctuation with the previous word by space. |
192
- +---------------------+----------+------------------+-------------------------------------------------------------------------+
193
-
194
- Width of the beam search (`--beam_width`) specifies the number of top candidates/predictions the beam search decoder
195
- would search for. Larger beams result in more accurate but slower predictions.
196
-
197
- .. note::
198
-
199
- The ``eval_beamsearch_ngram.py`` script contains the entire subconfig used for CTC Beam Decoding.
200
- Therefore it is possible to forward arguments for various beam search libraries such as ``flashlight``
201
- and ``pyctcdecode`` via the ``decoding`` subconfig.
202
-
203
- There is also a tutorial to learn more about evaluating the ASR models with N-gram LM here:
204
- `Offline ASR Inference with Beam Search and External Language Model Rescoring <https://colab.research.google.com/github/NVIDIA/NeMo/blob/stable/tutorials/asr/Offline_ASR.ipynb>`_
205
-
206
- Beam Search Engines
207
- -------------------
208
-
209
- NeMo ASR CTC supports multiple beam search engines for decoding. The default engine is ``beam`` which is the OpenSeq2Seq
210
- decoding library.
211
-
212
- OpenSeq2Seq (``beam``)
213
- ~~~~~~~~~~~~~~~~~~~~~~
214
-
215
- CPU-based beam search engine that is quite efficient and supports char and subword models. It requires a character/subword
216
- KenLM model to be provided.
217
-
218
- The config for this decoding library is described above.
219
-
220
- Flashlight (``flashlight``)
221
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~
222
-
223
- Flashlight is a C++ library for ASR decoding provided at `https://github.com/flashlight/flashlight <https://github.com/flashlight/flashlight>`_. It is a CPU and CUDA-based beam search engine that is quite efficient and supports
224
- char and subword models. It requires an ARPA KenLM file.
225
-
226
- It supports several advanced features such as lexicon based / lexicon free decoding, beam pruning threshold, and more.
227
-
228
- .. code-block:: python
229
-
230
- @dataclass
231
- class FlashlightConfig:
232
-     lexicon_path: Optional[str] = None
233
-     boost_path: Optional[str] = None
234
-     beam_size_token: int = 16
235
-     beam_threshold: float = 20.0
236
-     unk_weight: float = -math.inf
237
-     sil_weight: float = 0.0
238
-
239
- .. code-block::
240
-
241
- # Lexicon-based decoding
242
- python eval_beamsearch_ngram.py ... \
243
- decoding_strategy="flashlight" \
244
- decoding.beam.flashlight_cfg.lexicon_path='/path/to/lexicon.lexicon' \
245
- decoding.beam.flashlight_cfg.beam_size_token = 32 \
246
- decoding.beam.flashlight_cfg.beam_threshold = 25.0
247
-
248
- # Lexicon-free decoding
249
- python eval_beamsearch_ngram.py ... \
250
- decoding_strategy="flashlight" \
251
- decoding.beam.flashlight_cfg.beam_size_token = 32 \
252
- decoding.beam.flashlight_cfg.beam_threshold = 25.0
253
-
254
- PyCTCDecode (``pyctcdecode``)
255
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
256
-
257
- PyCTCDecode is a Python library for ASR decoding provided at `https://github.com/kensho-technologies/pyctcdecode <https://github.com/kensho-technologies/pyctcdecode>`_. It is a CPU-based beam search engine that is somewhat efficient for a pure python library, and supports char and subword models. It requires a character/subword KenLM ARPA / BINARY model to be provided.
258
-
259
- It has advanced features such as word boosting which can be useful for transcript customization.
260
-
261
- .. code-block:: python
262
-
263
- @dataclass
264
- class PyCTCDecodeConfig:
265
-     beam_prune_logp: float = -10.0
266
-     token_min_logp: float = -5.0
267
-     prune_history: bool = False
268
-     hotwords: Optional[List[str]] = None
269
-     hotword_weight: float = 10.0
270
-
271
- .. code-block::
272
-
273
- # PyCTCDecoding
274
- python eval_beamsearch_ngram.py ... \
275
- decoding_strategy="pyctcdecode" \
276
- decoding.beam.pyctcdecode_cfg.beam_prune_logp = -10. \
277
- decoding.beam.pyctcdecode_cfg.token_min_logp = -5. \
278
- decoding.beam.pyctcdecode_cfg.hotwords=[<List of str words>] \
279
- decoding.beam.pyctcdecode_cfg.hotword_weight=10.0
280
-
281
-
282
- Hyperparameter Grid Search
283
- --------------------------
284
-
285
- Beam search decoding with N-gram LM has three main hyperparameters: `beam_width`, `beam_alpha`, and `beam_beta`.
286
- The accuracy of the model depends on the values of these parameters, especially beam_alpha and beam_beta.
287
- You may specify a single value or a list of values for each of these parameters to perform a grid search. The script then performs
288
- beam search decoding on all combinations of these three hyperparameters.
289
- For instance, the following set of parameters would result in 2*1*2=4 beam search decodings (see the sketch after this block):
290
-
291
- .. code-block::
292
-
293
- python eval_beamsearch_ngram.py ... \
294
- beam_width=[64,128] \
295
- beam_alpha=[1.0] \
296
- beam_beta=[1.0,0.5]
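-
- Conceptually, the grid is just the Cartesian product of the three lists; a small sketch of how the
- 2*1*2=4 combinations above could be enumerated (the `print` call stands in for an actual decoding run):
-
- .. code-block:: python
-
-     from itertools import product
-
-     beam_width = [64, 128]
-     beam_alpha = [1.0]
-     beam_beta = [1.0, 0.5]
-
-     # Every (width, alpha, beta) triple corresponds to one beam search decoding run.
-     for width, alpha, beta in product(beam_width, beam_alpha, beam_beta):
-         print(f"decode with beam_width={width}, beam_alpha={alpha}, beam_beta={beta}")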
297
-
298
-
299
- Beam search ngram decoding for Transducer models (RNNT and HAT)
300
- ===============================================================
301
-
302
- A similar script to evaluate an RNNT/HAT model with beam search decoding and N-gram models can be found at
303
- `scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py <https://github.com/NVIDIA/NeMo/blob/stable/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py>`_
304
-
305
- .. code-block::
306
-
307
- python eval_beamsearch_ngram_transducer.py nemo_model_file=<path to the .nemo file of the model> \
308
- input_manifest=<path to the evaluation JSON manifest file> \
309
- kenlm_model_file=<path to the binary KenLM model> \
310
- beam_width=[<list of the beam widths, separated with commas>] \
311
- beam_alpha=[<list of the beam alphas, separated with commas>] \
312
- preds_output_folder=<optional folder to store the predictions> \
313
- probs_cache_file=null \
314
- decoding_strategy=<greedy_batch or maes decoding> \
315
- maes_prefix_alpha=[<list of the maes prefix alphas, separated with commas>] \
316
- maes_expansion_gamma=[<list of the maes expansion gammas, separated with commas>] \
317
- hat_subtract_ilm=<in case of HAT model: subtract internal LM or not (True/False)> \
318
- hat_ilm_weight=[<in case of HAT model: list of the HAT internal LM weights, separated with commas>] \
319
-
320
-
321
-
322
- .. _neural_rescoring:
323
-
324
- ****************
325
- Neural Rescoring
326
- ****************
327
-
328
- In this approach, a neural network is used to score candidates. A candidate is the text transcript predicted by the decoder of the ASR model.
329
- The top K candidates produced by beam search decoding (beam width of K) are given to a neural language model to rank them.
330
- The language model assigns a score to each candidate.
331
- This score is usually combined with the scores from the beam search decoding to produce the final scores and rankings.
332
-
333
- Train Neural Rescorer
334
- =====================
335
-
336
- An example script to train such a language model with Transformer can be found at `examples/nlp/language_modeling/transformer_lm.py <https://github.com/NVIDIA/NeMo/blob/stable/examples/nlp/language_modeling/transformer_lm.py>`__.
337
- It trains a ``TransformerLMModel`` which can be used as a neural rescorer for an ASR system. Full documentation on language models training is available at:
338
-
339
- :doc:`../nlp/language_modeling`
340
-
341
- You may also use a pretrained language model from the HuggingFace library, such as Transformer-XL or GPT, instead of training your own model.
342
- Models like BERT and RoBERTa are not supported by this script because they are trained as masked language models and are neither efficient nor effective at scoring sentences out of the box.
343
-
344
-
345
- Evaluation
346
- ==========
347
-
348
- Given a trained TransformerLMModel `.nemo` file or a pretrained HF model, the script available at
349
- `scripts/asr_language_modeling/neural_rescorer/eval_neural_rescorer.py <https://github.com/NVIDIA/NeMo/blob/stable/scripts/asr_language_modeling/neural_rescorer/eval_neural_rescorer.py>`__
350
- can be used to re-score beams obtained with an ASR model. You need the `.tsv` file containing the candidates produced
351
- by the acoustic model and the beam search decoding to use this script. The candidates can be the result of just the beam
352
- search decoding or the result of fusion with an N-gram LM. You may generate this file by specifying `--preds_output_folder` for
353
- `scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py <https://github.com/NVIDIA/NeMo/blob/stable/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py>`__.
354
-
355
- The neural rescorer rescores the beams/candidates using two parameters, `rescorer_alpha` and `rescorer_beta`, as follows:
356
-
357
- .. code-block::
358
-
359
- final_score = beam_search_score + rescorer_alpha*neural_rescorer_score + rescorer_beta*seq_length
360
-
361
- The parameter `rescorer_alpha` specifies the amount of importance to place on the neural rescorer model, and `rescorer_beta` is
362
- a penalty term that accounts for the sequence length in the scores. Their effects are similar to those of the
363
- `beam_alpha` and `beam_beta` parameters of the beam search decoder and N-gram LM.
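-
- A minimal sketch of this re-ranking step (the candidates and scores below are made up; in practice the
- script reads them from the `.tsv` beams file):
-
- .. code-block:: python
-
-     # Each candidate carries its beam search score and its neural LM score.
-     candidates = [
-         ("i want to recognize speech", -10.2, -35.1),
-         ("i want to wreck a nice beach", -9.8, -48.6),
-     ]
-
-     rescorer_alpha, rescorer_beta = 0.5, 0.1
-
-     def final_score(text, beam_score, lm_score):
-         return beam_score + rescorer_alpha * lm_score + rescorer_beta * len(text.split())
-
-     # Pick the candidate with the highest combined score.
-     best = max(candidates, key=lambda c: final_score(*c))
-     print(best[0])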
364
-
365
- You may follow these steps to evaluate a neural LM:
366
-
367
- #. Obtain `.tsv` file with beams and their corresponding scores. Scores can be from a regular beam search decoder or
368
- in fusion with an N-gram LM scores. For a given beam size `beam_size` and a number of examples
369
- for evaluation `num_eval_examples`, it should contain (`num_eval_examples` x `beam_size`) lines of
370
- form `beam_candidate_text \t score`. This file can be generated by `scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py <https://github.com/NVIDIA/NeMo/blob/stable/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py>`__
371
-
372
- #. Rescore the candidates by `scripts/asr_language_modeling/neural_rescorer/eval_neural_rescorer.py <https://github.com/NVIDIA/NeMo/blob/stable/scripts/asr_language_modeling/neural_rescorer/eval_neural_rescorer.py>`__.
373
-
374
- .. code-block::
375
-
376
- python eval_neural_rescorer.py
377
- --lm_model=[path to .nemo file of the LM or the name of a HF pretrained model]
378
- --beams_file=[path to beams .tsv file]
379
- --beam_size=[size of the beams]
380
- --eval_manifest=[path to eval manifest .json file]
381
- --batch_size=[batch size used for inference on the LM model]
382
- --alpha=[the value for the parameter rescorer_alpha]
383
- --beta=[the value for the parameter rescorer_beta]
384
- --scores_output_file=[the optional path to store the rescored candidates]
385
-
386
- The candidates along with their new scores would be stored at the file specified by `--scores_output_file`.
387
-
388
- The following is the list of the arguments for the evaluation script:
389
-
390
- +---------------------+--------+------------------+-------------------------------------------------------------------------+
391
- | **Argument** |**Type**| **Default** | **Description** |
392
- +---------------------+--------+------------------+-------------------------------------------------------------------------+
393
- | lm_model | str | Required | The path of the '.nemo' file of an ASR model, or the name of a |
394
- | | | | HuggingFace pretrained model like 'transfo-xl-wt103' or 'gpt2' |
395
- +---------------------+--------+------------------+-------------------------------------------------------------------------+
396
- | eval_manifest | str | Required | Path to the evaluation manifest file (.json manifest file) |
397
- +---------------------+--------+------------------+-------------------------------------------------------------------------+
398
- | beams_file | str | Required | path to beams file (.tsv) containing the candidates and their scores |
399
- +---------------------+--------+------------------+-------------------------------------------------------------------------+
400
- | beam_size | int | Required | The width of the beams (number of candidates) generated by the decoder |
401
- +---------------------+--------+------------------+-------------------------------------------------------------------------+
402
- | alpha | float | None | The value for parameter rescorer_alpha |
403
- | | | | Not passing value would enable linear search for rescorer_alpha |
404
- +---------------------+--------+------------------+-------------------------------------------------------------------------+
405
- | beta | float | None | The value for parameter rescorer_beta |
406
- | | | | Not passing value would enable linear search for rescorer_beta |
407
- +---------------------+--------+------------------+-------------------------------------------------------------------------+
408
- | batch_size | int | 16 | The batch size used to calculate the scores |
409
- +---------------------+--------+------------------+-------------------------------------------------------------------------+
410
- | max_seq_length | int | 512 | Maximum sequence length (in tokens) for the input |
411
- +---------------------+--------+------------------+-------------------------------------------------------------------------+
412
- | scores_output_file | str | None | The optional file to store the rescored beams |
413
- +---------------------+--------+------------------+-------------------------------------------------------------------------+
414
- | use_amp | bool | ``False`` | Whether to use AMP if available calculate the scores |
415
- +---------------------+--------+------------------+-------------------------------------------------------------------------+
416
- | device | str | cuda | The device to load LM model onto to calculate the scores |
417
- | | | | It can be 'cpu', 'cuda', 'cuda:0', 'cuda:1', ... |
418
- +---------------------+--------+------------------+-------------------------------------------------------------------------+
419
-
420
-
421
- Hyperparameter Linear Search
422
- ----------------------------
423
-
424
- This script also supports linear search for the parameters `alpha` and `beta`. If either of the two is not
425
- provided, a linear search is performed to find the best value for that parameter. When linear search is used, initially
426
- `beta` is set to zero and the best value for `alpha` is found, then `alpha` is fixed with
427
- that value and another linear search is done to find the best value for `beta`.
428
- If either of these two parameters is already specified, the search for that one is skipped. After each search for a
429
- parameter, the plot of WER% for different values of the parameter is also shown.
430
-
431
- It is recommended to first use the linear search for both parameters on a validation set by not providing any values for `--alpha` and `--beta`.
432
- Then check the WER curves and decide on the best values for each parameter. Finally, evaluate the best values on the test set.
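-
- The two-pass procedure itself is simple; a hedged sketch (here `evaluate_wer` is a placeholder for
- running the rescorer on the validation beams with the given `alpha` and `beta` and returning the WER):
-
- .. code-block:: python
-
-     def linear_search(evaluate_wer, alpha_grid, beta_grid):
-         """First fix beta=0 and pick the alpha with the lowest WER, then fix that
-         alpha and pick the best beta, as described above."""
-         best_alpha = min(alpha_grid, key=lambda a: evaluate_wer(a, 0.0))
-         best_beta = min(beta_grid, key=lambda b: evaluate_wer(best_alpha, b))
-         return best_alpha, best_beta
-
-     # Example grids; the ranges worth searching depend on the model and LM.
-     alpha_grid = [x / 10 for x in range(0, 21)]      # 0.0 .. 2.0
-     beta_grid = [x / 10 for x in range(-10, 11)]     # -1.0 .. 1.0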
433
-
434
-
435
- Word Boosting
436
- =============
437
-
438
- The Flashlight decoder supports word boosting during CTC decoding using a KenLM binary and corresponding lexicon. Word boosting only
439
- works in lexicon decoding mode; it does not work in lexicon-free mode. Word boosting allows one to bias the decoder toward certain words,
440
- such that you can manually increase or decrease the probability of emitting certain words. This can be very helpful if you have certain
441
- uncommon or industry-specific words which you want to ensure transcribe correctly.
442
-
443
- For more information on word boosting, see `here <https://docs.nvidia.com/deeplearning/riva/user-guide/docs/tutorials/asr-python-advanced-wordboosting.html>`__
444
- and `here <https://docs.nvidia.com/deeplearning/riva/user-guide/docs/asr/asr-customizing.html#word-boosting>`__
445
-
446
- In order to use word boosting in NeMo, you need to create a simple tab-separated text file that contains each word to be boosted, followed by
447
- a tab and then the boost score for that word.
448
-
449
- For example:
450
-
451
- .. code-block::
452
-
453
- nvidia 40
454
- geforce 50
455
- riva 80
456
- turing 30
457
- badword -100
458
-
459
- Positive scores boost words higher in the LM decoding step so they show up more frequently, whereas negative scores
460
- squelch words so they show up less frequently. The recommended range for the boost score is +/- 20 to 100.
461
-
462
- The boost file handles both in-vocabulary words and OOV words just fine, so you can specify both IV and OOV words with corresponding scores.
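-
- Such a boost file can be written with a few lines of Python (the word/score pairs are just the example
- above, and the output file name is arbitrary):
-
- .. code-block:: python
-
-     boosted_words = {"nvidia": 40, "geforce": 50, "riva": 80, "turing": 30, "badword": -100}
-
-     # One word per line, separated from its boost score by a tab.
-     with open("my_boost_file.boost", "w", encoding="utf-8") as f:
-         for word, score in boosted_words.items():
-             f.write(f"{word}\t{score}\n")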
463
-
464
- You can then pass this file to your flashlight config object during decoding:
465
-
466
- .. code-block::
467
-
468
- # Lexicon-based decoding
469
- python eval_beamsearch_ngram.py ... \
470
- decoding_strategy="flashlight" \
471
- decoding.beam.flashlight_cfg.lexicon_path='/path/to/lexicon.lexicon' \
472
- decoding.beam.flashlight_cfg.boost_path='/path/to/my_boost_file.boost' \
473
- decoding.beam.flashlight_cfg.beam_size_token = 32 \
474
- decoding.beam.flashlight_cfg.beam_threshold = 25.0
475
-
476
- Combine N-gram Language Models
477
- ==============================
478
-
479
- Before combining N-gram LMs, install the required OpenGrm NGram library using `scripts/installers/install_opengrm.sh <https://github.com/NVIDIA/NeMo/blob/stable/scripts/installers/install_opengrm.sh>`__.
480
- Alternatively, you can use Docker image `scripts/installers/Dockerfile.ngramtools <https://github.com/NVIDIA/NeMo/blob/stable/scripts/installers/Dockerfile.ngramtools>`__ with all the necessary dependencies.
481
-
482
- To combine two N-gram language models, you can use the script ngram_merge.py located at
483
- `scripts/asr_language_modeling/ngram_lm/ngram_merge.py <https://github.com/NVIDIA/NeMo/blob/stable/scripts/asr_language_modeling/ngram_lm/ngram_merge.py>`__.
484
-
485
- This script interpolates two ARPA N-gram language models and creates a KenLM binary file that can be used with the beam search decoders on top of ASR models.
486
- You can specify weights (`--alpha` and `--beta`) for each of the models (`--ngram_a` and `--ngram_b`) correspondingly: `alpha` * `ngram_a` + `beta` * `ngram_b`.
487
- This script supports both character level and BPE level encodings and models which are detected automatically from the type of the model.
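-
- The weighted combination can be pictured as a linear interpolation of the two models' probabilities for
- the same N-gram; a toy illustration of the `alpha` * `ngram_a` + `beta` * `ngram_b` formula (the real
- merging of ARPA files is performed by the OpenGrm/KenLM tools, not by this snippet):
-
- .. code-block:: python
-
-     def interpolate(p_a, p_b, alpha, beta):
-         """Toy illustration: combine the probabilities of one N-gram under models A and B."""
-         return alpha * p_a + beta * p_b
-
-     # e.g. the same trigram has probability 0.012 under model A and 0.030 under model B
-     print(interpolate(p_a=0.012, p_b=0.030, alpha=0.7, beta=0.3))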
488
-
489
- To combine two N-gram models, you can use the following command:
490
-
491
- .. code-block::
492
-
493
- python ngram_merge.py --kenlm_bin_path <path to the bin folder of KenLM library> \
494
- --ngram_bin_path <path to the bin folder of OpenGrm Ngram library> \
495
- --arpa_a <path to the ARPA N-gram model file A> \
496
- --alpha <weight of N-gram model A> \
497
- --arpa_b <path to the ARPA N-gram model file B> \
498
- --beta <weight of N-gram model B> \
499
- --out_path <path to folder to store the output files>
500
-
501
-
502
-
503
- If you provide `--test_file` and `--nemo_model_file`, the script will calculate the perplexity of the resulting N-gram model on the test set.
504
- Note that the result of each step of the process is cached in a temporary file in `--out_path` to speed up subsequent runs.
505
- You can use the `--force` flag to discard the cache and recalculate everything from scratch.
506
-
507
- .. code-block::
508
-
509
- python ngram_merge.py --kenlm_bin_path <path to the bin folder of KenLM library> \
510
- --ngram_bin_path <path to the bin folder of OpenGrm Ngram library> \
511
- --arpa_a <path to the ARPA N-gram model file A> \
512
- --alpha <weight of N-gram model A> \
513
- --arpa_b <path to the ARPA N-gram model file B> \
514
- --beta <weight of N-gram model B> \
515
- --out_path <path to folder to store the output files> \
516
- --nemo_model_file <path to the .nemo file of the model> \
517
- --test_file <path to the test file> \
518
- --symbols <path to symbols (.syms) file> \
519
- --force <flag to recalculate and rewrite all cached files>
520
-
521
-
522
- The following is the list of arguments for the ngram_merge.py script:
523
-
524
- +----------------------+--------+------------------+-------------------------------------------------------------------------+
525
- | **Argument** |**Type**| **Default** | **Description** |
526
- +----------------------+--------+------------------+-------------------------------------------------------------------------+
527
- | kenlm_bin_path | str | Required | The path to the bin folder of KenLM library. It is a folder named `bin` under where KenLM is installed. |
528
- +----------------------+--------+------------------+-------------------------------------------------------------------------+
529
- | ngram_bin_path | str | Required | The path to the bin folder of OpenGrm Ngram. It is a folder named `bin` under where OpenGrm Ngram is installed. |
530
- +----------------------+--------+------------------+-------------------------------------------------------------------------+
531
- | arpa_a | str | Required | Path to the ARPA N-gram model file A |
532
- +----------------------+--------+------------------+-------------------------------------------------------------------------+
533
- | alpha | float | Required | Weight of N-gram model A |
534
- +----------------------+--------+------------------+-------------------------------------------------------------------------+
535
- | arpa_b | str | Required | Path to the ARPA N-gram model file B |
536
- +----------------------+--------+------------------+-------------------------------------------------------------------------+
537
- | beta | float | Required | Weight of N-gram model B |
538
- +----------------------+--------+------------------+-------------------------------------------------------------------------+
539
- | out_path | str | Required | Path for writing temporary and resulting files. |
540
- +----------------------+--------+------------------+-------------------------------------------------------------------------+
541
- | test_file | str | None | Path to test file to count perplexity if provided. |
542
- +----------------------+--------+------------------+-------------------------------------------------------------------------+
543
- | symbols | str | None | Path to the symbols (.syms) file. It will be computed automatically if not provided. |
544
- +----------------------+--------+------------------+-------------------------------------------------------------------------+
545
- | nemo_model_file | str | None | The path to '.nemo' file of the ASR model, or name of a pretrained NeMo model. |
546
- +----------------------+--------+------------------+-------------------------------------------------------------------------+
547
- | force | bool | ``False`` | Whether to recompile and rewrite all files |
548
- +----------------------+--------+------------------+-------------------------------------------------------------------------+
 
 
SoundScribe/SpeakerID/docs/source/asr/configs.rst DELETED
@@ -1,1110 +0,0 @@
1
- NeMo ASR Configuration Files
2
- ============================
3
-
4
- This section describes the NeMo configuration file setup that is specific to models in the ASR collection. For general information
5
- about how to set up and run experiments that is common to all NeMo models (e.g. Experiment Manager and PyTorch Lightning trainer
6
- parameters), see the :doc:`../core/core` section.
7
-
8
- The model section of the NeMo ASR configuration files generally requires information about the dataset(s) being used, the preprocessor
9
- for audio files, parameters for any augmentation being performed, as well as the model architecture specification. The sections on
10
- this page cover each of these in more detail.
11
-
12
- Example configuration files for all of the NeMo ASR scripts can be found in the
13
- `config directory of the examples <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/conf>`_.
14
-
15
-
16
- Dataset Configuration
17
- ---------------------
18
-
19
- Training, validation, and test parameters are specified using the ``train_ds``, ``validation_ds``, and
20
- ``test_ds`` sections in the configuration file, respectively. Depending on the task, there may be arguments specifying the sample rate
21
- of the audio files, the vocabulary of the dataset (for character prediction), whether or not to shuffle the dataset, and so on. You may
22
- also decide to leave fields such as the ``manifest_filepath`` blank, to be specified via the command-line at runtime.
23
-
24
- Any initialization parameter that is accepted for the Dataset class used in the experiment can be set in the config file.
25
- Refer to the `Datasets <./api.html#Datasets>`__ section of the API for a list of Datasets and their respective parameters.
26
-
27
- An example ASR train and validation configuration should look similar to the following:
28
-
29
- .. code-block:: yaml
30
-
31
- # Specified at the beginning of the config file
32
- labels: &labels [" ", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
33
- "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", "'"]
34
-
35
- model:
36
- train_ds:
37
- manifest_filepath: ???
38
- sample_rate: 16000
39
- labels: *labels # Uses the labels above
40
- batch_size: 32
41
- trim_silence: True
42
- max_duration: 16.7
43
- shuffle: True
44
- num_workers: 8
45
- pin_memory: true
46
- # tarred datasets
47
- is_tarred: false # If set to true, uses the tarred version of the Dataset
48
- tarred_audio_filepaths: null # Not used if is_tarred is false
49
- shuffle_n: 2048 # Not used if is_tarred is false
50
- # bucketing params
51
- bucketing_strategy: "synced_randomized"
52
- bucketing_batch_size: null
53
- bucketing_weights: null
54
-
55
- validation_ds:
56
- manifest_filepath: ???
57
- sample_rate: 16000
58
- labels: *labels # Uses the labels above
59
- batch_size: 32
60
- shuffle: False # No need to shuffle the validation data
61
- num_workers: 8
62
- pin_memory: true
63
-
64
- There are two ways to test/validate on more than one manifest:
65
-
66
- - Specify a list in the `manifest_filepath` field. Results will be reported for each, the first one being used for overall loss / WER (specify `val_dl_idx` if you wish to change that). In this case, all manifests will share configuration parameters.
67
- - Use the ds_item key and pass a list of config objects to it. This allows you to use differently configured datasets for validation, e.g.
68
-
69
- .. code-block:: yaml
70
-
71
- model:
72
- validation_ds:
73
- ds_item:
74
- - name: dataset1
75
- manifest_filepath: ???
76
- # Config parameters for dataset1
77
- ...
78
- - name: dataset2
79
- manifest_filepath: ???
80
- # Config parameters for dataset2
81
- ...
82
-
83
- By default, dataloaders are set up when the model is instantiated. However, dataloader setup can be deferred to
84
- the model's `setup()` method by setting ``defer_setup`` in the configuration.
85
-
86
- For example, training data setup can be deferred as follows:
87
-
88
- .. code-block:: yaml
89
-
90
- model:
91
- train_ds:
92
- # Configure training data as usual
93
- ...
94
- # Defer train dataloader setup from `__init__` to `setup`
95
- defer_setup: true
96
-
97
-
98
- Preprocessor Configuration
99
- --------------------------
100
-
101
- If you are loading audio files for your experiment, you will likely want to use a preprocessor to convert from the
102
- raw audio signal to features (e.g. mel-spectrogram or MFCC). The ``preprocessor`` section of the config specifies the audio
103
- preprocessor to be used via the ``_target_`` field, as well as any initialization parameters for that preprocessor.
104
-
105
- An example of specifying a preprocessor is as follows:
106
-
107
- .. code-block:: yaml
108
-
109
- model:
110
- ...
111
- preprocessor:
112
- # _target_ is the audio preprocessor module you want to use
113
- _target_: nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor
114
- normalize: "per_feature"
115
- window_size: 0.02
116
- ...
117
- # Other parameters for the preprocessor
118
-
119
- Refer to the `Audio Preprocessors <./api.html#Audio Preprocessors>`__ API section for the preprocessor options, expected arguments,
120
- and defaults.
121
-
122
- Augmentation Configurations
123
- ---------------------------
124
-
125
- There are a few on-the-fly spectrogram augmentation options for NeMo ASR, which can be specified by the
126
- configuration file using a ``spec_augment`` section.
127
-
128
- For example, there are options for `Cutout <https://arxiv.org/abs/1708.04552>`_ and
129
- `SpecAugment <https://arxiv.org/abs/1904.08779>`_ available via the ``SpectrogramAugmentation`` module.
130
-
131
- The following example sets up both ``Cutout`` (via the ``rect_*`` parameters) and ``SpecAugment`` (via the ``freq_*``
132
- and ``time_*`` parameters).
133
-
134
- .. code-block:: yaml
135
-
136
- model:
137
- ...
138
- spec_augment:
139
- _target_: nemo.collections.asr.modules.SpectrogramAugmentation
140
- # Cutout parameters
141
- rect_masks: 5 # Number of rectangles to cut from any given spectrogram
142
- rect_freq: 50 # Max cut of size 50 along the frequency dimension
143
- rect_time: 120 # Max cut of size 120 along the time dimension
144
- # SpecAugment parameters
145
- freq_masks: 2 # Cut two frequency bands
146
- freq_width: 15 # ... of width 15 at maximum
147
- time_masks: 5 # Cut out 5 time bands
148
- time_width: 25 # ... of width 25 at maximum
149
-
150
- You can use any combination of ``Cutout``, frequency/time ``SpecAugment``, or neither of them.
151
-
152
- With NeMo ASR, you can also add augmentation pipelines that can be used to simulate various kinds of noise
153
- added to audio in the channel. Augmentors in a pipeline are applied on the audio data read in the data layer. Online
154
- augmentors can be specified in the config file using an ``augmentor`` section in ``train_ds``. The following example
155
- adds an augmentation pipeline that first adds white noise to an audio sample with a probability of 0.5 and at a level
156
- randomly picked between -50 dB and -10 dB and then passes the resultant samples through a room impulse response randomly
157
- picked from the manifest file provided for ``impulse`` augmentation in the config file.
158
-
159
- .. code-block:: yaml
160
-
161
- model:
162
- ...
163
- train_ds:
164
- ...
165
- augmentor:
166
- white_noise:
167
- prob: 0.5
168
- min_level: -50
169
- max_level: -10
170
- impulse:
171
- prob: 0.3
172
- manifest_path: /path/to/impulse_manifest.json
173
-
174
- Refer to the `Audio Augmentors <./api.html#Audio Augmentors>`__ API section for more details.
175
-
176
- Tokenizer Configurations
177
- ------------------------
178
-
179
- Some models utilize sub-word encoding via an external tokenizer instead of explicitly defining their vocabulary.
180
-
181
- For such models, a ``tokenizer`` section is added to the model config. ASR models currently support the following types of
182
- custom tokenizers:
183
-
184
- - Google Sentencepiece tokenizers (tokenizer type of ``bpe`` in the config)
185
- - HuggingFace WordPiece tokenizers (tokenizer type of ``wpe`` in the config)
186
- - Aggregate tokenizers (tokenizer type of ``agg`` in the config; see below)
187
-
188
- In order to build custom tokenizers, refer to the ``ASR_with_Subword_Tokenization`` notebook available in the
189
- ASR tutorials directory.
190
-
191
- The following example sets up a ``SentencePiece Tokenizer`` at a path specified by the user:
192
-
193
- .. code-block:: yaml
194
-
195
- model:
196
- ...
197
- tokenizer:
198
- dir: "<path to the directory that contains the custom tokenizer files>"
199
- type: "bpe" # can be "bpe" or "wpe"
200
-
201
- The Aggregate (``agg``) tokenizer feature makes it possible to combine tokenizers in order to train multilingual
202
- models. The config file would look like this:
203
-
204
- .. code-block:: yaml
205
-
206
- model:
207
- ...
208
- tokenizer:
209
- type: "agg" # aggregate tokenizer
210
- langs:
211
- en:
212
- dir: "<path to the directory that contains the tokenizer files>"
213
- type: "bpe" # can be "bpe" or "wpe"
214
- es:
215
- dir: "<path to the directory that contains the tokenizer files>"
216
- type: "bpe" # can be "bpe" or "wpe"
217
-
218
- In the above config file, each language is associated with its own pre-trained tokenizer, which gets assigned
219
- a token id range in the order the tokenizers are listed. To train a multilingual model, one needs to populate the
220
- ``lang`` field in the manifest file, allowing the routing of each sample to the correct tokenizer. At inference time,
221
- the routing is done based on the inferred token id range.
222
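-
- For reference, a minimal sketch of a manifest entry carrying the ``lang`` field is shown below; the audio path, duration, text, and language code are placeholders:
-
- .. code-block:: python
-
-     # Minimal sketch: append one entry with a "lang" field to a JSON-lines manifest.
-     # The audio path, duration, text, and language code are placeholders.
-     import json
-
-     entry = {
-         "audio_filepath": "/data/en/sample_0001.wav",
-         "duration": 3.2,
-         "text": "hello world",
-         "lang": "en",
-     }
-
-     with open("train_manifest.json", "a") as f:
-         f.write(json.dumps(entry) + "\n")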
-
223
- For models which utilize sub-word tokenization, we share the decoder module (``ConvASRDecoder``) with character tokenization models.
224
- All parameters are shared, but for models which utilize sub-word encoding, there are minor differences when setting up the config. For
225
- such models, the tokenizer is utilized to fill in the missing information when the model is constructed automatically.
226
-
227
- For example, a decoder config corresponding to a sub-word tokenization model should look similar to the following:
228
-
229
- .. code-block:: yaml
230
-
231
- model:
232
- ...
233
- decoder:
234
- _target_: nemo.collections.asr.modules.ConvASRDecoder
235
- feat_in: *enc_final
236
- num_classes: -1 # filled with vocabulary size from tokenizer at runtime
237
- vocabulary: [] # filled with vocabulary from tokenizer at runtime
238
-
239
-
240
- On-the-fly Code Switching
241
- -------------------------
242
-
243
- NeMo supports creating code-switched synthetic utterances on-the-fly during training/validation/testing. This allows you to create ASR models which
244
- support intra-utterance code switching. If you have NeMo-formatted audio data on disk (either JSON manifests or tarred audio data), you
245
- can easily mix as many of these audio sources together as desired by adding some extra parameters to your `train_ds`, `validation_ds`, and `test_ds`.
246
-
247
- Please note that this allows you to mix any kind of audio sources together to create synthetic utterances which sample from all sources. The most
248
- common use case for this is blending different languages together to create a multilingual code-switched model, but you can also blend
249
- together different audio sources from the same languages (or language families), to create noise robust data, or mix fast and slow speech from the
250
- same language.
251
-
252
- For multilingual code-switched models, we recommend using AggTokenizer for your Tokenizer if mixing different languages.
253
-
254
- The following example shows how to mix 3 different languages, English (en), German (de), and Japanese (ja), in the `train_ds` model block; however,
255
- you can add similar logic to your `validation_ds` and `test_ds` blocks for on-the-fly code-switched validation and test data too. This example mixes
256
- together 3 languages, but you can use as many as you want. However, be advised that the more languages you add, the higher your `min_duration` and `max_duration`
257
- need to be set to ensure all languages are sampled into each synthetic utterance, and setting these hyperparameters higher will use more VRAM per mini-batch during
258
- training and evaluation.
259
-
260
- .. code-block:: yaml
261
-
262
- model:
263
- train_ds:
264
- manifest_filepath: [/path/to/EN/tarred_manifest.json, /path/to/DE/tarred_manifest.json, /path/to/JA/tarred_manifest.json]
265
- tarred_audio_filepaths: ['/path/to/EN/tars/audio__OP_0..511_CL_.tar', '/path/to/DE/tars/audio__OP_0..1023_CL_.tar', '/path/to/JA/tars/audio__OP_0..2047_CL_.tar']
266
- is_code_switched: true
267
- is_tarred: true
268
- shuffle: true
269
- code_switched: # add this block for code-switching
270
- min_duration: 12 # the minimum number of seconds for each synthetic code-switched utterance
271
- max_duration: 20 # the maximum number of seconds for each synthetic code-switched utterance
272
- min_monolingual: 0.3 # the minimum percentage of utterances which will be pure monolingual (0.3 = 30%)
273
- probs: [0.25, 0.5, 0.25] # the probability of sampling each language (matches the order of the manifests above); if not provided, a uniform distribution is assumed
274
- force_monochannel: true # if your source data is multi-channel, then setting this to True will force the synthetic utterances to be mono-channel
275
- sampling_scales: 0.75 # allows you to down/up sample individual languages. Can set this as an array for individual languages, or a scalar for all languages
276
- seed: 123 # add a seed for replicability in future runs (highly useful for `validation_ds` and `test_ds`)
277
-
278
-
279
- Model Architecture Configurations
280
- ---------------------------------
281
-
282
- Each configuration file should describe the model architecture being used for the experiment. Models in the NeMo ASR collection need
283
- an ``encoder`` section and a ``decoder`` section, with the ``_target_`` field specifying the module to use for each.
284
-
285
- Here is the list of the parameters in the model section which are shared among most of the ASR models:
286
-
287
- +-------------------------+------------------+---------------------------------------------------------------------------------------------------------------+---------------------------------+
288
- | **Parameter** | **Datatype** | **Description** | **Supported Values** |
289
- +=========================+==================+===============================================================================================================+=================================+
290
- | :code:`log_prediction` | bool | Whether a random sample should be printed in the output at each step, along with its predicted transcript. | |
291
- +-------------------------+------------------+---------------------------------------------------------------------------------------------------------------+---------------------------------+
292
- | :code:`ctc_reduction` | string | Specifies the reduction type of CTC loss. Defaults to ``mean_batch`` which would take the average over the | :code:`none`, |
293
- | | | batch after taking the average over the length of each sample. | :code:`mean_batch` |
294
- | | | | :code:`mean`, :code:`sum` |
295
- +-------------------------+------------------+---------------------------------------------------------------------------------------------------------------+---------------------------------+
296
-
297
- The following sections go into more detail about the specific configurations of each model architecture.
298
-
299
- For more information about the ASR models, refer to the :doc:`Models <./models>` section.
300
-
301
- Jasper and QuartzNet
302
- ~~~~~~~~~~~~~~~~~~~~
303
-
304
- The `Jasper <./models.html#Jasper>`__ and `QuartzNet <./models.html#QuartzNet>`__ models are very similar, and as such the components in their
305
- configs are very similar as well.
306
-
307
- Both architectures use the ``ConvASREncoder`` for the ``encoder``, with parameters detailed in the table below. The encoder parameters
308
- include details about the Jasper/QuartzNet ``[BxR]`` encoder architecture, including how many blocks to use (``B``), how many times
309
- to repeat each sub-block (``R``), and the convolution parameters for each block.
310
-
311
- The number of blocks ``B`` is determined by the number of list elements under ``jasper`` minus the one prologue and two epilogue blocks.
312
- The number of sub-blocks ``R`` is determined by setting the ``repeat`` parameter.
313
-
314
- To use QuartzNet (which uses more compact time-channel separable convolutions) instead of Jasper, add :code:`separable: true` to all
315
- but the last block in the architecture.
316
-
317
- Note that the encoder parameter list is still named ``jasper`` for both architectures.
318
-
319
- +-------------------------+------------------+---------------------------------------------------------------------------------------------------------------+-------------------------------------+
320
- | **Parameter** | **Datatype** | **Description** | **Supported Values** |
321
- +=========================+==================+===============================================================================================================+=====================================+
322
- | :code:`feat_in` | int | The number of input features. Should be equal to :code:`features` in the preprocessor parameters. | |
323
- +-------------------------+------------------+---------------------------------------------------------------------------------------------------------------+-------------------------------------+
324
- | :code:`activation` | string | Which activation function to use in the encoder. | :code:`hardtanh`, :code:`relu`, |
325
- | | | | :code:`selu`, :code:`swish` |
326
- +-------------------------+------------------+---------------------------------------------------------------------------------------------------------------+-------------------------------------+
327
- | :code:`conv_mask` | bool | Whether to use masked convolutions in the encoder. Defaults to ``true``. | |
328
- +-------------------------+------------------+---------------------------------------------------------------------------------------------------------------+-------------------------------------+
329
- | :code:`jasper` | | A list of blocks that specifies your encoder architecture. Each entry in this list represents one block in | |
330
- | | | the architecture and contains the parameters for that block, including convolution parameters, dropout, and | |
331
- | | | the number of times the block is repeated. Refer to the `Jasper <https://arxiv.org/pdf/1904.03288.pdf>`_ and | |
332
- | | | `QuartzNet <https://arxiv.org/pdf/1910.10261.pdf>`_ papers for details about specific model configurations. | |
333
- +-------------------------+------------------+---------------------------------------------------------------------------------------------------------------+-------------------------------------+
334
-
335
- A QuartzNet 15x5 (fifteen blocks, each sub-block repeated five times) encoder configuration should look similar to the following example:
336
-
337
- .. code-block:: yaml
338
-
339
- # Specified at the beginning of the file for convenience
340
- n_mels: &n_mels 64 # Used for both the preprocessor and encoder as number of input features
341
- repeat: &repeat 5 # R=5
342
- dropout: &dropout 0.0
343
- separable: &separable true # Set to true for QN. Set to false for Jasper.
344
-
345
- model:
346
- ...
347
- encoder:
348
- _target_: nemo.collections.asr.modules.ConvASREncoder
349
- feat_in: *n_mels # Should match "features" in the preprocessor.
350
- activation: relu
351
- conv_mask: true
352
-
353
- jasper: # This field name should be "jasper" for both types of models.
354
-
355
- # Prologue block
356
- - dilation: [1]
357
- dropout: *dropout
358
- filters: 256
359
- kernel: [33]
360
- repeat: 1 # Prologue block is not repeated.
361
- residual: false
362
- separable: *separable
363
- stride: [2]
364
-
365
- # Block 1
366
- - dilation: [1]
367
- dropout: *dropout
368
- filters: 256
369
- kernel: [33]
370
- repeat: *repeat
371
- residual: true
372
- separable: *separable
373
- stride: [1]
374
-
375
- ... # Entries for blocks 2~14
376
-
377
- # Block 15
378
- - dilation: [1]
379
- dropout: *dropout
380
- filters: 512
381
- kernel: [75]
382
- repeat: *repeat
383
- residual: true
384
- separable: *separable
385
- stride: [1]
386
-
387
- # Two epilogue blocks
388
- - dilation: [2]
389
- dropout: *dropout
390
- filters: 512
391
- kernel: [87]
392
- repeat: 1 # Epilogue blocks are not repeated
393
- residual: false
394
- separable: *separable
395
- stride: [1]
396
-
397
- - dilation: [1]
398
- dropout: *dropout
399
- filters: &enc_filters 1024
400
- kernel: [1]
401
- repeat: 1 # Epilogue blocks are not repeated
402
- residual: false
403
- stride: [1]
404
-
405
- Both Jasper and QuartzNet use the ``ConvASRDecoder`` as the decoder. The decoder parameters are detailed in the following table.
406
-
407
- +-------------------------+------------------+---------------------------------------------------------------------------------------------------------------+---------------------------------+
408
- | **Parameter** | **Datatype** | **Description** | **Supported Values** |
409
- +=========================+==================+===============================================================================================================+=================================+
410
- | :code:`feat_in` | int | The number of input features to the decoder. Should be equal to the number of filters in the last block of | |
411
- | | | the encoder. | |
412
- +-------------------------+------------------+---------------------------------------------------------------------------------------------------------------+---------------------------------+
413
- | :code:`vocabulary` | list | A list of the valid output characters for your model. For example, for an English dataset, this could be a | |
414
- | | | list of all lowercase letters, space, and apostrophe. | |
415
- +-------------------------+------------------+---------------------------------------------------------------------------------------------------------------+---------------------------------+
416
- | :code:`num_classes` | int | Number of output classes, i.e. the length of :code:`vocabulary`. | |
417
- +-------------------------+------------------+---------------------------------------------------------------------------------------------------------------+---------------------------------+
418
-
419
- For example, a decoder config corresponding to the encoder above should look similar to the following:
420
-
421
- .. code-block:: yaml
422
-
423
- model:
424
- ...
425
- decoder:
426
- _target_: nemo.collections.asr.modules.ConvASRDecoder
427
- feat_in: *enc_filters
428
- vocabulary: *labels
429
- num_classes: 28 # Length of the vocabulary list
430
-
431
- Citrinet
432
- ~~~~~~~~
433
-
434
- The `Citrinet <./models.html#Citrinet>`__ and `QuartzNet <./models.html#QuartzNet>`__ models are very similar, and as such the
435
- components in their configs are very similar as well. Citrinet utilizes Squeeze and Excitation, as well as sub-word tokenization, in
436
- contrast to QuartzNet. Depending on the dataset, we utilize different tokenizers. For Librispeech, we utilize the HuggingFace WordPiece
437
- tokenizer, and for all other datasets we utilize the Google Sentencepiece tokenizer - usually the ``unigram`` tokenizer type.
438
-
439
- Both architectures use the ``ConvASREncoder`` for the ``encoder``, with parameters detailed above. The encoder parameters include
440
- details about the Citrinet-C encoder architecture, including how many filters are used per channel (``C``). The Citrinet-C
441
- configuration is a shortform notation for Citrinet-21x5xC, such that ``B = 21`` and ``R = 5`` are the default and should generally
442
- not be changed.
443
-
444
- To use Citrinet instead of QuartzNet, refer to the ``citrinet_512.yaml`` configuration found inside the ``examples/asr/conf/citrinet``
445
- directory. Citrinet is primarily comprised of the same :class:`~nemo.collections.asr.parts.submodules.jasper.JasperBlock` as ``Jasper`` or
446
- ``QuartzNet``.
447
-
448
- While the configs for Citrinet and QuartzNet are similar, we note the additional flags used for Citrinet below. Refer to the
449
- ``JasperBlock`` documentation for the meaning of these arguments.
450
-
451
- +---------------------------+------------------+-----------------------------------------------------------------------------------------------------------+-----------------------------------+
452
- | **Parameter** | **Datatype** | **Description** | **Supported Values** |
453
- +===========================+==================+===========================================================================================================+===================================+
454
- | :code:`se` | bool | Whether to apply squeeze-and-excitation mechanism or not. | :code:`true` or :code:`false` |
455
- +---------------------------+------------------+-----------------------------------------------------------------------------------------------------------+-----------------------------------+
456
- | :code:`se_context_size` | int | SE context size. -1 means global context. | :code:`-1` or :code:`+ve int` |
457
- +---------------------------+------------------+-----------------------------------------------------------------------------------------------------------+-----------------------------------+
458
- | :code:`stride_last` | bool | Stride on the final repeated block or all repeated blocks. | :code:`true` or :code:`false` |
459
- +---------------------------+------------------+-----------------------------------------------------------------------------------------------------------+-----------------------------------+
460
- | :code:`residual_mode` | str | Type of residual branch to construct. | :code:`"add"` or |
461
- | | | Can be pointwise residual addition or pointwise strided residual attention | :code:`"stride_add"` |
462
- +---------------------------+------------------+-----------------------------------------------------------------------------------------------------------+-----------------------------------+
463
-
464
- A Citrinet-512 config should look similar to the following:
465
-
466
- .. code-block:: yaml
467
-
468
- model:
469
- ...
470
- # Specify some defaults across the entire model
471
- model_defaults:
472
- repeat: 5
473
- dropout: 0.1
474
- separable: true
475
- se: true
476
- se_context_size: -1
477
- ...
478
- encoder:
479
- _target_: nemo.collections.asr.modules.ConvASREncoder
480
- feat_in: *n_mels # Should match "features" in the preprocessor.
481
- activation: relu
482
- conv_mask: true
483
-
484
- jasper: # This field name should be "jasper" for the JasperBlock (which constructs Citrinet).
485
-
486
- # Prologue block
487
- - filters: 512
488
- repeat: 1
489
- kernel: [5]
490
- stride: [1]
491
- dilation: [1]
492
- dropout: 0.0
493
- residual: false
494
- separable: ${model.model_defaults.separable}
495
- se: ${model.model_defaults.se}
496
- se_context_size: ${model.model_defaults.se_context_size}
497
-
498
- # Block 1
499
- - filters: 512
500
- repeat: ${model.model_defaults.repeat}
501
- kernel: [11]
502
- stride: [2]
503
- dilation: [1]
504
- dropout: ${model.model_defaults.dropout}
505
- residual: true
506
- separable: ${model.model_defaults.separable}
507
- se: ${model.model_defaults.se}
508
- se_context_size: ${model.model_defaults.se_context_size}
509
- stride_last: true
510
- residual_mode: "stride_add"
511
-
512
- ... # Entries for blocks 2~21
513
-
514
- # Block 22
515
- - filters: 512
516
- repeat: ${model.model_defaults.repeat}
517
- kernel: [39]
518
- stride: [1]
519
- dilation: [1]
520
- dropout: ${model.model_defaults.dropout}
521
- residual: true
522
- separable: ${model.model_defaults.separable}
523
- se: ${model.model_defaults.se}
524
- se_context_size: ${model.model_defaults.se_context_size}
525
-
526
- # Epilogue block
527
-
528
- - filters: &enc_final 640
529
- repeat: 1
530
- kernel: [41]
531
- stride: [1]
532
- dilation: [1]
533
- dropout: 0.0
534
- residual: false
535
- separable: ${model.model_defaults.separable}
536
- se: ${model.model_defaults.se}
537
- se_context_size: ${model.model_defaults.se_context_size}
538
-
539
- As mentioned above, Citrinet uses the ``ConvASRDecoder`` as the decoder layer, similar to QuartzNet. The configuration only needs to be
540
- changed slightly, as Citrinet utilizes sub-word tokenization.
541
-
542
- .. note::
543
- The following information is relevant to any of the above models that implements its encoder as an :class:`~nemo.collections.asr.modules.conv_asr.ConvASREncoder`, and utilizes the ``SqueezeExcite`` mechanism.
544
-
545
- The ``SqueezeExcite`` block within a :class:`~nemo.collections.asr.modules.conv_asr.ConvASREncoder` network can be modified to utilize a different context window after the model has been instantiated (even after the model has been trained) so as to evaluate the model with limited context. This can be achieved using the :meth:`~nemo.collections.asr.parts.mixins.mixins.ASRModuleMixin.change_conv_asr_se_context_window` method.
546
-
547
- .. code-block:: python
548
-
549
- # Here, model can be any model that has a `ConvASREncoder` as its encoder, and utilized `SqueezeExcite` blocks
550
- # `context_window` : It is an integer representing the number of timeframes (each corresponding to some window stride).
551
- # `update_config` : Bool flag which determines whether the config of the model should be updated to reflect the new context window.
552
-
553
- # Here, we specify that 128 timeframes of 0.01s stride should be the context window
554
- # This is equivalent to 128 * 0.01s context window for `SqueezeExcite`
555
- model.change_conv_asr_se_context_window(context_window=128, update_config=True)
556
-
557
- Conformer-CTC
558
- ~~~~~~~~~~~~~
559
-
560
- The config files for the Conformer-CTC model, for character-based encoding and sub-word encoding, can be found at
561
- ``<NeMo_git_root>/examples/asr/conf/conformer/conformer_ctc_char.yaml`` and ``<NeMo_git_root>/examples/asr/conf/conformer/conformer_ctc_bpe.yaml``
562
- respectively. Some components of the configs of `Conformer-CTC <./models.html#Conformer-CTC>`__ include the following sections:
563
-
564
- * ``train_ds``, ``validation_ds``, and ``test_ds``
565
- * optimizer (``optim``)
566
- * augmentation (``spec_augment``)
567
- * ``decoder``
568
- * ``trainer``
569
- * ``exp_manager``
570
-
571
- These sections are similar to those of other ASR models like `QuartzNet <./models.html#QuartzNet>`__. There should be a tokenizer section where you can
572
- specify the tokenizer if you want to use sub-word encoding instead of character-based encoding.
573
-
574
-
575
- The encoder section includes the details about the Conformer-CTC encoder architecture. You may find more information in the
576
- config files and also :ref:`nemo.collections.asr.modules.ConformerEncoder <conformer-encoder-api>`.
577
-
578
- Squeezeformer-CTC
579
- ~~~~~~~~~~~~~~~~~
580
-
581
- The config files for the Squeezeformer-CTC model, for character-based encoding and sub-word encoding, can be found at
582
- ``<NeMo_git_root>/examples/asr/conf/squeezeformer/squeezeformer_ctc_char.yaml`` and ``<NeMo_git_root>/examples/asr/conf/squeezeformer/squeezeformer_ctc_bpe.yaml``
583
- respectively. Components of the configs of `Squeezeformer-CTC <./models.html#Squeezeformer-CTC>`__ are similar to those of the `Conformer-CTC <./configs.html#Conformer-CTC>`__ config.
584
-
585
- The encoder section includes the details about the Squeezeformer-CTC encoder architecture. You may find more information in the
586
- config files and also :ref:`nemo.collections.asr.modules.SqueezeformerEncoder <squeezeformer-encoder-api>`.
587
-
588
-
589
- ContextNet
590
- ~~~~~~~~~~
591
-
592
- Please refer to the model page of `ContextNet <./models.html#ContextNet>`__ for more information on this model.
593
-
594
- Conformer-Transducer
595
- ~~~~~~~~~~~~~~~~~~~~
596
-
597
- Please refer to the model page of `Conformer-Transducer <./models.html#Conformer-Transducer>`__ for more information on this model.
598
-
599
- LSTM-Transducer and LSTM-CTC
600
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
601
-
602
- The config files for LSTM-Transducer and LSTM-CTC models can be found at ``<NeMo_git_root>/examples/asr/conf/lstm/lstm_transducer_bpe.yaml`` and ``<NeMo_git_root>/examples/asr/conf/lstm/lstm_ctc_bpe.yaml`` respectively.
603
- Most of the configs are similar to those of other CTC or Transducer models. The main difference is the encoder part.
604
- The encoder section includes the details about the RNN-based encoder architecture. You may find more information in the
605
- config files and also :ref:`nemo.collections.asr.modules.RNNEncoder <rnn-encoder-api>`.
606
-
607
-
608
- InterCTC Config
609
- ---------------
610
-
611
- All CTC-based models also support `InterCTC loss <https://arxiv.org/abs/2102.03216>`_. To use it, you need to specify
612
- 2 parameters as in the example below:
613
-
614
- .. code-block:: yaml
615
-
616
- model:
617
- # ...
618
- interctc:
619
- loss_weights: [0.3]
620
- apply_at_layers: [8]
621
-
622
- which can be used to reproduce the default setup from the paper (assuming the total number of layers is 18).
623
- You can also specify multiple CTC losses from different layers, e.g., to get 2 losses from layers 3 and 8 with
624
- weights 0.1 and 0.3, specify:
625
-
626
- .. code-block:: yaml
627
-
628
- model:
629
- # ...
630
- interctc:
631
- loss_weights: [0.1, 0.3]
632
- apply_at_layers: [3, 8]
633
-
634
- Note that the final-layer CTC loss weight is automatically computed to normalize
635
- all weights to 1 (0.6 in the example above).
636
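-
- As a quick sanity check, the implicit final-layer weight can be computed as follows (a minimal sketch, not NeMo code):
-
- .. code-block:: python
-
-     # The final-layer CTC loss weight is 1 minus the sum of the intermediate loss weights.
-     loss_weights = [0.1, 0.3]
-     final_layer_weight = 1.0 - sum(loss_weights)
-     print(final_layer_weight)  # 0.6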
-
637
-
638
- Stochastic Depth Config
639
- -----------------------
640
-
641
- `Stochastic Depth <https://arxiv.org/abs/2102.03216>`_ is a useful technique for regularizing ASR model training.
642
- Currently it's only supported for :ref:`nemo.collections.asr.modules.ConformerEncoder <conformer-encoder-api>`. To
643
- use it, specify the following parameters in the encoder config file to reproduce the default setup from the paper:
644
-
645
- .. code-block:: yaml
646
-
647
- model:
648
- # ...
649
- encoder:
650
- # ...
651
- stochastic_depth_drop_prob: 0.3
652
- stochastic_depth_mode: linear # linear or uniform
653
- stochastic_depth_start_layer: 1
654
-
655
- See :ref:`documentation of ConformerEncoder <conformer-encoder-api>` for more details. Note that stochastic depth
656
- is supported for both CTC and Transducer model variations (or any other kind of model/loss that's using
657
- conformer as encoder).
658
-
659
-
660
- Transducer Configurations
661
- -------------------------
662
-
663
- All CTC-based ASR model configs can be modified to support Transducer loss training. Below, we discuss the modifications required in the config to enable Transducer training. All modifications are made to the ``model`` config.
664
-
665
- Model Defaults
666
- ~~~~~~~~~~~~~~
667
-
668
- ``model.model_defaults`` is a subsection of the model config that holds default values shared across the entire model.
669
-
670
- There are three values that are primary components of a transducer model. They are :
671
-
672
- * ``enc_hidden``: The hidden dimension of the final layer of the Encoder network.
673
- * ``pred_hidden``: The hidden dimension of the final layer of the Prediction network.
674
- * ``joint_hidden``: The hidden dimension of the intermediate layer of the Joint network.
675
-
676
- One can access these values inside the config by using OmegaConf interpolation as follows :
677
-
678
- .. code-block:: yaml
679
-
680
- model:
681
- ...
682
- model_defaults:
683
- enc_hidden: 256
684
- pred_hidden: 256
685
- joint_hidden: 256
686
- ...
687
- decoder:
688
- ...
689
- prednet:
690
- pred_hidden: ${model.model_defaults.pred_hidden}
691
-
692
- Acoustic Encoder Model
693
- ~~~~~~~~~~~~~~~~~~~~~~
694
-
695
- The transducer model is comprised of three models combined. One of these models is the Acoustic (encoder) model. We should be able to drop in any CTC Acoustic model config into this section of the transducer config.
696
-
697
- The only condition that needs to be met is that **the final layer of the acoustic model must have the hidden dimension defined in ``model_defaults.enc_hidden``**.
698
-
699
- Decoder / Prediction Model
700
- ~~~~~~~~~~~~~~~~~~~~~~~~~~
701
-
702
- The Prediction model is generally an autoregressive, causal model that consumes text tokens and returns embeddings that will be used by the Joint model. The base config for an LSTM-based Prediction network can be found in the ``decoder`` section of `ContextNet <./models.html#ContextNet>`__ or other Transducer architectures. For further information refer to the ``Intro to Transducers`` tutorial in the ASR tutorial section.
703
-
704
- **This config can be copy-pasted into any custom transducer model with no modification.**
705
-
706
- Let us discuss some of the important arguments:
707
-
708
- * ``blank_as_pad``: In ordinary transducer models, the embedding matrix does not acknowledge the ``Transducer Blank`` token (similar to CTC Blank). However, this causes the autoregressive loop to be more complicated and less efficient. Instead, this flag, which is set by default, will add the ``Transducer Blank`` token to the embedding matrix - and use it as a pad value (zeros tensor). This enables more efficient inference without harming training. For further information refer to the ``Intro to Transducers`` tutorial in the ASR tutorial section.
709
-
710
- * ``prednet.pred_hidden``: The hidden dimension of the LSTM and the output dimension of the Prediction network.
711
-
712
- .. code-block:: yaml
713
-
714
- decoder:
715
- _target_: nemo.collections.asr.modules.RNNTDecoder
716
- normalization_mode: null
717
- random_state_sampling: false
718
- blank_as_pad: true
719
-
720
- prednet:
721
- pred_hidden: ${model.model_defaults.pred_hidden}
722
- pred_rnn_layers: 1
723
- t_max: null
724
- dropout: 0.0
725
-
726
- Joint Model
727
- ~~~~~~~~~~~
728
-
729
- The Joint model is a simple feed-forward Multi-Layer Perceptron network. This MLP accepts the output of the Acoustic and Prediction models and computes a joint probability distribution over the entire vocabulary space. The base config for the Joint network can be found in the ``joint`` section of `ContextNet <./models.html#ContextNet>`__ or other Transducer architectures. For further information refer to the ``Intro to Transducers`` tutorial in the ASR tutorial section.
730
-
731
- **This config can be copy-pasted into any custom transducer model with no modification.**
732
-
733
- The Joint model config has several essential components which we discuss below :
734
-
735
- * ``log_softmax``: Due to the cost of computing softmax on such large tensors, the Numba CUDA implementation of RNNT loss will implicitly compute the log softmax when called (so its inputs should be logits). The CPU version of the loss doesn't face such memory issues, so it requires log-probabilities instead. Since the behaviour differs between CPU and GPU, the ``None`` value will automatically switch behaviour depending on whether the input tensor is on a CPU or GPU device.
736
-
737
- * ``preserve_memory``: This flag will call ``torch.cuda.empty_cache()`` at certain critical sections when computing the Joint tensor. While this operation might allow us to preserve some memory, the empty_cache() operation is tremendously slow and will slow down training by an order of magnitude or more. It is available to use but not recommended.
738
-
739
- * ``fuse_loss_wer``: This flag performs "batch splitting" and then "fused loss + metric" calculation. It will be discussed in detail in the next tutorial that will train a Transducer model.
740
-
741
- * ``fused_batch_size``: When the above flag is set to True, the model will have two distinct "batch sizes". The batch size provided in the three data loader configs (``model.*_ds.batch_size``) will now be the ``Acoustic model`` batch size, whereas the ``fused_batch_size`` will be the batch size of the ``Prediction model``, the ``Joint model``, the ``transducer loss`` module and the ``decoding`` module.
742
-
743
- * ``jointnet.joint_hidden``: The hidden intermediate dimension of the joint network.
744
-
745
- .. code-block:: yaml
746
-
747
- joint:
748
- _target_: nemo.collections.asr.modules.RNNTJoint
749
- log_softmax: null # sets it according to cpu/gpu device
750
-
751
- # fused mode
752
- fuse_loss_wer: false
753
- fused_batch_size: 16
754
-
755
- jointnet:
756
- joint_hidden: ${model.model_defaults.joint_hidden}
757
- activation: "relu"
758
- dropout: 0.0
759
-
760
- Sampled Softmax Joint Model
761
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^
762
-
763
- There are some situations where a large vocabulary must be used with a Transducer model - such as for multilingual models with a large
764
- number of languages. In this setting, the memory cost of training the Transducer network can become prohibitive,
765
- effectively ruling out a large vocabulary.
766
-
767
- For such cases, one can instead utilize the ``SampledRNNTJoint`` module instead of the usual ``RNNTJoint`` module, in order
768
- to compute the loss using a sampled subset of the vocabulary rather than the full vocabulary file.
769
-
770
- It adds only one additional parameter :
771
-
772
- * ``n_samples``: Specifies the minimum number of tokens to sample from the vocabulary space,
773
- excluding the RNNT blank token. If a given value is larger than the entire vocabulary size,
774
- then the full vocabulary will be used.
775
-
776
- The only difference in config required is to replace ``nemo.collections.asr.modules.RNNTJoint`` with ``nemo.collections.asr.modules.SampledRNNTJoint``
777
-
778
- .. code-block:: yaml
779
-
780
- joint:
781
- _target_: nemo.collections.asr.modules.SampledRNNTJoint
782
- n_samples: 500
783
- ... # All other arguments from RNNTJoint can be used after this.
784
-
785
-
786
- Effect of Batch Splitting / Fused Batch step
787
- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
788
-
789
- The following explains why memory is an issue when training Transducer models and how NeMo tackles it with its Fused Batch step. The material can be read for a thorough understanding; otherwise, it can be skipped. You can also follow these steps in the "ASR_with_Transducers" tutorial.
790
-
791
- **Diving deeper into the memory costs of Transducer Joint**
792
-
793
- One of the significant limitations of Transducers is the exorbitant memory cost of computing the Joint module. The Joint module is comprised of two steps.
794
-
795
- 1) Projecting the Acoustic and Transcription feature dimensions to some standard hidden dimension (specified by model.model_defaults.joint_hidden)
796
-
797
- 2) Projecting this intermediate hidden dimension to the final vocabulary space to obtain the transcription.
798
-
799
- Take the following example.
800
-
801
- BS=32 ; T (after 2x stride) = 800, U (with character encoding) = 400-450 tokens, Vocabulary size V = 28 (26 alphabet chars, space and apostrophe). Let the hidden dimension of the Joint model be 640 (Most Google Transducer papers use hidden dimension of 640).
802
-
803
- * :math:`Memory \, (Hidden, \, gb) = 32 \times 800 \times 450 \times 640 \times 4 = 29.49` gigabytes (4 bytes per float).
804
-
805
- * :math:`Memory \, (Joint, \, gb) = 32 \times 800 \times 450 \times 28 \times 4 = 1.290` gigabytes (4 bytes per float)
806
-
807
- **NOTE**: This is just for the forward pass! We need to double this memory to store gradients! This much memory is also just for the Joint model **alone**. Far more memory is required for the Prediction model as well as the large Acoustic model itself and its gradients!
808
-
809
- Even with mixed precision, that's :math:`\sim 30` GB of GPU RAM for just 1 part of the network + its gradients.
810
-
811
- Effect of Fused Batch Step
812
- ^^^^^^^^^^^^^^^^^^^^^^^^^^
813
-
814
- The fundamental problem is that the joint tensor grows in size when ``[T x U]`` grows in size. This growth in memory cost is due to many reasons - either by model construction (downsampling) or the choice of dataset preprocessing (character tokenization vs. sub-word tokenization).
815
-
816
- Another dimension that NeMo can control is **batch**. Due to how we batch our samples, small and large samples all get clumped together into a single batch. So even though the individual samples are not all as long as the maximum length of T and U in that batch, when a batch of such samples is constructed, it will consume a significant amount of memory for the sake of compute efficiency.
817
-
818
- So as is always the case - **trade-off compute speed for memory savings**.
819
-
820
- The fused operation goes as follows (a schematic code sketch follows the list):
821
-
822
- 1) Forward the entire acoustic model in a single pass. (Use global batch size here for acoustic model - found in ``model.*_ds.batch_size``)
823
-
824
- 2) Split the Acoustic Model's logits by ``fused_batch_size`` and loop over these sub-batches.
825
-
826
- 3) Construct a sub-batch of same ``fused_batch_size`` for the Prediction model. Now the target sequence length is :math:`U_{sub-batch} < U`.
827
-
828
- 4) Feed this :math:`U_{sub-batch}` into the Joint model, along with a sub-batch from the Acoustic model (with :math:`T_{sub-batch} < T)`. Remember, we only have to slice off a part of the acoustic model here since we have the full batch of samples :math:`(B, T, D)` from the acoustic model.
829
-
830
- 5) Performing steps (3) and (4) yields :math:`T_{sub-batch}` and :math:`U_{sub-batch}`. Perform sub-batch joint step - costing an intermediate :math:`(B, T_{sub-batch}, U_{sub-batch}, V)` in memory.
831
-
832
- 6) Compute loss on sub-batch and preserve in a list to be later concatenated.
833
-
834
- 7) Compute sub-batch metrics (such as Character / Word Error Rate) using the above Joint tensor and sub-batch of ground truth labels. Preserve the scores to be averaged across the entire batch later.
835
-
836
- 8) Delete the sub-batch joint matrix :math:`(B, T_{sub-batch}, U_{sub-batch}, V)`. Only gradients from .backward() are preserved now in the computation graph.
837
-
838
- 9) Repeat steps (3) - (8) until all sub-batches are consumed.
839
-
840
- 10) Cleanup step. Compute full batch WER and log. Concatenate loss list and pass to PTL to compute the equivalent of the original (full batch) Joint step. Delete ancillary objects necessary for sub-batching.
841
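-
- The loop below is a schematic, self-contained sketch of the procedure above, using toy tensors and a stand-in loss. It is illustrative only and is not the actual NeMo implementation (which lives inside the joint and loss modules):
-
- .. code-block:: python
-
-     # Schematic sketch of the fused batch step with toy tensors (not NeMo's implementation).
-     import torch
-
-     B, T, U, H, V = 8, 50, 20, 32, 28   # toy sizes: batch, time, target length, hidden, vocab
-     fused_batch_size = 2
-     enc_out = torch.randn(B, T, H)       # step 1: full-batch acoustic encoder output
-     pred_out = torch.randn(B, U, H)      # prediction network output (computed per sub-batch in practice)
-     joint_proj = torch.nn.Linear(H, V)
-
-     losses = []
-     for start in range(0, B, fused_batch_size):        # steps 2-9: loop over sub-batches
-         sub = slice(start, start + fused_batch_size)
-         # (b, T, U, V) joint tensor for the sub-batch only - the memory-heavy object
-         joint = joint_proj(enc_out[sub].unsqueeze(2) + pred_out[sub].unsqueeze(1))
-         losses.append(joint.log_softmax(dim=-1).mean())  # stand-in for the transducer loss
-         del joint                                        # step 8: free the sub-batch joint tensor
-     loss = torch.stack(losses).mean()                    # step 10: combine sub-batch losses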
-
842
- Transducer Decoding
843
- ~~~~~~~~~~~~~~~~~~~
844
-
845
- Models which have been trained with CTC can transcribe text simply by performing a regular argmax over the output of their decoder. For transducer-based models, the three networks must operate in a synchronized manner in order to transcribe the acoustic features. The base config for the Transducer decoding step can be found in the ``decoding`` section of `ContextNet <./models.html#ContextNet>`__ or other Transducer architectures. For further information refer to the ``Intro to Transducers`` tutorial in the ASR tutorial section.
846
-
847
- **This config can be copy-pasted into any custom transducer model with no modification.**
848
-
849
- The most important component at the top level is the ``strategy``. It can take one of many values:
850
-
851
- * ``greedy``: This is sample-level greedy decoding. It is generally exceptionally slow as each sample in the batch will be decoded independently. For publications, this should be used alongside batch size of 1 for exact results.
852
-
853
- * ``greedy_batch``: This is the general default and should nearly match the ``greedy`` decoding scores (if the acoustic features are not affected by feature mixing in batch mode). Even for small batch sizes, this strategy is significantly faster than ``greedy``.
854
-
855
- * ``beam``: Runs beam search with the implicit language model of the Prediction model. It will generally be quite slow, and might need some tuning of the beam size to get better transcriptions.
856
-
857
- * ``tsd``: Time synchronous decoding. Please refer to the paper: `Alignment-Length Synchronous Decoding for RNN Transducer <https://ieeexplore.ieee.org/document/9053040>`_ for details on the algorithm implemented. Time synchronous decoding (TSD) execution time grows by the factor T * max_symmetric_expansions. For longer sequences, T is greater and can therefore take a long time for beams to obtain good results. TSD also requires more memory to execute.
858
-
859
- * ``alsd``: Alignment-length synchronous decoding. Please refer to the paper: `Alignment-Length Synchronous Decoding for RNN Transducer <https://ieeexplore.ieee.org/document/9053040>`_ for details on the algorithm implemented. Alignment-length synchronous decoding (ALSD) execution time is faster than TSD, with a growth factor of T + U_max, where U_max is the maximum target length expected during execution. Generally, T + U_max < T * max_symmetric_expansions. However, ALSD beams are non-unique. Therefore it is required to use larger beam sizes to achieve the same (or close to the same) decoding accuracy as TSD. For a given decoding accuracy, it is possible to attain faster decoding via ALSD than TSD.
860
-
861
- * ``maes``: Modified Adaptive Expansion Search Decoding. Please refer to the paper `Accelerating RNN Transducer Inference via Adaptive Expansion Search <https://ieeexplore.ieee.org/document/9250505>`_. Modified Adaptive Expansion Search (mAES) execution time is adaptive w.r.t. the number of expansions (for tokens) required per timestep. The number of expansions can usually be constrained to 1 or 2, and in most cases 2 is sufficient. This beam search technique can possibly obtain superior WER while sacrificing some evaluation time.
862
-
863
- .. code-block:: yaml
864
-
865
- decoding:
866
- strategy: "greedy_batch"
867
-
868
- # preserve decoding alignments
869
- preserve_alignments: false
870
-
871
- # Overrides the fused batch size after training.
872
- # Setting it to -1 will process whole batch at once when combined with `greedy_batch` decoding strategy
873
- fused_batch_size: -1
874
-
875
- # greedy strategy config
876
- greedy:
877
- max_symbols: 10
878
-
879
- # beam strategy config
880
- beam:
881
- beam_size: 2
882
- score_norm: true
883
- softmax_temperature: 1.0 # scale the logits by some temperature prior to softmax
884
- tsd_max_sym_exp: 10 # for Time Synchronous Decoding, int > 0
885
- alsd_max_target_len: 5.0 # for Alignment-Length Synchronous Decoding, float > 1.0
886
- maes_num_steps: 2 # for modified Adaptive Expansion Search, int > 0
887
- maes_prefix_alpha: 1 # for modified Adaptive Expansion Search, int > 0
888
- maes_expansion_beta: 2 # for modified Adaptive Expansion Search, int >= 0
889
- maes_expansion_gamma: 2.3 # for modified Adaptive Expansion Search, float >= 0
890
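-
- As a usage sketch, the decoding strategy of an already trained model can typically be changed at inference time without retraining. The snippet below assumes NeMo is installed; the pretrained checkpoint name is only an example, and the config keys follow the ``decoding`` section shown above:
-
- .. code-block:: python
-
-     # Sketch: switch a restored Transducer model to beam search decoding.
-     from omegaconf import open_dict
-     import nemo.collections.asr as nemo_asr
-
-     # Example pretrained checkpoint name; substitute your own model or .nemo file.
-     model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("stt_en_conformer_transducer_large")
-
-     # Start from the model's current decoding config and change the strategy.
-     decoding_cfg = model.cfg.decoding
-     with open_dict(decoding_cfg):
-         decoding_cfg.strategy = "beam"
-         decoding_cfg.beam.beam_size = 4
-     model.change_decoding_strategy(decoding_cfg)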
-
891
- Transducer Loss
892
- ~~~~~~~~~~~~~~~
893
-
894
- This section configures the type of Transducer loss itself, along with possible sub-sections. By default, an optimized implementation of Transducer loss will be used which depends on Numba for CUDA acceleration. The base config for the Transducer loss section can be found in the ``loss`` section of `ContextNet <./models.html#ContextNet>`__ or other Transducer architectures. For further information refer to the ``Intro to Transducers`` tutorial in the ASR tutorial section.
895
-
896
- **This config can be copy-pasted into any custom transducer model with no modification.**
897
-
898
- The loss config is based on a resolver pattern and can be used as follows:
899
-
900
- 1) ``loss_name``: ``default`` is generally a good option. This selects one of the available resolved losses and matches it with the kwargs passed via the explicit ``{loss_name}_kwargs`` sub-config.
901
-
902
- 2) ``{loss_name}_kwargs``: This sub-config is passed to the resolved loss above and can be used to configure the resolved loss.
903
-
904
-
905
- .. code-block:: yaml
-
-    loss:
-      loss_name: "default"
-      warprnnt_numba_kwargs:
-        fastemit_lambda: 0.0
911
-
912
- FastEmit Regularization
913
- ^^^^^^^^^^^^^^^^^^^^^^^
914
-
915
- FastEmit Regularization is supported for the default Numba-based WarpRNNT loss. This recently proposed regularization approach, `FastEmit: Low-latency Streaming ASR with Sequence-level Emission Regularization <https://arxiv.org/abs/2010.11148>`_, allows near-direct control over the latency of transducer models.
916
-
917
- Refer to the above paper for results and recommended values of ``fastemit_lambda``.
918
-
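- As a non-authoritative sketch of how this might be wired up, the snippet below loads a Transducer training config with OmegaConf and enables a small amount of FastEmit regularization before training. The config filename and the value ``1.0e-3`` are placeholders, not official recommendations; consult the paper for suitable values.
-
- .. code-block:: python
-
-    from omegaconf import OmegaConf
-
-    # Placeholder path to a local copy of a Transducer training config.
-    cfg = OmegaConf.load("conformer_transducer_bpe.yaml")
-
-    # Enable FastEmit regularization on the default Numba WarpRNNT loss.
-    cfg.model.loss = OmegaConf.merge(
-        cfg.model.loss,
-        {"loss_name": "default", "warprnnt_numba_kwargs": {"fastemit_lambda": 1.0e-3}},
-    )
-
-    print(OmegaConf.to_yaml(cfg.model.loss))
-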
919
-
920
- .. _Hybrid-ASR-TTS_model__Config:
921
-
922
- Hybrid ASR-TTS Model Configuration
923
- ----------------------------------
924
-
925
- The :ref:`Hybrid ASR-TTS model <Hybrid-ASR-TTS_model>` consists of three parts:
926
-
927
- * ASR model (``EncDecCTCModelBPE``, ``EncDecRNNTBPEModel`` or ``EncDecHybridRNNTCTCBPEModel``)
928
- * TTS Mel Spectrogram Generator (currently, only :ref:`FastPitch <FastPitch_model>` model is supported)
929
- * :ref:`Enhancer model <SpectrogramEnhancer_model>` (optional)
930
-
931
- The config also allows specifying a :ref:`text-only dataset <Hybrid-ASR-TTS_model__Text-Only-Data>`.
932
-
933
- Main parts of the config:
934
-
935
- * ASR model
936
- * ``asr_model_path``: path to the ASR model checkpoint (`.nemo`) file, loaded only once, then the config of the ASR model is stored in the ``asr_model`` field
937
- * ``asr_model_type``: needed only when training from scratch. ``rnnt_bpe`` corresponds to ``EncDecRNNTBPEModel``, ``ctc_bpe`` to ``EncDecCTCModelBPE``, ``hybrid_rnnt_ctc_bpe`` to ``EncDecHybridRNNTCTCBPEModel``
938
- * ``asr_model_fuse_bn``: fuses BatchNorm in the pretrained ASR model, which can improve quality in the fine-tuning scenario
939
- * TTS model
940
- * ``tts_model_path``: path to the pretrained TTS model checkpoint (`.nemo`) file, loaded only once, then the config of the model is stored in the ``tts_model`` field
941
- * Enhancer model
942
- * ``enhancer_model_path``: optional path to the enhancer model. Loaded only once, the config is stored in the ``enhancer_model`` field
943
- * ``train_ds``
944
- * ``text_data``: properties related to text-only data
945
- * ``manifest_filepath``: path (or paths) to :ref:`text-only dataset <Hybrid-ASR-TTS_model__Text-Only-Data>` manifests
946
- * ``speakers_filepath``: path (or paths) to the text file containing speaker ids for the multi-speaker TTS model (speakers are sampled randomly during training)
947
- * ``min_words`` and ``max_words``: parameters to filter text-only manifests by the number of words
948
- * ``tokenizer_workers``: number of workers for initial tokenization (when loading the data). ``num_CPUs / num_GPUs`` is a recommended value.
949
- * ``asr_tts_sampling_technique``, ``asr_tts_sampling_temperature``, ``asr_tts_sampling_probabilities``: sampling parameters for text-only and audio-text data (if both specified). Correspond to ``sampling_technique``, ``sampling_temperature``, and ``sampling_probabilities`` parameters of the :mod:`ConcatDataset <nemo.collections.common.data.dataset.ConcatDataset>`.
950
- * all other components are similar to conventional ASR models
951
- * ``validation_ds`` and ``test_ds`` correspond to the underlying ASR model
952
-
953
-
954
- .. code-block:: yaml
-
-    model:
-      sample_rate: 16000
-
-      # asr model
-      asr_model_path: ???
-      asr_model: null
-      asr_model_type: null  # rnnt_bpe, ctc_bpe or hybrid_rnnt_ctc_bpe; needed only if instantiating from config, otherwise type is auto inferred
-      asr_model_fuse_bn: false  # only ConformerEncoder supported now, use false for other models
-
-      # tts model
-      tts_model_path: ???
-      tts_model: null
-
-      # enhancer model
-      enhancer_model_path: null
-      enhancer_model: null
-
-      train_ds:
-        text_data:
-          manifest_filepath: ???
-          speakers_filepath: ???
-          min_words: 1
-          max_words: 45  # 45 - recommended value, ~16.7 sec for LibriSpeech
-          tokenizer_workers: 1
-        asr_tts_sampling_technique: round-robin  # random, round-robin, temperature
-        asr_tts_sampling_temperature: null
-        asr_tts_sampling_probabilities: null  # [0.5,0.5] – ASR,TTS
-        manifest_filepath: ???
-        batch_size: 16  # you may increase batch_size if your memory allows
-        # other params
986
-
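- The mandatory ``???`` fields of this config can also be filled in programmatically with OmegaConf before training, as in the following sketch (the config filename and all file paths below are placeholders):
-
- .. code-block:: python
-
-    from omegaconf import OmegaConf
-
-    # Placeholder path to a local copy of the hybrid ASR-TTS config shown above.
-    cfg = OmegaConf.load("hybrid_asr_tts.yaml")
-
-    # Fill the mandatory (???) fields before training.
-    cfg.model.asr_model_path = "/models/asr_transducer.nemo"                        # placeholder
-    cfg.model.tts_model_path = "/models/fastpitch_multispeaker.nemo"                # placeholder
-    cfg.model.train_ds.manifest_filepath = "/data/train_audio_text_manifest.json"   # placeholder
-    cfg.model.train_ds.text_data.manifest_filepath = "/data/train_text_only.json"   # placeholder
-    cfg.model.train_ds.text_data.speakers_filepath = "/data/speakers.txt"           # placeholder
-
-    print(OmegaConf.to_yaml(cfg.model.train_ds))
-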
987
- Finetuning with Text-Only Data
988
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
989
-
990
- To fine-tune an existing ASR model using text-only data, use the ``<NeMo_git_root>/examples/asr/asr_with_tts/speech_to_text_bpe_with_text_finetune.py`` script with the corresponding config ``<NeMo_git_root>/examples/asr/conf/asr_tts/hybrid_asr_tts.yaml``.
991
-
992
- Please specify paths to all the required models (ASR, TTS, and Enhancer checkpoints), along with ``train_ds.text_data.manifest_filepath`` and ``train_ds.text_data.speakers_filepath``.
993
-
994
- .. code-block:: shell
995
-
996
- python speech_to_text_bpe_with_text_finetune.py \
997
- model.asr_model_path=<path to ASR model> \
998
- model.tts_model_path=<path to compatible TTS model> \
999
- model.enhancer_model_path=<optional path to enhancer model> \
1000
- model.asr_model_fuse_bn=<true recommended if ConformerEncoder with BatchNorm, false otherwise> \
1001
- model.train_ds.manifest_filepath=<path to manifest with audio-text pairs or null> \
1002
- model.train_ds.text_data.manifest_filepath=<path(s) to manifest with train text> \
1003
- model.train_ds.text_data.speakers_filepath=<path(s) to speakers list> \
1004
- model.train_ds.text_data.tokenizer_workers=4 \
1005
- model.validation_ds.manifest_filepath=<path to validation manifest> \
1006
- model.train_ds.batch_size=<batch_size>
1007
-
1008
- Training from Scratch
1009
- ~~~~~~~~~~~~~~~~~~~~~
1010
-
1011
- To train an ASR model from scratch using text-only data, use the ``<NeMo_git_root>/examples/asr/asr_with_tts/speech_to_text_bpe_with_text.py`` script with a conventional ASR model config, e.g. ``<NeMo_git_root>/examples/asr/conf/conformer/conformer_ctc_bpe.yaml`` or ``<NeMo_git_root>/examples/asr/conf/conformer/conformer_transducer_bpe.yaml``.
1012
-
1013
- Please specify the ASR model type, the path to the TTS model, and (optionally) the enhancer, along with the text-only data-related fields.
1014
- Use ``++`` or ``+`` markers for these options, since the options are not present in the original ASR model config.
1015
-
1016
- .. code-block:: shell
1017
-
1018
- python speech_to_text_bpe_with_text.py \
1019
- ++asr_model_type=<rnnt_bpe or ctc_bpe> \
1020
- ++tts_model_path=<path to compatible tts model> \
1021
- ++enhancer_model_path=<optional path to enhancer model> \
1022
- ++model.train_ds.text_data.manifest_filepath=<path(s) to manifests with train text> \
1023
- ++model.train_ds.text_data.speakers_filepath=<path(s) to speakers list> \
1024
- ++model.train_ds.text_data.min_words=1 \
1025
- ++model.train_ds.text_data.max_words=45 \
1026
- ++model.train_ds.text_data.tokenizer_workers=4
1027
-
1028
- Fine-tuning Configurations
1029
- --------------------------
1030
-
1031
- All ASR scripts support easy fine-tuning by partially/fully loading the pretrained weights from a checkpoint into the **currently instantiated model**. Note that the currently instantiated model must have parameters that match the pre-trained checkpoint so that the weights can load properly. To directly fine-tune a pre-existing checkpoint, please follow the tutorial `ASR Language Fine-tuning <https://colab.research.google.com/github/NVIDIA/NeMo/blob/stable/tutorials/asr/ASR_CTC_Language_Finetuning.ipynb>`_.
1032
-
1033
- Models can be fine-tuned in two ways:
1034
- * By updating or retaining the current tokenizer alone
1035
- * By updating the model architecture and tokenizer
1036
-
1037
- Fine-tuning by updating or retaining current tokenizer
1038
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1039
-
1040
- In this case, the model architecture is not updated. The model is initialized with the pre-trained weights in one of
1041
- two ways:
1042
-
1043
- 1) Providing a path to a NeMo model (via ``init_from_nemo_model``)
1044
- 2) Providing a name of a pretrained NeMo model (which will be downloaded via the cloud) (via ``init_from_pretrained_model``)
1045
-
1046
- Users can then either reuse the existing tokenizer or update it with a new vocabulary. This is useful when users don't want to change the model architecture
1047
- but do want to update the tokenizer with a new vocabulary.
1048
-
1049
- The same script can be used to fine-tune CTC, RNNT, or Hybrid models.
1050
-
1051
- The ``<NeMo_repo>/examples/asr/speech_to_text_finetune.py`` script supports this type of fine-tuning with the following arguments:
1052
-
1053
- .. code-block:: sh
-
-    # model.tokenizer.update_tokenizer: True to update the tokenizer, False to retain the existing tokenizer
-    # model.tokenizer.dir: path to the tokenizer dir, used when update_tokenizer=True
-    # model.tokenizer.type: tokenizer type, used when update_tokenizer=True
-    python examples/asr/speech_to_text_finetune.py \
-        --config-path=<path to dir of configs> \
-        --config-name=<name of config without .yaml> \
-        model.train_ds.manifest_filepath="<path to manifest file>" \
-        model.validation_ds.manifest_filepath="<path to manifest file>" \
-        model.tokenizer.update_tokenizer=<True/False> \
-        model.tokenizer.dir=<path to tokenizer dir> \
-        model.tokenizer.type=<tokenizer type> \
-        trainer.devices=-1 \
-        trainer.accelerator='gpu' \
-        trainer.max_epochs=50 \
-        +init_from_nemo_model="<path to .nemo model file>"  # or +init_from_pretrained_model="<name of pretrained checkpoint>"
1067
-
1068
- Fine-tuning by changing model architecture and tokenizer
1069
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1070
-
1071
- If users also want to update the model architecture, they can use the scripts of the corresponding ASR subtask, as shown below.
1072
-
1073
- Pre-trained weights can be provided in multiple ways:
1074
-
1075
- 1) Providing a path to a NeMo model (via ``init_from_nemo_model``)
1076
- 2) Providing a name of a pretrained NeMo model (which will be downloaded via the cloud) (via ``init_from_pretrained_model``)
1077
- 3) Providing a path to a Pytorch Lightning checkpoint file (via ``init_from_ptl_ckpt``)
1078
-
1079
- There are multiple ASR subtasks inside the ``examples/asr/`` directory; substitute the ``<subtask>`` tag below accordingly.
1080
-
1081
- .. code-block:: sh
-
-    python examples/asr/<subtask>/script_to_<script_name>.py \
-        --config-path=<path to dir of configs> \
-        --config-name=<name of config without .yaml> \
-        model.train_ds.manifest_filepath="<path to manifest file>" \
-        model.validation_ds.manifest_filepath="<path to manifest file>" \
-        trainer.devices=-1 \
-        trainer.accelerator='gpu' \
-        trainer.max_epochs=50 \
-        +init_from_nemo_model="<path to .nemo model file>"  # (or +init_from_pretrained_model, +init_from_ptl_ckpt)
1092
-
1093
- To reinitialize only part of the model (so that it differs from the pretrained model), users can specify which modules to include or exclude in the config:
1094
-
1095
- .. code-block:: yaml
-
-    init_from_nemo_model: "<path to .nemo model file>"
-    asr_model:
-      include: ["preprocessor","encoder"]
-      exclude: ["decoder"]
1101
-
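- Conceptually, the ``include``/``exclude`` filtering behaves like the following sketch, which copies only the matching parameters from a checkpoint's state dict into the new model. This is an illustration under the assumption that the checkpoint holds a plain PyTorch state dict; it is not NeMo's actual implementation.
-
- .. code-block:: python
-
-    import torch
-
-    def selective_load(model: torch.nn.Module, ckpt_path: str,
-                       include=("preprocessor", "encoder"), exclude=("decoder",)):
-        """Copy only parameters whose names match `include` and do not match `exclude`."""
-        ckpt_state = torch.load(ckpt_path, map_location="cpu")
-        filtered = {
-            name: tensor
-            for name, tensor in ckpt_state.items()
-            if any(name.startswith(prefix) for prefix in include)
-            and not any(name.startswith(prefix) for prefix in exclude)
-        }
-        # strict=False leaves every parameter not present in `filtered` at its current value.
-        missing, unexpected = model.load_state_dict(filtered, strict=False)
-        return missing, unexpected
-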
1102
- Fine-tuning Execution Flow Diagram
1103
- ----------------------------------
1104
-
1105
- When preparing your own training or fine-tuning scripts, please follow the order given in the execution flow diagram to ensure correct inference.
1106
-
1107
- Depending on the type of model, there may be extra steps that must be performed -
1108
-
1109
- * CTC Models - `Examples directory for CTC Models <https://github.com/NVIDIA/NeMo/blob/stable/examples/asr/asr_ctc/README.md>`_
1110
- * RNN Transducer Models - `Examples directory for Transducer Models <https://github.com/NVIDIA/NeMo/blob/stable/examples/asr/asr_transducer/README.md>`_
 
 
SoundScribe/SpeakerID/docs/source/asr/data/asrlm_results.csv DELETED
@@ -1,2 +0,0 @@
1
- Model Name,Model Base Class,Model Card
2
- asrlm_en_transformer_large_ls,TransformerLMModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:asrlm_en_transformer_large_ls"
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_by.csv DELETED
@@ -1,2 +0,0 @@
1
- Model,Model Base Class,Model Card
2
- stt_by_fastconformer_hybrid_large_pc,EncDecHybridRNNTCTCBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_by_fastconformer_hybrid_large_pc"
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_ca.csv DELETED
@@ -1,4 +0,0 @@
1
- Model,Model Base Class,Model Card
2
- stt_ca_quartznet15x5,EncDecCTCModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_ca_quartznet15x5"
3
- stt_ca_conformer_ctc_large,EncDecCTCModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_ca_conformer_ctc_large"
4
- stt_ca_conformer_transducer_large,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_ca_conformer_transducer_large"
 
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_code_switching.csv DELETED
@@ -1,3 +0,0 @@
1
- Model,Model Base Class,Model Card
2
- stt_enes_conformer_ctc_large_codesw,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_enes_conformer_ctc_large_codesw"
3
- stt_enes_conformer_transducer_large_codesw,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_enes_conformer_transducer_large_codesw"
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_de.csv DELETED
@@ -1,7 +0,0 @@
1
- Model,Model Base Class,Model Card
2
- stt_de_quartznet15x5,EncDecCTCModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_de_quartznet15x5"
3
- stt_de_citrinet_1024,EncDecCTCModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_de_citrinet_1024"
4
- stt_de_contextnet_1024,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_de_contextnet_1024"
5
- stt_de_conformer_ctc_large,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_de_conformer_ctc_large"
6
- stt_de_conformer_transducer_large,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_de_conformer_transducer_large"
7
- stt_de_fastconformer_hybrid_large_pc,EncDecHybridRNNTCTCBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_de_fastconformer_hybrid_large_pc"
 
 
 
 
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_en.csv DELETED
@@ -1,41 +0,0 @@
1
- Model Name,Model Base Class,Model Card
2
- QuartzNet15x5Base-En,EncDecCTCModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemospeechmodels"
3
- stt_en_jasper10x5dr,EncDecCTCModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_jasper10x5dr"
4
- stt_en_citrinet_256,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_citrinet_256"
5
- stt_en_citrinet_512,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_citrinet_512"
6
- stt_en_citrinet_1024,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_citrinet_1024"
7
- stt_en_citrinet_256_gamma_0_25,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_citrinet_256_gamma_0_25"
8
- stt_en_citrinet_512_gamma_0_25,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_citrinet_512_gamma_0_25"
9
- stt_en_citrinet_1024_gamma_0_25,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_citrinet_1024_gamma_0_25"
10
- stt_en_contextnet_256_mls,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_contextnet_256_mls"
11
- stt_en_contextnet_512_mls,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_contextnet_512_mls"
12
- stt_en_contextnet_1024_mls,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_contextnet_1024_mls"
13
- stt_en_contextnet_256,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_contextnet_256"
14
- stt_en_contextnet_512,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_contextnet_512"
15
- stt_en_contextnet_1024,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_contextnet_1024"
16
- stt_en_conformer_ctc_small,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_conformer_ctc_small"
17
- stt_en_conformer_ctc_medium,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_conformer_ctc_medium"
18
- stt_en_conformer_ctc_large,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_conformer_ctc_large"
19
- stt_en_conformer_ctc_xlarge,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_conformer_ctc_xlarge"
20
- stt_en_conformer_ctc_small_ls,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_conformer_ctc_small_ls"
21
- stt_en_conformer_ctc_medium_ls,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_conformer_ctc_medium_ls"
22
- stt_en_conformer_ctc_large_ls,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_conformer_ctc_large_ls"
23
- stt_en_conformer_transducer_large_ls,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_conformer_transducer_large_ls"
24
- stt_en_conformer_transducer_small,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_conformer_transducer_small"
25
- stt_en_conformer_transducer_medium,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_conformer_transducer_medium"
26
- stt_en_conformer_transducer_large,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_conformer_transducer_large"
27
- stt_en_conformer_transducer_xlarge,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_conformer_transducer_xlarge"
28
- stt_en_conformer_transducer_xxlarge,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_conformer_transducer_xxlarge"
29
- stt_en_fastconformer_ctc_large_ls,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_fastconformer_ctc_large_ls"
30
- stt_en_fastconformer_transducer_large_ls,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_fastconformer_transducer_large_ls"
31
- stt_en_fastconformer_transducer_large,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_fastconformer_transducer_large"
32
- stt_en_fastconformer_ctc_large,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_fastconformer_ctc_large"
33
- stt_en_fastconformer_hybrid_large_pc,EncDecHybridRNNTCTCBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_fastconformer_hybrid_large_pc"
34
- stt_en_fastconformer_transducer_xlarge,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_fastconformer_transducer_xlarge"
35
- stt_en_fastconformer_ctc_xlarge,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_fastconformer_ctc_xlarge"
36
- stt_en_fastconformer_transducer_xxlarge,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_fastconformer_transducer_xxlarge"
37
- stt_en_fastconformer_ctc_xxlarge,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_fastconformer_ctc_xxlarge"
38
- stt_en_fastconformer_hybrid_large_streaming_80ms,EncDecHybridRNNTCTCBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_fastconformer_hybrid_large_streaming_80ms"
39
- stt_en_fastconformer_hybrid_large_streaming_480ms,EncDecHybridRNNTCTCBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_fastconformer_hybrid_large_streaming_480ms"
40
- stt_en_fastconformer_hybrid_large_streaming_1040ms,EncDecHybridRNNTCTCBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_fastconformer_hybrid_large_streaming_1040ms"
41
- stt_en_fastconformer_hybrid_large_streaming_multi,EncDecHybridRNNTCTCBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_en_fastconformer_hybrid_large_streaming_multi"
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_es.csv DELETED
@@ -1,8 +0,0 @@
1
- Model,Model Base Class,Model Card
2
- stt_es_quartznet15x5,EncDecCTCModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_es_quartznet15x5"
3
- stt_es_citrinet_512,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_es_citrinet_512"
4
- stt_es_citrinet_1024_gamma_0_25,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_es_citrinet_1024_gamma_0_25"
5
- stt_es_conformer_ctc_large,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_es_conformer_ctc_large"
6
- stt_es_conformer_transducer_large,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_es_conformer_transducer_large"
7
- stt_es_contextnet_1024,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_es_contextnet_1024"
8
- stt_es_fastconformer_hybrid_large_pc,EncDecHybridRNNTCTCBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_es_fastconformer_hybrid_large_pc"
 
 
 
 
 
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_fr.csv DELETED
@@ -1,9 +0,0 @@
1
- Model,Model Base Class,Model Card
2
- stt_fr_quartznet15x5,EncDecCTCModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_fr_quartznet15x5"
3
- stt_fr_citrinet_1024_gamma_0_25,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_fr_citrinet_1024_gamma_0_25"
4
- stt_fr_no_hyphen_citrinet_1024_gamma_0_25,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_fr_citrinet_1024_gamma_0_25"
5
- stt_fr_contextnet_1024,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_fr_contextnet_1024"
6
- stt_fr_conformer_ctc_large,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_fr_conformer_ctc_large"
7
- stt_fr_no_hyphen_conformer_ctc_large,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_fr_conformer_ctc_large"
8
- stt_fr_conformer_transducer_large,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_fr_conformer_transducer_large"
9
- stt_fr_fastconformer_hybrid_large_pc,EncDecHybridRNNTCTCBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_fr_fastconformer_hybrid_large_pc"
 
 
 
 
 
 
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_hi.csv DELETED
@@ -1,2 +0,0 @@
1
- Model Name,Model Base Class,Model Card
2
- stt_hi_conformer_ctc_medium,EncDecCTCModelBPE,"https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_hi_conformer_ctc_medium"
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_hr.csv DELETED
@@ -1,4 +0,0 @@
1
- Model,Model Base Class,Model Card
2
- stt_hr_conformer_ctc_large,EncDecCTCModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_hr_conformer_ctc_large"
3
- stt_hr_conformer_transducer_large,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_hr_conformer_transducer_large"
4
- stt_hr_fastconformer_hybrid_large_pc,EncDecHybridRNNTCTCBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_hr_fastconformer_hybrid_large_pc"
 
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_it.csv DELETED
@@ -1,3 +0,0 @@
1
- Model,Model Base Class,Model Card
2
- stt_it_quartznet15x5,EncDecCTCModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_it_quartznet15x5"
3
- stt_it_fastconformer_hybrid_large_pc,EncDecHybridRNNTCTCBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_it_fastconformer_hybrid_large_pc"
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_kab.csv DELETED
@@ -1,2 +0,0 @@
1
- Model,Model Base Class,Model Card
2
- stt_kab_conformer_transducer_large,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_kab_conformer_transducer_large"
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_mr.csv DELETED
@@ -1,3 +0,0 @@
1
- Model Name,Model Base Class,Model Card
2
- stt_mr_conformer_ctc_medium,EncDecCTCModelBPE,"https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_mr_conformer_ctc_medium"
3
-
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_multilingual.csv DELETED
@@ -1,5 +0,0 @@
1
- Model,Model Base Class,Model Card
2
- stt_enes_conformer_ctc_large,EncDecCTCModelBPE,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_enes_conformer_ctc_large"
3
- stt_enes_conformer_transducer_large,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_enes_conformer_transducer_large"
4
- stt_multilingual_fastconformer_hybrid_large_pc,EncDecHybridRNNTCTCBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_multilingual_fastconformer_hybrid_large_pc"
5
- stt_multilingual_fastconformer_hybrid_large_pc_blend_eu,EncDecHybridRNNTCTCBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_multilingual_fastconformer_hybrid_large_pc_blend_eu"
 
 
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_pl.csv DELETED
@@ -1,3 +0,0 @@
1
- Model,Model Base Class,Model Card
2
- stt_pl_quartznet15x5,EncDecCTCModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_pl_quartznet15x5"
3
- stt_pl_fastconformer_hybrid_large_pc,EncDecHybridRNNTCTCBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_pl_fastconformer_hybrid_large_pc"
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_ru.csv DELETED
@@ -1,4 +0,0 @@
1
- Model,Model Base Class,Model Card
2
- stt_ru_quartznet15x5,EncDecCTCModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_ru_quartznet15x5"
3
- stt_ru_fastconformer_hybrid_large_pc,EncDecHybridRNNTCTCBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_ru_fastconformer_hybrid_large_pc"
4
-
 
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_rw.csv DELETED
@@ -1,3 +0,0 @@
1
- Model,Model Base Class,Model Card
2
- stt_rw_conformer_ctc_large,EncDecCTCModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_rw_conformer_ctc_large"
3
- stt_rw_conformer_transducer_large,EncDecRNNTBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_rw_conformer_transducer_large"
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_ua.csv DELETED
@@ -1,2 +0,0 @@
1
- Model,Model Base Class,Model Card
2
- stt_ua_fastconformer_hybrid_large_pc,EncDecHybridRNNTCTCBPEModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_ua_fastconformer_hybrid_large_pc"
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/benchmark_zh.csv DELETED
@@ -1,4 +0,0 @@
1
- Model,Model Base Class,Model Card
2
- stt_zh_citrinet_512,EncDecCTCModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_zh_citrinet_512"
3
- stt_zh_citrinet_1024_gamma_0_25,EncDecCTCModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_zh_citrinet_1024_gamma_0_25"
4
- stt_zh_conformer_transducer_large,EncDecRNNTModel,"https://ngc.nvidia.com/catalog/models/nvidia:nemo:stt_zh_conformer_transducer_large"
 
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/scores/be/conformer_be.csv DELETED
@@ -1,3 +0,0 @@
1
- Model Name,Language,MCV Test-Set v10 (be)
2
- stt_be_conformer_ctc_large,be,4.7 %
3
- stt_be_conformer_transducer_large,be,3.8 %
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/scores/by/fastconformer_by.csv DELETED
@@ -1,2 +0,0 @@
1
- Model Name,Language,MCV Dev-Set v12.0 (be),MCV Test-Set v12.0 (be)
2
- stt_by_fastconformer_hybrid_large_pc,by,2.7 %,2.7 %
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/scores/ca/conformer_ca.csv DELETED
@@ -1,3 +0,0 @@
1
- Model Name,Language,MCV Dev-Set (v??) (ca),MCV Dev-Set v9.0 (ca),MCV Test-Set v9.0 (ca)
2
- stt_ca_conformer_ctc_large,ca,,4.70,4.27
3
- stt_ca_conformer_transducer_large,ca,,4.43,3.85
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/scores/ca/quartznet15x5_ca.csv DELETED
@@ -1,2 +0,0 @@
1
- Model Name,Language,MCV Dev-Set (v??) (ca),MCV Dev-Set v9.0 (ca),MCV Test-Set v9.0 (ca)
2
- stt_ca_quartznet15x5,ca,6.0,,
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/scores/de/citrinet_de.csv DELETED
@@ -1,2 +0,0 @@
1
- Model Name,Language,MCV Dev-Set (v??) (de),MCV Dev-Set v12.0 (de),MCV Dev-Set v7.0 (de),MCV Test-Set v12.0 (de),MCV Test-Set v7.0 (de),MLS Dev (en),MLS Test (en),VoxPopuli Dev (de),VoxPopuli Test (de)
2
- stt_de_citrinet_1024,de,,,6.63,,7.59,4.06,5.07,12.33,10.02
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/scores/de/conformer_de.csv DELETED
@@ -1,3 +0,0 @@
1
- Model Name,Language,MCV Dev-Set (v??) (de),MCV Dev-Set v12.0 (de),MCV Dev-Set v7.0 (de),MCV Test-Set v12.0 (de),MCV Test-Set v7.0 (de),MLS Dev (en),MLS Test (en),VoxPopuli Dev (de),VoxPopuli Test (de)
2
- stt_de_conformer_ctc_large,de,,,5.84,,6.68,3.85,4.63,12.56,10.51
3
- stt_de_conformer_transducer_large,de,,,4.75,,5.36,3.46,4.19,11.21,9.14
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/scores/de/contextnet_de.csv DELETED
@@ -1,2 +0,0 @@
1
- Model Name,Language,MCV Dev-Set (v??) (de),MCV Dev-Set v12.0 (de),MCV Dev-Set v7.0 (de),MCV Test-Set v12.0 (de),MCV Test-Set v7.0 (de),MLS Dev (en),MLS Test (en),VoxPopuli Dev (de),VoxPopuli Test (de)
2
- stt_de_contextnet_1024,de,,,4.76,,5.5,3.53,4.2,11.32,9.4
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/scores/de/fastconformer_de.csv DELETED
@@ -1,2 +0,0 @@
1
- Model Name,Language,MCV Dev-Set (v??) (de),MCV Dev-Set v12.0 (de),MCV Dev-Set v7.0 (de),MCV Test-Set v12.0 (de),MCV Test-Set v7.0 (de),MLS Dev (en),MLS Test (en),VoxPopuli Dev (de),VoxPopuli Test (de)
2
- stt_de_fastconformer_hybrid_large_pc,de,,4.2 %,,4.9 %,,3.3 %,3.8 %,10.8 %,8.7 %
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/scores/de/quartznet15x5_de.csv DELETED
@@ -1,2 +0,0 @@
1
- Model Name,Language,MCV Dev-Set (v??) (de),MCV Dev-Set v12.0 (de),MCV Dev-Set v7.0 (de),MCV Test-Set v12.0 (de),MCV Test-Set v7.0 (de),MLS Dev (en),MLS Test (en),VoxPopuli Dev (de),VoxPopuli Test (de)
2
- stt_de_quartznet15x5,de,11.78,,,,,,,,
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/scores/en/citrinet_en.csv DELETED
@@ -1,7 +0,0 @@
1
- Model Name,Language,EuroParl Test Set (en),Fisher Test Set (en),Librispeech Dev-Clean,Librispeech Dev-Other,Librispeech Test-Clean,Librispeech Test-Other,MCV Test-Set v11.0 (en),MCV Test-Set v8.0 (en),MLS Dev (en),MLS Test (en),NSC Part1,NSC Part6,Peoples Speech Test v1,SLR 83 Test,SPGI Test,VoxPopuli Test (en),WSJ Dev 93,WSJ Eval 92
2
- stt_en_citrinet_256,en,,,4.2 % WER,10.7 % WER,4.4 % WER,10.7 % WER,,,,,,,,,,,,
3
- stt_en_citrinet_512,en,,,3.7 % WER,8.9 % WER,3.7 % WER,8.9 % WER,,,,,,,,,,,,
4
- stt_en_citrinet_1024,en,,,3.7 % WER,8.3 % WER,3.6 % WER,7.9 % WER,,,,,,,,,,,,
5
- stt_en_citrinet_256_gamma_0_25,en,,,4.7 %,10.6 %,4.8 %,10.7 %,,,,,8.3 %,,,,,,5.8 %,3.6 %
6
- stt_en_citrinet_512_gamma_0_25,en,,,4.0 %,9.0 %,3.9 %,9.0 %,,,,,6.9 %,,,,,,4.4 %,3.6 %
7
- stt_en_citrinet_1024_gamma_0_25,en,,,3.4 %,7.7 %,3.4 %,7.6 %,,,,,6.2 %,,,,,,4.0 %,2.5 %
 
 
 
 
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/scores/en/conformer_en.csv DELETED
@@ -1,28 +0,0 @@
1
- Model Name,Language,EuroParl Test Set (en),Fisher Test Set (en),Librispeech Dev-Clean,Librispeech Dev-Other,Librispeech Test-Clean,Librispeech Test-Other,MCV Test-Set v11.0 (en),MCV Test-Set v8.0 (en),MLS Dev (en),MLS Test (en),NSC Part1,NSC Part6,Peoples Speech Test v1,SLR 83 Test,SPGI Test,VoxPopuli Test (en),WSJ Dev 93,WSJ Eval 92
2
- stt_en_conformer_ctc_small,en,,,3.6,8.1,3.7,8.1,,,,,,,,,,,,
3
- stt_en_conformer_ctc_medium,en,,,2.5,5.8,2.6,5.9,,,,,,,,,,,,
4
- stt_en_conformer_ctc_large,en,,,1.9,4.4,2.1,4.5,,,,,,,,,,,,
5
- stt_en_conformer_ctc_xlarge,en,,,1.77 %,3.79 %,2.00 %,3.74 %,,7.88 %,,5.99 %,,6.44 %,22.90 %,5.50 %,,,2.36 %,
6
- stt_en_conformer_ctc_small_ls,en,,,3.3,8.8,3.4,8.8,,,,,,,,,,,,
7
- stt_en_conformer_ctc_medium_ls,en,,,2.7,7.4,3.0,7.3,,,,,,,,,,,,
8
- stt_en_conformer_ctc_large_ls,en,,,2.4,6.2,2.7,6.0,,,,,,,,,,,,
9
- stt_en_conformer_transducer_small,en,,,2.8,6.6,2.5,6.6,,,,,,,,,,,,
10
- stt_en_conformer_transducer_medium,en,,,2.0,4.6,2.1,4.7,,,,,,,,,,,,
11
- stt_en_conformer_transducer_large,en,,,1.6,3.5,1.7,3.7,,,,,,,,,,,,
12
- stt_en_conformer_transducer_large_ls,en,,,2.1,5.0,2.3,5.1,,,,,,,,,,,,
13
- stt_en_conformer_transducer_xlarge,en,,,1.48 %,2.95 %,1.62 %,3.01 %,,6.46 %,4.59 %,5.32 %,5.70 %,6.47 %,21.32 %,,,,2.05 %,1.17 %
14
- stt_en_conformer_transducer_xxlarge,en,,,1.52 %,3.09 %,1.72 %,3.14 %,,,5.29 %,5.85 %,6.64 %,,,,,,2.42 %,1.49 %
15
- stt_en_fastconformer_hybrid_large_streaming_80ms (CTC),en,,,,,3.5 %,8.1 %,,,10.2 %,7.2 %,,,,,,,3.5 %,2.3 %
16
- stt_en_fastconformer_hybrid_large_streaming_480ms (CTC),en,,,,,3.6 %,7.5 %,,,9.8 %,7.0 %,,,,,,,3.5 %,2.1 %
17
- stt_en_fastconformer_hybrid_large_streaming_1040ms (CTC),en,,,,,2.7 %,6.4 %,,,9.0 %,7.0 %,,,,,,,3.2 %,1.9 %
18
- stt_en_fastconformer_hybrid_large_streaming_80ms (RNNT),en,,,,,2.7 %,6.5 %,,,9.1 %,6.9 %,,,,,,,3.2 %,1.9 %
19
- stt_en_fastconformer_hybrid_large_streaming_480ms (RNNT),en,,,,,2.7 %,6.1 %,,,8.5 %,6.7 %,,,,,,,3.1 %,1.8 %
20
- stt_en_fastconformer_hybrid_large_streaming_1040ms (RNNT),en,,,,,2.3 %,5.5 %,,,8.0 %,6.6 %,,,,,,,2.9 %,1.6 %
21
- stt_en_fastconformer_hybrid_large_streaming_multi (RNNT - 0ms),en,,,,,,7.0 %,,,,,,,,,,,,
22
- stt_en_fastconformer_hybrid_large_streaming_multi (RNNT - 80ms),en,,,,,,6.4 %,,,,,,,,,,,,
23
- stt_en_fastconformer_hybrid_large_streaming_multi (RNNT - 480),en,,,,,,5.7 %,,,,,,,,,,,,
24
- stt_en_fastconformer_hybrid_large_streaming_multi (RNNT - 1040),en,,,,,,5.4 %,,,,,,,,,,,,
25
- stt_en_fastconformer_hybrid_large_streaming_multi (CTC - 0ms),en,,,,,,8.4 %,,,,,,,,,,,,
26
- stt_en_fastconformer_hybrid_large_streaming_multi (CTC - 80ms),en,,,,,,7.8 %,,,,,,,,,,,,
27
- stt_en_fastconformer_hybrid_large_streaming_multi (CTC - 480),en,,,,,,6.7 %,,,,,,,,,,,,
28
- stt_en_fastconformer_hybrid_large_streaming_multi (CTC - 1040),en,,,,,,6.2 %,,,,,,,,,,,,
 
 
SoundScribe/SpeakerID/docs/source/asr/data/scores/en/contextnet_en.csv DELETED
@@ -1,7 +0,0 @@
1
- Model Name,Language,EuroParl Test Set (en),Fisher Test Set (en),Librispeech Dev-Clean,Librispeech Dev-Other,Librispeech Test-Clean,Librispeech Test-Other,MCV Test-Set v11.0 (en),MCV Test-Set v8.0 (en),MLS Dev (en),MLS Test (en),NSC Part1,NSC Part6,Peoples Speech Test v1,SLR 83 Test,SPGI Test,VoxPopuli Test (en),WSJ Dev 93,WSJ Eval 92
2
- stt_en_contextnet_256,en,,,3.3 %,7.9 %,3.3 %,8.0 %,,,9.7 %,11.0 %,7.1 %,,,,,,4.6 %,3.2 %
3
- stt_en_contextnet_512,en,,,2.0 %,4.8 %,2.2 %,5.0 %,,,6.6 %,7.3 %,5.9 %,,,,,,2.8 %,1.4 %
4
- stt_en_contextnet_1024,en,,,1.7 %,3.8 %,1.9 %,4.0 %,,7.9 %,,5.9 %,5.2 %,6.5 %,21.7 %,4.7 %,,,2.3 %,1.3 %
5
- stt_en_contextnet_256_mls,en,,,,9.0 %,,9.2 %,,,9.4 %,10.9 %,,,,,,,,
6
- stt_en_contextnet_512_mls,en,,,,5.2 %,,5.2 %,,,5.6 %,6.6 %,,,,,,,,
7
- stt_en_contextnet_1024_mls,en,,,,4.1 %,,4.2 %,,,4.6 %,5.6 %,,,,,,,,
 
 
 
 
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/scores/en/fastconformer_en.csv DELETED
@@ -1,4 +0,0 @@
1
- Model Name,Language,EuroParl Test Set (en),Fisher Test Set (en),Librispeech Dev-Clean,Librispeech Dev-Other,Librispeech Test-Clean,Librispeech Test-Other,MCV Test-Set v11.0 (en),MCV Test-Set v8.0 (en),MLS Dev (en),MLS Test (en),NSC Part1,NSC Part6,Peoples Speech Test v1,SLR 83 Test,SPGI Test,VoxPopuli Test (en),WSJ Dev 93,WSJ Eval 92
2
- stt_en_fastconformer_ctc_large,en,,,1.9,4.2,2.1,4.2,,,,,,,,,,,,
3
- stt_en_fastconformer_transducer_large,en,,,2.0,3.8,1.8,3.8,,,,,,,,,,,,
4
- stt_en_fastconformer_hybrid_large_pc,en,8.0 %,10.3 %,,,2.0 %,4.1 %,8.2 %,,,4.5 %,4.6 %,,,,2.3 %,4.5 %,,
 
 
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/scores/en/jasper10x5dr_en.csv DELETED
@@ -1,2 +0,0 @@
1
- Model Name,Language,EuroParl Test Set (en),Fisher Test Set (en),Librispeech Dev-Clean,Librispeech Dev-Other,Librispeech Test-Clean,Librispeech Test-Other,MCV Test-Set v11.0 (en),MCV Test-Set v8.0 (en),MLS Dev (en),MLS Test (en),NSC Part1,NSC Part6,Peoples Speech Test v1,SLR 83 Test,SPGI Test,VoxPopuli Test (en),WSJ Dev 93,WSJ Eval 92
2
- stt_en_jasper10x5dr,en,,,3.74,10.21,,,,,,,,,,,,,,
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/scores/en/quartznet15x5_en.csv DELETED
@@ -1,2 +0,0 @@
1
- Model Name,Language,EuroParl Test Set (en),Fisher Test Set (en),Librispeech Dev-Clean,Librispeech Dev-Other,Librispeech Test-Clean,Librispeech Test-Other,MCV Test-Set v11.0 (en),MCV Test-Set v8.0 (en),MLS Dev (en),MLS Test (en),NSC Part1,NSC Part6,Peoples Speech Test v1,SLR 83 Test,SPGI Test,VoxPopuli Test (en),WSJ Dev 93,WSJ Eval 92
2
- stt_en_quartznet15x5,en,,,4.38,11.3,,,,,,,,,,,,,,
 
 
 
SoundScribe/SpeakerID/docs/source/asr/data/scores/en/squeezeformer_en.csv DELETED
@@ -1,7 +0,0 @@
1
- Model Name,Language,EuroParl Test Set (en),Fisher Test Set (en),Librispeech Dev-Clean,Librispeech Dev-Other,Librispeech Test-Clean,Librispeech Test-Other,MCV Test-Set v11.0 (en),MCV Test-Set v8.0 (en),MLS Dev (en),MLS Test (en),NSC Part1,NSC Part6,Peoples Speech Test v1,SLR 83 Test,SPGI Test,VoxPopuli Test (en),WSJ Dev 93,WSJ Eval 92
2
- stt_en_squeezeformer_ctc_xsmall_ls,en,,,3.6 %,9.7 %,3.8 %,9.4 %,,,,,,,,,,,,
3
- stt_en_squeezeformer_ctc_small_ls,en,,,2.9 %,7.4 %,3.1 %,7.4 %,,,,,,,,,,,,
4
- stt_en_squeezeformer_ctc_small_medium_ls,en,,,2.7 %,7.0 %,2.8 %,7.1 %,,,,,,,,,,,,
5
- stt_en_squeezeformer_ctc_medium_ls,en,,,2.4 %,6.2 %,2.6 %,6.3 %,,,,,,,,,,,,
6
- stt_en_squeezeformer_ctc_medium_large_ls,en,,,2.3 %,6.0 %,2.5 %,5.9 %,,,,,,,,,,,,
7
- stt_en_squeezeformer_ctc_large_ls,en,,,2.3 %,5.7 %,2.4 %,5.7 %,,,,,,,,,,,,