naymaraq committed (verified)
Commit 0d700fc · 1 Parent(s): 1d7bf00

Update README.md

Files changed (1)
  1. README.md +22 -2
README.md CHANGED
@@ -59,8 +59,28 @@ Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated sys


  ## How to Use the Model
- TODO
-
+ The model is available in the NeMo toolkit [2] and can be used as a pre-trained checkpoint for inference.
+
+ ### Automatically load the model
+
+ ```python
+ import nemo.collections.asr as nemo_asr
+ asr_model = nemo_asr.models.EncDecFrameClassificationModel.from_pretrained(model_name="frame_vad_multilingual_marblenet_v2.0.nemo")
+ ```
+
+ ### Perform VAD Inference
+
+ ```bash
+ python <NEMO_ROOT>/examples/asr/speech_classification/frame_vad_infer.py \
+     --config-path="../conf/vad" \
+     --config-name="frame_vad_infer_postprocess.yaml" \
+     vad.model_path=<Path to the .nemo file from which the model should be instantiated> \
+     input_manifest=<Path to the manifest file of evaluation data; audio files should have unique names> \
+     prepare_manifest.auto_split=True \
+     prepare_manifest.split_duration=7200 \
+     vad.parameters.shift_length_in_sec=0.02 \
+     out_manifest_filepath=<Path to the output manifest file>
+ ```

  ## Software Integration:
  **Runtime Engine(s):**
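
The loading snippet added above pulls the checkpoint by name. As a minimal sketch under stated assumptions (the local file path is a placeholder, and `restore_from`, `.eval()`, and `.to()` are standard NeMo/PyTorch calls rather than anything introduced by this commit), a locally downloaded `.nemo` file can also be restored directly and prepared for inference:

```python
# Sketch, not part of this commit: restore the checkpoint from a local .nemo
# file instead of pulling it by name. The path below is a placeholder.
import torch
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecFrameClassificationModel.restore_from(
    restore_path="/path/to/frame_vad_multilingual_marblenet_v2.0.nemo"
)
asr_model.eval()                      # disable dropout/batch-norm updates for inference
if torch.cuda.is_available():
    asr_model = asr_model.to("cuda")  # NeMo models are regular PyTorch modules
```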
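
The `input_manifest` passed to `frame_vad_infer.py` is a JSON-lines manifest. The sketch below writes one using the common NeMo ASR manifest fields (`audio_filepath`, `offset`, `duration`, `label`, `text`); these field names follow the usual NeMo convention and are an assumption rather than something specified in this commit, so check the `frame_vad_infer_postprocess.yaml` config for the exact schema it expects:

```python
# Sketch, not part of this commit: write a JSON-lines manifest for
# frame_vad_infer.py. Field names follow the usual NeMo ASR manifest
# convention and may need adjusting; audio file names should be unique.
import json

entries = [
    {"audio_filepath": "/data/audio/utt_0001.wav", "offset": 0, "duration": None, "label": "infer", "text": "-"},
    {"audio_filepath": "/data/audio/utt_0002.wav", "offset": 0, "duration": None, "label": "infer", "text": "-"},
]

with open("vad_input_manifest.json", "w", encoding="utf-8") as f:
    for entry in entries:
        f.write(json.dumps(entry) + "\n")  # one JSON object per audio file
```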