license: other
license_name: nsclv1
license_link: https://developer.nvidia.com/downloads/license/nsclv1

NVIDIA Low Frame-rate Speech Codec

The Low Frame-rate Speech Codec is a neural audio codec that leverages finite scalar quantization and adversarial training with large speech language models to achieve high-quality audio compression with a 1.89 kbps bitrate and 21.5 frames per second.

Model Architecture

The Low Frame-rate Speech Codec is composed of a fully convolutional generator neural network and three discriminators. The generator comprises an encoder, followed by vector quantization, and a HiFi-GAN-based decoder. The encoder consists of five residual blocks, each containing three residual layers similar to the multi-receptive field fusion (MRF) module. For vector quantization, we use Finite Scalar Quantization (FSQ) with eight codebooks, four dimensions per code, and 2016 codes per codebook.

For the discriminators, we utilize three neural networks, all employing a squared-GAN and feature-matching loss. We adopt the multi-period discriminator and the multi-scale complex STFT discriminator. Additionally, we propose using Speech Language Models (SLMs) as a discriminator. SLMs encode information ranging from acoustic to semantic aspects, which benefits our model's training, especially in low frame-rate settings where accurate pronunciation is difficult to achieve due to the high compression rate. We adopt the 12-layer WavLM model, pre-trained on 94k hours of data, as the SLM. During training, we resample the input audio to 16 kHz before feeding it into WavLM and extract its intermediate-layer features. These features are then fed to a discriminative head composed of four 1D convolutional layers.
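
For intuition, the snippet below is a minimal sketch of FSQ, not the NeMo implementation: each latent dimension is bounded and rounded to a small fixed set of levels. The per-dimension level split (8, 7, 6, 6) is an assumption chosen so that its product matches the 2016 codes per codebook, and the comments verify the stated bitrate from the codebook configuration.

import math
import torch

# Assumed per-dimension level counts; 8 * 7 * 6 * 6 = 2016 codes per codebook.
LEVELS = (8, 7, 6, 6)

def fsq(z: torch.Tensor) -> torch.Tensor:
    """Minimal finite scalar quantization over the last dimension of z."""
    half = (torch.tensor(LEVELS, dtype=z.dtype) - 1) / 2
    bounded = torch.tanh(z) * half      # squash each dim onto its level grid
    rounded = torch.round(bounded)      # snap to the nearest level
    # straight-through estimator: gradients bypass the non-differentiable round
    return (bounded + (rounded - bounded).detach()) / half

# Bitrate sanity check: 8 codebooks * log2(2016) ≈ 87.8 bits per frame;
# at 21.5 frames per second that is ≈ 1888 bits/s ≈ 1.89 kbps, as stated above.
print(8 * math.log2(2016) * 21.5)  # ~1888.1

print(fsq(torch.randn(1, 4)))  # one quantized 4-dimensional code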

For more details, please check our paper.

Input

  • Input Type: Audio
  • Input Format(s): .wav files
  • Input Parameters: One-Dimensional (1D)
  • Other Properties Related to Input: 22050 Hz Mono-channel Audio

Output

  • Output Type: Audio
  • Output Format: .wav files
  • Output Parameters: One-Dimensional (1D)
  • Other Properties Related to Output: 22050 Hz Mono-channel Audio

How to Use this Model

The model is available in the NVIDIA NeMo toolkit and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

Inference

For inference, you can follow our Audio Codec Inference Tutorial which automatically downloads the model checkpoint. Note that you will need to set the model_name parameter to "nvidia/Low-Frame-rate-Speech-Codec".
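
If you want to load the checkpoint by name in your own script, here is a minimal sketch; it assumes NeMo's standard from_pretrained resolution works for this model name, as the tutorial's model_name parameter suggests:

from nemo.collections.tts.models import AudioCodecModel

# download and cache the released checkpoint by name, then switch to eval mode
codec_model = AudioCodecModel.from_pretrained(model_name="nvidia/Low-Frame-rate-Speech-Codec").eval()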

Alternatively, you can manually download the checkpoint and use the code below to make an inference on the model:

import librosa
import torch
import soundfile as sf
from nemo.collections.tts.models import AudioCodecModel

codec_path = ??? # set here the model .nemo checkpoint path
path_to_input_audio = ??? # path of the input audio
path_to_output_audio = ??? # path of the reconstructed output audio

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# load the codec checkpoint and move it to the same device as the input tensors
nemo_codec_model = AudioCodecModel.restore_from(restore_path=codec_path, map_location="cpu").to(device).eval()

# load the input audio at the codec's sample rate (22050 Hz)
audio, _ = librosa.load(path_to_input_audio, sr=nemo_codec_model.sample_rate)

audio_tensor = torch.from_numpy(audio).unsqueeze(dim=0).to(device)
audio_len = torch.tensor([audio_tensor.shape[1]]).to(device)

with torch.inference_mode():
    # get discrete tokens from audio
    encoded_tokens, encoded_len = nemo_codec_model.encode(audio=audio_tensor, audio_len=audio_len)

    # reconstruct audio from tokens
    reconstructed_audio, _ = nemo_codec_model.decode(tokens=encoded_tokens, tokens_len=encoded_len)

# save reconstructed audio
output_audio = reconstructed_audio.cpu().numpy().squeeze()
sf.write(path_to_output_audio, output_audio, nemo_codec_model.sample_rate)

Training

For fine-tuning on another dataset, please follow the steps available in our Audio Codec Training Tutorial. Note that you will need to set the CONFIG_FILENAME parameter to the "audio_codec_low_frame_rate_22050.yaml" config. You will also need to set pretrained_model_name to "audio_codec_low_frame_rate_22khz".
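
As a rough illustration of the fine-tuning starting point (a sketch only; the tutorial covers the full training setup, and the model name below relies on NeMo's pretrained-model registry):

from nemo.collections.tts.models import AudioCodecModel

# load the released weights as the initialization for fine-tuning
model = AudioCodecModel.from_pretrained(model_name="audio_codec_low_frame_rate_22khz")
model.train()  # switch to training mode before running your fine-tuning loop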

Training, Testing, and Evaluation Datasets

The Low Frame-rate Speech Codec was trained on 28.7k hours of speech data spanning 105 languages. The model was evaluated using multilingual audiobook-style data and high-quality English recordings. For further details, refer to our paper.

Training Datasets

The Low Frame-rate Speech Codec is trained on a total of 28.7k hours of speech data from 105 languages.

  • MLS English [25.5k hours]

    • Data Collection Method: by Human

    • Labeling Method: Automated

  • Common Voice [3.2k hours]

    • Data Collection Method: by Human

    • Labeling Method: by Human

Evaluation Datasets

  • MLS English

    • Data Collection Method: by Human

    • Labeling Method: Automated

  • Common Voice

    • Data Collection Method: by Human

    • Labeling Method: by Human

Test Datasets

  • MLS

    • Data Collection Method: by Human

    • Labeling Method: Automated

    • Properties: We randomly selected 200 samples from each of the eight languages in the 44kHz MLS dataset.

  • DAPS

    • Data Collection Method: by Human

    • Labeling Method: Automated

    • Properties: To assess our models' performance on studio-quality audio, we utilized the F10 and M10 speakers from the DAPS Clear dataset. These speakers were also employed in the evaluation of the DAC model.

Software Integration

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Ampere
  • NVIDIA Blackwell
  • NVIDIA Jetson
  • NVIDIA Hopper
  • NVIDIA Lovelace
  • NVIDIA Pascal
  • NVIDIA Turing
  • NVIDIA Volta

Runtime Engine

  • NeMo 2.0.0

Preferred Operating System

  • Linux

License/Terms of Use

This model is for research and development only (non-commercial use), and its use is covered by the NSCLv1 license.

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. Please report security vulnerabilities or NVIDIA AI Concerns here.