---
license: cc-by-nc-4.0
---

SONAR

[Paper] [Demo]

We introduce SONAR, a new multilingual and multimodal fixed-size sentence embedding space. Our single text encoder, covering 200 languages, substantially outperforms existing sentence embeddings such as LASER3 and LaBSE on the xsim and xsim++ multilingual similarity search tasks.

Speech segments can be embedded in the same SONAR embedding space using language-specific speech encoders trained in a teacher-student setting on speech transcription data. Our encoders outperform existing speech encoders on similarity search tasks. We also provide a text decoder for 200 languages, which allows us to perform text-to-text and speech-to-text machine translation, including for zero-shot language and modality combinations.

Our text-to-text results are competitive compared to the state-of-the-art NLLB 1B model, despite the fixed-size bottleneck representation. Our zero-shot speech-to-text translation results compare favorably with strong supervised baselines such as Whisper.

Model inference is supported thanks to fairseq2.

Installing

See our GitHub repo.
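At the time of writing, the package is typically installable from PyPI with `pip install sonar-space`, plus a fairseq2 build matching your hardware; the repo has the exact, up-to-date instructions.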

Usage

Compute text sentence embeddings:

from sonar.inference_pipelines.text import TextToEmbeddingModelPipeline
t2vec_model = TextToEmbeddingModelPipeline(encoder="text_sonar_basic_encoder",
                                           tokenizer="text_sonar_basic_encoder")
sentences = ['My name is SONAR.', 'I can embed the sentences into vectorial space.']
t2vec_model.predict(sentences, source_lang="eng_Latn").shape
# torch.Size([2, 1024])
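
Because all 200 languages are encoded into the same space, translations of the same sentence get nearby embeddings. Below is a minimal sketch of cross-lingual similarity, reusing the t2vec_model defined above (the sentence pair is illustrative):

import torch.nn.functional as F

eng_emb = t2vec_model.predict(["My name is SONAR."], source_lang="eng_Latn")
fra_emb = t2vec_model.predict(["Mon nom est SONAR."], source_lang="fra_Latn")
# embeddings of matching translations should have a high cosine similarity
print(F.cosine_similarity(eng_emb, fra_emb).item())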

Translate with SONAR

from sonar.inference_pipelines.text import TextToTextModelPipeline
t2t_model = TextToTextModelPipeline(encoder="text_sonar_basic_encoder",
                                    decoder="text_sonar_basic_decoder",
                                    tokenizer="text_sonar_basic_encoder")  # tokenizer is attached to both encoder and decoder cards

sentences = ['My name is SONAR.', 'I can embed the sentences into vectorial space.']
t2t_model.predict(sentences, source_lang="eng_Latn", target_lang="fra_Latn")
# ['Mon nom est SONAR.', "Je peux intégrer les phrases dans l'espace vectoriel."]
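
Languages are identified by their FLORES-200 codes (eng_Latn, fra_Latn, ...), and any of the 200 supported languages can be used as source or target. For instance, reusing the same t2t_model (Spanish, spa_Latn, shown as an example):

t2t_model.predict(sentences, source_lang="eng_Latn", target_lang="spa_Latn")
# returns the Spanish translations of the two sentences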

Compute speech sentence embeddings:

import torch
from sonar.inference_pipelines.speech import SpeechToEmbeddingPipeline, SpeechInferenceParams

speech_embedding_dp_builder = SpeechToEmbeddingPipeline.load_from_name("sonar_speech_encoder_eng")

speech_ctx = SpeechInferenceParams(
    data_file="..../test_fleurs_fra-eng.tsv",
    audio_root_dir=".../audio_zips",
    audio_path_index=2,
    batch_size=4,
)

speech_embedding_dp = speech_embedding_dp_builder.build_pipeline(speech_ctx)
with torch.inference_mode():
    speech_emb = next(iter(speech_embedding_dp))
speech_emb["audio"]["data"].sentence_embeddings

Speech-to-text with SONAR

import torch
from sonar.inference_pipelines import SpeechToTextPipeline, SpeechInferenceParams

speech_to_text_dp_builder = SpeechToTextPipeline.load_from_name(encoder_name="sonar_speech_encoder_eng", 
                                                                decoder_name="text_sonar_basic_decoder")

speech_ctx = SpeechInferenceParams(
    data_file=".../test_fleurs_fra-eng.tsv",
    audio_root_dir=".../audio_zips",
    audio_path_index=2,
    target_lang='fra_Latn',
    batch_size=4,
)
speech_to_text_dp = speech_to_text_dp_builder.build_pipeline(speech_ctx)
with torch.inference_mode():
    speech_text_translation = next(iter(speech_to_text_dp))
speech_text_translation
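
To translate the whole manifest instead of a single batch, iterate the data pipeline to the end (a minimal sketch; each element is one translated batch of batch_size segments):

translations = []
with torch.inference_mode():
    for batch in speech_to_text_dp:
        translations.append(batch)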

Predicting cross-lingual semantic similarity with BLASER-2 models

import torch
from sonar.models.blaser.loader import load_blaser_model

blaser_ref = load_blaser_model("blaser_st2st_ref_v2_0").eval()
blaser_qe = load_blaser_model("blaser_st2st_qe_v2_0").eval()
# BLASER-2 is supposed to work with SONAR speech and text embeddings,
# but we didn't include their extraction in this snippet, to keep it simple.
emb = torch.ones([1, 1024])
print(blaser_ref(src=emb, ref=emb, mt=emb).item())  # 5.2552
print(blaser_qe(src=emb, mt=emb).item())  # 4.9819
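
To score an actual translation rather than dummy vectors, the source, reference, and translation can each be embedded with the SONAR text pipeline shown above and passed to the BLASER-2 models (a sketch; the sentences are illustrative and t2vec_model from the text section is assumed to be available):

src_emb = t2vec_model.predict(["Le chat s'assit sur le tapis."], source_lang="fra_Latn")
ref_emb = t2vec_model.predict(["The cat sat on the mat."], source_lang="eng_Latn")
mt_emb = t2vec_model.predict(["The cat sat down on the carpet."], source_lang="eng_Latn")
with torch.inference_mode():
    print(blaser_ref(src=src_emb, ref=ref_emb, mt=mt_emb).item())  # reference-based quality score
    print(blaser_qe(src=src_emb, mt=mt_emb).item())                # reference-free (QE) score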

More complete demo notebooks are available in our GitHub repo.

Model details

  • Developed by: Paul-Ambroise Duquenne et al.

  • License: CC-BY-NC 4.0

  • Cite as:

    @article{Duquenne:2023:sonar_arxiv,
      author    = {Paul-Ambroise Duquenne and Holger Schwenk and Benoit Sagot},
      title     = {{SONAR:} Sentence-Level Multimodal and Language-Agnostic Representations},
      publisher = {arXiv},
      year      = {2023},
      url       = {https://arxiv.org/abs/unk},
    }