NLLB-200 Bilingual Translation Example (Portuguese to Emakhuwa)

This guide demonstrates how to use an NLLB-200 model fine-tuned for bilingual translation between Portuguese (por_Latn) and Emakhuwa (vmw_Latn).

Prerequisites

  • Install required packages:
pip install transformers torch sentencepiece
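
If you want to confirm the environment before downloading the model, the quick check below (a minimal sketch, not specific to this model) verifies the imports and reports whether a GPU is visible; the inference snippet later uses the same check to pick a device:

import torch
import transformers

# Report library versions and whether a CUDA device is visible.
print("transformers:", transformers.__version__)
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())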

Inference

from transformers import AutoModelForSeq2SeqLM, NllbTokenizer
import torch

src_lang = "por_Latn"  # source language: Portuguese
tgt_lang = "vmw_Latn"  # target language: Emakhuwa
text = "Olá, mundo das línguas!"  # example sentence to translate

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model_name = "felerminoali/nllb_bilingual_pt-vmw_65k"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)
tokenizer = NllbTokenizer.from_pretrained(model_name)

# Tell the tokenizer which language tokens to use for this pair.
tokenizer.src_lang = src_lang
tokenizer.tgt_lang = tgt_lang

inputs = tokenizer(
    text, return_tensors="pt", padding=True, truncation=True,
    max_length=1024
)

model.eval()  # turn off training mode (dropout, etc.)
result = model.generate(
    **inputs.to(model.device),
    # Force the decoder to start with the target-language token.
    forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang)
)

print(tokenizer.batch_decode(result, skip_special_tokens=True)[0])
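
Because the tokenizer call above already sets padding=True, the same pipeline handles several sentences in a single generate() call. The sketch below is an illustrative extension (the Portuguese sentences are placeholders) that reuses the model and tokenizer loaded above; num_beams and max_new_tokens are optional generation settings, not something the original example requires.

# Translate a small batch of sentences at once, reusing `model` and `tokenizer`.
sentences = [
    "Bom dia, como está?",
    "O livro está em cima da mesa.",
]

batch = tokenizer(
    sentences, return_tensors="pt", padding=True, truncation=True,
    max_length=1024
)
outputs = model.generate(
    **batch.to(model.device),
    forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
    num_beams=4,          # beam search often helps for low-resource pairs
    max_new_tokens=256,   # cap the length of the generated translation
)
for src, hyp in zip(sentences, tokenizer.batch_decode(outputs, skip_special_tokens=True)):
    print(f"{src} -> {hyp}")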