Transformers documentation
NLLB
This model was released on 2022-07-11 and added to Hugging Face Transformers on 2022-07-18.
NLLB (No Language Left Behind) is a multilingual translation model. It is trained on data mined with techniques tailored for low-resource languages and supports over 200 languages. NLLB features a conditional compute architecture using a Sparsely Gated Mixture of Experts.
You can find all the original NLLB checkpoints under the AI at Meta organization.
This model was contributed by Lysandre.
Click on the NLLB models in the right sidebar for more examples of how to apply NLLB to different translation tasks.
The example below demonstrates how to translate text with Pipeline or the AutoModel class.
import torch
from transformers import pipeline

# translation pipeline from English (eng_Latn) to French (fra_Latn)
translator = pipeline(
    task="translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="fra_Latn",
    torch_dtype=torch.float16,
    device=0,
)
translator("UN Chief says there is no military solution in Syria")
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.
The example below uses bitsandbytes to quantize the weights to 8-bits.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig

# quantize the 1.3B checkpoint's weights to 8-bit on load
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-1.3B", quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-1.3B")

article = "UN Chief says there is no military solution in Syria"
inputs = tokenizer(article, return_tensors="pt").to("cuda")

# force the first generated token to be the French language code
translated_tokens = model.generate(
    **inputs, forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"), max_length=30
)
print(tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0])
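If memory is still tight, bitsandbytes also supports 4-bit quantization; a hedged variant of the config above (this exact setup is an assumption, not part of the original example).

from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization: smaller memory footprint than 8-bit
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")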
Use the AttentionMaskVisualizer to better understand what tokens the model can and cannot attend to.
from transformers.utils.attention_visualizer import AttentionMaskVisualizer
visualizer = AttentionMaskVisualizer("facebook/nllb-200-distilled-600M")
visualizer("UN Chief says there is no military solution in Syria")

Notes
The tokenizer was updated in April 2023 to prefix the source sequence with the source language rather than the target language. This prioritizes zero-shot performance at a minor cost to supervised performance.
>>> from transformers import NllbTokenizer

>>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
>>> tokenizer("How was your day?").input_ids
[256047, 13374, 1398, 4260, 4039, 248130, 2]
To revert to the legacy behavior, use the code example below.
>>> from transformers import NllbTokenizer

>>> tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", legacy_behaviour=True)
For non-English source languages, specify the language's BCP-47 code with the src_lang keyword. The example below translates English to French; a Romanian-to-German sketch follows it.
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
>>> article = "UN Chief says there is no military solution in Syria"
>>> inputs = tokenizer(article, return_tensors="pt")
>>> translated_tokens = model.generate(
...     **inputs, forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"), max_length=30
... )
>>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
Le chef de l'ONU dit qu'il n'y a pas de solution militaire en Syrie
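A minimal sketch for a non-English source, here Romanian to German (the Romanian sentence and the output are illustrative, not from the original example).

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# src_lang tells the tokenizer which language code to attach to the input
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", src_lang="ron_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")

article = "Şeful ONU spune că nu există o soluţie militară în Siria"
inputs = tokenizer(article, return_tensors="pt")
# deu_Latn is the BCP-47 code NLLB uses for German
translated_tokens = model.generate(
    **inputs, forced_bos_token_id=tokenizer.convert_tokens_to_ids("deu_Latn"), max_length=30
)
print(tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0])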
NllbTokenizer
class transformers.NllbTokenizer
< source >( vocab_file bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' tokenizer_file = None src_lang = None tgt_lang = None sp_model_kwargs: typing.Optional[dict[str, typing.Any]] = None additional_special_tokens = None legacy_behaviour = False **kwargs )
Parameters
- vocab_file (str) — Path to the vocabulary file.
- bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.
- eos_token (str, optional, defaults to "</s>") — The end of sequence token. When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.
- sep_token (str, optional, defaults to "</s>") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
- cls_token (str, optional, defaults to "<s>") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
- unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths.
- mask_token (str, optional, defaults to "<mask>") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
- tokenizer_file (str, optional) — The path to a tokenizer file to use instead of the vocab file.
- src_lang (str, optional) — The language to use as source language for translation.
- tgt_lang (str, optional) — The language to use as target language for translation.
- sp_model_kwargs (dict[str, Any], optional) — Additional keyword arguments to pass to the underlying SentencePiece model initialization.
Construct an NLLB tokenizer.
Adapted from RobertaTokenizer and XLNetTokenizer. Based on SentencePiece.
The tokenization method is <tokens> <eos> <language code> for source language documents, and <language code> <tokens> <eos> for target language documents.
Examples:
>>> from transformers import NllbTokenizer
>>> tokenizer = NllbTokenizer.from_pretrained(
... "facebook/nllb-200-distilled-600M", src_lang="eng_Latn", tgt_lang="fra_Latn"
... )
>>> example_english_phrase = " UN Chief Says There Is No Military Solution in Syria"
>>> expected_translation_french = "Le chef de l'ONU affirme qu'il n'y a pas de solution militaire en Syrie."
>>> inputs = tokenizer(example_english_phrase, text_target=expected_translation_french, return_tensors="pt")
build_inputs_with_special_tokens
< source >( token_ids_0: list token_ids_1: typing.Optional[list[int]] = None ) → list[int]
Parameters
- token_ids_0 (list[int]) — List of IDs to which the special tokens will be added.
- token_ids_1 (list[int], optional) — Optional second list of IDs for sequence pairs.
Returns
list[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An NLLB sequence has the following format, where X represents the sequence:
- input_ids (for encoder): X [eos, src_lang_code]
- decoder_input_ids (for decoder): X [eos, tgt_lang_code]
BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a separator.
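As a quick sanity check, the sketch below encodes a phrase without special tokens and then adds them explicitly; the comments describe the default (non-legacy) layout from the April 2023 update in the Notes, and no exact token IDs are claimed.

from transformers import NllbTokenizer

tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", src_lang="eng_Latn")

# encode without special tokens, then add them explicitly
ids = tokenizer("Hello", add_special_tokens=False).input_ids
print(tokenizer.build_inputs_with_special_tokens(ids))
# default mode prepends the source language code and appends eos:
# [eng_Latn_id, *ids, eos_id]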
NllbTokenizerFast
class transformers.NllbTokenizerFast
< source >( vocab_file = None tokenizer_file = None bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' src_lang = None tgt_lang = None additional_special_tokens = None legacy_behaviour = False **kwargs )
Parameters
- vocab_file (str) — Path to the vocabulary file.
- bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token.
- eos_token (str, optional, defaults to "</s>") — The end of sequence token. When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token.
- sep_token (str, optional, defaults to "</s>") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
- cls_token (str, optional, defaults to "<s>") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
- unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths.
- mask_token (str, optional, defaults to "<mask>") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.
- tokenizer_file (str, optional) — The path to a tokenizer file to use instead of the vocab file.
- src_lang (str, optional) — The language to use as source language for translation.
- tgt_lang (str, optional) — The language to use as target language for translation.
Construct a “fast” NLLB tokenizer (backed by HuggingFace’s tokenizers library). Based on BPE.
This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.
The tokenization method is <tokens> <eos> <language code> for source language documents, and <language code> <tokens> <eos> for target language documents.
Examples:
>>> from transformers import NllbTokenizerFast
>>> tokenizer = NllbTokenizerFast.from_pretrained(
... "facebook/nllb-200-distilled-600M", src_lang="eng_Latn", tgt_lang="fra_Latn"
... )
>>> example_english_phrase = " UN Chief Says There Is No Military Solution in Syria"
>>> expected_translation_french = "Le chef de l'ONU affirme qu'il n'y a pas de solution militaire en Syrie."
>>> inputs = tokenizer(example_english_phrase, text_target=expected_translation_french, return_tensors="pt")
build_inputs_with_special_tokens
< source >( token_ids_0: list token_ids_1: typing.Optional[list[int]] = None ) → list[int]
Parameters
- token_ids_0 (list[int]) — List of IDs to which the special tokens will be added.
- token_ids_1 (list[int], optional) — Optional second list of IDs for sequence pairs.
Returns
list[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. The special tokens depend on calling set_lang.
An NLLB sequence has the following format, where X represents the sequence:
- input_ids (for encoder): X [eos, src_lang_code]
- decoder_input_ids (for decoder): X [eos, tgt_lang_code]
BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a separator.
create_token_type_ids_from_sequences
< source >( token_ids_0: list token_ids_1: typing.Optional[list[int]] = None ) → list[int]
Create a mask from the two sequences passed to be used in a sequence-pair classification task. NLLB does not make use of token type ids, therefore a list of zeros is returned.
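A minimal sketch illustrating the all-zeros mask:

from transformers import NllbTokenizerFast

tokenizer = NllbTokenizerFast.from_pretrained("facebook/nllb-200-distilled-600M")
ids = tokenizer("Hello", add_special_tokens=False).input_ids
# NLLB ignores token type ids, so the returned mask is all zeros
print(tokenizer.create_token_type_ids_from_sequences(ids))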
set_src_lang_special_tokens
Reset the special tokens to the source lang setting.
- In legacy mode: no prefix and suffix = [eos, src_lang_code].
- In default mode: prefix = [src_lang_code] and suffix = [eos].
set_tgt_lang_special_tokens
Reset the special tokens to the target lang setting.
- In legacy mode: no prefix and suffix = [eos, tgt_lang_code].
- In default mode: prefix = [tgt_lang_code] and suffix = [eos].
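A short sketch contrasting the two modes via the legacy_behaviour flag shown in the constructor above (illustrative; the comments restate the rules just described rather than exact token IDs).

from transformers import NllbTokenizerFast

default_tok = NllbTokenizerFast.from_pretrained("facebook/nllb-200-distilled-600M", src_lang="eng_Latn")
legacy_tok = NllbTokenizerFast.from_pretrained(
    "facebook/nllb-200-distilled-600M", src_lang="eng_Latn", legacy_behaviour=True
)

text = "Hello"
# default mode: ['eng_Latn', ..., '</s>']
print(default_tok.convert_ids_to_tokens(default_tok(text).input_ids))
# legacy mode: [..., '</s>', 'eng_Latn']
print(legacy_tok.convert_ids_to_tokens(legacy_tok(text).input_ids))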