# nllb-200-distilled-600M_mustc_en-to-8

This is a multilingually fine-tuned version of [NLLB](https://arxiv.org/abs/2207.04672), based on [nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M), trained on the text data of MuST-C v1.0 (En -> 8).

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer and the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained("johntsi/nllb-200-distilled-600M_mustc_en-to-8")
model = AutoModelForSeq2SeqLM.from_pretrained("johntsi/nllb-200-distilled-600M_mustc_en-to-8")
model.eval()
model.to("cuda")

# English source text (the NLLB tokenizer's default source language is eng_Latn)
text = "Translate this text to German."
inputs = tokenizer(text, return_tensors="pt").to("cuda")

# Force the decoder to start with the German language token.
# convert_tokens_to_ids works across transformers versions, whereas the
# older lang_code_to_id mapping has been removed in recent releases.
outputs = model.generate(
    **inputs,
    num_beams=5,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("deu_Latn"),
)

translated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(translated_text)
```
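Since the checkpoint was fine-tuned for all eight MuST-C v1.0 target directions (German, Spanish, French, Italian, Dutch, Portuguese, Romanian, Russian), the same model can translate into any of them by changing the forced BOS token. Below is a minimal sketch of this, reusing the `tokenizer`, `model`, and `text` objects loaded above. The `MUSTC_TARGETS` mapping and the `translate` helper are illustrative additions, not part of this model's API, and the language codes are assumed from the standard NLLB-200 (FLORES-200) code list.

```python
# Hypothetical mapping of the eight MuST-C v1.0 target languages to their
# NLLB-200 (FLORES-200) codes; the codes are assumptions, not documented
# by this checkpoint.
MUSTC_TARGETS = {
    "German": "deu_Latn",
    "Spanish": "spa_Latn",
    "French": "fra_Latn",
    "Italian": "ita_Latn",
    "Dutch": "nld_Latn",
    "Portuguese": "por_Latn",
    "Romanian": "ron_Latn",
    "Russian": "rus_Cyrl",
}

def translate(text: str, lang_code: str) -> str:
    # Tokenize the English source and force the decoder to start
    # with the requested target-language token.
    inputs = tokenizer(text, return_tensors="pt").to("cuda")
    outputs = model.generate(
        **inputs,
        num_beams=5,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(lang_code),
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

for language, code in MUSTC_TARGETS.items():
    print(f"{language}: {translate(text, code)}")
```

The `forced_bos_token_id` argument is how NLLB models select the output language: generation always begins with the target-language token, and the rest of the sequence is decoded conditioned on it.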