---
license: cc-by-nc-4.0
language:
- de
- frr
pipeline_tag: translation
base_model: facebook/nllb-200-distilled-600M
inference: false
---
# Northern Frisian translation model
This is an [NLLB-200-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) model fine-tuned for translating between German and
the Northern Frisian dialect Mooring, following [this great blog post](https://cointegrated.medium.com/a37fc706b865).
## Data
The fine-tuning dataset consisted of 7194 sentence pairs in the Ååstermooring dialect of North Frisian, each with a German translation.
Most examples (roughly 5100) were taken directly from
["Rüm Hart"](https://www.nordfriiskfutuur.eu/fileadmin/Content/Nordfriisk_Futuur/E-Books/N._A._Johannsen__Ruem_hart.pdf)
published by the Nordfriisk Instituut. For sentence splitting, the Python
[sentence-splitter library](https://pypi.org/project/sentence-splitter/) was used. The splitting wasn't perfect,
especially in cases of direct speech, so manual re-alignment and further splitting were necessary
(a short sketch of the splitting step is shown at the end of this section).
Roughly 2000 further examples were taken from the Frasch Uurdebök (Friesisches Wörterbuch, Neumünster 1988).
Finally, a little under 180 very simple self-written examples were used as the evaluation data set.
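For illustration, the splitting step could look roughly like this (the language setting and example text are placeholders, not the exact preprocessing used for the dataset):
```python
# Minimal sketch: split German text into sentences with sentence-splitter.
from sentence_splitter import SentenceSplitter

splitter = SentenceSplitter(language='de')
text = 'Momme wohnt in Niebüll. "Halt!", rief er. Direkte Rede bleibt schwierig.'
for sentence in splitter.split(text):
    print(sentence)
```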
## Usage
How to use the model:
```python
# Run in a notebook; the version is pinned because the code below relies on
# tokenizer internals that changed in later transformers releases.
!pip install transformers==4.33

from transformers import AutoModelForSeq2SeqLM, NllbTokenizer


def create_tokenizer_with_new_lang(model_id, new_lang):
    """Load the NLLB tokenizer and register `new_lang` as a language code."""
    tokenizer = NllbTokenizer.from_pretrained(model_id)
    old_len = len(tokenizer) - int(new_lang in tokenizer.added_tokens_encoder)
    tokenizer.lang_code_to_id[new_lang] = old_len - 1
    tokenizer.id_to_lang_code[old_len - 1] = new_lang
    # always move "mask" to the last position
    tokenizer.fairseq_tokens_to_ids["<mask>"] = len(tokenizer.sp_model) + len(tokenizer.lang_code_to_id) + tokenizer.fairseq_offset
    tokenizer.fairseq_tokens_to_ids.update(tokenizer.lang_code_to_id)
    tokenizer.fairseq_ids_to_tokens = {v: k for k, v in tokenizer.fairseq_tokens_to_ids.items()}
    if new_lang not in tokenizer._additional_special_tokens:
        tokenizer._additional_special_tokens.append(new_lang)
    # clear the added token encoder; otherwise a new token may end up there by mistake
    tokenizer.added_tokens_encoder = {}
    return tokenizer


def translate(
    text,
    tokenizer,
    model,
    src_lang='frr_Latn',
    tgt_lang='deu_Latn',
    a=32,
    b=3,
    max_input_length=1024,
    num_beams=4,
    **kwargs
):
    """Translate a string or list of strings from src_lang to tgt_lang."""
    tokenizer.src_lang = src_lang
    tokenizer.tgt_lang = tgt_lang
    inputs = tokenizer(text, return_tensors='pt', padding=True, truncation=True, max_length=max_input_length)
    result = model.generate(
        **inputs.to(model.device),
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
        # cap the output length at a + b tokens per input token
        max_new_tokens=int(a + b * inputs.input_ids.shape[1]),
        num_beams=num_beams,
        **kwargs
    )
    return tokenizer.batch_decode(result, skip_special_tokens=True)

path = "CmdCody/nllb-deu-moo"
tokenizer = create_tokenizer_with_new_lang(path, 'frr_Latn')
model = AutoModelForSeq2SeqLM.from_pretrained(path)
translate("Momme booget önj Naibel", tokenizer=tokenizer, model=model)
```
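To translate from German to Mooring instead, swap the language codes:
```python
translate("Momme wohnt in Niebüll", tokenizer=tokenizer, model=model,
          src_lang='deu_Latn', tgt_lang='frr_Latn')
```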
## Training
The model was trained in a Google Colab notebook for 5000 steps with a batch size of 16, following the above-mentioned blog post.
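A single fine-tuning step in the spirit of that post could look like the sketch below; the optimizer, learning rate, and sequence length here are assumptions, not the exact training script, and `tokenizer` is the extended tokenizer from the Usage section:
```python
from transformers import AutoModelForSeq2SeqLM
from transformers.optimization import Adafactor

# start from the base NLLB checkpoint and grow the embedding matrix in case
# registering the new language code added a token
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
model.resize_token_embeddings(len(tokenizer))
optimizer = Adafactor(model.parameters(), lr=1e-4,
                      relative_step=False, scale_parameter=False)

def training_step(src_texts, tgt_texts, src_lang='deu_Latn', tgt_lang='frr_Latn'):
    tokenizer.src_lang = src_lang
    x = tokenizer(src_texts, return_tensors='pt', padding=True,
                  truncation=True, max_length=128).to(model.device)
    tokenizer.src_lang = tgt_lang
    y = tokenizer(tgt_texts, return_tensors='pt', padding=True,
                  truncation=True, max_length=128).to(model.device)
    # padding positions must not contribute to the loss
    y.input_ids[y.input_ids == tokenizer.pad_token_id] = -100
    loss = model(**x, labels=y.input_ids).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```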
Metrics on the evaluation data set:
|           | BLEU  | ChrF++ |
|-----------|-------|--------|
| Frr -> De | 48.79 | 65.12  |
| De -> Frr | 47.56 | 65.03  |
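
Scores of this kind can be reproduced with [sacrebleu](https://pypi.org/project/sacrebleu/); a minimal sketch with placeholder data rather than the actual evaluation set:
```python
import sacrebleu

# hypotheses: model outputs; references: one stream of gold translations
hyps = translate(["Momme booget önj Naibel"], tokenizer=tokenizer, model=model)
refs = [["Momme wohnt in Niebüll"]]

bleu = sacrebleu.corpus_bleu(hyps, refs)
chrf = sacrebleu.corpus_chrf(hyps, refs, word_order=2)  # word_order=2 -> chrF++
print(f"BLEU: {bleu.score:.2f}  ChrF++: {chrf.score:.2f}")
```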