
Transnormer 19th century (beta v02)

This model generates a normalized version of historical input text for German from the 19th century.

Model description

Transnormer is a byte-level sequence-to-sequence model for normalizing historical German text. This model was trained on text from the 19th and late 18th century by fine-tuning google/byt5-small. The fine-tuning data was a modified version of the DTA EvalCorpus (see the section Training and evaluation data).
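Because ByT5 is byte-level, the model consumes the raw UTF-8 bytes of the input rather than subword tokens, so historical characters such as the long s (ſ) require no special vocabulary entries. A minimal illustration in plain Python (this shows only UTF-8 encoding, not the actual ByT5 tokenizer, which additionally offsets byte values to make room for special tokens):

```python
# Byte-level models see historical characters as plain UTF-8 byte
# sequences, so there is no out-of-vocabulary problem for them.
historical = "ſaß"  # long s (U+017F) followed by modern characters
byte_values = list(historical.encode("utf-8"))
print(byte_values)  # -> [197, 191, 97, 195, 159]

# The long s alone occupies two bytes in UTF-8:
print(len("ſ".encode("utf-8")))  # -> 2
```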

Uses

This model is intended for users who work with historical German text and need a normalized version, i.e. one that comes closer to modern spelling. Historical text typically contains spelling variants and obsolete spellings that differ from contemporary text. This variation can be a drawback: it can impair the performance of NLP tools (POS tagging, etc.) trained on contemporary language, and it makes full-text search more tedious because the same search term may have numerous historical spellings. Historical text normalization, as offered by this model, can mitigate these problems to some extent.

Note that this model is intended for the normalization of historical German text from a specific time period. It is not intended for other types of text that may require normalization (e.g. computer mediated communication), other languages than German or other time frames. There may be other models available for that on the Hub.

The model can be further fine-tuned to be adapted or improved, e.g. as described in the Transformers tutorials.

Demo Usage

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ybracke/transnormer-19c-beta-v02")
model = AutoModelForSeq2SeqLM.from_pretrained("ybracke/transnormer-19c-beta-v02")
sentence = "Die Königinn ſaß auf des Pallaſtes mittlerer Tribune."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# >>> ['Die Königin saß auf des Palastes mittlerer Tribüne.']

Or use this model with the pipeline API like this:

from transformers import pipeline

transnormer = pipeline(model='ybracke/transnormer-19c-beta-v02')
sentence = "Die Königinn ſaß auf des Pallaſtes mittlerer Tribune."
print(transnormer(sentence, num_beams=4, max_length=128))
# >>> [{'generated_text': 'Die Königin saß auf des Palastes mittlerer Tribüne.'}]

Recommendations

The model was trained with a maximum input length of 512 bytes (~70 words). Inference is generally possible for longer sequences, but the quality may be worse than for shorter ones. Shorter sequences also make inference faster and less computationally expensive. Consider splitting long sequences and processing them separately.
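One simple way to respect the 512-byte limit is to split the input at whitespace into chunks that each stay under the limit and normalize each chunk separately. A rough sketch in plain Python (the greedy splitting strategy and the helper name are our own; they are not part of the model API):

```python
def split_into_chunks(text: str, max_bytes: int = 512) -> list[str]:
    """Greedily pack whitespace-separated words into chunks whose
    UTF-8 length stays at or below max_bytes. Single words longer
    than max_bytes are kept whole."""
    chunks, current = [], []
    size = 0
    for word in text.split():
        # +1 accounts for the joining space if the chunk is non-empty
        extra = len(word.encode("utf-8")) + (1 if current else 0)
        if current and size + extra > max_bytes:
            chunks.append(" ".join(current))
            current, size = [word], len(word.encode("utf-8"))
        else:
            current.append(word)
            size += extra
    if current:
        chunks.append(" ".join(current))
    return chunks

# Each chunk can then be passed to the model separately and the
# normalized outputs re-joined with a space.
```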

Training and evaluation data

The model was fine-tuned and evaluated on splits derived from the DTA EvalCorpus, a parallel corpus of 121 texts from the Deutsches Textarchiv (German Text Archive). The corpus was originally created by aligning historical prints in their original spelling with editions in contemporary orthography.

The original corpus creators applied some corrections to the modern versions (see Jurish et al. 2013). For our use of the corpus, we further improved the quality of the normalized part by enforcing spellings that conform to the post-1996 German orthography reform and by applying selected LanguageTool rules and custom replacements to remove some errors and inconsistencies. We plan to publish the corpus as a dataset on the Hugging Face Hub in the future.

The training set contains 96 documents with 4.6M source tokens; the dev and test sets contain 13 documents (405K tokens) and 12 documents (381K tokens), respectively.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10 (published model: 8 epochs)
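Under a linear scheduler the learning rate decays from its initial value to zero over the course of training. A worked sketch of the decay (the total step count is hypothetical, and we assume no warmup, which the card does not mention):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 5e-5) -> float:
    """Linearly decayed learning rate without warmup (an assumption)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Example with a hypothetical 10_000 total training steps:
print(linear_lr(0, 10_000))      # -> 5e-05 at the start
print(linear_lr(5_000, 10_000))  # -> 2.5e-05 halfway through
```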

Framework versions

  • Transformers 4.31.0
  • Pytorch 2.1.0+cu121
  • Datasets 2.18.0
  • Tokenizers 0.13.3
