Ancient Greek BERT finetuned for tagging and parsing PROIEL (UD)

This is a finetuned checkpoint of Ancient Greek BERT by Singh, Rutten and Lefever (2021), trained on the UD version of PROIEL as of 9 May 2023.
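The UD release of PROIEL is distributed in CoNLL-U format, where each token line carries the fields the model is evaluated on (UPOS, XPOS, FEATS, HEAD, DEPREL). As a minimal illustration of that format (a sketch, not part of this project's code; the example line is invented, not taken from PROIEL):

```python
# Sketch: parse one tab-separated CoNLL-U token line into named fields.
FIELDS = ["id", "form", "lemma", "upos", "xpos",
          "feats", "head", "deprel", "deps", "misc"]

def parse_token_line(line):
    """Split a CoNLL-U token line into a dict of its ten columns."""
    values = line.rstrip("\n").split("\t")
    if len(values) != 10:
        raise ValueError("CoNLL-U token lines have exactly 10 columns")
    return dict(zip(FIELDS, values))

# Illustrative token line (hypothetical, for demonstration only).
token = parse_token_line(
    "1\tλόγος\tλόγος\tNOUN\tNb\tCase=Nom|Gender=Masc|Number=Sing\t0\troot\t_\t_"
)
```

The `upos`, `xpos` and `feats` columns are what the tagging metrics below score, while `head` and `deprel` are scored by the parsing metrics.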

The code for training and using the model is available on GitHub; the configuration used is config.py.

If you use this model in academic work, please cite the master's thesis it grew out of:

Bjerkeland, D. C. 2022. Tagging and Parsing Old Texts with New Techniques. University of Oslo. URL: http://urn.nb.no/URN:NBN:no-98954.

Performance

This is the performance on the test set of the UD version of PROIEL as of 9 May 2023.

| Metric   | Accuracy           |
|----------|--------------------|
| UPOS     | 0.9820489710079615 |
| XPOS     | 0.9822742977317109 |
| FEATS    | 0.926768814781433  |
| All tags | 0.916103349857293  |
| UAS      | 0.8862100045065344 |
| LAS      | 0.8496319663512093 |
| LA       | 0.9076911521706474 |
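The parsing metrics are standard dependency-evaluation scores over aligned gold and predicted (head, label) pairs: UAS counts correct heads, LAS correct heads with correct labels, and LA correct labels alone. A hedged sketch of how they are computed (not the project's actual evaluation code, which lives in the GitHub repository):

```python
def attachment_scores(gold, pred):
    """Compute UAS, LAS and LA over aligned token lists.

    gold and pred are equal-length lists of (head, deprel) tuples,
    one per token.
    UAS: fraction of tokens with the correct head.
    LAS: fraction with the correct head and dependency label.
    LA:  fraction with the correct label, regardless of head.
    """
    assert len(gold) == len(pred) and gold
    n = len(gold)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / n
    las = sum(g == p for g, p in zip(gold, pred)) / n
    la = sum(g[1] == p[1] for g, p in zip(gold, pred)) / n
    return uas, las, la

# Toy example (invented): third token has a wrong head but a correct label,
# so UAS = LAS = 2/3 while LA = 1.0.
uas, las, la = attachment_scores(
    [(2, "nsubj"), (0, "root"), (2, "obj")],
    [(2, "nsubj"), (0, "root"), (1, "obj")],
)
```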

Bibliography

Singh, P., Rutten, G. and Lefever, E. 2021. A Pilot Study for BERT Language Modelling and Morphological Analysis for Ancient and Medieval Greek. In Proceedings of the 5th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature.
