---
license: mit
tags:
- text-classification
widget:
- text: "Sehent hoerent oder lesent daß div chint, div bechoment von frowen Chvnegvnde Heinriches des Losen"
- text: "Mihály zágrábi püspök előtt Vaguth (dict.) László c. a püspöki várnépek (castrenses) Csázma comitatus-beli volt földjének egy részét, amelyet szolgálataiért predialis jogon tőle kapott, 1 szőlővel együtt (a Zuynar föld azon része kivételével, amelyet a püspök László c.-től elvett és a megvakított Kokosnak adományozott"
- text: "Rath und Gemeinde der Stadt Wismar beschweren sich über die von den Hauptleuten, Beamten und Vasallen des Grafen Johann von Holstein und Stormarn ihren Bürgern seit Jahren zugefügten Unbilden, indem sie ein Verzeichniss der erlittenen einzelnen Verluste beibringen."
- text: "Diplomă de înnobilare emisă de împăratul romano-german Rudolf al II-lea de Habsburg la în favoarea familiei Szőke de Galgóc. Aussteller: Rudolf al II-lea de Habsburg, împărat romano-german Empfänger: Szőke de Galgóc, familie"
- text: "бѣ жє болѧ єтєръ лазаръ отъ виѳаньѧ градьца марьина и марѳꙑ сєстрꙑ єѧ | бѣ жє марьꙗ помазавъшиꙗ господа мѵромъ и отьръши ноѕѣ єго власꙑ своими єѧжє братъ лазаръ болѣашє"
- text: "μῆνιν ἄειδε θεὰ Πηληϊάδεω Ἀχιλῆος οὐλομένην, ἣ μυρί᾽ Ἀχαιοῖς ἄλγε᾽ ἔθηκε, πολλὰς δ᾽ ἰφθίμους ψυχὰς Ἄϊδι προΐαψεν ἡρώων, αὐτοὺς δὲ ἑλώρια τεῦχε κύνεσσιν οἰωνοῖσί"
---

# XLM-RoBERTa (base) language-detection model (modern and medieval)

This model is a fine-tuned version of xlm-roberta-base on the [monasterium.net](https://www.icar-us.eu/en/cooperation/online-portals/monasterium-net/) dataset.

## Model description
A classification head sits on top of the XLM-RoBERTa transformer model. Please refer to the [XLM-RoBERTa (base-sized model)](https://huggingface.co/xlm-roberta-base) card and the paper [Unsupervised Cross-lingual Representation Learning at Scale by Conneau et al.](https://arxiv.org/abs/1911.02116) for additional information.
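
For instance, the checkpoint can be loaded with the standard sequence-classification classes; a minimal sketch (inspecting the label mapping assumes the usual `id2label` entry in the model config):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned checkpoint (XLM-RoBERTa encoder + classification head)
tokenizer = AutoTokenizer.from_pretrained("ERCDiDip/40_langdetect_v01")
model = AutoModelForSequenceClassification.from_pretrained("ERCDiDip/40_langdetect_v01")

# The classification head has one output per supported language
print(model.config.num_labels)
print(model.config.id2label)  # index -> language label mapping
```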

## Intended uses & limitations
You can directly use this model as a language detector, i.e. for sequence classification tasks. Currently, it supports the following 41 languages, modern and medieval:

Modern: Bulgarian (bg), Croatian (hr), Czech (cs), Danish (da), Dutch (nl), English (en), Estonian (et), Finnish (fi), French (fr), German (de), Greek (el), Hungarian (hu), Irish (ga), Italian (it), Latvian (lv), Lithuanian (lt), Maltese (mt), Polish (pl), Portuguese (pt), Romanian (ro), Slovak (sk), Slovenian (sl), Spanish (es), Swedish (sv), Russian (ru), Turkish (tr), Basque (eu), Catalan (ca), Albanian (sq), Serbian (se), Ukrainian (uk), Norwegian (no), Arabic (ar), Chinese (zh), Hebrew (he)

Medieval: Middle High German (mhd), Latin (la), Middle Low German (gml), Old French (fro), Old Church Slavonic (chu), Early New High German (fnhd), Ancient and Medieval Greek (grc)

## Training and evaluation data
The model was fine-tuned using the Monasterium and Wikipedia datasets, which consist of text sequences in 40 languages. The training set contains 80k samples, while the validation and test sets contain 16k samples. The average accuracy on the test set is 99.59% (this matches the average macro/weighted F1-score, the test set being perfectly balanced).

## Training procedure
Fine-tuning was done via the Hugging Face Trainer API with a custom `WeightedLossTrainer`.
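
A minimal sketch of such a weighted-loss trainer, assuming per-class weights are supplied by the caller (an illustration, not the project's actual training code):

```python
from torch import nn
from transformers import Trainer

class WeightedLossTrainer(Trainer):
    """Trainer variant that applies per-class weights to the cross-entropy loss."""

    def __init__(self, *args, class_weights=None, **kwargs):
        super().__init__(*args, **kwargs)
        # class_weights: optional 1-D tensor with one weight per language label
        self.class_weights = class_weights

    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        weight = self.class_weights.to(logits.device) if self.class_weights is not None else None
        loss_fct = nn.CrossEntropyLoss(weight=weight)
        loss = loss_fct(logits.view(-1, model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```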

## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
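
Translated into `TrainingArguments`, these settings would look roughly as follows (a sketch; the output directory is a placeholder, and `fp16=True` corresponds to the native AMP mixed precision noted above):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="langdetect-xlmr",    # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                       # native AMP; requires a CUDA device
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer,
# so no extra optimizer configuration is needed.
```

These arguments would then be passed to the `WeightedLossTrainer` sketched above, together with the tokenized training and validation splits.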

## Training results

| Training Loss | Validation Loss | F1       |
| ------------- | --------------- | -------- |
| 0.000300      | 0.048985        | 0.991585 |
| 0.000100      | 0.033340        | 0.994663 |
| 0.000000      | 0.032938        | 0.995979 |

## Usage example

```python
# Install packages
!pip install transformers --quiet

# Import libraries
import torch
from transformers import pipeline

# Define pipeline
classificator = pipeline("text-classification", model="ERCDiDip/40_langdetect_v01")

# Use pipeline
classificator("clemens etc dilecto filio scolastico ecclesie wetflari ensi treveren dioc salutem etc significarunt nobis dilecti filii commendator et fratres hospitalis beate marie theotonicorum")
```
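
The pipeline returns a list with one dictionary per input, each containing a predicted `label` and a confidence `score`. Continuing from the snippet above (the comment shows the expected output shape, not actual model output):

```python
# Inspect the top prediction for a single input
result = classificator("μῆνιν ἄειδε θεὰ Πηληϊάδεω Ἀχιλῆος οὐλομένην")
print(result[0]["label"], result[0]["score"])
# output shape: [{'label': <predicted language>, 'score': <probability>}]
```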

## Updates
- 25th November 2022: Added Ancient and Medieval Greek (grc)

## Framework versions
- Transformers 4.24.0
- PyTorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.3
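
To reproduce this environment, a pinned install along these lines should work (optional; version numbers taken from the list above):

```python
# Optional: pin the library versions listed above (notebook-style install)
!pip install transformers==4.24.0 torch==1.13.0 datasets==2.6.1 tokenizers==0.13.3 --quiet
```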

## Citation
Please cite the following when using this model:

```bibtex
@misc{ercdidip2022,
  title     = {langdetect v01 (Revision 9fab42a)},
  author    = {Kovács, Tamás and Atzenhofer-Baumgartner, Florian and Aoun, Sandy and Nicolaou, Anguelos and Luger, Daniel and Decker, Franziska and Lamminger, Florian and Vogeler, Georg},
  year      = {2022},
  url       = {https://huggingface.co/ERCDiDip/40_langdetect_v01},
  doi       = {10.57967/hf/0099},
  publisher = {Hugging Face}
}
```

This model is part of the [From Digital to Distant Diplomatics (DiDip) ERC project](https://cordis.europa.eu/project/id/101019327) funded by the European Research Council.