---
language:
- af
- nr
- nso
- ss
- st
- tn
- ts
- ve
- xh
- zu
- multilingual
datasets:
- masakhaner
license: apache-2.0
---
# xlm-roberta-base-sadilar-ner
## Model description
**xlm-roberta-base-sadilar-ner** is the first **Named Entity Recognition** model for 10 South African languages (Afrikaans, isiNdebele, isiXhosa, isiZulu, Sepedi, Sesotho, Setswana, siSwati, Tshivenda and Xitsonga), based on a fine-tuned XLM-RoBERTa base model. It achieves **state-of-the-art performance** on the NER task. It has been trained to recognize three types of entities: locations (LOC), organizations (ORG), and persons (PER).
Specifically, this model is an *xlm-roberta-base* model that was fine-tuned on an aggregation of South African language datasets obtained from [SADILAR](https://www.sadilar.org/index.php/en/).
## Intended uses & limitations
#### How to use
You can use this model with the Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-sadilar-ner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-base-sadilar-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Kuchaza kona ukuthi uMengameli uMnuz Cyril Ramaphosa, usebatshelile ukuthi uzosikhipha maduze isitifiketi."
ner_results = nlp(example)
print(ner_results)
```
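The pipeline emits one prediction per subword token, so a single name can be split across several entries. A minimal, self-contained sketch of merging B-/I- tagged subword tokens back into whole entities — `merge_tokens` is a hypothetical helper, and the sample predictions below are illustrative, not actual model output (real output also carries `score`, `start`, and `end` fields):

```python
def merge_tokens(predictions):
    """Group consecutive B-/I- tagged subword tokens into entity spans (sketch)."""
    entities = []
    for pred in predictions:
        tag = pred["entity"]                    # e.g. "B-PER" or "I-PER"
        word = pred["word"].replace("\u2581", " ")  # sentencepiece word-start marker
        if tag.startswith("B-") or not entities or entities[-1]["type"] != tag[2:]:
            entities.append({"type": tag[2:], "text": word.strip()})
        else:
            entities[-1]["text"] += word        # continuation of the open entity
    return entities

# Illustrative predictions for "Cyril Ramaphosa" (not actual model output)
sample = [
    {"entity": "B-PER", "word": "\u2581Cyril"},
    {"entity": "I-PER", "word": "\u2581Rama"},
    {"entity": "I-PER", "word": "phosa"},
]
print(merge_tokens(sample))  # → [{'type': 'PER', 'text': 'Cyril Ramaphosa'}]
```

Newer versions of Transformers can do similar grouping for you via the pipeline's `aggregation_strategy` argument.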
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases in different domains.
## Training data
This model was fine-tuned on an aggregation of NER datasets for 10 South African languages (Afrikaans, isiNdebele, isiXhosa, isiZulu, Sepedi, Sesotho, Setswana, siSwati, Tshivenda and Xitsonga) obtained from [SADILAR](https://www.sadilar.org/index.php/en/).

The training dataset distinguishes between the beginning and continuation of an entity, so that when there are back-to-back entities of the same type, the model can mark where the second entity begins. Each token in the dataset is classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-PER|Beginning of a person's name right after another person's name
I-PER|Person's name
B-ORG|Beginning of an organisation right after another organisation
I-ORG|Organisation
B-LOC|Beginning of a location right after another location
I-LOC|Location
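The B-/I- distinction matters precisely when entities of the same type are adjacent. A small self-contained sketch of decoding a word-level tag sequence into spans — `decode_bio` is a hypothetical helper and the token/tag sequences are invented for illustration:

```python
def decode_bio(tokens, tags):
    """Decode parallel token and BIO-tag sequences into (type, text) spans (sketch)."""
    spans, prev = [], "O"
    for token, tag in zip(tokens, tags):
        if tag == "O":
            prev = tag
            continue
        prefix, etype = tag.split("-", 1)
        # "B-" always opens a new span, so back-to-back entities stay separate;
        # "I-" only extends an immediately preceding span of the same type.
        if prefix == "B" or prev == "O" or prev.split("-", 1)[1] != etype:
            spans.append([etype, token])
        else:
            spans[-1][1] += " " + token
        prev = tag
    return [tuple(s) for s in spans]

# Two adjacent person names: the second B-PER marks where the new entity begins
tokens = ["Cyril", "Ramaphosa", "Naledi", "Pandor", "visited", "Pretoria"]
tags   = ["B-PER", "I-PER",     "B-PER", "I-PER",  "O",       "B-LOC"]
print(decode_bio(tokens, tags))
# → [('PER', 'Cyril Ramaphosa'), ('PER', 'Naledi Pandor'), ('LOC', 'Pretoria')]
```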

### BibTeX entry and citation info

```
```