import streamlit as st

# Custom CSS for better styling
st.markdown("""

    <style>

        .main-title {

            font-size: 36px;

            color: #4A90E2;

            font-weight: bold;

            text-align: center;

        }

        .sub-title {

            font-size: 24px;

            color: #4A90E2;

            margin-top: 20px;

        }

        .section {

            background-color: #f9f9f9;

            padding: 15px;

            border-radius: 10px;

            margin-top: 20px;

        }

        .section h2 {

            font-size: 22px;

            color: #4A90E2;

        }

        .section p, .section ul {

            color: #666666;

        }

        .link {

            color: #4A90E2;

            text-decoration: none;

        }

    </style>

""", unsafe_allow_html=True)

# Main Title
st.markdown('<div class="main-title">NER Model for 10 African Languages</div>', unsafe_allow_html=True)

# What is Named Entity Recognition (NER)?
st.markdown('<div class="sub-title">What is Named Entity Recognition (NER)?</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p><strong>Named Entity Recognition (NER)</strong> is a crucial task in Natural Language Processing (NLP) that involves identifying and classifying entities within a text into predefined categories such as names of people, organizations, locations, dates, and more. NER helps in structuring unstructured text, making it easier to analyze and extract meaningful information.</p>

    <p>For example, in the sentence "Barack Obama was born in Hawaii," NER would identify "Barack Obama" as a person (PER) and "Hawaii" as a location (LOC).</p>

</div>

""", unsafe_allow_html=True)

# Importance of NER
st.markdown('<div class="sub-title">Importance of NER</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>NER is essential for various applications, including:</p>

    <ul>

        <li><strong>Information Extraction:</strong> Extracting structured information from unstructured text, such as news articles, social media posts, and legal documents.</li>

        <li><strong>Search and Retrieval:</strong> Enhancing search engines and information retrieval systems by indexing and retrieving documents based on named entities.</li>

        <li><strong>Content Recommendation:</strong> Improving content recommendation systems by understanding the context and entities mentioned in the text.</li>

        <li><strong>Question Answering:</strong> Enhancing question-answering systems by accurately identifying entities within the text to provide precise answers.</li>

    </ul>

</div>

""", unsafe_allow_html=True)

# Description
st.markdown('<div class="sub-title">Description</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>This model is imported from Hugging Face. It’s been trained using <code>xlm_roberta_large</code> fine-tuned model on 10 African languages: Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Nigerian Pidgin, Swahili, Wolof, and Yorùbá.</p>

</div>

""", unsafe_allow_html=True)

# Predicted Entities
st.markdown('<div class="sub-title">Predicted Entities</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <ul>

        <li><strong>DATE:</strong> Refers to temporal expressions such as specific dates and periods (e.g., "3-10-2000").</li>

        <li><strong>LOC:</strong> Refers to geographical locations, including cities, countries, and other landmarks (e.g., "Addis Ababa").</li>

        <li><strong>PER:</strong> Refers to names of people or characters (e.g., "Ahmed Vanda").</li>

        <li><strong>ORG:</strong> Refers to names of organizations, companies, and institutions (e.g., "United Nations").</li>

    </ul>

</div>

""", unsafe_allow_html=True)

# How to use
st.markdown('<div class="sub-title">How to use</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>To use this model, follow these steps in Python:</p>

</div>

""", unsafe_allow_html=True)
st.code("""

from sparknlp.base import *

from sparknlp.annotator import *

from pyspark.ml import Pipeline



# Define the components of the pipeline

documentAssembler = DocumentAssembler() \\

    .setInputCol("text") \\

    .setOutputCol("document")



sentenceDetector = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \\

    .setInputCols(["document"]) \\

    .setOutputCol("sentence")



tokenizer = Tokenizer() \\

    .setInputCols(["sentence"]) \\

    .setOutputCol("token")



tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_large_token_classifier_masakhaner", "xx") \\

    .setInputCols(["sentence",'token']) \\

    .setOutputCol("ner")



ner_converter = NerConverter() \\

    .setInputCols(["sentence", "token", "ner"]) \\

    .setOutputCol("ner_chunk")



nlpPipeline = Pipeline(stages=[documentAssembler, sentenceDetector, tokenizer, tokenClassifier, ner_converter])



empty_data = spark.createDataFrame([[""]]).toDF("text")



model = nlpPipeline.fit(empty_data)

text = '''አህመድ ቫንዳ ከ3-10-2000 ጀምሮ በአዲስ አበባ ኖሯል።'''

result = model.transform(spark.createDataFrame([[text]]).toDF("text"))



# Display the results

result.selectExpr("explode(arrays_zip(ner_chunk.result, ner_chunk.metadata)) as entity")

       .selectExpr("entity['0'] as chunk", "entity['1'].entity as ner_label")

       .show(truncate=False)

""", language="python")

# Results
import pandas as pd

# Create the data for the DataFrame
data = {
    "chunk": ["አህመድ ቫንዳ", "ከ3-10-2000 ጀምሮ", "በአዲስ አበባ"],
    "ner_label": ["PER", "DATE", "LOC"]
}

# Creating the DataFrame
df = pd.DataFrame(data)
df.index += 1
st.dataframe(df)

# What Can We Do with This Model?
st.markdown('<div class="sub-title">What Can We Do with This Model?</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>This NER model for 10 African languages enables various applications:</p>

    <ul>

        <li><strong>Multilingual Information Extraction:</strong> Efficiently extract information from texts written in diverse African languages, which is valuable for local news, social media analysis, and governmental reports.</li>

        <li><strong>Cross-Language Search and Retrieval:</strong> Enhance search engines to support multiple African languages, making it easier to find relevant information across different linguistic contexts.</li>

        <li><strong>Content Recommendation:</strong> Provide more accurate content recommendations by understanding the entities mentioned in the user's preferred language.</li>

        <li><strong>Research and Analytics:</strong> Support linguistic research and analytics by processing texts in multiple African languages to identify trends and patterns.</li>

    </ul>

</div>

""", unsafe_allow_html=True)

# Model Information
st.markdown('<div class="sub-title">Model Information</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <ul>

        <li><strong>Model Name:</strong> xlm_roberta_large_token_classifier_masakhaner</li>

        <li><strong>Compatibility:</strong> Spark NLP 3.3.2+</li>

        <li><strong>License:</strong> Open Source</li>

        <li><strong>Edition:</strong> Official</li>

        <li><strong>Input Labels:</strong> [sentence, token]</li>

        <li><strong>Output Labels:</strong> [ner]</li>

        <li><strong>Language:</strong> xx</li>

        <li><strong>Case sensitive:</strong> true</li>

        <li><strong>Max sentence length:</strong> 256</li>

    </ul>

</div>

""", unsafe_allow_html=True)

# Data Source
st.markdown('<div class="sub-title">Data Source</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>The model was trained using the dataset available at <a href="https://huggingface.co/Davlan/xlm-roberta-large-masakhaner" class="link">Hugging Face</a>.</p>

</div>

""", unsafe_allow_html=True)

# Benchmarking
st.markdown('<div class="sub-title">Benchmarking</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>Evaluating the performance of NER models is crucial to understanding their effectiveness in real-world applications. Below are the benchmark results for the <code>xlm_roberta_large_token_classifier_masakhaner</code> model, focusing on various named entity categories across 10 African languages. The metrics used include F1-score, which is a standard for evaluating classification models.</p>

</div>

""", unsafe_allow_html=True)
st.markdown("""

---

| language | F1-score |

|----------|----------|

| amh      | 75.76    |

| hau      | 91.75    |

| ibo      | 86.26    |

| kin      | 76.38    |

| lug      | 84.64    |

| luo      | 80.65    |

| pcm      | 89.55    |

| swa      | 89.48    |

| wol      | 70.70    |

| yor      | 82.05    |

---

""", unsafe_allow_html=True)

st.markdown("""

<div class="section">

    <p>These results demonstrate the model's ability to accurately identify and classify named entities in multiple African languages. The F1-scores indicate the balance between precision and recall for each language, reflecting the model's robustness across diverse linguistic contexts.</p>

</div>

""", unsafe_allow_html=True)

# Conclusion/Summary
st.markdown('<div class="sub-title">Conclusion</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>The <code>xlm_roberta_large_token_classifier_masakhaner</code> model showcases significant performance in recognizing named entities in 10 African languages. This model leverages <code>xlm_roberta_large</code> embeddings to enhance its understanding and accuracy in identifying entities such as persons, locations, dates, and organizations. Its integration into Spark NLP provides an efficient and scalable solution for processing multilingual text data, making it an invaluable tool for researchers and developers working with African languages.</p>

</div>

""", unsafe_allow_html=True)

# References
st.markdown('<div class="sub-title">References</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <ul>

        <li><a class="link" href="https://nlp.johnsnowlabs.com/api/com/johnsnowlabs/nlp/annotators/classifier/dl/XlmRoBertaForTokenClassification" target="_blank" rel="noopener">XlmRoBertaForTokenClassification</a> annotator documentation</li>

        <li>Model Used: <a class="link" href="https://sparknlp.org/2021/12/06/xlm_roberta_large_token_classifier_masakhaner_xx.html" rel="noopener">xlm_roberta_large_token_classifier_masakhaner_xx</a></li>

        <li><a class="link" href="https://nlp.johnsnowlabs.com/recognize_entitie" target="_blank" rel="noopener">Visualization demos for NER in Spark NLP</a></li>

        <li><a class="link" href="https://www.johnsnowlabs.com/named-entity-recognition-ner-with-bert-in-spark-nlp/">Named Entity Recognition (NER) with BERT in Spark NLP</a></li>

    </ul>

</div>

""", unsafe_allow_html=True)

# Community & Support
st.markdown('<div class="sub-title">Community & Support</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <ul>

        <li><a class="link" href="https://sparknlp.org/" target="_blank">Official Website</a>: Documentation and examples</li>

        <li><a class="link" href="https://github.com/JohnSnowLabs/spark-nlp" target="_blank">GitHub Repository</a>: Report issues or contribute</li>

        <li><a class="link" href="https://forum.johnsnowlabs.com/" target="_blank">Community Forum</a>: Ask questions, share ideas, and get support</li>

    </ul>

</div>

""", unsafe_allow_html=True)