import streamlit as st

# Custom CSS for better styling
st.markdown("""

    <style>

        .main-title {

            font-size: 36px;

            color: #4A90E2;

            font-weight: bold;

            text-align: center;

        }

        .sub-title {

            font-size: 24px;

            color: #4A90E2;

            margin-top: 20px;

        }

        .section {

            background-color: #f9f9f9;

            padding: 15px;

            border-radius: 10px;

            margin-top: 20px;

        }

        .section h2 {

            font-size: 22px;

            color: #4A90E2;

        }

        .section p, .section ul {

            color: #666666;

        }

        .link {

            color: #4A90E2;

            text-decoration: none;

        }

    </style>

""", unsafe_allow_html=True)

# Main Title
st.markdown('<div class="main-title">State-of-the-Art Named Entity Recognition with Spark NLP (French)</div>', unsafe_allow_html=True)

# Introduction
st.markdown("""

<div class="section">

    <p>Named Entity Recognition (NER) is the task of identifying important words in a text and associating them with a category. For example, we may be interested in finding all the personal names in documents, or company names in news articles. Other examples include domain-specific uses such as identifying all disease names in a clinical text, or company trading codes in financial ones.</p>

    <p>NER can be implemented with many approaches. In this post, we introduce a deep learning-based method using the NerDL model. This approach leverages the scalability of Spark NLP with Python.</p>

</div>

""", unsafe_allow_html=True)

# Introduction to Spark NLP
st.markdown('<div class="sub-title">Introduction to Spark NLP</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>Spark NLP is an open-source library maintained by John Snow Labs. It is built on top of Apache Spark and Spark ML and provides simple, performant & accurate NLP annotations for machine learning pipelines that can scale easily in a distributed environment.</p>

    <p>To install Spark NLP, you can use any package manager such as conda or pip. For example, with pip you can simply run <code>pip install spark-nlp</code>. For other installation options, check the official <a href="https://nlp.johnsnowlabs.com/docs/en/install" target="_blank" class="link">documentation</a>.</p>

</div>

""", unsafe_allow_html=True)

# Using NerDL Model
st.markdown('<div class="sub-title">Using NerDL Model</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>The NerDL model in Spark NLP is a deep learning-based approach to NER. It uses a Char CNNs - BiLSTM - CRF architecture that achieves state-of-the-art results on most datasets. For training, the data should be a labeled Spark DataFrame in CoNLL 2003 IOB format with annotation-type columns; a minimal training sketch is shown below.</p>

</div>

""", unsafe_allow_html=True)

# Setup Instructions
st.markdown('<div class="sub-title">Setup</div>', unsafe_allow_html=True)
st.markdown('<p>To install Spark NLP in Python, use your favorite package manager (conda, pip, etc.). For example:</p>', unsafe_allow_html=True)
st.code("""

pip install spark-nlp

pip install pyspark

""", language="bash")

st.markdown("<p>Then, import Spark NLP and start a Spark session:</p>", unsafe_allow_html=True)
st.code("""

import sparknlp



# Start Spark Session

spark = sparknlp.start()

""", language='python')

# Example Usage with NerDL Model in Italian
st.markdown('<div class="sub-title">Example Usage with NerDL Model in Italian</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>Below is an example of how to set up and use the multilingual (xx) NerDL model for named entity recognition. The sample text here is in German; the same pipeline applies unchanged to French, which is the focus of the benchmark later on this page:</p>

</div>

""", unsafe_allow_html=True)
st.code('''
from sparknlp.base import *
from sparknlp.annotator import *
from pyspark.ml import Pipeline
from pyspark.sql.functions import col, expr, round, concat, lit

# Document Assembler
document_assembler = DocumentAssembler() \\
    .setInputCol("text") \\
    .setOutputCol("document")

# Tokenizer
tokenizer = Tokenizer() \\
    .setInputCols(["document"]) \\
    .setOutputCol("token")

# Word Embeddings
embeddings = WordEmbeddingsModel.pretrained('glove_840B_300', lang='xx') \\
    .setInputCols(["document", "token"]) \\
    .setOutputCol("embeddings")

# NerDL Model
ner_model = NerDLModel.pretrained('ner_wikiner_glove_840B_300', 'xx') \\
    .setInputCols(["document", "token", "embeddings"]) \\
    .setOutputCol("ner")

# NER Converter
ner_converter = NerConverter() \\
    .setInputCols(["document", "token", "ner"]) \\
    .setOutputCol("ner_chunk")

# Pipeline
pipeline = Pipeline(stages=[
    document_assembler,
    tokenizer,
    embeddings,
    ner_model,
    ner_converter
])

# Example sentence
example = """William Henry Gates III (* 28. Oktober 1955 in London) ist ein US-amerikanischer Geschäftsmann, Softwareentwickler, Investor und Philanthrop. Er ist bekannt als Mitbegründer der Microsoft Corporation. Während seiner Karriere bei Microsoft war Gates Vorsitzender, Chief Executive Officer (CEO), Präsident und Chief Software Architect und bis Mai 2014 der größte Einzelaktionär. Er ist einer der bekanntesten Unternehmer und Pioniere der Mikrocomputer-Revolution der 1970er und 1980er Jahre. Gates wurde in Seattle, Washington, geboren und wuchs dort auf. 1975 gründete er Microsoft zusammen mit seinem Freund aus Kindertagen, Paul Allen, in Albuquerque, New Mexico. Es entwickelte sich zum weltweit größten Unternehmen für Personal-Computer-Software. Gates leitete das Unternehmen als Chairman und CEO, bis er im Januar 2000 als CEO zurücktrat. Er blieb jedoch Chairman und wurde Chief Software Architect. In den späten neunziger Jahren wurde Gates für seine Geschäftstaktiken kritisiert, die als wettbewerbswidrig angesehen wurden. Diese Meinung wurde durch zahlreiche Gerichtsurteile bestätigt. Im Juni 2006 gab Gates bekannt, dass er eine Teilzeitstelle bei Microsoft und eine Vollzeitstelle bei der Bill & Melinda Gates Foundation, der privaten gemeinnützigen Stiftung, die er und seine Frau Melinda Gates im Jahr 2000 gegründet haben, übernehmen wird. [ 9] Er übertrug seine Aufgaben nach und nach auf Ray Ozzie und Craig Mundie. Im Februar 2014 trat er als Vorsitzender von Microsoft zurück und übernahm eine neue Position als Technologieberater, um den neu ernannten CEO Satya Nadella zu unterstützen."""

data = spark.createDataFrame([[example]]).toDF("text")

# Transforming data
result = pipeline.fit(data).transform(data)

# Select the result, entity, and confidence columns
result.select(
    expr("explode(ner_chunk) as ner_chunk")
).select(
    col("ner_chunk.result").alias("result"),
    col("ner_chunk.metadata").getItem("entity").alias("entity"),
    concat(
        round((col("ner_chunk.metadata").getItem("confidence").cast("float") * 100), 2),
        lit("%")
    ).alias("confidence")
).show(truncate=False)
''', language="python")

st.text("""

+-------------------------------+------+----------+

|result                         |entity|confidence|

+-------------------------------+------+----------+

|William Henry Gates III        |PER   |57.6%     |

|London                         |LOC   |91.51%    |

|US-amerikanischer              |MISC  |99.56%    |

|Microsoft Corporation          |ORG   |72.69%    |

|Microsoft                      |ORG   |98.13%    |

|Gates                          |PER   |93.14%    |

|Gates                          |PER   |77.17%    |

|Seattle                        |LOC   |89.12%    |

|Washington                     |LOC   |73.5%     |

|Microsoft                      |ORG   |95.15%    |

|Paul Allen                     |PER   |85.83%    |

|Albuquerque                    |LOC   |84.27%    |

|New Mexico                     |LOC   |67.22%    |

|Gates                          |PER   |97.12%    |

|Gates                          |PER   |97.68%    |

|Gates                          |PER   |98.26%    |

|Microsoft                      |ORG   |98.1%     |

|Bill & Melinda Gates Foundation|MISC  |42.91%    |

|Melinda Gates                  |PER   |87.0%     |

|Ray Ozzie                      |PER   |83.86%    |

+-------------------------------+------+----------+

""")

# Benchmark Section
st.markdown('<div class="sub-title">Benchmark</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>Evaluating the performance of NER models is crucial to understanding their effectiveness in real-world applications. Below are the benchmark results for the "ner_wikiner_glove_840B_300" model on <strong>French</strong> text, broken down by named entity category. The metrics used are precision, recall, and F1-score, the standard measures for evaluating classification models; their definitions are recalled below for reference.</p>

</div>

""", unsafe_allow_html=True)
st.markdown("""

---

#### Classification Report



| Label | Precision | Recall | F1-Score | Support |

|-------|-----------|--------|----------|---------|

| B-LOC | 0.81      | 0.87   | 0.84     | 11482   |

| I-ORG | 0.81      | 0.74   | 0.77     | 2143    |

| I-LOC | 0.75      | 0.60   | 0.67     | 4495    |

| I-PER | 0.95      | 0.94   | 0.95     | 5339    |

| B-ORG | 0.86      | 0.78   | 0.82     | 2556    |

| B-PER | 0.92      | 0.94   | 0.93     | 7524    |



#### Averages



| Metric         | Precision | Recall | F1-Score | Support |

|----------------|-----------|--------|----------|---------|

| Micro Average  | 0.86      | 0.85   | 0.85     | 33539   |

| Macro Average  | 0.85      | 0.81   | 0.83     | 33539   |

| Weighted Avg   | 0.86      | 0.85   | 0.85     | 33539   |



#### Overall Metrics



- Processed 348,522 tokens with 25,499 phrases; found: 25,298 phrases; correct: 21,261.

- Accuracy (non-O): **80.86%**

- Overall Accuracy: **97.44%**

- Precision: **84.04%**

- Recall: **83.38%**

- F1 Score: **83.71**



#### Entity-Specific Metrics



| Entity | Precision | Recall | F1-Score | Instances |

|--------|-----------|--------|----------|-----------|

| LOC    | 80.06%    | 85.79% | 82.83    | 12303     |

| MISC   | 82.42%    | 63.25% | 71.57    | 3021      |

| ORG    | 83.49%    | 75.59% | 79.34    | 2314      |

| PER    | 91.24%    | 92.89% | 92.06    | 7660      |



---

""", unsafe_allow_html=True)

# Summary
st.markdown('<div class="sub-title">Summary</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>In this article, we discussed named entity recognition using a deep learning-based method with the multilingual "ner_wikiner_glove_840B_300" model, benchmarked above on French text. We showed how to perform the task using the open-source Spark NLP library with Python, which can be used at scale in the Spark ecosystem. These methods can be applied to natural language processing applications in various fields, including finance and healthcare.</p>

</div>

""", unsafe_allow_html=True)

# References
st.markdown('<div class="sub-title">References</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <ul>

        <li><a class="link" href="https://sparknlp.org/api/python/reference/autosummary/sparknlp/annotator/ner/ner_dl/index.html" target="_blank" rel="noopener">NerDLModel</a> annotator documentation</li>

        <li>Model Used: <a class="link" href="https://sparknlp.org/2021/07/19/ner_wikiner_glove_840B_300_xx.html" target="_blank" rel="noopener">ner_wikiner_glove_840B_300</a></li>

        <li><a class="link" href="https://nlp.johnsnowlabs.com/recognize_entitie" target="_blank" rel="noopener">Visualization demos for NER in Spark NLP</a></li>

        <li><a class="link" href="https://www.johnsnowlabs.com/named-entity-recognition-ner-with-bert-in-spark-nlp/">Named Entity Recognition (NER) with BERT in Spark NLP</a></li>

    </ul>

</div>

""", unsafe_allow_html=True)

# Community & Support
st.markdown('<div class="sub-title">Community & Support</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <ul>

        <li><a class="link" href="https://sparknlp.org/" target="_blank">Official Website</a>: Documentation and examples</li>

        <li><a class="link" href="https://join.slack.com/t/spark-nlp/shared_invite/zt-198dipu77-L3UWNe_AJf4Rqb3DaMb-7A" target="_blank">Slack Community</a>: Connect with other Spark NLP users</li>

        <li><a class="link" href="https://github.com/JohnSnowLabs/spark-nlp" target="_blank">GitHub Repository</a>: Source code and issue tracker</li>

    </ul>

</div>

""", unsafe_allow_html=True)