import streamlit as st

# Custom CSS for better styling
st.markdown("""

    <style>

        .main-title {

            font-size: 36px;

            color: #4A90E2;

            font-weight: bold;

            text-align: center;

        }

        .sub-title {

            font-size: 24px;

            color: #4A90E2;

            margin-top: 20px;

        }

        .section {

            background-color: #f9f9f9;

            padding: 15px;

            border-radius: 10px;

            margin-top: 20px;

        }

        .section h2 {

            font-size: 22px;

            color: #4A90E2;

        }

        .section p, .section ul {

            color: #666666;

        }

        .link {

            color: #4A90E2;

            text-decoration: none;

        }

    </style>

""", unsafe_allow_html=True)

# Main Title
st.markdown('<div class="main-title">Named Entity Recognition for Korean (GloVe 840B 300d)</div>', unsafe_allow_html=True)

# Description
st.markdown('<div class="sub-title">Description</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>This model annotates named entities in a text, which can be used to find features such as names of people, places, and organizations in the BIO format. The model does not read words directly but instead reads word embeddings, which represent words as points such that more semantically similar words are closer together.</p>

    <p>This model uses the pre-trained <code>glove_840B_300</code> embeddings model from WordEmbeddings annotator as an input, so be sure to use the same embeddings in the pipeline.</p>

</div>

""", unsafe_allow_html=True)

# Predicted Entities
st.markdown('<div class="sub-title">Predicted Entities</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <ul>

        <li>Dates-DT</li>

        <li>Locations-LC</li>

        <li>Organizations-OG</li>

        <li>Persons-PS</li>

        <li>Time-TI</li>

    </ul>

</div>

""", unsafe_allow_html=True)

# How to use
st.markdown('<div class="sub-title">How to use</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>To use this model, follow these steps in Python:</p>

</div>

""", unsafe_allow_html=True)
st.code("""

from sparknlp.base import *

from sparknlp.annotator import *

from pyspark.ml import Pipeline



# Define the components of the pipeline

document_assembler = DocumentAssembler() \\

    .setInputCol("text") \\

    .setOutputCol("document")



sentence_detector = SentenceDetector() \\

    .setInputCols(["document"]) \\

    .setOutputCol("sentence")



word_segmenter = WordSegmenterModel.pretrained("wordseg_kaist_ud", "ko") \\

    .setInputCols(["sentence"]) \\

    .setOutputCol("token")



embeddings = WordEmbeddingsModel.pretrained("glove_840B_300", "xx") \\

    .setInputCols(["document", "token"]) \\

    .setOutputCol("embeddings")



ner = NerDLModel.pretrained("ner_kmou_glove_840B_300d", "ko") \\

    .setInputCols(["document", "token", "embeddings"]) \\

    .setOutputCol("ner")



ner_converter = NerConverter().setInputCols(["document", "token", "ner"]).setOutputCol("ner_chunk")



# Create the pipeline

pipeline = Pipeline(stages=[document_assembler, sentence_detector, word_segmenter, embeddings, ner, ner_converter])



# Create sample data

example = spark.createDataFrame([['라이프니츠 의 주도 로 베를린 에 세우 어 지 ㄴ 베를린 과학아카데미']], ["text"])



# Fit and transform data with the pipeline

result = pipeline.fit(example).transform(example)



# Select the result, entity

result.select(

    expr("explode(ner_chunk) as ner_chunk")

).select(

    col("ner_chunk.result").alias("chunk"),

    col("ner_chunk.metadata").getItem("entity").alias("ner_label")

).show(truncate=False)

""", language="python")

# Results
st.markdown('<div class="sub-title">Results</div>', unsafe_allow_html=True)

import pandas as pd

# Create the data for the DataFrame
data = {
    "token": ["라이프니츠", "베를린", "과학아카데미"],
    "ner": ["B-PS", "B-OG", "I-OG"]
}

# Creating the DataFrame
df = pd.DataFrame(data)
df.index += 1
st.dataframe(df)

# Model Information
st.markdown('<div class="sub-title">Model Information</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>The <code>ner_kmou_glove_840B_300d</code> model details are as follows:</p>

    <ul>

        <li><strong>Model Name:</strong> ner_kmou_glove_840B_300d</li>

        <li><strong>Type:</strong> ner</li>

        <li><strong>Compatibility:</strong> Spark NLP 2.7.0+</li>

        <li><strong>License:</strong> Open Source</li>

        <li><strong>Edition:</strong> Official</li>

        <li><strong>Input Labels:</strong> [sentence, token, embeddings]</li>

        <li><strong>Output Labels:</strong> [ner]</li>

        <li><strong>Language:</strong> ko</li>

    </ul>

</div>

""", unsafe_allow_html=True)

# Data Source
st.markdown('<div class="sub-title">Data Source</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>The model was trained by the Korea Maritime and Ocean University NLP data set.</p>

</div>

""", unsafe_allow_html=True)

# Benchmarking
st.markdown('<div class="sub-title">Benchmarking</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>Evaluating the performance of NER models is crucial to understanding their effectiveness in real-world applications. Below are the benchmark results for the <code>ner_kmou_glove_840B_300d</code> model, focusing on various named entity categories. The metrics used include precision, recall, and F1-score, which are standard for evaluating classification models.</p>

</div>

""", unsafe_allow_html=True)
st.markdown("""

---

|    ner_tag   | precision | recall | f1-score | support |

|:------------:|:---------:|:------:|:--------:|:-------:|

|     B-DT     |    0.95   |  0.29  |   0.44   |   132   |

|     B-LC     |    0.00   |  0.00  |   0.00   |   166   |

|     B-OG     |    1.00   |  0.06  |   0.11   |   149   |

|     B-PS     |    0.86   |  0.13  |   0.23   |   287   |

|     B-TI     |    0.50   |  0.05  |   0.09   |    20   |

|     I-DT     |    0.94   |  0.36  |   0.52   |   164   |

|     I-LC     |    0.00   |  0.00  |   0.00   |    4    |

|     I-OG     |    1.00   |  0.08  |   0.15   |    25   |

|     I-PS     |    1.00   |  0.08  |   0.15   |    12   |

|     I-TI     |    0.50   |  0.10  |   0.17   |    10   |

|       O      |    0.94   |  1.00  |   0.97   |  12830  |

|   accuracy   |    0.94   |  13799 |          |         |

|   macro avg  |    0.70   |  0.20  |   0.26   |  13799  |

| weighted avg |    0.93   |  0.94  |   0.92   |  13799  |

""", unsafe_allow_html=True)

st.markdown("""

<div class="section">

    <p>These results demonstrate the model's ability to accurately identify and classify named entities in Korean text. Precision measures the accuracy of the positive predictions, recall measures the model's ability to find all relevant instances, and F1-score provides a balance between precision and recall.</p>

</div>

""", unsafe_allow_html=True)

# Conclusion/Summary
st.markdown('<div class="sub-title">Conclusion</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>The <code>ner_kmou_glove_840B_300d</code> model demonstrates effective named entity recognition in Korean texts, with varied performance metrics across different entity types. This model leverages <code>glove_840B_300</code> embeddings to enhance its understanding and accuracy in identifying entities like persons, locations, organizations, and more. Its integration into Spark NLP allows for efficient and scalable processing of Korean text data, making it a valuable tool for researchers and developers working with Korean language applications.</p>

</div>

""", unsafe_allow_html=True)

# References
st.markdown('<div class="sub-title">References</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <ul>

        <li><a class="link" href="https://sparknlp.org/api/python/reference/autosummary/sparknlp/annotator/ner/ner_dl/index.html" target="_blank" rel="noopener">NerDLModel</a> annotator documentation</li>

        <li>Model Used: <a class="link" href="https://sparknlp.org/2021/01/03/ner_ud_kaist_glove_840B_300d_ko.html" rel="noopener">ner_kmou_glove_840B_300d</a></li>

        <li><a class="link" href="https://nlp.johnsnowlabs.com/recognize_entitie" target="_blank" rel="noopener">Visualization demos for NER in Spark NLP</a></li>

        <li><a class="link" href="https://www.johnsnowlabs.com/named-entity-recognition-ner-with-bert-in-spark-nlp/">Named Entity Recognition (NER) with BERT in Spark NLP</a></li>

    </ul>

</div>

""", unsafe_allow_html=True)

# Community & Support
st.markdown('<div class="sub-title">Community & Support</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <ul>

        <li><a class="link" href="https://sparknlp.org/" target="_blank">Official Website</a>: Documentation and examples</li>

        <li><a class="link" href="https://github.com/JohnSnowLabs/spark-nlp" target="_blank">GitHub Repository</a>: Report issues or contribute</li>

        <li><a class="link" href="https://forum.johnsnowlabs.com/" target="_blank">Community Forum</a>: Ask questions, share ideas, and get support</li>

    </ul>

</div>

""", unsafe_allow_html=True)