import streamlit as st
import pandas as pd

# Main Title
st.markdown("# Named Entity Recognition for Korean (GloVe 840B 300d)")

# Description
st.markdown("## Description")
st.markdown("""

This model annotates named entities in a text and can be used to extract features such as names of people, places, and organizations, labeled in the BIO format (illustrated below). The model does not read words directly; instead it reads word embeddings, which represent each word as a vector so that semantically similar words lie closer together.

This model uses the pre-trained glove_840B_300 embeddings produced by the WordEmbeddingsModel annotator as its input, so be sure to use the same embeddings in the pipeline.

""", unsafe_allow_html=True) # Predicted Entities st.markdown('
# Predicted Entities
st.markdown("## Predicted Entities")
st.markdown("""
`DT`, `LC`, `OG`, `PS`, `TI` (tagged in the BIO scheme, as shown in the benchmarking table below).
""")

# How to use
st.markdown("## How to use")
st.markdown("""

To use this model, follow these steps in Python:

""", unsafe_allow_html=True) st.code(""" from sparknlp.base import * from sparknlp.annotator import * from pyspark.ml import Pipeline # Define the components of the pipeline document_assembler = DocumentAssembler() \\ .setInputCol("text") \\ .setOutputCol("document") sentence_detector = SentenceDetector() \\ .setInputCols(["document"]) \\ .setOutputCol("sentence") word_segmenter = WordSegmenterModel.pretrained("wordseg_kaist_ud", "ko") \\ .setInputCols(["sentence"]) \\ .setOutputCol("token") embeddings = WordEmbeddingsModel.pretrained("glove_840B_300", "xx") \\ .setInputCols(["document", "token"]) \\ .setOutputCol("embeddings") ner = NerDLModel.pretrained("ner_kmou_glove_840B_300d", "ko") \\ .setInputCols(["document", "token", "embeddings"]) \\ .setOutputCol("ner") ner_converter = NerConverter().setInputCols(["document", "token", "ner"]).setOutputCol("ner_chunk") # Create the pipeline pipeline = Pipeline(stages=[document_assembler, sentence_detector, word_segmenter, embeddings, ner, ner_converter]) # Create sample data example = spark.createDataFrame([['라이프니츠 의 주도 로 베를린 에 세우 어 지 ㄴ 베를린 과학아카데미']], ["text"]) # Fit and transform data with the pipeline result = pipeline.fit(example).transform(example) # Select the result, entity result.select( expr("explode(ner_chunk) as ner_chunk") ).select( col("ner_chunk.result").alias("chunk"), col("ner_chunk.metadata").getItem("entity").alias("ner_label") ).show(truncate=False) """, language="python") # Results import pandas as pd # Create the data for the DataFrame data = { "token": ["라이프니츠", "베를린", "과학아카데미"], "ner": ["B-PS", "B-OG", "I-OG"] } # Creating the DataFrame df = pd.DataFrame(data) df.index += 1 st.dataframe(df) # Model Information st.markdown('
# Model Information
st.markdown("## Model Information")
st.markdown("""

Key details of the ner_kmou_glove_840B_300d model, as used in the pipeline above:

| Item | Value |
|---|---|
| Model name | ner_kmou_glove_840B_300d |
| Language | ko (Korean) |
| Input columns | document, token, embeddings |
| Output column | ner |
| Embeddings | glove_840B_300 |

The tag set the model emits can also be inspected programmatically, as sketched below.

""", unsafe_allow_html=True) # Data Source st.markdown('
# Data Source
st.markdown("## Data Source")
st.markdown("""

The model was trained on the Korea Maritime and Ocean University (KMOU) NLP data set.

""", unsafe_allow_html=True) # Benchmarking st.markdown('
Benchmarking
', unsafe_allow_html=True) st.markdown("""

Evaluating the performance of NER models is crucial to understanding their effectiveness in real-world applications. Below are the benchmark results for the ner_kmou_glove_840B_300d model, focusing on various named entity categories. The metrics used include precision, recall, and F1-score, which are standard for evaluating classification models.

""", unsafe_allow_html=True) st.markdown(""" --- | ner_tag | precision | recall | f1-score | support | |:------------:|:---------:|:------:|:--------:|:-------:| | B-DT | 0.95 | 0.29 | 0.44 | 132 | | B-LC | 0.00 | 0.00 | 0.00 | 166 | | B-OG | 1.00 | 0.06 | 0.11 | 149 | | B-PS | 0.86 | 0.13 | 0.23 | 287 | | B-TI | 0.50 | 0.05 | 0.09 | 20 | | I-DT | 0.94 | 0.36 | 0.52 | 164 | | I-LC | 0.00 | 0.00 | 0.00 | 4 | | I-OG | 1.00 | 0.08 | 0.15 | 25 | | I-PS | 1.00 | 0.08 | 0.15 | 12 | | I-TI | 0.50 | 0.10 | 0.17 | 10 | | O | 0.94 | 1.00 | 0.97 | 12830 | | accuracy | 0.94 | 13799 | | | | macro avg | 0.70 | 0.20 | 0.26 | 13799 | | weighted avg | 0.93 | 0.94 | 0.92 | 13799 | """, unsafe_allow_html=True) st.markdown("""

These results show how well the model identifies and classifies named entities in Korean text. Precision measures the accuracy of the positive predictions, recall measures the model's ability to find all relevant instances, and the F1-score balances precision and recall; a sketch of how such a per-tag report can be computed follows below.

""", unsafe_allow_html=True) # Conclusion/Summary st.markdown('
# Conclusion/Summary
st.markdown("## Conclusion")
st.markdown("""

The ner_kmou_glove_840B_300d model provides named entity recognition for Korean text, with performance that varies across entity types, as the benchmarking table shows. It relies on the glove_840B_300 embeddings to identify entities such as persons, locations, organizations, dates, and times. Its integration into Spark NLP allows efficient, scalable processing of Korean text data (a small scaling sketch follows below), making it a useful tool for researchers and developers working on Korean-language applications.

""", unsafe_allow_html=True) # References st.markdown('
# References
st.markdown("## References")

# Community & Support
st.markdown("## Community & Support")