import streamlit as st

# Custom CSS for better styling
st.markdown("""
""", unsafe_allow_html=True)

# Main Title
st.markdown('
State-of-the-Art Named Entity Recognition with Spark NLP (Italian)
', unsafe_allow_html=True)

# Introduction
st.markdown("""

Named Entity Recognition (NER) is the task of identifying important words in a text and associating them with a category. For example, we may be interested in finding all the personal names in documents, or company names in news articles. Other examples include domain-specific uses, such as identifying all disease names in a clinical text or company trading codes in financial documents.

NER can be implemented with many approaches. In this post, we introduce a deep learning-based method using the NerDL model, which leverages the scalability of Spark NLP with Python.

""", unsafe_allow_html=True) # Introduction to Spark NLP st.markdown('
Introduction to Spark NLP
', unsafe_allow_html=True) st.markdown("""

Spark NLP is an open-source library maintained by John Snow Labs. It is built on top of Apache Spark and Spark ML and provides simple, performant & accurate NLP annotations for machine learning pipelines that can scale easily in a distributed environment.

To install Spark NLP, you can simply use any package manager like conda or pip. For example, using pip you can simply run pip install spark-nlp. For different installation options, check the official documentation.

""", unsafe_allow_html=True) # Using NerDL Model st.markdown('
Using NerDL Model
', unsafe_allow_html=True) st.markdown("""

The NerDL model in Spark NLP is a deep learning-based approach to NER. It uses a Char CNNs - BiLSTM - CRF architecture that achieves state-of-the-art results on most datasets. The training data should be a labeled Spark DataFrame in the CoNLL 2003 IOB format, with annotation type columns.
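The CoNLL 2003 IOB layout mentioned above looks like the following (an illustrative fragment, not taken from the actual training data; the POS and chunk tags are placeholders). Each line holds a token, two tag columns, and the IOB label, where `B-`/`I-` mark the beginning/inside of an entity and `O` marks tokens outside any entity:

```text
Giuseppe NNP NNP B-PER
Verdi    NNP NNP I-PER
nacque   VBD VBD O
a        IN  IN  O
Busseto  NNP NNP B-LOC
.        .   .   O
```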

""", unsafe_allow_html=True) # Setup Instructions st.markdown('
Setup
', unsafe_allow_html=True) st.markdown('

To install Spark NLP in Python, use your favorite package manager (conda, pip, etc.). For example:

', unsafe_allow_html=True)
st.code("""
pip install spark-nlp
pip install pyspark
""", language="bash")
st.markdown("

Then, import Spark NLP and start a Spark session:

", unsafe_allow_html=True) st.code(""" import sparknlp # Start Spark Session spark = sparknlp.start() """, language='python') # Example Usage with NerDL Model in Italian st.markdown('
Example Usage with NerDL Model in Italian
', unsafe_allow_html=True)
st.markdown("""

Below is an example of how to set up and use the NerDL model for named entity recognition in Italian:

""", unsafe_allow_html=True) st.code(''' from sparknlp.base import * from sparknlp.annotator import * from pyspark.ml import Pipeline from pyspark.sql.functions import col, expr, round, concat, lit # Document Assembler document_assembler = DocumentAssembler() \\ .setInputCol("text") \\ .setOutputCol("document") # Tokenizer tokenizer = Tokenizer() \\ .setInputCols(["document"]) \\ .setOutputCol("token") # Word Embeddings embeddings = WordEmbeddingsModel.pretrained('glove_840B_300', lang='xx') \\ .setInputCols(["document", "token"]) \\ .setOutputCol("embeddings") # NerDL Model ner_model = NerDLModel.pretrained('ner_wikiner_glove_840B_300', 'xx') \\ .setInputCols(["document", "token", "embeddings"]) \\ .setOutputCol("ner") # NER Converter ner_converter = NerConverter() \\ .setInputCols(["document", "token", "ner"]) \\ .setOutputCol("ner_chunk") # Pipeline pipeline = Pipeline(stages=[ document_assembler, tokenizer, embeddings, ner_model, ner_converter ]) # Example sentence example = """ Giuseppe Verdi nacque a Le Roncole, una frazione di Busseto, il 10 ottobre 1813. Era un compositore italiano, noto per le sue opere come La Traviata e Rigoletto. """ data = spark.createDataFrame([[example]]).toDF("text") # Transforming data result = pipeline.fit(data).transform(data) # Select the result, entity, and confidence columns result.select( expr("explode(ner_chunk) as ner_chunk") ).select( col("ner_chunk.result").alias("result"), col("ner_chunk.metadata").getItem("entity").alias("entity"), concat( round((col("ner_chunk.metadata").getItem("confidence").cast("float") * 100), 2), lit("%") ).alias("confidence") ).show(truncate=False) ''', language="python") st.text(""" +--------------+------+----------+ |result |entity|confidence| +--------------+------+----------+ |Giuseppe Verdi|PER |66.19% | |Le Roncole |LOC |51.45% | |Busseto |LOC |85.39% | |La Traviata |MISC |69.35% | |Rigoletto |MISC |93.34% | +--------------+------+----------+ """) # Benchmark Section st.markdown('
Benchmark
', unsafe_allow_html=True)
st.markdown("""

Evaluating the performance of NER models is crucial to understanding their effectiveness in real-world applications. Below are the benchmark results for the "ner_wikiner_glove_840B_300" model on Italian text, focusing on various named entity categories. The metrics used include precision, recall, and F1-score, which are standard for evaluating classification models.
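As a quick sanity check on how these metrics relate, the F1-score is the harmonic mean of precision and recall. A minimal sketch, using the LOC figures from the category table below as an example:

```python
# F1 is the harmonic mean of precision and recall (values given in %).
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

# LOC row from the category table: precision 86.33%, recall 90.53%
print(round(f1_score(86.33, 90.53), 2))  # ≈ 88.38, matching the reported LOC F1
```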

', unsafe_allow_html=True)
st.markdown("""
Detailed Results

| Entity Type | Precision | Recall | F1-Score | Support |
|-------------|:---------:|:------:|:--------:|--------:|
| **B-LOC**   | 0.88      | 0.92   | 0.90     | 13,050  |
| **I-ORG**   | 0.78      | 0.71   | 0.74     | 1,211   |
| **I-LOC**   | 0.89      | 0.85   | 0.87     | 7,454   |
| **I-PER**   | 0.93      | 0.94   | 0.94     | 4,539   |
| **B-ORG**   | 0.88      | 0.72   | 0.79     | 2,222   |
| **B-PER**   | 0.90      | 0.93   | 0.92     | 7,206   |
Averages
| Average Type | Precision | Recall | F1-Score | Support |
|--------------|:---------:|:------:|:--------:|--------:|
| **Micro**    | 0.89      | 0.89   | 0.89     | 35,682  |
| **Macro**    | 0.88      | 0.85   | 0.86     | 35,682  |
| **Weighted** | 0.89      | 0.89   | 0.89     | 35,682  |
Category-Specific Performance
| Category | Precision | Recall | F1-Score | Support |
|----------|:---------:|:------:|:--------:|--------:|
| **LOC**  | 86.33%    | 90.53% | 88.38%   | 13,685  |
| **MISC** | 81.88%    | 67.03% | 73.72%   | 3,069   |
| **ORG**  | 85.91%    | 70.52% | 77.46%   | 1,824   |
| **PER**  | 89.54%    | 92.08% | 90.79%   | 7,410   |
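The phrase-level counts reported under *Additional Metrics* below can be related to precision and recall in the conlleval style (this is an assumed interpretation of those counts, not stated in the original benchmark): precision is correct phrases over found phrases, and recall is correct phrases over total gold phrases. A minimal sketch:

```python
# Phrase-level precision/recall derived from the reported counts
# (conlleval-style; an assumed reading of the figures listed below).
total_phrases = 26_227    # gold phrases
found_phrases = 25_988    # phrases predicted by the model
correct_phrases = 22_529  # predictions that exactly match a gold phrase

precision = correct_phrases / found_phrases  # ~86.7%
recall = correct_phrases / total_phrases     # ~85.9%
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.2%} recall={recall:.2%} f1={f1:.2%}")
```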
Additional Metrics
- **Processed Tokens:** 349,242
- **Total Phrases:** 26,227
- **Found Phrases:** 25,988
- **Correct Phrases:** 22,529
- **Accuracy (non-O):** 85.99%
- **Overall Accuracy:** 98.06%
""", unsafe_allow_html=True)

# Summary
st.markdown('
Summary
', unsafe_allow_html=True)
st.markdown("""

In this article, we discussed named entity recognition using a deep learning-based method with the "ner_wikiner_glove_840B_300" model for Italian. We showed how to perform the task using the open-source Spark NLP library with Python, which can run at scale in the Spark ecosystem. These methods can be used for natural language processing applications in various fields, including finance and healthcare.

""", unsafe_allow_html=True) # References st.markdown('
References
', unsafe_allow_html=True) st.markdown("""
""", unsafe_allow_html=True) # Community & Support st.markdown('
Community & Support
', unsafe_allow_html=True) st.markdown("""
""", unsafe_allow_html=True)