import streamlit as st
import pandas as pd
# Custom CSS for better styling
st.markdown("""
<style>
.main-title {
font-size: 36px;
color: #4A90E2;
font-weight: bold;
text-align: center;
}
.sub-title {
font-size: 24px;
color: #4A90E2;
margin-top: 20px;
}
.section {
background-color: #f9f9f9;
padding: 15px;
border-radius: 10px;
margin-top: 20px;
}
.section h2 {
font-size: 22px;
color: #4A90E2;
}
.section p, .section ul {
color: #666666;
}
.link {
color: #4A90E2;
text-decoration: none;
}
</style>
""", unsafe_allow_html=True)
# Main Title
st.markdown('<div class="main-title">Detect 10 Different Entities in Hebrew (hebrew_cc_300d embeddings)</div>', unsafe_allow_html=True)
# Introduction
st.markdown("""
<div class="section">
<p>Named Entity Recognition (NER) models identify and categorize important entities in text. This page describes a word-embeddings-based NER model for Hebrew that uses the <code>hebrew_cc_300d</code> word embeddings. The model is pretrained and ready to use with Spark NLP.</p>
</div>
""", unsafe_allow_html=True)
# Model Description
st.markdown('<div class="sub-title">Description</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
<p>This model uses Hebrew word embeddings to identify 10 different types of entities in Hebrew text. It was trained with the <code>hebrew_cc_300d</code> word embeddings, so the same embeddings must be used in the pipeline. It can identify the following entity types (the raw predictions use the IOB tagging scheme, as sketched after this list):</p>
<ul>
<li>PERS (Persons)</li>
<li>DATE (Dates)</li>
<li>ORG (Organizations)</li>
<li>LOC (Locations)</li>
<li>PERCENT (Percentage)</li>
<li>MONEY (Money)</li>
<li>TIME (Time)</li>
<li>MISC_AFF (Miscellaneous Affiliation)</li>
<li>MISC_EVENT (Miscellaneous Event)</li>
<li>MISC_ENT (Miscellaneous Entity)</li>
</ul>
</div>
""", unsafe_allow_html=True)
# Setup Instructions
st.markdown('<div class="sub-title">Setup</div>', unsafe_allow_html=True)
st.markdown('<p>To use the model, you need Spark NLP installed. You can install it using pip:</p>', unsafe_allow_html=True)
st.code("""
pip install spark-nlp
pip install pyspark
""", language="bash")
st.markdown("<p>Then, import Spark NLP and start a Spark session:</p>", unsafe_allow_html=True)
st.code("""
import sparknlp
# Start Spark Session
spark = sparknlp.start()
""", language='python')
# Example Usage
st.markdown('<div class="sub-title">Example Usage with Hebrew NER Model</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
<p>Below is an example of how to set up and use the <code>hebrewner_cc_300d</code> model for named entity recognition in Hebrew:</p>
</div>
""", unsafe_allow_html=True)
st.code('''
from sparknlp.base import *
from sparknlp.annotator import *
from pyspark.ml import Pipeline
from pyspark.sql.functions import col, expr
# Define the components of the pipeline
documentAssembler = DocumentAssembler() \\
.setInputCol("text") \\
.setOutputCol("document")
sentence_detector = SentenceDetector() \\
.setInputCols(["document"]) \\
.setOutputCol("sentence")
tokenizer = Tokenizer() \\
.setInputCols(["sentence"]) \\
.setOutputCol("token")
word_embeddings = WordEmbeddingsModel.pretrained("hebrew_cc_300d", "he") \\
.setInputCols(["sentence", "token"]) \\
.setOutputCol("embeddings")
ner = NerDLModel.pretrained("hebrewner_cc_300d", "he") \\
.setInputCols(["sentence", "token", "embeddings"]) \\
.setOutputCol("ner")
ner_converter = NerConverter() \\
.setInputCols(["sentence", "token", "ner"]) \\
.setOutputCol("ner_chunk")
# Create the pipeline
pipeline = Pipeline(stages=[documentAssembler, sentence_detector, tokenizer, word_embeddings, ner, ner_converter])
# Create sample data
example = """
ื‘- 25 ืœืื•ื’ื•ืกื˜ ืขืฆืจ ื”ืฉื‘"ื› ืืช ืžื•ื—ืžื“ ืื‘ื•-ื’'ื•ื™ื™ื“ , ืื–ืจื— ื™ืจื“ื ื™ , ืฉื’ื•ื™ืก ืœืืจื’ื•ืŸ ื”ืคืช"ื— ื•ื”ื•ืคืขืœ ืขืœ ื™ื“ื™ ื—ื™ื–ื‘ืืœืœื”. ืื‘ื•-ื’'ื•ื™ื™ื“ ื”ืชื›ื•ื•ืŸ ืœื”ืงื™ื ื—ื•ืœื™ื•ืช ื˜ืจื•ืจ ื‘ื’ื“ื” ื•ื‘ืงืจื‘ ืขืจื‘ื™ื™ ื™ืฉืจืืœ , ืœื‘ืฆืข ืคื™ื’ื•ืข ื‘ืจื›ื‘ืช ื™ืฉืจืืœ ื‘ื ื”ืจื™ื” , ืœืคื’ื•ืข ื‘ืžื˜ืจื•ืช ื™ืฉืจืืœื™ื•ืช ื‘ื™ืจื“ืŸ ื•ืœื—ื˜ื•ืฃ ื—ื™ื™ืœื™ื ื›ื“ื™ ืœืฉื—ืจืจ ืืกื™ืจื™ื ื‘ื™ื˜ื—ื•ื ื™ื™ื.
"""
data = spark.createDataFrame([[example]]).toDF("text")
# Fit and transform data with the pipeline
result = pipeline.fit(data).transform(data)
# Explode the NER chunks and show each chunk with its entity label
result.select(
expr("explode(ner_chunk) as ner_chunk")
).select(
col("ner_chunk.result").alias("chunk"),
col("ner_chunk.metadata").getItem("entity").alias("ner_label")
).show(truncate=False)
''', language="python")
# Expected output of the pipeline above, shown as a table
data = {
"chunk": [
"25 ืœืื•ื’ื•ืกื˜",
"ื”ืฉื‘\"ื›",
"ืžื•ื—ืžื“ ืื‘ื•-ื’'ื•ื™ื™ื“",
"ื™ืจื“ื ื™",
"ื”ืคืช\"ื—",
"ื—ื™ื–ื‘ืืœืœื”",
"ืื‘ื•-ื’'ื•ื™ื™ื“",
"ื‘ื’ื“ื”",
"ืขืจื‘ื™ื™",
"ื™ืฉืจืืœ",
"ื‘ืจื›ื‘ืช ื™ืฉืจืืœ",
"ื‘ื ื”ืจื™ื”",
"ื™ืฉืจืืœื™ื•ืช",
"ื‘ื™ืจื“ืŸ"
],
"ner_label": [
"DATE",
"ORG",
"PERS",
"MISC_AFF",
"ORG",
"ORG",
"PERS",
"LOC",
"MISC_AFF",
"LOC",
"ORG",
"LOC",
"MISC_AFF",
"LOC"
]
}
# Creating the DataFrame
df = pd.DataFrame(data)
df.index += 1
st.dataframe(df)
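st.markdown("""
<div class="section">
<p>For quick experiments on individual strings, the fitted pipeline can also be wrapped in a <code>LightPipeline</code>, which avoids building a Spark DataFrame for every input. Below is a minimal sketch that reuses the <code>pipeline</code>, <code>data</code>, and <code>example</code> objects from the code above:</p>
</div>
""", unsafe_allow_html=True)
st.code('''
from sparknlp.base import LightPipeline

# Fit the pipeline once, then wrap it for fast in-memory inference
light_pipeline = LightPipeline(pipeline.fit(data))

# fullAnnotate keeps the metadata, so the entity label of each chunk is available
for annotation in light_pipeline.fullAnnotate(example)[0]["ner_chunk"]:
    print(annotation.result, "->", annotation.metadata["entity"])
''', language="python")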
# Model Information
st.markdown('<div class="sub-title">Model Information</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
<p>The <code>hebrewner_cc_300d</code> model details are as follows:</p>
<ul>
<li><strong>Model Name:</strong> hebrewner_cc_300d</li>
<li><strong>Type:</strong> ner</li>
<li><strong>Compatibility:</strong> Spark NLP 4.0.2+</li>
<li><strong>License:</strong> Open Source</li>
<li><strong>Edition:</strong> Official</li>
<li><strong>Input Labels:</strong> [document, token, word_embeddings]</li>
<li><strong>Output Labels:</strong> [ner]</li>
<li><strong>Language:</strong> he</li>
<li><strong>Size:</strong> 14.8 MB</li>
</ul>
</div>
""", unsafe_allow_html=True)
# Benchmark Section
st.markdown('<div class="sub-title">Benchmark</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
<p>Evaluating the performance of NER models is crucial to understanding their effectiveness in real-world applications. Below are the benchmark results for the <code>hebrewner_cc_300d</code> model, focusing on various named entity categories. The metrics used include precision, recall, and F1-score, which are standard for evaluating classification models.</p>
</div>
""", unsafe_allow_html=True)
st.markdown("""
---
| Label | TP | FP | FN | Precision | Recall | F1-Score |
|--------------|-----|-----|-----|-----------|---------|----------|
| I-TIME | 5 | 2 | 0 | 0.714286 | 1.000000| 0.833333 |
| I-MISC_AFF | 2 | 0 | 3 | 1.000000 | 0.400000| 0.571429 |
| B-MISC_EVENT | 7 | 0 | 1 | 1.000000 | 0.875000| 0.933333 |
| B-LOC | 180 | 24 | 37 | 0.882353 | 0.829493| 0.855107 |
| I-ORG | 124 | 47 | 38 | 0.725146 | 0.765432| 0.744745 |
| B-DATE | 50 | 4 | 7 | 0.925926 | 0.877193| 0.900901 |
| I-PERS | 157 | 10 | 15 | 0.940120 | 0.912791| 0.926254 |
| I-DATE | 39 | 7 | 8 | 0.847826 | 0.829787| 0.838710 |
| B-MISC_AFF | 132 | 11 | 9 | 0.923077 | 0.936170| 0.929577 |
| I-MISC_EVENT | 6 | 0 | 2 | 1.000000 | 0.750000| 0.857143 |
| B-TIME | 4 | 0 | 1 | 1.000000 | 0.800000| 0.888889 |
| I-PERCENT | 8 | 0 | 0 | 1.000000 | 1.000000| 1.000000 |
| I-MISC_ENT | 11 | 3 | 10 | 0.785714 | 0.523810| 0.628571 |
| B-MISC_ENT | 8 | 1 | 5 | 0.888889 | 0.615385| 0.727273 |
| I-LOC | 79 | 18 | 23 | 0.814433 | 0.774510| 0.793970 |
| B-PERS | 231 | 22 | 26 | 0.913044 | 0.898833| 0.905882 |
| B-MONEY | 36 | 2 | 2 | 0.947368 | 0.947368| 0.947368 |
| B-PERCENT | 28 | 3 | 0 | 0.903226 | 1.000000| 0.949152 |
| B-ORG | 166 | 41 | 37 | 0.801932 | 0.817734| 0.809756 |
| I-MONEY | 61 | 1 | 1 | 0.983871 | 0.983871| 0.983871 |
| Macro-average| 1334| 196 | 225 | 0.899861 | 0.826869| 0.861822 |
| Micro-average| 1334| 196 | 225 | 0.871895 | 0.855677| 0.863710 |
""", unsafe_allow_html=True)
# Summary
st.markdown('<div class="sub-title">Summary</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
<p>This page provided an overview of the <code>hebrewner_cc_300d</code> model for Hebrew NER. We discussed how to set up and use the model with Spark NLP, including example code and results. We also provided details on the model's specifications and links to relevant resources for further exploration.</p>
</div>
""", unsafe_allow_html=True)
# References
st.markdown('<div class="sub-title">References</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
<ul>
<li><a class="link" href="https://sparknlp.org/api/python/reference/autosummary/sparknlp/annotator/ner/ner_dl/index.html" target="_blank" rel="noopener">NerDLModel</a> annotator documentation</li>
<li>Model Used: <a class="link" href="https://sparknlp.org/2022/08/09/hebrewner_cc_300d_he_3_0.html" target="_blank" rel="noopener">hebrewner_cc_300d_he_3_0</a></li>
<li><a class="link" href="https://www.cs.bgu.ac.il/~elhadad/nlpproj/naama/" target="_blank" rel="noopener">Data Source</a></li>
<li><a class="link" href="https://nlp.johnsnowlabs.com/recognize_entitie" target="_blank" rel="noopener">Visualization demos for NER in Spark NLP</a></li>
<li><a class="link" href="https://www.johnsnowlabs.com/named-entity-recognition-ner-with-bert-in-spark-nlp/">Named Entity Recognition (NER) with BERT in Spark NLP</a></li>
</ul>
</div>
""", unsafe_allow_html=True)
# Community & Support
st.markdown('<div class="sub-title">Community & Support</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
<ul>
<li><a class="link" href="https://sparknlp.org/" target="_blank">Official Website</a>: Documentation and examples</li>
<li><a class="link" href="https://github.com/JohnSnowLabs/spark-nlp" target="_blank">GitHub Repository</a>: Report issues or contribute</li>
<li><a class="link" href="https://forum.johnsnowlabs.com/" target="_blank">Community Forum</a>: Ask questions, share ideas, and get support</li>
</ul>
</div>
""", unsafe_allow_html=True)