import streamlit as st
import pandas as pd

# Custom CSS for better styling
st.markdown("""
""", unsafe_allow_html=True)

# Main Title
st.markdown('Detect 10 Different Entities in Hebrew (hebrew_cc_300d embeddings)', unsafe_allow_html=True)

# Introduction
st.markdown("""

Named Entity Recognition (NER) models identify and categorize important entities in text. This page details a word-embeddings-based NER model for Hebrew, built on the hebrew_cc_300d word embeddings. The model is pretrained and available for use with Spark NLP.

""", unsafe_allow_html=True) # Model Description st.markdown('
Description
', unsafe_allow_html=True) st.markdown("""

This model uses Hebrew word embeddings to find 10 different types of entities in Hebrew text. It was trained with the hebrew_cc_300d word embeddings, so use the same embeddings in the pipeline. It can identify the following entity types (as used in the benchmark below): PERS, ORG, LOC, DATE, TIME, MONEY, PERCENT, MISC_AFF, MISC_EVENT, and MISC_ENT.

""", unsafe_allow_html=True) # Setup Instructions st.markdown('
Setup
', unsafe_allow_html=True) st.markdown('

To use the model, you need Spark NLP installed. You can install it using pip:

', unsafe_allow_html=True) st.code(""" pip install spark-nlp pip install pyspark """, language="bash") st.markdown("

Then, import Spark NLP and start a Spark session:

", unsafe_allow_html=True) st.code(""" import sparknlp # Start Spark Session spark = sparknlp.start() """, language='python') # Example Usage st.markdown('
Example Usage with Hebrew NER Model
', unsafe_allow_html=True) st.markdown("""

Below is an example of how to set up and use the hebrewner_cc_300d model for named entity recognition in Hebrew:

""", unsafe_allow_html=True) st.code(''' from sparknlp.base import * from sparknlp.annotator import * from pyspark.ml import Pipeline from pyspark.sql.functions import col, expr, round, concat, lit, explode # Define the components of the pipeline documentAssembler = DocumentAssembler() \\ .setInputCol("text") \\ .setOutputCol("document") sentence_detector = SentenceDetector() \\ .setInputCols(["document"]) \\ .setOutputCol("sentence") tokenizer = Tokenizer() \\ .setInputCols(["sentence"]) \\ .setOutputCol("token") word_embeddings = WordEmbeddingsModel.pretrained("hebrew_cc_300d", "he") \\ .setInputCols(["sentence", "token"]) \\ .setOutputCol("embeddings") ner = NerDLModel.pretrained("hebrewner_cc_300d", "he") \\ .setInputCols(["sentence", "token", "embeddings"]) \\ .setOutputCol("ner") ner_converter = NerConverter().setInputCols(["sentence", "token", "ner"]).setOutputCol("ner_chunk") # Create the pipeline pipeline = Pipeline(stages=[documentAssembler, sentence_detector, tokenizer, word_embeddings, ner, ner_converter]) # Create sample data example = """ ב- 25 לאוגוסט עצר השב"כ את מוחמד אבו-ג'וייד , אזרח ירדני , שגויס לארגון הפת"ח והופעל על ידי חיזבאללה. אבו-ג'וייד התכוון להקים חוליות טרור בגדה ובקרב ערביי ישראל , לבצע פיגוע ברכבת ישראל בנהריה , לפגוע במטרות ישראליות בירדן ולחטוף חיילים כדי לשחרר אסירים ביטחוניים. 
""" data = spark.createDataFrame([[example]]).toDF("text") # Fit and transform data with the pipeline result = pipeline.fit(data).transform(data) # Select the result, entity result.select( expr("explode(ner_chunk) as ner_chunk") ).select( col("ner_chunk.result").alias("chunk"), col("ner_chunk.metadata").getItem("entity").alias("ner_label") ).show(truncate=False) ''', language="python") import pandas as pd # Create the data for the DataFrame data = { "chunk": [ "25 לאוגוסט", "השב\"כ", "מוחמד אבו-ג'וייד", "ירדני", "הפת\"ח", "חיזבאללה", "אבו-ג'וייד", "בגדה", "ערביי", "ישראל", "ברכבת ישראל", "בנהריה", "ישראליות", "בירדן" ], "ner_label": [ "DATE", "ORG", "PERS", "MISC_AFF", "ORG", "ORG", "PERS", "LOC", "MISC_AFF", "LOC", "ORG", "LOC", "MISC_AFF", "LOC" ] } # Creating the DataFrame df = pd.DataFrame(data) df.index += 1 st.dataframe(df) # Model Information st.markdown('
# Model Information
st.markdown('Model Information', unsafe_allow_html=True)

st.markdown("""

The hebrewner_cc_300d model details are as follows:

""", unsafe_allow_html=True) # Benchmark Section st.markdown('
Benchmark
', unsafe_allow_html=True) st.markdown("""

Evaluating the performance of NER models is crucial to understanding their effectiveness in real-world applications. Below are the benchmark results for the hebrewner_cc_300d model, focusing on various named entity categories. The metrics used include precision, recall, and F1-score, which are standard for evaluating classification models.

""", unsafe_allow_html=True) st.markdown(""" --- | Label | TP | FP | FN | Precision | Recall | F1-Score | |--------------|-----|-----|-----|-----------|---------|----------| | I-TIME | 5 | 2 | 0 | 0.714286 | 1.000000| 0.833333 | | I-MISC_AFF | 2 | 0 | 3 | 1.000000 | 0.400000| 0.571429 | | B-MISC_EVENT | 7 | 0 | 1 | 1.000000 | 0.875000| 0.933333 | | B-LOC | 180 | 24 | 37 | 0.882353 | 0.829493| 0.855107 | | I-ORG | 124 | 47 | 38 | 0.725146 | 0.765432| 0.744745 | | B-DATE | 50 | 4 | 7 | 0.925926 | 0.877193| 0.900901 | | I-PERS | 157 | 10 | 15 | 0.940120 | 0.912791| 0.926254 | | I-DATE | 39 | 7 | 8 | 0.847826 | 0.829787| 0.838710 | | B-MISC_AFF | 132 | 11 | 9 | 0.923077 | 0.936170| 0.929577 | | I-MISC_EVENT | 6 | 0 | 2 | 1.000000 | 0.750000| 0.857143 | | B-TIME | 4 | 0 | 1 | 1.000000 | 0.800000| 0.888889 | | I-PERCENT | 8 | 0 | 0 | 1.000000 | 1.000000| 1.000000 | | I-MISC_ENT | 11 | 3 | 10 | 0.785714 | 0.523810| 0.628571 | | B-MISC_ENT | 8 | 1 | 5 | 0.888889 | 0.615385| 0.727273 | | I-LOC | 79 | 18 | 23 | 0.814433 | 0.774510| 0.793970 | | B-PERS | 231 | 22 | 26 | 0.913044 | 0.898833| 0.905882 | | B-MONEY | 36 | 2 | 2 | 0.947368 | 0.947368| 0.947368 | | B-PERCENT | 28 | 3 | 0 | 0.903226 | 1.000000| 0.949152 | | B-ORG | 166 | 41 | 37 | 0.801932 | 0.817734| 0.809756 | | I-MONEY | 61 | 1 | 1 | 0.983871 | 0.983871| 0.983871 | | Macro-average| 1334| 196 | 225 | 0.899861 | 0.826869| 0.861822 | | Micro-average| 1334| 196 | 225 | 0.871895 | 0.855677| 0.863710 | """, unsafe_allow_html=True) # Summary st.markdown('
# Summary
st.markdown('Summary', unsafe_allow_html=True)

st.markdown("""

This page provided an overview of the hebrewner_cc_300d model for Hebrew NER. We discussed how to set up and use the model with Spark NLP, including example code and results. We also provided details on the model's specifications and links to relevant resources for further exploration.

""", unsafe_allow_html=True) # References st.markdown('
References
', unsafe_allow_html=True) st.markdown("""
""", unsafe_allow_html=True) # Community & Support st.markdown('
Community & Support
', unsafe_allow_html=True) st.markdown("""
""", unsafe_allow_html=True)