File size: 8,400 Bytes
68747de
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
import streamlit as st
import pandas as pd

# Custom CSS for better styling
st.markdown("""

    <style>

        .main-title {

            font-size: 36px;

            color: #4A90E2;

            font-weight: bold;

            text-align: center;

        }

        .sub-title {

            font-size: 24px;

            color: #4A90E2;

            margin-top: 20px;

        }

        .section {

            background-color: #f9f9f9;

            padding: 15px;

            border-radius: 10px;

            margin-top: 20px;

        }

        .section h2 {

            font-size: 22px;

            color: #4A90E2;

        }

        .section p, .section ul {

            color: #666666;

        }

        .link {

            color: #4A90E2;

            text-decoration: none;

        }

    </style>

""", unsafe_allow_html=True)

# Main Title
st.markdown('<div class="main-title">Persian Named Entity Recognition - Word Embeddings-based Model</div>', unsafe_allow_html=True)

# Introduction
st.markdown("""

<div class="section">

    <p>Named Entity Recognition (NER) models identify and categorize important entities in a text. This page details a word embeddings-based NER model for Persian texts, using the <code>persian_w2v_cc_300d</code> word embeddings. The model is pretrained and available for use with Spark NLP.</p>

</div>

""", unsafe_allow_html=True)

# Model Description
st.markdown('<div class="sub-title">Description</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>The <code>personer_cc_300d</code> model uses Persian word embeddings to find 6 different types of entities in Persian text. It is trained using <code>persian_w2v_cc_300d</code> word embeddings, so please use the same embeddings in the pipeline. It can identify the following types of entities:</p>

    <ul>

        <li>PER (Persons)</li>

        <li>FAC (Facilities)</li>

        <li>PRO (Products)</li>

        <li>LOC (Locations)</li>

        <li>ORG (Organizations)</li>

        <li>EVENT (Events)</li>

    </ul>

</div>

""", unsafe_allow_html=True)

# Setup Instructions
st.markdown('<div class="sub-title">Setup</div>', unsafe_allow_html=True)
st.markdown('<p>To use the model, you need Spark NLP installed. You can install it using pip:</p>', unsafe_allow_html=True)
st.code("""

pip install spark-nlp

pip install pyspark

""", language="bash")

st.markdown("<p>Then, import Spark NLP and start a Spark session:</p>", unsafe_allow_html=True)
st.code("""

import sparknlp



# Start Spark Session

spark = sparknlp.start()

""", language='python')

# Example Usage
st.markdown('<div class="sub-title">Example Usage with Persian NER Model</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>Below is an example of how to set up and use the <code>personer_cc_300d</code> model for named entity recognition in Persian:</p>

</div>

""", unsafe_allow_html=True)
st.code('''

from sparknlp.base import *

from sparknlp.annotator import *

from pyspark.ml import Pipeline

from pyspark.sql.functions import col, expr, round, concat, lit, explode



# Define the components of the pipeline

documentAssembler = DocumentAssembler() \\

    .setInputCol("text") \\

    .setOutputCol("document")



sentenceDetector = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \\

    .setInputCols(["document"]) \\

    .setOutputCol("sentence")



tokenizer = Tokenizer() \\

    .setInputCols(["sentence"]) \\

    .setOutputCol("token")



word_embeddings = WordEmbeddingsModel.pretrained("persian_w2v_cc_300d", "fa") \\

    .setInputCols(["document", "token"]) \\

    .setOutputCol("embeddings")



ner = NerDLModel.pretrained("personer_cc_300d", "fa") \\

    .setInputCols(["sentence", "token", "embeddings"]) \\

    .setOutputCol("ner")



ner_converter = NerConverter().setInputCols(["sentence", "token", "ner"]).setOutputCol("ner_chunk")



# Create the pipeline

pipeline = Pipeline(stages=[documentAssembler, sentenceDetector, tokenizer, word_embeddings, ner, ner_converter])



# Create sample data

example = """

به گزارش خبرنگار ایرنا ، بر اساس تصمیم این مجمع ، محمد قمی نماینده مردم پاکدشت به عنوان رئیس و علی‌اکبر موسوی خوئینی و شمس‌الدین وهابی نمایندگان مردم تهران به عنوان نواب رئیس انتخاب شدند

"""

data = spark.createDataFrame([[example]]).toDF("text")



# Fit and transform data with the pipeline

result = pipeline.fit(data).transform(data)



# Select the result, entity

result.select(

    expr("explode(ner_chunk) as ner_chunk")

).select(

    col("ner_chunk.result").alias("chunk"),

    col("ner_chunk.metadata").getItem("entity").alias("ner_label")

).show(truncate=False)

''', language="python")

import pandas as pd

# Create the data for the DataFrame
data = {
    "chunk": [
        "خبرنگار ایرنا", 
        "محمد قمی", 
        "پاکدشت", 
        "علی‌اکبر موسوی خوئینی", 
        "شمس‌الدین وهابی", 
        "تهران"
    ],
    "ner_label": [
        "ORG", 
        "PER", 
        "LOC", 
        "PER", 
        "PER", 
        "LOC"
    ]
}

# Creating the DataFrame
df = pd.DataFrame(data)
df.index += 1
st.dataframe(df)

# Model Information
st.markdown('<div class="sub-title">Model Information</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>The <code>personer_cc_300d</code> model details are as follows:</p>

    <ul>

        <li><strong>Model Name:</strong> personer_cc_300d</li>

        <li><strong>Type:</strong> ner</li>

        <li><strong>Compatibility:</strong> Spark NLP 2.7.0+</li>

        <li><strong>License:</strong> Open Source</li>

        <li><strong>Edition:</strong> Official</li>

        <li><strong>Input Labels:</strong> [document, token, word_embeddings]</li>

        <li><strong>Output Labels:</strong> [ner]</li>

        <li><strong>Language:</strong> fa</li>

        <li><strong>Dependencies:</strong> persian_w2v_cc_300d</li>

    </ul>

</div>

""", unsafe_allow_html=True)

# Summary
st.markdown('<div class="sub-title">Summary</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>This page provided an overview of the <code>personer_cc_300d</code> model for Persian NER. We discussed how to set up and use the model with Spark NLP, including example code and results. We also provided details on the model's specifications and links to relevant resources for further exploration.</p>

</div>

""", unsafe_allow_html=True)

# References
st.markdown('<div class="sub-title">References</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <ul>

        <li><a class="link" href="https://sparknlp.org/api/python/reference/autosummary/sparknlp/annotator/ner/ner_dl/index.html" target="_blank" rel="noopener">NerDLModel</a> annotator documentation</li>

        <li>Model Used: <a class="link" href="https://sparknlp.org/2020/12/07/personer_cc_300d_fa.html" rel="noopener">personer_cc_300d_fa</a></li>

        <li><a class="link" href="https://www.aclweb.org/anthology/C16-1319/" target="_blank" rel="noopener">Data Source</a></li>

        <li><a class="link" href="https://nlp.johnsnowlabs.com/recognize_entitie" target="_blank" rel="noopener">Visualization demos for NER in Spark NLP</a></li>

        <li><a class="link" href="https://www.johnsnowlabs.com/named-entity-recognition-ner-with-bert-in-spark-nlp/">Named Entity Recognition (NER) with BERT in Spark NLP</a></li>

    </ul>

</div>

""", unsafe_allow_html=True)

# Community & Support
st.markdown('<div class="sub-title">Community & Support</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <ul>

        <li><a class="link" href="https://sparknlp.org/" target="_blank">Official Website</a>: Documentation and examples</li>

        <li><a class="link" href="https://github.com/JohnSnowLabs/spark-nlp" target="_blank">GitHub Repository</a>: Report issues or contribute</li>

        <li><a class="link" href="https://forum.johnsnowlabs.com/" target="_blank">Community Forum</a>: Ask questions, share ideas, and get support</li>

    </ul>

</div>

""", unsafe_allow_html=True)