import streamlit as st

# Custom CSS for better styling
st.markdown("""
<style>
    .main-title {
        font-size: 36px;
        color: #4A90E2;
        font-weight: bold;
        text-align: center;
    }
    .sub-title {
        font-size: 24px;
        color: #4A90E2;
        margin-top: 20px;
    }
    .section {
        background-color: #f9f9f9;
        padding: 15px;
        border-radius: 10px;
        margin-top: 20px;
    }
    .section h2 {
        font-size: 22px;
        color: #4A90E2;
    }
    .section p, .section ul {
        color: #666666;
    }
    .link {
        color: #4A90E2;
        text-decoration: none;
    }
</style>
""", unsafe_allow_html=True)
# Main Title
st.markdown('<div class="main-title">Named Entity Recognition for Chinese (BERT-MSRA Dataset)</div>', unsafe_allow_html=True)

# Description
st.markdown('<div class="sub-title">Description</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
    <p>This model annotates named entities in a text and can be used to find features such as names of people, places, and organizations. The model does not read words directly but instead reads word embeddings, which represent words as points such that more semantically similar words are closer together.</p>
    <p>This model uses the pre-trained <code>bert_base_chinese</code> embeddings model from the BertEmbeddings annotator as input, so be sure to use the same embeddings in the pipeline.</p>
</div>
""", unsafe_allow_html=True)
# Predicted Entities
st.markdown('<div class="sub-title">Predicted Entities</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
    <ul>
        <li>Persons: <code>PER</code></li>
        <li>Locations: <code>LOC</code></li>
        <li>Organizations: <code>ORG</code></li>
    </ul>
</div>
""", unsafe_allow_html=True)
# How to use
st.markdown('<div class="sub-title">How to use</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
    <p>To use this model, follow these steps in Python:</p>
</div>
""", unsafe_allow_html=True)
st.code(""" | |
from sparknlp.base import * | |
from sparknlp.annotator import * | |
from pyspark.ml import Pipeline | |
# Define the components of the pipeline | |
document_assembler = DocumentAssembler() \\ | |
.setInputCol("text") \\ | |
.setOutputCol("document") | |
sentence_detector = SentenceDetector() \\ | |
.setInputCols(["document"]) \\ | |
.setOutputCol("sentence") | |
word_segmenter = WordSegmenterModel.pretrained("wordseg_large", "zh") \\ | |
.setInputCols(["sentence"]) \\ | |
.setOutputCol("token") | |
embeddings = BertEmbeddings.pretrained(name='bert_base_chinese', lang='zh') \\ | |
.setInputCols(["document", "token"]) \\ | |
.setOutputCol("embeddings") | |
ner = NerDLModel.pretrained("ner_msra_bert_768d", "zh") \\ | |
.setInputCols(["document", "token", "embeddings"]) \\ | |
.setOutputCol("ner") | |
ner_converter = NerConverter() \\ | |
.setInputCols(["sentence", "token", "ner"]) \\ | |
.setOutputCol("entities") | |
# Create the pipeline | |
pipeline = Pipeline(stages=[document_assembler, sentence_detector, word_segmenter, embeddings, ner, ner_converter]) | |
# Create sample data | |
example = spark.createDataFrame([['马云在浙江省杭州市出生,是阿里巴巴集团的主要创始人。']], ["text"]) | |
# Fit and transform data with the pipeline | |
result = pipeline.fit(example).transform(example) | |
# Select the result, entity | |
result.select( | |
expr("explode(entities) as entity") | |
).select( | |
col("entity.result").alias("chunk"), | |
col("entity.metadata").getItem("entity").alias("ner_label") | |
).show(truncate=False) | |
""", language="python") | |
# Results
st.markdown('<div class="sub-title">Results</div>', unsafe_allow_html=True)

import pandas as pd

# Create the data for the DataFrame
data = {
    "chunk": ["马云", "浙江省", "杭州市", "出生", "阿里巴巴集团", "创始人"],
    "ner_label": ["PER", "LOC", "LOC", "ORG", "ORG", "PER"]
}

# Creating the DataFrame
df = pd.DataFrame(data)
df.index += 1
st.dataframe(df)
# Model Information
st.markdown('<div class="sub-title">Model Information</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
    <p>The <code>ner_msra_bert_768d</code> model details are as follows:</p>
    <ul>
        <li><strong>Model Name:</strong> ner_msra_bert_768d</li>
        <li><strong>Type:</strong> ner</li>
        <li><strong>Compatibility:</strong> Spark NLP 2.7.0+</li>
        <li><strong>License:</strong> Open Source</li>
        <li><strong>Edition:</strong> Official</li>
        <li><strong>Input Labels:</strong> [sentence, token, embeddings]</li>
        <li><strong>Output Labels:</strong> [ner]</li>
        <li><strong>Language:</strong> zh</li>
    </ul>
</div>
""", unsafe_allow_html=True)
# Data Source
st.markdown('<div class="sub-title">Data Source</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
    <p>The model was trained on the MSRA (Levow, 2006) data set created by Microsoft Research Asia.</p>
</div>
""", unsafe_allow_html=True)
# Benchmarking
st.markdown('<div class="sub-title">Benchmarking</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
    <p>Evaluating the performance of NER models is crucial to understanding their effectiveness in real-world applications. Below are the benchmark results for the <code>ner_msra_bert_768d</code> model, broken down by named entity category. The metrics used are precision, recall, and F1-score, which are standard for evaluating classification models.</p>
</div>
""", unsafe_allow_html=True)
st.markdown(""" | |
--- | |
| ner_tag | precision | recall | f1-score | support | | |
|--------------|-----------|--------|----------|---------| | |
| LOC | 0.97 | 0.97 | 0.97 | 2777 | | |
| O | 1.00 | 1.00 | 1.00 | 146826 | | |
| ORG | 0.88 | 0.99 | 0.93 | 1292 | | |
| PER | 0.97 | 0.97 | 0.97 | 1430 | | |
| accuracy | 1.00 | 152325 | | | | |
| macro avg | 0.95 | 0.98 | 0.97 | 152325 | | |
| weighted avg | 1.00 | 1.00 | 1.00 | 152325 | | |
--- | |
""", unsafe_allow_html=True) | |
st.markdown(""" | |
<div class="section"> | |
<p>These results demonstrate the model's ability to accurately identify and classify named entities in Chinese text. Precision measures the accuracy of the positive predictions, recall measures the model's ability to find all relevant instances, and F1-score provides a balance between precision and recall.</p> | |
</div> | |
""", unsafe_allow_html=True) | |
# Conclusion/Summary
st.markdown('<div class="sub-title">Conclusion</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
    <p>The <code>ner_msra_bert_768d</code> model demonstrates effective named entity recognition in Chinese texts, with high performance metrics across different entity types. It leverages <code>bert_base_chinese</code> embeddings to improve accuracy in identifying entities such as persons, locations, and organizations. Its integration into Spark NLP allows for efficient and scalable processing of Chinese text data, making it a valuable tool for researchers and developers working with Chinese language applications.</p>
</div>
""", unsafe_allow_html=True)
# References
st.markdown('<div class="sub-title">References</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
    <ul>
        <li><a class="link" href="https://sparknlp.org/api/python/reference/autosummary/sparknlp/annotator/ner/ner_dl/index.html" target="_blank" rel="noopener">NerDLModel</a> annotator documentation</li>
        <li>Model used: <a class="link" href="https://sparknlp.org/2021/01/03/ner_msra_bert_768d_zh.html" target="_blank" rel="noopener">ner_msra_bert_768d_zh</a></li>
        <li>Training data: the <a class="link" href="https://www.aclweb.org/anthology/W06-0115/" target="_blank" rel="noopener">MSRA (Levow, 2006)</a> data set created by Microsoft Research Asia</li>
        <li><a class="link" href="https://nlp.johnsnowlabs.com/recognize_entitie" target="_blank" rel="noopener">Visualization demos for NER in Spark NLP</a></li>
        <li><a class="link" href="https://www.johnsnowlabs.com/named-entity-recognition-ner-with-bert-in-spark-nlp/" target="_blank" rel="noopener">Named Entity Recognition (NER) with BERT in Spark NLP</a></li>
    </ul>
</div>
""", unsafe_allow_html=True)
# Community & Support
st.markdown('<div class="sub-title">Community & Support</div>', unsafe_allow_html=True)
st.markdown("""
<div class="section">
    <ul>
        <li><a class="link" href="https://sparknlp.org/" target="_blank">Official Website</a>: Documentation and examples</li>
        <li><a class="link" href="https://github.com/JohnSnowLabs/spark-nlp" target="_blank">GitHub Repository</a>: Report issues or contribute</li>
        <li><a class="link" href="https://forum.johnsnowlabs.com/" target="_blank">Community Forum</a>: Ask questions, share ideas, and get support</li>
    </ul>
</div>
""", unsafe_allow_html=True)