File size: 11,743 Bytes
12f20ca
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
import streamlit as st

# Custom CSS for better styling
st.markdown("""

    <style>

        .main-title {

            font-size: 36px;

            color: #4A90E2;

            font-weight: bold;

            text-align: center;

        }

        .sub-title {

            font-size: 24px;

            color: #4A90E2;

            margin-top: 20px;

        }

        .section {

            background-color: #f9f9f9;

            padding: 15px;

            border-radius: 10px;

            margin-top: 20px;

        }

        .section h2 {

            font-size: 22px;

            color: #4A90E2;

        }

        .section p, .section ul {

            color: #666666;

        }

        .link {

            color: #4A90E2;

            text-decoration: none;

        }

        .benchmark-table {

            width: 100%;

            border-collapse: collapse;

            margin-top: 20px;

        }

        .benchmark-table th, .benchmark-table td {

            border: 1px solid #ddd;

            padding: 8px;

            text-align: left;

        }

        .benchmark-table th {

            background-color: #4A90E2;

            color: white;

        }

    </style>

""", unsafe_allow_html=True)

# Main Title
st.markdown('<div class="main-title">Detect Time-related Terminology</div>', unsafe_allow_html=True)

# Description
st.markdown("""

<div class="section">

    <p><strong>Detect Time-related Terminology</strong> is a crucial NLP task that involves identifying and classifying key temporal entities in text. This app leverages the <strong>roberta_token_classifier_timex_semeval</strong> model, which has been imported from Hugging Face and trained to detect time-related terminology using RoBERTa embeddings and RobertaForTokenClassification for NER purposes.</p>

</div>

""", unsafe_allow_html=True)

# What is NER
st.markdown('<div class="sub-title">What is Named Entity Recognition (NER)?</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p><strong>Named Entity Recognition (NER)</strong> is a process in Natural Language Processing (NLP) that locates and classifies named entities into predefined categories such as dates, times, periods, and other temporal expressions. For example, in the sentence "Model training was started at 22:12C and it took 3 days from Tuesday to Friday," NER helps identify '22:12C' as a time period, '3 days' as a calendar interval, and 'Tuesday' and 'Friday' as days of the week.</p>

    <p>NER models are trained to understand the context and semantics of entities within text, enabling automated systems to recognize and categorize these entities accurately. This capability is essential for developing intelligent systems capable of processing and responding to user queries efficiently.</p>

</div>

""", unsafe_allow_html=True)

# Predicted Entities
st.markdown('<div class="sub-title">Predicted Entities</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <ul>

        <li><strong>Period:</strong> Specific times such as "22:12C".</li>

        <li><strong>Year:</strong> Years like "2023".</li>

        <li><strong>Calendar-Interval:</strong> Intervals such as "3 days".</li>

        <li><strong>Month-Of-Year:</strong> Months like "January".</li>

        <li><strong>Day-Of-Month:</strong> Specific days like "15th".</li>

        <li><strong>Day-Of-Week:</strong> Days like "Tuesday".</li>

        <li><strong>Hour-Of-Day:</strong> Hours such as "10 AM".</li>

        <li><strong>Minute-Of-Hour:</strong> Minutes like "45".</li>

        <li><strong>Number:</strong> Numerical values like "3".</li>

        <li><strong>Second-Of-Minute:</strong> Seconds like "30".</li>

        <li><strong>Time-Zone:</strong> Time zones such as "PST".</li>

        <li><strong>Part-Of-Day:</strong> Parts of the day like "morning".</li>

        <li><strong>Season-Of-Year:</strong> Seasons like "summer".</li>

        <li><strong>AMPM-Of-Day:</strong> "AM" or "PM".</li>

        <li><strong>Part-Of-Week:</strong> Parts of the week like "weekend".</li>

        <li><strong>Week-Of-Year:</strong> Weeks like "week 42".</li>

        <li><strong>Two-Digit-Year:</strong> Years represented in two digits like "'99".</li>

        <li><strong>Sum:</strong> Total values of time periods, e.g., "3 days and 2 hours".</li>

        <li><strong>Difference:</strong> Subtracted time periods, e.g., "5 days ago".</li>

        <li><strong>Union:</strong> Combination of multiple time-related entities.</li>

        <li><strong>Intersection:</strong> Overlapping time periods.</li>

        <li><strong>Every-Nth:</strong> Repeated intervals, e.g., "every 3rd day".</li>

        <li><strong>This:</strong> Referring to the current period, e.g., "this week".</li>

        <li><strong>Last:</strong> Referring to the previous period, e.g., "last year".</li>

        <li><strong>Next:</strong> Referring to the following period, e.g., "next month".</li>

        <li><strong>Before:</strong> Time before a specific point, e.g., "before noon".</li>

        <li><strong>After:</strong> Time after a specific point, e.g., "after 5 PM".</li>

        <li><strong>Between:</strong> Time between two points, e.g., "between Monday and Friday".</li>

        <li><strong>NthFromStart:</strong> Nth position from the start.</li>

        <li><strong>NthFromEnd:</strong> Nth position from the end.</li>

        <li><strong>Frequency:</strong> How often something occurs, e.g., "weekly".</li>

        <li><strong>Modifier:</strong> Modifiers for time-related entities.</li>

    </ul>

</div>

""", unsafe_allow_html=True)

# How to Use the Model
st.markdown('<div class="sub-title">How to Use the Model</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>To use this model, follow these steps in Python:</p>

</div>

""", unsafe_allow_html=True)
st.code('''

from sparknlp.base import *

from sparknlp.annotator import *

from pyspark.ml import Pipeline

from pyspark.sql.functions import col, expr, round, concat, lit



# Define the components of the pipeline

document_assembler = DocumentAssembler() \\

    .setInputCol("text") \\

    .setOutputCol("document")



sentence_detector = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "en") \\

    .setInputCols(["document"]) \\

    .setOutputCol("sentence")



tokenizer = Tokenizer() \\

    .setInputCols(["sentence"]) \\

    .setOutputCol("token")



token_classifier = RoBertaForTokenClassification.pretrained("roberta_token_classifier_timex_semeval", "en") \\

    .setInputCols(["sentence", "token"]) \\

    .setOutputCol("ner")



ner_converter = NerConverter() \\

    .setInputCols(["sentence", "token", "ner"]) \\

    .setOutputCol("ner_chunk")



# Create the pipeline

pipeline = Pipeline(stages=[

    document_assembler,

    sentence_detector,

    tokenizer,

    token_classifier,

    ner_converter

])



# Create some example data

text = "Model training was started at 22:12C and it took 3 days from Tuesday to Friday."

data = spark.createDataFrame([[text]]).toDF("text")



# Apply the pipeline to the data

model = pipeline.fit(data)

result = model.transform(data)



# Select the result, entity

result.select(

    expr("explode(ner_chunk) as ner_chunk")

).select(

    col("ner_chunk.result").alias("chunk"),

    col("ner_chunk.metadata.entity").alias("entity")

).show(truncate=False)

''', language='python')

# Results

st.text("""

+-------+-----------------+

|chunk  |entity           |

+-------+-----------------+

|took   |Frequency        |

|3      |Number           |

|days   |Calendar-Interval|

|Tuesday|Day-Of-Week      |

|to     |Between          |

|Friday |Day-Of-Week      |

+-------+-----------------+

""")

# Model Information
st.markdown('<div class="sub-title">Model Information</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <table class="benchmark-table">

        <tr>

            <th>Model Name</th>

            <td>roberta_token_classifier_timex_semeval</td>

        </tr>

        <tr>

            <th>Compatibility</th>

            <td>Spark NLP 3.3.4+</td>

        </tr>

        <tr>

            <th>License</th>

            <td>Open Source</td>

        </tr>

        <tr>

            <th>Edition</th>

            <td>Official</td>

        </tr>

        <tr>

            <th>Input Labels</th>

            <td>[sentence, token]</td>

        </tr>

        <tr>

            <th>Output Labels</th>

            <td>[ner]</td>

        </tr>

        <tr>

            <th>Language</th>

            <td>en</td>

        </tr>

        <tr>

            <th>Size</th>

            <td>439.5 MB</td>

        </tr>

        <tr>

            <th>Case sensitive</th>

            <td>true</td>

        </tr>

        <tr>

            <th>Max sentence length</th>

            <td>256</td>

        </tr>

    </table>

</div>

""", unsafe_allow_html=True)

# Data Source
st.markdown('<div class="sub-title">Data Source</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>For more information about the dataset used to train this model, visit the <a class="link" href="https://huggingface.co/clulab/roberta-timex-semeval" target="_blank">Hugging Face page</a>.</p>

</div>

""", unsafe_allow_html=True)

# Conclusion
st.markdown('<div class="sub-title">Conclusion</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <p>Detecting time-related terminology is essential for a wide range of applications. This model, leveraging RoBERTa embeddings and RobertaForTokenClassification, provides robust capabilities for identifying and classifying temporal entities within text.</p>

    <p>By integrating this model into your systems, you can enhance scheduling, event tracking, historical data analysis, and more. The high accuracy and comprehensive coverage of time-related entities make this model a valuable tool for many applications.</p>

</div>

""", unsafe_allow_html=True)

# References
st.markdown('<div class="sub-title">References</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <ul>

        <li><a class="link" href="https://sparknlp.org/api/com/johnsnowlabs/nlp/annotators/classifier/dl/RoBertaForTokenClassification.html#!=(x$1:Any):Boolean" target="_blank" rel="noopener">RoBertaForTokenClassification</a> annotator documentation</li>

        <li>Model Used: <a class="link" href="https://sparknlp.org/2021/12/28/roberta_token_classifier_timex_semeval_en.html" rel="noopener">roberta_token_classifier_timex_semeval_en</a></li>

        <li><a class="link" href="https://nlp.johnsnowlabs.com/recognize_entitie" target="_blank" rel="noopener">Visualization demos for NER in Spark NLP</a></li>

        <li><a class="link" href="https://www.johnsnowlabs.com/named-entity-recognition-ner-with-bert-in-spark-nlp/">Named Entity Recognition (NER) with BERT in Spark NLP</a></li>

    </ul>

</div>

""", unsafe_allow_html=True)

# Community & Support
st.markdown('<div class="sub-title">Community & Support</div>', unsafe_allow_html=True)
st.markdown("""

<div class="section">

    <ul>

        <li><a class="link" href="https://sparknlp.org/" target="_blank">Official Website</a>: Documentation and examples</li>

        <li><a class="link" href="https://github.com/JohnSnowLabs/spark-nlp" target="_blank">GitHub Repository</a>: Report issues or contribute</li>

        <li><a class="link" href="https://forum.johnsnowlabs.com/" target="_blank">Community Forum</a>: Ask questions, share ideas, and connect with other users</li>

    </ul>

</div>

""", unsafe_allow_html=True)