import streamlit as st

# Custom CSS for better styling
st.markdown("""
""", unsafe_allow_html=True)

# Introduction
st.markdown('Detecting Toxic Comments with Spark NLP', unsafe_allow_html=True)

st.markdown("""

Welcome to the Spark NLP Toxic Comment Detection Demo App! Discussing things you care about can be difficult. The threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions. Platforms struggle to effectively facilitate conversations, leading many communities to limit or completely shut down user comments.

This app demonstrates how to use Spark NLP's MulticlassifierDL to automatically detect toxic comments, including categories like identity hate, insult, obscene, severe toxic, and threat.

""", unsafe_allow_html=True)

# st.image('images/Toxic-Comments.jpg', caption="Different types of toxic comments detected using Spark NLP", use_column_width='auto')

# About Toxic Comment Classification
st.markdown('About Toxic Comment Classification', unsafe_allow_html=True)

st.markdown("""

The Conversation AI team, a research initiative founded by Jigsaw and Google (both part of Alphabet), is working on tools to help improve online conversations. One area of focus is the study of negative online behaviors, like toxic comments (comments that are rude, disrespectful, or likely to make someone leave a discussion).

This app utilizes the Spark NLP MulticlassifierDL model to detect various types of toxicity in comments. This model is capable of identifying and categorizing toxic comments into different classes such as toxic, severe toxic, identity hate, insult, obscene, and threat.

""", unsafe_allow_html=True)

# Using MulticlassifierDL in Spark NLP
st.markdown('Using MulticlassifierDL in Spark NLP', unsafe_allow_html=True)

st.markdown("""

The MulticlassifierDL annotator in Spark NLP uses deep learning to classify text into multiple categories. This approach allows for a more nuanced understanding of the toxicity in comments, providing better tools for moderating online discussions.

Spark NLP also offers other annotators and models for different NLP tasks. If you are interested in exploring more, please check the MulticlassifierDL documentation.

""", unsafe_allow_html=True)

# Example Usage in Python
st.markdown('Example Usage in Python', unsafe_allow_html=True)

st.markdown('Here’s how you can implement toxic comment classification using the MulticlassifierDL annotator in Spark NLP:', unsafe_allow_html=True)

# Setup Instructions
st.markdown('Setup', unsafe_allow_html=True)

st.markdown('To install Spark NLP in Python, use your favorite package manager (conda, pip, etc.). For example:', unsafe_allow_html=True)

st.code("""
pip install spark-nlp
pip install pyspark
""", language="bash")

st.markdown("""

Then, import Spark NLP and start a Spark session:

""", unsafe_allow_html=True)

st.code("""
import sparknlp

# Start Spark Session
spark = sparknlp.start()
""", language='python')

# Toxic Comment Classification Example
st.markdown("""
Example Usage: Toxic Comment Classification with MulticlassifierDL
""", unsafe_allow_html=True)

st.code('''
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import UniversalSentenceEncoder, MultiClassifierDLModel
from pyspark.ml import Pipeline

# Step 1: Transform raw text into document annotations
document = DocumentAssembler() \\
    .setInputCol("text") \\
    .setOutputCol("document")

# Step 2: Use the Universal Sentence Encoder for sentence embeddings
use = UniversalSentenceEncoder.pretrained() \\
    .setInputCols(["document"]) \\
    .setOutputCol("use_embeddings")

# Step 3: Pretrained multi-label classification model for toxic comments
docClassifier = MultiClassifierDLModel.pretrained("multiclassifierdl_use_toxic") \\
    .setInputCols(["use_embeddings"]) \\
    .setOutputCol("category") \\
    .setThreshold(0.5)

# Define the pipeline
pipeline = Pipeline(stages=[document, use, docClassifier])

# Create a Spark DataFrame with an example sentence;
# the column name `text` matches the input column defined in the pipeline
data = spark.createDataFrame(
    [["She should stop sticking her tongue out before someone rubs their ass on it. Filthy bitch!!!"]]
).toDF("text")

# Fit-transform to get predictions
result = pipeline.fit(data).transform(data)
result.select("text", "category.result").show(truncate=50)
''', language='python')

st.text("""
+--------------------------------------------------+------------------------+
|                                              text|                  result|
+--------------------------------------------------+------------------------+
|She should stop sticking her tongue out before ...|[toxic, insult, obscene]|
+--------------------------------------------------+------------------------+
""")

st.markdown("""

The code snippet demonstrates how to set up a pipeline in Spark NLP to classify toxic comments using the MulticlassifierDL annotator. The resulting DataFrame contains the predictions for each comment.

""", unsafe_allow_html=True)
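st.markdown('The predicted labels can also be collected into plain Python objects for further processing. Below is a minimal post-processing sketch (not part of the pretrained model itself), assuming the `result` DataFrame produced by the pipeline above:', unsafe_allow_html=True)

st.code('''
# Sketch: collect the predicted labels from the `category` annotations as Python lists
predictions = result.select("text", "category.result").collect()
for row in predictions:
    print(row["text"], "->", row["result"])
''', language='python')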
# One-liner Alternative
st.markdown('One-liner Alternative', unsafe_allow_html=True)

st.markdown("""

In October 2022, John Snow Labs released the open-source johnsnowlabs library, which bundles all of the company's products, open-source and licensed, under one common library. This simplifies the workflow, especially for users working with more than one of the libraries (e.g., Spark NLP + Healthcare NLP). This new library is a wrapper around all of John Snow Labs' libraries and can be installed with pip:

pip install johnsnowlabs

""", unsafe_allow_html=True)

st.markdown('To run toxic comment classification with one line of code, we can simply:', unsafe_allow_html=True)

st.code("""
# Import the NLP module which contains Spark NLP and NLU libraries
from johnsnowlabs import nlp

sample_text = [
    "You are a horrible person!",
    "I love your new profile picture!",
    "Go away, no one likes you."
]

# Returns a pandas DataFrame with the sentence-level predictions
nlp.load('en.classify.toxic').predict(sample_text, output_level='sentence')
""", language='python')

st.image('images/johnsnowlabs-toxic-output.png', use_column_width='auto')

st.markdown("""

This approach demonstrates how to use the johnsnowlabs library to perform toxic comment classification with a single line of code. The resulting DataFrame contains the predictions for each comment.

""", unsafe_allow_html=True)
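st.markdown('Because the one-liner returns a standard pandas DataFrame, the predictions can be filtered or aggregated like any other DataFrame. The exact column names depend on the NLU version, so a minimal sketch is to inspect them first:', unsafe_allow_html=True)

st.code('''
# Sketch: inspect the returned pandas DataFrame before post-processing;
# column names vary across NLU versions, so check them rather than hard-coding
df = nlp.load('en.classify.toxic').predict(sample_text, output_level='sentence')
print(df.columns)
print(df.head())
''', language='python')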
# Benchmarking
st.markdown('Benchmarking', unsafe_allow_html=True)

st.markdown("""

Here are the benchmarking results for the MulticlassifierDL model on the toxic comment classification task:

                  precision    recall  f1-score   support

               0       0.56      0.30      0.39       127
               1       0.71      0.70      0.70       761
               2       0.76      0.72      0.74       824
               3       0.55      0.21      0.31       147
               4       0.79      0.38      0.51        50
               5       0.94      1.00      0.97      1504

       micro avg       0.83      0.80      0.81      3413
       macro avg       0.72      0.55      0.60      3413
    weighted avg       0.81      0.80      0.80      3413
     samples avg       0.84      0.83      0.80      3413

    F1 micro averaging: 0.8113432835820896
""", unsafe_allow_html=True)
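st.markdown('These figures follow the layout of a standard multi-label classification report. As a rough sketch (with hypothetical labels and data, not taken from the benchmark above), a similar report can be produced with scikit-learn:', unsafe_allow_html=True)

st.code('''
from sklearn.metrics import classification_report
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical label set and example data, for illustration only
classes = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
true_labels = [["toxic", "insult"], ["toxic", "obscene"], []]
pred_labels = [["toxic", "insult", "obscene"], ["toxic"], []]

# Binarize the label lists into multi-label indicator arrays
mlb = MultiLabelBinarizer(classes=classes)
y_true = mlb.fit_transform(true_labels)
y_pred = mlb.transform(pred_labels)

print(classification_report(y_true, y_pred, target_names=classes, zero_division=0))
''', language='python')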
# Additional Resources
st.markdown('Additional Resources', unsafe_allow_html=True)

st.markdown("""
""", unsafe_allow_html=True)

# Community & Support
st.markdown('Community & Support', unsafe_allow_html=True)

st.markdown("""
""", unsafe_allow_html=True)