duchaba committed on
Commit 06bff1f · verified · 1 Parent(s): f02b846

minor updates on wordings

Files changed (1)
  1. app.py +7 -4
app.py CHANGED
@@ -954,9 +954,10 @@ txt1 = """
 
 ### Identify 14 categories of text toxicity.
 
- >The purpose of this NLP (Natural Language Processing) AI demonstration is to prevent profanity, vulgarity, hate speech, violence, sexism, and any other offensive language.
+ > This NLP (Natural Language Processing) AI demonstration aims to prevent profanity, vulgarity, hate speech, violence, sexism, and other offensive language.
 >It is **not an act of censorship**, as the final UI (User Interface) will give the reader, but not a young reader, the option to click on a label to read the toxic message.
- >The goal is to create a safer and more respectful environment for you, your colleages, and your family.
+ >The goal is to create a safer and more respectful environment for you, your colleagues, and your family.
+ > This NLP app is 1 of 3 hands-on apps, ["AI Solution Architect," from ELVTR and Duc Haba](https://elvtr.com/course/ai-solution-architect?utm_source=instructor&utm_campaign=AISA&utm_content=linkedin).
 ---
 ### 🌴 Helpful Instruction:
 
@@ -965,7 +966,7 @@ txt1 = """
 2. Click the "Measure 14 Toxicity" button.
 3. View the result on the Donut plot.
 4. (**Optional**) Click on the "Fetch Real World Toxic Dataset" below.
- 5. Please find below the explanation of additional options available.
+ 5. There are additional options and notes below.
 """
 txt2 = """
 ## 🌻 Author and Developer Notes:
@@ -999,12 +1000,14 @@ txt2 = """
 - Yellow is an "unsafe" message by your toxicity level
 
 - The real-world dataset is from the Jigsaw Rate Severity of Toxic Comments on Kaggle. It has 30,108 records.
+ - Citation:
+ - Ian Kivlichan, Jeffrey Sorensen, Lucas Dixon, Lucy Vasserman, Meghan Graham, Tin Acosta, Walter Reade. (2021). Jigsaw Rate Severity of Toxic Comments. Kaggle. https://kaggle.com/competitions/jigsaw-toxic-severity-rating
 - The intent is to share with Duc's friends and colleagues, but for those with nefarious intent, this Text Moderation model is governed by the GNU 3.0 License: https://www.gnu.org/licenses/gpl-3.0.en.html
 - Author: **[Duc Haba](https://linkedin.com/in/duchaba), 2024**
 ---
 # 🌟 "AI Solution Architect" Course by ELVTR
 
- >Welcome to the fascinating world of AI and natural language processing (NLP). This NLP model is a part of the course. In our journey together, we will explore the [AI Solution Architect](https://elvtr.com/course/ai-solution-architect?utm_source=instructor&utm_campaign=AISA&utm_content=linkedin) course, meticulously crafted by ELVTR in collaboration with Duc Haba. This course is intended to serve as your gateway into the dynamic and constantly evolving field of AI Solution Architect, providing you with a comprehensive understanding of its complexities and applications.
+ >Welcome to the fascinating world of AI and natural language processing (NLP). This NLP model is part of one of three hands-on applications. In our journey together, we will explore the [AI Solution Architect](https://elvtr.com/course/ai-solution-architect?utm_source=instructor&utm_campaign=AISA&utm_content=linkedin) course, meticulously crafted by ELVTR in collaboration with Duc Haba. This course is intended to serve as your gateway into the dynamic and constantly evolving field of AI Solution Architect, providing you with a comprehensive understanding of its complexities and applications.
 
 >An AI Solution Architect (AISA) is a mastermind who possesses a deep understanding of the complex technicalities of AI and knows how to creatively integrate them into real-world solutions. They bridge the gap between theoretical AI models and practical, effective applications. AISA works as a strategist to design AI systems that align with business objectives and technical requirements. They delve into algorithms, data structures, and computational theories to translate them into tangible, impactful AI solutions that have the potential to revolutionize industries.
 
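The hunks above only touch the markdown strings txt1 and txt2 inside app.py, so the surrounding UI wiring is not visible in this diff. For orientation, here is a minimal, hypothetical sketch of how strings like these are commonly rendered in a Gradio Blocks Space; the component layout, the placeholder label list, and the hard-coded scores are assumptions for illustration, not the actual implementation, which presumably calls a 14-category toxicity classifier.

```python
# Minimal sketch (not the actual app.py) of wiring intro/notes markdown,
# a text box, a "Measure 14 Toxicity" button, and a donut plot in Gradio.
import gradio as gr
import matplotlib.pyplot as plt

txt1 = "### Identify 14 categories of text toxicity."  # intro markdown, as in the diff (shortened here)
txt2 = "## 🌻 Author and Developer Notes:"              # notes markdown, as in the diff (shortened here)

# Hypothetical subset of labels; the real app defines its own 14 toxicity categories.
LABELS = ["toxic", "obscene", "insult", "threat", "identity_hate", "sexual_explicit"]

def measure_toxicity(text: str):
    # Placeholder scores; the real Space would run its NLP toxicity model on `text` here.
    scores = [0.30, 0.20, 0.15, 0.10, 0.15, 0.10]
    fig, ax = plt.subplots()
    # Donut plot: a pie chart with a hollow center (narrow wedge width).
    ax.pie(scores, labels=LABELS, wedgeprops={"width": 0.4}, startangle=90)
    ax.set_title("Toxicity breakdown (placeholder values)")
    return fig

with gr.Blocks() as demo:
    gr.Markdown(txt1)                          # intro text shown above the controls
    inp = gr.Textbox(label="Message to check")
    btn = gr.Button("Measure 14 Toxicity")
    plot = gr.Plot(label="Donut plot")
    btn.click(fn=measure_toxicity, inputs=inp, outputs=plot)
    gr.Markdown(txt2)                          # author and developer notes shown below

if __name__ == "__main__":
    demo.launch()
```

In the same way, the "Fetch Real World Toxic Dataset" option and the green/yellow safe-or-unsafe coloring described in the notes would presumably each be a button or slider whose handler returns updated outputs; those details are not part of this diff.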