sdk_version: 3.13.0
app_file: app.py
pinned: false
---
|
<img width="719" alt="Screen Shot 2022-12-14 at 3 00 11 PM" src="https://user-images.githubusercontent.com/112578003/207716480-a5ac9596-8095-46d5-9df9-d6973af38e3e.png">

# Reddit Explicit Text Classifier

[](https://github.com/YZhu0225/reddit_text_classification/actions/workflows/main.yml)
[](https://github.com/YZhu0225/reddit_text_classification/actions/workflows/sync_to_hugging_face_hub.yml)
## Demo

[<img width="700" src="https://user-images.githubusercontent.com/112578003/207716480-a5ac9596-8095-46d5-9df9-d6973af38e3e.png">](https://youtu.be/0OY0CCK3lI4 "Reddit")

## Introduction
Reddit is a place where people come together to have a wide variety of conversations on the internet. However, abusive language can seriously harm users in these online communities. As students passionate about data science, we set out to detect inappropriate and unprofessional Reddit posts and warn users based on a post's URL.
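One way a post's text could be retrieved from its URL is Reddit's public JSON view, which is served when `.json` is appended to a post URL. This is a sketch of that lookup, not necessarily how our app fetches posts:

```python
# Sketch: build the public JSON endpoint for a Reddit post URL.
# Reddit serves a JSON rendering of a post when ".json" is appended;
# the app's actual fetching logic may differ.

def to_json_endpoint(post_url: str) -> str:
    """Turn a Reddit post URL into its public JSON endpoint."""
    base = post_url.split("?")[0].rstrip("/")  # drop query string and trailing slash
    return base + ".json"

print(to_json_endpoint("https://www.reddit.com/r/AskReddit/comments/abc123/example_post/"))
# → https://www.reddit.com/r/AskReddit/comments/abc123/example_post.json
```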
In this project, we created a text-classifier Hugging Face Spaces app with a Gradio interface that flags not-safe-for-work (NSFW) content, i.e., text considered inappropriate and unprofessional. We used a pre-trained DistilBERT transformer for the classification; the model was fine-tuned on Reddit posts and predicts two classes, NSFW and safe for work (SFW).
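The final prediction step can be sketched as follows; the label names, score format, and 0.5 threshold are assumptions about the fine-tuned pipeline's output, not the app's exact code:

```python
# Minimal sketch of mapping classifier output to a user-facing verdict.
# Assumes the fine-tuned DistilBERT pipeline returns per-label scores as
# [{"label": "NSFW", "score": ...}, {"label": "SFW", "score": ...}]
# (label names and output shape are assumptions).

def verdict(scores, threshold=0.5):
    """Return a warning string when the NSFW score crosses the threshold."""
    nsfw = next(s["score"] for s in scores if s["label"] == "NSFW")
    if nsfw >= threshold:
        return f"NSFW (score {nsfw:.2f}): content warning"
    return f"SFW (score {1 - nsfw:.2f})"

print(verdict([{"label": "NSFW", "score": 0.91},
               {"label": "SFW", "score": 0.09}]))
# → NSFW (score 0.91): content warning
```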