Michelle Li committed: update readme
README.md CHANGED
@@ -23,15 +23,15 @@ pinned: false
 
 ## Demo
 
-Link to
+Link to YouTube demo:
 
 [<img width="700" src="https://user-images.githubusercontent.com/112578003/207716480-a5ac9596-8095-46d5-9df9-d6973af38e3e.png">](https://youtu.be/0OY0CCK3lI4 "Reddit")
 
 ## Introduction
 
-Reddit is a place where people come together to have a variety of conversations on the internet. However, the negative impacts of abusive language on users in online communities are severe. As students passionate about data science, we are interested in detecting inappropriate and unprofessional Reddit posts and
+Reddit is a place where people come together to have a variety of conversations on the internet. However, the negative impacts of abusive language on users in online communities are severe. As students passionate about data science, we are interested in detecting inappropriate and unprofessional Reddit posts and warning users about explicit content in these posts.
 
-In this project, we created a text classifier Hugging Face Spaces app and Gradio interface that classifies not safe for work (NSFW) content, specifically text that is considered inappropriate and unprofessional. We used a pre-trained DistilBERT transformer model for the sentiment analysis. The model was fine-tuned on Reddit posts and predicts 2 classes -
+In this project, we created a text classifier Hugging Face Spaces app and a Gradio interface that classifies not safe for work (NSFW) content, specifically text that is considered inappropriate and unprofessional. We used a pre-trained DistilBERT transformer model for the sentiment analysis. The model was fine-tuned on Reddit posts and predicts two classes: NSFW and safe for work (SFW).
 
 ## Workflow
 <p align="center">
@@ -40,7 +40,7 @@ In this project, we created a text classifier Hugging Face Spaces app and Gradio
 
 ### Get Reddit data
 
-* Data pulled in notebook `reddit_data/reddit_new.ipynb`
+* Data pulled in notebook `reddit_data/reddit_new.ipynb` to fine-tune the Hugging Face model.
 
 ### Verify GPU works in this [repo](https://github.com/nogibjj/Reddit_Classifier_Final_Project)
 * Run pytorch training test: `python utils/quickstart_pytorch.py`
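For context on the Introduction above: the app pairs a fine-tuned DistilBERT classifier with a Gradio interface hosted on Hugging Face Spaces. The sketch below shows one minimal way such a Space could be wired together with the `transformers` pipeline and Gradio; the checkpoint name, labels, and layout are placeholders, not the project's actual code.

```python
# Minimal sketch of a Gradio text-classification Space.
# Assumption: the checkpoint below is a placeholder; the project's
# fine-tuned NSFW/SFW DistilBERT model would be loaded instead.
import gradio as gr
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased",  # placeholder checkpoint
)

def predict(text: str) -> dict:
    # The pipeline returns [{"label": ..., "score": ...}];
    # Gradio's Label component expects a {label: confidence} dict.
    result = classifier(text)[0]
    return {result["label"]: float(result["score"])}

demo = gr.Interface(
    fn=predict,
    inputs=gr.Textbox(lines=4, label="Reddit post text"),
    outputs=gr.Label(label="Prediction"),
    title="NSFW text classifier (sketch)",
)

if __name__ == "__main__":
    demo.launch()
```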
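The "Get Reddit data" step references `reddit_data/reddit_new.ipynb`, but the pull itself is not shown in this diff. One common approach, used here purely as an illustration, is to collect posts with PRAW and use Reddit's `over_18` flag as a rough NSFW/SFW label; the credentials, subreddit names, and output path below are placeholders.

```python
# Sketch of pulling Reddit posts with PRAW (not the project's notebook code).
# Assumptions: placeholder API credentials and subreddits; Reddit's over_18
# flag is used as a rough NSFW/SFW label for each post.
import praw
import pandas as pd

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="nsfw-classifier-data-pull",
)

rows = []
for name in ["AskReddit", "news"]:  # placeholder subreddits
    for post in reddit.subreddit(name).hot(limit=200):
        rows.append(
            {
                "text": f"{post.title} {post.selftext}".strip(),
                "label": "NSFW" if post.over_18 else "SFW",
            }
        )

pd.DataFrame(rows).to_csv("reddit_posts.csv", index=False)
```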
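The README says the model was fine-tuned on Reddit posts, but the training code is not part of this change. Below is a rough sketch of how DistilBERT could be fine-tuned on the pulled data with the Hugging Face `Trainer` API, assuming a CSV with `text` and `label` columns like the one produced above; it is not the repository's actual training script.

```python
# Illustrative fine-tuning sketch, not the repo's training code.
# Assumption: reddit_posts.csv has "text" and "label" (NSFW/SFW) columns.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("csv", data_files="reddit_posts.csv")["train"]
dataset = dataset.map(lambda ex: {"label": 1 if ex["label"] == "NSFW" else 0})
dataset = dataset.train_test_split(test_size=0.2)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=128
    )

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

args = TrainingArguments(
    output_dir="nsfw-distilbert",
    num_train_epochs=1,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```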
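The GPU verification step runs `python utils/quickstart_pytorch.py` from the linked repo; that script is not reproduced in this diff. A minimal check of the kind such a script typically performs, confirming that PyTorch can see and exercise a CUDA device, might look like this:

```python
# Quick CUDA sanity check before fine-tuning (illustrative only;
# not the contents of utils/quickstart_pytorch.py).
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print("Using GPU:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("No CUDA device found; falling back to CPU.")

# Run a small matrix multiply on the selected device to confirm it executes.
x = torch.randn(1024, 1024, device=device)
print("Result checksum:", (x @ x).sum().item())
```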