Yuanjing Zhu committed: adding badges

README.md CHANGED
@@ -12,7 +12,15 @@ pinned: false
 
 # Reddit Explicit Text Classifier
 
-
+
+
+
+
+
+
+
+[![CI](https://github.com/YZhu0225/reddit_text_classification/actions/workflows/main.yml/badge.svg)](https://github.com/YZhu0225/reddit_text_classification/actions/workflows/main.yml)
+[![Sync to Hugging Face Hub](https://github.com/YZhu0225/reddit_text_classification/actions/workflows/sync_to_hugging_face_hub.yml/badge.svg)](https://github.com/YZhu0225/reddit_text_classification/actions/workflows/sync_to_hugging_face_hub.yml)
 
 In this project, we created a text classifier as a Hugging Face Spaces app with a Gradio interface that classifies not-safe-for-work (NSFW) content, specifically text that is considered inappropriate and unprofessional. We used a pre-trained DistilBERT transformer model for sentiment analysis; the model was fine-tuned on Reddit posts and predicts two classes: NSFW and safe for work (SFW).
 
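For context on the pattern the README paragraph describes, here is a minimal sketch of a Gradio Space wrapping a `transformers` text-classification pipeline. This is not the repo's actual code: the checkpoint name below is a public stand-in (the real app would load its own DistilBERT model fine-tuned on Reddit posts), and the `predict` helper is illustrative.

```python
import gradio as gr
from transformers import pipeline

# Load a DistilBERT-style text-classification pipeline.
# Placeholder checkpoint: the actual Space would load its own
# NSFW/SFW model fine-tuned on Reddit posts.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def predict(text: str) -> dict:
    # top_k=None returns scores for every class; map them to
    # label -> score pairs so gr.Label can render them.
    results = classifier(text, top_k=None)
    return {r["label"]: r["score"] for r in results}

demo = gr.Interface(
    fn=predict,
    inputs=gr.Textbox(lines=4, label="Reddit post"),
    outputs=gr.Label(num_top_classes=2, label="Prediction"),
    title="Reddit Explicit Text Classifier",
)

if __name__ == "__main__":
    demo.launch()
```

On Spaces, an app laid out this way runs automatically once `demo.launch()` is called from `app.py`, which is also what the "Sync to Hugging Face Hub" workflow badge above refers to: pushes to the GitHub repo are mirrored to the Space.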