Yuanjing Zhu committed
Commit 69676a1 · unverified · 1 Parent(s): 044ea91

adding badges

Files changed (1): README.md (+9 -1)
README.md CHANGED

@@ -12,7 +12,15 @@ pinned: false
 
 # Reddit Explicit Text Classifier
 
-[![Python application test with Github Actions](https://github.com/YZhu0225/reddit_text_classification/actions/workflows/main.yml/badge.svg)](https://github.com/YZhu0225/reddit_text_classification/actions/workflows/main.yml) [![Sync to Hugging Face hub](https://github.com/YZhu0225/reddit_text_classification/actions/workflows/sync_to_hugging_face_hub.yml/badge.svg)](https://github.com/YZhu0225/reddit_text_classification/actions/workflows/sync_to_hugging_face_hub.yml)
+![maven](http://img.shields.io/badge/Python-3.10.4-green)
+![maven](http://img.shields.io/badge/gradio-3.13.0-orange)
+![maven](http://img.shields.io/badge/praw-7.6.1-blue)
+![maven](http://img.shields.io/badge/huggingface-0.11.1-yellowgreen)
+![maven](http://img.shields.io/badge/torch-1.13.0-yellow)
+![maven](http://img.shields.io/badge/transformers-4.25.1-lightgrey)
+
+[![Python application test with Github Actions](https://github.com/YZhu0225/reddit_text_classification/actions/workflows/main.yml/badge.svg)](https://github.com/YZhu0225/reddit_text_classification/actions/workflows/main.yml)
+[![Sync to Hugging Face hub](https://github.com/YZhu0225/reddit_text_classification/actions/workflows/sync_to_hugging_face_hub.yml/badge.svg)](https://github.com/YZhu0225/reddit_text_classification/actions/workflows/sync_to_hugging_face_hub.yml)
 
 In this project, we created a Hugging Face Spaces app with a Gradio interface that classifies not-safe-for-work (NSFW) content, i.e., text considered inappropriate and unprofessional. We used a pre-trained DistilBERT transformer model for sentiment analysis; the model was fine-tuned on Reddit posts and predicts two classes: NSFW and safe for work (SFW).
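For context, the app this README describes wires a fine-tuned DistilBERT checkpoint into a Gradio interface. Below is a minimal sketch of that setup, assuming the checkpoint loads through the transformers text-classification pipeline; the model path `./reddit-distilbert` and the label handling are illustrative placeholders, not the repo's actual code.

```python
# Minimal sketch of a DistilBERT-based NSFW/SFW classifier behind a Gradio UI.
# Assumptions: a fine-tuned checkpoint exists at "./reddit-distilbert" (a
# placeholder path) and exposes two labels; not the repo's actual code.
import gradio as gr
from transformers import pipeline

# Load the fine-tuned DistilBERT as a text-classification pipeline.
classifier = pipeline("text-classification", model="./reddit-distilbert")

def classify(text: str) -> dict:
    # top_k=None returns scores for every label (both NSFW and SFW),
    # and gr.Label renders a {label: score} dict as class confidences.
    results = classifier(text, top_k=None)
    return {r["label"]: r["score"] for r in results}

demo = gr.Interface(
    fn=classify,
    inputs=gr.Textbox(lines=4, label="Reddit post"),
    outputs=gr.Label(num_top_classes=2),
    title="Reddit Explicit Text Classifier",
)

if __name__ == "__main__":
    demo.launch()
```

On a Space, `demo.launch()` is all that is needed; the Spaces runtime serves the interface, which is what the "Sync to Hugging Face hub" workflow badge above tracks.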