---
license: cc-by-nc-sa-4.0
language:
- en
---

# πŸš€ FocalNet NSFW Image Classifier: Your Content Moderation Superhero! πŸ¦Έβ€β™‚οΈ

## 🌟 Discover the Power of Intelligent Moderation!

πŸ‘‹ Are you ready for a revolution in content moderation? Meet the FocalNet NSFW Image Classifier - your new, lightning-fast, and super-smart assistant in the battle against inappropriate content!

## 🎭 Who Am I?

I'm an advanced AI model, built on the powerful `microsoft/focalnet-base`. My superpower is the lightning-fast classification of images into three categories:

- 🟒 SAFE: "Green light! Let's roll with this content!"
- 🟑 QUESTIONABLE: "Hmm... Maybe we should take a second look?"
- πŸ”΄ UNSAFE: "Whoa! Let's stop this before anyone sees it!"

## 🦾 What Can I Do?

Imagine you're the guardian of the internet galaxy. Your mission? Protect users from shocking, inappropriate content. But how do you review millions of images daily? That's where I come in!

- πŸ•΅οΈβ€β™‚οΈ **Lightning-Fast Detection:** I'll analyze every pixel faster than you can say "safe content"!
- πŸ›‘οΈ **Protective Shield:** I'll stand guard over your platforms, shielding users from unwanted content.
- 🎯 **Sniper's Precision:** My eye is so sharp that I can spot potential threats with surgical accuracy.

## πŸš€ How to Use Me?

Ready for an adventure? Here's how you can harness my power:

1. **Install my powers:**
   ```bash
   pip install transformers==4.37.2 torch==2.3.1 Pillow
   ```

2. **Summon me in your code:**
   ```python
   import os
   from PIL import Image
   import torch
   from transformers import AutoProcessor, FocalNetForImageClassification
   
   # Path to the folder with images
   image_folder = ""
   # Path to the model
   model_path = "TostAI/nsfw-image-detection-large" 
   
   # List of jpg files in the folder
   jpg_files = [file for file in os.listdir(image_folder) if file.lower().endswith(".jpg")]
   
   # Check if there are jpg files in the folder
   if not jpg_files:
       print("🚫 No jpg files found in folder:", image_folder)
       exit()
   
   # Load the model and feature extractor
   feature_extractor = AutoProcessor.from_pretrained(model_path)
   model = FocalNetForImageClassification.from_pretrained(model_path)
   model.eval()
   
   # Mapping from model labels to NSFW categories
   label_to_category = {
       "LABEL_0": "Safe",
       "LABEL_1": "Questionable",
       "LABEL_2": "Unsafe"
   }
   
   # Processing and prediction for each image
   results = []
   for jpg_file in jpg_files:
       selected_image = os.path.join(image_folder, jpg_file)
       image = Image.open(selected_image).convert("RGB")
       
       # Process image using feature_extractor
       inputs = feature_extractor(images=image, return_tensors="pt")
       
       # Prediction using the model
       with torch.no_grad():
           outputs = model(**inputs)
           probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)
           confidence, predicted = torch.max(probabilities, 1)
       
       # Get the label from the model's configuration
       label = model.config.id2label[predicted.item()]
       
       results.append((jpg_file, label, confidence.item() * 100))
   
   # Display results
   print("πŸ–ΌοΈ  NSFW Classification Results πŸ–ΌοΈ")
   print("=" * 40)
   for jpg_file, label, confidence in results:
       category = label_to_category.get(label, "Unknown")
       emoji = {"Safe": "βœ…", "Questionable": "⚠️", "Unsafe": "πŸ”ž"}.get(category, "❓")
       confidence_bar = "🟩" * int(confidence // 10) + "⬜" * (10 - int(confidence // 10))
       
       print(f"πŸ“„ File name: {jpg_file}")
       print(f"🏷️ Model Label: {label}")
       print(f"{emoji} NSFW Category: {category}")
       print(f"🎯 Confidence: {confidence:.2f}% {confidence_bar}")
       print(f"{'=' * 40}")
   
   print("🏁 Classification completed! πŸŽ‰")
   ```
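
Before you run the script, set `image_folder` to a directory containing `.jpg` files; the script then prints one block per image with the raw model label, the mapped NSFW category, and a confidence bar.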

## πŸŽ‰ What Sets Me Apart?

- πŸš„ **Speed of Light:** I'll analyze thousands of images before you finish your morning coffee (batch your inputs for the best throughput; see the sketch after this list)!
- 🧠 **Intelligence Level 100:** I've learned from millions of examples, so I know all the tricks!
- πŸ› οΈ **Easy Integration:** I'll hop into your code faster than a cat on a keyboard!
- 🌐 **Language-Independent:** I judge pixels, not text, so images from all cultures and contexts are fair game!
- πŸ”„ **Continuous Learning:** I'm always improving, adapting to new trends and challenges!
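
To back up the speed claim, here is a minimal batched-inference sketch. It assumes only what the usage example above already established (the model path and `.jpg` inputs); the `images` folder name and the batch size of 16 are illustrative placeholders, so adjust them to your setup.

```python
import os
import torch
from PIL import Image
from transformers import AutoProcessor, FocalNetForImageClassification

model_path = "TostAI/nsfw-image-detection-large"
processor = AutoProcessor.from_pretrained(model_path)
model = FocalNetForImageClassification.from_pretrained(model_path)
model.eval()

image_folder = "images"  # illustrative; point this at your own folder
files = [f for f in os.listdir(image_folder) if f.lower().endswith(".jpg")]
batch_size = 16  # tune to your GPU memory

for i in range(0, len(files), batch_size):
    batch_files = files[i:i + batch_size]
    batch = [Image.open(os.path.join(image_folder, f)).convert("RGB")
             for f in batch_files]
    # The processor accepts a list of images and stacks them into one tensor
    inputs = processor(images=batch, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    for name, pred in zip(batch_files, logits.argmax(dim=-1)):
        print(name, model.config.id2label[pred.item()])
```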

## πŸ”¬ Technical Specifications

- **Base Model:** microsoft/focalnet-base
- **Model Type:** FocalNetForImageClassification
- **Input Size:** 512x512 pixels
- **Output:** 3 classes (Safe, Questionable, Unsafe)
- **Framework:** PyTorch
- **Language:** Python 3.8+ (required by the pinned transformers and torch versions)
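
If you want to verify the class count and label names yourself, they are stored in the model configuration. A quick check (assuming nothing beyond the model path used above) looks like this:

```python
from transformers import FocalNetForImageClassification

model = FocalNetForImageClassification.from_pretrained(
    "TostAI/nsfw-image-detection-large"
)
print(model.config.num_labels)  # expected: 3
print(model.config.id2label)    # index-to-label mapping used in the script above
```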

## πŸš€ Use Cases

1. **Social Media Platforms:** Keep user-generated content clean and safe.
2. **E-commerce Sites:** Ensure product images meet community standards.
3. **Dating Apps:** Maintain a respectful environment for all users.
4. **Content Sharing Platforms:** Automatically filter potentially inappropriate uploads.
5. **Educational Platforms:** Ensure learning materials are age-appropriate.

## πŸ‹οΈ Training and Performance

- **Training Data:** Millions of diverse images across various categories
- **Fine-tuning:** Specialized NSFW dataset for precise categorization
- **Accuracy:** 95%+ on benchmark NSFW detection tasks
- **Latency:** <100ms per image on standard GPU hardware
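
Latency depends heavily on your hardware, so treat the figure above as indicative and measure on your own machine. The sketch below is one rough way to do that; the dummy image, 20-iteration loop, and warm-up pass are illustrative choices, and the CUDA synchronization only matters when a GPU is present.

```python
import time
import torch
from PIL import Image
from transformers import AutoProcessor, FocalNetForImageClassification

model_path = "TostAI/nsfw-image-detection-large"
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained(model_path)
model = FocalNetForImageClassification.from_pretrained(model_path).to(device).eval()

image = Image.new("RGB", (512, 512))  # dummy image; use a real file in practice
inputs = processor(images=image, return_tensors="pt").to(device)

with torch.no_grad():
    model(**inputs)  # warm-up pass so one-time setup costs are excluded
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(20):
        model(**inputs)
    if device == "cuda":
        torch.cuda.synchronize()

print(f"{(time.perf_counter() - start) / 20 * 1000:.1f} ms per image on {device}")
```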

## ⚠️ Important Warnings (Because Every Superhero Has Their Weaknesses)

1. 🎒 **Not for Extreme Challenges:** I'm great, but don't use me where an error could cost more than burnt toast!
2. πŸ€– **I'm Not Skynet:** I can make mistakes sometimes, so don't leave me alone with the red button!
3. πŸ•΅οΈβ€β™‚οΈ **Respect Privacy:** Make sure you have the right to process the images you show me. I don't like prying eyes!
4. πŸ”„ **I Need Updates:** The world changes, and so must I! Regularly check if I need a refresh.
5. 🀝 **Collaboration is Key:** I'm a great assistant, but let's leave final decisions to humans (see the review-routing sketch after this list). Together, we're unbeatable!
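
One simple way to keep a human in the loop is to act automatically only on confident Safe/Unsafe predictions and queue everything else for review. The sketch below is an illustrative pattern, not part of the model: the `route` helper and the 0.85 threshold are hypothetical choices to tune to your own risk tolerance (confidence here is the softmax probability on a 0-1 scale).

```python
def route(category: str, confidence: float, threshold: float = 0.85) -> str:
    """Hypothetical triage helper: auto-act only on confident calls."""
    if category == "Safe" and confidence >= threshold:
        return "publish"
    if category == "Unsafe" and confidence >= threshold:
        return "block"
    return "human_review"  # Questionable or low-confidence results

print(route("Safe", 0.97))          # publish
print(route("Unsafe", 0.91))        # block
print(route("Questionable", 0.99))  # human_review
print(route("Safe", 0.60))          # human_review
```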

## 🌈 The Future is Bright!

Remember, I'm part of an ongoing research process. With each update, I become smarter, faster, and even more incredible! 

Ready to revolutionize content moderation together? Bring me on board your project and watch the magic happen! 🎩✨

**Join the AI revolution today and make the internet a safer place! 🌍πŸ’ͺ**

## πŸ“š References and Resources

- [FocalNet: Official Repository](https://github.com/microsoft/FocalNet)
- [Transformers Library Documentation](https://huggingface.co/transformers/)
- [PyTorch Documentation](https://pytorch.org/docs/stable/index.html)


Let's make the digital world safer, one image at a time! 🌟