prithivMLmods committed
Commit 60e5031 · verified · 1 Parent(s): 53d7e20

Update README.md

Files changed (1):
  1. README.md +67 -3
README.md CHANGED

---
license: apache-2.0
---

# **AI-vs-Deepfake-vs-Real-v2.0**

> **AI-vs-Deepfake-vs-Real-v2.0** is a vision-language encoder model fine-tuned from `google/siglip2-base-patch16-224` for single-label image classification. It is designed to distinguish AI-generated images, deepfake images, and real images using the `SiglipForImageClassification` architecture.

The model categorizes images into three classes:

- **Class 0:** "AI" – The image is fully AI-generated, created by machine learning models.
- **Class 1:** "Deepfake" – The image is a manipulated deepfake, where real content has been altered.
- **Class 2:** "Real" – The image is an authentic, unaltered photograph.

# **Run with Transformers🤗**

```python
!pip install -q transformers torch pillow gradio
```

```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/AI-vs-Deepfake-vs-Real-v2.0"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def image_classification(image):
    """Classifies an image as AI-generated, deepfake, or real."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    labels = model.config.id2label
    predictions = {labels[i]: round(probs[i], 3) for i in range(len(probs))}

    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=image_classification,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Classification Result"),
    title="AI vs Deepfake vs Real Image Classification",
    description="Upload an image to determine whether it is AI-generated, a deepfake, or a real image."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
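
For quick scripted checks without the Gradio UI, the snippet below is a minimal sketch of single-image inference; the path `test.jpg` is only a placeholder, not a file shipped with the model.

```python
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

# Load the same checkpoint as above
model_name = "prithivMLmods/AI-vs-Deepfake-vs-Real-v2.0"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Placeholder path: replace with your own image file
image = Image.open("test.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its label (AI / Deepfake / Real)
predicted_id = logits.argmax(dim=-1).item()
print("Predicted label:", model.config.id2label[predicted_id])
```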

# **Intended Use**

The **AI-vs-Deepfake-vs-Real-v2.0** model is designed to classify images into three categories: **AI-generated, deepfake, or real**. It helps identify whether an image is fully synthetic, altered through deepfake techniques, or an unaltered real image.

### Potential Use Cases:
- **Deepfake Detection:** Identifying manipulated deepfake content in media.
- **AI-Generated Image Identification:** Distinguishing AI-generated images from real or deepfake images.
- **Content Verification:** Supporting fact-checking and digital forensics in assessing image authenticity (see the sketch after this list).
- **Social Media and News Filtering:** Helping platforms flag AI-generated or deepfake content.
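
For the content-verification and filtering use cases above, a batch screening helper can be built on top of the same model. The sketch below flags images whose predicted "Real" probability falls below a threshold; the helper name `flag_suspect_images`, the example file names, and the 0.5 threshold are illustrative assumptions, not part of the released model.

```python
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

model_name = "prithivMLmods/AI-vs-Deepfake-vs-Real-v2.0"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def flag_suspect_images(paths, real_threshold=0.5):
    """Return (path, predicted_label, real_probability) for images whose
    'Real' score falls below the threshold (0.5 is an illustrative default)."""
    flagged = []
    for path in paths:
        image = Image.open(path).convert("RGB")
        inputs = processor(images=image, return_tensors="pt")
        with torch.no_grad():
            probs = model(**inputs).logits.softmax(dim=-1).squeeze()
        predicted_label = model.config.id2label[int(probs.argmax())]
        # Class 2 is "Real" per the label mapping above; fall back to index 2 if label2id is absent
        real_prob = float(probs[int(model.config.label2id.get("Real", 2))])
        if real_prob < real_threshold:
            flagged.append((path, predicted_label, round(real_prob, 3)))
    return flagged

# Example with placeholder file names
# print(flag_suspect_images(["image1.jpg", "image2.jpg"], real_threshold=0.5))
```

How the threshold is set depends on the deployment: a higher value flags more images for human review, while a lower one reduces reviewer load.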