Update README.md
> **AI-vs-Deepfake-vs-Real-v2.0** is a vision-language encoder model fine-tuned from `google/siglip2-base-patch16-224` for single-label image classification. It is designed to distinguish AI-generated images, deepfake images, and real images using the `SiglipForImageClassification` architecture.
The label-to-id mapping from the model configuration:

```py
"label2id": {
    "Artificial": 0,
    "Deepfake": 1,
    "Real": 2
},
```
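As a sketch of how this mapping is used at inference time, the config's `label2id` can be inverted to decode the model's argmax output. The score values below are made-up placeholders for illustration, not real model outputs:

```python
# Invert the label2id mapping from the model config to decode predictions.
label2id = {"Artificial": 0, "Deepfake": 1, "Real": 2}
id2label = {idx: name for name, idx in label2id.items()}

# Hypothetical per-class scores, e.g. the softmax output of the classifier head.
scores = [0.08, 0.90, 0.02]

# The predicted class is the argmax over the three scores.
pred_id = max(range(len(scores)), key=scores.__getitem__)
print(id2label[pred_id])  # -> Deepfake
```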
Evaluation results after one epoch, from the training log:

```py
"log_history": [
    {
        "epoch": 1.0,
        "eval_accuracy": 0.9915991599159916,
        "eval_loss": 0.0240725576877594,
        "eval_model_preparation_time": 0.0023,
        "eval_runtime": 248.0631,
        "eval_samples_per_second": 40.308,
        "eval_steps_per_second": 5.039,
        "step": 313
    }
]
```
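As a quick sanity check, the logged metrics are mutually consistent: runtime times samples-per-second gives the evaluation set size, and the logged accuracy matches a simple correct/total ratio. The sample counts below are inferred from those numbers, not figures reported in the card:

```python
# Logged evaluation metrics from log_history above.
eval_runtime = 248.0631            # seconds
samples_per_second = 40.308
eval_accuracy = 0.9915991599159916

# Implied evaluation set size (an inference from the logs, not an official figure).
n_samples = round(eval_runtime * samples_per_second)
print(n_samples)  # -> 9999

# The logged accuracy corresponds to 9915 correct predictions out of 9999.
assert abs(9915 / n_samples - eval_accuracy) < 1e-9
```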
The model categorizes images into three classes:

- **Class 0:** "AI" – The image is fully AI-generated, created by machine learning models.
- **Class 1:** "Deepfake" – The image is a manipulated deepfake, where real content has been altered.