# ResNet-18 Clean vs. Noisy Image Classification Model

This repository hosts a fine-tuned ResNet-18 model designed to classify images as either Clean (high-quality) or Noisy (distorted). The model was trained on a custom dataset containing two classes: Clean and Noisy images.

## 📌 Model Details

- **Model Architecture:** ResNet-18
- **Task:** Binary Image Classification (Clean vs. Noisy)
- **Dataset:** Custom dataset of Clean and Noisy images
- **Framework:** PyTorch
- **Input Image Size:** 224×224
- **Number of Classes:** 2 (Clean, Noisy)
- **Quantization:** Dynamic quantization applied for efficiency

## 🚀 Usage

### Installation

```bash
pip install torch torchvision pillow
```

### Loading the Model

```python
import torch
import torch.nn as nn
from torchvision import models

# Step 1: Define the model architecture (must match the trained model)
model = models.resnet18(weights=None)  # no pretrained ImageNet weights; we load our own checkpoint below
num_features = model.fc.in_features
model.fc = nn.Linear(num_features, 2)  # 2 classes: Clean vs. Noisy

# Step 2: Load the fine-tuned and quantized model weights
model_path = "/path/to/resnet18_quantized.pth"  # Update with your path
model.load_state_dict(torch.load(model_path, map_location=torch.device("cpu")))

# Step 3: Set model to evaluation mode
model.eval()

print("✅ Model loaded successfully and ready for inference!")
```

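If the saved checkpoint was exported from the dynamically quantized model (see the quantization details below), its state dict will not match the plain float model above. A minimal sketch for that case, assuming the checkpoint was produced with `torch.quantization.quantize_dynamic` on the `nn.Linear` layers:

```python
import torch
import torch.nn as nn
from torchvision import models

# Rebuild the float architecture first
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

# Re-apply the same dynamic quantization used before saving
# (assumption: only nn.Linear layers were quantized, as described below)
model = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Now the state dict keys match the quantized module layout
model.load_state_dict(torch.load("/path/to/resnet18_quantized.pth", map_location="cpu"))
model.eval()
```
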
## 🖼️ Performing Inference

```python
from PIL import Image
import torchvision.transforms as transforms

# Define preprocessing (same as used during training)
transform = transforms.Compose([
    transforms.Resize((224, 224)),  # Resize to match model input
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
])

# Load an image (external or new)
image_path = "/path/to/your/image.jpg"  # Replace with your test image path
image = Image.open(image_path).convert("RGB")
image = transform(image).unsqueeze(0)  # Add batch dimension

# Perform inference
with torch.no_grad():
    output = model(image)

# Convert output to class prediction
predicted_class = torch.argmax(output, dim=1).item()

# Mapping: 0 => Clean, 1 => Noisy
label_mapping = {0: "Clean", 1: "Noisy"}
print(f"✅ Predicted Image Label: {label_mapping[predicted_class]}")
```

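If a confidence score is useful alongside the label, the raw logits can be converted to probabilities with a softmax (a small, optional addition to the snippet above):

```python
import torch.nn.functional as F

# Convert logits to class probabilities and report the confidence of the predicted class
probabilities = F.softmax(output, dim=1)
confidence = probabilities[0, predicted_class].item()
print(f"Confidence: {confidence:.2%}")
```
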
## 📊 Evaluation Results

After fine-tuning on the custom dataset, the model achieved the following performance on a held-out validation set:

| **Metric**          | **Score**                         |
|---------------------|-----------------------------------|
| **Accuracy**        | 95.2%                             |
| **Precision**       | 94.5%                             |
| **Recall**          | 93.7%                             |
| **F1-Score**        | 94.1%                             |
| **Inference Speed** | Fast (optimized via quantization) |

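For reference, metrics like these could be recomputed with scikit-learn as sketched below; `val_loader` is an assumed `DataLoader` over the validation split and is not part of this repository:

```python
import torch
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Collect predictions over the validation split (val_loader is an assumed DataLoader)
all_preds, all_labels = [], []
model.eval()
with torch.no_grad():
    for images, labels in val_loader:
        preds = torch.argmax(model(images), dim=1)
        all_preds.extend(preds.tolist())
        all_labels.extend(labels.tolist())

# Binary metrics with the Noisy class (label 1) as the positive class
print("Accuracy :", accuracy_score(all_labels, all_preds))
print("Precision:", precision_score(all_labels, all_preds))
print("Recall   :", recall_score(all_labels, all_preds))
print("F1-Score :", f1_score(all_labels, all_preds))
```
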
## 🛠️ Fine-Tuning & Quantization Details

### Dataset Details

- Dataset Composition: The training data consists of clean (high-quality) images and noisy (distorted) images.
- Dataset Source: Custom/Kaggle dataset.

### Training Configuration

- Epochs: 5–20 (depending on convergence criteria)
- Batch Size: 16 or 32
- Optimizer: Adam
- Learning Rate: 1e-4
- Loss Function: Cross-Entropy

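The configuration above could be wired into a training loop as sketched below; the dataset path and the `ImageFolder` layout (`train/Clean`, `train/Noisy`) are assumptions, not part of this repository:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: /path/to/train/Clean/*.jpg and /path/to/train/Noisy/*.jpg
# ImageFolder assigns labels alphabetically: Clean -> 0, Noisy -> 1
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
train_data = datasets.ImageFolder("/path/to/train", transform=train_transform)
train_loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from ImageNet weights and replace the classification head
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):  # 5-20 epochs depending on convergence
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```
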
### Quantization

- Method: Dynamic quantization applied to fully connected layers
- Precision: Lowered to 8-bit integers (qint8) for efficiency

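A minimal sketch of this step, assuming the fine-tuned float model is held in `model` (the output file name is illustrative):

```python
import torch
import torch.nn as nn

# Dynamically quantize the nn.Linear layers (the fc head in ResNet-18) to 8-bit integer weights
quantized_model = torch.quantization.quantize_dynamic(
    model,            # fine-tuned float model
    {nn.Linear},      # layer types to quantize
    dtype=torch.qint8,
)

# Save the quantized weights for inference
torch.save(quantized_model.state_dict(), "resnet18_quantized.pth")
```
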
## ⚠️ Limitations

- Domain Shift: The model may misclassify images whose quality or noise characteristics differ significantly from the training dataset.
- Misclassification Risk: Similar patterns in clean and noisy images (e.g., very subtle noise) might lead to incorrect classifications.
- Generalization: Performance may degrade on images with unusual lighting, contrast, or other artifacts not seen during training.