---
title: EVAv1 S12
emoji: π’
colorFrom: blue
colorTo: blue
sdk: gradio
sdk_version: 3.39.0
app_file: app.py
pinned: false
license: mit
---
# S12
# CIFAR10 Image Classification with PyTorch Lightning
This project implements an image classifier trained on the CIFAR10 dataset using PyTorch Lightning. It showcases a custom ResNet architecture, data augmentation with albumentations, custom dataset classes, and a OneCycle learning rate scheduler.
## Project Structure
The project is structured as follows:
1. Data loading and preprocessing
2. Dataset statistics calculation
3. Data augmentation
4. Model creation
5. Training and evaluation
### Data Loading and Preprocessing
The data for this project is the CIFAR10 dataset, which is loaded using PyTorch's built-in datasets. To ensure that our model generalizes well, we apply several data augmentations to our training set including normalization, padding, random cropping, and horizontal flipping.
### Dataset Statistics Calculation
Before we start training our model, we calculate per-channel mean and standard deviation for our dataset. These statistics are used to normalize our data, which helps make our training process more stable.
```
Dataset Mean - [0.49139968 0.48215841 0.44653091]
Dataset Std - [0.24703223 0.24348513 0.26158784]
```
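A short helper along these lines can compute the statistics. This is a sketch that assumes the raw images are available as a `uint8` NumPy array of shape `(N, H, W, 3)`, which is what `torchvision.datasets.CIFAR10(...).data` provides:

```python
import numpy as np

def channel_stats(images):
    """Per-channel mean and std of uint8 images, scaled to [0, 1].

    images: NumPy array of shape (N, H, W, 3) with values in [0, 255].
    """
    scaled = images.astype(np.float64) / 255.0
    # Reduce over batch, height, and width; keep the channel axis
    mean = scaled.mean(axis=(0, 1, 2))
    std = scaled.std(axis=(0, 1, 2))
    return mean, std
```

Calling this on the CIFAR10 training images yields the values shown above.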
### Data Augmentation
```python
import albumentations as A
from albumentations.pytorch import ToTensorV2

def get_transforms(means, stds):
    train_transforms = A.Compose(
        [
            A.Normalize(mean=means, std=stds, always_apply=True),
            # Pad to 40x40, then randomly crop back to 32x32
            # (equivalent to a random crop with 4 pixels of padding)
            A.PadIfNeeded(min_height=40, min_width=40, always_apply=True),
            A.RandomCrop(height=32, width=32, always_apply=True),
            A.HorizontalFlip(),
            A.Cutout(fill_value=means),
            ToTensorV2(),
        ]
    )
    test_transforms = A.Compose(
        [
            A.Normalize(mean=means, std=stds, always_apply=True),
            ToTensorV2(),
        ]
    )
    return train_transforms, test_transforms
```
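Because albumentations operates on NumPy arrays rather than PIL images, the torchvision CIFAR10 dataset is typically wrapped in a small custom dataset class. A minimal sketch (the class name is illustrative, not necessarily the one used in this repository):

```python
import numpy as np
from torch.utils.data import Dataset

class AlbumentationsCIFAR10(Dataset):
    """Wraps a base dataset so albumentations transforms can be applied."""

    def __init__(self, base_dataset, transforms=None):
        self.base = base_dataset
        self.transforms = transforms

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        image, label = self.base[idx]
        image = np.array(image)  # PIL image -> HWC uint8 array
        if self.transforms is not None:
            image = self.transforms(image=image)["image"]
        return image, label
```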

### Model Creation
The model we use for this project is a Custom ResNet, a type of convolutional neural network known for its high performance on image classification tasks.
```
| Name | Type | Params
---------------------------------------------------
0 | criterion | CrossEntropyLoss | 0
1 | accuracy | MulticlassAccuracy | 0
2 | prep_layer | Sequential | 1.9 K
3 | layer_one | Sequential | 74.0 K
4 | res_block1 | ResBlock | 295 K
5 | layer_two | Sequential | 295 K
6 | layer_three | Sequential | 1.2 M
7 | res_block2 | ResBlock | 4.7 M
8 | max_pool | MaxPool2d | 0
9 | fc | Linear | 5.1 K
---------------------------------------------------
6.6 M Trainable params
0 Non-trainable params
6.6 M Total params
26.292 Total estimated model params size (MB)
```
#### ResNet Architecture and Residual Blocks
The defining feature of the ResNet architecture is its use of residual blocks and skip connections. Each residual block consists of a series of convolutional layers followed by a skip connection that adds the input of the block to its output. These connections allow the model to learn identity functions, making it easier for the network to learn complex patterns. This characteristic is particularly beneficial in deeper networks, as it helps to alleviate the problem of vanishing gradients.
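A residual block of this kind can be sketched in PyTorch as follows (a minimal illustration of the idea, not the exact block used in this project):

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Two conv-BN-ReLU layers whose output is added to the block input."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Skip connection: the input is added to the transformed output,
        # so the block only has to learn a residual on top of the identity
        return x + self.body(x)
```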
### Training and Evaluation
To train our model, we use the Adam optimizer with a OneCycle learning rate scheduler.
```
Epoch 23: 100%
196/196 [00:27<00:00, 7.11it/s, v_num=0, val_loss=0.639, val_acc=0.776, train_loss=0.686, train_acc=0.762]
┌──────────────────────────┬──────────────────────────┐
│       Test metric        │       DataLoader 0       │
├──────────────────────────┼──────────────────────────┤
│         test_acc         │    0.8758000135421753    │
│        test_loss         │   0.39947837591171265    │
└──────────────────────────┴──────────────────────────┘
```
#### OneCycle Learning Rate Scheduler
The OneCycle learning rate scheduler varies the learning rate between a minimum and maximum value according to a certain policy. This dynamic learning rate can help improve the performance of our model. We train our model for a total of 24 epochs.
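In PyTorch Lightning this is typically wired up in `configure_optimizers`. A sketch, assuming the model stores the LR-finder suggestion as `self.max_lr` and the number of batches per epoch as `self.steps_per_epoch` (both attribute names are illustrative):

```python
import torch

def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer,
        max_lr=self.max_lr,              # e.g. the value suggested by the LR finder
        epochs=24,
        steps_per_epoch=self.steps_per_epoch,
        pct_start=0.2,                   # fraction of the cycle spent increasing the LR
    )
    return {
        "optimizer": optimizer,
        # Step the scheduler every batch so the LR follows the OneCycle curve
        "lr_scheduler": {"scheduler": scheduler, "interval": "step"},
    }
```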
### Learning Rate Finder
```python
from torch_lr_finder import LRFinder

def LR_Finder(model, criterion, optimizer, trainloader):
    """Run an exponential LR range test and return the suggested max LR."""
    lr_finder = LRFinder(model, optimizer, criterion, device="cuda")
    lr_finder.range_test(trainloader, end_lr=10, num_iter=200, step_mode="exp")
    # With suggest_lr=True, plot() returns (axes, suggested_lr)
    _, max_lr = lr_finder.plot(suggest_lr=True, skip_start=0, skip_end=0)
    lr_finder.reset()  # restore the model and optimizer to their initial state
    return max_lr
```

## Dependencies
This project requires the following dependencies:
- torch
- torchvision
- pytorch-lightning
- numpy
- albumentations
- matplotlib
- torchsummary
- torch-lr-finder
- gradio
## Usage
To run this project, you can clone the repository and run the main script:
```bash
git clone https://github.com/Delve-ERAV1/S12.git
cd S12
gradio app.py
```
### Upload New Image

### View Misclassified Images

## Results

## References
- Deep Residual Learning for Image Recognition, Kaiming He et al.
- Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates, Leslie N. Smith