Rixhot committed
Commit 80b0f53 · verified · 1 Parent(s): fcda1b7

Upload Untitled1.ipynb

Files changed (1)
  1. Untitled1.ipynb +699 -0
Untitled1.ipynb ADDED
@@ -0,0 +1,699 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Jlq7oGlpguCe"
+ },
+ "source": [
+ "# AI Art Style Detector Project - Topics Used\n",
+ "\n",
+ "## Machine Learning and Deep Learning Topics:\n",
+ "\n",
+ "### 1. Image Preprocessing:\n",
+ "- **Image Loading**: Loading images from file input using Keras's `image.load_img()`.\n",
+ "- **Resizing**: Resizing the input image to a fixed size (`224x224`) before feeding it into the model.\n",
+ "- **Normalization**: Scaling pixel values to the range `[0, 1]` for efficient model input.\n",
+ "\n",
+ "### 2. Model Loading and Inference:\n",
+ "- **Loading Pre-trained Models**: Using `tensorflow.keras.models.load_model()` to load a trained deep learning model (like a CNN for image classification).\n",
+ "- **Prediction**: Using the model to make predictions by feeding the preprocessed image data into the model and getting class probabilities.\n",
+ "\n",
+ "### 3. Transfer Learning:\n",
+ "- **Pre-trained Models**: The model is built on a pre-trained CNN (MobileNetV2 in this notebook) through **transfer learning**, where the lower layers are frozen and only the higher layers are fine-tuned for the specific art style classification task.\n",
+ " \n",
+ "### 4. Classification:\n",
+ "- **Categorical Output**: The model predicts which art style category (e.g., Impressionism, Surrealism) an artwork belongs to.\n",
+ "- **Softmax Activation**: The output layer of the model typically uses **softmax** activation to produce probabilities for each art style class.\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Web Application Development Topics (Using Streamlit):\n",
+ "\n",
+ "### 1. Streamlit Layout:\n",
+ "- **Column Layouts**: Using `st.columns()` to create responsive, side-by-side layouts for displaying images and results.\n",
+ "- **Expander**: Using `st.expander()` to allow users to reveal additional information about the model and its functionality.\n",
+ "\n",
+ "### 2. File Uploading:\n",
+ "- **Image Upload**: Using `st.file_uploader()` to allow users to upload images directly from their local device into the web app.\n",
+ "- **Image Display**: Using `st.image()` to display the uploaded image on the web app.\n",
+ "\n",
+ "### 3. Interactive Widgets:\n",
+ "- **Dropdown/Selectbox**: Using `st.selectbox()` to allow users to interactively select art styles and get more information about them.\n",
+ "- **Buttons and Inputs**: You could add buttons and input fields to extend functionality, like adding manual entry for predicting specific images.\n",
+ "\n",
+ "### 4. Visualization:\n",
+ "- **Plotly Charts**: Using **Plotly** to visualize art style distributions (like bar charts), making the app more interactive and engaging.\n",
+ "- **Matplotlib/Seaborn** (Optional): Visualizing the results or image transformations (though Plotly is integrated here).\n",
+ "\n",
+ "### 5. Styling the UI:\n",
+ "- **Custom CSS**: Using custom CSS injected into the Streamlit app with `st.markdown()` to enhance the look and feel of the app (e.g., custom colors, fonts, and element styling).\n",
+ " \n",
+ "### 6. Streamlit Features:\n",
+ "- **Markdown Rendering**: Using `st.markdown()` to render HTML and CSS for custom styling or display content.\n",
+ "- **File Handling**: Streamlit handles file uploading, downloading, and processing in a straightforward way using `st.file_uploader()`.\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Deep Learning Topics in Model Development (for Art Style Classification):\n",
+ "\n",
+ "### 1. Convolutional Neural Networks (CNNs):\n",
+ "- **Convolutional Layers**: CNNs are well-suited for image classification tasks due to their ability to automatically learn spatial hierarchies of features.\n",
+ "- **Pooling Layers**: Max-pooling layers to reduce the spatial dimensions of the image while retaining important features.\n",
+ "- **Fully Connected Layers**: Dense layers to perform the final classification.\n",
+ "\n",
+ "### 2. Transfer Learning:\n",
+ "- Using pre-trained networks like **VGG16**, **ResNet**, or **Inception** as feature extractors, and fine-tuning the final layers for specific art styles.\n",
+ " \n",
+ "### 3. Activation Functions:\n",
+ "- **ReLU (Rectified Linear Unit)**: For non-linear transformations in hidden layers.\n",
+ "- **Softmax**: For multi-class classification, used in the final output layer to output probabilities for each class.\n",
+ "\n",
+ "### 4. Model Training (Optional):\n",
+ "- **Data Augmentation**: Techniques to artificially expand the dataset (e.g., rotations, flips, etc.).\n",
+ "- **Loss Function**: Typically **categorical cross-entropy** for multi-class classification tasks.\n",
+ "- **Optimizer**: Such as **Adam**, to adjust weights during training.\n",
+ "\n",
+ "### 5. Evaluation Metrics:\n",
+ "- **Accuracy**: How often the model predicts the correct class.\n",
+ "- **Confusion Matrix**: (Optional) To evaluate the model’s performance across different art styles.\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Other Relevant Topics:\n",
+ "\n",
+ "### 1. Data Handling and Preprocessing:\n",
+ "- **Numpy**: Used for image array manipulation and preparing input data.\n",
+ "- **Pandas**: For organizing and visualizing art style statistics (e.g., counts, distributions).\n",
+ "\n",
+ "### 2. Model Evaluation and Fine-tuning (Optional):\n",
+ "- **Hyperparameter Tuning**: Tweaking the learning rate, batch size, etc., to improve model performance.\n",
+ "- **Cross-validation**: Ensuring the model performs well on unseen data.\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## In Summary:\n",
+ "The main topics used in this project are:\n",
+ "\n",
+ "- **Machine Learning**: CNNs, transfer learning, model prediction, image preprocessing, and classification.\n",
+ "- **Deep Learning**: Using pre-trained models, fine-tuning, and evaluating the model’s performance.\n",
+ "- **Streamlit Web Development**: Interactive web app development, custom UI with CSS, file handling, and visualizations.\n",
+ "- **Data Science**: Data manipulation, model deployment, and visualization using Pandas and Plotly.\n",
+ "\n",
+ "Minimal sketches of the inference flow and the Streamlit upload step follow this cell.\n"
+ ]
+ },
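+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a minimal sketch of the load-resize-normalize-predict flow described above (the model file `art_style_model.h5` and the image path are placeholders, not artifacts from this repo):\n",
+ "\n",
+ "```python\n",
+ "import numpy as np\n",
+ "from tensorflow.keras.models import load_model\n",
+ "from tensorflow.keras.preprocessing import image\n",
+ "\n",
+ "model = load_model('art_style_model.h5')  # hypothetical trained classifier\n",
+ "img = image.load_img('artwork.jpg', target_size=(224, 224))  # resize to the model input size\n",
+ "x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)  # scale to [0, 1], add batch dim\n",
+ "probs = model.predict(x)[0]  # softmax probabilities, one per art style\n",
+ "```\n",
+ "\n",
+ "And a correspondingly minimal sketch of the Streamlit upload-and-display piece (`st.file_uploader` and `st.image` are real Streamlit calls; the prediction step is elided):\n",
+ "\n",
+ "```python\n",
+ "import streamlit as st\n",
+ "\n",
+ "uploaded = st.file_uploader('Upload an artwork', type=['jpg', 'jpeg', 'png'])\n",
+ "if uploaded is not None:\n",
+ "    st.image(uploaded, caption='Uploaded artwork')\n",
+ "    # ...preprocess and predict as in the sketch above...\n",
+ "```\n"
+ ]
+ },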
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {
+ "id": "atG_3xNvU720"
+ },
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "The syntax of the command is incorrect.\n"
+ ]
+ }
+ ],
+ "source": [
+ "!mkdir -p ~/.kaggle\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "s_XA6A_YU7zn",
+ "outputId": "9e66b83c-065f-4b5b-c274-44a57986ebac"
+ },
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "'cp' is not recognized as an internal or external command,\n",
+ "operable program or batch file.\n"
+ ]
+ }
+ ],
+ "source": [
+ "!cp kaggle.json ~/.kaggle/\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "FW1jquyKU7wu",
+ "outputId": "381ed4f7-26cd-4372-8510-5930a1aa320f"
+ },
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "'chmod' is not recognized as an internal or external command,\n",
+ "operable program or batch file.\n"
+ ]
+ }
+ ],
+ "source": [
+ "!chmod 600 ~/.kaggle/kaggle.json\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "8hv1Lom6Uec_",
+ "outputId": "3a93e47f-896f-4478-84a2-e2d4e29a5e46"
+ },
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "'chmod' is not recognized as an internal or external command,\n",
+ "operable program or batch file.\n"
+ ]
+ }
+ ],
+ "source": [
+ "!chmod 600 kaggle.json\n"
+ ]
+ },
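+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The stderr output above shows these POSIX shell commands failing on a Windows host. A cross-platform sketch of the same credential setup in plain Python (assuming `kaggle.json` sits in the current working directory):\n",
+ "\n",
+ "```python\n",
+ "import os\n",
+ "import shutil\n",
+ "from pathlib import Path\n",
+ "\n",
+ "kaggle_dir = Path.home() / '.kaggle'\n",
+ "kaggle_dir.mkdir(parents=True, exist_ok=True)  # ~/.kaggle on any OS\n",
+ "shutil.copy('kaggle.json', kaggle_dir / 'kaggle.json')\n",
+ "os.chmod(kaggle_dir / 'kaggle.json', 0o600)  # required on Linux; effectively a no-op on Windows\n",
+ "```\n"
+ ]
+ },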
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "Tm2JYiyWVCGC",
+ "outputId": "0d222bc9-c378-4822-8999-e57859643897"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Requirement already satisfied: kaggle in /usr/local/lib/python3.10/dist-packages (1.6.17)\n",
+ "Requirement already satisfied: six>=1.10 in /usr/local/lib/python3.10/dist-packages (from kaggle) (1.17.0)\n",
+ "Requirement already satisfied: certifi>=2023.7.22 in /usr/local/lib/python3.10/dist-packages (from kaggle) (2024.12.14)\n",
+ "Requirement already satisfied: python-dateutil in /usr/local/lib/python3.10/dist-packages (from kaggle) (2.8.2)\n",
+ "Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from kaggle) (2.32.3)\n",
+ "Requirement already satisfied: tqdm in /usr/local/lib/python3.10/dist-packages (from kaggle) (4.67.1)\n",
+ "Requirement already satisfied: python-slugify in /usr/local/lib/python3.10/dist-packages (from kaggle) (8.0.4)\n",
+ "Requirement already satisfied: urllib3 in /usr/local/lib/python3.10/dist-packages (from kaggle) (2.2.3)\n",
+ "Requirement already satisfied: bleach in /usr/local/lib/python3.10/dist-packages (from kaggle) (6.2.0)\n",
+ "Requirement already satisfied: webencodings in /usr/local/lib/python3.10/dist-packages (from bleach->kaggle) (0.5.1)\n",
+ "Requirement already satisfied: text-unidecode>=1.3 in /usr/local/lib/python3.10/dist-packages (from python-slugify->kaggle) (1.3)\n",
+ "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->kaggle) (3.4.0)\n",
+ "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->kaggle) (3.10)\n"
+ ]
+ }
+ ],
+ "source": [
+ "!pip install kaggle\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "OWi7m5uXSobo",
+ "outputId": "1542c1cb-adab-4b3b-db59-612707a19593"
+ },
+ "outputs": [],
+ "source": [
+ "# Download the WikiArt dataset from Kaggle\n",
+ "!kaggle datasets download steubk/wikiart"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "0ado9rLlWD67",
+ "outputId": "32eea455-9996-4e3f-b87c-ee0f40ee5485"
+ },
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "ERROR:root:Internal Python error in the inspect module.\n",
+ "Below is the traceback from this internal error.\n",
+ "\n",
+ "\n",
+ "KeyboardInterrupt\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "import zipfile\n",
+ "\n",
+ "with zipfile.ZipFile(\"/content/wikiart.zip\", \"r\") as zip_ref:\n",
+ " zip_ref.extractall(\"wikiart_data\")\n"
+ ]
+ },
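+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The full WikiArt archive is large enough to exhaust Colab's disk (see the `[Errno 28] No space left on device` error further down). A workaround sketch: extract only a subset of style folders; the style names below are examples, not a recommendation:\n",
+ "\n",
+ "```python\n",
+ "import zipfile\n",
+ "\n",
+ "keep = ('Impressionism/', 'Cubism/', 'Baroque/')  # example subset of styles\n",
+ "with zipfile.ZipFile('/content/wikiart.zip') as zf:\n",
+ "    members = [m for m in zf.namelist() if m.startswith(keep)]\n",
+ "    zf.extractall('wikiart_data', members=members)\n",
+ "```\n"
+ ]
+ },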
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "QKleAJNUWFED"
+ },
+ "outputs": [],
+ "source": [
+ "!ls wikiart_data\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "WPI-bMGJWG-w"
+ },
+ "source": [
+ "# **1. Data Preprocessing**"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "qAUPwMBYWN8p"
+ },
+ "source": [
+ "**(a) Import Libraries**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "KwB9KW7vWLFy"
+ },
+ "outputs": [],
+ "source": [
+ "import os # For filesystem paths and directory listings\n",
+ "import numpy as np\n",
+ "import matplotlib.pyplot as plt # For plotting training curves\n",
+ "import tensorflow as tf\n",
+ "from tensorflow.keras.preprocessing.image import ImageDataGenerator\n",
+ "from tensorflow.keras.applications import VGG16 # Imported for reference only; the model below uses MobileNetV2\n",
+ "from tensorflow.keras import layers, models\n",
+ "from sklearn.model_selection import train_test_split\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "jAfmnJEIb02C"
+ },
+ "outputs": [],
+ "source": [
+ "import tensorflow as tf\n",
+ "from tensorflow.keras.applications import MobileNetV2\n",
+ "from tensorflow.keras.preprocessing.image import ImageDataGenerator\n",
+ "from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint\n",
+ "from tensorflow.keras.optimizers import AdamW\n",
+ "from tensorflow.keras.mixed_precision import set_global_policy # Optional mixed-precision switch; never called in this run"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "XlwZJ5DSXgGF"
+ },
+ "source": [
+ "**(b) Load and Explore the Data**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "2wIkoq7AXRpC",
+ "outputId": "317f840d-2a2a-4021-e8c4-24a6485e6b2c"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "['Contemporary_Realism', 'Northern_Renaissance', 'Action_painting', 'wclasses.csv', 'Cubism', 'Color_Field_Painting', 'Realism', 'Rococo', 'Fauvism', 'Romanticism', 'High_Renaissance', 'New_Realism', 'Naive_Art_Primitivism', 'Synthetic_Cubism', 'Art_Nouveau_Modern', 'Baroque', 'Minimalism', 'Impressionism', 'Symbolism', 'Mannerism_Late_Renaissance', 'Abstract_Expressionism', 'Early_Renaissance', 'Analytical_Cubism', 'Post_Impressionism', 'Ukiyo_e', 'classes.csv', 'Pointillism', 'Pop_Art', 'Expressionism']\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Set the dataset directory path\n",
+ "dataset_dir = '/content/wikiart_data'\n",
+ "# List the dataset's entries (27 style folders plus classes.csv and wclasses.csv)\n",
+ "classes = os.listdir(dataset_dir)\n",
+ "print(classes)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 356
+ },
+ "id": "upLuzm4hcWtO",
+ "outputId": "b71042b1-8292-4ba5-eff1-30183c52574d"
+ },
+ "outputs": [
+ {
+ "ename": "OSError",
+ "evalue": "[Errno 28] No space left on device",
+ "output_type": "error",
+ "traceback": [
+ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+ "\u001b[0;31mOSError\u001b[0m Traceback (most recent call last)",
+ "\u001b[0;32m<ipython-input-27-d9e77e74453b>\u001b[0m in \u001b[0;36m<cell line: 13>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 12\u001b[0m \u001b[0;31m# Extract the zip file\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 13\u001b[0m \u001b[0;32mwith\u001b[0m \u001b[0mzipfile\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mZipFile\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"/content/wikiart.zip\"\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m\"r\"\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mzip_ref\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 14\u001b[0;31m \u001b[0mzip_ref\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mextractall\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"/content/wikiart_data\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 15\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 16\u001b[0m \u001b[0;31m# Create directories if they don't exist\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
+ "\u001b[0;32m/usr/lib/python3.10/zipfile.py\u001b[0m in \u001b[0;36mextractall\u001b[0;34m(self, path, members, pwd)\u001b[0m\n\u001b[1;32m 1658\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1659\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mzipinfo\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mmembers\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1660\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_extract_member\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mzipinfo\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mpath\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mpwd\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1661\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1662\u001b[0m \u001b[0;34m@\u001b[0m\u001b[0mclassmethod\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
+ "\u001b[0;32m/usr/lib/python3.10/zipfile.py\u001b[0m in \u001b[0;36m_extract_member\u001b[0;34m(self, member, targetpath, pwd)\u001b[0m\n\u001b[1;32m 1713\u001b[0m \u001b[0;32mwith\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mopen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmember\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mpwd\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mpwd\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0msource\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;31m \u001b[0m\u001b[0;31m\\\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1714\u001b[0m \u001b[0mopen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtargetpath\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m\"wb\"\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mtarget\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1715\u001b[0;31m \u001b[0mshutil\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcopyfileobj\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0msource\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtarget\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1716\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1717\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mtargetpath\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
+ "\u001b[0;32m/usr/lib/python3.10/shutil.py\u001b[0m in \u001b[0;36mcopyfileobj\u001b[0;34m(fsrc, fdst, length)\u001b[0m\n\u001b[1;32m 196\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0mbuf\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 197\u001b[0m \u001b[0;32mbreak\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 198\u001b[0;31m \u001b[0mfdst_write\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mbuf\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 199\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 200\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0m_samefile\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0msrc\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdst\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
+ "\u001b[0;31mOSError\u001b[0m: [Errno 28] No space left on device"
+ ]
+ }
+ ],
+ "source": [
+ "import os\n",
+ "import shutil\n",
+ "import numpy as np\n",
+ "from sklearn.model_selection import train_test_split\n",
+ "import zipfile\n",
+ "\n",
+ "# Define paths\n",
+ "dataset_dir = \"/content/wikiart_data\" # All images in this folder\n",
+ "train_dir = \"/content/train\" # Folder for training images\n",
+ "val_dir = \"/content/val\" # Folder for validation images\n",
+ "\n",
+ "# Extract the zip file (note: the archive was already extracted above; extracting it again doubles disk usage, which triggered the OSError shown here)\n",
+ "with zipfile.ZipFile(\"/content/wikiart.zip\", \"r\") as zip_ref:\n",
+ " zip_ref.extractall(\"/content/wikiart_data\")\n",
+ "\n",
+ "# Create directories if they don't exist\n",
+ "os.makedirs(train_dir, exist_ok=True)\n",
+ "os.makedirs(val_dir, exist_ok=True)\n",
+ "\n",
+ "# Create subdirectories for classes\n",
+ "classes = [d for d in os.listdir(dataset_dir) if os.path.isdir(os.path.join(dataset_dir, d))]\n",
+ "for cls in classes:\n",
+ " os.makedirs(os.path.join(train_dir, cls), exist_ok=True)\n",
+ " os.makedirs(os.path.join(val_dir, cls), exist_ok=True)\n",
+ "\n",
+ "# Split dataset\n",
+ "for cls in classes:\n",
+ " cls_dir = os.path.join(dataset_dir, cls)\n",
+ " images = os.listdir(cls_dir)\n",
+ " # Check if the images list is empty before using train_test_split\n",
+ " if not images:\n",
+ " print(f\"Warning: No images found in {cls_dir}. Skipping this directory.\")\n",
+ " continue # Skip to the next class\n",
+ " # Skip directories with a single image (train_test_split needs at least two samples)\n",
+ " if len(images) == 1:\n",
+ " print(f\"Warning: Only one image found in {cls_dir}. Skipping this directory.\")\n",
+ " continue\n",
+ " train_images, val_images = train_test_split(images, test_size=0.2, random_state=42) # 80% train, 20% val\n",
+ "\n",
+ " # Move files to respective folders\n",
+ " for img in train_images:\n",
+ " try:\n",
+ " shutil.move(os.path.join(cls_dir, img), os.path.join(train_dir, cls, img))\n",
+ " except shutil.Error as e:\n",
+ " print(f\"Error moving file {img} from {cls_dir} to {train_dir}/{cls}: {e}\")\n",
+ " for img in val_images:\n",
+ " try:\n",
+ " shutil.move(os.path.join(cls_dir, img), os.path.join(val_dir, cls, img))\n",
+ " except shutil.Error as e:\n",
+ " print(f\"Error moving file {img} from {cls_dir} to {val_dir}/{cls}: {e}\")\n",
+ "\n",
+ "print(\"Dataset split completed.\")"
+ ]
+ },
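+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "An alternative split that avoids physically moving files on a nearly full disk is `tf.keras.utils.image_dataset_from_directory` with `validation_split`. A sketch under the same 80/20 split and the 128x128 size used below:\n",
+ "\n",
+ "```python\n",
+ "train_ds = tf.keras.utils.image_dataset_from_directory(\n",
+ "    dataset_dir, validation_split=0.2, subset='training', seed=42,\n",
+ "    image_size=(128, 128), batch_size=16, label_mode='categorical')\n",
+ "val_ds = tf.keras.utils.image_dataset_from_directory(\n",
+ "    dataset_dir, validation_split=0.2, subset='validation', seed=42,\n",
+ "    image_size=(128, 128), batch_size=16, label_mode='categorical')\n",
+ "```\n"
+ ]
+ },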
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "KRDb2vLAX1m-"
+ },
+ "source": [
+ "**(c) Image Resizing and Normalization**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "CPa6EY8bXxMN",
+ "outputId": "6a2ac532-d5ec-4e80-e8e3-902ac557fdcc"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Found 65166 images belonging to 27 classes.\n",
+ "Found 16278 images belonging to 27 classes.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Set parameters\n",
+ "image_size = (128, 128) # Smaller image size for memory efficiency\n",
+ "batch_size = 16 # Reduced batch size\n",
+ "num_classes = 27 # Must match the 27 style classes the generators report below\n",
+ "\n",
+ "# Data augmentation and rescaling\n",
+ "train_datagen = ImageDataGenerator(\n",
+ " rescale=1.0 / 255,\n",
+ " rotation_range=20,\n",
+ " width_shift_range=0.2,\n",
+ " height_shift_range=0.2,\n",
+ " shear_range=0.2,\n",
+ " zoom_range=0.2,\n",
+ " horizontal_flip=True\n",
+ ")\n",
+ "\n",
+ "val_datagen = ImageDataGenerator(rescale=1.0 / 255)\n",
+ "\n",
+ "# Data generators\n",
+ "train_gen = train_datagen.flow_from_directory(\n",
+ " train_dir,\n",
+ " target_size=image_size,\n",
+ " batch_size=batch_size,\n",
+ " class_mode='categorical'\n",
+ ")\n",
+ "\n",
+ "val_gen = val_datagen.flow_from_directory(\n",
+ " val_dir,\n",
+ " target_size=image_size,\n",
+ " batch_size=batch_size,\n",
+ " class_mode='categorical'\n",
+ ")"
+ ]
+ },
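+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A quick look at the class balance of the training split, using the Pandas + Plotly combination mentioned in the intro (a sketch; the labels come straight from the generator):\n",
+ "\n",
+ "```python\n",
+ "import pandas as pd\n",
+ "import plotly.express as px\n",
+ "\n",
+ "counts = pd.Series(train_gen.classes).value_counts().sort_index()\n",
+ "counts.index = list(train_gen.class_indices.keys())  # map label ids to style names\n",
+ "px.bar(counts, labels={'index': 'Style', 'value': 'Images'}).show()\n",
+ "```\n"
+ ]
+ },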
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "i9U4kDsnZ4rW"
+ },
+ "source": [
+ "# **2. Model Architecture**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "_Lr-JPXQcBJo"
+ },
+ "outputs": [],
+ "source": [
+ "# Load pre-trained MobileNetV2 with frozen layers\n",
+ "base_model = MobileNetV2(weights='imagenet', include_top=False, input_shape=(128, 128, 3))\n",
+ "base_model.trainable = False # Freeze base layers to reduce computation\n",
+ "\n",
+ "# Build the model\n",
+ "model = tf.keras.Sequential([\n",
+ " base_model,\n",
+ " tf.keras.layers.GlobalAveragePooling2D(),\n",
+ " tf.keras.layers.Dense(256, activation='relu'),\n",
+ " tf.keras.layers.Dropout(0.5),\n",
+ " tf.keras.layers.Dense(num_classes, activation='softmax', dtype='float32') # Ensure outputs are float32\n",
+ "])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "3iO9Hv-na53V"
+ },
+ "source": [
+ "**(b) Compile the Model**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "M-IwFmTBZ9P5"
+ },
+ "outputs": [],
+ "source": [
+ "# Compile the model\n",
+ "optimizer = AdamW(learning_rate=0.001)\n",
+ "model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "7dw_MJpYbHze"
+ },
+ "source": [
+ "**(c) Train the Model**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "ro14UX1HbFx7"
+ },
+ "outputs": [],
+ "source": [
+ "# Callbacks\n",
+ "checkpoint = ModelCheckpoint('best_model.h5', save_best_only=True, monitor='val_loss')\n",
+ "early_stopping = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)\n",
+ "\n",
+ "# Train the model\n",
+ "history = model.fit(\n",
+ " train_gen,\n",
+ " validation_data=val_gen,\n",
+ " epochs=20,\n",
+ " callbacks=[checkpoint, early_stopping]\n",
+ ")"
+ ]
+ },
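+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "An optional second training phase, matching the \"freeze the lower layers, fine-tune the higher ones\" approach described in the intro (a sketch; the number of layers to unfreeze, the learning rate, and the epoch count are illustrative):\n",
+ "\n",
+ "```python\n",
+ "base_model.trainable = True\n",
+ "for layer in base_model.layers[:-20]:  # keep all but the top ~20 layers frozen\n",
+ "    layer.trainable = False\n",
+ "model.compile(optimizer=AdamW(learning_rate=1e-4),  # lower LR for fine-tuning\n",
+ "              loss='categorical_crossentropy', metrics=['accuracy'])\n",
+ "model.fit(train_gen, validation_data=val_gen, epochs=5,\n",
+ "          callbacks=[checkpoint, early_stopping])\n",
+ "```\n"
+ ]
+ },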
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "B23w_jvmbpVd"
+ },
+ "source": [
+ "# **3. Evaluate the Model**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "RorPfsd_bmB2"
+ },
+ "outputs": [],
+ "source": [
+ "# Plot training and validation accuracy\n",
+ "plt.plot(history.history['accuracy'], label='Training Accuracy')\n",
+ "plt.plot(history.history['val_accuracy'], label='Validation Accuracy')\n",
+ "plt.title('Model Accuracy')\n",
+ "plt.xlabel('Epochs')\n",
+ "plt.ylabel('Accuracy')\n",
+ "plt.legend()\n",
+ "plt.show()\n",
+ "\n",
+ "# Plot training and validation loss\n",
+ "plt.plot(history.history['loss'], label='Training Loss')\n",
+ "plt.plot(history.history['val_loss'], label='Validation Loss')\n",
+ "plt.title('Model Loss')\n",
+ "plt.xlabel('Epochs')\n",
+ "plt.ylabel('Loss')\n",
+ "plt.legend()\n",
+ "plt.show()\n"
+ ]
+ },
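+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The intro lists a confusion matrix as an optional evaluation step; a sketch over the validation split (a fresh non-shuffled generator is needed so `classes` lines up with the predictions):\n",
+ "\n",
+ "```python\n",
+ "import numpy as np\n",
+ "from sklearn.metrics import confusion_matrix\n",
+ "\n",
+ "eval_gen = val_datagen.flow_from_directory(\n",
+ "    val_dir, target_size=image_size, batch_size=batch_size,\n",
+ "    class_mode='categorical', shuffle=False)\n",
+ "preds = np.argmax(model.predict(eval_gen), axis=1)\n",
+ "cm = confusion_matrix(eval_gen.classes, preds)\n",
+ "```\n"
+ ]
+ },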
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "fB4FmTpIcEbc"
+ },
+ "source": [
+ "# **4. Model Testing**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "uJyJa4rfcD0Z"
+ },
+ "outputs": [],
+ "source": [
+ "from tensorflow.keras.preprocessing import image\n",
+ "\n",
+ "# Load a test image\n",
+ "img_path = '/path_to_test_image/test_image.jpg'\n",
+ "img = image.load_img(img_path, target_size=image_size) # Same (128, 128) size used for training\n",
+ "img_array = image.img_to_array(img) / 255.0 # Normalize to [0, 1]\n",
+ "img_array = np.expand_dims(img_array, axis=0)\n",
+ "\n",
+ "# Predict the style\n",
+ "prediction = model.predict(img_array)\n",
+ "class_names = list(train_gen.class_indices.keys()) # Folder-derived labels in training order (os.listdir would also include the csv files)\n",
+ "predicted_class = class_names[np.argmax(prediction)]\n",
+ "print(f\"Predicted Art Style: {predicted_class}\")\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "accelerator": "GPU",
+ "colab": {
+ "gpuType": "T4",
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.19"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+ }