---
title: IrisModel
emoji: 🏢
colorFrom: blue
colorTo: indigo
sdk: docker
pinned: false
license: mit
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# packaging-ml-model
Learn how to package a machine learning model into a container.



# Steps to use the repository

1. Clone the repository
2. Create a virtual environment to isolate the dependencies (e.g. `python -m venv venv`, then activate it)
3. Install the requirements with the following command:

```
pip install -r requirements.txt
```

# Build the model file

1. Execute the following command to build the model:

```
python model.py
```

- This will build the model and serialize it into a file called `model.joblib`, which is what we'll load into memory when we build our inference API with FastAPI.
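The contents of model.py are not reproduced in this README; the following is only a minimal sketch of what such a script might look like, assuming scikit-learn's bundled iris dataset and a logistic-regression classifier (the actual repository may use a different algorithm):

```python
# model.py — illustrative sketch, not the actual repository code.
# Trains a classifier on the iris dataset and serializes it to model.joblib.
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression


def build_model(path="model.joblib"):
    # Load the bundled iris dataset (150 samples, 4 features)
    X, y = load_iris(return_X_y=True)

    # Fit a simple classifier; max_iter raised so the solver converges
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y)

    # Serialize the fitted model; the inference API loads this file later
    joblib.dump(model, path)
    return model


if __name__ == "__main__":
    build_model()
```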

# Build a FastAPI-based app

- The source code for this is available in the app.py file
- You can check whether it's working by executing the following command:

```
uvicorn app:app --reload
```

# Generate a Dockerfile

Create a file named `Dockerfile` in the same directory as the app and add the following contents to it:

```
# Use Python base image
FROM python:3.9-slim

# Set working directory in the container
WORKDIR /app

# Copy requirements.txt to container
COPY requirements.txt .

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the FastAPI app and serialized model to the container
COPY app.py model.joblib ./

# Expose port 80 for FastAPI app
EXPOSE 80

# Command to start the FastAPI app
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "80"]
```

# Build the Docker image from the instructions given in the Dockerfile

```
docker build -t packaged-model-001 .
```

# Run a container from the image

The `-p 8000:80` flag maps port 8000 on the host to port 80 inside the container:
```
docker run -p 8000:80 packaged-model-001
```


# Verify whether the container is running

- Use Postman (or any other HTTP client) to call the POST endpoint available at http://localhost:8000/predict with a JSON body like the following:

```
{
    "sepal_length": 2.0,
    "sepal_width": 3.0,
    "petal_length": 4.0,
    "petal_width": 1.5
}
```
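If you'd rather script the check than use Postman, the same request can be sent from Python with only the standard library (assuming the container is running locally on port 8000):

```python
# Call the containerized /predict endpoint with the sample payload.
import json
from urllib import request

payload = {
    "sepal_length": 2.0,
    "sepal_width": 3.0,
    "petal_length": 4.0,
    "petal_width": 1.5,
}


def call_predict(url="http://localhost:8000/predict"):
    # Build a POST request with a JSON body and the right content type
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    print(call_predict())
```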


# Push the image to a Docker registry

1. Log in to Docker:

```
docker login
```

- The above command should show text like the following: 

```
Log in with your Docker ID or email address to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com/ to create one.
You can log in with your password or a Personal Access Token (PAT). Using a limited-scope PAT grants better security and is required for organizations using SSO. Learn more at https://docs.docker.com/go/access-tokens/

Username: 
```

- Use the PAT as the password

**Note:** You can also use your password, but using a PAT with minimal scope is recommended.


# Create a repository on DockerHub

1. Go to https://hub.docker.com
2. Create an account if you don't already have one
3. Create a new repository


# Tag your local image

Replace `riio` with your own Docker Hub username:

```
docker tag packaged-model-001:latest riio/packaged-model-001:latest
```


## Push the image to DockerHub 

```
docker push riio/packaged-model-001:latest
```


## Sample Output: 

```

(venv) username@machine packaging-ml-model % docker push riio/packaged-model-001:latest
The push refers to repository [docker.io/riio/packaged-model-001]
fd749012a9d2: Pushed 
963141bae3f4: Pushing  253.5MB
4f83a3ffc58c: Pushed 
7b34bc82ecfd: Pushed 
b958f60e4e67: Pushed 
f02ce41627b1: Pushed 
eeac00a5e55e: Pushed 
34e7752745be: Pushed 
8560597d922c: Pushing  100.2MB
```


## Congratulations.

- You just packaged your machine learning model and made it available to the world with the power of containers. 

## How can anybody use your packaged model?

It's simple:

```
docker pull riio/packaged-model-001:latest
```


## How to run the container?

```
docker run -p 8000:80 riio/packaged-model-001:latest
```



# Link to access the container

- https://model-deployment-005.purplecliff-0cc0d310.centralindia.azurecontainerapps.io/predict