Setup workshop example
- .github/workflows/deploy.yml +20 -0
- .gitignore +5 -0
- README.md +86 -1
- app.py +58 -0
- requirements.txt +3 -0
.github/workflows/deploy.yml
ADDED
@@ -0,0 +1,20 @@
+name: Deploy
+on:
+  push:
+    branches: [ main ]
+
+  # to run this workflow manually from the Actions tab
+  workflow_dispatch:
+
+jobs:
+  sync-to-hub:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+        with:
+          fetch-depth: 0
+          lfs: true
+      - name: Push to hub
+        env:
+          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+        run: git push https://andreped:[email protected]/spaces/andreped/ViT-ImageClassifier main
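
This workflow mirrors the repository to the Hugging Face Space on every push to `main`, authenticating with an `HF_TOKEN` repository secret (a Hugging Face access token with write access). Roughly the same sync can be done from Python with `huggingface_hub`; the sketch below is illustrative only and assumes the token is exported in the environment and that the Space already exists:

```python
# Minimal sketch (not part of the workshop code): sync the local repository
# folder to the Space using huggingface_hub instead of a raw `git push`.
# Assumes HF_TOKEN is set in the environment and the Space already exists.
import os

from huggingface_hub import HfApi

api = HfApi(token=os.environ["HF_TOKEN"])
api.upload_folder(
    folder_path=".",                         # local repository root
    repo_id="andreped/ViT-ImageClassifier",  # target Space from the workflow above
    repo_type="space",
)
```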
.gitignore
ADDED
@@ -0,0 +1,5 @@
+venv/
+*.jpg
+*.jpeg
+*.png
+flagged/
README.md
CHANGED
@@ -1 +1,86 @@
-
+---
+title: 'ViT: Image Classifier'
+colorFrom: indigo
+colorTo: indigo
+sdk: gradio
+app_port: 7860
+emoji: 🫁
+pinned: false
+license: mit
+app_file: app.py
+---
+
+
+# INF-1600 AI Deployment workshop
+
+This workshop was developed for the _"Intro to Artificial Intelligence"_ course with
+code INF-1600 at UiT: The Arctic University of Norway.
+
+The workshop was a collaboration between UiT and Sopra Steria.
+
+In this workshop, you will get a primer on:
+* Cloning and pushing code from/to GitHub
+* Loading and running a pretrained image classification model from [Transformers](https://github.com/huggingface/transformers)
+* Developing a simple web application that lets users test the model, using [Gradio](https://www.gradio.app/)
+* Making a web app accessible on the local network
+* Making a public web app anyone can access, using [Hugging Face Spaces](https://huggingface.co/spaces)
+
+## Getting Started
+
+1. Create your first GitHub account by going [here](https://github.com/) and signing up.
+
+2. After logging in to GitHub, go to the code repository [here](https://github.com/andreped/INF1600-ai-workshop).
+
+3. Fork the repository to create your own copy of the code.
+
+4. Now you are ready to clone your fork to your own laptop by opening a terminal and running (remember to replace `<username>` with your own GitHub user name):
+```
+git clone https://github.com/<username>/INF1600-ai-workshop.git
+```
+
+5. After cloning, go inside the repository, and from the terminal run these lines to create a virtual environment and activate it:
+```
+virtualenv -p python3 venv --clear
+source venv/bin/activate
+```
+
+6. **TASK:** Analyse the `app.py` script to see which Python packages are required, and
+add the missing dependencies to the `requirements.txt` file.
+
+7. Install the dependencies into the virtual environment with:
+```
+pip install -r requirements.txt
+```
+
+8. To test that everything is working, run the `app.py` script to launch the web server.
+
+9. You can then access the web app by going to [http://127.0.0.1:7860](http://127.0.0.1:7860) in your favourite web browser.
+
+10. On the website, try clicking one of the image examples and then the orange `Submit` button. The model results should show on the right after a few seconds.
+
+11. Try accessing this address from your mobile phone.
+
+12. This should not work. To access the app from a different device, you need to serve it.
+Try setting `share=True` in the `interface.launch()` call in the `app.py` script.
+When running `app.py` now, you should be given a different web address. Try using that one instead.
+
+But of course, hosting the app yourself from your laptop is not ideal. What if there was some alternative way to do this **completely for free**...
+
+13. Click [here](https://huggingface.co/join) to go to the Hugging Face sign-up page and make an account.
+
+14. After making an account and logging in, click the `+ New` button on the left of the website and choose `Space` from the dropdown.
+
+15. In the `Create a new Space` tab, choose a `Space name` for the app, choose a license (preferably `MIT`), among the `Space SDKs` choose `Gradio`, and finally, click `Create Space` (a programmatic alternative is sketched after this file).
+
+16. At the bottom of the page, click the `Create` hyperlink in the `(Hint: Create the app.py file (...))` text.
+
+17. Name the file `app.py`, copy-paste the `app.py` code from this repository into it, and commit the file.
+
+## Workshop Organizers
+
+* [André Pedersen](https://github.com/andreped), Apps, Sopra Steria
+* [Tor-Arne Schmidt Nordmo](https://uit.no/ansatte/person?p_document_id=581687), IFI, UiT: The Arctic University of Norway
+
+## License
+
+The code in this repository is released under the [MIT license](https://github.com/andreped/INF1600-ai-workshop).
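
As referenced in step 15 above, creating the Space and uploading `app.py` can also be done programmatically with `huggingface_hub`. This is a minimal sketch under stated assumptions (you are already authenticated, e.g. via `huggingface-cli login`, and `<username>/INF1600-ai-workshop` is a placeholder Space name), not the workshop's prescribed route:

```python
# Minimal sketch: create a Gradio Space and upload app.py programmatically.
# Assumes prior authentication (huggingface-cli login); repo_id is a placeholder.
from huggingface_hub import HfApi

api = HfApi()
api.create_repo(
    repo_id="<username>/INF1600-ai-workshop",  # placeholder; pick your own Space name
    repo_type="space",
    space_sdk="gradio",  # matches the `sdk: gradio` front matter above
)
api.upload_file(
    path_or_fileobj="app.py",
    path_in_repo="app.py",
    repo_id="<username>/INF1600-ai-workshop",
    repo_type="space",
)
```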
app.py
ADDED
@@ -0,0 +1,58 @@
+import re
+import requests
+
+import gradio as gr
+from torch import topk
+from torch.nn.functional import softmax
+from transformers import ViTImageProcessor, ViTForImageClassification
+
+
+def load_label_data():
+    file_url = "https://gist.githubusercontent.com/yrevar/942d3a0ac09ec9e5eb3a/raw/238f720ff059c1f82f368259d1ca4ffa5dd8f9f5/imagenet1000_clsidx_to_labels.txt"
+    response = requests.get(file_url)
+    labels = []
+    pattern = '["\'](.*?)["\']'
+    for line in response.text.split('\n'):
+        try:
+            tmp = re.findall(pattern, line)[0]
+            labels.append(tmp)
+        except IndexError:
+            pass
+    return labels
+
+
+def run_model(image, nb_classes):
+    processor = ViTImageProcessor.from_pretrained('google/vit-base-patch16-224')
+    model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224')
+
+    inputs = processor(images=image, return_tensors="pt")
+    outputs = model(**inputs)
+    outputs = softmax(outputs.logits, dim=1)
+    outputs = topk(outputs, k=nb_classes)
+    return outputs
+
+
+def classify_image(image, labels, nb_classes):
+    top10 = run_model(image, nb_classes=nb_classes)
+    return {labels[top10[1][0][i]]: float(top10[0][0][i]) for i in range(nb_classes)}
+
+
+def main():
+    nb_classes = 10
+    labels = load_label_data()
+    examples = [
+        ['cat.jpg'],
+        ['dog.jpeg'],
+    ]
+
+    # define UI
+    image = gr.Image(height=512)
+    label = gr.Label(num_top_classes=nb_classes)
+    interface = gr.Interface(
+        fn=lambda x: classify_image(x, labels, nb_classes), inputs=image, outputs=label, title='Vision Transformer Image Classifier', examples=examples,
+    )
+    interface.launch(debug=True, share=False, height=600, width=1200)  # by setting share=True you can serve the website for others to access
+
+
+if __name__ == "__main__":
+    main()
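
Since `classify_image` is a plain function, the model can also be exercised without the Gradio UI, which is handy for debugging. A minimal sketch, assuming `app.py` is on the import path and a local `cat.jpg` exists next to it:

```python
# Minimal sketch: call the model directly, bypassing the Gradio interface.
# Assumes app.py is importable and cat.jpg exists in the working directory.
from PIL import Image

from app import classify_image, load_label_data

labels = load_label_data()
image = Image.open("cat.jpg")
predictions = classify_image(image, labels, nb_classes=10)
print(predictions)  # {label: probability} for the top-10 ImageNet classes
```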
requirements.txt
ADDED
@@ -0,0 +1,3 @@
+gradio
+torch
+transformers
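
One caveat: `app.py` also imports `requests`, which is not listed here; in practice it is pulled in transitively by `transformers`, but pinning it explicitly would be more robust. A quick, illustrative way to check that every module `app.py` needs resolves inside the fresh virtual environment:

```python
# Minimal sketch: verify that all modules app.py imports are installed.
for module in ("re", "requests", "gradio", "torch", "transformers"):
    __import__(module)
print("all imports resolved")
```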