Commit c61a719 by Datasculptor ("Update README.md"), parent d503546

README.md CHANGED
@@ -1,10 +1,87 @@
 ---
 title: README
-emoji:
-colorFrom:
+emoji: ๐
+colorFrom: indigo
 colorTo: indigo
 sdk: static
 pinned: false
 ---
 
-
+<div class="grid lg:grid-cols-2 gap-x-4">
+<h2 class="lg:col-span-2">This organization invites participants to show off AI art on Hugging Face as a Gradio web demo</h2>
+<h3 class="my-8 lg:col-span-2" style="font-size:20px; font-weight:bold">Join the organization by clicking <a href="https://huggingface.co/login?next=%2FMLearningAI" style="text-decoration: underline" target="_blank">here</a></h3>
+<h4 class="my-8 lg:col-span-2" style="font-size:20px; font-weight:bold">Hugging Face Gradio MLearningAI
+</h4>
+<p class="lg:col-span-2">
+The MLearningAI community is accepting Gradio AI art demo submissions, and you can submit demos for multiple projects. Find a tutorial on getting started with Gradio on Hugging Face <a href="https://huggingface.co/blog/gradio-spaces" style="text-decoration: underline" target="_blank">here</a>, and get started with the new Gradio Blocks API <a href="https://gradio.app/introduction_to_blocks/" style="text-decoration: underline" target="_blank">here</a>.</p>
+
+<p class="lg:col-span-2">
+In this tutorial, we demonstrate how to showcase your demo with an easy-to-use web interface using the Gradio Python library and host it on Hugging Face Spaces so that others can easily find and try out your demos. Also see <a href="https://gradio.app/introduction_to_blocks/" style="text-decoration: underline" target="_blank">https://gradio.app/introduction_to_blocks/</a> for a more flexible way to build Gradio demos.
+</p>
+<h3 class="my-8 lg:col-span-2" style="font-size:20px; font-weight:bold">๐ Create a Gradio Demo from your Model
+</h3>
+<p class="lg:col-span-2">
+The first step is to create a web demo from your model. As an example, we will create a demo from an image classification model (called <code>model</code>) which we will upload to Spaces. The full code for steps 1-4 can be found in this <a href="https://colab.research.google.com/drive/1S6seNoJuU7_-hBX5KbXQV4Fb_bbqdPBk?usp=sharing" style="text-decoration: underline" target="_blank">colab notebook</a>.
+</p><br />
+
+<h3 class="my-8 lg:col-span-2" style="font-size:20px; font-weight:bold">1. Install the gradio library
+</h3>
+<p class="lg:col-span-2">
+All you need to do is run this in the terminal: <code>pip install gradio</code>
+</p>
+<br />
+<h3 class="my-8 lg:col-span-2" style="font-size:20px; font-weight:bold">2. Define a function in your Python code that performs inference with your model on a data point and returns the prediction
+</h3>
+<p class="lg:col-span-2">
+Here we define our image classification model's prediction function in PyTorch (any framework, like TensorFlow, scikit-learn, JAX, or plain Python, will work as well):
+</p>
+<pre>
+<code>def predict(inp):
+    inp = Image.fromarray(inp.astype('uint8'), 'RGB')
+    inp = transforms.ToTensor()(inp).unsqueeze(0)
+    with torch.no_grad():
+        prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)
+    return {labels[i]: float(prediction[i]) for i in range(1000)}
+</code>
+</pre>
+
+<h3 class="my-8 lg:col-span-2" style="font-size:20px; font-weight:bold">3. Then create a Gradio Interface using the function and the appropriate input and output types
+</h3>
+<p class="lg:col-span-2">
+For the image classification model from Step 2, it would look like this:
+</p>
+<pre>
+<code>inputs = gr.inputs.Image()
+outputs = gr.outputs.Label(num_top_classes=3)
+io = gr.Interface(fn=predict, inputs=inputs, outputs=outputs)
+</code>
+</pre>
+<p class="lg:col-span-2">
+If you need help creating a Gradio Interface for your model, check out the Gradio Getting Started guide.
+</p>
+
+<h3 class="my-8 lg:col-span-2" style="font-size:20px; font-weight:bold">4. Then launch() your Interface to confirm that it runs correctly locally (or wherever you are running Python)
+</h3>
+<pre>
+<code>io.launch()
+</code>
+</pre>
+<p class="lg:col-span-2">
+You should see a web interface like the following, where you can drag and drop your data points and see the predictions:
+</p>
+<img class="lg:col-span-2" src="https://i.imgur.com/1hsIgJJ.png" alt="Gradio Interface" style="margin:10px">
+</div>
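One detail of the tutorial's <code>predict</code> function worth spelling out: Gradio's <code>Label</code> output expects a dict mapping each class name to a float probability, which is why the function ends with a dict comprehension over the softmax output. Below is a minimal, framework-free sketch of that return contract, with a hand-rolled softmax in place of <code>torch.nn.functional.softmax</code>; the three labels are hypothetical stand-ins for the tutorial's 1000 ImageNet classes.

```python
import math

# Hypothetical stand-in labels; the tutorial's README uses the 1000 ImageNet classes.
labels = ["cat", "dog", "bird"]

def softmax(logits):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_from_logits(logits):
    # Same return shape as the tutorial's predict(): {label: probability}.
    probs = softmax(logits)
    return {labels[i]: float(probs[i]) for i in range(len(labels))}

result = predict_from_logits([2.0, 1.0, 0.1])
```

Because the values are softmax outputs they sum to 1, and an output component like <code>gr.outputs.Label(num_top_classes=3)</code> simply sorts this dict and displays the highest-probability entries.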