---
title: README
emoji: 🌍
colorFrom: indigo
colorTo: indigo
sdk: static
pinned: false
---

# ZeroGPU Spaces

<div style="background-color: rgb(224 224 224);color: rgb(0 0 0);border-radius: 8px;padding: 0.5rem 1rem;">
  <span style="font-weight: 600;">Want to use ZeroGPU on your Spaces?</span> Use the "Request to join this org" button to join the waitlist.
</div>

<img src="https://cdn-uploads.huggingface.co/production/uploads/5f17f0a0925b9863e28ad517/naVZI-v41zNxmGlhEhGDJ.gif" style="width: 100%; max-width:700px"/>

*ZeroGPU* is a new kind of hardware for Spaces.

It has two goals:
- Provide **free GPU access** for Spaces
- Allow Spaces to run on **multiple GPUs**

This is achieved by making Spaces efficiently hold and release GPUs as needed
(as opposed to a classical GPU Space that holds exactly one GPU at any point in time)

ZeroGPU uses _Nvidia A100_ GPU devices under the hood (40GB of vRAM is available for each workload)


# Compatibility

*ZeroGPU* Spaces should mostly be compatible with any PyTorch-based GPU Space.<br>
Compatibility is best with high-level HF libraries like `transformers` or `diffusers`<br>
That said, ZeroGPU Spaces are not as broadly compatible as classical GPU Spaces and you might still encounter unexpected bugs

Also, for now, ZeroGPU Spaces only work with the **Gradio SDK**

Supported versions:
- Gradio: 4+
- PyTorch: All versions from `2.0.0` to `2.2.0`
- Python: `3.10.13`
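
For reference, here is a hypothetical sketch of how these constraints could translate into a Space's dependency pins (the exact versions below are illustrative, not prescribed here): the Gradio version is set via `sdk_version` in the Space's README front matter, while extra Python packages go in `requirements.txt`

```
# requirements.txt: illustrative pins within the supported ranges above
torch==2.1.2
diffusers
transformers
```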

# Usage

To make your Space work with ZeroGPU, you need to **decorate** the Python functions that actually require a GPU with `@spaces.GPU`<br>
While a decorated function is running, the Space is attributed a GPU, and it releases it once the function completes.<br>
Here is a practical example:

```diff
+import spaces
import gradio as gr
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(...)
pipe.to('cuda')

+@spaces.GPU
def generate(prompt):
    return pipe(prompt).images

gr.Interface(
    fn=generate,
    inputs=gr.Text(),
    outputs=gr.Gallery(),
).launch()
```

1. We first `import spaces` (importing it first might prevent some issues but is not mandatory)
2. Then we decorate the `generate` function by adding a `@spaces.GPU` line before its definition

Note that `@spaces.GPU` is effect-free and can be safely used in non-ZeroGPU environments
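
As a minimal illustration (the tiny `add` function below is hypothetical, not part of this Space), a decorated function still behaves like a regular Python function when no ZeroGPU hardware is present:

```python
import spaces

@spaces.GPU  # no-op outside ZeroGPU environments
def add(a: int, b: int) -> int:
    # On ZeroGPU, a GPU is attributed for the duration of the call;
    # elsewhere the function simply runs as-is
    return a + b

print(add(1, 2))  # 3
```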

## Duration

If you expect your GPU function to take more than __60s__, you need to specify a `duration` param in the decorator, like:

```python
@spaces.GPU(duration=120)
def generate(prompt):
    return pipe(prompt).images
```

It will set the maximum duration of your function call to 120s.

You can also specify a duration if you know that your function will take far less than the 60s default.

The lower the duration, the higher the priority your Space visitors will have in the queue
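
For example, if you know your generation reliably finishes well under a minute, a shorter cap (the `30` below is purely illustrative) keeps your visitors closer to the front of the queue:

```python
@spaces.GPU(duration=30)
def generate(prompt):
    return pipe(prompt).images
```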

# Early access

Feel free to join this organization if you want to try ZeroGPU as a Space author. ✋ We should accept you shortly after checking your HF profile

<img src="https://cdn-uploads.huggingface.co/production/uploads/5f17f0a0925b9863e28ad517/cAlvAOu9QC547zrmRVpS5.gif" style="width:100%;"/>