silait committed
Commit 6bd5406 · verified · 1 Parent(s): 0e91eda

Update index.html

Files changed (1)
  1. index.html +311 -182
index.html CHANGED
@@ -1,186 +1,315 @@
  <!DOCTYPE html>
  <html>
- <head>
- <title>WebSD | Home</title>
- <meta charset="utf-8">
- <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
- <link rel="stylesheet"
- href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta.2/css/bootstrap.min.css"
- integrity="sha384-PsH8R72JQ3SOdhVi3uxftmaW6Vc51MKb0q5P2rRUpPvrszuE4W1povHYgTpBfshb"
- crossorigin="anonymous">
- <link rel="stylesheet"
- href="https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css">
- <link rel="stylesheet" href="/assets/css/main.css">
- <link rel="stylesheet" href="/assets/css/group.css">
- <!-- <link rel="stylesheet" href="/css/table.css"> -->
- <link rel="shortcut icon" href="/assets/img/logo/mlc-favicon.png">
- <meta http-equiv="origin-trial" content="Agx76XA0ITxMPF0Z8rbbcMllwuxsyp9qdtQaXlLqu1JUrdHB6FPonuyIKJ3CsBREUkeioJck4nn3KO0c0kkwqAMAAABJeyJvcmlnaW4iOiJodHRwOi8vbG9jYWxob3N0Ojg4ODgiLCJmZWF0dXJlIjoiV2ViR1BVIiwiZXhwaXJ5IjoxNjkxNzExOTk5fQ==">
- <meta http-equiv="origin-trial" content="AnmwqQ1dtYDQTYkZ5iMtHdINCaxjE94uWQBKp2yOz1wPTcjSRtOHUGQG+r2BxsEuM0qhxTVnuTjyh31HgTeA8gsAAABZeyJvcmlnaW4iOiJodHRwczovL21sYy5haTo0NDMiLCJmZWF0dXJlIjoiV2ViR1BVIiwiZXhwaXJ5IjoxNjkxNzExOTk5LCJpc1N1YmRvbWFpbiI6dHJ1ZX0=">
- <script src="dist/tvmjs_runtime.wasi.js"></script>
- <script src="dist/tvmjs.bundle.js"></script>
-
- </head>
- <body>
- <div class="container">
- <!-- This is a bit nasty, but it basically says be a column first, and on larger screens be a spaced out row -->
- <div class="header d-flex
- flex-column
- flex-md-row justify-content-md-between">
- <a href="/" id="navtitle">
- <img src="/assets/img/logo/mlc-logo-with-text-landscape.svg" height="70px"
- alt="MLC" id="logo">
- </a>
- <ul id="topbar" class="nav nav-pills justify-content-center">
-
-
-
-
-
-
-
-
-
- <li class="nav-item">
-
- <a class="nav-link active"
- href="/">
- Home
- </a>
-
- </li>
-
-
-
-
-
-
-
-
-
- <li class="nav-item">
-
- <a class="nav-link "
- href="https://github.com/mlc-ai/web-stable-diffusion">
- Github
- </a>
-
- </li>
-
-
-
- </ul>
  </div>
-
-
-
-
- <!-- Schedule -->
-
- <h1 id="web-stable-diffusion">Web Stable Diffusion</h1>
-
- <p>This project brings stable diffusion models to web browsers. <strong>Everything runs inside the browser, with no need for server support.</strong> To our knowledge, this is the world’s first stable diffusion running entirely in the browser. Please check out our <a href="https://github.com/mlc-ai/web-stable-diffusion">GitHub repo</a> to see how we did it. There is also a <a href="#text-to-image-generation-demo">demo</a> which you can try out.</p>
-
- <p><img src="img/fig/browser-screenshot.png" alt="Browser screenshot" width="100%" /></p>
-
- <p>We have been seeing amazing progress in AI models recently. Thanks to open-source efforts, developers can now easily compose open-source models to accomplish amazing tasks. Stable diffusion enables the automatic creation of photorealistic images, as well as images in various styles, from text input. These models are usually big and compute-heavy, which means web applications built on them have to pipe all computation requests through to (GPU) servers. Additionally, most of the workloads have to run on the specific kinds of GPUs for which popular deep-learning frameworks are readily available.</p>
-
- <p>This project takes a step toward changing that status quo and bringing more diversity to the ecosystem. There are many reasons to move some (or all) of the computation to the client side: it can cut costs for the service provider and improve personalization and privacy protection. Personal computers (and even mobile devices) are developing in a direction that enables such possibilities; the client side is getting quite powerful. For example, the latest MacBook Pro can have up to 96GB of unified RAM to store the model weights, and a reasonably powerful GPU to run many of the workloads.</p>
-
- <p>Wouldn’t it be fun to bring the ML models directly to the client, have the user open a browser tab, and instantly run the stable diffusion models in the browser? This project provides the first affirmative answer to this question.</p>
-
- <h2 id="text-to-image-generation-demo">Text to Image Generation Demo</h2>
-
- <p>WebGPU is not yet fully stable, nor have such large-scale AI models ever run on top of it, so we are testing the limits here. It may not work in your environment. So far we have only tested it on Macs with M1/M2 GPUs in Chrome Canary (a nightly build of Chrome), because WebGPU is quite new. We have tested on Windows, where it does not work at the moment, likely due to driver issues. We anticipate the support will broaden as WebGPU matures. Please check out the <a href="#instructions">use instructions</a> and <a href="#notes">notes</a> below.</p>
-
- <h3 id="instructions">Instructions</h3>
-
- <p>If you have a Mac with Apple silicon, here are the instructions for running stable diffusion locally in your browser:</p>
-
- <ul>
- <li>Install <a href="https://www.google.com/chrome/canary/">Chrome Canary</a>, a developer version of Chrome that enables the use of WebGPU.</li>
- <li>Launch Chrome Canary. <strong>We recommend launching it from the terminal with the following command:</strong>
- <div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/Applications/Google<span class="se">\ </span>Chrome<span class="se">\ </span>Canary.app/Contents/MacOS/Google<span class="se">\ </span>Chrome<span class="se">\ </span>Canary <span class="nt">--enable-dawn-features</span><span class="o">=</span>disable_robustness
- </code></pre></div> </div>
- <p>This command turns off a robustness check in Chrome Canary that significantly slows down image generation. It is not required, but we strongly recommend starting Chrome with this command.</p>
- </li>
- <li>Enter your prompt, click “Generate”, and we are ready to go! Image generation will start after the model parameters are downloaded and fetched into the local cache. The download may take a few minutes, but only on the first run; subsequent refreshes and runs will be faster.</li>
- <li>Feel free to enter different prompts, as well as negative prompts, to generate the image you want.</li>
- <li>We provide an option to render images for the intermediate steps of the UNet stage. Select “Run VAE every two UNet steps after step 10” for “Render intermediate steps” and click “Generate” again, and you will see how an image gets generated along the way.</li>
- </ul>
-
- <h3 id="demo">Demo</h3>
-
- <script>
- var tvmjsGlobalEnv = tvmjsGlobalEnv || {};
- </script>
-
- <script type="module">
- import init, { TokenizerWasm } from "./dist/tokenizers-wasm/tokenizers_wasm.js";
-
- var initialized = false;
- async function getTokenizer(name) {
- if (!initialized) {
- await init();
- }
- const jsonText = await (await fetch("https://huggingface.co/" + name + "/raw/main/tokenizer.json")).text();
- return new TokenizerWasm(jsonText);
- }
-
- tvmjsGlobalEnv.getTokenizer = getTokenizer;
- </script>
-
- <script src="dist/stable_diffusion.js"></script>
-
- <div>
- Input prompt: <input name="inputPrompt" id="inputPrompt" type="text" value="A photo of an astronaut riding a horse on mars" size="77" /> <br />
- Negative prompt (optional): <input name="negativePrompt" id="negativePrompt" type="text" value="" size="77" />
- </div>
-
- <div>
- Select scheduler -
- <select name="scheduler" id="schedulerId">
- <option value="0">Multi-step DPM Solver (20 steps)</option>
- <option value="1">PNDM (50 steps)</option>
- </select>
-
- <br />
-
- Render intermediate steps (may slow down execution) -
- <select name="vae-cycle" id="vaeCycle">
- <option value="-1">No</option>
- <option value="2">Run VAE every two UNet steps after step 10</option>
- </select>
-
- <div id="progress">
- <label id="gpu-tracker-label"></label><br />
- <label id="progress-tracker-label"></label><br />
- <progress id="progress-tracker-progress" max="100" value="100"> </progress>
- </div>
- <button onclick="tvmjsGlobalEnv.asyncOnGenerate()">Generate</button>
- </div>
-
- <div>
- <canvas id="canvas" width="512" height="512"></canvas>
- </div>
- <div id="log"></div>
-
- <h3 id="notes">Notes</h3>
-
- <ul>
- <li>The WebGPU spec already comes with FP16 support, but the implementation does not yet support this feature. As a result, the memory consumption of running the demo is about 7GB. On an Apple silicon Mac with only 8GB of unified memory, it may take longer (a few minutes) to generate an image. The demo may also work on Macs with an AMD GPU.</li>
- <li>Please check out our <a href="https://github.com/mlc-ai/web-stable-diffusion">GitHub repo</a> for running the same shader flow locally on your GPU device through the native driver. Right now there are still gaps (e.g., without the command-line flag above, Chrome’s WebGPU implementation inserts bounds clamps for every array index access, so that <code class="language-plaintext highlighter-rouge">a[i]</code> becomes <code class="language-plaintext highlighter-rouge">a[min(i, a.size)]</code>, and these are not optimized out by the downstream shader compilers), but we believe it is feasible to close such gaps as WebGPU dispatches to these native drivers.</li>
- </ul>
-
- <h2 id="disclaimer">Disclaimer</h2>
-
- <p>This demo site is for research purposes only. Please conform to the <a href="https://huggingface.co/runwayml/stable-diffusion-v1-5#uses">permitted uses of stable diffusion models</a>.</p>
-
-
- </div> <!-- /container -->
-
- <!-- Support retina images. -->
- <script type="text/javascript"
- src="/assets/js/srcset-polyfill.js"></script>
- </body>
-
  </html>
 
  <!DOCTYPE html>
  <html>
+ <head>
+ <title>WebSD | Home</title>
+ <meta charset="utf-8" />
+ <meta
+ name="viewport"
+ content="width=device-width, initial-scale=1, shrink-to-fit=no"
+ />
+ <link
+ rel="stylesheet"
+ href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta.2/css/bootstrap.min.css"
+ integrity="sha384-PsH8R72JQ3SOdhVi3uxftmaW6Vc51MKb0q5P2rRUpPvrszuE4W1povHYgTpBfshb"
+ crossorigin="anonymous"
+ />
+ <link
+ rel="stylesheet"
+ href="https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css"
+ />
+ <link rel="stylesheet" href="/assets/css/main.css" />
+ <link rel="stylesheet" href="/assets/css/group.css" />
+ <!-- <link rel="stylesheet" href="/css/table.css"> -->
+ <!-- <link rel="shortcut icon" href="/assets/img/logo/mlc-favicon.png" /> -->
+ <meta
+ http-equiv="origin-trial"
+ content="Agx76XA0ITxMPF0Z8rbbcMllwuxsyp9qdtQaXlLqu1JUrdHB6FPonuyIKJ3CsBREUkeioJck4nn3KO0c0kkwqAMAAABJeyJvcmlnaW4iOiJodHRwOi8vbG9jYWxob3N0Ojg4ODgiLCJmZWF0dXJlIjoiV2ViR1BVIiwiZXhwaXJ5IjoxNjkxNzExOTk5fQ=="
+ />
+ <meta
+ http-equiv="origin-trial"
+ content="AnmwqQ1dtYDQTYkZ5iMtHdINCaxjE94uWQBKp2yOz1wPTcjSRtOHUGQG+r2BxsEuM0qhxTVnuTjyh31HgTeA8gsAAABZeyJvcmlnaW4iOiJodHRwczovL21sYy5haTo0NDMiLCJmZWF0dXJlIjoiV2ViR1BVIiwiZXhwaXJ5IjoxNjkxNzExOTk5LCJpc1N1YmRvbWFpbiI6dHJ1ZX0="
+ />
+ <script src="dist/tvmjs_runtime.wasi.js"></script>
+ <script src="dist/tvmjs.bundle.js"></script>
+ </head>
+ <body>
+ <div class="container">
+ <!-- This is a bit nasty, but it basically says be a column first, and on larger screens be a spaced out row -->
+ <div
+ class="header d-flex flex-column flex-md-row justify-content-md-between"
+ >
+ <a href="/" id="navtitle">
+ <img
+ src="/assets/img/logo/mlc-logo-with-text-landscape.svg"
+ height="70px"
+ alt="MLC"
+ id="logo"
+ />
+ </a>
+ <ul id="topbar" class="nav nav-pills justify-content-center">
+ <li class="nav-item">
+ <a class="nav-link active" href="/"> Home </a>
+ </li>
+
+ <li class="nav-item">
+ <a
+ class="nav-link"
+ href="https://github.com/mlc-ai/web-stable-diffusion"
+ >
+ Github
+ </a>
+ </li>
+ </ul>
+ </div>
+
+
+ <!-- Schedule -->
+
+ <h1 id="web-stable-diffusion">Web Stable Diffusion</h1>
+
+ <h3 id="demo">Demo</h3>
+
+ <script>
+ // Shared global namespace used by the page scripts and dist/stable_diffusion.js.
+ var tvmjsGlobalEnv = tvmjsGlobalEnv || {};
+ </script>
+
+ <script type="module">
+ import init, {
+ TokenizerWasm,
+ } from "./dist/tokenizers-wasm/tokenizers_wasm.js";
+
+ // Initialize the tokenizers WASM module at most once.
+ var initialized = false;
+ async function getTokenizer(name) {
+ if (!initialized) {
+ await init();
+ initialized = true;
+ }
+ // Fetch the tokenizer.json of the given Hugging Face model and
+ // build a WASM-backed tokenizer from it.
+ const jsonText = await (
+ await fetch(
+ "https://huggingface.co/" + name + "/raw/main/tokenizer.json"
+ )
+ ).text();
+ return new TokenizerWasm(jsonText);
+ }
+
+ tvmjsGlobalEnv.getTokenizer = getTokenizer;
+ </script>
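For reference, the module above lazily initializes the tokenizers WASM bindings, downloads a model's tokenizer.json from Hugging Face, and wraps it in TokenizerWasm. A minimal usage sketch (the model id is illustrative; the demo's actual pipeline in dist/stable_diffusion.js picks its own tokenizer):

    // Hypothetical usage of the helper registered above.
    const tokenizer = await tvmjsGlobalEnv.getTokenizer(
      "openai/clip-vit-large-patch14" // illustrative model id, not necessarily the demo's
    );
    // The returned TokenizerWasm is then used to encode prompts; the exact
    // encode API depends on the tokenizers-wasm version bundled in dist/.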
+
+ <script src="dist/stable_diffusion.js"></script>
+
+ <div>
+ Input prompt:
+ <input
+ name="inputPrompt"
+ id="inputPrompt"
+ type="text"
+ value="A photo of an astronaut riding a horse on mars"
+ size="77"
+ />
+ <br />
+ Negative prompt (optional):
+ <input
+ name="negativePrompt"
+ id="negativePrompt"
+ type="text"
+ value=""
+ size="77"
+ />
+ </div>
+
+ <div>
+ Select scheduler -
+ <select name="scheduler" id="schedulerId">
+ <option value="0">Multi-step DPM Solver (20 steps)</option>
+ <option value="1">PNDM (50 steps)</option>
+ </select>
+
+ <br />
+
+ Render intermediate steps (may slow down execution) -
+ <select name="vae-cycle" id="vaeCycle">
+ <option value="-1">No</option>
+ <option value="2">Run VAE every two UNet steps after step 10</option>
+ </select>
+
+ <div id="progress">
+ <label id="gpu-tracker-label"></label><br />
+ <label id="progress-tracker-label"></label><br />
+ <progress
+ id="progress-tracker-progress"
+ max="100"
+ value="100"
+ ></progress>
+ </div>
+ <button onclick="tvmjsGlobalEnv.asyncOnGenerate()">Generate</button>
+ </div>
+
+ <div>
+ <canvas id="canvas" width="512" height="512"></canvas>
+ </div>
+ <div id="log"></div>
+
+ <p>
+ This project brings stable diffusion models to web browsers.
+ <strong>Everything runs inside the browser, with no need for server
+ support.</strong>
+ To our knowledge, this is the world’s first stable diffusion running
+ entirely in the browser. Please check out our
+ <a href="https://github.com/mlc-ai/web-stable-diffusion">GitHub repo</a>
+ to see how we did it. There is also a
+ <a href="#text-to-image-generation-demo">demo</a> which you can try out.
+ </p>
+
+ <p>
+ <img
+ src="img/fig/browser-screenshot.png"
+ alt="Browser screenshot"
+ width="100%"
+ />
+ </p>
+
+ <p>
+ We have been seeing amazing progress in AI models recently. Thanks to
+ open-source efforts, developers can now easily compose open-source
+ models to accomplish amazing tasks. Stable diffusion enables the
+ automatic creation of photorealistic images, as well as images in
+ various styles, from text input. These models are usually big and
+ compute-heavy, which means web applications built on them have to pipe
+ all computation requests through to (GPU) servers. Additionally, most of
+ the workloads have to run on the specific kinds of GPUs for which
+ popular deep-learning frameworks are readily available.
+ </p>
+
+ <p>
+ This project takes a step toward changing that status quo and bringing
+ more diversity to the ecosystem. There are many reasons to move some (or
+ all) of the computation to the client side: it can cut costs for the
+ service provider and improve personalization and privacy protection.
+ Personal computers (and even mobile devices) are developing in a
+ direction that enables such possibilities; the client side is getting
+ quite powerful. For example, the latest MacBook Pro can have up to 96GB
+ of unified RAM to store the model weights, and a reasonably powerful GPU
+ to run many of the workloads.
+ </p>
+
+ <p>
+ Wouldn’t it be fun to bring the ML models directly to the client, have
+ the user open a browser tab, and instantly run the stable diffusion
+ models in the browser? This project provides the first affirmative
+ answer to this question.
+ </p>
+
+ <h2 id="text-to-image-generation-demo">Text to Image Generation Demo</h2>
+
+ <p>
+ WebGPU is not yet fully stable, nor have such large-scale AI models ever
+ run on top of it, so we are testing the limits here. It may not work in
+ your environment. So far we have only tested it on Macs with M1/M2 GPUs
+ in Chrome Canary (a nightly build of Chrome), because WebGPU is quite
+ new. We have tested on Windows, where it does not work at the moment,
+ likely due to driver issues. We anticipate the support will broaden as
+ WebGPU matures. Please check out the
+ <a href="#instructions">use instructions</a> and
+ <a href="#notes">notes</a> below.
+ </p>
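A quick way to see whether your browser exposes WebGPU at all, before trying the demo, is a standard feature-detection check (independent of this page's scripts; it can be run in the DevTools console):

    // Standard WebGPU feature detection, not part of the demo code.
    if (!navigator.gpu) {
      console.log("WebGPU is not available in this browser.");
    } else {
      const adapter = await navigator.gpu.requestAdapter();
      console.log(adapter ? "Got a WebGPU adapter." : "No suitable GPU adapter found.");
    }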
+
+ <h3 id="instructions">Instructions</h3>
+
+ <p>
+ If you have a Mac with Apple silicon, here are the instructions for
+ running stable diffusion locally in your browser:
+ </p>
+
+ <ul>
+ <li>
+ Install
+ <a href="https://www.google.com/chrome/canary/">Chrome Canary</a>, a
+ developer version of Chrome that enables the use of WebGPU.
+ </li>
+ <li>
+ Launch Chrome Canary.
+ <strong>We recommend launching it from the terminal with the following
+ command:</strong>
+ <div class="language-shell highlighter-rouge">
+ <div class="highlight">
+ <pre
+ class="highlight"
+ ><code>/Applications/Google<span class="se">\ </span>Chrome<span class="se">\ </span>Canary.app/Contents/MacOS/Google<span class="se">\ </span>Chrome<span class="se">\ </span>Canary <span class="nt">--enable-dawn-features</span><span class="o">=</span>disable_robustness
+ </code></pre>
  </div>
+ </div>
+ <p>
+ This command turns off a robustness check in Chrome Canary that
+ significantly slows down image generation. It is not required, but we
+ strongly recommend starting Chrome with this command.
+ </p>
+ </li>
+ <li>
+ Enter your prompt, click “Generate”, and we are ready to go! Image
+ generation will start after the model parameters are downloaded and
+ fetched into the local cache. The download may take a few minutes, but
+ only on the first run; subsequent refreshes and runs will be faster
+ (see the cache-inspection sketch after this list).
+ </li>
+ <li>
+ Feel free to enter different prompts, as well as negative prompts, to
+ generate the image you want.
+ </li>
+ <li>
+ We provide an option to render images for the intermediate steps of
+ the UNet stage. Select “Run VAE every two UNet steps after step 10”
+ for “Render intermediate steps” and click “Generate” again, and you
+ will see how an image gets generated along the way.
+ </li>
+ </ul>
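The parameter download mentioned above lands in the browser cache, so it is paid only once. A generic way to inspect what the browser has cached on this origin (the demo's actual cache name lives in dist/stable_diffusion.js and is not assumed here):

    // Generic Cache API inspection; run in the DevTools console.
    const names = await caches.keys();
    console.log("Caches present on this origin:", names);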
+
+
+
+ <h3 id="notes">Notes</h3>
+
+ <ul>
+ <li>
+ The WebGPU spec already comes with FP16 support, but the
+ implementation does not yet support this feature. As a result, the
+ memory consumption of running the demo is about 7GB. On an Apple
+ silicon Mac with only 8GB of unified memory, it may take longer (a few
+ minutes) to generate an image. The demo may also work on Macs with an
+ AMD GPU.
+ </li>
+ <li>
+ Please check out our
+ <a href="https://github.com/mlc-ai/web-stable-diffusion">GitHub repo</a>
+ for running the same shader flow locally on your GPU device through
+ the native driver. Right now there are still gaps (e.g., without the
+ command-line flag above, Chrome’s WebGPU implementation inserts bounds
+ clamps for every array index access, so that
+ <code class="language-plaintext highlighter-rouge">a[i]</code> becomes
+ <code class="language-plaintext highlighter-rouge">a[min(i, a.size)]</code>,
+ and these are not optimized out by the downstream shader compilers;
+ see the illustration after this list), but we believe it is feasible
+ to close such gaps as WebGPU dispatches to these native drivers.
+ </li>
+ </ul>
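To make the clamping note concrete: the transformation effectively adds a bound computation to every indexed access. Expressed in JavaScript for illustration only (real shaders are WGSL, and the exact clamp Chrome emits may differ):

    // Illustration of the robustness transformation described above.
    // An unclamped read...
    function read(a, i) {
      return a[i];
    }
    // ...is effectively rewritten into a clamped read, costing an extra
    // min() on every access:
    function readClamped(a, i) {
      return a[Math.min(i, a.length - 1)];
    }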
+
+ <h2 id="disclaimer">Disclaimer</h2>
+
+ <p>
+ This demo site is for research purposes only. Please conform to the
+ <a href="https://huggingface.co/runwayml/stable-diffusion-v1-5#uses">permitted uses of stable diffusion models</a>.
+ </p>
+ </div>
+ <!-- /container -->
+
+ <!-- Support retina images. -->
+ <script type="text/javascript" src="/assets/js/srcset-polyfill.js"></script>
+ </body>
  </html>