Spaces: Running on Zero
Commit · 59f0768 ("updating info")
Parent: c3af266
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-title:
+title: stable-melodyflow
 emoji: 🟢
 colorFrom: gray
 colorTo: blue
app.py CHANGED
@@ -343,6 +343,43 @@ with gr.Blocks(title="🎵 Stable Audio Loop Generator") as iface:
     gr.Markdown("# 🎵 Stable Audio Loop Generator")
     gr.Markdown("**Generate synchronized drum and instrument loops with stable-audio-open-small, then transform with MelodyFlow!**")
 
+    # ========== MODELS & PROJECT INFO ==========
+    with gr.Accordion("About the Models & Project", open=False):
+
+        with gr.Accordion("🎵 stable-audio-open-small", open=False):
+            gr.Markdown("""
+            **stable-audio-open-small** is an incredibly fast model from Zach and friends at Stability AI. It can generate 12 seconds of audio in under a second, which opens up a lot of very interesting kinds of UX.
+
+            **Note about generation speed in this ZeroGPU space:** You'll notice generation times are a little slower here than if you were to run the model on a local GPU. That seems to just be a consequence of how ZeroGPU spaces work... let me know if there's a way to keep the model loaded in a ZeroGPU space!
+
+            **Links:**
+            - 🤗 [Model on HuggingFace](https://huggingface.co/stabilityai/stable-audio-open-small)
+            - 🐳 [Docker API Implementation](https://github.com/betweentwomidnights/stable-audio-api)
+            """)
+
+        with gr.Accordion("🎛️ MelodyFlow", open=False):
+            gr.Markdown("""
+            **MelodyFlow** is a model by Meta that uses regularized latent inversion to transform input audio.
+
+            It's not officially part of the audiocraft repo yet, but we use it as a Docker container in the backend for gary4live.
+
+            **Links:**
+            - 🤗 [MelodyFlow Space](https://huggingface.co/spaces/Facebook/MelodyFlow)
+            - 🐳 [Standalone API Implementation](https://github.com/betweentwomidnights/melodyflow)
+            """)
+
+        with gr.Accordion("🎹 gary4live Project", open=False):
+            gr.Markdown("""
+            **gary4live** is a free, open-source project that uses these models, along with MusicGen, inside of Ableton Live. I run a backend myself so that we can all experiment with it, but you can also spin the backend up locally using docker-compose with our repo.
+
+            **Project Links:**
+            - 🎵 [Main Project Repo](https://github.com/betweentwomidnights/gary4live)
+            - 🖥️ [Backend Implementation](https://github.com/betweentwomidnights/gary-backend-combined)
+
+            **Installers:**
+            - 💿 [PC & Mac Installers on Gumroad](https://thepatch.gumroad.com/l/gary4live)
+            """)
+
     with gr.Accordion("How This Works", open=False):
         gr.Markdown("""
         **Workflow:**
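The gary4live text above mentions spinning the backend up locally with docker-compose. A minimal sketch of what that could look like — the service names, build paths, and ports here are assumptions for illustration, not taken from the actual repo; the real docker-compose.yml lives in gary-backend-combined:

```yaml
# Hypothetical sketch only: service names, paths, and ports are assumptions.
# See https://github.com/betweentwomidnights/gary-backend-combined for the real file.
services:
  stable-audio:
    build: ./stable-audio-api   # stable-audio-open-small generation API (assumed path)
    ports:
      - "8005:8005"
  melodyflow:
    build: ./melodyflow         # MelodyFlow transform service (assumed path)
    ports:
      - "8002:8002"
```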