---
license: mit
---

# Dream Machine API

**Model Page:** [Dream Machine API](https://piapi.ai/dream-machine-api)

This model card illustrates the steps to use the Dream Machine API endpoint.

You can also check out other model cards:

- [Midjourney API](https://huggingface.co/PiAPI/Midjourney-API)
- [Faceswap API](https://huggingface.co/PiAPI/Faceswap-API)
- [Suno API](https://huggingface.co/PiAPI/Suno-API)

**Model Information**

Dream Machine, created by Luma Labs, is an advanced AI model that swiftly produces high-quality, realistic videos from text and images. These videos boast physical accuracy, consistent characters, and naturally impactful shots. Although Luma Labs does not currently provide a Dream Machine API within their Luma API suite, PiAPI has stepped up to develop the unofficial Dream Machine API. This enables developers globally to integrate cutting-edge text-to-video and image-to-video generation into their applications or platforms.
## Usage Steps

Below are the code snippets for calling the Dream Machine API's Video Generation endpoint.

- The programming language is Python

**Create a task ID from the Video Generation endpoint**

```python
import http.client

conn = http.client.HTTPSConnection("api.piapi.ai")

payload = "{\n \"prompt\": \"dog running\",\n \"expand_prompt\": true\n}"

headers = {
    'X-API-Key': "{{x-api-key}}",  # Insert your API key here
    'Content-Type': "application/json",
    'Accept': "application/json"
}

conn.request("POST", "/api/luma/v1/video", payload, headers)

res = conn.getresponse()
data = res.read()

print(data.decode("utf-8"))
```
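If you prefer not to assemble the JSON payload by hand, the same call can also be made with the third-party `requests` library. The snippet below is a minimal sketch of the request shown above, not an official client; `YOUR_API_KEY` is a placeholder for your own key.

```python
# Sketch of the same POST request using the `requests` library (pip install requests).
import requests

url = "https://api.piapi.ai/api/luma/v1/video"
headers = {
    "X-API-Key": "YOUR_API_KEY",         # placeholder - insert your API key here
    "Content-Type": "application/json",
    "Accept": "application/json",
}
payload = {
    "prompt": "dog running",
    "expand_prompt": True,
}

response = requests.post(url, json=payload, headers=headers)
print(response.json())  # the task_id appears under data.task_id, as shown in the next step
```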
**Retrieve the task ID**

```json
{
  "code": 200,
  "data": {
    "task_id": "6c4*****************aaaa"
  },
  "message": "success"
}
```

Record the `task_id` returned in your response terminal; you will need it for the fetch endpoint below.
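If you are handling the response in code rather than reading it from the terminal, the `task_id` can be pulled out with the standard `json` module. This is a small sketch based on the response shape shown above; `data` refers to the raw bytes returned by `res.read()` in the create-task snippet.

```python
import json

# Continuing from the create-task snippet: `data` is the raw bytes from res.read().
result = json.loads(data.decode("utf-8"))

if result.get("code") == 200:
    task_id = result["data"]["task_id"]  # e.g. "6c4*****************aaaa"
    print("task_id:", task_id)
else:
    print("request failed:", result.get("message"))
```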
**Insert the Video Generation task ID into the fetch endpoint**

```python
import http.client

conn = http.client.HTTPSConnection("api.piapi.ai")

headers = { 'Accept': "application/json" }

# Replace "task_id" in the path with your task ID
conn.request("GET", "/api/luma/v1/video/task_id", headers=headers)

res = conn.getresponse()
data = res.read()

print(data.decode("utf-8"))
```
**For fetch endpoint responses** - Refer to our [documentation](https://piapi.ai/docs/dream-machine/get-video) for more detailed information.
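Since video generation takes some time, a common pattern is to poll the fetch endpoint until the task finishes. The sketch below illustrates one way to do that with `http.client`; the `status` field and its values (`completed`, `failed`) are assumptions used for illustration only, so check the fetch-endpoint documentation linked above for the actual response fields.

```python
import http.client
import json
import time

def fetch_task(task_id: str) -> dict:
    """Call the fetch endpoint once and return the decoded JSON response."""
    conn = http.client.HTTPSConnection("api.piapi.ai")
    conn.request("GET", f"/api/luma/v1/video/{task_id}", headers={"Accept": "application/json"})
    res = conn.getresponse()
    body = json.loads(res.read().decode("utf-8"))
    conn.close()
    return body

task_id = "your task ID here"  # the task_id recorded earlier

# Hypothetical polling loop: "status", "completed", and "failed" are assumed
# field names/values -- consult the documentation for the real response schema.
while True:
    result = fetch_task(task_id)
    status = result.get("data", {}).get("status")
    if status in ("completed", "failed"):
        print(result)
        break
    time.sleep(10)  # wait a bit before polling again
```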
<br>

## Contact us

Contact us at <a href="mailto:[email protected]">[email protected]</a> for any inquiries.

<br>