---
license: mit
---

# Dream Machine API

**Model Page:** [Dream Machine API](https://piapi.ai/dream-machine-api)

This model card illustrates how to use the Dream Machine API endpoint.
You can also check out our other model cards:

- [Midjourney API](https://huggingface.co/PiAPI/Midjourney-API)
- [Faceswap API](https://huggingface.co/PiAPI/Faceswap-API)
- [Suno API](https://huggingface.co/PiAPI/Suno-API)

**Model Information**

Dream Machine, created by Luma Labs, is an advanced AI model that swiftly produces high-quality, realistic videos from text and images. These videos boast physical accuracy, consistent characters, and naturally impactful shots. Although Luma Labs doesn't currently provide a Dream Machine API within their Luma API suite, PiAPI has stepped up to develop an unofficial Dream Machine API. This enables developers globally to integrate cutting-edge text-to-video and image-to-video generation into their applications or platforms.

## Usage Steps

Below are code snippets showing how to use the Dream Machine API's Video Generation endpoint.
- The programming language is Python

**Create a task ID from the Video Generation endpoint**

<pre><code class="language-python">
import http.client

conn = http.client.HTTPSConnection("api.piapi.ai")

payload = "{\n  \"prompt\": \"dog running\",\n  \"expand_prompt\": true\n}"

headers = {
    'Content-Type': "application/json",
    'Accept': "application/json"
}

conn.request("POST", "/api/luma/v1/video", payload, headers)

res = conn.getresponse()
data = res.read()

print(data.decode("utf-8"))
</code></pre>



**Retrieve the task ID**

<pre><code class="language-json">
{
    "code": 200,
    "data": {
        "task_id": "6c4*****************aaaa"    // Record the task ID returned in your response terminal
    },
    "message": "success"
}
</code></pre>
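To pull the task ID out of the response programmatically, you can parse the JSON body with the standard library. This is a minimal sketch; the response structure mirrors the sample above, and the `task_id` value here is a placeholder, not a real ID.

```python
import json

# Sample response body shaped like the one above; the task_id value
# is a placeholder, not a real ID.
raw = '{"code": 200, "data": {"task_id": "example-task-id"}, "message": "success"}'

resp = json.loads(raw)
assert resp["code"] == 200

task_id = resp["data"]["task_id"]
print(task_id)  # pass this ID to the fetch endpoint in the next step
```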



**Insert the Video Generation task ID into the fetch endpoint**

<pre><code class="language-python">
import http.client

conn = http.client.HTTPSConnection("api.piapi.ai")

headers = {
    'Accept': "application/json"
}

# Replace "task_id" in the path below with your task ID
conn.request("GET", "/api/luma/v1/video/task_id", headers=headers)

res = conn.getresponse()
data = res.read()

print(data.decode("utf-8"))
</code></pre>



**For fetch endpoint responses** - Refer to our [documentation](https://piapi.ai/docs/dream-machine/get-video) for more detailed information.
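Putting the two steps together, here is a minimal polling sketch. It reuses the host and path format from the snippets above; the `fetch_task` and `wait_for_video` helper names, the `status` field, and its terminal values (`completed`, `failed`) are assumptions about the response schema, so consult the documentation above for the actual fields.

```python
import http.client
import json
import time

API_HOST = "api.piapi.ai"

def build_fetch_path(task_id: str) -> str:
    """Build the fetch-endpoint path (format taken from the GET example above)."""
    return f"/api/luma/v1/video/{task_id}"

def fetch_task(task_id: str) -> dict:
    """Perform one GET against the fetch endpoint and decode the JSON body."""
    conn = http.client.HTTPSConnection(API_HOST)
    conn.request("GET", build_fetch_path(task_id),
                 headers={'Accept': "application/json"})
    body = conn.getresponse().read().decode("utf-8")
    conn.close()
    return json.loads(body)

def wait_for_video(task_id: str, interval: float = 5.0) -> dict:
    """Poll the fetch endpoint until the task reaches a terminal state.

    The "status" field and its terminal values are assumptions about
    the response schema; see the documentation for the real fields.
    """
    while True:
        resp = fetch_task(task_id)
        status = resp.get("data", {}).get("status")
        if status in ("completed", "failed"):
            return resp
        time.sleep(interval)

# Usage (requires a real task ID from the Video Generation endpoint):
# result = wait_for_video("your-task-id")
# print(json.dumps(result, indent=2))
```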



<br>



## Contact us

Contact us at <a href="mailto:[email protected]">[email protected]</a> for any inquiries.

<br>