---
title: Text2EMotionDiffuse
emoji: 🏢
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 3.44.1
app_file: app.py
pinned: false
license: mit
---
<div align="center">

<h1>Extension of MotionDiffuse for SMPL-X features</h1>
<h3>02456 Deep Learning, DTU Compute, Fall 2023</h3>
<div>
    <a>Elle McFarlane</a>
    <a>Alejandro Cirugeda</a>
    <a>Jonathan Mikler</a>
    <a>Menachem Franko</a>
</div>
<div>
    equal contribution
    <h4 align="center">
        <a href="https://arxiv.org/abs/2208.15001" target='_blank'>[MotionDiffuse Paper]</a> •
        <a href="https://github.com/mingyuan-zhang/MotionDiffuse" target='_blank'>[MotionDiffuse Code]</a>
    </h4>
</div>

<div style="display: flex; justify-content: center;">
    <img src="text2motion/other/denoising.png" alt="De-noising process" style="width: 50%; margin: 0 auto;">
    <img src="text2motion/other/happy_guy.png" alt="Happy guy" style="width: 40%; margin: 0 auto;">
</div>

</div>

## Summary
Conditioning human motion on natural language (text-to-motion) is critical for many graphics-based applications, including training neural nets for motion-based tasks such as detecting changes in posture in medical settings. Diffusion models have recently become popular for text-to-motion generation, but many are trained on human pose representations that lack face and hand detail and thus fall short on prompts involving emotion or fine-grained object interaction. To fill this gap, we re-trained the text-to-motion model MotionDiffuse on a new dataset, Motion-X, which uses SMPL-X poses to include facial expressions and fully articulated hands.
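
For intuition, the sketch below shows the general shape of the approach: DDPM-style ancestral sampling that starts from Gaussian noise and iteratively denoises a motion sequence conditioned on a text prompt. It is a minimal illustration, not this repository's code: the text encoder and denoiser are stubs, and the 322-dimensional pose feature size is an assumption based on Motion-X's SMPL-X representation.

```python
# Illustrative reverse-diffusion loop for text-to-motion (not this repo's API).
import numpy as np

T_STEPS = 1000                 # number of diffusion steps
N_FRAMES, POSE_DIM = 60, 322   # assumed SMPL-X feature size (Motion-X uses 322 dims)

# Linear beta schedule, as in DDPM.
betas = np.linspace(1e-4, 0.02, T_STEPS)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def encode_text(prompt: str) -> np.ndarray:
    """Stub text encoder: a fixed-size fake embedding derived from the prompt."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.standard_normal(256)

def eps_model(x_t: np.ndarray, t: int, text_emb: np.ndarray) -> np.ndarray:
    """Stub denoiser standing in for the trained transformer; predicts the noise."""
    return np.zeros_like(x_t)  # a real model returns its noise estimate here

def sample_motion(prompt: str) -> np.ndarray:
    """DDPM ancestral sampling: start from noise, denoise step by step."""
    text_emb = encode_text(prompt)
    x = np.random.standard_normal((N_FRAMES, POSE_DIM))  # pure Gaussian noise
    for t in range(T_STEPS - 1, -1, -1):
        eps = eps_model(x, t, text_emb)
        # Posterior mean: x_{t-1} = (x_t - beta_t/sqrt(1-abar_t) * eps) / sqrt(alpha_t)
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add sampling noise at every step except the last
            x += np.sqrt(betas[t]) * np.random.standard_normal(x.shape)
    return x  # [frames, pose_dim] sequence of SMPL-X features

motion = sample_motion("a person waves happily")
print(motion.shape)  # (60, 322)
```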

## Installation
See [text2motion/dtu_README.md](text2motion/dtu_README.md) for installation instructions.

## Demo
To try the model, open the notebook [text2motion/demo.ipynb](text2motion/demo.ipynb), which walks through generating a motion from a text prompt. The same code is also available as a Python script, [text2motion/demo.py](text2motion/demo.py).

**Note:** To visualize the output, run the `make gen` command from the `text2motion` directory.
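
For reference, the Space itself is served as a Gradio app (`sdk: gradio` and `app_file: app.py` in the front matter above). The skeleton below shows the general pattern such an app follows; the generation function is a placeholder, not the project's actual app.py.

```python
# Hypothetical skeleton of a Gradio text-to-motion Space (not the real app.py).
import gradio as gr

def generate(prompt: str) -> str:
    # A real app would run the diffusion model here and render the result
    # (e.g. to a video); this placeholder just echoes the prompt.
    return f"Would generate a motion for: {prompt!r}"

demo = gr.Interface(
    fn=generate,
    inputs=gr.Textbox(label="Motion description"),
    outputs=gr.Textbox(label="Result"),
    title="Text2EMotionDiffuse",
)

if __name__ == "__main__":
    demo.launch()
```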

## Acknowledgements
The group would like to thank the authors of the original paper for their work and for making their code available. A deep thank you as well to Frederik Warburg for his support and technical guidance, and to the DTU HPC team for their help with the HPC cluster.