arxiv:2503.06955

Motion Anything: Any to Motion Generation

Published on Mar 10
· Submitted by SteveZeyuZhang on Mar 13
Authors: Zeyu Zhang, Yiran Wang, Wei Mao, Danning Li, Akira Zhao, Wu Biao, Zirui Song, Bohan Zhuang, Ian Reid, Richard Hartley
Abstract

Conditional motion generation has been extensively studied in computer vision, yet two critical challenges remain. First, while masked autoregressive methods have recently outperformed diffusion-based approaches, existing masking models lack a mechanism to prioritize dynamic frames and body parts based on given conditions. Second, existing methods for different conditioning modalities often fail to integrate multiple modalities effectively, limiting control and coherence in generated motion. To address these challenges, we propose Motion Anything, a multimodal motion generation framework that introduces an Attention-based Mask Modeling approach, enabling fine-grained spatial and temporal control over key frames and actions. Our model adaptively encodes multimodal conditions, including text and music, improving controllability. Additionally, we introduce Text-Music-Dance (TMD), a new motion dataset consisting of 2,153 pairs of text, music, and dance, making it twice the size of AIST++, thereby filling a critical gap in the community. Extensive experiments demonstrate that Motion Anything surpasses state-of-the-art methods across multiple benchmarks, achieving a 15% improvement in FID on HumanML3D and showing consistent performance gains on AIST++ and TMD. See our project website https://steve-zeyu-zhang.github.io/MotionAnything
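To make the core idea more concrete, below is a minimal, hypothetical sketch of condition-aware masking: using cross-attention between condition embeddings (text or music) and motion tokens to decide which frames or body-part tokens to mask during masked modeling. The function name, tensor shapes, and masking rule are illustrative assumptions for intuition only, not the paper's actual Attention-based Mask Modeling implementation.

```python
# Hypothetical sketch: condition-aware token masking for masked motion modeling.
# Shapes, names, and the top-k rule are assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F

def condition_aware_mask(motion_tokens, cond_tokens, mask_ratio=0.5):
    """Select motion tokens to mask based on how strongly the condition attends to them.

    motion_tokens: (B, N, D)  flattened temporal x body-part motion tokens
    cond_tokens:   (B, M, D)  text/music condition embeddings
    Returns a boolean mask of shape (B, N), True where a token should be masked.
    """
    # Cross-attention scores between condition tokens and motion tokens.
    scores = torch.einsum("bmd,bnd->bmn", cond_tokens, motion_tokens) / motion_tokens.size(-1) ** 0.5
    attn = F.softmax(scores, dim=-1)      # (B, M, N)
    saliency = attn.mean(dim=1)           # (B, N): average condition attention per motion token

    # Mask the most condition-relevant tokens so the model must reconstruct them.
    num_mask = int(mask_ratio * motion_tokens.size(1))
    top_idx = saliency.topk(num_mask, dim=-1).indices
    mask = torch.zeros_like(saliency, dtype=torch.bool)
    mask.scatter_(1, top_idx, True)
    return mask
```

Under this reading, masking is driven by the conditioning signal rather than applied uniformly at random, which is one plausible way to prioritize dynamic frames and body parts as the abstract describes.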

Community

Paper author and submitter:

We present Motion Anything, an any-to-motion approach that enables fine-grained spatial and temporal control over generated motion by introducing an attention-based mask modeling strategy. Our framework adaptively encodes multimodal conditions, including text and music, and is validated on the newly proposed TMD dataset, demonstrating superior performance over state-of-the-art methods.

Motion Anything: Any to Motion Generation
Zeyu Zhang, Yiran Wang, Wei Mao, Danning Li, Akira Zhao, Wu Biao, Zirui Song, Bohan Zhuang, Ian Reid, Richard Hartley

Project Website: https://steve-zeyu-zhang.github.io/MotionAnything


The teaser has great taste. It would be better if you could cite CLAY.
Longwen Zhang, Ziyu Wang, Qixuan Zhang, Qiwei Qiu, Anqi Pang, Haoran Jiang, Wei Yang, Lan Xu, and Jingyi Yu. 2024. CLAY: A Controllable Large-scale Generative Model for Creating High-quality 3D Assets. ACM Trans. Graph. 43, 4, Article 120 (July 2024), 20 pages. https://doi.org/10.1145/3658146



Models citing this paper: 0
Datasets citing this paper: 0
Spaces citing this paper: 0
Collections including this paper: 3