Abstract
Recent advancements in generative modeling now enable the creation of 4D content (moving 3D objects) controlled with text prompts. 4D generation has large potential in applications like virtual worlds, media, and gaming, but existing methods provide limited control over the appearance and geometry of generated content. In this work, we introduce a method for animating user-provided 3D objects by conditioning on textual prompts to guide 4D generation, enabling custom animations while maintaining the identity of the original object. We first convert a 3D mesh into a "static" 4D Neural Radiance Field (NeRF) that preserves the visual attributes of the input object. Then, we animate the object using an Image-to-Video diffusion model driven by text. To improve motion realism, we introduce an incremental viewpoint selection protocol for sampling perspectives to promote lifelike movement, and a masked Score Distillation Sampling (SDS) loss that leverages attention maps to focus optimization on relevant regions. We evaluate our method on temporal coherence, prompt adherence, and visual fidelity, and find that it outperforms baselines based on other approaches, achieving up to a threefold improvement in identity preservation (measured by LPIPS) while effectively balancing visual quality with dynamic content.
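To make the masked SDS idea concrete, here is a minimal sketch (not the authors' code) of how an attention-weighted SDS gradient could be computed, assuming a PyTorch-style image-to-video diffusion wrapper with a hypothetical `predict_noise()` method and a precomputed per-pixel attention mask:

```python
# Hedged sketch of a masked SDS gradient. `video_diffusion.predict_noise`,
# `attention_mask`, and `alphas_cumprod` are assumed interfaces/inputs for
# illustration, not the paper's actual implementation.
import torch

def masked_sds_grad(rendered_frames, image_cond, text_emb, video_diffusion,
                    attention_mask, alphas_cumprod):
    """rendered_frames: (F, C, H, W) frames rendered from the 4D NeRF.
    attention_mask: (F, 1, H, W) values in [0, 1], e.g. from cross-attention maps.
    Returns a gradient tensor with the same shape as rendered_frames."""
    # Sample a diffusion timestep and add the matching amount of noise.
    t = torch.randint(20, 980, (1,), device=rendered_frames.device)
    noise = torch.randn_like(rendered_frames)
    a_t = alphas_cumprod[t].view(1, 1, 1, 1)
    noisy_frames = a_t.sqrt() * rendered_frames + (1 - a_t).sqrt() * noise

    # One denoising prediction from the frozen image-to-video model.
    with torch.no_grad():
        eps_pred = video_diffusion.predict_noise(noisy_frames, t, image_cond, text_emb)

    # Standard SDS gradient, re-weighted spatially by the attention mask so the
    # optimization concentrates on regions relevant to the prompted motion.
    w_t = 1.0 - a_t
    grad = w_t * (eps_pred - noise) * attention_mask
    return torch.nan_to_num(grad)

# Hypothetical usage: backpropagate the masked gradient into the NeRF parameters.
# frames = nerf.render(cameras)                      # (F, C, H, W)
# frames.backward(gradient=masked_sds_grad(frames, image_cond, text_emb,
#                                          video_diffusion, attn_mask, alphas_cumprod))
```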
Community
@akhaliq
@kramp
Dear AK and HF team,
We would like to kindly request your assistance in sharing our latest research paper, "Bringing Objects to Life: 4D generation from 3D objects".
We believe it may be of significant interest for HF Daily Papers.
Recent advancements in generative modeling now enable the creation of 4D content (moving 3D objects) controlled with text prompts.
4D generation has large potential in applications like virtual worlds, media, and gaming, but existing methods provide limited control over the appearance and geometry of generated content.
In this work, we introduce a method for animating user-provided 3D objects by conditioning on textual prompts to guide 4D generation, enabling custom animations while maintaining the identity of the original object.
We first convert a 3D mesh into a "static" 4D Neural Radiance Field (NeRF) that preserves the visual attributes of the input object. Then, we animate the object using an Image-to-Video diffusion model driven by text. To improve motion realism, we introduce an incremental viewpoint selection protocol for sampling perspectives to promote lifelike movement, and a masked Score Distillation Sampling (SDS) loss that leverages attention maps to focus optimization on relevant regions.
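For readers curious how the incremental viewpoint selection might be scheduled in practice, here is a minimal sketch under our own assumptions (the warm-up fraction, angle ranges, and function name are illustrative, not the paper's implementation):

```python
# Hedged sketch: widen the sampled azimuth range gradually over training so
# early updates see nearby, consistent perspectives before extreme ones.
import random

def sample_viewpoint(step, total_steps, max_azimuth=180.0, elevation_range=(-10.0, 30.0)):
    """Return (azimuth, elevation) in degrees for the current optimization step."""
    # Linearly unlock the azimuth range over the first half of training (assumed schedule).
    frac = min(1.0, step / (0.5 * total_steps))
    azimuth = random.uniform(-frac * max_azimuth, frac * max_azimuth)
    elevation = random.uniform(*elevation_range)
    return azimuth, elevation

# Example: midway through training the sampler covers the full azimuth circle.
# sample_viewpoint(step=5000, total_steps=10000)
```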
We evaluate our model on temporal coherence, prompt adherence, and visual fidelity, and find that it outperforms baselines based on other approaches, achieving up to a threefold improvement in identity preservation (measured by LPIPS) while effectively balancing visual quality with dynamic content.
Paper: https://arxiv.org/abs/2412.20422
Project Page: https://3-to-4d.github.io/3-to-4d/
We would greatly appreciate your assistance and consideration of our paper for inclusion.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- PaintScene4D: Consistent 4D Scene Generation from Text Prompts (2024)
- InTraGen: Trajectory-controlled Video Generation for Object Interactions (2024)
- MVLight: Relightable Text-to-3D Generation via Light-conditioned Multi-View Diffusion (2024)
- FruitNinja: 3D Object Interior Texture Generation with Gaussian Splatting (2024)
- Toward Scene Graph and Layout Guided Complex 3D Scene Generation (2024)
- FlipSketch: Flipping Static Drawings to Text-Guided Sketch Animations (2024)
- Sketch-guided Cage-based 3D Gaussian Splatting Deformation (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot recommend