Abstract
We introduce Reangle-A-Video, a unified framework for generating synchronized multi-view videos from a single input video. Unlike mainstream approaches that train multi-view video diffusion models on large-scale 4D datasets, our method reframes the multi-view video generation task as video-to-videos translation, leveraging publicly available image and video diffusion priors. In essence, Reangle-A-Video operates in two stages. (1) Multi-View Motion Learning: An image-to-video diffusion transformer is synchronously fine-tuned in a self-supervised manner to distill view-invariant motion from a set of warped videos. (2) Multi-View Consistent Image-to-Images Translation: The first frame of the input video is warped and inpainted into various camera perspectives under inference-time cross-view consistency guidance using DUSt3R, generating multi-view consistent starting images. Extensive experiments on static view transport and dynamic camera control show that Reangle-A-Video surpasses existing methods, establishing a new solution for multi-view video generation. We will publicly release our code and data. Project page: https://hyeonho99.github.io/reangle-a-video/
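For a concrete picture of how the two stages fit together, here is a minimal Python sketch of the pipeline the abstract describes. Every name in it (`warp_to_view`, `inpaint_with_dust3r_guidance`, the model interface, the hyperparameters) is a hypothetical placeholder chosen for illustration; this is not the authors' released implementation.

```python
# A minimal, hypothetical sketch of the two-stage Reangle-A-Video pipeline.
# All names below are illustrative placeholders, NOT the authors' code.

from typing import List
import torch


def warp_to_view(video: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
    """Placeholder: reproject the video into a target camera pose
    (e.g., via estimated depth), leaving disoccluded regions masked."""
    raise NotImplementedError


def inpaint_with_dust3r_guidance(
    frame: torch.Tensor, poses: List[torch.Tensor]
) -> List[torch.Tensor]:
    """Placeholder: warp the frame into each pose and inpaint the holes with
    an image diffusion model, steering sampling with DUSt3R-based cross-view
    consistency guidance so all starting images agree geometrically."""
    raise NotImplementedError


def reangle_a_video(
    i2v_model,                          # pretrained image-to-video diffusion transformer
    input_video: torch.Tensor,          # (T, C, H, W) source clip
    target_poses: List[torch.Tensor],   # desired camera views/trajectories
    fine_tune_steps: int = 500,         # assumed hyperparameter
) -> List[torch.Tensor]:
    # Stage 1: multi-view motion learning. Fine-tune the model on warped
    # copies of the input video so it distills view-invariant motion.
    warped = [warp_to_view(input_video, p) for p in target_poses]
    opt = torch.optim.AdamW(i2v_model.parameters(), lr=1e-5)
    for _ in range(fine_tune_steps):
        for clip in warped:
            opt.zero_grad()
            i2v_model.denoising_loss(clip).backward()  # standard diffusion objective
            opt.step()

    # Stage 2: multi-view consistent image-to-images translation. Produce one
    # geometrically consistent starting image per target view from frame 0.
    start_images = inpaint_with_dust3r_guidance(input_video[0], target_poses)

    # Condition the fine-tuned model on each starting image to render the
    # synchronized multi-view videos.
    return [i2v_model.generate(img) for img in start_images]
```

The key design point this sketch tries to convey is the decoupling: stage 1 adapts only the motion prior (the video model never has to invent geometry), while stage 2 handles geometry once, at the level of a single frame, where cross-view consistency is cheaper to enforce.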