Abstract
Physical AI needs to be trained digitally first. It needs a digital twin of itself, the policy model, and a digital twin of the world, the world model. In this paper, we present the Cosmos World Foundation Model Platform to help developers build customized world models for their Physical AI setups. We position a world foundation model as a general-purpose world model that can be fine-tuned into customized world models for downstream applications. Our platform covers a video curation pipeline, pre-trained world foundation models, examples of post-training pre-trained world foundation models, and video tokenizers. To help Physical AI builders solve the most critical problems of our society, we make our platform open-source and our models open-weight with permissive licenses, available via https://github.com/NVIDIA/Cosmos.
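To make the fine-tuning workflow described above concrete, the sketch below outlines how a pre-trained world foundation model might be post-trained into a customized world model on domain-specific video. This is a minimal illustration, not the actual Cosmos API: the helper names (`load_pretrained_wfm`, `RobotVideoDataset`), the checkpoint paths, and the loss interface are hypothetical placeholders; the supported post-training scripts live in the repository at https://github.com/NVIDIA/Cosmos.

```python
# Hypothetical sketch of post-training a pre-trained world foundation model (WFM)
# into a customized world model. Helper names and paths are placeholders,
# not the actual Cosmos API.
import torch
from torch.utils.data import DataLoader

# Hypothetical helpers: load a pre-trained WFM checkpoint and wrap curated,
# domain-specific video clips as (past_frames, future_frames) pairs.
from my_cosmos_utils import load_pretrained_wfm, RobotVideoDataset


def post_train(checkpoint="cosmos-wfm-pretrained.pt", epochs=3, lr=1e-5):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = load_pretrained_wfm(checkpoint).to(device)   # general-purpose WFM
    dataset = RobotVideoDataset("data/robot_clips")       # curated domain videos
    loader = DataLoader(dataset, batch_size=2, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)

    model.train()
    for epoch in range(epochs):
        for past_frames, future_frames in loader:
            past_frames = past_frames.to(device)
            future_frames = future_frames.to(device)
            # Predict future world states from observed frames and score them
            # against the ground-truth continuation (assumed model.loss interface).
            loss = model.loss(past_frames, future_frames)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss={loss.item():.4f}")

    # Save the customized world model for the downstream Physical AI setup.
    torch.save(model.state_dict(), "cosmos-wfm-customized.pt")


if __name__ == "__main__":
    post_train()
```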
Community
The following papers, recommended by the Semantic Scholar API, were identified as similar to this paper:
- Synthetic Vision: Training Vision-Language Models to Understand Physics (2024)
- Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation (2024)
- OpenEMMA: Open-Source Multimodal Model for End-to-End Autonomous Driving (2024)
- DeepSeek-V3 Technical Report (2024)
- DroidCall: A Dataset for LLM-powered Android Intent Invocation (2024)
- Local deployment of large-scale music AI models on commodity hardware (2024)
- Large Action Models: From Inception to Implementation (2024)