- An Empirical Study of Autoregressive Pre-training from Videos — arXiv:2501.05453 (Jan 2025)
- OpenOmni: Large Language Models Pivot Zero-shot Omnimodal Alignment across Language with Real-time Self-Aware Emotional Speech Synthesis — arXiv:2501.04561 (Jan 2025)
- Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos — arXiv:2501.04001 (Jan 2025)
- Cosmos World Foundation Model Platform for Physical AI — arXiv:2501.03575 (Jan 2025)
- MotionBench: Benchmarking and Improving Fine-grained Video Motion Understanding for Vision Language Models — arXiv:2501.02955 (Jan 2025)
- VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM — arXiv:2501.00599 (Jan 2025)
- 2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining — arXiv:2501.00958 (Jan 2025)
- HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs — arXiv:2412.18925 (Dec 2024)
- Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey — arXiv:2412.18619 (Dec 2024)
- Apollo: An Exploration of Video Understanding in Large Multimodal Models — arXiv:2412.10360 (Dec 2024)
- LAION-SG: An Enhanced Large-Scale Dataset for Training Complex Image-Text Models with Structural Annotations — arXiv:2412.08580 (Dec 11, 2024)
- AV-Odyssey Bench: Can Your Multimodal LLMs Really Understand Audio-Visual Information? — arXiv:2412.02611 (Dec 3, 2024)
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions — arXiv:2412.09596 (Dec 2024)
- AGLA: Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention — arXiv:2406.12718 (Jun 18, 2024)
- MMRel: A Relation Understanding Dataset and Benchmark in the MLLM Era — arXiv:2406.09121 (Jun 13, 2024)