Qwen2-VL Collection Vision-language model series based on Qwen2 • 16 items • Updated 20 days ago • 181
PixMo Collection A set of vision-language datasets built by Ai2 and used to train the Molmo family of models. Read more at https://molmo.allenai.org/blog • 9 items • Updated 28 days ago • 50
LLM-Neo Collection Model hub for LLM-Neo, including Llama3.1-Neo-1B-100w and Minitron-4B-Depth-Neo-10w. • 3 items • Updated Nov 20 • 4
VLM Judge Distillation Collection Distilling the 13B SpaceLLaVA VLM-as-a-Judge into a Florence-2 model to efficiently quality-filter spatial VQA datasets like OpenSpaces. • 4 items • Updated Nov 14 • 1
DepthPro Models Collection Depth Pro: Sharp Monocular Metric Depth in Less Than a Second • 3 items • Updated Oct 15 • 5
OpenSpaces VLMs Collection VLMs fine-tuned for spatial VQA using the OpenSpaces dataset. • 6 items • Updated Oct 27 • 2
Molmo Collection Artifacts for open multimodal language models. • 5 items • Updated 28 days ago • 289
SpaceVLMs Collection VLMs fine-tuned for enhanced spatial reasoning using a synthetic data pipeline similar to SpatialVLM's. • 9 items • Updated Oct 15 • 5
SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities Paper • 2401.12168 • Published Jan 22 • 26
LEAP Hand: Low-Cost, Efficient, and Anthropomorphic Hand for Robot Learning Paper • 2309.06440 • Published Sep 12, 2023 • 9