---
library_name: transformers
license: llama3
datasets:
- remyxai/mantis-spacellava
tags:
- remyx
- interleaved
- multi-image
base_model:
- TIGER-Lab/Mantis-8B-siglip-llama3
---


![image/png](https://cdn-uploads.huggingface.co/production/uploads/647777304ae93470ffc28913/2MDiSD0Q3Lfe0JtnkdqxB.png)

# Model Card for SpaceMantis

**SpaceMantis** fine-tunes [Mantis-8B-siglip-llama3](https://huggingface.co/TIGER-Lab/Mantis-8B-siglip-llama3) for enhanced spatial reasoning.


## Model Details

SpaceMantis applies a LoRA fine-tune on the [spacellava dataset](https://huggingface.co/datasets/remyxai/vqasynth_spacellava), built with [VQASynth](https://github.com/remyxai/VQASynth/tree/main), to enhance spatial reasoning as in [SpatialVLM](https://spatial-vlm.github.io/).

### Model Description

This model uses data synthesis techniques and publicly available models to reproduce the work described in SpatialVLM, enhancing the spatial reasoning of multimodal models.
With a pipeline of expert models, we infer spatial relationships between objects in a scene to create a VQA dataset for spatial reasoning.
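
For intuition, each synthesized sample pairs an image with a templated spatial question whose answer is derived from the estimated 3D layout of the scene. The sketch below is a hypothetical illustration of that record shape; the field names and values are assumptions, not entries from the spacellava dataset:

```python
# Hypothetical shape of one synthesized spatial-VQA record (illustrative only;
# field names and values are assumptions, not copied from the actual dataset).
example_record = {
    "image": "scene_00042.jpg",  # source image the question refers to
    "question": "How far is the chair from the desk?",  # templated spatial question
    "answer": "The chair is roughly 0.8 meters from the desk.",  # derived from estimated object positions
}
```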


- **Developed by:** remyx.ai
- **Model type:** MultiModal Model, Vision Language Model, Llama 3

## Quick Start

To run SpaceMantis, follow these steps:

```python
import torch
from PIL import Image
from models.mllava import MLlavaProcessor, LlavaForConditionalGeneration, chat_mllava  # provided by the Mantis codebase

# Load the model and processor
attn_implementation = None  # or "flash_attention_2"
processor = MLlavaProcessor.from_pretrained("remyxai/SpaceMantis")
model = LlavaForConditionalGeneration.from_pretrained("remyxai/SpaceMantis", device_map="cuda", torch_dtype=torch.float16, attn_implementation=attn_implementation)

generation_kwargs = {
    "max_new_tokens": 1024,
    "num_beams": 1,
    "do_sample": False
}

# Helper to run single-image inference
def run_inference(image_path, content):
    # Load the image
    image = Image.open(image_path).convert("RGB")
    # chat_mllava expects a list of images
    images = [image]
    # Run inference; chat_mllava also returns the updated chat history
    response, history = chat_mllava(content, images, model, processor, **generation_kwargs)
    return response

# Example usage
image_path = "path/to/your/image.jpg"
content = "Your question here."
response = run_inference(image_path, content)
print("Response:", response)
```
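
Since `chat_mllava` returns both the response and the running conversation history, a follow-up question can build on the first turn. The snippet below is a minimal sketch that assumes `chat_mllava` accepts a `history` keyword argument; check the Mantis codebase for the exact signature:

```python
# Hypothetical multi-turn follow-up (assumes chat_mllava takes a `history` kwarg;
# verify against the Mantis codebase before relying on this).
image = Image.open("path/to/your/image.jpg").convert("RGB")
images = [image]

first_question = "How far apart are the two objects in the scene?"
response, history = chat_mllava(first_question, images, model, processor, **generation_kwargs)
print("First answer:", response)

follow_up = "Which of them is closer to the camera?"
response, history = chat_mllava(follow_up, images, model, processor, history=history, **generation_kwargs)
print("Follow-up answer:", response)
```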

### Model Sources
- **Dataset:** [SpaceLLaVA](https://huggingface.co/datasets/remyxai/vqasynth_spacellava)
- **Repository:** [VQASynth](https://github.com/remyxai/VQASynth/tree/main)
- **Paper:** [SpatialVLM](https://arxiv.org/abs/2401.12168)



## Citation
```
@article{chen2024spatialvlm,
  title = {SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities},
  author = {Chen, Boyuan and Xu, Zhuo and Kirmani, Sean and Ichter, Brian and Driess, Danny and Florence, Pete and Sadigh, Dorsa and Guibas, Leonidas and Xia, Fei},
  journal = {arXiv preprint arXiv:2401.12168},
  year = {2024},
  url = {https://arxiv.org/abs/2401.12168},
}

@article{jiang2024mantis,
  title={MANTIS: Interleaved Multi-Image Instruction Tuning},
  author={Jiang, Dongfu and He, Xuan and Zeng, Huaye and Wei, Con and Ku, Max and Liu, Qian and Chen, Wenhu},
  journal={arXiv preprint arXiv:2405.01483},
  year={2024}
}
```