salma-remyx
committed on
Update README.md
README.md CHANGED
@@ -28,6 +28,43 @@ With a pipeline of expert models, we can infer spatial relationships between objects
- **Developed by:** remyx.ai
- **Model type:** MultiModal Model, Vision Language Model, Llama 3

+## Quick Start
+
+To run SpaceMantis, follow these steps:
+
+```python
+import torch
+from PIL import Image
+from models.mllava import MLlavaProcessor, LlavaForConditionalGeneration, chat_mllava
+
+# Load the model and processor (MLlavaProcessor and chat_mllava ship with the Mantis codebase)
+attn_implementation = None  # or "flash_attention_2" (requires the flash-attn package)
+processor = MLlavaProcessor.from_pretrained("remyxai/SpaceMantis")
+model = LlavaForConditionalGeneration.from_pretrained(
+    "remyxai/SpaceMantis",
+    device_map="cuda",
+    torch_dtype=torch.float16,
+    attn_implementation=attn_implementation,
+)
+
+generation_kwargs = {
+    "max_new_tokens": 1024,
+    "num_beams": 1,
+    "do_sample": False
+}
+
+# Function to run inference
+def run_inference(image_path, content):
+    # Load the image
+    image = Image.open(image_path).convert("RGB")
+    # chat_mllava expects a list of images
+    images = [image]
+    # Run the inference
+    response, history = chat_mllava(content, images, model, processor, **generation_kwargs)
+    return response
+
+# Example usage
+image_path = "path/to/your/image.jpg"
+content = "Your question here."
+response = run_inference(image_path, content)
+print("Response:", response)
+```
+
### Model Sources
- **Dataset:** [SpaceLLaVA](https://huggingface.co/datasets/remyxai/vqasynth_spacellava)
- **Repository:** [VQASynth](https://github.com/remyxai/VQASynth/tree/main)
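
As a usage sketch of the `run_inference` helper added above (the image path and question here are illustrative placeholders, not part of the commit; SpaceMantis is tuned for spatial questions such as distances and relative positions between objects):

```python
# Illustrative only: assumes model, processor, and run_inference from the
# Quick Start above are already defined in the session.
image_path = "warehouse.jpg"  # hypothetical local image
question = "How far apart are the forklift and the pallet?"  # spatial query
print("Response:", run_inference(image_path, question))
```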
|