|
--- |
|
license: apache-2.0 |
|
--- |
|
|
|
# GeoChat-7B |
|
|
|
GeoChat is the first grounded Large Vision-Language Model specifically tailored to Remote Sensing (RS) scenarios. Unlike general-domain models, GeoChat handles high-resolution RS imagery and employs region-level reasoning for comprehensive scene interpretation. Fine-tuned on a newly created RS multimodal dataset using the LLaVA-1.5 architecture, GeoChat achieves robust zero-shot performance across a variety of RS tasks, including image and region captioning, visual question answering, scene classification, visually grounded conversations, and referring object detection.
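Since GeoChat is fine-tuned on the LLaVA-1.5 architecture, queries are presumably formatted with a Vicuna-v1-style conversation template containing an image placeholder token. The sketch below is an illustrative reconstruction of that prompt format, not the authoritative template; consult `conversation.py` in the GeoChat repository for the exact strings.

```python
# Hypothetical sketch of a LLaVA-1.5-style single-turn prompt, as assumed
# to be inherited by GeoChat. The system message and token names below are
# illustrative; the repository defines the canonical template.

IMAGE_TOKEN = "<image>"  # placeholder replaced by visual features at inference

def build_prompt(question: str) -> str:
    """Assemble a single-turn Vicuna-v1-style prompt around an RS question."""
    system = (
        "A chat between a curious human and an artificial intelligence "
        "assistant. The assistant gives helpful, detailed, and polite "
        "answers to the human's questions."
    )
    return f"{system} USER: {IMAGE_TOKEN}\n{question} ASSISTANT:"

prompt = build_prompt("How many airplanes are visible in the image?")
print(prompt)
```

The trailing `ASSISTANT:` cue marks where the model begins generating its grounded answer.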
|
|
|
|
- **Developed by:** MBZUAI
|
|
|
### Model Sources |
|
|
|
|
|
|
- **Repository:** https://github.com/mbzuai-oryx/GeoChat |
|
- **Paper:** https://arxiv.org/abs/2311.15826 |
|
|
|
**BibTeX:** |
|
|
|
```bibtex
@misc{kuckreja2023geochat,
  title={GeoChat: Grounded Large Vision-Language Model for Remote Sensing},
  author={Kartik Kuckreja and Muhammad Sohail Danish and Muzammal Naseer and Abhijit Das and Salman Khan and Fahad Shahbaz Khan},
  year={2023},
  eprint={2311.15826},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
|
## Authors |
|
Kartik Kuckreja, Muhammad Sohail Danish
|
|
|
## Contact |
|
[email protected] |
|
|
|
|