# ImageRAG
<font size=4><div align='center'> [[GitHub](https://github.com/om-ai-lab/ImageRAG)] [[Paper](https://ieeexplore.ieee.org/document/11039502)] [[Blog (in Mandarin)](https://mp.weixin.qq.com/s/BcFejPAcxh4Rx_rh1JvRJA)]</div></font>
## Highlight
Ultrahigh-resolution (UHR) remote sensing imagery (RSI) (e.g., 10,000 × 10,000 pixels) poses a significant challenge for current remote sensing vision-language models (RSVLMs). If the UHR image is resized to the standard input size, the extensive spatial and contextual information it contains is lost. If the original resolution is kept, the image often exceeds the token limits of standard RSVLMs, making it difficult to process the entire image and capture the long-range dependencies needed to answer a query from the abundant visual context.
* Three crucial aspects for MLLMs to effectively handle UHR RSI are:
  * Managing small targets, ensuring that the model can accurately perceive and analyze fine details within images.
  * Processing the UHR image in a way that integrates with MLLMs without significantly increasing the number of image tokens, which would incur high computational costs.
  * Achieving these goals while minimizing the need for additional training or specialized annotation.
* We contribute the ImageRAG framework, which offers several key advantages:
  * It retrieves and emphasizes relevant visual context from the UHR image based on the text query, allowing the MLLM to focus on important details, even tiny ones.
  * It integrates various external knowledge sources (stored in vector databases) to guide the model, enhancing its understanding of the query and the UHR RSI.
  * ImageRAG requires only a small amount of training, making it a practical solution for efficiently handling UHR RSI.
## Update
Last Updated on 2025.07.10
* **TODO**: Validate the codebase with the uploaded data in an isolated environment.
* **2025.07.10**: Upload checkpoint, cache and dataset.
* **2025.06.25**: Upload codebase and scripts.
* **2025.05.24**: ImageRAG is accepted by IEEE Geoscience and Remote Sensing Magazine.
* IEEE Early Access (we prefer this version): https://ieeexplore.ieee.org/document/11039502
* Arxiv: https://arxiv.org/abs/2411.07688
## Setup Codebase and Data
* Clone this repo:
```bash
git clone https://github.com/om-ai-lab/ImageRAG.git
```
* Download the data, caches, and checkpoints for ImageRAG from Hugging Face (two options below):
* https://huggingface.co/omlab/ImageRAG
* Use the [hf mirror](https://hf-mirror.com/) (and its `hfd.sh` helper script) if you encounter connection problems:
```bash
./hfd.sh omlab/ImageRAG --local-dir ImageRAG_hf
```
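* Alternatively, if you can reach Hugging Face directly, a plain `huggingface-cli` download should also work (a sketch, not the repo's documented path; it assumes the `omlab/ImageRAG` repo can be fetched this way):
```bash
# Hypothetical alternative to hfd.sh: download the whole repo into ImageRAG_hf
pip install -U "huggingface_hub[cli]"
huggingface-cli download omlab/ImageRAG --local-dir ImageRAG_hf
```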
* Merge two repos:
```bash
mv ImageRAG_hf/cache ImageRAG_hf/checkpoint ImageRAG_hf/data ImageRAG/
```
* Unzip all zip files (example commands follow this list):
* cache/patch/mmerealworld.zip
* cache/vector_database/crsd_vector_database.zip
* cache/vector_database/lrsd_vector_database.zip
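* A minimal unzip sketch (run from the merged ImageRAG directory; adjust the `-d` targets if an archive already contains its own top-level folder):
```bash
# Extract the cached patch features and the two vector databases in place
unzip cache/patch/mmerealworld.zip -d cache/patch/
unzip cache/vector_database/crsd_vector_database.zip -d cache/vector_database/
unzip cache/vector_database/lrsd_vector_database.zip -d cache/vector_database/
```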
* The ImageRAG directory structure should look like this:
```bash
/training/zilun/ImageRAG
├── codebase
│   ├── inference
│   ├── patchify
│   ├── main_inference_mmerealworld_imagerag_preextract.py
│   ......
├── config
│   ├── config_mmerealworld-baseline-zoom4kvqa10k2epoch_server.yaml
│   ├── config_mmerealworld-detectiongt-zoom4kvqa10k2epoch_server.yaml
│   ......
├── data
│   └── dataset
│       ├── MME-RealWorld
│       │   └── remote_sensing
│       │       └── remote_sensing
│       │           ├── 03553_Toronto.png
│       │           ......
│       ├── crsd_clip_3M.pkl
│       ......
├── cache
│   ├── patch
│   │   └── mmerealworld
│   │       ├── vit
│   │       ├── cc
│   │       └── grid
│   └── vector_database
│       ├── crsd_vector_database
│       └── lrsd_vector_database
├── checkpoint
│   ├── InternVL2_5-8B_lora32_vqa10k_zoom4k_2epoch_merged
│   ......
└── script
    ├── clip_cc.sh
    ......
```
## Setup Env
```bash
conda create -n imagerag python=3.10
conda activate imagerag
cd /training/zilun/ImageRAG
export PYTHONPATH=$PYTHONPATH:/training/zilun/ImageRAG
# Install torch, torchvision and flash attention according to your CUDA version
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu121
pip install ninja
MAX_JOBS=16 pip install flash-attn --no-build-isolation
```
```bash
# Install the remaining Python dependencies
pip install -r requirements.txt
```
```bash
# Download the NLTK stopwords corpus and the spaCy English model
python -c "import nltk; nltk.download('stopwords')"
python -m spacy download en_core_web_sm
```
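Optionally, a quick sanity check of the environment (a sketch; it only verifies that PyTorch sees a GPU and that the flash-attn build imports):
```bash
python -c "import torch, flash_attn; print(torch.__version__, torch.cuda.is_available())"
```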
## Setup SGLang (Docker)
* Host Qwen2.5-32B-Instruct with SGLang to serve the text parsing module
```bash
# Pull the sglang docker image (we use a mirror only to speed up the download)
docker pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/lmsysorg/sglang:latest
docker tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/lmsysorg/sglang:latest docker.io/lmsysorg/sglang:latest
# docker load -i sglang.tar
bash script/sglang_start.sh
```
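The repo ships `script/sglang_start.sh`; for reference, a minimal sketch of an equivalent launch command (the port, `--tp` setting, and cache mount are assumptions, so adjust them to your hardware and defer to the actual script):
```bash
# Assumed launch command; script/sglang_start.sh in this repo may differ.
# --tp 2 assumes two GPUs for the 32B model; adjust to your setup.
docker run --gpus all --shm-size 32g -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path Qwen/Qwen2.5-32B-Instruct \
    --tp 2 --host 0.0.0.0 --port 30000
```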
## Feature Extraction (optional; uses Ray to parallelize the process)
* Only necessary if you need to run ImageRAG on customized data
```bash
ray start --head --port=6379
# extract patch features (e.g. MMERealworld-RS)
python codebase/ray_feat_extract_patch.py --ray_mode auto --num_runner 8
# extract image & text features for vector database (Section V-D, external data)
python codebase/ray_feat_extract_vectorstore.py --ray_mode auto --num_runner 8
# ray stop (optional)
```
## Inference
* See the imagerag_result directory for result examples.
### Run Baseline Inference (No ImageRAG, No GT, No Inference while detecting)
```bash
# inference
CUDA_VISIBLE_DEVICES=0 python codebase/main_inference_mmerealworld_imagerag_preextract.py --cfg_path config/config_mmerealworld-baseline-zoom4kvqa10k2epoch_server.yaml
# eval inference result
python codebase/inference/MME-RealWorld-RS/eval_your_results.py --results_file data/eval/mmerealworld_zoom4kvqa10k2epoch_baseline.jsonl
```
### Run Regular VQA Task Inference in Parallel
```bash
# Script names and arguments follow: {clip, georsclip, remoteclip, mcipclip}_{vit, cc, grid}.sh {rerank, mean, cluster} {0, ..., 7}
bash script/georsclip_grid.sh rerank 0
```
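For example, to sweep all three modes of one script on a single device (a usage sketch built from the command above; the last argument is assumed to select the device/worker):
```bash
# Run georsclip + grid patches with each of the three modes on device 0
for mode in rerank mean cluster; do
    bash script/georsclip_grid.sh "$mode" 0
done
```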
### Run Inferring VQA Task Inference (No ImageRAG, No Inference while detecting, GT BBoxes are required)
```bash
# inference
CUDA_VISIBLE_DEVICES=0 python codebase/main_inference_mmerealworld_imagerag_preextract.py --cfg_path config/config_mmerealworld-detectiongt-zoom4kvqa10k2epoch_server.yaml
# eval inference result
python codebase/inference/MME-RealWorld-RS/eval_your_results.py --results_file data/eval/mmerealworld_zoom4kvqa10k2epoch_baseline.jsonl
```
## Contact
[email protected]
## Citation
```bibtex
@ARTICLE{11039502,
author={Zhang, Zilun and Shen, Haozhan and Zhao, Tiancheng and Guan, Zian and Chen, Bin and Wang, Yuhao and Jia, Xu and Cai, Yuxiang and Shang, Yongheng and Yin, Jianwei},
journal={IEEE Geoscience and Remote Sensing Magazine},
title={Enhancing Ultrahigh Resolution Remote Sensing Imagery Analysis With ImageRAG: A new framework},
year={2025},
volume={},
number={},
pages={2-27},
keywords={Image resolution;Visualization;Benchmark testing;Training;Image color analysis;Standards;Remote sensing;Analytical models;Accuracy;Vehicle dynamics},
doi={10.1109/MGRS.2025.3574742}}
```