khang119966 committed on
Commit de713cc · verified · 1 Parent(s): f5cb860

Delete ORIGINAL_README.md

Files changed (1)
  1. ORIGINAL_README.md +0 -166
ORIGINAL_README.md DELETED
@@ -1,166 +0,0 @@
# Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos

[\[🏠 Sa2VA\]](https://lxtgh.github.io/project/sa2va) [\[📜 arXiv\]](https://arxiv.org/abs/2501.04001) [\[🤗 HuggingFace\]](https://huggingface.co/collections/ByteDance/sa2va-model-zoo-677e3084d71b5f108d00e093) [\[🎥 Introduction\]]() [\[🧑‍💻 GitHub\]](https://github.com/magic-research/Sa2VA) [\[Online Demo (Sa2VA-4B)\]](https://5512470799b6b35fbc.gradio.live/)

[**Haobo Yuan**](https://yuanhaobo.me/)<sup>1*</sup> · [**Xiangtai Li**](https://scholar.google.com/citations?user=NmHgX-wAAAAJ)<sup>2*&dagger;</sup> · [**Tao Zhang**](https://zhang-tao-whu.github.io/)<sup>2,3*</sup> · [**Zilong Huang**](http://speedinghzl.github.io/)<sup>2</sup> · [**Shilin Xu**](https://xushilin1.github.io/)<sup>4</sup> · [**Shunping Ji**](https://scholar.google.com/citations?user=FjoRmF4AAAAJ&hl=en)<sup>3</sup> · [**Yunhai Tong**](https://scholar.google.com/citations?user=T4gqdPkAAAAJ&hl=zh-CN)<sup>4</sup> ·

[**Lu Qi**](https://luqi.info/)<sup>2</sup> · [**Jiashi Feng**](https://sites.google.com/site/jshfeng/)<sup>2</sup> · [**Ming-Hsuan Yang**](https://faculty.ucmerced.edu/mhyang/)<sup>1</sup>

<sup>1</sup>UC Merced&emsp;&emsp;&emsp;&emsp;<sup>2</sup>ByteDance Seed&emsp;&emsp;&emsp;&emsp;<sup>3</sup>WHU&emsp;&emsp;&emsp;&emsp;<sup>4</sup>PKU

&dagger; project lead&emsp;* the first three authors contributed equally to this work.

![Teaser](assets/images/teaser.jpg)

## Overview
This repository contains the code for the paper "Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos".

Sa2VA is the first unified model for dense grounded understanding of both images and videos. Unlike existing multi-modal large language models, which are often limited to specific modalities and tasks, Sa2VA supports a wide range of image and video tasks, including referring segmentation and conversation, with minimal one-shot instruction tuning. Sa2VA combines SAM-2, a foundation video segmentation model, with LLaVA, an advanced vision-language model, and unifies text, image, and video into a shared LLM token space.

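To make this design concrete, below is a toy, self-contained sketch (dummy `torch` modules, not the actual Sa2VA code) of the flow described above and in the paper: visual and text tokens share one LLM token space, and the hidden state of a special segmentation token is used to prompt a SAM-2-style mask decoder. All sizes and module names are illustrative.

```python
# Toy sketch of the Sa2VA idea with dummy modules; NOT the real implementation.
import torch
import torch.nn as nn

D = 256                                # shared token width (illustrative)
vision_encoder = nn.Linear(768, D)     # stands in for the MLLM's vision tower + projector
llm = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=8, batch_first=True), num_layers=2
)                                      # stands in for the LLM over the unified token space
mask_decoder = nn.Linear(D, 64 * 64)   # stands in for SAM-2's promptable mask decoder

def toy_forward(frame_feats: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
    """frame_feats: (num_patches, 768) visual features; text_embeds: (num_text_tokens, D)."""
    visual_tokens = vision_encoder(frame_feats)      # project vision features into the LLM token space
    seg_token = torch.zeros(1, D)                    # stands in for a learnable "[SEG]" embedding
    tokens = torch.cat([visual_tokens, text_embeds, seg_token], dim=0).unsqueeze(0)
    hidden = llm(tokens)                             # one pass over the unified token sequence
    seg_hidden = hidden[0, -1]                       # hidden state at the "[SEG]" position
    return mask_decoder(seg_hidden).reshape(64, 64)  # the decoder turns this prompt into mask logits

mask_logits = toy_forward(torch.randn(32, 768), torch.randn(5, D))
print(mask_logits.shape)  # torch.Size([64, 64])
```
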
## Model Zoo
We provide the following models:

| Model Name | Base MLLM | Language Part | HF Link |
|:----------:|:---------:|:-------------:|:-------:|
| Sa2VA-1B | [InternVL2.0-1B](https://huggingface.co/OpenGVLab/InternVL2-1B) | [Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) | [🤗 link](https://huggingface.co/ByteDance/Sa2VA-1B) |
| Sa2VA-4B | [InternVL2.5-4B](https://huggingface.co/OpenGVLab/InternVL2_5-4B) | [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) | [🤗 link](https://huggingface.co/ByteDance/Sa2VA-4B) |
| Sa2VA-8B | [InternVL2.5-8B](https://huggingface.co/OpenGVLab/InternVL2_5-8B) | [internlm2_5-7b-chat](https://huggingface.co/internlm/internlm2_5-7b-chat) | [🤗 link](https://huggingface.co/ByteDance/Sa2VA-8B) |

## Gradio Demos

We provide a script that implements interactive chat using Gradio (requires `gradio==4.42.0`). You can use it to quickly build a chat interface locally.
```shell
PYTHONPATH=. python projects/llava_sam2/gradio/app.py ByteDance/Sa2VA-4B
```

## Quick Start

Our Sa2VA models are available on 🤗 HuggingFace. With very few steps, you can try them with your own data. You can install the packages in `demo/requirements.txt` to avoid installing training-only dependencies.

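For programmatic use, the HuggingFace checkpoints are loaded with `trust_remote_code` and expose a `predict_forward` helper on their model cards. Below is a minimal single-image sketch based on that interface; the exact argument names and return keys are an assumption (and `example.jpg` is a placeholder), so check the model card of the checkpoint you use.

```python
# Minimal single-image sketch following the interface described on the Sa2VA model cards
# (loaded via trust_remote_code). Argument names and return keys may differ by release.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

path = "ByteDance/Sa2VA-4B"
model = AutoModel.from_pretrained(
    path, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

image = Image.open("example.jpg").convert("RGB")
result = model.predict_forward(
    image=image,
    text="<image>Please describe the image.",
    tokenizer=tokenizer,
)
print(result["prediction"])         # the text answer
# result.get("prediction_masks")    # segmentation masks, returned when the prompt asks for them
```
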
**Option 1 - scripts:**

Suppose you have a folder (`PATH_TO_FOLDER`) that contains the frames of a video. You can use the following script to chat with the Sa2VA model or segment objects in the video.

```bash
> cd scripts
> python demo.py PATH_TO_FOLDER --model_path ByteDance/Sa2VA-8B --work-dir OUTPUT_DIR --text "<image>Please describe the video content."
```

If the output contains segmentation results, they will be saved to `OUTPUT_DIR`.

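If you would rather call the model on the extracted frames directly, a hedged sketch of video-mode inference is below. It reuses `model` and `tokenizer` from the Quick Start sketch above; as there, the `video` argument and return keys follow the model cards and are an assumption.

```python
# Hedged sketch of video referring segmentation on a folder of frames; the `video` argument
# and return keys follow the Sa2VA model cards and may differ by release.
import os
from PIL import Image

frame_dir = "PATH_TO_FOLDER"
frames = [
    Image.open(os.path.join(frame_dir, name)).convert("RGB")
    for name in sorted(os.listdir(frame_dir))[:5]  # sample a handful of frames
]

result = model.predict_forward(      # `model` and `tokenizer` as loaded in the Quick Start sketch
    video=frames,
    text="<image>Please segment the main character.",
    tokenizer=tokenizer,
)
print(result["prediction"])              # e.g. a sentence containing the segmentation token
masks = result.get("prediction_masks")   # per-frame masks when segmentation is requested
```
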
**Option 2 - Jupyter notebook:**

Please refer to `demo.ipynb`.

## Demo

<details open>
<summary>Demo 1</summary>
Input Video (Source: La La Land 2016):

![Error](assets/videos/exp_1.gif)

Instruction: "Please segment the girl wearing the yellow dress."
</details>

<details open>
<summary>Demo 2</summary>
Input Video (Source: La La Land 2016):

![Error](assets/videos/exp_2.gif)

Instruction: "Please segment the main character."
</details>

<details open>
<summary>Demo 3</summary>
Input Video (Source: Internet):

![Error](assets/videos/apt_exp_1_all.gif)

Instruction: "Please segment the person wearing sunglasses."
</details>

<details open>
<summary>Demo 4</summary>
Input Video (Source: Internet):

![Error](assets/videos/apt_exp_2_all.gif)

Instruction: "Please segment the singing girl."
</details>

<details open>
<summary>Demo 5</summary>
Input Video:

![Error](assets/videos/gf_exp1.gif)

Instruction: "What is the atmosphere of the scene?"

Answer: "The scene has a dark and mysterious atmosphere, with the men dressed in suits and ties, and the dimly lit room."
</details>

## Training
<details open>
<summary>Installation</summary>

1. Install Python and PyTorch first:
```bash
> conda create -n vlm python=3.10
> conda activate vlm
> conda install pytorch==2.3.1 torchvision==0.18.1 pytorch-cuda=12.1 cuda -c pytorch -c "nvidia/label/cuda-12.1.0" -c "nvidia/label/cuda-12.1.1"
```

2. Install mmcv:
```bash
> pip install mmcv==2.2.0 -f https://download.openmmlab.com/mmcv/dist/cu121/torch2.3/index.html
```

3. Install other dependencies:
```bash
> pip install -r requirements.txt
```
</details>

<details open>
<summary>Pretrained Model Preparation</summary>

You are expected to download the following pretrained models and place them in the `./pretrained` directory (see the download sketch after the list):
- [sam2_hiera_large.pt](https://huggingface.co/facebook/sam2-hiera-large)
- [InternVL2_5-4B](https://huggingface.co/OpenGVLab/InternVL2_5-4B)

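One way to fetch both is via `huggingface_hub`; this assumes the default repository layouts on the Hub, so adjust filenames or paths if they differ.

```python
# Download the pretrained weights into ./pretrained; assumes the default Hub repo layouts.
from huggingface_hub import hf_hub_download, snapshot_download

# Single checkpoint file from the SAM-2 repository.
hf_hub_download(
    repo_id="facebook/sam2-hiera-large",
    filename="sam2_hiera_large.pt",
    local_dir="./pretrained",
)

# Full InternVL2.5-4B repository (config, tokenizer, weights).
snapshot_download(
    repo_id="OpenGVLab/InternVL2_5-4B",
    local_dir="./pretrained/InternVL2_5-4B",
)
```
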
</details>

<details open>
<summary>Data Preparation</summary>

(TODO) Please download the training datasets and place them in the `data` directory. The download link is [here](https://huggingface.co/datasets/Dense-World/Sa2VA-Training).

</details>

<details open>
<summary>Training Script</summary>

Please run the following script to train:
```bash
> bash tools/dist.sh train projects/llava_sam2/configs/sa2va_4b.py 8
```
</details>

156
-
157
- ## References
158
- If you find this repository useful, please consider referring the following paper:
159
- ```
160
- @article{sa2va,
161
- title={Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos},
162
- author={Yuan, Haobo and Li, Xiangtai and Zhang, Tao and Huang, Zilong and Xu, Shilin and Ji, Shunping and Tong, Yunhai and Qi, Lu and Feng, Jiashi and Yang, Ming-Hsuan},
163
- journal={arXiv},
164
- year={2025}
165
- }
166
- ```