Update README.md
README.md
@@ -1138,3 +1138,41 @@ configs:
  - split: train
    path: Tie/train-*
---

# Abstract

The remarkable capabilities of the Segment Anything Model (SAM) for tackling image segmentation tasks in an intuitive and interactive manner have sparked interest in the design of effective visual prompts. Such interest has led to the creation of automated point prompt selection strategies, typically motivated from a feature extraction perspective. However, there is still very little understanding of how appropriate these automated visual prompting strategies are, particularly when compared to humans, across diverse image domains. Additionally, the performance benefits of including such automated visual prompting strategies within the finetuning process of SAM remain unexplored, as does the effect of interpretable factors like the distance between prompt points on segmentation performance. To bridge these gaps, we leverage a recently released visual prompting dataset, PointPrompt, and introduce a number of benchmarking tasks that provide an array of opportunities to improve the understanding of the way human prompts differ from automated ones and what underlying factors make for effective visual prompts. We demonstrate that the resulting segmentation scores obtained by humans are approximately 29% higher than those given by automated strategies and identify potential features that are indicative of prompting performance with R² scores over 0.5. Additionally, we demonstrate that performance when using automated methods can be improved by up to 68% via a finetuning approach. Overall, our experiments not only showcase the existing gap between human prompts and automated methods, but also highlight potential avenues through which this gap can be leveraged to improve effective visual prompt design. Further details along with the dataset links and codes are available at [this link](https://alregib.ece.gatech.edu/pointprompt-a-visual-prompting-dataset-based-on-the-segment-anything-model/).

# Prompting Data

- **Masks**: Contains a list of the binary masks produced for each image, where `masks[i]` contains the mask at timestep `i`
- **Points**: Contains the inclusion and exclusion points. Each image has an outer list of size `(t,)`, where `t` is the number of timesteps for that image, and inner lists of size `(n, 2)`, where `n` is the number of points at a given timestep (see the sketch after this list)
- **Scores**: Contains the scores (mIoU) at each timestep for every image
- **Sorts**: Contains the timestep indices sorted from highest to lowest score
- **Eachround**: Indicates which timesteps belong to each of the two rounds (if they exist). Each entry contains a list of length `t` (the number of timesteps), where a value of `0` corresponds to timesteps that belong to the first round and a value of `1` corresponds to timesteps that belong to the second round
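
The nesting of the point lists can be easier to see in code. Below is a minimal sketch of the layout described in the **Points** entry above; the variable name and coordinate values are purely illustrative and are not actual dataset keys.

```python
# Illustrative layout of one image's inclusion points (values are made up):
points = [                              # outer list: one entry per timestep, length t
    [[120, 85], [200, 140]],            # timestep 0: n = 2 points, each an (x, y) pair
    [[120, 85], [200, 140], [48, 60]],  # timestep 1: n = 3 points
]

for t, step_points in enumerate(points):
    print(f"timestep {t}: {len(step_points)} point(s)")
```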

# Quick usage

- To get the best (highest-score) mask for a given image: `masks[sorts[0]]`
- To get the best set of prompts for that image: `green[sorts[0]]` and `red[sorts[0]]`
- To get which round produced the highest score in that image: `eachround[sorts[0]]`
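
As a minimal sketch, assuming `masks`, `green`, `red`, `scores`, `sorts`, and `eachround` hold one image's per-timestep data as described under Prompting Data, the snippets above can be combined into a small helper (the function name is illustrative and not part of the dataset code):

```python
def best_prompt_summary(masks, green, red, scores, sorts, eachround):
    """Summarize the highest-scoring timestep of one image (illustrative helper)."""
    best_t = sorts[0]                   # timestep index with the highest score
    return {
        "mask": masks[best_t],          # binary mask at that timestep
        "inclusion": green[best_t],     # inclusion points, shape (n, 2)
        "exclusion": red[best_t],       # exclusion points, shape (n, 2)
        "score": scores[best_t],        # mIoU at that timestep
        "round": eachround[best_t],     # 0 = first round, 1 = second round
    }
```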

# Data Download

Sample code to download the dataset:

```python
from datasets import load_dataset

# Download the 'Bird' subset from HuggingFace
pointprompt_bird = load_dataset('gOLIVES/PointPrompt', 'Bird', split='train')

# Print the scores from st1 for the first image
print(pointprompt_bird[0]['st1_scores'])
```
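
Continuing from the snippet above, the loaded split can be inspected further. This is a hedged sketch: apart from `st1_scores`, the per-image field names are not listed on this card, so they are discovered at runtime rather than assumed.

```python
example = pointprompt_bird[0]        # first image in the 'Bird' subset
print(sorted(example.keys()))        # discover the available per-image fields

st1_scores = example['st1_scores']   # per-timestep scores from st1, as printed above
best_t = max(range(len(st1_scores)), key=st1_scores.__getitem__)
print(f"best timestep: {best_t}, score: {st1_scores[best_t]:.3f}")
```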

# Links

**Associated Website**: https://alregib.ece.gatech.edu/

# Citation

If you find this work useful, please include the following citation in your work:

J. Quesada*, Z. Fowler*, M. Alotaibi, M. Prabhushankar, and G. AlRegib, "Benchmarking Human and Automated Prompting in the Segment Anything Model," in IEEE International Conference on Big Data 2024, Washington, DC, USA.