prince-canuma committed (verified) · Commit c9b7313 · Parent: 1a2084b

Create README.md

Files changed (1): README.md (+204, −0)

---
tags:
- GUI agents
- vision-language-action model
- computer use
base_model:
- Qwen/Qwen2-VL-2B-Instruct
license: mit
---
[GitHub](https://github.com/showlab/ShowUI/tree/main) | [arXiv](https://arxiv.org/abs/2411.17465) | [HF Paper](https://huggingface.co/papers/2411.17465) | [Spaces](https://huggingface.co/spaces/showlab/ShowUI) | [Datasets](https://huggingface.co/datasets/showlab/ShowUI-desktop-8K) | [Quick Start](https://huggingface.co/showlab/ShowUI-2B)
<img src="examples/showui.jpg" alt="ShowUI" width="640">

ShowUI is a lightweight (2B) vision-language-action model designed for GUI agents.

## 🤗 Try our HF Space Demo
https://huggingface.co/spaces/showlab/ShowUI


## ⭐ Quick Start

1. **Load model**
```python
import ast
import requests
import torch
from io import BytesIO
from PIL import Image, ImageDraw
from IPython.display import display
from qwen_vl_utils import process_vision_info
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor

def draw_point(image_input, point=None, radius=5):
    # Accept a URL, a local path, or an already-loaded PIL image.
    if isinstance(image_input, str):
        image = Image.open(BytesIO(requests.get(image_input).content)) if image_input.startswith('http') else Image.open(image_input)
    else:
        image = image_input

    if point:
        # The model outputs relative coordinates (0-1); scale to pixels.
        x, y = point[0] * image.width, point[1] * image.height
        ImageDraw.Draw(image).ellipse((x - radius, y - radius, x + radius, y + radius), fill='red')
    display(image)
    return

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "showlab/ShowUI-2B",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

min_pixels = 256*28*28
max_pixels = 1344*28*28

processor = AutoProcessor.from_pretrained("showlab/ShowUI-2B", min_pixels=min_pixels, max_pixels=max_pixels)
```
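
The grounding and navigation steps below repeat the same inference boilerplate (chat template → vision preprocessing → generation → decoding). If you prefer, you can wrap it once; this helper is just a refactoring sketch of the exact calls used in the steps below (`run_inference` is an illustrative name, not part of the ShowUI API):
```python
def run_inference(messages, max_new_tokens=128):
    # Same pipeline as in the UI Grounding / UI Navigation examples below.
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text], images=image_inputs, videos=video_inputs,
        padding=True, return_tensors="pt",
    ).to("cuda")
    generated_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
    return processor.batch_decode(
        trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
    )[0]
```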

2. **UI Grounding**
```python
img_url = 'examples/web_dbd7514b-9ca3-40cd-b09a-990f7b955da1.png'
query = "Nahant"


_SYSTEM = "Based on the screenshot of the page, I give a text description and you give its corresponding location. The coordinate represents a clickable location [x, y] for an element, which is a relative coordinate on the screenshot, scaled from 0 to 1."
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": _SYSTEM},
            {"type": "image", "image": img_url, "min_pixels": min_pixels, "max_pixels": max_pixels},
            {"type": "text", "text": query}
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True,
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]

click_xy = ast.literal_eval(output_text)
# [0.73, 0.21]

draw_point(img_url, click_xy, 10)
```

This visualizes the grounding result; the red point marks the predicted relative [x, y] location.

![download](https://github.com/user-attachments/assets/8fe2783d-05b6-44e6-a26c-8718d02b56cb)
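
`ast.literal_eval` assumes the model returned a well-formed Python literal such as `[0.73, 0.21]`. If you want to guard against an occasional malformed generation, here is a minimal parsing sketch (the regex fallback and the clamping to [0, 1] are defensive assumptions, not part of the model's output contract):
```python
import re

def parse_click(text):
    # Expected output: a Python-literal pair like "[0.73, 0.21]".
    try:
        xy = ast.literal_eval(text)
    except (ValueError, SyntaxError):
        # Fallback: take the first bracketed number pair in the text.
        m = re.search(r'\[\s*([\d.]+)\s*,\s*([\d.]+)\s*\]', text)
        if m is None:
            return None
        xy = [float(m.group(1)), float(m.group(2))]
    # Clamp to the valid relative range [0, 1].
    return [min(max(float(v), 0.0), 1.0) for v in xy]
```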

3. **UI Navigation**
- Set up the system prompt.
```python
_NAV_SYSTEM = """You are an assistant trained to navigate the {_APP} screen.
Given a task instruction, a screen observation, and an action history sequence,
output the next action and wait for the next observation.
Here is the action space:
{_ACTION_SPACE}
"""

_NAV_FORMAT = """
Format the action as a dictionary with the following keys:
{'action': 'ACTION_TYPE', 'value': 'element', 'position': [x,y]}

If value or position is not applicable, set it as `None`.
Position might be [[x1,y1], [x2,y2]] if the action requires a start and end position.
Position represents the relative coordinates on the screenshot and should be scaled to a range of 0-1.
"""

action_map = {
    'web': """
1. `CLICK`: Click on an element, value is not applicable and the position [x,y] is required.
2. `INPUT`: Type a string into an element, value is a string to type and the position [x,y] is required.
3. `SELECT`: Select a value for an element, value is not applicable and the position [x,y] is required.
4. `HOVER`: Hover on an element, value is not applicable and the position [x,y] is required.
5. `ANSWER`: Answer the question, value is the answer and the position is not applicable.
6. `ENTER`: Enter operation, value and position are not applicable.
7. `SCROLL`: Scroll the screen, value is the direction to scroll and the position is not applicable.
8. `SELECT_TEXT`: Select some text content, value is not applicable and position [[x1,y1], [x2,y2]] is the start and end position of the select operation.
9. `COPY`: Copy the text, value is the text to copy and the position is not applicable.
""",

    'phone': """
1. `INPUT`: Type a string into an element, value is a string to type and the position [x,y] is required.
2. `SWIPE`: Swipe the screen, value is not applicable and the position [[x1,y1], [x2,y2]] is the start and end position of the swipe operation.
3. `TAP`: Tap on an element, value is not applicable and the position [x,y] is required.
4. `ANSWER`: Answer the question, value is the status (e.g., 'task complete') and the position is not applicable.
5. `ENTER`: Enter operation, value and position are not applicable.
"""
}
```
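
The mobile action space plugs into the same template. For example (`phone_system_prompt` is just an illustrative name):
```python
# Build the navigation prompt for the mobile split instead of the web split.
phone_system_prompt = _NAV_SYSTEM.format(_APP='phone', _ACTION_SPACE=action_map['phone']) + _NAV_FORMAT
```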

```python
img_url = 'examples/chrome.png'
split = 'web'
system_prompt = _NAV_SYSTEM.format(_APP=split, _ACTION_SPACE=action_map[split]) + _NAV_FORMAT
query = "Search the weather for New York city."

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": system_prompt},
            {"type": "text", "text": f'Task: {query}'},
            # {"type": "text", "text": PAST_ACTION},  # optional: prior actions as history
            {"type": "image", "image": img_url, "min_pixels": min_pixels, "max_pixels": max_pixels},
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True,
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]

print(output_text)
# {'action': 'CLICK', 'value': None, 'position': [0.49, 0.42]},
# {'action': 'INPUT', 'value': 'weather for New York city', 'position': [0.49, 0.42]},
# {'action': 'ENTER', 'value': None, 'position': None}
```
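
The raw output is one Python-literal dictionary per line (with trailing commas, as in the sample above). Below is a minimal sketch for turning it into structured actions and carrying them forward as the action history the system prompt mentions; feeding the history back through the commented-out `PAST_ACTION` slot is an assumption about usage, not a documented interface:
```python
def parse_actions(output_text):
    # Each non-empty line is expected to be a dict literal like
    # {'action': 'CLICK', 'value': None, 'position': [0.49, 0.42]},
    actions = []
    for line in output_text.splitlines():
        line = line.strip().rstrip(',')
        if line:
            actions.append(ast.literal_eval(line))
    return actions

history = parse_actions(output_text)
# On the next turn, the history could be passed back as text, e.g.:
PAST_ACTION = 'Action history: ' + str(history)
```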

![download](https://github.com/user-attachments/assets/624097ea-06f2-4c8f-83f6-b6b9ee439c0c)


If you find our work helpful, please consider citing our paper.

```
@misc{lin2024showui,
      title={ShowUI: One Vision-Language-Action Model for GUI Visual Agent},
      author={Kevin Qinghong Lin and Linjie Li and Difei Gao and Zhengyuan Yang and Shiwei Wu and Zechen Bai and Weixian Lei and Lijuan Wang and Mike Zheng Shou},
      year={2024},
      eprint={2411.17465},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.17465},
}
```