harpreetsahota committed
Commit cf80927 · verified · 1 Parent(s): 65ea52a

Update README.md

Files changed (1): README.md +73 -92
README.md CHANGED
pretty_name: ShowUI_Web
tags:
- fiftyone
- visual-agents
- gui-grounding
- os-agents
- image
- object-detection
dataset_summary: '
 
  # Note: other available arguments include ''max_samples'', etc

  dataset = load_from_hub("Voxel51/ShowUI_Web")


  # Launch the App
 
# Dataset Card for ShowUI_Web

![image/png](showui_web.gif)

This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 21988 samples.
 
# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = load_from_hub("Voxel51/ShowUI_Web")

# Launch the App
session = fo.launch_app(dataset)
```

# Web Dataset from ShowUI
95
 
## Dataset Details

### Dataset Description

The Web dataset is a custom-collected corpus of web interface screenshots and element annotations created specifically for training GUI visual agents. It focuses on visually rich UI elements across 22 representative website scenarios (including Airbnb, Booking, AMD, and Apple), deliberately filtering out static text elements to concentrate on interactive components such as buttons and checkboxes. This curation strategy is based on the observation that most Vision-Language Models already possess strong OCR capabilities, making visually interactive elements more valuable for training.

- **Curated by:** Show Lab, National University of Singapore and Microsoft
- **Language(s) (NLP):** en
- **License:** Apache-2.0

### Dataset Sources

- **Repository:** https://github.com/showlab/ShowUI (main project repository) and https://huggingface.co/datasets/showlab/ShowUI-web
- **Paper:** https://arxiv.org/abs/2411.17465
## Uses

### Direct Use

The dataset is designed for training vision-language-action models for GUI visual agents operating in web environments. It can be used for:

- Training models to identify interactive UI elements visually
- Web element grounding (mapping textual queries to visual elements)
- Supporting web navigation tasks by providing high-quality visual element references
- Learning to distinguish between different types of web UI components

### Out-of-Scope Use

While not explicitly stated in the paper, this dataset would likely be unsuitable for:

- Training models exclusively for desktop or mobile interfaces
- General image understanding unrelated to UI navigation
- Training models where text-based element identification is the primary goal
- Applications requiring user data or personalized interfaces
 
## Dataset Structure

The dataset contains:

- 22,000 web screenshots across 22 representative website scenarios
- 576,000 annotated UI elements (filtered from an original 926,000)
- An average of 26 elements per screenshot
- Element annotations focused on interactive visual components (buttons, checkboxes, etc.)
- Annotations that exclude static text elements (which comprised about 40% of the original data)
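The counts above are internally consistent; a quick arithmetic check using only the numbers reported in this card:

```python
# Sanity-check the reported statistics (all numbers taken from the card above)
total_elements = 576_000      # elements retained after filtering static text
screenshots = 22_000          # web screenshots collected
original_elements = 926_000   # elements before filtering

elements_per_screenshot = total_elements / screenshots      # ≈ 26.2
fraction_removed = 1 - total_elements / original_elements   # ≈ 0.38

print(round(elements_per_screenshot, 1))  # ~26, matching the stated average
print(round(fraction_removed, 2))         # ~0.38, consistent with "about 40%" static text
```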
## FiftyOne Dataset Structure

**Basic Info:** 21,988 web UI screenshots with interaction annotations

**Core Fields:**
- `instructions`: ListField(StringField) - List of potential text instructions or UI element texts
- `detections`: EmbeddedDocumentField(Detections) containing multiple Detection objects:
  - `label`: Element type (e.g., "ListItem")
  - `bounding_box`: A list of relative bounding box coordinates in [0, 1] in the following format: `[<top-left-x>, <top-left-y>, <width>, <height>]`
  - `text`: Text content of the element
- `keypoints`: EmbeddedDocumentField(Keypoints) containing interaction points:
  - `label`: Element type (e.g., "ListItem")
  - `points`: A list of `(x, y)` keypoints in `[0, 1] x [0, 1]`
  - `text`: Text content associated with the interaction point

The dataset captures web interface elements and interaction points with detailed text annotations for web interaction research. Each element has both its bounding box coordinates and a corresponding interaction point, allowing for both element detection and precise interaction targeting.
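Because both boxes and interaction points use relative `[0, 1]` coordinates, mapping them onto a screenshot is simple arithmetic. A minimal sketch (pure Python; the coordinate conventions follow the schema above, while the image size and example values are hypothetical):

```python
def bbox_to_pixels(bounding_box, img_w, img_h):
    """Convert a relative [top-left-x, top-left-y, width, height] box to pixel coords."""
    tlx, tly, w, h = bounding_box
    return (round(tlx * img_w), round(tly * img_h),
            round(w * img_w), round(h * img_h))

def point_to_pixels(point, img_w, img_h):
    """Convert a relative (x, y) interaction point to pixel coords."""
    x, y = point
    return (round(x * img_w), round(y * img_h))

# Hypothetical 1280x800 screenshot with one annotated element
box = [0.25, 0.10, 0.50, 0.05]
print(bbox_to_pixels(box, 1280, 800))            # (320, 80, 640, 40)
print(point_to_pixels((0.5, 0.125), 1280, 800))  # (640, 100)
```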
 
## Dataset Creation

### Curation Rationale

The authors identified that most existing web datasets contain a high proportion of static text elements (around 40%) that provide limited value for training visual GUI agents, since modern Vision-Language Models already possess strong OCR capabilities. Instead, they focused on collecting visually distinctive interactive elements that would better enable models to learn UI navigation skills. This selective approach prioritizes quality and relevance over raw quantity.

### Source Data

#### Data Collection and Processing

The authors:

1. Developed a custom parser using PyAutoGUI
2. Selected 22 representative website scenarios (including Airbnb, Booking, AMD, and Apple)
3. Collected multiple screenshots per scenario to maximize annotation coverage
4. Initially gathered 926,000 element annotations across 22,000 screenshots
5. Filtered out elements classified as static text, retaining 576,000 visually interactive elements
6. Focused on elements tagged with categories like "Button" or "Checkbox"
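The ShowUI parsing pipeline itself is not reproduced in this card, but the filtering in steps 5-6 amounts to keeping elements whose category tag is interactive. A toy sketch of that idea (the element records and tag names here are hypothetical, not the paper's actual schema):

```python
# Toy sketch of the static-text filtering step; tags and records are
# illustrative placeholders, not the actual ShowUI parser output.
INTERACTIVE_TAGS = {"Button", "Checkbox", "Link", "TextInput", "ListItem"}

elements = [
    {"tag": "StaticText", "text": "Welcome back!"},
    {"tag": "Button", "text": "Sign in"},
    {"tag": "Checkbox", "text": "Remember me"},
]

# Keep only visually interactive elements, dropping static text
interactive = [e for e in elements if e["tag"] in INTERACTIVE_TAGS]
print([e["text"] for e in interactive])  # ['Sign in', 'Remember me']
```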
 
#### Who are the source data producers?

The data was collected from 22 publicly accessible websites across various domains (e-commerce, technology, travel, etc.). The specific screenshots and annotations were produced by the authors of the ShowUI paper (Show Lab, National University of Singapore and Microsoft).
 
## Bias, Risks, and Limitations

The paper doesn't explicitly discuss biases or limitations specific to this dataset, but potential limitations include:

- Coverage limited to 22 website scenarios, which may not represent the full diversity of web interfaces
- Filtering out static text could limit a model's ability to handle text-heavy interfaces
- Potential overrepresentation of popular or mainstream websites compared to niche or specialized interfaces
- The data may not capture the full range of web accessibility features or alternative UI designs

### Recommendations

Users should be aware that this dataset deliberately excludes static text elements, which makes it complementary to text-focused datasets but potentially incomplete on its own. For comprehensive web navigation models, it should be used alongside datasets that include text recognition capabilities. Additionally, researchers may want to evaluate whether the 22 selected website scenarios adequately represent their target application domains.
 
 
 
 
## Citation

**BibTeX:**

```bibtex
@misc{lin2024showui,
      title={ShowUI: One Vision-Language-Action Model for GUI Visual Agent},
      author={Kevin Qinghong Lin and Linjie Li and Difei Gao and Zhengyuan Yang and Shiwei Wu and Zechen Bai and Weixian Lei and Lijuan Wang and Mike Zheng Shou},
      year={2024},
      eprint={2411.17465},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.17465},
}
```

**APA:**

Lin, K. Q., Li, L., Gao, D., Yang, Z., Wu, S., Bai, Z., Lei, S. W., Wang, L., & Shou, M. Z. (2024). ShowUI: One Vision-Language-Action Model for GUI Visual Agent. arXiv preprint arXiv:2411.17465.