---
license: apache-2.0
task_categories:
- image-text-to-text
---

# UI-Vision: A Desktop-centric GUI Benchmark for Visual Perception and Interaction

<div style="display: flex; gap: 10px;">
  <a href="https://github.com/uivision/UI-Vision">
    <img src="https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white" alt="github" class="img-fluid" />
  </a>
  <a href="https://arxiv.org/abs/2503.15661">
    <img src="https://img.shields.io/badge/arXiv-paper-b31b1b.svg?style=for-the-badge" alt="paper" class="img-fluid" />
  </a>
  <a href="https://uivision.github.io/">
    <img src="https://img.shields.io/badge/website-%23b31b1b.svg?style=for-the-badge&logo=globe&logoColor=white" alt="website" class="img-fluid" />
  </a>
</div>

## Introduction

Autonomous agents that navigate Graphical User Interfaces (GUIs) to automate tasks like document editing and file management can greatly enhance computer workflows. While existing research has focused on online settings, desktop environments, which are critical for many professional and everyday tasks, remain underexplored due to data-collection challenges and licensing issues. We introduce UI-Vision, the first comprehensive, license-permissive benchmark for offline, fine-grained evaluation of computer-use agents in real-world desktop environments. Unlike online benchmarks, UI-Vision provides: (i) dense, high-quality annotations of human demonstrations, including bounding boxes, UI labels, and action trajectories (clicks, drags, and keyboard inputs) across 83 software applications, and (ii) three tasks of fine-to-coarse granularity (Element Grounding, Layout Grounding, and Action Prediction) with well-defined metrics to rigorously evaluate agents' performance in desktop environments. Our evaluation reveals critical limitations in state-of-the-art models like UI-TARS-72B, including issues with understanding professional software, spatial reasoning, and complex actions like drag-and-drop. These findings highlight the challenges in developing fully autonomous computer-use agents. By releasing UI-Vision as open source, we aim to advance the development of more capable agents for real-world desktop tasks.

<div align="center">
  <img src="assets/data_pipeline.png" alt="Dataset Overview" width="1000" height="450" class="img-fluid" />
</div>

## Data Structure

To get started with UI-Vision:

1. Clone the repository to get the images and annotations (or use the `huggingface_hub` sketch after the directory tree below):
```bash
git clone https://huggingface.co/datasets/ServiceNow/ui-vision
```

2. The repository is organized as follows:

```
uivision/
├── annotations/                        # Dataset annotations
│   ├── element_grounding/
│   │   ├── element_grounding_basic.json
│   │   ├── element_grounding_functional.json
│   │   └── element_grounding_spatial.json
│   └── layout_grounding/
│       └── layout_grounding.json
├── images/                             # Dataset images
│   ├── element_grounding/
│   └── layout_grounding/
├── assets/                             # HuggingFace README assets
└── README.md
```
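
To make these paths concrete, here is a minimal sketch that fetches the repository with the standard `huggingface_hub` client (an alternative to the `git clone` above) and opens one element-grounding split. The annotation schema is not documented here, so the record access at the end is an assumption for illustration; inspect the JSON yourself for the authoritative layout.

```python
# Minimal sketch: download the dataset and open one annotation split.
# snapshot_download fetches the whole repository (annotations + images).
import json
from pathlib import Path

from huggingface_hub import snapshot_download

root = Path(snapshot_download(
    repo_id="ServiceNow/ui-vision",
    repo_type="dataset",
    local_dir="ui-vision",  # arbitrary local folder name
))

ann_path = root / "annotations" / "element_grounding" / "element_grounding_basic.json"
with open(ann_path) as f:
    records = json.load(f)

print(f"Loaded {len(records)} element-grounding records")
# The field layout below is an assumption, not the documented schema:
sample = records[0] if isinstance(records, list) else next(iter(records.values()))
print(sample)
```

The same pattern applies to the other splits by swapping in `element_grounding_functional.json`, `element_grounding_spatial.json`, or the layout-grounding annotation path.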

## Usage

To run the models:

1. Visit our [GitHub repository](https://github.com/uivision/UI-Vision) for the latest code.
2. Make sure to specify the correct paths to the following (a minimal scoring sketch follows this list):
   - Annotation files
   - Task image folders
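
For the grounding tasks, a common way to score a prediction is to check whether the model's predicted click point falls inside the ground-truth bounding box. The sketch below assumes an `(x1, y1, x2, y2)` box convention; defer to the released evaluation code for the exact metric and format.

```python
# Hedged sketch of point-in-box accuracy for grounding-style evaluation.
# The (x1, y1, x2, y2) box convention is an assumption; check the official code.
from typing import Sequence, Tuple

def point_in_box(point: Tuple[float, float], box: Sequence[float]) -> bool:
    """True if point (x, y) lies inside box = (x1, y1, x2, y2)."""
    x, y = point
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def grounding_accuracy(points, boxes) -> float:
    """Fraction of predicted points landing inside their target boxes."""
    hits = sum(point_in_box(p, b) for p, b in zip(points, boxes))
    return hits / len(boxes)

# One hit, one miss -> 0.5
print(grounding_accuracy([(120, 48), (10, 10)],
                         [(100, 30, 160, 60), (200, 200, 240, 240)]))
```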

## Citation

If you find this work useful in your research, please consider citing:

```bibtex
@misc{nayak2025uivisiondesktopcentricguibenchmark,
      title={UI-Vision: A Desktop-centric GUI Benchmark for Visual Perception and Interaction},
      author={Shravan Nayak and Xiangru Jian and Kevin Qinghong Lin and Juan A. Rodriguez and
              Montek Kalsi and Rabiul Awal and Nicolas Chapados and M. Tamer Özsu and
              Aishwarya Agrawal and David Vazquez and Christopher Pal and Perouz Taslakian and
              Spandana Gella and Sai Rajeswar},
      year={2025},
      eprint={2503.15661},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.15661},
}
```

## License

This project is licensed under the MIT License.

## Intended Usage

This dataset is intended to be used by the community to evaluate and analyze their models. We are continuously striving to improve it. If you have any suggestions or problems regarding the dataset, please contact the authors. We also welcome opt-out requests from users who want their data removed; to submit one, open a PR or contact the authors directly.