---
title: computer_use_ootb
app_file: app.py
sdk: gradio
sdk_version: 5.13.2
---
<h2 align="center">
<a href="https://computer-use-ootb.github.io">
<img src="./assets/ootb_logo.png" alt="Logo" style="display: block; margin: 0 auto; filter: invert(1) brightness(2);">
</a>
</h2>
<h5 align="center"> If you like our project, please give us a star ⭐ on GitHub for the latest updates.</h5>
<h5 align="center">

[arXiv](https://arxiv.org/abs/2411.10323)
[Project Page](https://computer-use-ootb.github.io)

</h5>
## <img src="./assets/ootb_icon.png" alt="Star" style="height:25px; vertical-align:middle; filter: invert(1) brightness(2);"> Overview
**Computer Use <span style="color:rgb(106, 158, 210)">O</span><span style="color:rgb(111, 163, 82)">O</span><span style="color:rgb(209, 100, 94)">T</span><span style="color:rgb(238, 171, 106)">B</span>**<img src="./assets/ootb_icon.png" alt="Star" style="height:20px; vertical-align:middle; filter: invert(1) brightness(2);"> is an out-of-the-box (OOTB) solution for desktop GUI agents, including API-based (**Claude 3.5 Computer Use**) and locally running models (**<span style="color:rgb(106, 158, 210)">S</span><span style="color:rgb(111, 163, 82)">h</span><span style="color:rgb(209, 100, 94)">o</span><span style="color:rgb(238, 171, 106)">w</span>UI**, **UI-TARS**).
**No Docker** is required, and it supports both **Windows** and **macOS**. OOTB provides a user-friendly interface based on Gradio.
Visit our study on the GUI agent of Claude 3.5 Computer Use [[project page]](https://computer-use-ootb.github.io).
## Update
- **[2025/02/08]** We've added support for [**UI-TARS**](https://github.com/bytedance/UI-TARS). Follow the [Cloud Deployment](https://github.com/bytedance/UI-TARS?tab=readme-ov-file#cloud-deployment) or [vLLM deployment](https://github.com/bytedance/UI-TARS?tab=readme-ov-file#local-deployment-vllm) guides to deploy UI-TARS and run it locally in OOTB.
- **Major Update! [2024/12/04]** **Local Run** is now live! Say hello to [**<span style="color:rgb(106, 158, 210)">S</span><span style="color:rgb(111, 163, 82)">h</span><span style="color:rgb(209, 100, 94)">o</span><span style="color:rgb(238, 171, 106)">w</span>UI**](https://github.com/showlab/ShowUI), an open-source 2B vision-language-action (VLA) model for GUI agents. Now compatible with `"gpt-4o + ShowUI" (~200x cheaper)`* & `"Qwen2-VL + ShowUI" (~30x cheaper)`* for only a few cents per task! <span style="color: grey; font-size: small;">*compared to Claude Computer Use</span>.
- **[2024/11/20]** We've added some examples to help you get hands-on experience with Claude 3.5 Computer Use.
- **[2024/11/19]** Forget about the single-display limit set by Anthropic: you can now use **multiple displays**!
- **[2024/11/18]** We've released a deep analysis of Claude 3.5 Computer Use: [https://arxiv.org/abs/2411.10323](https://arxiv.org/abs/2411.10323).
- **[2024/11/11]** Forget about the low-resolution display limit set by Anthropic: you can now use *any resolution you like* and still keep the **screenshot token cost low**!
- **[2024/11/11]** Both **Windows** and **macOS** platforms are now supported!
- **[2024/10/25]** You can now **Remotely Control** your computer through your mobile device, with **No Mobile App Installation** required! Give it a try and have fun.
## Demo Video
https://github.com/user-attachments/assets/f50b7611-2350-4712-af9e-3d31e30020ee
<div style="display: flex; justify-content: space-around;">
<a href="https://youtu.be/Ychd-t24HZw" target="_blank" style="margin-right: 10px;">
<img src="https://img.youtube.com/vi/Ychd-t24HZw/maxresdefault.jpg" alt="Watch the video" width="48%">
</a>
<a href="https://youtu.be/cvgPBazxLFM" target="_blank">
<img src="https://img.youtube.com/vi/cvgPBazxLFM/maxresdefault.jpg" alt="Watch the video" width="48%">
</a>
</div>
## Getting Started
### 0. Prerequisites
- Install Miniconda on your system through this [link](https://www.anaconda.com/download?utm_source=anacondadocs&utm_medium=documentation&utm_campaign=download&utm_content=topnavalldocs). (**Python Version: >= 3.12**).
- Hardware Requirements (optional, for ShowUI local-run):
- **Windows (CUDA-enabled):** A compatible NVIDIA GPU with CUDA support, >=6GB GPU memory
- **macOS (Apple Silicon):** M1 chip (or newer), >=16GB unified RAM
### 1. Clone the Repository
Open the Conda terminal. (After installing Miniconda, it appears in the Start menu.)
Run the following command in the **Conda terminal**:
```bash
git clone https://github.com/showlab/computer_use_ootb.git
cd computer_use_ootb
```
### 2.1 Install Dependencies
```bash
pip install -r dev-requirements.txt
```
### 2.2 (Optional) Get Prepared for **<span style="color:rgb(106, 158, 210)">S</span><span style="color:rgb(111, 163, 82)">h</span><span style="color:rgb(209, 100, 94)">o</span><span style="color:rgb(238, 171, 106)">w</span>UI** Local-Run
1. Download all files of the ShowUI-2B model via the following command. Ensure the `ShowUI-2B` folder is under the `computer_use_ootb` folder.
```bash
python install_tools/install_showui.py
```
2. Make sure to install the correct GPU version of PyTorch (CUDA, MPS, etc.) on your machine. See [install guide and verification](https://pytorch.org/get-started/locally/).
3. Get API keys for [GPT-4o](https://platform.openai.com/docs/quickstart) or [Qwen-VL](https://help.aliyun.com/zh/dashscope/developer-reference/acquisition-and-configuration-of-api-key). For mainland China users, a free Qwen API trial covering the first 1 million tokens is [available](https://help.aliyun.com/zh/dashscope/developer-reference/tongyi-qianwen-vl-plus-api).
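To verify step 2 above, a minimal sketch that reports which PyTorch device backend (CUDA, MPS, or CPU fallback) your install can see:

```python
# Minimal sketch: report which PyTorch device backend is available.
import torch

def detect_device() -> str:
    """Return the best available torch device name."""
    if torch.cuda.is_available():  # NVIDIA GPU (Windows/Linux)
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():  # Apple Silicon
        return "mps"
    return "cpu"

print(detect_device())
```

If this prints `cpu` on a GPU machine, you likely installed the CPU-only PyTorch wheel; reinstall following the official selector.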
### 2.3 (Optional) Get Prepared for **UI-TARS** Local-Run
1. Follow [Cloud Deployment](https://github.com/bytedance/UI-TARS?tab=readme-ov-file#cloud-deployment) or [VLLM deployment](https://github.com/bytedance/UI-TARS?tab=readme-ov-file#local-deployment-vllm) guides to deploy your UI-TARS server.
2. Test your UI-TARS server with the script `.\install_tools\test_ui-tars_server.py`.
### 2.4 (Optional) Deploy a Qwen Model as the Planner on an SSH Server
1. Clone this project on your SSH server with `git clone`.
2. Run `python computer_use_demo/remote_inference.py` on the server to start the remote inference service.
### 3. Start the Interface ▶️
**Start the OOTB interface:**
```bash
python app.py
```
If you successfully start the interface, you will see two URLs in the terminal:
```bash
* Running on local URL: http://127.0.0.1:7860
* Running on public URL: https://xxxxxxxxxxxxxxxx.gradio.live (Do not share this link with others, or they will be able to control your computer.)
```
> <u>For convenience</u>, we recommend running one or more of the following commands to set your API keys as environment variables before starting the interface, so you don't need to enter the keys manually on each run. On Windows PowerShell (use the `set` command in cmd):
> ```bash
> $env:ANTHROPIC_API_KEY="sk-xxxxx"  # replace with your own key
> $env:QWEN_API_KEY="sk-xxxxx"
> $env:OPENAI_API_KEY="sk-xxxxx"
> ```
> On macOS/Linux, replace `$env:ANTHROPIC_API_KEY` with `export ANTHROPIC_API_KEY` in the commands above.
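As a quick sanity check (a sketch, not part of OOTB), you can verify from Python which of these keys are visible to the current process before launching the app:

```python
# Sketch: report which API keys are set in the current environment.
import os

API_KEYS = ["ANTHROPIC_API_KEY", "QWEN_API_KEY", "OPENAI_API_KEY"]

def missing_keys(keys=API_KEYS) -> list:
    """Return the names of keys that are unset or empty."""
    return [k for k in keys if not os.environ.get(k)]

if __name__ == "__main__":
    absent = missing_keys()
    print("All keys set" if not absent else "Missing: " + ", ".join(absent))
```

Only the keys for the models you actually select need to be set.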
### 4. Control Your Computer from Any Device with Internet Access
- **Computer to be controlled**: the machine with the OOTB software installed.
- **Device sending commands**: the device that opens the web interface.

Open http://localhost:7860/ (if you are controlling the computer itself) or https://xxxxxxxxxxxxxxxxx.gradio.live in your mobile browser for remote control.
Enter the Anthropic API key (you can obtain it through this [website](https://console.anthropic.com/settings/keys)), then give commands to let the AI perform your tasks.
### ShowUI Advanced Settings
We provide a 4-bit quantized ShowUI-2B model for cost-efficient inference (currently **CUDA devices only**). To download the 4-bit quantized ShowUI-2B model:
```bash
python install_tools/install_showui-awq-4bit.py
```
Then, enable the quantized setting in the 'ShowUI Advanced Settings' dropdown menu.
In addition, we provide a slider to quickly adjust the `max_pixel` parameter of the ShowUI model. This controls the visual input size of the model and strongly affects memory usage and inference speed.
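For intuition, Qwen2-VL-style preprocessing (which ShowUI builds on) downscales the screenshot so its pixel count stays within the budget, which in turn bounds the number of visual tokens. The helper below is a simplified sketch of that behavior, not OOTB's actual code:

```python
# Simplified sketch of max_pixel-style downscaling (not OOTB's actual code):
# shrink (height, width) so height * width <= max_pixels, keeping aspect ratio.
import math

def approx_resize(height: int, width: int, max_pixels: int) -> tuple:
    if height * width <= max_pixels:
        return height, width  # already within budget
    scale = math.sqrt(max_pixels / (height * width))
    return max(1, math.floor(height * scale)), max(1, math.floor(width * scale))

# A 4K screenshot squeezed under a ~1M-pixel budget:
print(approx_resize(2160, 3840, 1_000_000))
```

Lowering the slider shrinks the effective resolution, trading grounding precision for less memory and faster inference.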
## GUI Agent Model Zoo
Now, OOTB supports customizing the GUI Agent via the following models:
- **Unified Model**: A unified planner & actor that can both make high-level plans and take low-level control.
- **Planner**: General-purpose LLMs for high-level planning and decision-making.
- **Actor**: Vision-language-action models for low-level control and action command generation.
<div align="center">
<b>Supported GUI Agent Models in OOTB</b>
</div>
<table align="center">
<tbody>
<tr align="center" valign="bottom">
<td>
<b>[API] Unified Model</b>
</td>
<td>
<b>[API] Planner</b>
</td>
<td>
<b>[Local] Planner</b>
</td>
<td>
<b>[API] Actor</b>
</td>
<td>
<b>[Local] Actor</b>
</td>
</tr>
<tr valign="top">
<td>
<ul>
<li><a href="">Claude 3.5 Sonnet</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="">GPT-4o</a></li>
<li><a href="">Qwen2-VL-Max</a></li>
<li><a href="">Qwen2-VL-2B(ssh)</a></li>
<li><a href="">Qwen2-VL-7B(ssh)</a></li>
<li><a href="">Qwen2.5-VL-7B(ssh)</a></li>
<li><a href="">Deepseek V3 (soon)</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="">Qwen2-VL-2B</a></li>
<li><a href="">Qwen2-VL-7B</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="https://github.com/showlab/ShowUI">ShowUI</a></li>
<li><a href="https://huggingface.co/bytedance-research/UI-TARS-7B-DPO">UI-TARS-7B/72B-DPO (soon)</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="https://github.com/showlab/ShowUI">ShowUI</a></li>
<li><a href="https://huggingface.co/bytedance-research/UI-TARS-7B-DPO">UI-TARS-7B/72B-DPO</a></li>
</ul>
</td>
</tr>
</tbody>
</table>
> [API] models call remotely hosted LLMs through an API; [Local] models run inference on your own device with no API costs.
## 🖥️ Supported Systems
- **Windows** (Claude ✅, ShowUI ✅)
- **macOS** (Claude ✅, ShowUI ✅)
## OOTB Interface
<div style="display: flex; align-items: center; gap: 10px;">
<figure style="text-align: center;">
<img src="./assets/gradio_interface.png" alt="Desktop Interface" style="width: auto; object-fit: contain;">
</figure>
</div>
## ⚠️ Risks
- **Potentially dangerous operations by the model**: Model performance is still limited, and the model may generate unintended or potentially harmful outputs. We recommend continuously monitoring the AI's actions.
- **Cost control**: Each task may cost a few dollars when using Claude 3.5 Computer Use.
## Roadmap
- [ ] **Explore available features**
  - [ ] The Claude API seems to be unstable when solving tasks. We are investigating possible causes: resolution, the types of actions required, OS platform, or the planning mechanism. Any thoughts or comments are welcome.
- [ ] **Interface Design**
  - [x] **Support for Gradio** ✨
- [ ] **Simpler Installation**
  - [ ] **More Features**...
- [ ] **Platform**
- [x] **Windows**
- [x] **macOS**
- [x] **Mobile** (Send command)
- [ ] **Mobile** (Be controlled)
- [ ] **Support for More MLLMs**
  - [x] **Claude 3.5 Sonnet**
- [x] **GPT-4o**
- [x] **Qwen2-VL**
- [ ] **Local MLLMs**
- [ ] ...
- [ ] **Improved Prompting Strategy**
  - [ ] Optimize prompts for cost-efficiency.
- [x] **Improved Inference Speed**
- [x] Support int4 Quantization.
## Join Discussion
We welcome discussion to continuously improve the user experience of Computer Use - OOTB. Reach us via the [**Discord Channel**](https://discord.gg/vMMJTSew37) or the WeChat QR code below!
<div style="display: flex; flex-direction: row; justify-content: space-around;">
<!-- <img src="./assets/wechat_2.jpg" alt="gradio_interface" width="30%"> -->
<img src="./assets/wechat_3.jpg" alt="gradio_interface" width="30%">
</div>
<div style="height: 30px;"></div>
<hr>
<a href="https://computer-use-ootb.github.io">
<img src="./assets/ootb_logo.png" alt="Logo" width="30%" style="display: block; margin: 0 auto; filter: invert(1) brightness(2);">
</a>