<div align="center">
[\[🏠Homepage\]](https://osatlas.github.io) [\[💻Code\]](https://github.com/OS-Copilot/OS-Atlas) [\[🚀Quick Start\]](#quick-start) [\[📝Paper\]](https://arxiv.org/abs/2410.23218) [\[🤗Models\]](https://huggingface.co/collections/OS-Copilot/os-atlas-67246e44003a1dfcc5d0d045) [\[🤗Data\]](https://huggingface.co/datasets/OS-Copilot/OS-Atlas-data) [\[🤗ScreenSpot-v2\]](https://huggingface.co/datasets/OS-Copilot/ScreenSpot-v2)
</div>
`OS-Atlas-Action-4B` is a GUI action model finetuned from OS-Atlas-Base-4B. Given a system prompt, a set of basic and custom actions, and a task instruction, the model generates its reasoning (`thought`) and the next step to execute (`action`).
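For illustration, a response containing both parts can be split with a small helper. The `thought:`/`action:` line layout used below is an assumption made for this sketch, not the model's documented output format:

```python
def parse_response(text: str) -> tuple[str, str]:
    """Split a model response into its reasoning and action parts.

    Assumes an illustrative "thought: ... / action: ..." line layout;
    the actual OS-Atlas output format may differ.
    """
    thought, action = "", ""
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.lower().startswith("thought:"):
            thought = stripped.split(":", 1)[1].strip()
        elif stripped.lower().startswith("action:"):
            action = stripped.split(":", 1)[1].strip()
    return thought, action


response = "thought: locate the search box\naction: CLICK [[120, 58]]"
print(parse_response(response))  # → ('locate the search box', 'CLICK [[120, 58]]')
```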
Note that the released `OS-Atlas-Pro-4B` model is described in Section 5.4 of the paper. Compared to the OS-Atlas models in Tables 4 and 5, the Pro model demonstrates superior generalizability and performance. Critically, it is not constrained to specific tasks or training datasets merely to satisfy particular experimental conditions such as OOD and SFT. Releasing the Pro model also spares us from uploading more than 20 distinct model checkpoints to Hugging Face.
### Installation
To use `OS-Atlas-Action-4B`, first install the necessary dependencies:
```shell
# torch/torchvision are imported by the inference example below;
# transformers is assumed here for loading the model itself
pip install torch torchvision transformers
```
For additional dependencies, please refer to the InternVL2 documentation.

### Example Inference Code
First download the [example image](https://github.com/OS-Copilot/OS-Atlas/blob/main/examples/images/action_example_1.jpg) and save it to the current directory.

Below is an example of how to perform inference using the model:
```python
import torch
import torchvision.transforms as T