---
task_categories:
- image-segmentation
- object-detection
- robotics
- zero-shot-object-detection
size_categories:
- n>1T
configs:
- config_name: MegaPose-ShapeNetCore
  data_files: MegaPose-ShapeNetCore/*.tar
- config_name: MegaPose-GSO
  data_files: MegaPose-GSO/*.tar
---
# BOP: Benchmark for 6D Object Pose Estimation

The goal of BOP is to capture the state of the art in 6DoF object pose estimation and related tasks such as 2D object detection and segmentation. An accurate, fast, robust, scalable, and easy-to-train method that solves this task would have a significant impact on application fields such as robotics and augmented reality.

Homepage: https://bop.felk.cvut.cz/

BOP Toolkit: https://github.com/thodan/bop_toolkit/

<details><summary>Downloading datasets</summary>

#### Option 1: Using `huggingface_hub`:

a. Install the library:
```
pip install --upgrade huggingface_hub
```
b. Download the dataset:
```
from huggingface_hub import snapshot_download

dataset_name = "hope"
local_dir = "./datasets"

snapshot_download(repo_id="bop-benchmark/datasets",
                  allow_patterns=f"{dataset_name}/*.zip",
                  repo_type="dataset",
                  local_dir=local_dir)
```
If you want to download the entire BOP collection (~3 TB), remove the `allow_patterns` argument. More options are available in the [official documentation](https://huggingface.co/docs/huggingface_hub/main/en/guides/download).
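
For example, a full mirror of the repository could be fetched with a call like the following (a minimal sketch; make sure `local_dir` points to a disk with roughly 3 TB of free space):
```
from huggingface_hub import snapshot_download

# Without allow_patterns, every file in the dataset repository (~3 TB) is downloaded.
snapshot_download(repo_id="bop-benchmark/datasets",
                  repo_type="dataset",
                  local_dir="./datasets")
```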


#### Option 2: Using `huggingface_hub[cli]`:

a. Install the library:
```
pip install -U "huggingface_hub[cli]"
```
b. Download the dataset:
```
export LOCAL_DIR=./datasets
export DATASET_NAME=hope

huggingface-cli download bop-benchmark/datasets --include "$DATASET_NAME/*.zip" --local-dir $LOCAL_DIR --repo-type=dataset  
```
Remove the `--include "$DATASET_NAME/*.zip"` argument to download the entire BOP datasets (~3 TB). More options are available in the [official documentation](https://huggingface.co/docs/huggingface_hub/main/en/guides/download).

#### Option 3: Using `wget`:

A `wget` command similar to the one on the [BOP website](https://bop.felk.cvut.cz/datasets/) can be used to download the datasets from the Hugging Face hub:
```
export SRC=https://huggingface.co/datasets/bop-benchmark/datasets/resolve/main

wget $SRC/lm/lm_base.zip         # Base archive 
wget $SRC/lm/lm_models.zip       # 3D object models
wget $SRC/lm/lm_test_all.zip     # All test images ("_bop19" for a subset)
wget $SRC/lm/lm_train_pbr.zip    # PBR training images 
```
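
The same individual archives can also be fetched from Python with `hf_hub_download` (a minimal sketch, reusing the `lm` paths listed above):
```
from huggingface_hub import hf_hub_download

# Download a single archive, e.g. the base archive of the Linemod (lm) dataset.
hf_hub_download(repo_id="bop-benchmark/datasets",
                filename="lm/lm_base.zip",
                repo_type="dataset",
                local_dir="./datasets")
```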

Datasets are stored in `.zip` format. You can extract them using the following command:
```
bash scripts/extract_bop.sh
```
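
If the helper script is not available in your checkout, a minimal Python sketch along these lines (assuming the archives were downloaded to `./datasets`) can unpack them as well:
```
from pathlib import Path
import zipfile

local_dir = Path("./datasets")  # folder used for downloading

# Extract every single-part .zip archive next to where it was downloaded.
# Note: split archives (.z01, .z02, ...) must be recombined first (see the FAQ).
for archive in local_dir.rglob("*.zip"):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(archive.parent)
```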

If you are running on a machine with high bandwidth, you can increase your download speed by installing `hf_transfer` and setting the following environment variable:
```
pip install huggingface_hub[hf_transfer]
export HF_HUB_ENABLE_HF_TRANSFER=1
```
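
The same switch can be flipped from Python, e.g. at the top of a download script (a minimal sketch; note that, as far as we know, the variable must be set before `huggingface_hub` is imported):
```
import os

# Enable hf_transfer before importing huggingface_hub.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download

snapshot_download(repo_id="bop-benchmark/datasets",
                  allow_patterns="hope/*.zip",
                  repo_type="dataset",
                  local_dir="./datasets")
```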

</details>

<details><summary>Uploading datasets</summary>

If you have created a new dataset and want to share it with the BOP community, here is a step-by-step guide to uploading it and opening a pull request on [our Hugging Face hub](https://huggingface.co/datasets/bop-benchmark/datasets/). Feel free to reach out to [email protected] if you have any questions.

Similar to the download process, you can upload the dataset using the `huggingface_hub` library or `huggingface_hub[cli]`. We recommend using `huggingface_hub[cli]` for its simplicity.

#### Option 1: Using `huggingface_hub[cli]`:

a. Install the library:
```
pip install -U "huggingface_hub[cli]"
```

b. Create an access token and log in

Go to [this link](https://huggingface.co/settings/tokens) and generate a token. IMPORTANT: the token must have write access, as shown below:

<img src="./media/token_hf.png" alt="image" width="300">

Then log in with the token:
```
huggingface-cli login
```

Make sure you are a member of the `bop-benchmark` organization by running:
```
huggingface-cli whoami
```
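
The same check can be done from Python with `HfApi.whoami()` (a minimal sketch; the exact structure of the returned dict may differ between `huggingface_hub` versions):
```
from huggingface_hub import HfApi

info = HfApi().whoami()
# "bop-benchmark" should appear among the listed organizations.
print(info["name"])
print([org["name"] for org in info.get("orgs", [])])
```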

c. Upload dataset:

The same command works for both folders and individual files:
```
# Usage:  huggingface-cli upload bop-benchmark/datasets [local_path] [path_in_repo] --repo-type=dataset --create-pr
```
For example, to upload the `hope` dataset:
```
export LOCAL_FOLDER=./datasets/hope
export HF_FOLDER=/hope

huggingface-cli upload bop-benchmark/datasets $LOCAL_FOLDER $HF_FOLDER --repo-type=dataset --create-pr
```

#### Option 2: Using `huggingface_hub`:

a. Install the library:
```
pip install --upgrade huggingface_hub
```
b. Create a pull request:

We recommend organizing the dataset in a folder and then uploading it to the Hugging Face hub. For example, to upload `lmo`:
```
from pathlib import Path

from huggingface_hub import HfApi, CommitOperationAdd

dataset_name = "lmo"
local_dir = Path("./datasets/lmo")

# One CommitOperationAdd per file: upload each local file to
# <dataset_name>/<file_name> in the dataset repository.
operations = []
for file in local_dir.glob("*"):
    add_commit = CommitOperationAdd(
        path_in_repo=f"{dataset_name}/{file.name}",
        path_or_fileobj=str(file),
    )
    operations.append(add_commit)

api = HfApi()
MY_TOKEN = ""  # paste a write-access token from https://huggingface.co/settings/tokens
api.create_commit(repo_id="bop-benchmark/datasets",
                  repo_type="dataset",
                  commit_message=f"adding {dataset_name} dataset",
                  token=MY_TOKEN,
                  operations=operations,
                  create_pr=True)
```
If your dataset is large (> 500 GB), you can upload it in chunks by adding the `multi_commits=True` and `multi_commits_verbose=True` arguments. More options are available in the [official documentation](https://huggingface.co/docs/huggingface_hub/v0.22.2/en/package_reference/hf_api#huggingface_hub.HfApi.create_pull_request).
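
Alternatively, `HfApi.upload_folder` wraps the commit building above into a single call; a minimal sketch (not an official BOP recipe, and best double-checked against the documentation of your `huggingface_hub` version):
```
from huggingface_hub import HfApi

api = HfApi()
# Upload the local folder and open a pull request in one call.
api.upload_folder(repo_id="bop-benchmark/datasets",
                  repo_type="dataset",
                  folder_path="./datasets/lmo",
                  path_in_repo="lmo",
                  create_pr=True)
```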

</details>

<details><summary>FAQ</summary>

#### 1. How to upload a file larger than 50 GB?
Note that Hugging Face limits the size of each file to 50 GB. If your archive is larger, you can split it into smaller parts:
```
zip -s 50g input.zip --out output.zip
```
This command splits `input.zip` into multiple 50 GB parts: `output.zip`, `output.z01`, `output.z02`, ... You can then reassemble or extract them using one of the following commands:
```
# option 1: combine 
zip -s0 output.zip --out input.zip

# option 2: using 7z to unzip directly
7z x output.zip
```
#### 2. How to increase download speed?
If you are running on a machine with high bandwidth, you can increase your download speed by installing `hf_transfer` and setting the following environment variable:
```
pip install huggingface_hub[hf_transfer]
export HF_HUB_ENABLE_HF_TRANSFER=1
```
</details>