nv-nguyen committed (verified)
Commit 629e526 · Parent(s): 8815531

Update README.md

Files changed (1):
1. README.md (+19 -23)
README.md CHANGED
@@ -33,14 +33,14 @@ b. Download the dataset:
 from huggingface_hub import snapshot_download
 
 dataset_name = "hope"
-local_dir = "./datasets"
 
-snapshot_download(repo_id="bop-benchmark/datasets",
-                  allow_patterns=f"{dataset_name}/*zip",
                   repo_type="dataset",
                   local_dir=local_dir)
 ```
-If you want to download the entire BOP datasets (~3TB), please remove the `allow_patterns` argument. More options are available in the [official documentation](https://huggingface.co/docs/huggingface_hub/main/en/guides/download).
 
 
 #### Option 2: Using `huggingface_hub[cli]`:
@@ -51,28 +51,24 @@ pip install -U "huggingface_hub[cli]"
 ```
 b. Download the dataset:
 ```
-export LOCAL_DIR=./datasets
 export DATASET_NAME=hope
 
-huggingface-cli download bop-benchmark/datasets --include "$DATASET_NAME/*.zip" --local-dir $LOCAL_DIR --repo-type=dataset
 ```
-Remove the `--include "$DATASET_NAME/*.zip"` argument to download the entire BOP datasets (~3TB). More options are available in the [official documentation](https://huggingface.co/docs/huggingface_hub/main/en/guides/download).
 
 #### Option 3: Using `wget`:
 
 A similar `wget` command as on the [BOP website](https://bop.felk.cvut.cz/datasets/) can be used to download the dataset from the Hugging Face Hub:
 ```
-export SRC=https://huggingface.co/datasets/bop-benchmark/datasets/resolve/main
 
-wget $SRC/lm/lm_base.zip       # Base archive
-wget $SRC/lm/lm_models.zip     # 3D object models
-wget $SRC/lm/lm_test_all.zip   # All test images ("_bop19" for a subset)
-wget $SRC/lm/lm_train_pbr.zip  # PBR training images
-```
-
-Datasets are stored in `.zip` format. You can extract them using the following command:
-```
-bash scripts/extract_bop.sh
 ```
 
 If you are running on a machine with high bandwidth, you can increase your download speed by adding the following environment variable:
@@ -114,14 +110,14 @@ c. Upload dataset:
 
 The command works for both folders and specific files:
 ```
-# Usage: huggingface-cli upload bop-benchmark/datasets [local_path] [path_in_repo] --repo-type=dataset --create-pr
 ```
 For example, to upload the hope dataset:
 ```
-export LOCAL_FOLDER=./datasets/hope
-export HF_FOLDER=/hope
 
-huggingface-cli upload bop-benchmark/datasets $LOCAL_FOLDER $HF_FOLDER --repo-type=dataset --create-pr
 ```
 
 #### Option 2: Using `huggingface_hub`:
@@ -142,7 +138,7 @@ local_dir = "./datasets/lmo"
 operations = []
 for file in local_dir.glob("*"):
     add_commit = CommitOperationAdd(
-        path_in_repo=f"/{dataset_name}",
         path_or_fileobj=local_dir,
     )
     operations.append(add_commit)
@@ -150,7 +146,7 @@ for file in local_dir.glob("*"):
 
 api = HfApi()
 MY_TOKEN = # get from https://huggingface.co/settings/tokens
-api.create_commit(repo_id="bop-benchmark/datasets",
                   repo_type="dataset",
                   commit_message=f"adding {dataset_name} dataset",
                   token=MY_TOKEN,

 from huggingface_hub import snapshot_download
 
 dataset_name = "hope"
+local_dir = "./bop_datasets"
 
+snapshot_download(repo_id=f"bop-benchmark/{dataset_name}",
+                  allow_patterns="*zip",
                   repo_type="dataset",
                   local_dir=local_dir)
 ```
+More options are available in the [official documentation](https://huggingface.co/docs/huggingface_hub/main/en/guides/download).
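With this change, Option 1 reduces to a single `snapshot_download` call. A self-contained sketch of the updated snippet (the repo id `bop-benchmark/hope` follows the new per-dataset layout; the download is guarded so importing the file triggers no network traffic):

```python
from huggingface_hub import snapshot_download

dataset_name = "hope"                      # any BOP dataset name
local_dir = "./bop_datasets"               # archives are written here
repo_id = f"bop-benchmark/{dataset_name}"  # new per-dataset repo layout

if __name__ == "__main__":
    # Fetch only this dataset's zip archives, skipping everything else.
    snapshot_download(repo_id=repo_id,
                      allow_patterns="*zip",
                      repo_type="dataset",
                      local_dir=local_dir)
```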
 
  #### Option 2: Using `huggingface_hub[cli]`:
 
 ```
 b. Download the dataset:
 ```
+export LOCAL_DIR=./bop_datasets
 export DATASET_NAME=hope
 
+huggingface-cli download bop-benchmark/$DATASET_NAME --include "*.zip" --local-dir $LOCAL_DIR --repo-type=dataset
 ```
+More options are available in the [official documentation](https://huggingface.co/docs/huggingface_hub/main/en/guides/download).
 
 #### Option 3: Using `wget`:
 
 A similar `wget` command as on the [BOP website](https://bop.felk.cvut.cz/datasets/) can be used to download the dataset from the Hugging Face Hub:
 ```
+mkdir $DATASET_NAME; cd $DATASET_NAME
+export SRC=https://huggingface.co/datasets/bop-benchmark/${DATASET_NAME}/resolve/main
 
+wget $SRC/${DATASET_NAME}_base.zip       # Base archive with dataset info, camera parameters, etc.
+wget $SRC/${DATASET_NAME}_models.zip     # 3D object models.
+wget $SRC/${DATASET_NAME}_test_all.zip   # All test images ("_bop19" for a subset used in the BOP Challenge 2019).
+wget $SRC/${DATASET_NAME}_train_pbr.zip  # PBR training images (rendered with BlenderProc4BOP).
 ```
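The four `wget` targets above follow one naming scheme: `{dataset}_{part}.zip` at the root of the per-dataset repo. A small Python sketch (with `archive_urls` as a hypothetical helper, not part of any BOP tooling) that builds the same URLs:

```python
# Hypothetical helper: build the archive URLs fetched by the wget
# commands above, following the {dataset}_{part}.zip naming scheme.
BASE = "https://huggingface.co/datasets/bop-benchmark/{name}/resolve/main"

def archive_urls(name, parts=("base", "models", "test_all", "train_pbr")):
    src = BASE.format(name=name)
    return [f"{src}/{name}_{part}.zip" for part in parts]

urls = archive_urls("hope")
# urls[0] == "https://huggingface.co/datasets/bop-benchmark/hope/resolve/main/hope_base.zip"
```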
 
  If you are running on a machine with high bandwidth, you can increase your download speed by adding the following environment variable:
 
 The command works for both folders and specific files:
 ```
+# Usage: huggingface-cli upload bop-benchmark/DATASET_NAME [local_path] [path_in_repo] --repo-type=dataset --create-pr
 ```
 For example, to upload the hope dataset:
 ```
+export LOCAL_FOLDER=./bop_datasets/hope
+export HF_FOLDER=./
 
+huggingface-cli upload bop-benchmark/hope $LOCAL_FOLDER $HF_FOLDER --repo-type=dataset --create-pr
 ```
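The CLI call above also has a Python equivalent; a hedged sketch using `HfApi.upload_folder` (`--create-pr` maps to `create_pr=True`; uploads go to the repo root by default, a stored auth token is assumed, and the call is guarded so importing the file uploads nothing):

```python
from huggingface_hub import HfApi

repo_id = "bop-benchmark/hope"
local_folder = "./bop_datasets/hope"

if __name__ == "__main__":
    api = HfApi()
    # Mirrors: huggingface-cli upload bop-benchmark/hope $LOCAL_FOLDER ./ \
    #          --repo-type=dataset --create-pr
    api.upload_folder(repo_id=repo_id,
                      folder_path=local_folder,
                      repo_type="dataset",
                      create_pr=True)
```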
 
  #### Option 2: Using `huggingface_hub`:
 
 operations = []
 for file in local_dir.glob("*"):
     add_commit = CommitOperationAdd(
+        path_in_repo="./",
         path_or_fileobj=local_dir,
     )
     operations.append(add_commit)
 
 api = HfApi()
 MY_TOKEN = # get from https://huggingface.co/settings/tokens
+api.create_commit(repo_id=f"bop-benchmark/{dataset_name}",
                   repo_type="dataset",
                   commit_message=f"adding {dataset_name} dataset",
                   token=MY_TOKEN,
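As shown in the diff, the Option 2 snippet needs two fixes before it runs: `local_dir` must be a `pathlib.Path` for `.glob()` to work, and each operation should reference the individual file rather than the whole directory. A corrected sketch under those assumptions:

```python
from pathlib import Path
from huggingface_hub import CommitOperationAdd

dataset_name = "lmo"
local_dir = Path("./datasets") / dataset_name  # a Path, so .glob() works

# One operation per local file: path_in_repo is the file's name at the
# repo root, path_or_fileobj the local file it is read from.
operations = [
    CommitOperationAdd(path_in_repo=file.name, path_or_fileobj=str(file))
    for file in local_dir.glob("*")
]
```

`HfApi().create_commit(...)` then takes these `operations` together with `repo_id`, `repo_type`, `commit_message`, and `token`, as in the diff.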