---
language:
- en
license: mit
tags:
- robotics
- manipulation
- rearrangement
- computer-vision
- reinforcement-learning
- imitation-learning
- rgbd
- rgb
- depth
- low-level-control
- whole-body-control
- home-assistant
- simulation
- maniskill
annotations_creators:
- machine-generated # Generated from RL policies with filtering
language_creators:
- machine-generated
language_details: en-US
pretty_name: ManiSkill-HAB TidyHouse Dataset
size_categories:
- 1M<n<10M # Dataset has 18K episodes with 3.6M transitions
task_categories:
- robotics
- reinforcement-learning
task_ids:
- grasping
- task-planning
configs:
- config_name: pick-002_master_chef_can
data_files:
- split: trajectories
path: pick/002_master_chef_can.h5
- split: metadata
path: pick/002_master_chef_can.json
- config_name: pick-003_cracker_box
data_files:
- split: trajectories
path: pick/003_cracker_box.h5
- split: metadata
path: pick/003_cracker_box.json
- config_name: pick-004_sugar_box
data_files:
- split: trajectories
path: pick/004_sugar_box.h5
- split: metadata
path: pick/004_sugar_box.json
- config_name: pick-005_tomato_soup_can
data_files:
- split: trajectories
path: pick/005_tomato_soup_can.h5
- split: metadata
path: pick/005_tomato_soup_can.json
- config_name: pick-007_tuna_fish_can
data_files:
- split: trajectories
path: pick/007_tuna_fish_can.h5
- split: metadata
path: pick/007_tuna_fish_can.json
- config_name: pick-008_pudding_box
data_files:
- split: trajectories
path: pick/008_pudding_box.h5
- split: metadata
path: pick/008_pudding_box.json
- config_name: pick-009_gelatin_box
data_files:
- split: trajectories
path: pick/009_gelatin_box.h5
- split: metadata
path: pick/009_gelatin_box.json
- config_name: pick-010_potted_meat_can
data_files:
- split: trajectories
path: pick/010_potted_meat_can.h5
- split: metadata
path: pick/010_potted_meat_can.json
- config_name: pick-024_bowl
data_files:
- split: trajectories
path: pick/024_bowl.h5
- split: metadata
path: pick/024_bowl.json
- config_name: place-002_master_chef_can
data_files:
- split: trajectories
path: place/002_master_chef_can.h5
- split: metadata
path: place/002_master_chef_can.json
- config_name: place-003_cracker_box
data_files:
- split: trajectories
path: place/003_cracker_box.h5
- split: metadata
path: place/003_cracker_box.json
- config_name: place-004_sugar_box
data_files:
- split: trajectories
path: place/004_sugar_box.h5
- split: metadata
path: place/004_sugar_box.json
- config_name: place-005_tomato_soup_can
data_files:
- split: trajectories
path: place/005_tomato_soup_can.h5
- split: metadata
path: place/005_tomato_soup_can.json
- config_name: place-007_tuna_fish_can
data_files:
- split: trajectories
path: place/007_tuna_fish_can.h5
- split: metadata
path: place/007_tuna_fish_can.json
- config_name: place-008_pudding_box
data_files:
- split: trajectories
path: place/008_pudding_box.h5
- split: metadata
path: place/008_pudding_box.json
- config_name: place-009_gelatin_box
data_files:
- split: trajectories
path: place/009_gelatin_box.h5
- split: metadata
path: place/009_gelatin_box.json
- config_name: place-010_potted_meat_can
data_files:
- split: trajectories
path: place/010_potted_meat_can.h5
- split: metadata
path: place/010_potted_meat_can.json
- config_name: place-024_bowl
data_files:
- split: trajectories
path: place/024_bowl.h5
- split: metadata
path: place/024_bowl.json
---
# ManiSkill-HAB TidyHouse Dataset
**[Paper](https://arxiv.org/abs/2412.13211)**
| **[Website](https://arth-shukla.github.io/mshab)**
| **[Code](https://github.com/arth-shukla/mshab)**
| **[Models](https://huggingface.co/arth-shukla/mshab_checkpoints)**
| **[(Full) Dataset](https://arth-shukla.github.io/mshab/#dataset-section)**
| **[Supplementary](https://sites.google.com/view/maniskill-hab)**
Whole-body, low-level control/manipulation demonstration dataset for ManiSkill-HAB TidyHouse.
## Dataset Details
### Dataset Description
Demonstration dataset for ManiSkill-HAB TidyHouse. Each subtask/object combination (e.g., pick 002_master_chef_can) has 1000 successful episodes (200 samples per demonstration) gathered using [RL policies](https://huggingface.co/arth-shukla/mshab_checkpoints) filtered for safe robot behavior with a rule-based event labeling system. Across all 18 subtask/object combinations, this amounts to 18K episodes and roughly 3.6M transitions.
TidyHouse contains the Pick and Place subtasks. Relative to the other MS-HAB long-horizon tasks (PrepareGroceries, SetTable), TidyHouse Pick is of roughly medium difficulty, while TidyHouse Place is medium-to-hard (on an easy/medium/hard scale).
### Related Datasets
Full information about the MS-HAB datasets (size, difficulty, links, etc.), including the other long-horizon tasks, is available [on the ManiSkill-HAB website](https://arth-shukla.github.io/mshab/#dataset-section).
- [ManiSkill-HAB PrepareGroceries Dataset](https://huggingface.co/datasets/arth-shukla/MS-HAB-PrepareGroceries)
- [ManiSkill-HAB SetTable Dataset](https://huggingface.co/datasets/arth-shukla/MS-HAB-SetTable)
## Uses
### Direct Use
This dataset can be used to train vision-based learning-from-demonstrations and imitation learning methods, which can be evaluated with the [MS-HAB environments](https://github.com/arth-shukla/mshab). It may also be useful as synthetic data for computer vision tasks.
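For example, a single subtask/object combination can be fetched with `huggingface_hub`. This is a minimal sketch: the repo id `arth-shukla/MS-HAB-TidyHouse` is assumed to match this repository, and the file paths follow the configs listed above.

```python
from huggingface_hub import hf_hub_download

# Fetch the trajectories and metadata for one subtask/object combination.
# Repo id and file paths are assumptions based on this card's configs.
h5_path = hf_hub_download(
    repo_id="arth-shukla/MS-HAB-TidyHouse",
    repo_type="dataset",
    filename="pick/002_master_chef_can.h5",
)
json_path = hf_hub_download(
    repo_id="arth-shukla/MS-HAB-TidyHouse",
    repo_type="dataset",
    filename="pick/002_master_chef_can.json",
)
print(h5_path, json_path)
```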
### Out-of-Scope Use
While blind state-based policies can be trained on this dataset, it is recommended to train vision-based policies to handle collisions and obstructions.
## Dataset Structure
Each subtask/object combination has files `[SUBTASK]/[OBJECT].json` and `[SUBTASK]/[OBJECT].h5`. The JSON file contains episode metadata, event labels, etc., while the HDF5 file contains the demonstration data.
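As a minimal sketch of how the files might be inspected (assuming the HDF5 files follow ManiSkill's usual one-group-per-trajectory layout; check the exact keys against the files themselves):

```python
import json

import h5py

# Local paths to one downloaded subtask/object combination (see Direct Use above).
json_path = "pick/002_master_chef_can.json"
h5_path = "pick/002_master_chef_can.h5"

# Episode metadata and event labels (schema is dataset-specific; inspect the keys).
with open(json_path) as f:
    metadata = json.load(f)
print(type(metadata).__name__)

# Demonstration data: print the structure of the first top-level entry.
with h5py.File(h5_path, "r") as data:
    print(f"{len(data)} top-level entries")
    first_key = next(iter(data.keys()))
    entry = data[first_key]
    if isinstance(entry, h5py.Group):
        entry.visit(print)  # recursively list groups/datasets under the first entry
    else:
        print(first_key, entry.shape, entry.dtype)
```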
## Dataset Creation
<!-- TODO (arth): link paper appendix, maybe html, for the event labeling system -->
The data is gathered using [RL policies](https://huggingface.co/arth-shukla/mshab_checkpoints) filtered for safe robot behavior with a rule-based event labeling system.
## Bias, Risks, and Limitations
The dataset is purely synthetic.
While MS-HAB supports high-quality ray-traced rendering, this dataset was generated with ManiSkill's default rendering for efficiency. However, users can generate their own data with the [data generation code](https://github.com/arth-shukla/mshab/blob/main/mshab/utils/gen/gen_data.py).
## Citation
```
@article{shukla2024maniskillhab,
author = {Arth Shukla and Stone Tao and Hao Su},
title = {ManiSkill-HAB: A Benchmark for Low-Level Manipulation in Home Rearrangement Tasks},
journal = {CoRR},
volume = {abs/2412.13211},
year = {2024},
url = {https://doi.org/10.48550/arXiv.2412.13211},
doi = {10.48550/ARXIV.2412.13211},
eprinttype = {arXiv},
eprint = {2412.13211},
timestamp = {Mon, 09 Dec 2024 01:29:24 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2412-13211.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```