Dataset Card for TRANSIC Data
This dataset card accompanies the CoRL 2024 paper TRANSIC: Sim-to-Real Policy Transfer by Learning from Online Correction. It includes generated simulation data and real-robot human correction data for sim-to-real transfer of robotic arm manipulation policies.
Dataset Details
Dataset Description
This dataset consists of two parts: 1) simulation data used for student policy distillation, and 2) real-robot data used for residual policy learning.
The first part can be found in the `distillation` folder. We include 5 tasks in the `distillation/tasks` directory. For each task, we provide 10,000 successful trajectories generated by teacher policies trained with reinforcement learning in simulation.
We also provide `matched_point_cloud_scenes.h5`, a separate collection of 59 matched point clouds from simulation and the real world. We use them to regularize the point-cloud encoder during policy training.
The second part can be found in the `correction_data` folder. We include real-world human correction data for 5 tasks. Each task contains a different number of trajectories. Each trajectory includes observations, pre-intervention actions, and post-intervention actions for residual policy learning.
- Curated by: Yunfan Jiang
- License: MIT
Dataset Sources
- Repositories: TRANSIC, TRANSIC-Envs
- Paper: TRANSIC: Sim-to-Real Policy Transfer by Learning from Online Correction
Uses
Please see our codebase for detailed usage.
Dataset Structure
Structure for `distillation/tasks/*.hdf5`:
data[f"rollouts/successful/rollout_{idx}/actions"]: shape (L, 7), first 6 dimensions represent end-effector's pose change. The last dimension corresponds to the gripper action.
data[f"rollouts/successful/rollout_{idx}/eef_pos"]: shape (L + 1, 3), end-effector's positions.
data[f"rollouts/successful/rollout_{idx}/eef_quat"]: shape (L + 1, 4), end-effector's orientations in quaternion.
data[f"rollouts/successful/rollout_{idx}/franka_base"]: shape (L + 1, 7), robot base pose.
data[f"rollouts/successful/rollout_{idx}/gripper_width"]: shape (L + 1, 1), gripper's current width.
data[f"rollouts/successful/rollout_{idx}/leftfinger"]: shape (L + 1, 7), left gripper finger pose.
data[f"rollouts/successful/rollout_{idx}/q"]: shape (L + 1, 7), robot joint positions.
data[f"rollouts/successful/rollout_{idx}/rightfinger"]: shape (L + 1, 7), right gripper finger pose.
data[f"rollouts/successful/rollout_{idx}/{obj}"]: shape (L + 1, 7), pose for each object.
Structure for `distillation/matched_point_cloud_scenes.h5`:
# sim
data[f"{date}/{idx}/sim/ee_mask"]: shape (N,), represent if each point in the point cloud corresponds to the end-effector. 0: not end-effector, 1: end-effector.
data[f"{date}/{idx}/sim/franka_base"]: shape (7,), robot base pose.
data[f"{date}/{idx}/sim/leftfinger"]: shape (7,), left gripper finger pose.
data[f"{date}/{idx}/sim/pointcloud"]: shape (N, 3), synthetic point cloud.
data[f"{date}/{idx}/sim/q"]: shape (9,), robot joint positions, last two dimensions correspond to two gripper fingers.
data[f"{date}/{idx}/sim/rightfinger"]: shape (7,), right gripper finger pose.
data[f"{date}/{idx}/sim/{obj}"]: shape (7,), pose for each object.
# real
data[f"{date}/{idx}/real/{sample}/eef_pos"]: shape (3, 1), end-effector's position.
data[f"{date}/{idx}/real/{sample}/eef_quat"]: shape (4), end-effector's orientations in quaternion.
data[f"{date}/{idx}/real/{sample}/fk_finger_pointcloud"]: shape (N, 3), point cloud for gripper fingers obtained through forward kinematics.
data[f"{date}/{idx}/real/{sample}/gripper_width"]: shape (), gripper width.
data[f"{date}/{idx}/real/{sample}/measured_pointcloud"]: shape (N, 3), point cloud captured by depth cameras.
data[f"{date}/{idx}/real/{sample}/q"]: shape (7,), robot joint positions.
Structure for `correction_data/*/*.hdf5`:
data["is_human_intervention"]: shape (L,), represent human intervention (1) or not (0).
data["policy_action"]: shape (L, 8), simulation policies' actions.
data["policy_obs"]: shape (L, ...), simulation policies' observations.
data["post_intervention_eef_pose"]: shape (L, 4, 4), end-effector's pose after intervention.
data["post_intervention_q"]: shape (L, 7), robot joint positions after intervention.
data["post_intervention_gripper_q"]: shape (L, 2), gripper fingers' positions after intervention.
data["pre_intervention_eef_pose"]: shape (L, 4, 4), end-effector's pose before intervention.
data["pre_intervention_q"]: shape (L, 7), robot joint positions before intervention.
data["pre_intervention_gripper_q"]: shape (L, 2), gripper fingers' positions before intervention.
Dataset Creation
`distillation/tasks/*.hdf5` files are generated by teacher policies trained with reinforcement learning in simulation. `distillation/matched_point_cloud_scenes.h5` and `correction_data/*/*.hdf5` are manually collected in the real world.
Citation
BibTeX:
@inproceedings{jiang2024transic,
title = {TRANSIC: Sim-to-Real Policy Transfer by Learning from Online Correction},
author = {Yunfan Jiang and Chen Wang and Ruohan Zhang and Jiajun Wu and Li Fei-Fei},
booktitle = {Conference on Robot Learning},
year = {2024}
}
Dataset Card Contact
Yunfan Jiang, email: yunfanj[at]cs[dot]stanford[dot]edu