In-Context Imitation Learning via Next-Token Prediction
by Max (Letian) Fu*, Huang Huang*, Gaurav Datta*, Lawrence Yunliang Chen, William Chung-Ho Panitch, Fangchen Liu, Hui Li, and Ken Goldberg at UC Berkeley and Autodesk (*equal contribution).
[Paper] | [Project Page] | [Checkpoints] | [Dataset] | [Citation]
This repo contains the checkpoints for In-Context Imitation Learning via Next-Token Prediction (ICRT). We investigate how to bring the few-shot, in-context learning capability of next-token prediction models (e.g., GPT) to real-robot imitation learning policies.
In particular, the pre-trained vision encoder and the ICRT models are stored separately; please find them in encoder, ICRT, and ICRT-Llama7B.
Please refer to the code for instructions on installing the repo, training the model, and running inference.
Dataset Structure
ICRT-MT
├── merged_data_part1.hdf5
│   ├── episode_1
│   │   ├── observation
│   │   │   ├── exterior_image_1_left
│   │   │   ├── exterior_image_2_left
│   │   │   ├── wrist_image_left
│   │   │   ├── cartesian_position
│   │   │   ├── gripper_position
│   │   │   └── joint_position
│   │   ├── action
│   │   │   ├── cartesian_velocity
│   │   │   ├── gripper_velocity
│   │   │   ├── joint_velocity
│   │   │   ├── cartesian_position
│   │   │   ├── gripper_position
│   │   │   └── joint_position
│   │   ├── language_instruction
│   │   ├── language_instruction_2
│   │   ├── language_instruction_3
│   │   ├── language_embedding
│   │   ├── language_embedding_2
│   │   ├── language_embedding_3
│   │   └── ...
│   ├── episode_2
│   │   └── ...
│   ├── episode_3
│   └── ...
├── merged_data_part1_keys.json
└── ...
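
Below is a minimal sketch of how one episode could be inspected with h5py, assuming the HDF5 layout shown in the tree above. The file name, episode name, and the exact shapes of the datasets are illustrative; consult the code repository for the actual data-loading utilities.

```python
# Minimal sketch: inspect one episode of the ICRT-MT dataset with h5py.
# Assumes the layout shown above; file/episode names are illustrative.
import h5py
import numpy as np

with h5py.File("ICRT-MT/merged_data_part1.hdf5", "r") as f:
    episode = f["episode_1"]

    # Observations: camera images and proprioceptive state, one entry per timestep.
    wrist_images = np.asarray(episode["observation"]["wrist_image_left"])
    cartesian_pos = np.asarray(episode["observation"]["cartesian_position"])

    # Actions: commanded velocities/positions aligned with the observations.
    cartesian_vel = np.asarray(episode["action"]["cartesian_velocity"])
    gripper_vel = np.asarray(episode["action"]["gripper_velocity"])

    # Language annotations: paraphrased instructions and their embeddings.
    instruction = episode["language_instruction"][()]

    print(wrist_images.shape, cartesian_pos.shape, cartesian_vel.shape)
    print(instruction)
```

The accompanying merged_data_part1_keys.json presumably lists the episode keys contained in the corresponding HDF5 file, which avoids opening the file just to enumerate episodes.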
Citation
Please give us a star ⭐ on GitHub to support us!
Please cite us if you find our work inspiring or use our code in your research:
@article{fu2024icrt,
  title={In-Context Imitation Learning via Next-Token Prediction},
  author={Letian Fu and Huang Huang and Gaurav Datta and Lawrence Yunliang Chen and William Chung-Ho Panitch and Fangchen Liu and Hui Li and Ken Goldberg},
  journal={arXiv preprint arXiv:2408.15980},
  year={2024}
}