Learning to Learn Faster from Human Feedback with Language Model Predictive Control • arXiv:2402.11450 • Published Feb 18, 2024
PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs • arXiv:2402.07872 • Published Feb 12, 2024
Generative Expressive Robot Behaviors using Large Language Models • arXiv:2401.14673 • Published Jan 26, 2024
AutoRT: Embodied Foundation Models for Large Scale Orchestration of Robotic Agents • arXiv:2401.12963 • Published Jan 23, 2024
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models • arXiv:2201.11903 • Published Jan 28, 2022
Physically Grounded Vision-Language Models for Robotic Manipulation • arXiv:2309.02561 • Published Sep 5, 2023
Do As I Can, Not As I Say: Grounding Language in Robotic Affordances • arXiv:2204.01691 • Published Apr 4, 2022
Grounded Decoding: Guiding Text Generation with Grounded Models for Robot Control • arXiv:2303.00855 • Published Mar 1, 2023
Inner Monologue: Embodied Reasoning through Planning with Language Models • arXiv:2207.05608 • Published Jul 12, 2022
Open-World Object Manipulation using Pre-trained Vision-Language Models • arXiv:2303.00905 • Published Mar 2, 2023
Q-Transformer: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions • arXiv:2309.10150 • Published Sep 18, 2023
Open X-Embodiment: Robotic Learning Datasets and RT-X Models • arXiv:2310.08864 • Published Oct 13, 2023
Navigation with Large Language Models: Semantic Guesswork as a Heuristic for Planning • arXiv:2310.10103 • Published Oct 16, 2023
RoboVQA: Multimodal Long-Horizon Reasoning for Robotics • arXiv:2311.00899 • Published Nov 1, 2023
Distilling and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections • arXiv:2311.10678 • Published Nov 17, 2023
RT-1: Robotics Transformer for Real-World Control at Scale • arXiv:2212.06817 • Published Dec 13, 2022