Diffusers documentation

Value-guided planning


🧪 This is an experimental pipeline for reinforcement learning!

This pipeline is based on the Planning with Diffusion for Flexible Behavior Synthesis paper by Michael Janner, Yilun Du, Joshua B. Tenenbaum, and Sergey Levine.

The abstract from the paper is:

Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility.
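To make the idea concrete, here is a simplified schematic of one value-guided denoising step, written against the UNet1DModel and DDPMScheduler interfaces this pipeline uses. It is a sketch of the technique rather than the pipeline's exact implementation, which additionally rescales the gradient by the posterior standard deviation and re-applies the conditioning on the current observation at every step.

```python
import torch


# Simplified sketch of one value-guided denoising step. Assumes `unet` and
# `value_function` are UNet1DModel instances and `scheduler` is a DDPMScheduler;
# `x` is a batch of noisy trajectories of shape (batch, horizon, transition_dim).
def guided_denoising_step(unet, value_function, scheduler, x, t, scale=0.1):
    timesteps = torch.full((x.shape[0],), t, device=x.device, dtype=torch.long)

    # Classifier guidance: nudge the trajectories along the gradient of the
    # learned value (predicted return) with respect to the trajectory itself.
    with torch.enable_grad():
        x = x.detach().requires_grad_()
        # UNet1DModel expects channels first, hence the permute.
        value = value_function(x.permute(0, 2, 1), timesteps).sample
        grad = torch.autograd.grad(value.sum(), x)[0]
    x = (x + scale * grad).detach()

    # Standard denoising step with the trajectory UNet.
    noise_pred = unet(x.permute(0, 2, 1), timesteps).sample.permute(0, 2, 1)
    return scheduler.step(noise_pred, t, x).prev_sample
```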

You can find additional information about the model on the project page, the original codebase, or try it out in a demo notebook.

The script to run the model is available here.

Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.

ValueGuidedRLPipeline

class diffusers.experimental.ValueGuidedRLPipeline


( value_function: UNet1DModel, unet: UNet1DModel, scheduler: DDPMScheduler, env )

Parameters

  • value_function (UNet1DModel) — A specialized UNet for fine-tuning trajectories based on reward.
  • unet (UNet1DModel) — UNet architecture to denoise the encoded trajectories.
  • scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded trajectories. Default for this application is DDPMScheduler.
  • env — An environment following the OpenAI Gym API to act in. For now, only Hopper has pretrained models.

Pipeline for value-guided sampling from a diffusion model trained to predict sequences of states.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
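For reference, here is a minimal usage sketch in the spirit of the example script linked above. It assumes d4rl is installed (it registers the hopper-medium-v2 environment with gym) and loads the public Hopper checkpoint used by that script.

```python
import d4rl  # noqa: F401  (registers the offline locomotion environments with gym)
import gym

from diffusers.experimental import ValueGuidedRLPipeline

env = gym.make("hopper-medium-v2")

# Load the value function, UNet, and scheduler from a single checkpoint; the
# environment is passed in so the pipeline can (de)normalize states and actions.
pipeline = ValueGuidedRLPipeline.from_pretrained(
    "bglick13/hopper-medium-v2-value-function-hor32",
    env=env,
)

obs = env.reset()
total_reward = 0.0
for _ in range(100):
    # Plan a batch of trajectories, guide them toward high predicted return,
    # and execute the first action of the highest-value plan.
    action = pipeline(obs, planning_horizon=32, n_guide_steps=2, scale=0.1)
    obs, reward, done, _ = env.step(action)
    total_reward += reward
    if done:
        break

print(f"Total reward: {total_reward}")
```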
