commaVQ - GPT2M

A GPT2M model trained on a larger version of the commaVQ dataset.

This model can generate driving video unconditionally.

Below is an example of 5 seconds of imagined video using GPT2M.
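As a rough illustration of what unconditional generation means here, the sketch below samples a short driving-video token sequence from the checkpoint with Hugging Face transformers. The tokens-per-frame count, BOS token id, and sampling settings are assumptions for illustration only and are not taken from this card; the sampled tokens would still have to be decoded back to images with the commaVQ VQ-VAE decoder.

```python
# Hypothetical sketch: sample driving-video tokens unconditionally from the
# checkpoint. TOKENS_PER_FRAME and BOS_TOKEN_ID are assumptions, not values
# documented on this card.
import torch
from transformers import GPT2LMHeadModel

TOKENS_PER_FRAME = 129   # assumed: 8x16 VQ tokens per frame plus a BOS token
BOS_TOKEN_ID = 1024      # assumed: one past the VQ codebook size

model = GPT2LMHeadModel.from_pretrained("commaai/commavq-gpt2m")
model.eval()

# Sample a couple of frames; longer rollouts (like the 5-second example above)
# would need windowed generation once the model's context length is reached.
n_frames = 2
prompt = torch.tensor([[BOS_TOKEN_ID]])
with torch.no_grad():
    tokens = model.generate(
        prompt,
        max_new_tokens=n_frames * TOKENS_PER_FRAME - 1,
        do_sample=True,
        top_k=50,
    )

# Each row holds one frame's worth of tokens; turning them back into images
# requires the commaVQ VQ-VAE decoder (not shown here).
frames = tokens.reshape(n_frames, TOKENS_PER_FRAME)
print(frames.shape)
```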
