---
title: Video STaR Dataset
emoji: 📒
colorFrom: pink
colorTo: green
sdk: gradio
sdk_version: 4.29.0
app_file: app.py
pinned: false
license: apache-2.0
---
# Video-STaR 1M Dataset Demo
[🖥️ [Website](https://orrzohar.github.io/projects/video-star/)]
[📰 [Paper](https://arxiv.org/abs/2407.06189)]
[💫 [Code](https://github.com/orrzohar/Video-STaR)]
[🤗 [Dataset](https://huggingface.co/datasets/orrzohar/Video-STaR)]
## ✍️ Citation
If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:.
```BibTeX
@article{zohar2024videostar,
  title   = {Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision},
  author  = {Zohar, Orr and Wang, Xiaohan and Bitton, Yonatan and Szpektor, Idan and Yeung-Levy, Serena},
  year    = {2024},
  journal = {arXiv preprint arXiv:2407.06189},
}
```