---
license: mit
language:
- en
tags:
- behavior
- motion
- human
- egocentric
- language
- llm
- vlm
- esk
size_categories:
- 10K<n<100K
task_categories:
- question-answering
---
|
|
|
# EPFL Smart Kitchen: Lemonade benchmark |
|
|
|
## Abstract |
|
We introduce Lemonade: **L**anguage models **E**valuation of **MO**tion a**N**d **A**ction-**D**riven **E**nquiries.
|
Lemonade consists of 36,521 closed-ended QA pairs linked to egocentric video clips, organized into three categories and six subcategories.
|
18,857 QAs focus on behavior understanding, leveraging the rich ground-truth behavior annotations of the EPFL-Smart-Kitchen-30 dataset to question models about perceived actions (Perception) and to have them reason over unseen behaviors (Reasoning).
|
8,210 QAs involve longer video clips, challenging models with activity summarization (Summarization) and session-level inference (Session properties).
|
The remaining 9,463 QAs leverage the 3D pose estimation data to infer hand shapes, joint angles (Physical attributes), or trajectory velocities (Kinematics) from visual information. |
|
|
|
## Content |
|
This repository contains the benchmark questions and all egocentric videos recorded in the EPFL-Smart-Kitchen-30 dataset. You can download the rest of the dataset at ... and ... .
|
|
|
### Repository structure |
|
|
|
```
Lemonade
├── MCQs
│   └── lemonade_benchmark.csv
├── videos
│   ├── YH2002_2023_12_04_10_15_23_hololens.mp4
│   └── ...
└── README.md
```
|
|
|
`lemonade_benchmark.csv` : Table with the following fields:

**Question** : Question to be answered <br>
**QID** : Question identifier, an integer from 0 to 30 <br>
**Answers** : The list of multiple-choice options for the question <br>
**Correct Answer** : The answer from that list that is deemed correct <br>
**Clip** : A reference to the video clip related to the question <br>
**Start** : The frame index in the clip where the question context begins <br>
**End** : The frame index in the clip where the question context ends <br>
**Category** : The broad topic under which the question falls (Behavior understanding, Long-term understanding, or Motion and Biomechanics) <br>
**Subcategory** : A more refined classification within the category (Perception, Reasoning, Summarization, Session properties, Physical attributes, or Kinematics) <br>
**Difficulty** : The complexity level of the question (e.g., Easy, Medium, Hard)
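For instance, the table can be loaded and filtered with `pandas`; a minimal sketch, assuming the file sits at the path shown in the repository tree and the columns are named exactly as listed above:

```python
# Minimal sketch: load the question table and filter it by the fields above.
import pandas as pd

df = pd.read_csv("Lemonade/MCQs/lemonade_benchmark.csv")

# For example, select all Hard questions from the Kinematics subcategory.
hard_kinematics = df[(df["Subcategory"] == "Kinematics") & (df["Difficulty"] == "Hard")]
print(f"{len(hard_kinematics)} hard Kinematics questions")
```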
|
|
|
`videos` : Folder with all egocentric videos from the EPFL-Smart-Kitchen-30 benchmark. Video names are structured as `[Participant_ID]_[Session_name]_hololens.mp4`. |
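A question's `Start` and `End` fields can then be mapped to frames of the referenced video. Below is a minimal sketch using OpenCV, where `read_question_frames` is a hypothetical helper and `Start`/`End` are taken to be frame indices as described above:

```python
# Hypothetical helper: yield the frames of a clip between two frame indices.
import cv2

def read_question_frames(video_path: str, start: int, end: int):
    """Yield frames of `video_path` from `start` to `end` (inclusive)."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, start)  # seek to the first context frame
    for _ in range(start, end + 1):
        ok, frame = cap.read()
        if not ok:  # stop early if the clip ends before `end`
            break
        yield frame
    cap.release()

frames = list(read_question_frames(
    "videos/YH2002_2023_12_04_10_15_23_hololens.mp4", start=0, end=99))
```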
|
|
|
> We refer the reader to the associated publication for details about data processing and task descriptions.
|
|
|
|
|
## Usage |
|
Evaluation on the benchmark can be run through the following GitHub repository: ... .
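In the meantime, a minimal scoring sketch, assuming model predictions are collected as a mapping from row index to the chosen answer string (this is not the official evaluation code):

```python
# Sketch: score predictions against the `Correct Answer` column.
import pandas as pd

def accuracy(df: pd.DataFrame, predictions: dict[int, str]) -> float:
    """Fraction of rows where the predicted answer matches the correct one."""
    hits = [predictions.get(idx) == row["Correct Answer"] for idx, row in df.iterrows()]
    return sum(hits) / len(hits)

def per_category_accuracy(df: pd.DataFrame, predictions: dict[int, str]) -> pd.Series:
    """Accuracy broken down by the `Category` column."""
    hits = [predictions.get(idx) == ans for idx, ans in df["Correct Answer"].items()]
    return df.assign(hit=hits).groupby("Category")["hit"].mean()
```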
|
|
|
## Publications |
|
The reference to the associated arXiv paper will be added here.
|
|
|
## Acknowledgments |
|
We thank Andy Bonnetto for the design of the dataset and Matea Tashkovska for the adaptation of the evaluation platform. <br>

We thank members of the Mathis Group for Computational Neuroscience & AI (EPFL) for their feedback throughout the project.

This work was funded by EPFL, a Swiss SNF grant (320030-227871), the Microsoft Swiss Joint Research Center, and a Boehringer Ingelheim Fonds PhD stipend (H.Q.).
|
We are grateful to the Brain Mind Institute for providing funds for hardware and to the Neuro-X Institute for providing funds for services. |