# Adapted from the dataset builders for the Winoground Dataset (https://huggingface.co/datasets/facebook/winoground)
import os
import ast
from pathlib import Path

import datasets
import pandas as pd
# from huggingface_hub import hf_hub_url


_CITATION = """\
@inproceedings{hanna-etal-2022-act,
    title = "ACT-Thor: A Controlled Benchmark for Embodied Action Understanding in Simulated Environments",
    author = "Hanna, Michael  and
      Pedeni, Federico  and
      Suglia, Alessandro and
      Testoni, Alberto and
      Bernardi, Raffaella",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, South Korea",
    publisher = "International Committee on Computational Linguistics",
}
"""

_URL = "https://huggingface.co/datasets/mwhanna/ACT-Thor"

_DESCRIPTION = """\
ACT-Thor is a controlled benchmark for embodied action understanding in simulated environments:
given a before-image and an action, models must pick the correct after-image among four candidates.
"""


class ACTThorConfig(datasets.BuilderConfig):
    """BuilderConfig for ACT-Thor."""

    def __init__(self, split_type, **kwargs):
        """BuilderConfig for ACT-Thor.
        Args:
          split_type: which split scheme to use; one of "sample", "object" or "scene".
          **kwargs: keyword arguments forwarded to super.
        """
        super(ACTThorConfig, self).__init__(**kwargs)
        self.split_type = split_type



class ACTThor(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIG_CLASS = ACTThorConfig

    BUILDER_CONFIGS = [
        ACTThorConfig(split_type="sample", name="sample"),
        ACTThorConfig(split_type="object", name="object"),
        ACTThorConfig(split_type="scene", name="scene"),
    ]

    DEFAULT_CONFIG_NAME = "sample"

    IMAGE_EXTENSION = ".png"

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "id": datasets.Value("int32"),
                    "before_image": datasets.Image(),
                    "after_image_0": datasets.Image(),
                    "after_image_1": datasets.Image(),
                    "after_image_2": datasets.Image(),
                    "after_image_3": datasets.Image(),
                    "action": datasets.Value("string"),
                    "action_id": datasets.Value("int32"),
                    "label": datasets.Value("int32"),
                    "object": datasets.Value("string"),
                    "scene": datasets.Value("string"),
                }
            ),
            homepage=_URL,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""

        # hf_auth_token = dl_manager.download_config.use_auth_token
        # if hf_auth_token is None:
        #     raise ConnectionError(
        #         "Please set use_auth_token=True or use_auth_token='<TOKEN>' to download this dataset"
        #     )

        downloaded_files = dl_manager.download_and_extract({
            "examples_csv": 'https://www.dropbox.com/s/4xdlimis1lv17x4/dataset_hf.csv?dl=1', # hf_hub_url("datasets/facebook/winoground", filename="data/examples.jsonl"),
            "images_dir": 'https://www.dropbox.com/s/odkkrtvogi8go76/images.zip?dl=1', # hf_hub_url("datasets/facebook/winoground", filename="data/images.zip")
        })

        split_type = self.config.split_type
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN,
                                    gen_kwargs={'split_type': split_type, 'split': 'train', **downloaded_files}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION,
                                    gen_kwargs={'split_type': split_type, 'split': 'valid', **downloaded_files}),
            datasets.SplitGenerator(name=datasets.Split.TEST,
                                    gen_kwargs={'split_type': split_type, 'split': 'test', **downloaded_files}),
        ]

    def _generate_examples(self, examples_csv, images_dir, split_type, split):
        """Yields examples."""
        df = pd.read_csv(examples_csv)
        # Keep only the rows belonging to the requested split of the chosen split scheme.
        df = df[df[f'{split_type}_split'] == split]
        df = df.drop(['sample_split', 'object_split', 'scene_split'], axis='columns')
        for example in df.to_dict('records'):
            # 'order' holds the permutation used to shuffle the four candidate after-images.
            order = ast.literal_eval(example['order'])
            example["before_image"] = os.path.join(images_dir, "before_images", Path(example["before_image"]).name)
            # Capture the original after-image file names before overwriting the keys,
            # so the permutation is applied to the unmodified values rather than to
            # entries already rewritten earlier in this loop.
            after_names = [Path(example[f"after_image_{i}"]).name for i in range(4)]
            for i in range(4):
                example[f"after_image_{i}"] = os.path.join(images_dir, "after_images", after_names[order[i]])
            id_ = example["id"]
            del example['order']
            yield id_, example
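

# Usage sketch (not part of the original builder): loading this script directly with
# `datasets.load_dataset`. Exact behaviour depends on the installed `datasets` version;
# script-based loading may require `trust_remote_code=True` or be unavailable in very
# recent releases, and it downloads the CSV and image archive from Dropbox.
if __name__ == "__main__":
    dataset = datasets.load_dataset(__file__, name="sample")
    example = dataset["train"][0]
    # "label" is the index of the correct candidate among after_image_0..after_image_3.
    print(example["action"], example["label"])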