---
dataset_info:
  features:
    - name: dataset
      dtype: string
    - name: condition
      dtype: string
    - name: trial
      dtype: string
    - name: n_objects
      dtype: int64
    - name: oddity_index
      dtype: int64
    - name: images
      sequence: image
    - name: n_subjects
      dtype: int64
    - name: human_avg
      dtype: float64
    - name: human_sem
      dtype: float64
    - name: human_std
      dtype: float64
    - name: RT_avg
      dtype: float64
    - name: RT_sem
      dtype: float64
    - name: RT_std
      dtype: float64
    - name: DINOv2G_avg
      dtype: float64
    - name: DINOv2G_std
      dtype: float64
    - name: DINOv2G_sem
      dtype: float64
  splits:
    - name: train
      num_bytes: 384413356.563
      num_examples: 2019
  download_size: 382548893
  dataset_size: 384413356.563
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# MOCHI: Multiview Object Consistency in Humans and Image models

We introduce a benchmark to evaluate the alignment between humans and image models on 3D shape understanding: Multiview Object Consistency in Humans and Image models (MOCHI).

To download the dataset from Hugging Face, install the relevant Hugging Face libraries

```bash
pip install datasets huggingface_hub
```

and download MOCHI:

```python
from datasets import load_dataset

# download the huggingface dataset
benchmark = load_dataset("tzler/MOCHI")['train']

# there are 2019 trials; let's pick one
i_trial = benchmark[1879]
```

Here, `i_trial` is a dictionary with trial-related data, including human (`human_*` accuracy and `RT_*` reaction time) and model (`DINOv2G_*`) performance measures:

```python
{'dataset': 'shapegen',
 'condition': 'abstract2',
 'trial': 'shapegen2527',
 'n_objects': 3,
 'oddity_index': 2,
 'images': [<PIL.PngImagePlugin.PngImageFile image mode=RGB size=1000x1000>,
  <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1000x1000>,
  <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1000x1000>],
 'n_subjects': 15,
 'human_avg': 1.0,
 'human_sem': 0.0,
 'human_std': 0.0,
 'RT_avg': 4324.733333333334,
 'RT_sem': 544.4202024405384,
 'RT_std': 2108.530377391076,
 'DINOv2G_avg': 1.0,
 'DINOv2G_std': 0.0,
 'DINOv2G_sem': 0.0}
```
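As a quick sanity check on these fields, the `*_sem` values appear to be standard errors of the mean, i.e. `std / sqrt(n_subjects)`; this relationship is an assumption here, not something stated in the card:

```python
import math

# assumed relationship: SEM = standard deviation / sqrt(number of subjects)
rt_sem = i_trial['RT_std'] / math.sqrt(i_trial['n_subjects'])
print(rt_sem)  # ~544.42, matching i_trial['RT_sem']
```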

The trial also includes the stimulus images, which we can plot directly:

```python
import matplotlib.pyplot as plt

plt.figure(figsize=[15, 4])
for i_plot in range(len(i_trial['images'])):
    plt.subplot(1, len(i_trial['images']), i_plot + 1)
    plt.imshow(i_trial['images'][i_plot])
    if i_plot == i_trial['oddity_index']: plt.title('odd-one-out')
    plt.axis('off')
plt.show()
```
*Example trial.*
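To evaluate a vision model on a trial, one common approach is to embed each image and predict the least-similar image as the oddity. The sketch below is illustrative only and is not necessarily the exact protocol behind the `DINOv2G` numbers above; `embed` is a placeholder for any image-embedding function (e.g., a DINOv2 feature extractor).

```python
import numpy as np

def embed(img):
    # placeholder embedding: swap in a real model's features for `img`
    return np.asarray(img, dtype=np.float32).reshape(-1)

# embed every image in the trial and L2-normalize the features
feats = np.stack([embed(img) for img in i_trial['images']])
feats /= np.linalg.norm(feats, axis=1, keepdims=True)

# cosine similarity between all pairs of images
sim = feats @ feats.T
np.fill_diagonal(sim, 0)

# the predicted oddity is the image least similar to all the others
prediction = int(sim.sum(axis=1).argmin())
correct = prediction == i_trial['oddity_index']
```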

The complete results on this benchmark, including all of the human and model results (e.g., DINOv2, CLIP, and MAE at multiple sizes), can be downloaded from the GitHub repo:

```bash
git clone https://github.com/tzler/MOCHI.git
```

And then imported with a few lines of code:

```python
import pandas

# load results from the github repo we just cloned
df = pandas.read_csv('MOCHI/assets/benchmark.csv')

# extract trial info using the index from the huggingface dataset above
i_trial_index = 1879
df.loc[i_trial_index]['trial']
```

This returns the trial name, `shapegen2527`, which matches the entry at the same index in the Hugging Face dataset.
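The CSV can also be used to summarize performance across the whole benchmark. A minimal sketch, assuming `benchmark.csv` contains per-trial `dataset`, `human_avg`, and `DINOv2G_avg` columns mirroring the Hugging Face features above (check the repo for the actual column names):

```python
import pandas

df = pandas.read_csv('MOCHI/assets/benchmark.csv')

# assumed columns: mean human and DINOv2-Giant accuracy per trial,
# grouped by the source dataset (e.g., shapegen)
summary = df.groupby('dataset')[['human_avg', 'DINOv2G_avg']].mean()
print(summary)
```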

```bibtex
@misc{bonnen2024evaluatingmultiviewobjectconsistency,
      title={Evaluating Multiview Object Consistency in Humans and Image Models},
      author={Tyler Bonnen and Stephanie Fu and Yutong Bai and Thomas O'Connell and Yoni Friedman and Nancy Kanwisher and Joshua B. Tenenbaum and Alexei A. Efros},
      year={2024},
      eprint={2409.05862},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2409.05862},
}
```