---
license: mit
language:
  - en
tags:
  - embedding
  - multimodal
pretty_name: mmE5 labeled data
size_categories:
  - 1M<n<10M
configs:
  - config_name: TAT-DQA
    data_files:
      - split: train
        path: TAT-DQA/TAT-DQA.parquet
  - config_name: ArxivQA
    data_files:
      - split: train
        path: ArxivQA/ArxivQA.parquet
  - config_name: InfoSeek_it2t
    data_files:
      - split: train
        path: InfoSeek_it2t/InfoSeek_it2t.parquet
  - config_name: InfoSeek_it2it
    data_files:
      - split: train
        path: InfoSeek_it2it/InfoSeek_it2it.parquet
  - config_name: ImageNet_1K
    data_files:
      - split: train
        path: ImageNet_1K/ImageNet_1K.parquet
  - config_name: N24News
    data_files:
      - split: train
        path: N24News/N24News.parquet
  - config_name: HatefulMemes
    data_files:
      - split: train
        path: HatefulMemes/HatefulMemes.parquet
  - config_name: SUN397
    data_files:
      - split: train
        path: SUN397/SUN397.parquet
  - config_name: VOC2007
    data_files:
      - split: train
        path: VOC2007/VOC2007.parquet
  - config_name: InfographicsVQA
    data_files:
      - split: train
        path: InfographicsVQA/InfographicsVQA.parquet
  - config_name: ChartQA
    data_files:
      - split: train
        path: ChartQA/ChartQA.parquet
  - config_name: A-OKVQA
    data_files:
      - split: train
        path: A-OKVQA/A-OKVQA.parquet
  - config_name: DocVQA
    data_files:
      - split: train
        path: DocVQA/DocVQA.parquet
  - config_name: OK-VQA
    data_files:
      - split: train
        path: OK-VQA/OK-VQA.parquet
  - config_name: Visual7W
    data_files:
      - split: train
        path: Visual7W/Visual7W.parquet
  - config_name: VisDial
    data_files:
      - split: train
        path: VisDial/VisDial.parquet
  - config_name: CIRR
    data_files:
      - split: train
        path: CIRR/CIRR.parquet
  - config_name: NIGHTS
    data_files:
      - split: train
        path: NIGHTS/NIGHTS.parquet
  - config_name: WebQA
    data_files:
      - split: train
        path: WebQA/WebQA.parquet
  - config_name: VisualNews_i2t
    data_files:
      - split: train
        path: VisualNews_i2t/VisualNews_i2t.parquet
  - config_name: VisualNews_t2i
    data_files:
      - split: train
        path: VisualNews_t2i/VisualNews_t2i.parquet
  - config_name: MSCOCO_i2t
    data_files:
      - split: train
        path: MSCOCO_i2t/MSCOCO_i2t.parquet
  - config_name: MSCOCO_t2i
    data_files:
      - split: train
        path: MSCOCO_t2i/MSCOCO_t2i.parquet
  - config_name: MSCOCO
    data_files:
      - split: train
        path: MSCOCO/MSCOCO.parquet
---

# mmE5 Labeled Data

This repository collects the labeled data used for the supervised fine-tuning of mmE5 ([mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data](https://arxiv.org/abs/2502.08468)):

- MMEB (with hard negatives)
- InfoSeek (from M-BEIR)
- TAT-DQA
- ArxivQA
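
Each entry above is exposed as its own config (see the `configs` list in the metadata header). A minimal loading sketch with the Hugging Face `datasets` library, assuming the repository id is `intfloat/mmE5-MMEB-hardneg`; any config name from the header works in place of `TAT-DQA`:

```python
from datasets import load_dataset

# Load one subset by its config name; each config has a single "train" split.
ds = load_dataset("intfloat/mmE5-MMEB-hardneg", "TAT-DQA", split="train")

# Inspect the schema and a sample row, including its image path fields.
print(ds.column_names)
print(ds[0])
```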

[GitHub](https://github.com/haon-chen/mmE5)

## Image Preparation

First, you should prepare the images used for training:

### Image Downloads

- **Download all images used in mmE5:**

You can use the script provided in our source code to download all of the images used in mmE5:

```bash
git clone https://github.com/haon-chen/mmE5.git
cd mmE5
bash scripts/prepare_images.sh
```

### Image Organization

```
images/
├── mbeir_images/
│   └── oven_images/
│       └── ... .jpg (InfoSeek)
├── ArxivQA/
│   └── images/
│       └── ... .jpg (ArxivQA)
├── TAT-DQA/
│   └── ... .png (TAT-DQA)
├── A-OKVQA/
│   └── Train/
│       └── ... .jpg (A-OKVQA)
└── ... (MMEB training images)
```

You can refer to the image paths in each subset to view the image organization.
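
Once the images are downloaded, the path stored in a row resolves relative to the directory containing `images/`. A sketch of opening one image with Pillow, continuing from the loading example above (`qry_image_path` is an assumed field name for illustration; check `ds.column_names` for the actual schema of each subset):

```python
import os
from PIL import Image

# Root directory that contains the images/ folder from the layout above (assumption).
IMAGE_ROOT = "."

row = ds[0]
# "qry_image_path" is illustrative; the real field names vary by subset.
img = Image.open(os.path.join(IMAGE_ROOT, row["qry_image_path"]))
print(img.size)
```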

You can also customize your image paths by altering the `image_path` fields.
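
For example, one way to point the paths at a custom image root is to rewrite every `image_path`-like field with `Dataset.map`. A hedged sketch, assuming the stored paths are strings that begin with the default `images/` prefix shown above:

```python
import os

NEW_ROOT = "/data/mmE5/images"  # your custom image root (assumption)

def remap_paths(example):
    # Rewrite every string field whose name contains "image_path" so that it
    # points at NEW_ROOT instead of the default "images/" prefix.
    for key, value in example.items():
        if "image_path" in key and isinstance(value, str) and value:
            example[key] = os.path.join(NEW_ROOT, os.path.relpath(value, "images"))
    return example

ds = ds.map(remap_paths)
```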

## Citation

If you use this dataset in your research, please cite the associated paper.

```bibtex
@article{chen2025mmE5,
  title={mmE5: Improving Multimodal Multilingual Embeddings via High-quality Synthetic Data},
  author={Chen, Haonan and Wang, Liang and Yang, Nan and Zhu, Yutao and Zhao, Ziliang and Wei, Furu and Dou, Zhicheng},
  journal={arXiv preprint arXiv:2502.08468},
  year={2025}
}
```