yizhilll and SiweiWu committed
Commit
ec7f526
1 Parent(s): 74bd4ab

Update README.md (#4)

- Update README.md (12d12857d63683cea254f4c109498cdf52c27b7a)


Co-authored-by: SiweiWu <[email protected]>

Files changed (1)
  1. README.md +63 -0
README.md CHANGED
@@ -38,5 +38,68 @@ dataset_info:
  dataset_size: 62900112549.824005
---

# Dataset Card for "SciMMIR_dataset"

## SciMMIR

This is the repository for the paper [SciMMIR: Benchmarking Scientific Multi-modal Information Retrieval](https://arxiv.org/abs/2401.13478).

<div align="center">
<img src="./imgs/Framework.png" width="80%" />
</div>

In this paper, we propose a novel SciMMIR benchmark and a corresponding dataset designed to address the gap in evaluating multi-modal information retrieval (MMIR) models in the scientific domain.

It is worth mentioning that we define a hierarchical data architecture of "Two subsets, Five subcategories" and use human-created keywords to classify the data (as shown in the table below).

<div align="center">
<img src="./imgs/data_architecture.png" width="50%" />
</div>

As shown in the table below, we conducted extensive baseline experiments (both fine-tuning and zero-shot) across the various subsets and subcategories.

![main_result](./imgs/main_result.png)

For more detailed experimental results and analysis, please refer to our paper [SciMMIR](https://arxiv.org/abs/2401.13478).
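
As a concrete illustration of what a zero-shot baseline involves, the sketch below scores a small batch of SciMMIR image-text pairs with an off-the-shelf CLIP checkpoint via the Hugging Face `transformers` library; the model name, batch size, and scoring details are illustrative assumptions rather than the exact configuration used in the paper.

```python
# Minimal zero-shot text-to-image retrieval sketch (illustrative only,
# not the paper's exact baseline setup).
import datasets
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Load only the test split and take a small batch of image-text pairs
test_data = datasets.load_dataset("m-a-p/SciMMIR", split="test")
batch = test_data.select(range(8))

inputs = processor(text=batch["text"], images=batch["image"],
                   return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_text[i, j] is the similarity between caption i and image j;
# for each caption, the retrieved image is the highest-scoring column.
retrieved = outputs.logits_per_text.argmax(dim=-1)
print(retrieved)
```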

## Dataset

The SciMMIR benchmark dataset used in this paper contains 537K scientific image-text pairs extracted from arXiv papers published in the most recent six months at the time of collection (2023.05 to 2023.10). We will continue to expand the dataset by extracting data from more arXiv papers and will provide larger versions.

The dataset can be obtained from Hugging Face Datasets at [m-a-p/SciMMIR](https://huggingface.co/datasets/m-a-p/SciMMIR), and the following code shows how to load it:

```python
import datasets

# Download (or load from the local cache) all splits of SciMMIR
ds_remote = datasets.load_dataset("m-a-p/SciMMIR")

# Each example is an image-text pair with a subcategory label
test_data = ds_remote['test']
caption = test_data[0]['text']      # the caption of the figure/table
image_type = test_data[0]['class']  # subcategory label
image = test_data[0]['image']       # the extracted figure/table image
```
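
Each example's `class` label makes it easy to inspect how a split is distributed over the "Two subsets, Five subcategories" hierarchy, or to restrict evaluation to a single subcategory. The snippet below is a small sketch using the standard `datasets` API; it does not hard-code any label strings, since those are whatever the `class` column contains.

```python
from collections import Counter

import datasets

# Load only the test split
test_data = datasets.load_dataset("m-a-p/SciMMIR", split="test")

# Count how many examples fall into each subcategory ('class' column)
class_counts = Counter(test_data["class"])
print(class_counts)

# Keep only the examples of one subcategory, e.g. the most frequent one
target = class_counts.most_common(1)[0][0]
subset = test_data.filter(lambda example: example["class"] == target)
print(target, len(subset))
```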

## Code

The code for this paper can be found in our [GitHub repository](https://github.com/Wusiwei0410/SciMMIR).

## Potential TODOs before ACL

**TODO**: case study table

**TODO**: statistics of the paper fields (perhaps in the appendix)

**TODO**: see whether it is possible to further divide the "Figure Results" subsets.

## Citation

```bibtex
@misc{wu2024scimmir,
      title={SciMMIR: Benchmarking Scientific Multi-modal Information Retrieval},
      author={Siwei Wu and Yizhi Li and Kang Zhu and Ge Zhang and Yiming Liang and Kaijing Ma and Chenghao Xiao and Haoran Zhang and Bohao Yang and Wenhu Chen and Wenhao Huang and Noura Al Moubayed and Jie Fu and Chenghua Lin},
      year={2024},
      eprint={2401.13478},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}
```

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)