## This is the benchmark dataset for ["A Benchmark for Multi-modal Foundation Models on Low-level Vision: from Single Images to Pairs"](https://arxiv.org/abs/2402.07116)

# The structure of the jsonl files is as follows:

1. q-bench2-a1-dev.jsonl (**with** *img_path*, *question*, *answer_candidates*, *correct_answer*)
2. q-bench2-a1-test.jsonl (**with** *img_path*, *question*, *answer_candidates*, **without** *correct_answer*)
3. q-bench2-a2.jsonl (**with** *img_path*, *empty response*)
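
Each file is plain jsonl (one JSON object per line), so it can be loaded with a few lines of Python. A minimal loading sketch, using the field names listed above:

```python
import json

def load_jsonl(path):
    # read a jsonl file into a list of dicts, one JSON object per line
    with open(path, "r", encoding="utf-8") as f:
        return [json.loads(line) for line in f]

dev_set = load_jsonl("q-bench2-a1-dev.jsonl")
print(dev_set[0]["img_path"], dev_set[0]["question"])
```
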
# The img_path is organized as *prefix* + *img1* + \_cat\_ + *img2* + *.jpg*

```python
import os

def get_img_names(img_path, prefix = "path_to_all_single_images"):
    # img_path ends in img1 + "_cat_" + img2 + ".jpg" (see the convention
    # above), so drop any leading directory and split on "_cat_" to recover
    # the two single-image names; appending ".jpg" to img1 assumes the
    # single images are stored with a .jpg extension.
    img_paths = os.path.basename(img_path).split("_cat_")
    img1_name = os.path.join(prefix, img_paths[0] + ".jpg")
    img2_name = os.path.join(prefix, img_paths[1])
    return img1_name, img2_name
```
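
For a quick sanity check, calling the helper on a hypothetical file name (not an actual dataset entry) splits it as expected:

```python
print(get_img_names("A_cat_B.jpg", prefix="all_single_images"))
# -> ('all_single_images/A.jpg', 'all_single_images/B.jpg')
```
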
# The image file structure is:

1. all_single_images: all of the single images used
2. llvisionqa_compare_dev: the concatenated images for the dev subset of the perception-compare task
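
Putting the pieces together: each *img_path* from the dev jsonl should name a concatenated pair in llvisionqa_compare_dev, and the helper above maps it back to the two originals in all_single_images. A sketch reusing `load_jsonl` and `get_img_names` from the snippets above:

```python
entry = load_jsonl("q-bench2-a1-dev.jsonl")[0]
img1, img2 = get_img_names(entry["img_path"], prefix="all_single_images")
print(entry["question"], img1, img2)
```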