topyun committed (verified) · Commit c4ed33c · 1 Parent(s): 4a729ae

Update README.md

Files changed (1):
  1. README.md +38 -4
README.md CHANGED
@@ -34,6 +34,13 @@ configs:
# SPARK (multi-vision Sensor Perception And Reasoning benchmarK)

<!-- Provide a quick summary of the dataset. -->
+ <p align="center">
+ <img src="resources/problems.png" :height="300px" width="600px">
+ </p>
+
+ <p align="center">
+ <img src="resources/examples.png" :height="400px" width="800px">
+ </p>

SPARK can reduce the fundamental multi-vision sensor information gap between images and multi-vision sensors. We generated 6,248 vision-language test samples automatically to investigate multi-vision sensory perception and multi-vision sensory reasoning on physical sensor knowledge proficiency across different formats, covering different types of sensor-related questions.

@@ -42,11 +49,38 @@ SPARK can reduce the fundamental multi-vision sensor information gap between ima

## Uses

- <!-- Address questions around how the dataset is intended to be used. -->
-
- ### Direct Use

- <!-- This section describes suitable use cases for the dataset. -->
+ You can easily download the dataset as follows:
+ ```python
+ from datasets import load_dataset
+ test_dataset = load_dataset("topyun/SPARK", split="train")
+ ```
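+
+ If you want a quick look at what each sample contains, the generic `datasets` API is enough; the minimal sketch below assumes nothing about the column names and simply prints whatever schema the loaded dataset reports:
+ ```python
+ from datasets import load_dataset
+
+ test_dataset = load_dataset("topyun/SPARK", split="train")
+
+ # Print the number of rows and the column schema reported by the dataset itself
+ print(test_dataset)
+ print(test_dataset.features)
+
+ # Inspect a single sample as a plain Python dict
+ sample = test_dataset[0]
+ print(sample.keys())
+ ```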
+
+ Additionally, we have provided two example evaluation scripts: Open Model ([**test.py**](https://github.com/top-yun/SPARK/blob/main/test.py)) and Closed Model ([**test_closed_models.py**](https://github.com/top-yun/SPARK/blob/main/test_closed_models.py)). You can easily run them as shown below.
+
+ If you have 4 GPUs and want to run the experiment with llava-1.5-7b, you can do the following:
+ ```bash
+ accelerate launch --config_file utils/ddp_accel_fp16.yaml \
+     --num_processes=4 \
+     test.py \
+     --batch_size 1 \
+     --model llava
+ ```
+
+ When running a closed model, make sure to insert your API KEY into the [**config.py**](https://github.com/top-yun/SPARK/blob/main/config.py) file.
+ If you have 1 GPU and want to run the experiment with gpt-4o, you can do the following:
+ ```bash
+ accelerate launch --config_file utils/ddp_accel_fp16.yaml \
+     --num_processes=$n_gpu \
+     test_closed_models.py \
+     --batch_size 8 \
+     --model gpt \
+     --multiprocess True
+ ```
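+
+ In practice, "insert your API KEY" just means defining the key as a constant in that file; the sketch below is only an illustration, and the variable name is an assumption (use whatever name the repository's config.py actually defines):
+ ```python
+ # config.py (hypothetical sketch): the real file defines the names that test_closed_models.py reads
+ OPENAI_API_KEY = "sk-..."  # assumed variable name; paste your own key here
+ ```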
+
+ ### Tips
+ The evaluation method we've implemented simply checks whether 'A', 'B', 'C', 'D', 'yes', or 'no' appears at the beginning of the model's answer.
+ So, if the model you're evaluating produces unexpected answers (e.g., "'B'ased on ..." or "'C'onsidering ..."), you can resolve this by adding "Do not include any additional text." to the end of the prompt.
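+
+ As a rough sketch of what that check boils down to (an illustration of the idea, not the exact code in test.py):
+ ```python
+ def extract_choice(answer):
+     """Return the first of 'A', 'B', 'C', 'D', 'yes', 'no' that the answer starts with, else None."""
+     text = answer.strip()
+     for option in ("A", "B", "C", "D", "yes", "no"):
+         if text.startswith(option):
+             return option
+     return None
+
+ # A verbose reply is scored by its first characters, which is why the tip above helps:
+ print(extract_choice("Based on the thermal image, the answer is C."))  # prints "B"
+ print(extract_choice("yes, the object is visible."))                   # prints "yes"
+ ```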


### Source Data