Modalities: Image, Text
Formats: parquet
Libraries: Datasets, Dask
topyun committed fed9c65 (verified) · 1 parent: edef0c6

Update README.md

Files changed (1): README.md (+3 -5)
README.md CHANGED
@@ -33,10 +33,9 @@ configs:
 
 # SPARK (multi-vision Sensor Perception And Reasoning benchmarK)
 
-<!-- Provide a quick summary of the dataset. -->
-<p align="center">
-<img src="https://raw.githubusercontent.com/top-yun/SPARK/main/resources/problems.png" :height="300px" width="600px">
-</p>
+[**github**](https://github.com/top-yun/SPARK)[**🤗 Dataset**](https://huggingface.co/datasets/topyun/SPARK)
+
+## Dataset Details
 
 <p align="center">
 <img src="https://raw.githubusercontent.com/top-yun/SPARK/main/resources/examples.png" :height="400px" width="800px">
@@ -44,7 +43,6 @@ configs:
 
 SPARK can reduce the fundamental multi-vision sensor information gap between images and multi-vision sensors. We generated 6,248 vision-language test samples automatically to investigate multi-vision sensory perception and multi-vision sensory reasoning on physical sensor knowledge proficiency across different formats, covering different types of sensor-related questions.
 
-## Dataset Details
 
 
 ## Uses
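
Since the card above tags the `Datasets` library and the parquet format, here is a minimal loading sketch; the `"train"` split name and the act of inspecting the first row are assumptions, as this commit only touches the README and does not show the dataset's split layout.

```python
# Minimal sketch: load the SPARK benchmark with the Hugging Face `datasets`
# library. Assumes the default config; the "train" split name is an
# assumption, since this commit does not document the split layout.
from datasets import load_dataset

ds = load_dataset("topyun/SPARK", split="train")

print(ds)     # prints the column names and row count
print(ds[0])  # inspect one vision-language test sample (assumed indexable)
```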