---
dataset_info:
  - config_name: vision_bench_0701
    features:
      - name: question_id
        dtype: string
      - name: instruction
        dtype: string
      - name: image
        dtype: image
      - name: language
        dtype: string
    splits:
      - name: test
        num_bytes: 1654009592
        num_examples: 500
    download_size: 1653981819
    dataset_size: 1654009592
  - config_name: vision_bench_0617
    features:
      - name: question_id
        dtype: string
      - name: instruction
        dtype: string
      - name: image
        dtype: image
      - name: language
        dtype: string
    splits:
      - name: test
        num_bytes: 1193682526
        num_examples: 500
    download_size: 1193578497
    dataset_size: 1193682526
configs:
  - config_name: vision_bench_0701
    data_files:
      - split: test
        path: vision_bench_0701/test-*
  - config_name: vision_bench_0617
    data_files:
      - split: test
        path: vision_bench_0617/test-*
---

# WildVision-Bench

We provide two versions of the WildVision-Bench data:

- `vision_bench_0617`: the 500 examples selected to best simulate the WildVision-Arena Elo ranking; this is the same data used in the paper.
- `vision_bench_0701`: 500 examples obtained by further filtering the data with NSFW detection and manual selection. The leaderboard for this version is still in preparation.
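
Both configs ship a single `test` split of 500 examples with `question_id`, `instruction`, `image`, and `language` fields. A minimal loading sketch, assuming the repo id `WildVision/wildvision-bench` (adjust if the dataset lives under a different namespace):

```python
from datasets import load_dataset

# Repo id assumed from the Hugging Face namespace; each config has one
# "test" split of 500 examples (see the metadata above).
bench = load_dataset("WildVision/wildvision-bench", "vision_bench_0617", split="test")

example = bench[0]
print(example["question_id"], example["language"])
print(example["instruction"])
example["image"].show()  # the `image` feature decodes to a PIL.Image
```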

## Evaluation

Please refer to our GitHub repository for evaluation instructions.

If you want to evaluate your model, please use the `vision_bench_0617` version so that its performance can be fairly compared with the other models on the leaderboard below.

## Leaderboard (vision_bench_0617)

| Model | Score | 95% CI | Win Rate | Reward | Much Better | Better | Tie | Worse | Much Worse | Avg Tokens |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| gpt-4o | 89.15 | (-1.9, 1.5) | 80.6% | 56.4 | 255 | 148 | 14 | 72 | 11 | 142 |
| gpt-4-vision-preview | 79.78 | (-2.9, 2.2) | 71.8% | 39.4 | 182 | 177 | 22 | 91 | 28 | 138 |
| Reka-Flash | 64.65 | (-2.6, 2.7) | 58.8% | 18.9 | 135 | 159 | 28 | 116 | 62 | 168 |
| claude-3-opus-20240229 | 62.03 | (-3.7, 2.8) | 53.0% | 13.5 | 103 | 162 | 48 | 141 | 46 | 105 |
| yi-vl-plus | 55.05 | (-3.4, 2.3) | 52.8% | 7.2 | 98 | 166 | 29 | 124 | 83 | 140 |
| liuhaotian/llava-v1.6-34b | 51.89 | (-3.4, 3.8) | 49.2% | 2.5 | 90 | 156 | 26 | 145 | 83 | 153 |
| claude-3-sonnet-20240229 | 50.0 | (0.0, 0.0) | 0.2% | 0.1 | 0 | 1 | 499 | 0 | 0 | 114 |
| claude-3-haiku-20240307 | 37.83 | (-2.6, 2.8) | 30.6% | -16.5 | 54 | 99 | 47 | 228 | 72 | 89 |
| gemini-pro-vision | 35.57 | (-3.0, 3.2) | 32.6% | -21.0 | 80 | 83 | 27 | 167 | 143 | 68 |
| liuhaotian/llava-v1.6-vicuna-13b | 33.87 | (-2.9, 3.3) | 33.8% | -21.4 | 62 | 107 | 25 | 167 | 139 | 136 |
| deepseek-ai/deepseek-vl-7b-chat | 33.61 | (-3.3, 3.0) | 35.6% | -21.2 | 59 | 119 | 17 | 161 | 144 | 116 |
| THUDM/cogvlm-chat-hf | 32.01 | (-2.2, 3.0) | 30.6% | -26.4 | 75 | 78 | 15 | 172 | 160 | 61 |
| liuhaotian/llava-v1.6-vicuna-7b | 26.41 | (-3.3, 3.1) | 27.0% | -31.4 | 45 | 90 | 36 | 164 | 165 | 130 |
| idefics2-8b-chatty | 23.96 | (-2.2, 2.4) | 26.4% | -35.8 | 44 | 88 | 19 | 164 | 185 | 135 |
| Qwen/Qwen-VL-Chat | 18.08 | (-1.9, 2.2) | 19.6% | -47.9 | 42 | 56 | 15 | 155 | 232 | 69 |
| llava-1.5-7b-hf | 15.5 | (-2.4, 2.4) | 18.0% | -47.8 | 28 | 62 | 25 | 174 | 211 | 185 |
| liuhaotian/llava-v1.5-13b | 14.43 | (-1.7, 1.6) | 16.8% | -52.5 | 28 | 56 | 19 | 157 | 240 | 91 |
| BAAI/Bunny-v1_0-3B | 12.98 | (-2.0, 2.1) | 16.6% | -54.4 | 23 | 60 | 10 | 164 | 243 | 72 |
| openbmb/MiniCPM-V | 11.95 | (-2.4, 2.1) | 13.6% | -57.5 | 25 | 43 | 16 | 164 | 252 | 86 |
| bczhou/tiny-llava-v1-hf | 8.3 | (-1.6, 1.2) | 11.0% | -66.2 | 16 | 39 | 15 | 127 | 303 | 72 |
| unum-cloud/uform-gen2-qwen-500m | 7.81 | (-1.3, 1.7) | 10.8% | -68.5 | 16 | 38 | 11 | 115 | 320 | 92 |
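
The Win Rate and Reward columns follow from the outcome counts: Win Rate is the share of Much Better and Better outcomes, and Reward weights outcomes +1 / +0.5 / 0 / -0.5 / -1, scaled by 100. A minimal sketch reproducing both (helper names are ours, for illustration; the values check out against the rows above):

```python
# Reproduce Win Rate and Reward from a row's outcome counts.
def win_rate(much_better, better, tie, worse, much_worse):
    total = much_better + better + tie + worse + much_worse
    return 100 * (much_better + better) / total

def reward(much_better, better, tie, worse, much_worse):
    total = much_better + better + tie + worse + much_worse
    return 100 * (much_better + 0.5 * better - 0.5 * worse - much_worse) / total

# gpt-4o row: 255, 148, 14, 72, 11 -> 80.6% win rate, 56.4 reward
print(win_rate(255, 148, 14, 72, 11), reward(255, 148, 14, 72, 11))
```

The claude-3-sonnet-20240229 row (499 ties out of 500, reward 0.1) suggests it is the reference model against which the other models are judged.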

## Citation

```bibtex
@article{lu2024wildvision,
  title={WildVision: Evaluating Vision-Language Models in the Wild with Human Preferences},
  author={Lu, Yujie and Jiang, Dongfu and Chen, Wenhu and Wang, William Yang and Choi, Yejin and Lin, Bill Yuchen},
  journal={arXiv preprint arXiv:2406.11069},
  year={2024}
}
```