---
language:
  - ko
license: cc-by-nc-4.0
dataset_info:
  features:
    - name: index
      dtype: string
    - name: question
      dtype: string
    - name: choice_a
      dtype: string
    - name: choice_b
      dtype: string
    - name: choice_c
      dtype: string
    - name: choice_d
      dtype: string
    - name: answer
      dtype: string
    - name: category
      dtype: string
    - name: image
      dtype: image
  splits:
    - name: test
      num_bytes: 9681522
      num_examples: 240
  download_size: 3340794
  dataset_size: 9681522
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---
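
Given this configuration, the single `test` split can be loaded with the Hugging Face `datasets` library. A minimal sketch — the repository id `NCSOFT/K-DTCBench` is an assumption based on the hosting organization, and `load_kdtcbench` is a hypothetical helper:

```python
def load_kdtcbench(repo_id: str = "NCSOFT/K-DTCBench"):
    """Load the K-DTCBench test split (240 examples).

    The repo id above is an assumption; pass a different id if the
    dataset is hosted elsewhere.
    """
    # Lazy import: requires `pip install datasets`.
    from datasets import load_dataset

    ds = load_dataset(repo_id, split="test")
    # Each row has: index, question, choice_a..choice_d, answer,
    # category (document/table/chart), and a PIL image.
    return ds
```

Calling `load_kdtcbench()` downloads roughly 3.3 MB of parquet data, matching `download_size` above.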

K-DTCBench

We introduce K-DTCBench, a newly developed Korean benchmark featuring both computer-generated and handwritten documents, tables, and charts. It consists of 80 questions per image type, with two questions per image, for a total of 240 questions. The benchmark is designed to evaluate whether vision-language models can process images in different formats and remain applicable across diverse domains. All images were generated with made-up values and statements for evaluation purposes only. To build K-DTCBench, we scanned handwritten documents, tables, and charts, or created digital objects with the matplotlib library. Digital and handwritten images appear in equal proportion, each constituting 50%.

For more details, please refer to the VARCO-VISION technical report.

Example questions by category (accompanying images omitted; English translations in parentheses):

document question: 보고서의 주요 내용이 아닌 것은 무엇인가요? (Which of the following is not part of the report's main content?)
A: 안전 인프라 확충 (Expanding safety infrastructure)
B: 재난 및 사고 예방 체계 구축 (Building a disaster and accident prevention system)
C: 시민 안전 교육 강화 (Strengthening citizen safety education)
D: 긴급 대응 시스템 개선 (Improving the emergency response system)

table question: 인프라 구축 항목의 점수는 몇 점인가요? (What is the score of the infrastructure construction item?)
A: 4
B: 6
C: 8
D: 10

chart question: 직장인들이 퇴근 후 두 번째로 선호하는 활동은 무엇인가요? (What is the second most preferred after-work activity among office workers?)
A: 운동 (Exercise)
B: 여가활동 (Leisure activities)
C: 자기개발 (Self-development)
D: 휴식 (Rest)

Inference Prompt

<image>
{question}
Options: A: {A}, B: {B}, C: {C}, D: {D}
주어진 선택지 중 해당 옵션의 문자로 직접 답하세요.

(The Korean instruction tells the model to answer directly with the letter of the corresponding option.)
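
The prompt above can be filled in from a dataset row with plain string formatting. A minimal sketch — field names follow the `dataset_info` features, and `build_prompt` is a hypothetical helper, not part of the official evaluation code:

```python
def build_prompt(row: dict) -> str:
    """Render the K-DTCBench inference prompt for one example.

    `<image>` is a placeholder that the model's chat template is
    expected to replace with actual image tokens.
    """
    return (
        "<image>\n"
        f"{row['question']}\n"
        f"Options: A: {row['choice_a']}, B: {row['choice_b']}, "
        f"C: {row['choice_c']}, D: {row['choice_d']}\n"
        "주어진 선택지 중 해당 옵션의 문자로 직접 답하세요."
    )

# Example with the table question shown above (row values illustrative):
example = {
    "question": "인프라 구축 항목의 점수는 몇 점인가요?",
    "choice_a": "4", "choice_b": "6", "choice_c": "8", "choice_d": "10",
}
prompt = build_prompt(example)
```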

Results

Below are the evaluation results of various vision-language models, including VARCO-VISION-14B, on K-DTCBench.

| Benchmark | VARCO-VISION-14B | Pangea-7B | Pixtral-12B | Molmo-7B-D | Qwen2-VL-7B-Instruct | LLaVA-One-Vision-7B |
| --- | --- | --- | --- | --- | --- | --- |
| K-DTCBench | 84.58 | 48.33 | 27.50 | 45.83 | 75.00 | 52.91 |
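
The scores are accuracy in percent over the 240 questions. A minimal sketch of how such a score could be computed from model outputs — the letter-extraction heuristic here is an assumption for illustration, not the authors' exact evaluation protocol:

```python
import re


def extract_choice(output: str) -> "str | None":
    """Pull the first standalone A-D letter out of a model response."""
    m = re.search(r"\b([ABCD])\b", output.strip().upper())
    return m.group(1) if m else None


def accuracy(predictions: list, answers: list) -> float:
    """Percentage of responses whose extracted letter matches the gold answer."""
    correct = sum(
        extract_choice(p) == a.upper() for p, a in zip(predictions, answers)
    )
    return 100.0 * correct / len(answers)


# Toy usage: 3 of 4 extracted letters match the gold answers.
preds = ["A", "The answer is B.", "C", "D"]
gold = ["a", "B", "C", "A"]
score = accuracy(preds, gold)  # → 75.0
```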

Citation

If you use K-DTCBench in your research, please cite the following:

@misc{ju2024varcovisionexpandingfrontierskorean,
      title={VARCO-VISION: Expanding Frontiers in Korean Vision-Language Models}, 
      author={Jeongho Ju and Daeyoung Kim and SunYoung Park and Youngjune Kim},
      year={2024},
      eprint={2411.19103},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.19103}, 
}