---
language:
- ko
license: cc-by-nc-4.0
dataset_info:
  features:
  - name: index
    dtype: string
  - name: question
    dtype: string
  - name: choice_a
    dtype: string
  - name: choice_b
    dtype: string
  - name: choice_c
    dtype: string
  - name: choice_d
    dtype: string
  - name: answer
    dtype: string
  - name: category
    dtype: string
  - name: image
    dtype: image
  splits:
  - name: test
    num_bytes: 9681522.0
    num_examples: 240
  download_size: 3340794
  dataset_size: 9681522.0
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
# K-DTCBench

We introduce **K-DTCBench**, a newly developed Korean benchmark featuring both computer-generated and handwritten documents, tables, and charts.
It consists of 80 questions for each image type, with two questions per image, for a total of 240 questions.
The benchmark is designed to evaluate whether vision-language models can process images in different formats and generalize across diverse domains.
All images were created with made-up values and statements for evaluation purposes only: we scanned handwritten documents, tables, and charts, and rendered their digital counterparts with the matplotlib library.
Digital and handwritten images appear in equal proportion, each constituting 50% of the benchmark.
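
The schema in the YAML header maps directly onto a `datasets` loading call. Below is a minimal loading sketch; the `NCSOFT/K-DTCBench` repository id is an assumption inferred from the linked VARCO-VISION repositories, so verify the exact id before use.

```python
from datasets import load_dataset

# Assumed repository id; adjust if the dataset lives elsewhere.
dataset = load_dataset("NCSOFT/K-DTCBench", split="test")

print(len(dataset))         # 240 questions
example = dataset[0]
print(example["category"])  # "document", "table", or "chart"
print(example["question"])  # multiple-choice question text
example["image"]            # PIL image of the scanned or rendered page
```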


For more details, please refer to the VARCO-VISION technical report.

- **Technical Report:** [VARCO-VISION: Expanding Frontiers in Korean Vision-Language Models](https://arxiv.org/pdf/2411.19103)
- **Blog (Korean):** [VARCO-VISION Technical Report Summary](https://ncsoft.github.io/ncresearch/95ad8712e60063e9ac97538504ac3eea0ac530af)
- **Huggingface Version Model:** [NCSOFT/VARCO-VISION-14B-HF](https://huggingface.co/NCSOFT/VARCO-VISION-14B-HF)

<table>
<tr>
  <th>Category</th>
  <th>Image</th>
  <th>K-DTCBench</th>
</tr>
<tr>
  <td align="center">document</td>
  <td width=350><img src="https://cdn-uploads.huggingface.co/production/uploads/624ceaa38746b2f5773c2d1c/Ipi4HR73P-PDC5XcgP3WF.png"></td>
  <td>
    <strong>question:</strong> Which of the following is not part of the main content of the report?
    <br>
    <strong>A:</strong> Expansion of safety infrastructure
    <br>
    <strong>B:</strong> Establishment of a disaster and accident prevention system
    <br>
    <strong>C:</strong> Strengthening of citizen safety education
    <br>
    <strong>D:</strong> Improvement of the emergency response system
  </td>
</tr>
<tr>
  <td align="center">table</td>
  <td width=350><img src="https://cdn-uploads.huggingface.co/production/uploads/624ceaa38746b2f5773c2d1c/dz_FuPnpZ5P4P3LEB5PZ0.png"></td>
  <td>
    <strong>question:</strong> What is the score of the infrastructure development item?
    <br>
    <strong>A:</strong> 4
    <br>
    <strong>B:</strong> 6
    <br>
    <strong>C:</strong> 8
    <br>
    <strong>D:</strong> 10
  </td>
</tr>
<tr>
  <td align="center">chart</td>
  <td width=350><img src="https://cdn-uploads.huggingface.co/production/uploads/624ceaa38746b2f5773c2d1c/IbNMPPgd974SbCAsz6zIS.png"></td>
  <td>
    <strong>question:</strong> What is the second most preferred after-work activity among office workers?
    <br>
    <strong>A:</strong> Exercise
    <br>
    <strong>B:</strong> Leisure activities
    <br>
    <strong>C:</strong> Self-development
    <br>
    <strong>D:</strong> Rest
  </td>
</tr>
</table>
<br>

## Inference Prompt
```
<image>
{question}
Options: A: {A}, B: {B}, C: {C}, D: {D}
Answer directly with the letter of the corresponding option from the given choices.
```
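
Below is a minimal formatting sketch for this template, assuming the feature names from the schema in the header (`question`, `choice_a` through `choice_d`); the `<image>` token is kept as a literal placeholder, since each model family splices images into the prompt differently.

```python
PROMPT_TEMPLATE = (
    "<image>\n"
    "{question}\n"
    "Options: A: {A}, B: {B}, C: {C}, D: {D}\n"
    "Answer directly with the letter of the corresponding option from the given choices."
)

def build_prompt(example: dict) -> str:
    """Fill the K-DTCBench inference template from one dataset row."""
    return PROMPT_TEMPLATE.format(
        question=example["question"],
        A=example["choice_a"],
        B=example["choice_b"],
        C=example["choice_c"],
        D=example["choice_d"],
    )
```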

<br>

## Results
Below are the evaluation results of various vision-language models, including [VARCO-VISION-14B](https://huggingface.co/NCSOFT/VARCO-VISION-14B), on K-DTCBench.

| Benchmark | VARCO-VISION-14B | Pangea-7B | Pixtral-12B | Molmo-7B-D | Qwen2-VL-7B-Instruct | LLaVA-OneVision-7B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| K-DTCBench | **84.58** | 48.33 | 27.50 | 45.83 | 75.00 | 52.91 |
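
The scores above are consistent with plain exact-match accuracy over the 240 questions (for example, 84.58 corresponds to 203/240), reported as a percentage. A hypothetical scoring sketch under that assumption:

```python
def accuracy(predictions: list[str], answers: list[str]) -> float:
    """Percentage of predictions whose leading letter matches the gold answer (A-D)."""
    correct = sum(
        pred.strip().upper()[:1] == gold.strip().upper()
        for pred, gold in zip(predictions, answers, strict=True)
    )
    return 100.0 * correct / len(answers)

print(accuracy(["A", "B", "C"], ["A", "B", "D"]))  # 66.66666666666667
```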

<br>

## Citation
If you use K-DTCBench in your research, please cite the following:
```bibtex
@misc{ju2024varcovisionexpandingfrontierskorean,
      title={VARCO-VISION: Expanding Frontiers in Korean Vision-Language Models}, 
      author={Jeongho Ju and Daeyoung Kim and SunYoung Park and Youngjune Kim},
      year={2024},
      eprint={2411.19103},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.19103}, 
}
```