DocTrack: A Visually-Rich Document Dataset Really Aligned with Human Eye Movement for Machine Reading
Abstract
The use of visually-rich documents (VRDs) in various fields has created a demand for Document AI models that can read and comprehend documents like humans, which requires overcoming technical, linguistic, and cognitive barriers. Unfortunately, the lack of appropriate datasets has significantly hindered advancements in the field. To address this issue, we introduce DocTrack, a VRD dataset really aligned with human eye-movement information using eye-tracking technology. This dataset can be used to investigate the challenges mentioned above. Additionally, we explore the impact of human reading order on document understanding tasks and examine what would happen if a machine reads in the same order as a human. Our results suggest that although Document AI models have made significant progress, they still have a long way to go before they can read VRDs as accurately, continuously, and flexibly as humans do. These findings have potential implications for future research and development of Document AI models. The data is available at https://github.com/hint-lab/doctrack.