nn-auto-bench-ds
`nn-auto-bench-ds` is a dataset designed for key information extraction (KIE) and serves as a benchmark dataset for nn-auto-bench.
Dataset Overview
The dataset comprises 1,000 documents, categorized into the following types:
- Invoice
- Receipt
- Passport
- Bank Statement
The documents are primarily in English, with some in German and Arabic. Each document is annotated for key information extraction. The dataset can be used to measure an LLM's one-shot performance on KIE tasks.
Dataset Schema
The dataset includes the following columns:
- `image_path`: File path to the document image.
- `content`: OCR-extracted text from the image.
- `accepted`: Ground-truth answer.
- `Queried_labels`: Labels, fields, or keys targeted for extraction.
- `Queried_col_headers`: Column headers targeted for extraction.
- `ctx_1`: OCR text from an example document.
- `ctx_1_image_path`: File path to the example document's image.
- `ctx_1_accepted`: Ground-truth answer for the example document.
In total, there are 54 unique fields/keys/labels to be extracted from the documents.
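To illustrate how the columns fit together, the sketch below assembles a one-shot KIE prompt from a single record: the `ctx_1*` columns supply the worked example, and `content` plus `Queried_labels` form the actual query. The row dict and the prompt template are illustrative assumptions, not the official nn-auto-bench prompt; real rows come from the loaded dataset.

```python
def build_one_shot_prompt(row):
    """Assemble a one-shot KIE prompt from one dataset record.

    Assumed template: the ctx_1* columns provide the in-context
    example; `content` is the target document to extract from.
    """
    return (
        "Extract the following fields: "
        f"{', '.join(row['Queried_labels'])}\n\n"
        "Example document:\n"
        f"{row['ctx_1']}\n"
        "Example answer:\n"
        f"{row['ctx_1_accepted']}\n\n"
        "Target document:\n"
        f"{row['content']}\n"
        "Answer:"
    )

# Hypothetical record with invented values, shaped like the schema above.
row = {
    "content": "INVOICE #123 Total: $40.00",
    "accepted": '{"invoice_number": "123", "total": "$40.00"}',
    "Queried_labels": ["invoice_number", "total"],
    "ctx_1": "INVOICE #9 Total: $5.00",
    "ctx_1_accepted": '{"invoice_number": "9", "total": "$5.00"}',
}
prompt = build_one_shot_prompt(row)
```

The model's response would then be compared against the `accepted` column to score the extraction.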
Loading the Dataset
To load the dataset in Python using the `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("nanonets/nn-auto-bench-ds")
```
Data Sources
This dataset aggregates information from multiple open-source datasets, including:
- German Invoices Dataset
- Personal Financial Dataset for India
- RVL-CDIP Invoice Dataset
- FATURA Dataset
- Find It Again
- Generated USA Passports Dataset
- Synthetic Passports Dataset
This dataset is valuable for benchmarking key information extraction models and advancing research in document understanding and natural language processing (NLP).