---
dataset_info:
- config_name: documents
  features:
  - name: chunk_id
    dtype: string
  - name: chunk
    dtype: string
  splits:
  - name: train
    num_bytes: 34576803.19660113
    num_examples: 49069
  - name: test
    num_bytes: 1082352.8033988737
    num_examples: 1536
  download_size: 20677449
  dataset_size: 35659156.0
- config_name: queries
  features:
  - name: original_query
    dtype: string
  - name: query
    dtype: string
  - name: chunk_id
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 568105.4847870183
    num_examples: 1872
  - name: test
    num_bytes: 30347.515212981743
    num_examples: 100
  download_size: 415558
  dataset_size: 598453.0
- config_name: synthetic_queries
  features:
  - name: chunk_id
    dtype: string
  - name: query
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 10826392.648225958
    num_examples: 45595
  - name: test
    num_bytes: 363056.35177404294
    num_examples: 1529
  download_size: 6478733
  dataset_size: 11189449.0
configs:
- config_name: documents
  data_files:
  - split: train
    path: documents/train-*
  - split: test
    path: documents/test-*
- config_name: queries
  data_files:
  - split: train
    path: queries/train-*
  - split: test
    path: queries/test-*
- config_name: synthetic_queries
  data_files:
  - split: train
    path: synthetic_queries/train-*
  - split: test
    path: synthetic_queries/test-*
---
|
# ConTEB - MLDR (evaluation) |
|
|
|
This dataset is part of *ConTEB* (Context-aware Text Embedding Benchmark), designed to evaluate how well embedding models leverage context. It stems from the widely used [MLDR](https://huggingface.co/datasets/Shitao/MLDR) dataset.
|
|
|
## Dataset Summary |
|
|
|
MLDR consists of long documents associated with existing sets of question-answer pairs. To build the corpus, we start from the pre-existing document collection, extract the text, and chunk it (using [LangChain](https://github.com/langchain-ai/langchain)'s `RecursiveCharacterTextSplitter` with a threshold of 1,000 characters). Since chunking is done a posteriori, without considering the questions, chunks are not always self-contained, and drawing on document-wide context can help build meaningful representations. We use GPT-4o to annotate which chunk within the gold document best contains the information needed to answer the query.
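
As an illustration, a minimal sketch of this a-posteriori chunking step might look as follows. The splitter settings beyond the 1,000-character threshold, and the `chunk_document` helper itself, are assumptions for illustration, not the exact pipeline:

```python
# Illustrative sketch of the a-posteriori chunking step (assumed settings).
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)

def chunk_document(doc_id: str, text: str) -> list[dict]:
    """Split one document and assign IDs of the form `doc-id_chunk-id`."""
    return [
        {"chunk_id": f"{doc_id}_{i}", "chunk": chunk}
        for i, chunk in enumerate(splitter.split_text(text))
    ]
```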
|
|
|
This dataset provides a focused benchmark for contextualized embeddings. It includes the original documents, the chunks derived from them, and the associated queries.
|
|
|
|
|
* **Number of Documents:** 100 |
|
* **Number of Chunks:** 1536 |
|
* **Number of Queries:** 100 |
|
* **Average Number of Tokens per Chunk:** 164.2 |
|
|
|
## Dataset Structure (Hugging Face Datasets) |
|
The dataset is organized into the following configurations, each with its own columns:
|
|
|
* **`documents`**: Contains chunk information:
  * `"chunk_id"`: The ID of the chunk, of the form `doc-id_chunk-id`, where `doc-id` is the ID of the original document and `chunk-id` is the position of the chunk within that document.
  * `"chunk"`: The text of the chunk.
* **`queries`**: Contains query information:
  * `"original_query"`: The query as originally formulated in the source dataset.
  * `"query"`: The text of the query.
  * `"answer"`: The answer relevant to the query, from the original dataset.
  * `"chunk_id"`: The ID of the gold chunk the query relates to, in the same `doc-id_chunk-id` format as above.
* **`synthetic_queries`**: Contains synthetically generated queries, with the same `"query"`, `"answer"`, and `"chunk_id"` columns as `queries`.
|
|
|
## Usage |
|
|
|
Use the `test` split for evaluation. |
|
We will upload a Quickstart evaluation snippet soon. |
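
In the meantime, a minimal loading sketch with 🤗 Datasets could look like this. The repository ID below is a placeholder to replace with this dataset's actual Hub path:

```python
from datasets import load_dataset

# Placeholder repo ID; substitute this dataset's actual Hub path.
REPO_ID = "<org>/<conteb-mldr>"

documents = load_dataset(REPO_ID, "documents", split="test")
queries = load_dataset(REPO_ID, "queries", split="test")

# Each query references its gold chunk via `chunk_id`.
print(queries[0]["query"], "->", queries[0]["chunk_id"])
```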
|
|
|
## Citation |
|
|
|
We will add the corresponding citation soon. |
|
|
|
## Acknowledgments |
|
|
|
This work is partially supported by [ILLUIN Technology](https://www.illuin.tech/), and by a grant from ANRT France. |
|
|
|
## Copyright |
|
|
|
All rights are reserved to the original authors of the documents. |
|
|
|
|