---
dataset_info:
- config_name: documents
  features:
  - name: chunk_id
    dtype: string
  - name: chunk
    dtype: string
  splits:
  - name: train
    num_bytes: 34576803.19660113
    num_examples: 49069
  - name: test
    num_bytes: 1082352.8033988737
    num_examples: 1536
  download_size: 20677449
  dataset_size: 35659156.0
- config_name: queries
  features:
  - name: original_query
    dtype: string
  - name: query
    dtype: string
  - name: chunk_id
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 568105.4847870183
    num_examples: 1872
  - name: test
    num_bytes: 30347.515212981743
    num_examples: 100
  download_size: 415558
  dataset_size: 598453.0
- config_name: synthetic_queries
  features:
  - name: chunk_id
    dtype: string
  - name: query
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 10826392.648225958
    num_examples: 45595
  - name: test
    num_bytes: 363056.35177404294
    num_examples: 1529
  download_size: 6478733
  dataset_size: 11189449.0
configs:
- config_name: documents
  data_files:
  - split: train
    path: documents/train-*
  - split: test
    path: documents/test-*
- config_name: queries
  data_files:
  - split: train
    path: queries/train-*
  - split: test
    path: queries/test-*
- config_name: synthetic_queries
  data_files:
  - split: train
    path: synthetic_queries/train-*
  - split: test
    path: synthetic_queries/test-*
---
# ConTEB - MLDR (evaluation)

This dataset is part of *ConTEB* (Context-aware Text Embedding Benchmark), designed to evaluate the capabilities of contextual embedding models. It is derived from the widely used [MLDR](https://huggingface.co/datasets/Shitao/MLDR) dataset.

## Dataset Summary

MLDR consists of long documents associated with existing sets of question-answer pairs. To build the corpus, we start from the pre-existing collection of documents, extract the text, and chunk it (using [LangChain](https://github.com/langchain-ai/langchain)'s `RecursiveCharacterTextSplitter` with a 1,000-character threshold). Since chunking is done a posteriori, without considering the questions, chunks are not always self-contained, and drawing on document-wide context can help build meaningful representations. We use GPT-4o to annotate which chunk within the gold document best contains the information needed to answer the query.
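For illustration, here is a minimal sketch of the chunking step described above. Everything beyond the 1,000-character size is an assumption: the card does not specify the overlap, and the document text and ID below are stand-ins.

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

doc_id = "doc-0"                                 # hypothetical document ID
document_text = "A long MLDR document. " * 500   # stand-in for a real document

# chunk_size matches the 1,000-character threshold mentioned above;
# chunk_overlap=0 is an assumption, as the card does not specify an overlap.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)

# Build records in the same shape as the `documents` subset below,
# with chunk IDs of the form doc-id_chunk-id.
chunk_records = [
    {"chunk_id": f"{doc_id}_{i}", "chunk": text}
    for i, text in enumerate(splitter.split_text(document_text))
]
```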

This dataset provides a focused benchmark for contextualized embeddings. It includes the original documents, the chunks derived from them, and the associated queries.


*   **Number of Documents:** 100 
*   **Number of Chunks:** 1536 
*   **Number of Queries:** 100
*   **Average Number of Tokens per Chunk:** 164.2

## Dataset Structure (Hugging Face Datasets)
The dataset is organized into the following subsets (configurations), each with its own columns:

*   **`documents`**: Contains chunk information:
    *   `"chunk_id"`: The ID of the chunk, of the form `doc-id_chunk-id`, where `doc-id` is the ID of the original document and `chunk-id` is the position of the chunk within that document (see the parsing sketch after this list).
    *   `"chunk"`: The text of the chunk.
*   **`queries`**: Contains query information:
    *   `"original_query"`: The query text as it appears in the source dataset.
    *   `"query"`: The text of the query.
    *   `"answer"`: The answer relevant to the query, from the original dataset.
    *   `"chunk_id"`: The ID of the gold chunk the query relates to, in the same `doc-id_chunk-id` format as above.
*   **`synthetic_queries`**: Contains synthetic queries, with the same `"chunk_id"`, `"query"`, and `"answer"` columns as above.

## Usage

Use the `test` split for evaluation.
We will upload a Quickstart evaluation snippet soon.
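In the meantime, the subsets can be loaded with Hugging Face Datasets and scored with any embedding model. Below is a minimal retrieval-evaluation sketch; the repository ID and the embedding model are placeholders, not the official ConTEB setup:

```python
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

REPO_ID = "illuin-conteb/mldr"  # placeholder: substitute this dataset's actual hub path

documents = load_dataset(REPO_ID, "documents", split="test")
queries = load_dataset(REPO_ID, "queries", split="test")

# Arbitrary off-the-shelf embedding model, chosen only for illustration.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
chunk_embs = model.encode(documents["chunk"], normalize_embeddings=True)
query_embs = model.encode(queries["query"], normalize_embeddings=True)

# Rank every chunk for each query by cosine similarity and check whether
# the top-ranked chunk matches the annotated gold chunk_id.
scores = query_embs @ chunk_embs.T
top1 = np.asarray(documents["chunk_id"])[scores.argmax(axis=1)]
accuracy = float(np.mean(top1 == np.asarray(queries["chunk_id"])))
print(f"Top-1 retrieval accuracy: {accuracy:.3f}")
```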

## Citation

We will add the corresponding citation soon.

## Acknowledgments

This work is partially supported by [ILLUIN Technology](https://www.illuin.tech/), and by a grant from ANRT France.

## Copyright

All rights are reserved to the original authors of the documents.