---
dataset_info:
  features:
  - name: type_
    dtype: string
  - name: block
    struct:
    - name: html_tag
      dtype: string
    - name: id
      dtype: string
    - name: order
      dtype: int64
    - name: origin_type
      dtype: string
    - name: text
      struct:
      - name: embedding
        sequence: float64
      - name: text
        dtype: string
  splits:
  - name: train
    num_bytes: 2266682282
    num_examples: 260843
  download_size: 2272790159
  dataset_size: 2266682282
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset Card for "es_indexing_benchmark"

Here is some code showing how to pull this dataset and index it into Elasticsearch:

```python
import datasets
from tqdm import tqdm

from src.store.es.search import ESBaseClient
from src.store.es.model import ESNode

# Pull the dataset from the Hugging Face Hub and connect to Elasticsearch
ds = datasets.load_dataset('stellia/es_indexing_benchmark', split='train', ignore_verifications=True)
client = ESBaseClient()


index_name = "tmp_es_index"

# Build one ESNode per row, reusing the block id as the Elasticsearch document id
nodes = []
for row in tqdm(ds):
    esnode = ESNode(**row)
    esnode.meta.id = esnode.block.id
    nodes.append(esnode)


# Recreate the target index from scratch
client.delete_index(index_name)
client.init_index(index_name)

# Bulk-index in batches; the refresh is deferred for speed
batch_size = 5000
for i in tqdm(range(0, len(nodes), batch_size)):
    client.save(index_name, nodes[i:i+batch_size], refresh=False)
```
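
If you want to double-check that everything landed in the index, a minimal sketch using the official `elasticsearch` Python client works too (the URL below is a placeholder; point it at whatever cluster `ESBaseClient` is configured for):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder URL, adjust to your cluster

# Make the deferred writes searchable, then compare the count with the dataset size
es.indices.refresh(index="tmp_es_index")
print(es.count(index="tmp_es_index")["count"])  # should match the 260,843 examples
```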


Consider emptying `~/.cache/huggingface/datasets` (e.g. with `rm -rf ~/.cache/huggingface/datasets`) if you have problems loading the dataset.
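
Alternatively, `load_dataset` can be told to ignore the cache and fetch a fresh copy via its `download_mode` argument, for example:

```python
import datasets

# Force a fresh download instead of reusing a possibly corrupted cache
ds = datasets.load_dataset(
    'stellia/es_indexing_benchmark',
    split='train',
    download_mode='force_redownload',
)
```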

## Dataset for "Inference Benchmark"
The dataset can also be used to test our API endpoints for inference models (retrieval, reranker, and disambiguation model).

Note that the reranker and the disambiguation model are fine-tuned from the same base model, so the difference in their inference time is slight.
However, the input length distributions of the two models differ.
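
To get a quick look at that distribution, a small sketch (using character lengths as a rough proxy for model input length, and reading the nested `block.text.text` field from the schema above) could be:

```python
import statistics

import datasets

ds = datasets.load_dataset('stellia/es_indexing_benchmark', split='train')

# Character length of each block's raw text
lengths = [len(row['block']['text']['text']) for row in ds]

print("mean:", statistics.mean(lengths))
print("median:", statistics.median(lengths))
print("p95:", sorted(lengths)[int(0.95 * len(lengths))])
```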

Here is the code for the test (run with asyncio to be closer to the real use case):

```python
import asyncio

import datasets
from tqdm import tqdm

from src.utils.stage.stage_utils import AppConfig
from src.api_client.embed_ import async_encode
from src.api_client.disambig import async_disambig


ds = datasets.load_dataset('stellia/es_indexing_benchmark', split='train', ignore_verifications=True)

texts = []

for row in tqdm(ds):
    texts.append(row['block']['text']['text'])  # extract the pure text (nested under block.text in the schema above)

# Encoding with embed model:

config = AppConfig(stage='dev')
tasks = []

batch_size = 10
for i in range(0, len(texts), batch_size):
    tasks.append(asyncio.create_task(async_encode(texts[i:i + batch_size])))

res = await asyncio.gather(*tasks)  # top-level await: run in a notebook/IPython or inside an async function


# Test the reranker/disambiguation endpoint:
# each query text is paired with the next `num_rerank` texts as candidates
tasks = []
num_rerank = 10

for i in range(0, len(texts), num_rerank+1):
    if len(texts[i+1: i+1+num_rerank]) == 0:
        break
    tasks.append(
        asyncio.create_task(
            async_disambig(
                texts[i],
                texts[i+1: i+1+num_rerank]
            )
        )
    )

res = await asyncio.gather(*tasks)
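

# (sketch) Outside a notebook, top-level `await` is not available; a hypothetical
# standalone-script variant wraps the request fan-out in an async entry point,
# times the whole batch, and runs it with asyncio.run:
import time

async def run_encode_benchmark(texts, batch_size=10):
    coros = [async_encode(texts[i:i + batch_size]) for i in range(0, len(texts), batch_size)]
    start = time.perf_counter()
    results = await asyncio.gather(*coros)
    print(f"{len(coros)} requests in {time.perf_counter() - start:.2f}s")
    return results

# res = asyncio.run(run_encode_benchmark(texts))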