---
dataset_info:
  features:
    - name: type_
      dtype: string
    - name: block
      struct:
        - name: html_tag
          dtype: string
        - name: id
          dtype: string
        - name: order
          dtype: int64
        - name: origin_type
          dtype: string
        - name: text
          struct:
            - name: embedding
              sequence: float64
            - name: text
              dtype: string
  splits:
    - name: train
      num_bytes: 2266682282
      num_examples: 260843
  download_size: 2272790159
  dataset_size: 2266682282
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Dataset Card for "es_indexing_benchmark"
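Each record nests the raw text and its embedding under `block.text`, as described in the `dataset_info` metadata above. A minimal sketch for inspecting a single record without downloading the full dataset (`streaming=True` is a standard `datasets.load_dataset` argument; field names are taken from the metadata):

```python
import datasets

# Stream one example to check the nested structure
ds = datasets.load_dataset("stellia/es_indexing_benchmark", split="train", streaming=True)
row = next(iter(ds))

print(row["type_"])                            # record type
print(row["block"]["html_tag"])                # originating HTML tag
print(row["block"]["text"]["text"][:80])       # raw text
print(len(row["block"]["text"]["embedding"]))  # embedding dimensionality
```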

Here is example code for pulling this dataset and indexing it into Elasticsearch:

```python
import datasets
from tqdm import tqdm

from src.store.es.search import ESBaseClient
from src.store.es.model import ESNode

# Load the dataset from the Hub
ds = datasets.load_dataset('stellia/es_indexing_benchmark', split='train', ignore_verifications=True)
client = ESBaseClient()

index_name = "tmp_es_index"

# Build one ESNode per row, using the block id as the Elasticsearch document id
nodes = []
for row in tqdm(ds):
    esnode = ESNode(**row)
    esnode.meta.id = esnode.block.id
    nodes.append(esnode)

# Recreate the index from scratch
client.delete_index(index_name)
client.init_index(index_name)

# Bulk-save in batches; refresh is deferred to speed up indexing
batch_size = 5000
for i in tqdm(range(0, len(nodes), batch_size)):
    client.save(index_name, nodes[i:i + batch_size], refresh=False)
```
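Because the loop above saves with `refresh=False`, newly indexed documents only become searchable after Elasticsearch's periodic refresh (1 s by default). If you need them visible immediately, you can trigger a single refresh at the end. A sketch using the official `elasticsearch` Python client (the host below is an assumption; the internal `ESBaseClient` wrapper may expose its own way to do this):

```python
from elasticsearch import Elasticsearch

# Assumed connection details; adjust to your cluster
es = Elasticsearch("http://localhost:9200")
index_name = "tmp_es_index"  # same index as above

# Force a one-off refresh so the freshly indexed documents are searchable
es.indices.refresh(index=index_name)
print(es.count(index=index_name))  # should report ~260843 documents once indexing finished
```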

If you have problems loading the dataset, consider emptying the cache with `rm -rf ~/.cache/huggingface/datasets`.
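Alternatively, you can force a fresh download of just this dataset without wiping the whole cache (`download_mode` is a standard `datasets.load_dataset` argument):

```python
import datasets

# Re-download and re-prepare only this dataset, leaving the rest of the cache intact
ds = datasets.load_dataset(
    "stellia/es_indexing_benchmark",
    split="train",
    download_mode="force_redownload",
)
```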

## Dataset for "Inference Benchmark"

The dataset can also be used to test our API endpoints for inference models (retrieval, reranker, and disambiguation models).

Note that the reranker and the disambiguation model are fine-tuned from the same base model, so their inference times differ only slightly. However, the input length distributions differ between the two models.

Here is the code for the test (run in asyncio mode to be closer to the real-world case; the top-level `await` calls below assume an async context such as a Jupyter notebook):

```python
import asyncio

import datasets
from tqdm import tqdm

from src.utils.stage.stage_utils import AppConfig
from src.api_client.embed_ import async_encode
from src.api_client.disambig import async_disambig


ds = datasets.load_dataset('stellia/es_indexing_benchmark', split='train', ignore_verifications=True)

# Extract the raw text of each block
texts = []
for row in tqdm(ds):
    texts.append(row['block']['text']['text'])

# Encoding with the embedding model
config = AppConfig(stage='dev')  # point the API clients at the dev stage
tasks = []

batch_size = 10
for i in range(0, len(texts), batch_size):
    tasks.append(asyncio.create_task(async_encode(texts[i:i + batch_size])))

res = await asyncio.gather(*tasks)


# Test on the reranker / disambiguation model:
# each request uses one text as the query and the next `num_rerank` texts as candidates
tasks = []
num_rerank = 10

for i in range(0, len(texts), num_rerank + 1):
    if len(texts[i + 1:i + 1 + num_rerank]) == 0:
        break
    tasks.append(
        asyncio.create_task(
            async_disambig(
                texts[i],
                texts[i + 1:i + 1 + num_rerank],
            )
        )
    )

res = await asyncio.gather(*tasks)
```
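As a quick check on the input-length point above, you can look at the character-length distribution of the extracted block texts (a minimal sketch reusing the `texts` list built in the snippet above):

```python
import numpy as np

lengths = np.array([len(t) for t in texts])  # character length of each block text
print("mean length:", lengths.mean())
print("p50 / p90 / p99:", np.percentile(lengths, [50, 90, 99]))
```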