# Dataset for HybRank

You can download the preprocessed data from the HuggingFace repo.

Note that `train_scores.hdf5` of the MS MARCO dataset is split into 3 GB shards via

```bash
split -d -b 3G train_scores.hdf5 train_scores.hdf5.
```

After all shards have been downloaded, run the following command to concatenate them:

```bash
cat train_scores.hdf5.* > train_scores.hdf5
```
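
After concatenation, you can sanity-check the reassembled file, for example with `h5py` (a minimal sketch; the dataset keys inside the file are whatever the preprocessed data contains and are not assumed here):

```python
# Sketch: verify the reassembled train_scores.hdf5 opens cleanly and list its contents.
# Assumes h5py is installed; key names are simply whatever the file actually contains.
import h5py

with h5py.File("train_scores.hdf5", "r") as f:
    for key in f.keys():
        shape = getattr(f[key], "shape", None)  # groups have no shape attribute
        print(key, shape)
```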

Alternatively, you can generate the data yourself via the following steps:

## Dependencies

- Java 11.0.16
- Maven 3.8.6
- Anserini 0.14.3
- faiss-cpu 1.7.2
- Pyserini 0.17.1
## Natural Questions

1. Download the raw data (refer to DPR for more details on the dataset):

```bash
python download_DPR_data.py --resource data.wikipedia_split.psgs_w100
python download_DPR_data.py --resource data.retriever.nq
python download_DPR_data.py --resource data.retriever.qas.nq
mkdir -p raw && mv downloads raw/NQ
```

2. Convert the collection to jsonl format for Pyserini:

```bash
python convert_NQ_collection_to_jsonl.py --collection-path raw/NQ/data/wikipedia_split/psgs_w100.tsv --output-folder pyserini/collections/NQ
```
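
Pyserini's `JsonCollection` expects one JSON object per line with `id` and `contents` fields. A minimal sketch of what each record looks like (the id and text below are purely illustrative):

```python
import json

# Sketch: the per-line record format Pyserini's JsonCollection ingests.
# The id/contents values are illustrative, not taken from the real corpus.
record = {"id": "doc0", "contents": "Example title\nExample passage text."}
print(json.dumps(record))  # one such line per passage in each .jsonl file
```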

3. Build the Lucene index via Pyserini:

```bash
python -m pyserini.index.lucene \
  --collection JsonCollection \
  --input pyserini/collections/NQ \
  --index pyserini/indexes/NQ \
  --generator DefaultLuceneDocumentGenerator \
  --threads 1 \
  --storePositions --storeDocvectors --storeRaw
```
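
Once the index is built, a quick BM25 sanity check (a sketch; the query string is arbitrary and the index path matches the command above):

```python
# Sketch: run an arbitrary query against the freshly built index to confirm it is searchable.
from pyserini.search.lucene import LuceneSearcher

searcher = LuceneSearcher("pyserini/indexes/NQ")
hits = searcher.search("who wrote the declaration of independence", k=5)
for hit in hits:
    print(hit.docid, round(hit.score, 3))
```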

4. Generate data:

```bash
RETRIEVERS=("DPR-Multi" "DPR-Single" "ANCE" "FiD-KD" "RocketQA-retriever" "RocketQAv2-retriever" "RocketQA-reranker" "RocketQAv2-reranker")

for RETRIEVER in "${RETRIEVERS[@]}"; do
  python generate_NQ_data.py --retriever "$RETRIEVER"
done
```

Note that before generating data for the RocketQA* retrievers, please first produce their retrieval results following the instructions in data/RocketQA_baselines/README.md. Data for the other retrievers can be generated directly.

## MS MARCO & TREC 2019/2020

1. Download the raw data (refer to MS MARCO for more details on the dataset).

2. Convert the collection to jsonl format for Pyserini:

```bash
python convert_MSMARCO_collection_to_jsonl.py --collection-path raw/MSMARCO/collection.tsv --output-folder pyserini/collections/MSMARCO
```

3. Build the Lucene index via Pyserini:

```bash
python -m pyserini.index.lucene \
  --collection JsonCollection \
  --input pyserini/collections/MSMARCO \
  --index pyserini/indexes/MSMARCO \
  --generator DefaultLuceneDocumentGenerator \
  --threads 1 \
  --storePositions --storeDocvectors --storeRaw
```
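
To check that the index covers the full MS MARCO collection, you can inspect its statistics (a sketch; the exact keys returned by `stats()` may vary across Pyserini versions):

```python
# Sketch: report basic statistics (document/term counts) of the built index.
from pyserini.index.lucene import IndexReader

reader = IndexReader("pyserini/indexes/MSMARCO")
for name, value in reader.stats().items():
    print(name, value)
```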

4. Generate data, first for MS MARCO:

```bash
RETRIEVERS=("ANCE" "DistilBERT-KD" "TAS-B" "TCT-ColBERT-v1" "TCT-ColBERT-v2" "RocketQA-retriever" "RocketQAv2-retriever" "RocketQA-reranker" "RocketQAv2-reranker")

for RETRIEVER in "${RETRIEVERS[@]}"; do
  python generate_MSMARCO_data.py --retriever "$RETRIEVER"
done
```

Then for TREC DL 2019/2020:

```bash
RETRIEVERS=("ANCE" "DistilBERT-KD" "TAS-B" "TCT-ColBERT-v1" "TCT-ColBERT-v2" "RocketQA-retriever" "RocketQAv2-retriever" "RocketQA-reranker" "RocketQAv2-reranker")

SPLITS=("2019" "2020")

for RETRIEVER in "${RETRIEVERS[@]}"; do
  for SPLIT in "${SPLITS[@]}"; do
    python generate_TRECDL_data.py --split "$SPLIT" --retriever "$RETRIEVER"
  done
done
```