---
license: cc-by-nc-4.0
language:
  - en
base_model:
  - Qwen/Qwen3-4B
pipeline_tag: text-ranking
tags:
  - finance
  - legal
  - code
  - stem
  - medical
---

# zerank-1-small: Smaller, faster version of zerank-1

This model is the smaller version of zeroentropy/zerank-1. Although it is over 2x smaller, it maintains nearly the same level of performance, continuing to outperform other popular rerankers.

It is an open-weights reranker model meant to be integrated into RAG applications to rerank results from preliminary search methods such as embeddings, BM25, and hybrid search.

## How to Use

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("zeroentropy/zerank-1-small", trust_remote_code=True)

# Each item is a (query, document) pair to be scored.
query_documents = [
    ("What is 2+2?", "4"),
    ("What is 2+2?", "The answer is definitely 1 million"),
]

# Higher scores indicate documents more relevant to the query.
scores = model.predict(query_documents)
print(scores)
```
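In a RAG pipeline, these scores are typically used to reorder candidates from a first-stage retriever. A minimal sketch of that reordering step, using hypothetical hard-coded scores in place of a real `model.predict` call (the candidate texts and score values are illustrative only):

```python
# Hypothetical first-stage candidates (e.g. from BM25 or embedding search).
candidates = [
    "Our office is open Monday through Friday.",
    "To reset your password, click 'Forgot password' on the login page.",
    "Passwords must be at least 12 characters long.",
]

# Illustrative reranker scores; in practice these come from
# model.predict([(query, doc) for doc in candidates]).
scores = [0.02, 0.97, 0.31]

# Sort candidates by descending score so the most relevant document is first.
reranked = [doc for _, doc in sorted(zip(scores, candidates), reverse=True)]
print(reranked[0])
```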

## Evaluations

Comparing NDCG@10 starting from the top 100 documents by embedding (using text-embedding-3-small):

| Task | Embedding | cohere-rerank-v3.5 | Salesforce/Llama-rank-v1 | zerank-1-small | zerank-1 |
|---|---|---|---|---|---|
| Code | 0.678 | 0.724 | 0.694 | 0.730 | 0.754 |
| Conversational | 0.250 | 0.571 | 0.484 | 0.556 | 0.596 |
| Finance | 0.839 | 0.824 | 0.828 | 0.861 | 0.894 |
| Legal | 0.703 | 0.804 | 0.767 | 0.817 | 0.821 |
| Medical | 0.619 | 0.750 | 0.719 | 0.773 | 0.796 |
| STEM | 0.401 | 0.510 | 0.595 | 0.680 | 0.694 |
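For reference, NDCG@10 rewards rankings that place the most relevant documents in the top 10 positions. A minimal illustrative implementation of the metric (not the evaluation harness used for the numbers above):

```python
import math

def ndcg_at_10(relevances):
    """NDCG@10 for a ranked list of graded relevance scores.

    `relevances` lists the relevance of each document in ranked order;
    the result is DCG@10 normalized by the DCG of the ideal ordering.
    """
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:10]))

    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

A perfectly ordered ranking scores 1.0; pushing relevant documents lower in the list lowers the score.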

Comparing BM25 and Hybrid Search without and with zerank-1:

*(Comparison charts: NDCG@10 for BM25 and hybrid search, with and without reranking.)*

## Citation

**BibTeX:**

Coming soon!

**APA:**

Coming soon!