---
license: apache-2.0
---

# gte-multilingual-reranker-base

The gte-multilingual-reranker-base model is the first reranker model in the GTE family of models, featuring several key attributes:

- **High Performance**: Achieves state-of-the-art (SOTA) results in multilingual retrieval tasks and multi-task representation model evaluations when compared to reranker models of similar size.
- **Training Architecture**: Trained with an encoder-only transformer architecture, resulting in a smaller model size. Unlike previous models based on decoder-only LLM architectures (e.g., gte-qwen2-1.5b-instruct), this model has lower hardware requirements for inference and offers a 10x increase in inference speed.
- **Long Context**: Supports text lengths of up to 8192 tokens.
- **Multilingual Capability**: Supports over 70 languages.

## Model Information

- Model Size: 304M
- Max Input Tokens: 8192

## Requirements

```
transformers>=4.39.2
flash_attn>=2.5.6
```

## Usage

### Using Huggingface transformers

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the tokenizer and the reranker; trust_remote_code is required for the custom model code.
tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-multilingual-reranker-base')
model = AutoModelForSequenceClassification.from_pretrained('Alibaba-NLP/gte-multilingual-reranker-base', trust_remote_code=True)
model.eval()

# Each pair is [query, passage]; the model outputs one relevance score per pair.
pairs = [
    ["中国的首都在哪儿", "北京"],
    ["what is the capital of China?", "北京"],
    ["how to implement quick sort in python?", "Introduction of quick sort"],
]
with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
    scores = model(**inputs, return_dict=True).logits.view(-1).float()
    print(scores)
```
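
To turn the raw pair scores into a ranked list, the scores can be sorted directly. The sketch below is illustrative, not part of the official API: it reuses the `tokenizer` and `model` loaded above, and the `query` and `documents` values are made-up examples.

```python
import torch

# Hypothetical query and candidate passages for illustration.
query = "what is the capital of China?"
documents = ["北京", "Paris is the capital of France.", "Introduction of quick sort"]

# Build [query, passage] pairs and score them in one batch.
pairs = [[query, doc] for doc in documents]
with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
    scores = model(**inputs, return_dict=True).logits.view(-1).float()

# Higher score means more relevant; sort candidates by descending score.
ranked = sorted(zip(documents, scores.tolist()), key=lambda x: x[1], reverse=True)
for doc, score in ranked:
    print(f"{score:.4f}\t{doc}")
```

The passage at the top of the sorted list is the reranker's best candidate for the query.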