Cross-Encoder for Semantic Textual Similarity

This model was trained using the SentenceTransformers CrossEncoder class.

Training Data

This model was trained on the STS benchmark dataset. Given a pair of sentences, the model predicts a score between 0 and 1 indicating their semantic similarity.

Usage and Performance

Pre-trained models can be used like this:

from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/stsb-TinyBERT-L4')
scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')])

The model will predict scores for the pairs ('Sentence 1', 'Sentence 2') and ('Sentence 3', 'Sentence 4').

You can also use this model without sentence_transformers, by loading it directly with the Transformers AutoModelForSequenceClassification class.
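A minimal sketch using plain Transformers. It assumes the checkpoint exposes a single-logit sequence-classification head, so a sigmoid is applied to map the raw logit into the 0-1 similarity range (this mirrors what CrossEncoder does internally at prediction time):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/stsb-TinyBERT-L4')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/stsb-TinyBERT-L4')

# Encode the two sentence pairs jointly, as the cross-encoder expects.
features = tokenizer(
    ['Sentence 1', 'Sentence 3'],
    ['Sentence 2', 'Sentence 4'],
    padding=True, truncation=True, return_tensors='pt',
)

model.eval()
with torch.no_grad():
    logits = model(**features).logits           # one raw logit per pair
    scores = torch.sigmoid(logits).squeeze(-1)  # map into [0, 1]
print(scores)
```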
