# ms-marco-MiniLM-L-6-v2
## Model description
This model is a fine-tuned version of ms-marco-MiniLM-L-6-v2 for evaluating the relevancy of responses in retrieval-augmented generation (RAG) scenarios.
## Training Data
The model was trained on a specialized dataset for evaluating RAG responses,
consisting of (context, response) pairs with relevancy labels.
Dataset size: 4,505 training examples and 5,006 validation examples.
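The dataset itself is not published with this card; the snippet below only sketches the assumed (context, response) pair format using sentence-transformers `InputExample` objects, with made-up texts and binary labels.
```python
from sentence_transformers import InputExample

# Hypothetical examples illustrating the assumed (context, response) pair format.
# Labels are binary: 1.0 = response is relevant to the context/query, 0.0 = not relevant.
train_examples = [
    InputExample(
        texts=[
            "Context: Paris is the capital of France.\nQuery: What is the capital of France?",
            "Response: The capital of France is Paris.",
        ],
        label=1.0,
    ),
    InputExample(
        texts=[
            "Context: Paris is the capital of France.\nQuery: What is the capital of France?",
            "Response: Bananas are rich in potassium.",
        ],
        label=0.0,
    ),
]
```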
## Performance Metrics
```
Validation Metrics:
- NDCG: 0.9996 ± 0.0001
- MAP: 0.9970 ± 0.0009
- Accuracy: 0.9766 ± 0.0033
```
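The evaluation script is not included here; as a rough illustration, metrics of this kind can be computed from the model's scores with scikit-learn (the validation pairs, labels, and 0.5 accuracy threshold below are assumptions, not the actual evaluation setup).
```python
import numpy as np
from sentence_transformers import CrossEncoder
from sklearn.metrics import accuracy_score, average_precision_score, ndcg_score

model = CrossEncoder('xtenzr/ms-marco-MiniLM-L-6-v2_finetuned_20241120_2220')

# Toy validation pairs and binary relevance labels (illustrative only).
val_pairs = [
    ["Context: Paris is the capital of France.\nQuery: Capital of France?",
     "Response: Paris is the capital of France."],
    ["Context: Paris is the capital of France.\nQuery: Capital of France?",
     "Response: Bananas are rich in potassium."],
]
val_labels = np.array([1, 0])

scores = model.predict(val_pairs)  # relevancy scores in [0, 1]

print("NDCG:", ndcg_score([val_labels], [scores]))
print("MAP:", average_precision_score(val_labels, scores))
print("Accuracy:", accuracy_score(val_labels, (scores >= 0.5).astype(int)))
```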
## Usage Example
```python
from sentence_transformers import CrossEncoder
# Load model
model = CrossEncoder('xtenzr/ms-marco-MiniLM-L-6-v2_finetuned_20241120_2220')
# Prepare inputs
texts = [
    ["Context: {...}\nQuery: {...}", "Response: {...}"],
]
# Get predictions
scores = model.predict(texts) # Returns relevancy scores [0-1]
```
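In a RAG pipeline the same scores can be used to rank several candidate responses against one context and query. A small follow-up sketch, reusing the `model` loaded above (the candidate texts are made up for illustration):
```python
query_context = (
    "Context: The warranty covers accidental damage for 12 months.\n"
    "Query: Does the warranty cover accidental damage?"
)
candidates = [
    "Response: Yes, accidental damage is covered for 12 months.",
    "Response: Our office is closed on public holidays.",
]

# Score every (context+query, candidate response) pair.
pairs = [[query_context, response] for response in candidates]
scores = model.predict(pairs)

# Pick the candidate with the highest relevancy score.
best_response, best_score = max(zip(candidates, scores), key=lambda item: item[1])
print(best_response, best_score)
```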
## Training procedure
- Fine-tuned using the sentence-transformers CrossEncoder API (see the sketch below)
- Trained on relevancy evaluation dataset
- Optimized for RAG response evaluation
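Exact hyperparameters are not documented in this card. A minimal sketch of the fine-tuning loop, assuming the legacy `CrossEncoder.fit` API of sentence-transformers, the `cross-encoder/ms-marco-MiniLM-L-6-v2` base checkpoint, and illustrative settings:
```python
from torch.utils.data import DataLoader
from sentence_transformers import CrossEncoder, InputExample

# Assumed base checkpoint; batch size, epochs, and warmup steps are illustrative values.
model = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2', num_labels=1)

# Training pairs in the format sketched under "Training Data".
train_examples = [
    InputExample(
        texts=["Context: {...}\nQuery: {...}", "Response: {...}"],
        label=1.0,
    ),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

model.fit(
    train_dataloader=train_dataloader,
    epochs=2,
    warmup_steps=100,
    output_path='ms-marco-MiniLM-L-6-v2_finetuned',
)
```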