vector-dev-god committed
Commit c81f597 · verified · 1 Parent(s): 1a8ca8d

Update README.md


Updated the evaluation details on the model card

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -95,7 +95,7 @@ The model excels at understanding natural language queries like:
  ### Evaluation
  The model's evaluation metrics are available on the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard)
  - The model is currently by far the best embedding model under 1B parameters and is very easy to run locally on a small GPU due to its small memory size
- - The model also is No 1. by a far margin on the [SemRel24STS](https://huggingface.co/datasets/SemRel/SemRel2024) task with an accuracy of 81.12 beating Google Gemini embedding model (second place) 73.14. SemRel24STS evaluates the ability of systems to measure the semantic relatedness between two sentences over 14 different languages.
+ - The model is also No. 1 by a wide margin on the [SemRel24STS](https://huggingface.co/datasets/SemRel/SemRel2024) task with an accuracy of 81.12%, beating the Google Gemini embedding model in second place at 73.14% (as of 30 March 2025). SemRel24STS evaluates the ability of systems to measure the semantic relatedness between two sentences across 14 different languages.
  - We noticed the model does exceptionally well on the legal and news retrieval and similarity tasks from the MTEB leaderboard
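For a concrete sense of the sentence-level relatedness scoring that SemRel24STS measures, here is a minimal sketch using sentence-transformers. It is not part of the model card: the model id `your-org/your-embedding-model` is a placeholder rather than this repository's actual id, and the example assumes the model loads as a standard SentenceTransformer; the leaderboard numbers themselves come from running the full MTEB harness, not from one-off comparisons like this.

```python
# Minimal sketch (not from the model card): score the semantic relatedness of
# two sentences with cosine similarity, the kind of pairwise comparison that
# SemRel24STS evaluates across 14 languages.
# NOTE: "your-org/your-embedding-model" is a placeholder id, not this repository's name.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("your-org/your-embedding-model")  # placeholder model id

sentences = [
    "The central bank raised interest rates to curb inflation.",
    "Borrowing costs went up after the latest monetary policy decision.",
]

# Encode both sentences and compare the embeddings with cosine similarity.
embeddings = model.encode(sentences)
score = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"relatedness score: {score:.3f}")
```

Roughly speaking, the benchmark compares scores like this against human relatedness judgments for each language subset, which is what the SemRel24STS figures quoted above summarize.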