Scores of the answers are based on the annotators' decisions:
- 2 : paragraph is partially relevant
- 0 : paragraph is not relevant

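To see how these scores appear in the raw data, you can load the relevance judgments directly with the Hugging Face `datasets` library. This is a minimal sketch assuming the usual BEIR-style layout; the `qrels` configuration name and the `score` column name are assumptions, not something this card guarantees, so adjust them to the actual files in the repository:

```python
from collections import Counter

from datasets import load_dataset

# Assumed BEIR-style qrels: one row per judged (query, paragraph) pair,
# with a "score" column holding the annotator decision (2 or 0).
qrels = load_dataset("TUKE-KEMT/retrieval-skquad", "qrels", split="test")

# Distribution of annotator scores over all judged pairs
print(Counter(qrels["score"]))
```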
### Evaluation of an embedding model

To evaluate an embedding model with this dataset, you can use Hugging Face `datasets` together with the BEIR toolkit.

Example of evaluating a model:
```python
from beir import LoggingHandler
from beir.retrieval import models
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES
from huggingface_hub import snapshot_download
import logging

#### Just some code to print debug information to stdout
logging.basicConfig(format='%(asctime)s - %(message)s',
                    datefmt='%Y-%m-%d %H:%M:%S',
                    level=logging.INFO,
                    handlers=[LoggingHandler()])

#### Download the dataset and select the model to evaluate
data_path = snapshot_download(repo_id="TUKE-KEMT/retrieval-skquad", repo_type="dataset")
model_path = "TUKE-DeutscheTelekom/slovakbert-skquad-mnlr"

model = DRES(models.SentenceBERT(model_path), batch_size=16)

corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")

#### Retrieve with the SBERT model using dot-product similarity
retriever = EvaluateRetrieval(model, score_function="dot")  # or "cos_sim" for cosine similarity
results = retriever.retrieve(corpus, queries)

#### Evaluate your model with NDCG@k, MAP@k, Recall@k and Precision@k where k = [1, 3, 5, 10, 100, 1000]
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
```
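The four values returned by `evaluate` are plain dictionaries keyed by metric name (keys such as `NDCG@10`, `Recall@100`, and `P@10` in current BEIR releases), so individual scores can be printed directly:

```python
# Pick a few headline metrics out of the returned dictionaries
print(f"NDCG@10 = {ndcg['NDCG@10']:.4f}")
print(f"Recall@100 = {recall['Recall@100']:.4f}")
print(f"P@10 = {precision['P@10']:.4f}")
```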
### Database Content