---
title: map
tags:
- evaluate
- metric
description: "This is the mean average precision (map) metric for retrieval systems.
It is the average of the precision scores computer after each relevant document is got. You can refer to [here](https://amenra.github.io/ranx/metrics/#mean-average-precision)"
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
---
# Metric Card for map
## Metric Description
This is the mean average precision (map) metric for retrieval systems. It is the average of the precision scores computed after each relevant document is retrieved. See the [ranx documentation](https://amenra.github.io/ranx/metrics/#mean-average-precision) for details.
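To make the definition concrete, below is a minimal pure-Python sketch of mean average precision under binary relevance. This is an illustration only, not this module's implementation (which relies on ranx), and the helper names are illustrative:

```python
from typing import Dict, List

def average_precision(qrel: Dict[str, int], run: Dict[str, float]) -> float:
    """Average of the precision values taken at the rank of each relevant document."""
    ranking = sorted(run, key=run.get, reverse=True)  # rank documents by descending score
    hits, total = 0, 0.0
    for rank, doc_id in enumerate(ranking, start=1):
        if qrel.get(doc_id, 0) > 0:  # binary relevance: any positive judgment counts
            hits += 1
            total += hits / rank  # precision at this rank
    return total / len(qrel) if qrel else 0.0

def mean_average_precision(qrels: List[Dict[str, int]], runs: List[Dict[str, float]]) -> float:
    """Mean of the per-query average precision scores."""
    return sum(average_precision(q, r) for q, r in zip(qrels, runs)) / len(qrels)

# The first query from the example below: both retrieved documents are relevant,
# so precision is 1.0 at every relevant rank and the average precision is 1.0.
print(mean_average_precision([{"d_1": 1, "d_2": 2}], [{"d_1": 0.8, "d_2": 0.9}]))  # 1.0
```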
## How to Use
```python
>>> import json
>>> import evaluate
>>> map_metric = evaluate.load("map")
>>> references = [json.dumps({"q_1": {"d_1": 1, "d_2": 2}}),
...               json.dumps({"q_2": {"d_2": 1, "d_3": 2, "d_5": 3}})]
>>> predictions = [json.dumps({"q_1": {"d_1": 0.8, "d_2": 0.9}}),
...                json.dumps({"q_2": {"d_2": 0.9, "d_1": 0.8, "d_5": 0.7, "d_3": 0.3}})]
>>> results = map_metric.compute(references=references, predictions=predictions)
>>> print(results)
{'map': 0.9027777777777778}
```
### Inputs
- **predictions:** a list of dictionaries, one per query, each mapping document IDs to the relevance scores produced by the model for that query. Each dictionary must be serialized to a JSON string (e.g. with `json.dumps`).
- **references:** a list of dictionaries, one per query, each mapping the relevant document IDs for that query to their relevance judgments, sorted by relevancy. Each dictionary must be serialized to a JSON string.
- **k:** an optional parameter (default `None`); when set, map@k is computed over only the top-k ranked documents per query, as shown in the sketch below.
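Continuing the example above, a sketch of map@k, assuming `k` is passed to `compute` as a plain keyword argument (the card describes the parameter but does not show the exact call):

```python
# Hypothetical map@2: only the two highest-scored documents per query are evaluated.
results_at_2 = map_metric.compute(references=references, predictions=predictions, k=2)
print(results_at_2)
```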
### Output Values
- **map (`float`):** mean average precision score. The minimum possible value is 0.0 and the maximum possible value is 1.0.
## Limitations and Bias
map treats relevance as binary: any document with a positive relevance judgment counts as relevant, so graded judgments in the references do not affect the score beyond relevant/not-relevant. For graded relevance, a metric such as ndcg is more appropriate.
## Citation
```bibtex
@inproceedings{ranx,
author = {Elias Bassani},
title = {ranx: {A} Blazing-Fast Python Library for Ranking Evaluation and Comparison},
booktitle = {{ECIR} {(2)}},
series = {Lecture Notes in Computer Science},
volume = {13186},
pages = {259--264},
publisher = {Springer},
year = {2022},
doi = {10.1007/978-3-030-99739-7\_30}
}
```