---
license: apache-2.0
task_categories:
- question-answering
- multiple-choice
language:
- en
size_categories:
- n<1K
configs:
- config_name: benchmark
  data_files:
  - split: test
    path: dataset.json
paperswithcode_id: mapeval-api
tags:
- geospatial
---

# MapEval-API

[MapEval](https://arxiv.org/abs/2501.00316)-API is part of MapEval, a map-based benchmark for evaluating geo-spatial reasoning in foundation models. The dataset was created using [MapQaTor](https://arxiv.org/abs/2412.21015).

## Usage

```python
from datasets import load_dataset

# Load dataset
ds = load_dataset("MapEval/MapEval-API", name="benchmark")

# Build a multiple-choice prompt for each item
for item in ds["test"]:
    # Start with a clear task description
    prompt = (
        "You are a highly intelligent assistant. "
        "Answer the multiple-choice question by selecting the correct option.\n\n"
        "Question:\n" + item["question"] + "\n\n"
        "Options:\n"
    )

    # Enumerate the options
    for i, option in enumerate(item["options"], start=1):
        prompt += f"{i}. {option}\n"

    # Ask the model to answer with the option number
    prompt += "\nSelect the best option by choosing its number."

    # Use the prompt as needed
    print(prompt)  # Replace with your processing logic
```
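
To score a model on the benchmark, you can compare its chosen option number with the gold label. The following is a minimal sketch, not the official evaluation script: the gold-label field name (`answer`, assumed to be a 1-based option index) and the `my_model_predict` helper are assumptions you should adapt to your own setup; inspect `ds["test"].features` to confirm how the correct option is actually stored.

```python
from datasets import load_dataset

ds = load_dataset("MapEval/MapEval-API", name="benchmark")

def my_model_predict(prompt: str) -> int:
    """Placeholder: run your model on the prompt and parse its reply into an option number."""
    return 1  # replace with real inference and answer parsing

correct = 0
for item in ds["test"]:
    # Rebuild the prompt exactly as in the usage example above
    prompt = (
        "You are a highly intelligent assistant. "
        "Answer the multiple-choice question by selecting the correct option.\n\n"
        "Question:\n" + item["question"] + "\n\nOptions:\n"
    )
    for i, option in enumerate(item["options"], start=1):
        prompt += f"{i}. {option}\n"
    prompt += "\nSelect the best option by choosing its number."

    prediction = my_model_predict(prompt)
    # NOTE: the field name "answer" and its format (1-based index) are assumptions;
    # adjust the comparison to match the dataset's actual schema.
    correct += int(prediction == item["answer"])

print(f"Accuracy: {correct / len(ds['test']):.2%}")
```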

## Leaderboard
| Model               | Overall | Place Info | Nearby | Routing | Trip   | Unanswerable |
|---------------------|:---------:|:------------:|:--------:|:---------:|:--------:|:--------------:|
| Claude-3.5-Sonnet   | **64.00**   | **68.75**     | **55.42**  | **65.15**   | **71.64**  | 55.00        |
| GPT-4-Turbo         | 53.67   | 62.50      | 50.60  | 60.61   | 50.75  | 25.00        |
| GPT-4o              | 48.67   | 59.38      | 40.96  | 50.00   | 56.72  | 15.00        |
| Gemini-1.5-Pro      | 43.33   | 65.63      | 30.12  | 40.91   | 34.33  | **65.00**        |
| Gemini-1.5-Flash    | 41.67   | 51.56      | 38.55  | 46.97   | 34.33  | 30.00        |
| GPT-3.5-Turbo       | 27.33   | 39.06      | 22.89  | 33.33   | 19.40  | 15.00        |
| GPT-4o-mini         | 23.00   | 28.13      | 14.46  | 13.64   | 43.28  | 5.00         |
| Llama-3.2-90B       | 39.67   | 54.69      | 37.35  | 39.39   | 35.82  | 15.00        |
| Llama-3.1-70B       | 37.67   | 53.13      | 32.53  | 42.42   | 31.34  | 15.00        |
| Mixtral-8x7B        | 27.67   | 32.81      | 18.07  | 27.27   | 38.81  | 15.00        |
| Gemma-2.0-9B        | 27.00   | 35.94      | 14.46  | 28.79   | 26.87  | 45.00        |

#### Comparison between ReAct and Chameleon with GPT-3.5-Turbo
| Model                      | Overall | Place Info | Nearby | Routing | Trip   | Unanswerable |
|----------------------------|:-------:|:----------:|:------:|:-------:|:------:|:------------:|
| ReAct      | 27.33   | 39.06      | 22.89  | 33.33   | 19.40  | 15.00        |
| Chameleon  | 49.33   | 54.69      | 54.21  | 51.51   | 43.28  | 25.00        |

## Citation

If you use this dataset, please cite the original paper:

```
@article{dihan2024mapeval,
  title={MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation Models},
  author={Dihan, Mahir Labib and Hassan, Md Tanvir and Parvez, Md Tanvir and Hasan, Md Hasebul and Alam, Md Almash and Cheema, Muhammad Aamir and Ali, Mohammed Eunus and Parvez, Md Rizwan},
  journal={arXiv preprint arXiv:2501.00316},
  year={2024}
}
```