---
license: odc-by
language:
- en
tags:
- math
- education
---

# Dataset Card for MathFish Tasks

<!-- Provide a quick summary of the dataset. -->

This dataset is a derivative of [MathFish](https://huggingface.co/datasets/allenai/mathfish), in which dev set examples are inserted into prompts to assess models' abilities to verify and tag standards in math problems.

See [MathFish](https://huggingface.co/datasets/allenai/mathfish) for more details on sources, creation, and uses of this data.

This data can be used in conjunction with the model API wrapper included in this [GitHub repository](https://github.com/allenai/mathfish/tree/main).

## Dataset Details

### Dataset Description

- **Curated by:** Lucy Li, Tal August, Rose E Wang, Luca Soldaini, Courtney Allison, Kyle Lo
- **Funded by:** The Gates Foundation
- **Language(s) (NLP):** English
- **License:** ODC-By 1.0

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Files are named in the following manner: 

```
data_{task format}-{mathfish data split}_{other parameters}_{prompt number}_{table format}.jsonl
```
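
For example, a minimal Python sketch (not part of the official tooling) for parsing these filenames into their components; the example filename and the `fewshot` parameter value are hypothetical, used only for illustration:

```python
import re

# Regex mirroring the filename pattern above; group names are taken
# directly from the placeholders in the template.
FILENAME_RE = re.compile(
    r"data_(?P<task_format>[^-]+)-(?P<mathfish_split>[^_]+)"
    r"_(?P<other_parameters>.+)_(?P<prompt_number>\d+)_(?P<table_format>[^.]+)\.jsonl"
)

# Hypothetical filename, for illustration only.
m = FILENAME_RE.fullmatch("data_tagging-dev_fewshot_1_markdown.jsonl")
if m:
    print(m.groupdict())
    # {'task_format': 'tagging', 'mathfish_split': 'dev',
    #  'other_parameters': 'fewshot', 'prompt_number': '1',
    #  'table_format': 'markdown'}
```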

Each line in a tagging file is formatted as follows: 

```
{
  "id": "",  # unique instance ID
  "dataset": "",  # some grouping of instances within a given task format
  "messages": [
    {
      "role": "user",
      "prompt_template": "",
      "options": [
        # a list of tagging options
      ],
      "problem_activity": "",
    },
    {
      "role": "assistant",
      "response_template": "{option}",
      "response_format": "",  # e.g. json or comma-separated list
      "correct_option_index": [
        # integer indices that correspond to "options" above
      ]
    }
  ]
}
```

Each instance may also include keys indicating few-shot exemplars. 
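
Given the schema above, a minimal Python sketch for reading a tagging file and mapping each instance's correct answer indices back to their option strings could look like the following (the path is a placeholder; substitute any tagging file from this repository):

```python
import json

# Placeholder path; substitute an actual tagging file from this repository.
path = "data_tagging-dev_fewshot_1_markdown.jsonl"

with open(path) as f:
    for line in f:
        instance = json.loads(line)
        user, assistant = instance["messages"]
        # Map correct option indices back to their option strings.
        correct = [user["options"][i]
                   for i in assistant["correct_option_index"]]
        print(instance["id"], correct)
```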

Note that files labeled with `entailment` are inputs for the task we call "verification" in our paper. Verification files follow a format similar to the tagging format above, except that the `options` key is replaced by a `standards_description` key containing a natural language description of a math standard, and the assistant's dictionary includes a yes/no entry for whether the given problem `aligns` with the described standard. 
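
Based on that description, a verification line is expected to look roughly like the sketch below; fields other than `standards_description` and `aligns` are assumed to mirror the tagging format, and the exact layout of the assistant entry is an assumption:

```
{
  "id": "",  # unique instance ID
  "dataset": "",
  "messages": [
    {
      "role": "user",
      "prompt_template": "",
      "standards_description": "",  # natural language description of a math standard
      "problem_activity": "",
    },
    {
      "role": "assistant",
      "aligns": ""  # "yes" or "no": whether the problem aligns with the described standard
    }
  ]
}
```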

## Dataset Creation

The prompts in this repository were selected by testing 15 candidate prompts from [this file](https://github.com/allenai/mathfish/blob/main/mathfish/datasets/prompts.json) across three models: Llama 2 70B, Mixtral 8x7B, and GPT-4-turbo. This repository includes each model's top three performing prompts on the tagging and verification tasks, to facilitate reproducibility of the findings in our paper (see the Citation section below).

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

```
@misc{lucy2024evaluatinglanguagemodelmath,
      title={Evaluating Language Model Math Reasoning via Grounding in Educational Curricula}, 
      author={Li Lucy and Tal August and Rose E. Wang and Luca Soldaini and Courtney Allison and Kyle Lo},
      year={2024},
      eprint={2408.04226},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2408.04226}, 
}
```

## Dataset Card Contact

[email protected]