Datasets: MERA-evaluation

Commit 92bf35b • Parent(s): e186921

Update README.md

README.md (CHANGED)
## **BPS**

### Task Description

The balanced sequence is an algorithmic task from [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/cs_algorithms/valid_parentheses). The primary purpose of this task is to measure language models' ability to learn CS algorithmic concepts like stacks, recursion, or dynamic programming.

An input string is valid if:

1. Open brackets must be closed by the same type of brackets.
2. Open brackets must be closed in the correct order.
3. Every close bracket has a corresponding open bracket of the same type.
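
These rules reduce to a single left-to-right pass with a stack. A minimal sketch of such a checker in Python (illustrative only, not the benchmark's reference code; the space-separated input format follows the dataset example below):

```python
# Illustrative stack-based validity check; not the benchmark's reference
# code. Input format (space-separated brackets) follows the dataset example.
def is_balanced(sequence: str) -> str:
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for token in sequence.split():
        if token in "([{":
            stack.append(token)           # remember the open bracket
        elif not stack or stack.pop() != pairs[token]:
            return "0"                    # wrong type, or nothing left to close
    return "1" if not stack else "0"      # leftovers mean unclosed brackets

print(is_balanced("} } ) [ } ] ) { [ { { ] ( ( ] ) ( ) [ {"))  # -> "0"
```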

**Warning:** This is a diagnostic dataset with an open test and is not used for general model evaluation on the benchmark.

**Keywords:** algorithms, numerical response, context length, parentheses, binary answer

**Authors:** Harsh Mehta, Behnam Neyshabur

#### Motivation

Algorithms are a way to extrapolate examples and are some of the most concise descriptions of a pattern. In that sense, the ability of language models to learn them is a prominent measure of intelligence.

### Dataset Description

#### Data Fields

- `instruction` is a string containing instructions for the task and information about the requirements for the model output format;
- `inputs` is an example of the parentheses sequence;
- `outputs` is a string containing the correct answer: “1” if the parentheses sequence is valid, “0” otherwise;
- `meta` is a dictionary containing meta information:
    - `id` is an integer indicating the index of the example.

#### Data Instances

Below is an example from the dataset:

```json
{
    "instruction": "Проверьте, сбалансирована ли входная последовательность скобок.\n\"{inputs}\"\nВыведите 1, если да и 0 в противном случае.",
    "inputs": "} } ) [ } ] ) { [ { { ] ( ( ] ) ( ) [ {",
    "outputs": "0",
    "meta": {
        "id": 242
    }
}
```

#### Data Splits

The train set consists of `250` examples, and the test set includes `1000` examples.

#### Prompts

10 prompts of varying difficulty were created for this task. Example:

```json
"Проверьте входную последовательность скобок: \"{inputs}\" на сбалансированность. В случае положительного ответа выведите 1, иначе 0."
```
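
The `{inputs}` placeholder suggests ordinary Python string formatting; below is a minimal sketch under that assumption (illustrative, not the benchmark's own harness code):

```python
# Assumes prompts are filled with str.format-style substitution,
# as the {inputs} placeholder suggests; illustrative only.
template = ("Проверьте входную последовательность скобок: \"{inputs}\" на "
            "сбалансированность. В случае положительного ответа выведите 1, иначе 0.")
sample = {"inputs": "} } ) [ } ] ) { [ { { ] ( ( ] ) ( ) [ {", "outputs": "0"}
print(template.format(inputs=sample["inputs"]))
```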

#### Dataset Creation

Parentheses sequences of lengths 2, 4, 8, 12, and 20 were generated with the following length distribution: `{20: 0.336, 12: 0.26, 8: 0.24, 4: 0.14, 2: 0.024}` for the train set and `{20: 0.301, 12: 0.279, 8: 0.273, 4: 0.121, 2: 0.026}` for the test set.
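
For illustration, sequence lengths matching the documented train distribution can be sampled as below. This is a hypothetical regeneration sketch: the original generation script and seed are not given here, and the real data presumably also balances valid and invalid sequences, which this sketch does not do.

```python
import random

# Hypothetical sketch; the original generation script and seed are not
# given in this document, and label balancing is omitted.
rng = random.Random(0)  # seed chosen arbitrarily for the illustration
length_dist = {20: 0.336, 12: 0.26, 8: 0.24, 4: 0.14, 2: 0.024}  # train set

def random_sequence() -> str:
    length = rng.choices(list(length_dist), weights=list(length_dist.values()))[0]
    return " ".join(rng.choice("()[]{}") for _ in range(length))

print(random_sequence())  # a space-separated sequence of 2-20 brackets
```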

### Evaluation

#### Metrics

The task is evaluated using Accuracy.

#### Human Benchmark

The human benchmark is measured on a subset of size 100 (sampled with the same original distribution). The accuracy for this task is `1.0`.

## **CheGeKa**

### Task Description

CheGeKa is a Jeopardy!-like Russian QA dataset collected from the official Russian quiz database ChGK and belongs to the open-domain question-answering group of tasks. The dataset was created based on the [corresponding dataset](https://tape-benchmark.com/datasets.html#chegeka) from the TAPE benchmark.

**Keywords:** Reasoning, World Knowledge, Logic, Question-Answering, Open-Domain QA

**Authors:** Ekaterina Taktasheva, Tatiana Shavrina, Alena Fenogenova, Denis Shevelev, Nadezhda Katricheva, Maria Tikhonova, Albina Akhmetgareeva, Oleg Zinkevich, Anastasiia Bashmakova, Svetlana Iordanskaia, Alena Spiridonova, Valentina Kurenshchikova, Ekaterina Artemova, Vladislav Mikhailov

#### Motivation

The task can be considered the most challenging in terms of reasoning, knowledge, and logic, as it involves QA pairs with a free-form response (no answer choices); the correct answer, however, is formed by a long chain of causal relationships between facts and associations.

### Dataset Description

#### Data Fields

- `meta` is a dictionary containing meta-information about the example:
    - `id` is the task ID;
    - `author` is the author of the question;
    - `tour_name` is the name of the game in which the question was used;
    - `tour_link` is a link to the game in which the question was used (None for the test set);
- `instruction` is an instructional prompt specified for the current task;
- `inputs` is a dictionary containing the following input information:
    - `text` is a text fragment with a question from the game “What? Where? When?”;
    - `topic` is a string containing the category of the question;
- `outputs` is a string containing the correct answer to the question.

#### Data Instances

Each instance in the dataset contains an instruction, a question, the topic of the question, the correct answer, and all the meta-information. Below is an example from the dataset:

```json
{
    "instruction": "Вы участвуете в викторине “Что? Где? Когда?”. Категория вопроса: {topic}\nВнимательно прочитайте и ответьте на него только словом или фразой. Вопрос: {text}\nОтвет:",
    "inputs": {
        "text": "Веку ожерелий (вулкан).",
        "topic": "ГЕОГРАФИЧЕСКИЕ КУБРАЕЧКИ"
    },
    "outputs": "Эре|бус",
    "meta": {
        "id": 2,
        "author": "Борис Шойхет",
        "tour_name": "Карусель. Командное Jeopardy. Кишинёв - 1996.",
        "tour_link": "https://db.chgk.info/tour/karus96"
    }
}
```

#### Data Splits

The dataset consists of 29376 training examples (train set) and 416 test examples (test set).

#### Prompts

We use 10 different prompts written in natural language for this task. An example of the prompt is given below:

```json
"Прочитайте вопрос из викторины \"Что? Где? Когда?\" категории \"{topic}\" и ответьте на него. Вопрос: {text}\nОтвет:"
```

#### Dataset Creation

The dataset was created using the corresponding dataset from the TAPE benchmark, which is, in turn, based on the original corpus of the CheGeKa game.

### Evaluation

#### Metrics

The dataset is evaluated via two metrics: F1-score and Exact Match (EM).
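
For reference, a common token-level (SQuAD-style) formulation of the two metrics is sketched below; this illustrates the general definitions and is not necessarily the exact scoring code used by the benchmark:

```python
# SQuAD-style EM and token-level F1; illustrative, not necessarily the
# benchmark's exact scoring code (e.g., answer normalization may differ).
def exact_match(prediction: str, gold: str) -> float:
    return float(prediction.strip().lower() == gold.strip().lower())

def token_f1(prediction: str, gold: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # tokens shared between prediction and gold, counted with multiplicity
    common = sum(min(pred_tokens.count(t), gold_tokens.count(t)) for t in set(pred_tokens))
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Эребус", "Эребус"))      # -> 1.0
print(token_f1("вулкан Эребус", "Эребус"))  # -> 0.666...
```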

#### Human Benchmark

The human benchmark was measured on the test set via a Yandex.Toloka project with an overlap of 3 reviewers per task.

The F1-score / Exact Match results are `0.719` / `0.645`, respectively.

## **LCS**

### Task Description

The longest common subsequence (LCS) is an algorithmic task from [BIG-Bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/cs_algorithms/lcs). This problem consists of pairs of strings as input, and language models are expected to correctly predict the length of the longest common subsequence between them.

LCS is a prototypical dynamic programming problem, and this task measures the model's ability to capture that approach.
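
For reference, the textbook quadratic dynamic-programming solution is sketched below (an illustration of the algorithm itself, not code from the benchmark):

```python
# Textbook O(len(a) * len(b)) DP for LCS length; illustrative only.
def lcs_length(a: str, b: str) -> int:
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            # extend a match, or carry over the best prefix result
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

print(lcs_length("RSEZREEVCIVIVPHVLSH", "VDNCOFYJVZNQV"))  # -> 4, the `outputs` value in the example below
```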

**Keywords:** algorithms, numerical response, context length

**Authors:** Harsh Mehta, Behnam Neyshabur

#### Motivation

Recently, large language models have started to do well on simple algorithmic tasks like few-shot arithmetic, so we want to extend this evaluation to more complicated algorithms.

### Dataset Description

#### Data Fields

- `instruction` is a string containing instructions for the task and information about the requirements for the model output format;
- `inputs` is an example of two sequences to be compared;
- `outputs` is a string containing the correct answer, the length of the longest common subsequence;
- `meta` is a dictionary containing meta information:
    - `id` is an integer indicating the index of the example.

#### Data Instances

Below is an example from the dataset:

```json
{
    "instruction": "Запишите в виде одного числа длину самой длинной общей подпоследовательности для следующих строк: \"{inputs}\".\nОтвет:",
    "inputs": "RSEZREEVCIVIVPHVLSH VDNCOFYJVZNQV",
    "outputs": "4",
    "meta": {
        "id": 138
    }
}
```

#### Data Splits

The public test set includes `320` examples, and the closed test set includes `500` examples.

#### Prompts

10 prompts of varying difficulty were created for this task. Example:

```json
"Решите задачу нахождения длины наибольшей общей подпоследовательности для следующих строк:\n\"{inputs}\"\nОтвет (в виде одного числа):"
```

#### Dataset Creation

Sequences with lengths in the range [4, 32) were generated with a Python script for the open public test and closed test sets.

For the open public test set, we use the same generation seed as in BIG-Bench.
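
A seed-pinned generator of roughly this shape makes such a set reproducible. The sketch below is hypothetical: the actual seed, alphabet, and script used by BIG-Bench are not given in this document.

```python
import random
import string

# Hypothetical sketch of seed-pinned generation; the actual BIG-Bench
# seed, alphabet, and script are not given in this document.
rng = random.Random(42)  # fixing the seed makes every run identical

def random_pair() -> str:
    # string lengths are drawn from [4, 32), as described above
    a = "".join(rng.choices(string.ascii_uppercase, k=rng.randrange(4, 32)))
    b = "".join(rng.choices(string.ascii_uppercase, k=rng.randrange(4, 32)))
    return f"{a} {b}"

print(random_pair())  # rerunning reproduces exactly the same pair
```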

### Evaluation

#### Metrics

The task is evaluated using Accuracy.

#### Human Benchmark

The human benchmark is measured on a subset of size 100 (sampled with the same original distribution). The accuracy for this task is `0.56`.

## **MaMuRAMu**