MERA-evaluation committed on
Commit 92bf35b
1 Parent(s): e186921

Update README.md

Files changed (1)
  1. README.md +111 -83
README.md CHANGED
@@ -859,7 +859,7 @@ The benchmark covers 21 evaluation tasks comprising knowledge about the world, l
859
 
860
  ## **BPS**
861
 
862
- ### *Task Description*
863
 
864
  The balanced sequence is an algorithmic task from [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/cs_algorithms/valid_parentheses). The primary purpose of this task is to measure language models' ability to learn CS algorithmic concepts like stacks, recursion, or dynamic programming.
865
 
@@ -871,187 +871,215 @@ An input string is valid if:
871
  2. Open brackets must be closed in the correct order.
872
  3. Every close bracket has a corresponding open bracket of the same type.
873
874
  Algorithms are a way to extrapolate examples and are some of the most concise descriptions of a pattern. In that sense, the ability of language models to learn them is a prominent measure of intelligence.
875
 
876
- ### *Dataset Description*
877
 
878
- #### *Data Fields*
879
 
880
- - `instruction` a string containing instructions for the task and information about the requirements for the model output format;
881
- - `inputs` an example of the parentheses sequence;
882
- - `outputs` a string containing the correct answer: “1” if the parentheses sequence is valid, “0” otherwise;
883
- - `meta` a dictionary containing meta information:
884
- - `id` an integer indicating the index of the example.
885
 
886
- #### *Data Instances*
887
 
888
  Below is an example from the dataset:
889
 
890
  ```json
891
  {
892
- "instruction": "На вход подается последовательность скобок: \"{inputs}\"\nНеобходимо ответить сбалансирована ли данная последовательность. Если последовательность сбалансирована - выведите 1, иначе 0",
893
- "inputs": "[ ] } { [ ] { ) [ } ) ) { ( ( ( ) ] } {",
894
  "outputs": "0",
895
  "meta": {
896
- "id": 40
897
  }
898
  }
899
  ```
900
 
901
- #### *Data Splits*
902
 
903
- The train consists of 250 examples, and the test set includes 1000 examples.
904
 
905
- #### *Prompts*
906
 
907
- 8 prompts of varying difficulty were created for this task. Example:
908
 
909
- `"Проверьте, сбалансирована ли входная последовательность скобок.\n"{inputs}"\nВыведите 1, если да и 0 в противном случае. Сперва закрывающей скобкой своего типа должна закрываться последняя из открытых скобок, и лишь потом соответствующей закрывающей скобкой может закрываться та, что была открыта перед ней."`.
 
 
910
 
911
- #### *Dataset Creation*
912
 
913
  Parentheses sequences of lengths 2, 4, 8, 12, and 20 were generated with the following length distribution: `{20: 0.336, 12: 0.26, 8: 0.24, 4: 0.14, 2: 0.024}` for the train set and `{20: 0.301, 12: 0.279, 8: 0.273, 4: 0.121, 2: 0.026}` for the test set.
914
 
915
- ### *Evaluation*
916
 
917
- #### *Metrics*
918
 
919
  The task is evaluated using Accuracy.
920
 
921
- #### *Human benchmark*
922
 
923
  The human benchmark is measured on a subset of size 100 (sampled with the same original distribution). The accuracy for this task is `1.0`.
924
 
925
 
926
  ## **CheGeKa**
927
 
928
- ### *Task Description*
929
 
930
- The task contains questions from the game “What? Where? When?" and is a question-and-answer task with a free answer. The dataset is based on the dataset of the same name from the TAPE benchmark.
931
- This task is considered extremely difficult, requiring logical reasoning and knowledge about the world. The task involves QA pairs with a free-form answer (no choice of answer); however, the correct answer is formed by a long chain of cause-and-effect relationships between facts and associations.
932
 
933
- ### *Dataset Description*
934
 
935
- #### *Data Fields*
936
 
937
- - `meta` — a dictionary containing meta-information about the example:
938
- - `id` — the task ID;
939
- - `author` — the author of the question;
940
- - `tour name` — the name of the game in which the question was used;
941
- - `tour_link` — a link to the game in which the question was used (None for the test set);
942
- - `instruction` — an instructional prompt specified for the current task;
943
- - `inputs` — a dictionary containing the following input information:
944
- - `text` — a text fragment with a question from the game “What? Where? When?";
945
- - `topic` — a string containing the category of the question;
946
- - `outputs` — a string containing the correct answer to the question.
947
 
948
- #### *Data Instances*
949
 
950
- Below is an example from the dataset:
951
 
952
  ```json
953
  {
954
- "instruction": "Вы участвуете в викторине “Что? Где? Когда?”. Внимательно прочитайте вопрос из категории \"{topic}\" и ответьте на него.\nВопрос: {text}\nВ качестве ответа запишите только ваш вариант без дополнительных объяснений.\nОтвет:",
955
  "inputs": {
956
- "text": "В корриде, кроме быка, он тоже играет одну из главных ролей.",
957
- "topic": "\"ТОР\""
958
  },
959
- "outputs": "Тореадор",
960
  "meta": {
961
- "id": 7571,
962
- "author": "Максим Стасюк",
963
- "tour_name": "Своя игра. ШДК им. Рабиндраната Дебендранатовича Тагора",
964
- "tour_link": "https://db.chgk.info/tour/tagor02"
965
  }
966
  }
967
  ```
968
 
969
- #### *Data Splits*
970
 
971
- The dataset consists of 29,376 training examples (train set) and 416 test examples (test set).
972
 
973
- #### *Prompts*
974
 
975
- We prepared 4 different prompts of various difficulties for this task.
976
- An example of the prompt is given below:
977
 
978
- `"Вы участвуете в викторине “Что? Где? Когда?”. Категория вопроса: {topic}\nВнимательно прочитайте вопрос и ответьте на него: {text}\nОтвет:"`.
 
 
979
 
980
- #### *Dataset Creation*
981
 
982
- The dataset is based on the corresponding dataset from the TAPE benchmark, which, in turn, was created based on the original corpus with questions from the game “What? Where? When?".
983
 
984
- ### *Evaluation*
985
 
986
- #### *Metrics*
987
 
988
- To evaluate models on this dataset, two metrics are used: F1 score and complete match (Exact Match EM).
989
 
990
- #### *Human Benchmark*
 
 
991
 
992
- The F1 score / Exact Match results are `0.719` / `0.645`, respectively.
993
 
994
 
995
  ## **LCS**
996
 
997
- ### *Task Description*
998
 
999
  The longest common subsequence is an algorithmic task from [BIG-Bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/cs_algorithms/lcs). This problem consists of pairs of strings as input, and language models are expected to predict the length of the longest common subsequence between them correctly.
1000
 
1001
- LCS is a prototypical dynamic programming problem and measures the model's ability to capture that approach.
1002
 
1003
  Recently, large language models have started to do well on simple algorithmic tasks like few-shot arithmetic, so we want to extend this evaluation to more complicated algorithms.
1004
 
1005
- ### *Dataset Description*
1006
 
1007
- #### *Data Fields*
1008
 
1009
- - `instruction` a string containing instructions for the task and information about the requirements for the model output format;
1010
- - `inputs` an example of two sequences to be compared;
1011
- - `outputs` a string containing the correct answer, the length of the longest common subsequence;
1012
- - `meta` a dictionary containing meta information:
1013
- - `id` an integer indicating the index of the example.
1014
 
1015
- #### *Data Instances*
1016
 
1017
  Below is an example from the dataset:
1018
 
1019
  ```json
1020
  {
1021
- "instruction": "Даны две строки: \"{inputs}\"\nОпределите длину их самой длинной общей подпоследовательности.",
1022
- "inputs": "DFHFTUUZTMEGMHNEFPZ IFIGWCNVGEDBBTFDUNHLNNNIAJ",
1023
- "outputs": "5",
1024
  "meta": {
1025
- "id": 186
1026
  }
1027
  }
1028
  ```
1029
 
1030
- #### *Data Splits*
1031
 
1032
- The public test (public_test split) includes 320 examples, and the closed test (test split) set includes 500 examples.
1033
 
1034
- #### *Prompts*
1035
 
1036
- 6 prompts of varying difficulty were created for this task. Example:
1037
 
1038
- `"Для двух строк: \"{inputs}\" найдите длину наибольшей общей подпоследовательности. Пересекающиеся символы должны идти в том же порядке, но могут быть разделены другими символами."`.
 
 
1039
 
1040
- #### *Dataset Creation*
1041
 
1042
  Sequences with lengths in the range [4; 32) were generated with a Python script for the open public test and closed test sets.
1043
 
1044
  For the open public test set, we use the same generation seed as in BIG-Bench.
1045
 
1046
- ### *Evaluation*
1047
 
1048
- #### *Metrics*
1049
 
1050
  The task is evaluated using Accuracy.
1051
 
1052
- #### *Human Benchmark*
1053
 
1054
- The human benchmark is measured on a subset of size 100 (sampled with the same original distribution). The accuracy for this task is `0.704`.
1055
 
1056
 
1057
  ## **MaMuRAMu**
 
859
 
860
  ## **BPS**
861
 
862
+ ### Task Description
863
 
864
  The balanced sequence is an algorithmic task from [BIG-bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/cs_algorithms/valid_parentheses). The primary purpose of this task is to measure language models' ability to learn CS algorithmic concepts like stacks, recursion, or dynamic programming.
865
 
 
871
  2. Open brackets must be closed in the correct order.
872
  3. Every close bracket has a corresponding open bracket of the same type.
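These validity rules amount to a standard stack-based check. Below is a minimal reference sketch of such a checker (illustration only; it is not part of the benchmark code and assumes the space-separated bracket format used in the dataset examples):

```python
# Minimal balanced-bracket checker, for illustration only.
PAIRS = {")": "(", "]": "[", "}": "{"}

def is_balanced(sequence: str) -> str:
    """Return "1" if the bracket sequence is valid, "0" otherwise."""
    stack = []
    for token in sequence.split():  # dataset examples separate brackets with spaces
        if token in PAIRS.values():  # opening bracket
            stack.append(token)
        elif not stack or stack.pop() != PAIRS[token]:  # unopened or mismatched bracket
            return "0"
    return "1" if not stack else "0"  # leftover open brackets make the sequence invalid

print(is_balanced("} } ) [ } ] ) { [ { { ] ( ( ] ) ( ) [ {"))  # -> "0"
```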
873
 
874
+ **Warning:** This is a diagnostic dataset with an open test set and is not used in the general model evaluation on the benchmark.
875
+
876
+ **Keywords:** algorithms, numerical response, context length, parentheses, binary answer
877
+
878
+ **Authors:** Harsh Mehta, Behnam Neyshabur
879
+
880
+ #### Motivation
881
+
882
  Algorithms are a way to extrapolate examples and are some of the most concise descriptions of a pattern. In that sense, the ability of language models to learn them is a prominent measure of intelligence.
883
 
884
+ ### Dataset Description
885
 
886
+ #### Data Fields
887
 
888
+ - `instruction` is a string containing instructions for the task and information about the requirements for the model output format;
889
+ - `inputs` is an example of the parentheses sequence;
890
+ - `outputs` is a string containing the correct answer: “1” if the parentheses sequence is valid, “0” otherwise;
891
+ - `meta` is a dictionary containing meta information:
892
+ - `id` is an integer indicating the index of the example.
893
 
894
+ #### Data Instances
895
 
896
  Below is an example from the dataset:
897
 
898
  ```json
899
  {
900
+ "instruction": "Проверьте, сбалансирована ли входная последовательность скобок.\n\"{inputs}\"\nВыведите 1, если да и 0 в противном случае.",
901
+ "inputs": "} } ) [ } ] ) { [ { { ] ( ( ] ) ( ) [ {",
902
  "outputs": "0",
903
  "meta": {
904
+ "id": 242
905
  }
906
  }
907
  ```
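For clarity, the `{inputs}` placeholder in `instruction` is filled with the instance's `inputs` field to form the final model prompt; a minimal sketch, assuming plain `str.format`-style substitution (not the benchmark's own loader code):

```python
instance = {
    "instruction": "Проверьте, сбалансирована ли входная последовательность скобок.\n\"{inputs}\"\n"
                   "Выведите 1, если да и 0 в противном случае.",
    "inputs": "} } ) [ } ] ) { [ { { ] ( ( ] ) ( ) [ {",
    "outputs": "0",
}

# Substitute the inputs field into the instruction template to obtain the prompt.
prompt = instance["instruction"].format(inputs=instance["inputs"])
print(prompt)
```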
908
 
909
+ #### Data Splits
910
 
911
+ The training set consists of `250` examples, and the test set includes `1000` examples.
912
 
913
+ #### Prompts
914
 
915
+ 10 prompts of varying difficulty were created for this task. Example:
916
 
917
+ ```json
918
+ "Проверьте входную последовательность скобок: \"{inputs}\" на сбалансированность. В случае положительного ответа выведите 1, иначе 0.".
919
+ ```
920
 
921
+ #### Dataset Creation
922
 
923
  Parentheses sequences of lengths 2, 4, 8, 12, and 20 were generated with the following length distribution: `{20: 0.336, 12: 0.26, 8: 0.24, 4: 0.14, 2: 0.024}` for the train set and `{20: 0.301, 12: 0.279, 8: 0.273, 4: 0.121, 2: 0.026}` for the test set.
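The generation script itself is not reproduced in this card; a rough sketch of the sampling step described above might look as follows (the bracket alphabet and the handling of label balance are assumptions, not the actual MERA code):

```python
import random

# Train-set length distribution quoted above.
LENGTH_DIST = {20: 0.336, 12: 0.26, 8: 0.24, 4: 0.14, 2: 0.024}
BRACKETS = "()[]{}"

def sample_sequence(rng: random.Random) -> str:
    """Draw a sequence length from the distribution and fill it with random brackets."""
    length = rng.choices(list(LENGTH_DIST), weights=list(LENGTH_DIST.values()), k=1)[0]
    return " ".join(rng.choice(BRACKETS) for _ in range(length))

rng = random.Random(0)
print(sample_sequence(rng))
# Note: the real generator presumably also controls the share of balanced vs. unbalanced labels.
```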
924
 
925
+ ### Evaluation
926
 
927
+ #### Metrics
928
 
929
  The task is evaluated using Accuracy.
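Accuracy here is simply the share of examples whose predicted label matches `outputs` exactly; a minimal sketch of that computation (the benchmark's own scorer may additionally normalize model outputs):

```python
def accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of examples where the predicted label equals the reference."""
    return sum(p == r for p, r in zip(predictions, references)) / len(references)

print(accuracy(["1", "0", "0", "1"], ["1", "0", "1", "1"]))  # 0.75
```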
930
 
931
+ #### Human Benchmark
932
 
933
  The human benchmark is measured on a subset of size 100 (sampled with the same original distribution). The accuracy for this task is `1.0`.
934
 
935
 
936
  ## **CheGeKa**
937
 
938
+ ### Task Description
939
 
940
+ CheGeKa is a Jeopardy!-like Russian QA dataset collected from the official Russian quiz database ChGK; it belongs to the open-domain question-answering group of tasks. The dataset was created based on the [corresponding dataset](https://tape-benchmark.com/datasets.html#chegeka) from the TAPE benchmark.
 
941
 
942
+ **Keywords:** Reasoning, World Knowledge, Logic, Question-Answering, Open-Domain QA
943
 
944
+ **Authors:** Ekaterina Taktasheva, Tatiana Shavrina, Alena Fenogenova, Denis Shevelev, Nadezhda Katricheva, Maria Tikhonova, Albina Akhmetgareeva, Oleg Zinkevich, Anastasiia Bashmakova, Svetlana Iordanskaia, Alena Spiridonova, Valentina Kurenshchikova, Ekaterina Artemova, Vladislav Mikhailov
945
 
946
+ #### Motivation
947
 
948
+ The task can be considered the most challenging in terms of reasoning, knowledge, and logic, as it involves QA pairs with free-form answers (no answer choices); the correct answer is formed by a long chain of causal relationships between facts and associations.
949
 
950
+ ### Dataset Description
951
+
952
+ #### Data Fields
953
+
954
+ - `meta` is a dictionary containing meta-information about the example:
955
+ - `id` is the task ID;
956
+ - `author` is the author of the question;
957
+ - `tour_name` is the name of the game in which the question was used;
958
+ - `tour_link` is a link to the game in which the question was used (None for the test set);
959
+ - `instruction` is an instructional prompt specified for the current task;
960
+ - `inputs` is a dictionary containing the following input information:
961
+ - `text` is a text fragment with a question from the game “What? Where? When?";
962
+ - `topic` is a string containing the category of the question;
963
+ - `outputs` is a string containing the correct answer to the question.
964
+
965
+ #### Data Instances
966
+
967
+ Each instance in the dataset contains an instruction, a question, the topic of the question, the correct answer, and all the meta-information. Below is an example from the dataset:
968
 
969
  ```json
970
  {
971
+ "instruction": "Вы участвуете в викторине “Что? Где? Когда?”. Категория вопроса: {topic}\nВнимательно прочитайте и ответьте на него только словом или фразой. Вопрос: {text}\nОтвет:",
972
  "inputs": {
973
+ "text": "Веку ожерелий (вулкан).",
974
+ "topic": "ГЕОГРАФИЧЕСКИЕ КУБРАЕЧКИ"
975
  },
976
+ "outputs": "Эре|бус",
977
  "meta": {
978
+ "id": 2,
979
+ "author": "Борис Шойхет",
980
+ "tour_name": "Карусель. Командное Jeopardy. Кишинёв - 1996.",
981
+ "tour_link": "https://db.chgk.info/tour/karus96"
982
  }
983
  }
984
  ```
985
 
986
+ #### Data Splits
987
 
988
+ The dataset consists of 29,376 training examples (train set) and 416 test examples (test set).
989
 
990
+ #### Prompts
991
 
992
+ We use 10 different prompts written in natural language for this task. An example of the prompt is given below:
 
993
 
994
+ ```json
995
+ "Прочитайте вопрос из викторины \"Что? Где? Когда?\" категории \"{topic}\" и ответьте на него. Вопрос: {text}\nОтвет:"
996
+ ```
997
 
998
+ #### Dataset Creation
999
 
1000
+ The dataset was created using the corresponding dataset from the TAPE benchmark, which is, in turn, based on the original corpus of the CheGeKa game.
1001
 
1002
+ ### Evaluation
1003
 
1004
+ #### Metrics
1005
 
1006
+ The dataset is evaluated using two metrics: F1-score and Exact Match (EM).
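For reference, QA metrics of this kind are typically computed as token-level F1 plus strict exact match after light normalization; a minimal sketch under that assumption (the exact normalization rules used by the benchmark scorer are not shown in this card):

```python
from collections import Counter

def normalize(text: str) -> list[str]:
    """Lowercase and tokenize; real scorers may also strip punctuation."""
    return text.lower().split()

def exact_match(prediction: str, reference: str) -> float:
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction: str, reference: str) -> float:
    pred, ref = normalize(prediction), normalize(reference)
    common = Counter(pred) & Counter(ref)  # multiset of shared tokens
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(f1_score("тореадор", "Тореадор"), exact_match("тореадор", "Тореадор"))  # 1.0 1.0
```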
1007
 
1008
+ #### Human Benchmark
1009
+
1010
+ The human benchmark was measured on the test set via a Yandex.Toloka project with an overlap of 3 reviewers per task.
1011
 
1012
+ The F1-score / Exact Match results are `0.719` / `0.645`, respectively.
1013
 
1014
 
1015
  ## **LCS**
1016
 
1017
+ ### Task Description
1018
 
1019
  The longest common subsequence is an algorithmic task from [BIG-Bench](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/cs_algorithms/lcs). This problem consists of pairs of strings as input, and language models are expected to predict the length of the longest common subsequence between them correctly.
1020
 
1021
+ LCS is a prototypical dynamic programming problem, and this task measures the model's ability to capture that approach.
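The target answer is the classic dynamic-programming LCS length; below is a minimal reference sketch of the quantity the model must predict (illustration only, not the benchmark code):

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of a and b via O(len(a) * len(b)) DP."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

print(lcs_length("RSEZREEVCIVIVPHVLSH", "VDNCOFYJVZNQV"))  # 4, as in the dataset example below
```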
1022
+
1023
+ **Keywords:** algorithms, numerical response, context length
1024
+
1025
+ **Authors:** Harsh Mehta, Behnam Neyshabur
1026
+
1027
+ #### Motivation
1028
 
1029
  Recently, large language models have started to do well on simple algorithmic tasks like few-shot arithmetic, so we want to extend this evaluation to more complicated algorithms.
1030
 
1031
+ ### Dataset Description
1032
 
1033
+ #### Data Fields
1034
 
1035
+ - `instruction` is a string containing instructions for the task and information about the requirements for the model output format;
1036
+ - `inputs` is an example of two sequences to be compared;
1037
+ - `outputs` is a string containing the correct answer, the length of the longest common subsequence;
1038
+ - `meta` is a dictionary containing meta information:
1039
+ - `id` is an integer indicating the index of the example.
1040
 
1041
+ #### Data Instances
1042
 
1043
  Below is an example from the dataset:
1044
 
1045
  ```json
1046
  {
1047
+ "instruction": "Запишите в виде одного числа длину самой длинной общей подпоследовательности для следующих строк: \"{inputs}\".\nОтвет:",
1048
+ "inputs": "RSEZREEVCIVIVPHVLSH VDNCOFYJVZNQV",
1049
+ "outputs": "4",
1050
  "meta": {
1051
+ "id": 138
1052
  }
1053
  }
1054
  ```
1055
 
1056
+ #### Data Splits
1057
 
1058
+ The public test set includes `320` examples, and the closed test set includes `500` examples.
1059
 
1060
+ #### Prompts
1061
 
1062
+ 10 prompts of varying difficulty were created for this task. Example:
1063
 
1064
+ ```json
1065
+ "Решите задачу нахождения длины наибольшей общей подпоследовательности для следующих строк:\n\"{inputs}\"\nОтвет (в виде одного числа):".
1066
+ ```
1067
 
1068
+ #### Dataset Creation
1069
 
1070
  Sequences with lengths in the range [4; 32) were generated with a Python script for the open public test and closed test sets.
1071
 
1072
  For the open public test set, we use the same generation seed as in BIG-Bench.
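Only the length range and the shared seed are stated here; a rough sketch of such a generator follows (the alphabet and pairing scheme are assumptions, and the seed below is a placeholder, not the BIG-Bench seed):

```python
import random
import string

def random_string(rng: random.Random) -> str:
    """One random uppercase string with its length drawn from [4, 32)."""
    return "".join(rng.choice(string.ascii_uppercase) for _ in range(rng.randrange(4, 32)))

rng = random.Random(42)  # placeholder seed for reproducibility
inputs = f"{random_string(rng)} {random_string(rng)}"  # two strings joined into one `inputs` field
print(inputs)
```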
1073
 
1074
+ ### Evaluation
1075
 
1076
+ #### Metrics
1077
 
1078
  The task is evaluated using Accuracy.
1079
 
1080
+ #### Human Benchmark
1081
 
1082
+ The human benchmark is measured on a subset of size 100 (sampled with the same original distribution). The accuracy for this task is `0.56`.
1083
 
1084
 
1085
  ## **MaMuRAMu**