ai-forever committed
Commit a03553e
1 Parent(s): 1128ce4

Update README.md

Files changed (1)
  1. README.md +7 -0
README.md CHANGED
@@ -642,6 +642,13 @@ The datasets are divided into subsets based on context lengths: 4k, 8k, 16k, 32k
  - **ruGSM100**: Solve math problems using Chain-of-Thought reasoning. Created by translating the [GSM100](https://huggingface.co/datasets/L4NLP/LEval/viewer/gsm100) dataset from L-Eval.


+ ## Metrics
+
+ The benchmark employs a range of evaluation metrics to ensure comprehensive assessment across tasks and context lengths. These metrics measure a model's ability to understand long contexts, reason over them, and extract relevant information from extensive texts. The primary metrics are:
+
+ - **Exact Match (EM)**: Evaluates the accuracy of a model's responses by comparing predicted answers to the ground truth. It is particularly suited to tasks where an exact response is required, such as question answering and retrieval.
+
+ - **Overall Score**: For each model, the overall score is the average of its scores across all tasks and context lengths, providing a holistic view of the model's performance and its ability to generalize across different types of long-context tasks.

  ## Usage

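For illustration, here is a minimal Python sketch of how the two metrics described in this commit could be computed. The `normalize` helper, the task names, and the averaging scheme (mean over context-length subsets per task, then mean over tasks) are assumptions made for this example, not the benchmark's actual scoring code:

```python
def exact_match(prediction: str, ground_truth: str) -> float:
    """Score 1.0 if the normalized prediction equals the normalized answer, else 0.0."""
    # Hypothetical normalization: lowercase and collapse whitespace.
    normalize = lambda s: " ".join(s.lower().split())
    return float(normalize(prediction) == normalize(ground_truth))


def overall_score(task_scores: dict[str, list[float]]) -> float:
    """Average each task's scores over its context-length subsets, then average over tasks."""
    per_task_means = [sum(s) / len(s) for s in task_scores.values()]
    return sum(per_task_means) / len(per_task_means)


# Toy example with made-up EM scores per context-length subset (e.g. 4k, 8k, 16k).
scores = {
    "ruGSM100": [0.42, 0.40, 0.35],
    "other_task": [0.61, 0.58, 0.50],
}
print(exact_match("72 apples", "72 apples"))  # 1.0
print(overall_score(scores))                  # ≈ 0.477
```

Note that averaging per task first and pooling all (task, length) pairs equally give different results when subset counts differ; the section above only specifies averaging across tasks and context lengths, so the order shown here is one possible reading.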