Adamiros committed (verified)
Commit 6f96d42 · Parent(s): b87f677

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -15,7 +15,7 @@ Apart from creating custom solutions for our worldwide partners, we aim to aid t
  ## CzechBench
  As selecting the most capable model for a specific task and language is crucial for ensuring optimal performance, we concentrated our efforts on developing a Czech-focused LLM evaluation suite.
  CzechBench, available on [GitHub](https://github.com/jirkoada/czechbench_eval_harness/tree/main/lm_eval/tasks/czechbench#readme), is a collection of Czech evaluation tasks selected to assess multiple aspects of LLM capabilities.
- The suite newly leverages the [Language Model Evaluation Harness framework](https://github.com/EleutherAI/lm-evaluation-harness), in order to provide an effective evaluation environment most LLM developers are already familiar with.
+ The suite newly leverages the [Language Model Evaluation Harness framework](https://github.com/EleutherAI/lm-evaluation-harness), in order to provide an effective environment most LLM developers are already familiar with.
  The models are evaluated in an end-to-end fashion, using only their final textual outputs. This allows for direct performance comparison across both open-source and proprietary LLM solutions.

  To start evaluating your own Czech-enabled models, you can follow the instructions on [GitHub](https://github.com/jirkoada/czechbench_eval_harness/tree/main/lm_eval/tasks/czechbench#readme).
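
For anyone following the linked instructions, below is a minimal sketch of how a CzechBench task might be run through the Evaluation Harness Python API. It assumes the czechbench_eval_harness fork has been installed from a local clone (e.g. `pip install -e .`) and that it keeps the standard `lm_eval` entry points of recent harness versions; the checkpoint `gpt2` and the task name `czechbench_example_task` are placeholders, not identifiers confirmed by this commit.

```python
# Hedged sketch: running one CzechBench task via the Evaluation Harness API.
# Assumes `pip install -e .` was run inside a clone of
# https://github.com/jirkoada/czechbench_eval_harness and that the fork keeps
# the standard `lm_eval` entry points. Task and model names are placeholders.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                         # Hugging Face transformers backend
    model_args="pretrained=gpt2",       # swap in your Czech-enabled checkpoint
    tasks=["czechbench_example_task"],  # replace with a task listed in the repo
    batch_size=8,
)

# simple_evaluate returns a dict; per-task metrics live under "results".
print(results["results"])
```

Because the harness evaluates models end to end from their final textual outputs, the same call pattern applies to any backend the harness supports, which is what makes open-source and proprietary models directly comparable on these tasks.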