---
sdk: static
pinned: false
---

# CIIRC CTU

Our team at Czech Institute of Informatics, Robotics and Cybernetics focuses on developing NLP applications utilizing large language models.
Apart from creating custom solutions for our worldwide partners, we aim to aid the local NLP community by developing Czech-enabled LLMs and complementary evaluation tools.

## CzechBench
As selecting the most capable model for a specific task and language is crucial for ensuring optimal performance, we concentrated our efforts on developing a Czech-focused LLM evaluation suite.
CzechBench, available on [GitHub](https://github.com/jirkoada/czechbench_eval_harness/tree/main/lm_eval/tasks/czechbench#readme), is a collection of Czech evaluation tasks selected to assess multiple aspects of LLM capabilities.
The suite now leverages the [Language Model Evaluation Harness framework](https://github.com/EleutherAI/lm-evaluation-harness), providing an effective evaluation environment that most LLM developers are already familiar with.
The models are evaluated in an end-to-end fashion, using only their final textual outputs. This allows for direct performance comparison across both open-source and proprietary LLM solutions.

To start evaluating your own Czech-enabled models, you can follow the instructions on [GitHub](https://github.com/jirkoada/czechbench_eval_harness/tree/main/lm_eval/tasks/czechbench#readme).
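The linked instructions are authoritative; as a rough sketch, and assuming the CzechBench fork follows the standard `lm_eval` command-line interface of the Evaluation Harness, a run might look like this (the model and task names below are placeholders, not values from this README):

```shell
# Hypothetical sketch: install the CzechBench fork of the Evaluation Harness
# and evaluate a Hugging Face model on one of its tasks.
git clone https://github.com/jirkoada/czechbench_eval_harness.git
cd czechbench_eval_harness
pip install -e .

# List available tasks to find the CzechBench ones (names are fork-specific).
lm_eval --tasks list

# Evaluate a model; replace the placeholders with a real model ID and task name.
lm_eval --model hf \
  --model_args pretrained=<your-model-id> \
  --tasks <czechbench-task> \
  --output_path results/
```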
We are currently working on an open leaderboard for CzechBench to enable efficient sharing of evaluation results.