---
# QA-Evaluation-Metrics
[PyPI](https://pypi.org/project/qa-metrics/) [Open in Colab](https://colab.research.google.com/drive/17b7vrZqH0Yun2AJaOXydYZxr3cw20Ga6?usp=sharing)
QA-Evaluation-Metrics is a fast and lightweight Python package for evaluating question-answering models. It provides various basic metrics to assess the performance of QA models. Check out our paper [**PANDA**](https://arxiv.org/abs/2402.11161), a matching method that goes beyond token-level matching and is more efficient than LLM-based matching while retaining competitive evaluation performance compared to transformer LLM judges.
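To illustrate the kind of basic metrics the package evaluates QA outputs with, here is a minimal sketch of normalized exact match and token-level F1 in plain Python. The function names below are illustrative, not the package's actual API:

```python
import re
from collections import Counter

def normalize(text: str) -> list:
    # Lowercase, strip punctuation, and split into word tokens.
    return re.sub(r"[^\w\s]", "", text.lower()).split()

def exact_match(prediction: str, reference: str) -> bool:
    # True when the normalized token sequences are identical.
    return normalize(prediction) == normalize(reference)

def token_f1(prediction: str, reference: str) -> float:
    # Harmonic mean of token-level precision and recall.
    pred, ref = normalize(prediction), normalize(reference)
    common = Counter(pred) & Counter(ref)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower.", "the eiffel tower"))          # True
print(round(token_f1("the Eiffel Tower in Paris", "Eiffel Tower"), 2))  # 0.57
```

Token-level scores like these are cheap but brittle (e.g. they penalize valid paraphrases), which is the gap the PANDA matching method above targets.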