---
language:
- en
configs:
- config_name: corpus
data_files:
- split: corpus
path:
- corpus.csv
- config_name: queries
data_files:
- split: queries
path:
- queries.csv
- config_name: mapping
data_files:
- split: relevant
path:
- relevant.csv
- split: irrelevant
path:
- irrelevant.csv
- split: seemingly_relevant
path:
- seemingly_relevant.csv
license: cc-by-sa-4.0
---
# RAGE - Retrieval Augmented Generation Evaluation
## TL;DR
RAGE is a tool for evaluating how well Large Language Models (LLMs) cite relevant sources in Retrieval Augmented Generation (RAG) tasks.
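## Loading the Dataset

A minimal loading sketch using the Hugging Face `datasets` library. The config and split names below come directly from this card's YAML header; the repository ID is a hypothetical placeholder and must be replaced with this dataset's actual Hub path.

```python
from datasets import load_dataset

# Hypothetical Hub ID -- replace with this dataset's actual repository path.
REPO_ID = "othr-nlp/rage-natural-questions"

# Configs and splits as declared in the YAML header above.
corpus = load_dataset(REPO_ID, "corpus", split="corpus")     # documents (corpus.csv)
queries = load_dataset(REPO_ID, "queries", split="queries")  # questions (queries.csv)

# The "mapping" config links queries to documents in three splits.
relevant = load_dataset(REPO_ID, "mapping", split="relevant")
irrelevant = load_dataset(REPO_ID, "mapping", split="irrelevant")
seemingly_relevant = load_dataset(REPO_ID, "mapping", split="seemingly_relevant")

print(corpus)
```

The three mapping splits presumably pair queries with documents judged relevant, irrelevant, or only superficially relevant; see the GitHub page below for the exact schema.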
## More Details
For more information, please refer to our GitHub page:
[https://github.com/othr-nlp/rage_toolkit](https://github.com/othr-nlp/rage_toolkit)
## References
This dataset is based on the BeIR version of the Natural Questions dataset.
- **BeIR**:
- [Paper: https://doi.org/10.48550/arXiv.2104.08663](https://doi.org/10.48550/arXiv.2104.08663)
- **Natural Questions**:
- [Website: https://ai.google.com/research/NaturalQuestions](https://ai.google.com/research/NaturalQuestions)
  - [Paper: https://doi.org/10.1162/tacl_a_00276](https://doi.org/10.1162/tacl_a_00276)