---
license: apache-2.0
library_name: transformers
pipeline_tag: text-classification
tags:
- hallucination-detection
- text-classification
---
# ANAH: Analytical Annotation of Hallucinations in Large Language Models
[arXiv](https://arxiv.org/abs/2405.20315)
[license](./LICENSE)
This page hosts the InternLM2-20B model trained on the ANAH dataset. It is fine-tuned to annotate hallucinations in LLMs' responses.
For more information, please refer to our [project page](https://open-compass.github.io/ANAH/).
## 🤗 How to use the model
To annotate hallucinations, you have to follow the prompt format described in [our paper](https://arxiv.org/abs/2405.20315).
The model follows the conversation format of InternLM2-chat, with the following template protocol:
```python
dict(role='user', begin='<|im_start|>user\n', end='<|im_end|>\n'),
dict(role='assistant', begin='<|im_start|>assistant\n', end='<|im_end|>\n'),
```
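
Below is a minimal usage sketch that loads the model with `transformers` and queries it through the `chat()` helper that InternLM2-chat models expose via `trust_remote_code`. The repository id and the prompt string are placeholders, not confirmed values: substitute this model's actual Hub id and the annotation prompt from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id; replace with this model's actual Hub id.
model_path = "open-compass/ANAH-InternLM2-20B"

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # half precision; the 20B weights are large
    device_map="auto",
    trust_remote_code=True,
).eval()

# Fill in with the hallucination-annotation prompt from the paper
# (question, reference document, and the response to be annotated).
prompt = "..."

# The `chat` helper applies the <|im_start|>/<|im_end|> template shown above.
response, history = model.chat(tokenizer, prompt)
print(response)
```

Note that `model.chat` wraps the query in the user/assistant template automatically; if you serialize the conversation yourself, apply the begin/end markers above and call `model.generate` instead.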
## 🖊️ Citation
If you find this project useful in your research, please consider citing:
```
@article{ji2024anah,
  title={ANAH: Analytical Annotation of Hallucinations in Large Language Models},
  author={Ji, Ziwei and Gu, Yuzhe and Zhang, Wenwei and Lyu, Chengqi and Lin, Dahua and Chen, Kai},
  journal={arXiv preprint arXiv:2405.20315},
  year={2024}
}
```