---
dataset_info:
  features:
  - name: entity
    dtype: string
  - name: perplexity
    dtype: float64
  - name: info
    list:
    - name: status_code
      dtype: int64
    - name: text
      dtype: string
    - name: url
      dtype: string
  - name: category
    dtype: string
  - name: wiki
    dtype: int64
  splits:
  - name: train
    num_bytes: 1944535165
    num_examples: 7917
  download_size: 1406426092
  dataset_size: 1944535165
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- text-generation
language:
- en
size_categories:
- 1K<n<10K
---
WildHallucinations is a benchmark for evaluating the factuality of LLMs.
Its core idea is to prompt LLMs to generate information about a diverse set of real-world entities and then fact-check the generations against a knowledge source.
WildHallucinations consists of 7,917 entities extracted from WildChat, each paired with a knowledge source.
The entities come from English conversations that are marked as non-toxic.
As described in the main paper, we apply extensive filtering for quality control,
in particular to remove ambiguous entities with more than one meaning.
The knowledge source is built with the Google Search API: for each entity, we scrape the top 10 returned web pages.
Additional details about the cleaning process can be found in the paper.
To use the dataset:
```python
from datasets import load_dataset
ds = load_dataset("wentingzhao/WildHallucinations", split="train")
```
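As a quick sanity check, you can inspect a single record. This sketch continues from the loading snippet above; the field access follows the schema in the metadata, under the assumption that the `info` feature is returned as a list of per-page dicts.

```python
# Continuing from the loading snippet above.
example = ds[0]

print(example["entity"])      # entity name
print(example["perplexity"])  # perplexity under Llama-3-8B
print(example["category"])    # entity category
print(example["wiki"])        # 1 if any scraped page is from wikipedia.org, else 0

# Each item of `info` describes one scraped web page.
for page in example["info"][:3]:
    print(page["status_code"], page["url"], page["text"][:80] if page["text"] else "")
```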
Dataset Columns:
* entity (string): the entity name
* perplexity (float): the perplexity of the entity measured with the Llama-3-8B model
* info (list): the web information about the entity scraped from Google search results; each item holds the status_code (int), text (string), and url (string) of one scraped page
* category (string): the category of the entity, annotated by either an author or GPT-4o
* wiki (int, used as a Boolean): whether any information about the entity comes from wikipedia.org
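To illustrate the core idea, a minimal sketch follows. It is not the authors' exact evaluation pipeline: the prompt wording is a placeholder, and the fact-checking step itself (comparing the generation against the scraped reference text) is left out.

```python
from datasets import load_dataset

ds = load_dataset("wentingzhao/WildHallucinations", split="train")

def build_prompt_and_reference(record):
    # Placeholder prompt; the paper's exact prompt may differ.
    prompt = f"Tell me about {record['entity']}."
    # Concatenate the scraped page texts; these serve as the reference
    # knowledge against which an LLM's generation can be fact-checked.
    reference = "\n\n".join(page["text"] for page in record["info"] if page["text"])
    return prompt, reference

prompt, reference = build_prompt_and_reference(ds[0])
print(prompt)
print(reference[:200])
```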
### Citation Information
Please consider citing [our paper](https://arxiv.org/abs/2407.17468) if you find this dataset useful:
```
@article{zhao2024wildhallucinations,
    title={WildHallucinations: Evaluating Long-form Factuality in LLMs with Real-World Entity Queries},
    author={Wenting Zhao and Tanya Goyal and Yu Ying Chiu and Liwei Jiang and Benjamin Newman and Abhilasha Ravichander and Khyathi Chandu and Ronan Le Bras and Claire Cardie and Yuntian Deng and Yejin Choi},
    journal={arXiv preprint arXiv:2407.17468},
    year={2024}
}
```