---
license: mit
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
pretty_name: ARD
---
# AI Alignment Research Dataset

This dataset is based on alignment-research-dataset. For more information about the dataset, have a look at the paper or LessWrong post. It is currently maintained and kept up to date by volunteers at StampyAI / AI Safety Info.
## Sources
Note that not all dataset entries contain the same keys. Every entry has the keys `id`, `source`, `title`, `text`, and `url`; other keys are available depending on the source document.

`source`
: indicates which of the following data sources the entry comes from:
- agentmodels
- aiimpacts.org
- aipulse.org
- aisafety.camp
- arbital
- arxiv_papers
- audio_transcripts
- carado.moe
- cold.takes
- deepmind.blog
- distill
- eaforum
- gdocs
- gdrive_ebooks
- generative.ink
- gwern_blog
- intelligence.org
- jsteinhardt
- lesswrong
- markdown.ebooks
- nonarxiv_papers
- qualiacomputing.com
- reports
- stampy
- vkrakovna
- waitbutwhy
- yudkowsky.net
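Since only `id`, `source`, `title`, and `url` are guaranteed, downstream code should treat the remaining fields as optional. A minimal sketch of defensive access (the sample records below are made up for illustration, not real dataset entries):

```python
# Illustrative entries only; real records come from the dataset itself.
entries = [
    {"id": "1", "source": "arbital", "title": "Example page",
     "text": "...", "url": "https://arbital.com/..."},
    {"id": "2", "source": "lesswrong", "title": "Example post",
     "text": "...", "url": "https://lesswrong.com/...",
     "karma": 120},  # source-specific key, not guaranteed to exist
]

def by_source(entries, source):
    """Keep only entries from one data source."""
    return [e for e in entries if e["source"] == source]

# Source-specific keys (like the hypothetical "karma" above) should be
# read with .get() so missing keys yield None instead of a KeyError.
karma = [e.get("karma") for e in entries]
```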
`alignment_text`
: This label is specific to the arXiv papers. We added papers to the dataset using Allen AI's SPECTER model and included every paper that received a confidence score of over 75%. However, since we could not verify with certainty that those papers were about alignment, we created the `alignment_text` key with the value `"pos"` when we have manually labeled a paper as an alignment text and `"unlabeled"` when we have not labeled it yet. Additionally, we only include the `text` for `"pos"` entries, not for `"unlabeled"` entries.
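Because only `"pos"` entries carry text, a consumer will typically filter the arXiv papers on this label before use. A short sketch with made-up records:

```python
# Hypothetical arXiv entries illustrating the alignment_text labels.
papers = [
    {"id": "a1", "source": "arxiv_papers", "alignment_text": "pos",
     "text": "Full paper text ..."},
    {"id": "a2", "source": "arxiv_papers", "alignment_text": "unlabeled",
     "text": None},  # text is only included for "pos" entries
]

# Keep only manually confirmed alignment papers, which have usable text.
confirmed = [p for p in papers if p.get("alignment_text") == "pos"]
```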
## Contributing
Join us at StampyAI.
## Citing the Dataset
Please use the following citation when using our dataset:
Kirchner, J. H., Smith, L., Thibodeau, J., McDonnell, K., and Reynolds, L. "Understanding AI alignment research: A Systematic Analysis." arXiv preprint arXiv:2022.4338861 (2022).