---
license: mit
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
pretty_name: ARD
---
# AI Alignment Research Dataset
This dataset is based on [alignment-research-dataset](https://github.com/moirage/alignment-research-dataset).

For more information about the dataset, have a look at the [paper](https://arxiv.org/abs/2206.02841) or the [LessWrong post](https://www.lesswrong.com/posts/FgjcHiWvADgsocE34/a-descriptive-not-prescriptive-overview-of-current-ai).

It is currently maintained and kept up to date by volunteers at StampyAI / AI Safety Info.

## Sources

Note that not all dataset entries contain the same keys.

All entries have the keys `id`, `source`, `title`, `text`, and `url`.

Other keys are available depending on the source document.
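The mix of shared and source-specific keys can be handled like this — a minimal sketch using hypothetical in-memory entries, not the dataset's actual loading API:

```python
# Hypothetical sample entries illustrating shared vs. source-specific keys.
entries = [
    {"id": "1", "source": "arxiv_papers", "title": "A Paper", "text": "...",
     "url": "https://example.com/1", "alignment_text": "pos"},
    {"id": "2", "source": "lesswrong", "title": "A Post", "text": "...",
     "url": "https://example.com/2"},
]

# Keys present in every entry, per the description above.
COMMON_KEYS = {"id", "source", "title", "text", "url"}

for entry in entries:
    assert COMMON_KEYS <= entry.keys()      # shared keys are always present
    extra = entry.keys() - COMMON_KEYS      # source-specific keys vary
    print(entry["source"], sorted(extra))
```

Checking for extra keys per `source` this way avoids `KeyError`s when a downstream script assumes a field that only some sources provide.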

1. `source`: indicates the source of the document:

- agentmodels
- aiimpacts.org
- aipulse.org
- aisafety.camp
- arbital
- arxiv_papers
- audio_transcripts
- carado.moe
- cold.takes
- deepmind.blog
- distill
- eaforum
- **gdocs**
- **gdrive_ebooks**
- generative.ink
- gwern_blog
- intelligence.org
- jsteinhardt
- lesswrong
- **markdown.ebooks**
- nonarxiv_papers
- qualiacomputing.com
- **reports**
- stampy
- vkrakovna
- waitbutwhy
- yudkowsky.net

2. `alignment_text`: This label is specific to the arXiv papers. We added papers to the dataset using Allen AI's SPECTER model, including every paper with a confidence score above 75%. However, since we could not verify with certainty that those papers were about alignment, we created the `alignment_text` key with the value `"pos"` for papers we manually labeled as alignment texts and `"unlabeled"` for papers we have not labeled yet. Additionally, the `text` field is only included for `"pos"` entries, not for `"unlabeled"` entries.
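The labeling scheme above implies a simple filter when you only want verified alignment papers — a sketch over hypothetical dict entries (the function and sample names are illustrative, not part of the dataset's API):

```python
def positive_arxiv_entries(entries):
    """Keep only arXiv entries manually labeled as alignment texts."""
    return [
        e for e in entries
        if e.get("source") == "arxiv_papers" and e.get("alignment_text") == "pos"
    ]

# Hypothetical sample: only "pos" arXiv entries carry a `text` field.
sample = [
    {"id": "1", "source": "arxiv_papers", "alignment_text": "pos", "text": "..."},
    {"id": "2", "source": "arxiv_papers", "alignment_text": "unlabeled"},
    {"id": "3", "source": "lesswrong", "text": "..."},
]

print(positive_arxiv_entries(sample))  # only the entry with id "1"
```

Using `e.get(...)` rather than `e[...]` keeps the filter safe on non-arXiv entries, which never carry the `alignment_text` key.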

## Contributing

Join us at [StampyAI](https://coda.io/d/AI-Safety-Info_dfau7sl2hmG/Get-involved_susRF#_lufSr).

## Citing the Dataset

Please use the following citation when using our dataset:

Kirchner, J. H., Smith, L., Thibodeau, J., McDonnell, K., and Reynolds, L. "Understanding AI alignment research: A Systematic Analysis." arXiv preprint arXiv:2206.02841 (2022).