ccstan99 committed · Commit 119dc54 · Parent: 41fb259

Update README.md

language:
- en
size_categories:
- 10K<n<100K
pretty_name: ARD
---
# AI Alignment Research Dataset

This dataset is based on [alignment-research-dataset](https://github.com/moirage/alignment-research-dataset).

For more information about the dataset, see the [paper](https://arxiv.org/abs/2206.02841) or the [LessWrong](https://www.lesswrong.com/posts/FgjcHiWvADgsocE34/a-descriptive-not-prescriptive-overview-of-current-ai) post.

It is currently maintained and kept up to date by volunteers at StampyAI / AI Safety Info.
## Sources

Note that not all dataset entries contain the same keys.

All entries have the keys `id`, `source`, `title`, `text`, and `url`. Other keys are available depending on the source document.
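Since only `id`, `source`, `title`, `text`, and `url` are guaranteed, code that reads the entries should treat every other key as optional. A minimal sketch (the entries and the `authors` key below are hypothetical, purely for illustration):

```python
# Every entry has `id`, `source`, `title`, `text`, and `url`; other keys
# (here, a hypothetical `authors` key) appear only for some sources.
entries = [
    {"id": "1", "source": "lesswrong", "title": "Post", "text": "...",
     "url": "https://example.org/post", "authors": ["A. Author"]},
    {"id": "2", "source": "arbital", "title": "Page", "text": "...",
     "url": "https://example.org/page"},
]

for entry in entries:
    # Use .get() for optional keys so sources without them don't raise KeyError.
    authors = entry.get("authors", [])
    print(entry["source"], entry["title"], authors)
```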
1. `source`: indicates the data source:

   - agentmodels
   - aiimpacts.org
   - aipulse.org
   - aisafety.camp
   - arbital
   - arxiv_papers
   - audio_transcripts
   - carado.moe
   - cold.takes
   - deepmind.blog
   - distill
   - eaforum
   - **gdocs**
   - **gdrive_ebooks**
   - generative.ink
   - gwern_blog
   - intelligence.org
   - jsteinhardt
   - lesswrong
   - **markdown.ebooks**
   - nonarxiv_papers
   - qualiacomputing.com
   - **reports**
   - stampy
   - vkrakovna
   - waitbutwhy
   - yudkowsky.net
2. `alignment_text`: This label is specific to the arXiv papers. We added papers to the dataset using Allen AI's SPECTER model and included all papers that received a confidence score of over 75%. However, since we could not verify with certainty that those papers were about alignment, we created the `alignment_text` key with the value `"pos"` when we have manually labeled a paper as an alignment text and `"unlabeled"` when we have not labeled it yet. Additionally, we only include the `text` for `"pos"` entries, not for `"unlabeled"` entries.
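In practice this means downstream code will usually want to keep only the `"pos"` arXiv entries, since the `"unlabeled"` ones carry no `text`. A minimal sketch, assuming the entries have already been loaded into a list of dicts (the sample records are hypothetical):

```python
# Hypothetical arXiv entries: "pos" entries carry full text,
# "unlabeled" entries do not.
entries = [
    {"id": "1", "source": "arxiv_papers", "title": "A", "url": "...",
     "alignment_text": "pos", "text": "Full paper text ..."},
    {"id": "2", "source": "arxiv_papers", "title": "B", "url": "...",
     "alignment_text": "unlabeled", "text": None},
]

# Keep only papers manually confirmed to be about alignment.
confirmed = [e for e in entries if e.get("alignment_text") == "pos"]
print(len(confirmed))
```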
## Contributing

Join us at [StampyAI](https://coda.io/d/AI-Safety-Info_dfau7sl2hmG/Get-involved_susRF#_lufSr).

## Citing the Dataset

Please use the following citation when using our dataset:

Kirchner, J. H., Smith, L., Thibodeau, J., McDonnell, K., and Reynolds, L. "Understanding AI alignment research: A Systematic Analysis." arXiv preprint arXiv:2206.02841 (2022).