ccstan99 committed
Commit cfbef1e · 1 Parent(s): ece5ead

Update README.md with sources & keys


Updated the list of data sources and keys, along with some extra description of the dataset in general.

Files changed (1)
  1. README.md +50 -64
README.md CHANGED
@@ -6,96 +6,82 @@ language:
  - en
  size_categories:
  - 10K<n<100K
- pretty_name: ARD
  ---
  # AI Alignment Research Dataset
- This dataset is based on [alignment-research-dataset](https://github.com/moirage/alignment-research-dataset).

- For more information about the dataset, have a look at the [paper](https://arxiv.org/abs/2206.02841) or the [LessWrong](https://www.lesswrong.com/posts/FgjcHiWvADgsocE34/a-descriptive-not-prescriptive-overview-of-current-ai) post.
-
- It is currently maintained and kept up-to-date by volunteers at StampyAI / AI Safety Info.

  ## Sources

- The important thing here is that not all of the dataset entries contain the same keys.
-
- They all have the keys: id, source, title, text, and url.
-
- Other keys are available depending on the source document.
-
- 1. `source`: indicates the data sources:
-
- - agentmodels
- - aiimpacts.org
- - aipulse.org
- - aisafety.camp
- - arbital
- - arxiv_papers
- - audio_transcripts
- - carado.moe
- - cold.takes
- - deepmind.blog
- - distill
- - eaforum
- - **gdocs**
- - **gdrive_ebooks**
- - generative.ink
- - gwern_blog
- - intelligence.org
- - jsteinhardt
- - lesswrong
- - **markdown.ebooks**
- - nonarxiv_papers
- - qualiacomputing.com
- - **reports**
- - stampy
- - vkrakovna
- - waitbutwhy
- - yudkowsky.net
-
- 2. `alignment_text`: This is a label specific to the arXiv papers. We added papers to the dataset using Allen AI's SPECTER model and included all the papers that got a confidence score of over 75%. However, since we could not verify with certainty that those papers were about alignment, we created the `alignment_text` key, with the value `"pos"` when we manually labeled a paper as an alignment text and `"unlabeled"` when we have not labeled it yet. Additionally, we've only included the `text` for the `"pos"` entries, not the `"unlabeled"` entries; a short filtering sketch follows below.
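
As a minimal sketch of how this label can be used, the snippet below keeps only the arXiv entries labeled `"pos"`. It assumes `arxiv_papers` is a valid source name to pass to `load_dataset` (see the Usage section) and that records sit in a default `train` split.

```python
from datasets import load_dataset

# Minimal sketch (assumptions: 'arxiv_papers' config name, default 'train' split).
# Keep only the arXiv entries manually labeled as alignment texts ("pos");
# these are also the only arXiv entries whose full `text` is included.
papers = load_dataset('StampyAI/alignment-research-dataset', 'arxiv_papers')
confirmed = papers['train'].filter(lambda row: row.get('alignment_text') == 'pos')
print(f"{len(confirmed)} of {len(papers['train'])} arXiv entries are labeled 'pos'")
```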
 
 
 
 
  ## Usage

  Execute the following code to download and parse the files:
- ```
  from datasets import load_dataset
  data = load_dataset('StampyAI/alignment-research-dataset')
  ```

  To only get the data for a specific source, pass it in as the second argument, e.g.:

- ```
  from datasets import load_dataset
  data = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')
  ```

- The various sources have different keys; the resulting data object will have all keys that make sense, with `None` as the value of keys that aren't in a given source. For example, assuming there are the following sources with the appropriate features:
-
- ##### source1
- + id
- + name
- + description
- + author
-
- ##### source2
- + id
- + name
- + url
- + text
-
- Then the resulting data object will have 6 columns, i.e. `id`, `name`, `description`, `author`, `url` and `text`, where rows from `source1` will have `None` in the `url` and `text` columns, and the `source2` rows will have `None` in their `description` and `author` columns, as sketched below.
-
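
To make this concrete against the real dataset rather than the hypothetical `source1`/`source2`, here is a small sketch; it assumes the `lesswrong` configuration shown earlier and a default `train` split, and simply lists which merged columns come back as `None` for one row.

```python
from datasets import load_dataset

# Sketch (assumptions: 'lesswrong' config, default 'train' split).
# Columns that only exist for other sources are filled with None here.
data = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')
row = data['train'][0]
empty_columns = [key for key, value in row.items() if value is None]
print(f"Columns left as None for this lesswrong row: {empty_columns}")
```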

- ## Limitations and bias

- LessWrong posts have overweighted content on x-risk doom, so beware of training or finetuning generative LLMs on the dataset.

  ## Contributing

- Join us at [StampyAI](https://coda.io/d/AI-Safety-Info_dfau7sl2hmG/Get-involved_susRF#_lufSr).

  ## Citing the Dataset

- Please use the following citation when using our dataset:

  Kirchner, J. H., Smith, L., Thibodeau, J., McDonnell, K., and Reynolds, L. "Understanding AI alignment research: A Systematic Analysis." arXiv preprint arXiv:2206.02841 (2022).
 
  - en
  size_categories:
  - 10K<n<100K
+ pretty_name: alignment-research-dataset
  ---
  # AI Alignment Research Dataset

+ The AI Alignment Research Dataset is a collection of documents related to AI Alignment and Safety from various books, research papers, and alignment-related blog posts. This is a work in progress: components are still undergoing a cleaning process and will be updated more regularly.

  ## Sources

+ The following list of sources may change and items may be renamed:
+
+ - [agentmodels](https://agentmodels.org/)
+ - [aiimpacts.org](https://aiimpacts.org/)
+ - [aisafety.camp](https://aisafety.camp/)
+ - [arbital](https://arbital.com/)
+ - arxiv_papers - alignment research papers from [arxiv](https://arxiv.org/)
+ - audio_transcripts - transcripts from interviews with various researchers and other audio recordings
+ - [carado.moe](https://carado.moe/)
+ - [cold.takes](https://www.cold-takes.com/)
+ - [deepmind.blog](https://deepmindsafetyresearch.medium.com/)
+ - [distill](https://distill.pub/)
+ - [eaforum](https://forum.effectivealtruism.org/) - selected posts
+ - gdocs
+ - gdrive_ebooks - books include [Superintelligence](https://www.goodreads.com/book/show/20527133-superintelligence), [Human Compatible](https://www.goodreads.com/book/show/44767248-human-compatible), [Life 3.0](https://www.goodreads.com/book/show/34272565-life-3-0), [The Precipice](https://www.goodreads.com/book/show/50485582-the-precipice), and others
+ - [generative.ink](https://generative.ink/posts/)
+ - [gwern_blog](https://gwern.net/)
+ - [intelligence.org](https://intelligence.org/) - MIRI
+ - [jsteinhardt.wordpress.com](https://jsteinhardt.wordpress.com/)
+ - [lesswrong](https://www.lesswrong.com/) - selected posts
+ - markdown.ebooks
+ - nonarxiv_papers - other alignment research papers
+ - [qualiacomputing.com](https://qualiacomputing.com/)
+ - reports
+ - [stampy](https://aisafety.info/)
+ - [vkrakovna.wordpress.com](https://vkrakovna.wordpress.com)
+ - [waitbutwhy](https://waitbutwhy.com/)
+ - [yudkowsky.net](https://www.yudkowsky.net/)
+
+ ## Keys
+
+ Not all of the entries contain the same keys, but they all have the following:
+
+ - id - unique identifier
+ - source - based on the data source listed in the previous section
+ - title - title of document
+ - text - full text of document content
+ - url - some values may be `'n/a'` and are still being updated
+ - date_published - some values are `'n/a'`
+
+ The values of the keys are still being cleaned up for consistency. Additional keys are available depending on the source document.
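
As a quick illustration of these core keys, the sketch below loads one source and prints them for a single record. The `stampy` configuration name and the `train` split are assumptions here, and extra per-source keys may also be present.

```python
from datasets import load_dataset

# Sketch (assumptions: 'stampy' config name, default 'train' split).
# Every record should carry the core keys listed above; other keys vary by source.
data = load_dataset('StampyAI/alignment-research-dataset', 'stampy')
row = data['train'][0]
for key in ('id', 'source', 'title', 'url', 'date_published'):
    print(key, '->', row.get(key))
print('text preview ->', (row.get('text') or '')[:200])
```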

  ## Usage

  Execute the following code to download and parse the files:
+
+ ```python
  from datasets import load_dataset
  data = load_dataset('StampyAI/alignment-research-dataset')
  ```

  To only get the data for a specific source, pass it in as the second argument, e.g.:

+ ```python
  from datasets import load_dataset
  data = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')
  ```
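
To see which per-source configurations can be passed as that second argument, the `datasets` library can list them; the names should line up with the Sources section above, though they may change as the dataset is cleaned up.

```python
from datasets import get_dataset_config_names

# List the per-source configurations accepted as the second argument above.
configs = get_dataset_config_names('StampyAI/alignment-research-dataset')
print(configs)
```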

+ ## Limitations and Bias

+ LessWrong posts have overweighted content on doom and existential risk, so please beware when training or fine-tuning generative language models on the dataset.

  ## Contributing

+ The scraper used to generate this dataset is open-sourced on [GitHub](https://github.com/StampyAI/alignment-research-dataset) and is currently maintained by volunteers at StampyAI / AI Safety Info. [Learn more](https://coda.io/d/AI-Safety-Info_dfau7sl2hmG/Get-involved_susRF#_lufSr) or join us on [Discord](https://discord.gg/vjFSCDyMCy).

  ## Citing the Dataset

+ For more information, see the [paper](https://arxiv.org/abs/2206.02841) and the [LessWrong](https://www.lesswrong.com/posts/FgjcHiWvADgsocE34/a-descriptive-not-prescriptive-overview-of-current-ai) post. Please use the following citation when using the dataset:

  Kirchner, J. H., Smith, L., Thibodeau, J., McDonnell, K., and Reynolds, L. "Understanding AI alignment research: A Systematic Analysis." arXiv preprint arXiv:2206.02841 (2022).