---
license: mit
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
pretty_name: ARD
---
# AI Alignment Research Dataset
This dataset is based on [alignment-research-dataset](https://github.com/moirage/alignment-research-dataset).
For more information about the dataset, have a look at the [paper](https://arxiv.org/abs/2206.02841) or the accompanying [LessWrong post](https://www.lesswrong.com/posts/FgjcHiWvADgsocE34/a-descriptive-not-prescriptive-overview-of-current-ai).
It is currently maintained and kept up-to-date by volunteers at StampyAI / AI Safety Info.
## Sources
Not all dataset entries contain the same keys. Every entry has the keys `id`, `source`, `title`, `text`, and `url`; other keys are available depending on the source document.
1. `source`: indicates where the document comes from. Possible values:
- agentmodels
- aiimpacts.org
- aipulse.org
- aisafety.camp
- arbital
- arxiv_papers
- audio_transcripts
- carado.moe
- cold.takes
- deepmind.blog
- distill
- eaforum
- **gdocs**
- **gdrive_ebooks**
- generative.ink
- gwern_blog
- intelligence.org
- jsteinhardt
- lesswrong
- **markdown.ebooks**
- nonarxiv_papers
- qualiacomputing.com
- **reports**
- stampy
- vkrakovna
- waitbutwhy
- yudkowsky.net
2. `alignment_text`: This label is specific to the arXiv papers. We added papers to the dataset using Allen AI's SPECTER model and included all papers that received a confidence score of over 75%. However, since we could not verify with certainty that those papers were about alignment, we created the `alignment_text` key with the value `"pos"` when we manually labeled a paper as an alignment text and `"unlabeled"` when we have not labeled it yet. Additionally, we only included the `text` for the `"pos"` entries, not the `"unlabeled"` entries.
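For example, to keep only the manually verified alignment papers, you can filter on this key. The snippet below is a minimal sketch; it assumes the `arxiv_papers` config exposes the `alignment_text` column as described above and loads into a `train` split:
```python
from datasets import load_dataset

# Load just the arXiv papers (config name taken from the source list above).
papers = load_dataset('StampyAI/alignment-research-dataset', 'arxiv_papers')

# Keep only the entries manually labeled as alignment texts; these are
# also the only arXiv entries whose `text` field is populated.
pos_papers = papers['train'].filter(lambda row: row['alignment_text'] == 'pos')
```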
## Usage
Execute the following code to download and parse the files:
```python
from datasets import load_dataset
data = load_dataset('StampyAI/alignment-research-dataset')
```
To only get the data for a specific source, pass it in as the second argument, e.g.:
```python
from datasets import load_dataset
data = load_dataset('StampyAI/alignment-research-dataset', 'lesswrong')
```
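To see which per-source configurations exist without loading any data, you can list them from the hub (a quick sketch; requires network access):
```python
from datasets import get_dataset_config_names

# Prints one configuration name per source listed in the Sources section.
print(get_dataset_config_names('StampyAI/alignment-research-dataset'))
```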
The various sources have different keys - the resulting data object will have the union of the keys across the selected sources, with `None` as the value of keys that aren't present in a given source. For example, assuming there are the following sources with the appropriate features:
##### source1
+ id
+ name
+ description
+ author
##### source2
+ id
+ name
+ url
+ text
Then the resulting data object will have 6 columns, i.e. `id`, `name`, `description`, `author`, `url` and `text`, where rows from `source1` will have `None` in the `url` and `text` columns, and rows from `source2` will have `None` in their `description` and `author` columns.
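A quick way to see this behaviour on the real dataset is to count the missing values per column. This is a rough sketch, assuming the default configuration loads into a `train` split:
```python
from collections import Counter
from datasets import load_dataset

data = load_dataset('StampyAI/alignment-research-dataset')

# Tally how many rows leave each column empty; a column that is absent
# from a source shows up as None for every row coming from that source.
none_counts = Counter()
for row in data['train']:
    for key, value in row.items():
        if value is None:
            none_counts[key] += 1

print(none_counts.most_common())
```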
## Limitations and bias
LessWrong posts are overrepresented in this dataset and skew heavily toward x-risk content, so be careful when training or fine-tuning generative LLMs on it.
## Contributing
Join us at [StampyAI](https://coda.io/d/AI-Safety-Info_dfau7sl2hmG/Get-involved_susRF#_lufSr).
## Citing the Dataset
Please use the following citation when using our dataset:
Kirchner, J. H., Smith, L., Thibodeau, J., McDonnell, K., and Reynolds, L. "Understanding AI alignment research: A Systematic Analysis." arXiv preprint arXiv:2206.02841 (2022).