---
license: mit
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
pretty_name: ARD
---
|
# AI Alignment Research Dataset |
|
This dataset is based on [alignment-research-dataset](https://github.com/moirage/alignment-research-dataset). |
|
|
|
For more information about the dataset, have a look at the [paper](https://arxiv.org/abs/2206.02841) or the [LessWrong post](https://www.lesswrong.com/posts/FgjcHiWvADgsocE34/a-descriptive-not-prescriptive-overview-of-current-ai).
|
|
|
It is currently maintained and kept up-to-date by volunteers at StampyAI / AI Safety Info. |
|
|
|
## Sources |
|
|
|
Note that not all dataset entries contain the same keys.

All entries have the keys `id`, `source`, `title`, `text`, and `url`. Other keys are available depending on the source document.
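

A minimal sketch of loading the dataset and inspecting the common keys, assuming it is published on the Hugging Face Hub (the repository id below is illustrative; replace it with this dataset's actual id, and pass a configuration name if the dataset defines per-source configs):

```python
from datasets import load_dataset

# Hypothetical repository id; substitute the real Hub id of this dataset.
ds = load_dataset("StampyAI/alignment-research-dataset", split="train")

# Keys guaranteed to be present on every entry, per this card.
COMMON_KEYS = {"id", "source", "title", "text", "url"}

entry = ds[0]
print(COMMON_KEYS <= set(entry))  # True: the common keys are always present
print(set(entry) - COMMON_KEYS)   # any source-specific extra keys
```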
|
|
|
1. `source`: indicates which data source the entry comes from. The possible values are:
|
|
|
- agentmodels |
|
- aiimpacts.org |
|
- aipulse.org |
|
- aisafety.camp |
|
- arbital |
|
- arxiv_papers |
|
- audio_transcripts |
|
- carado.moe |
|
- cold.takes |
|
- deepmind.blog |
|
- distill |
|
- eaforum |
|
- **gdocs** |
|
- **gdrive_ebooks** |
|
- generative.ink |
|
- gwern_blog |
|
- intelligence.org |
|
- jsteinhardt |
|
- lesswrong |
|
- **markdown.ebooks** |
|
- nonarxiv_papers |
|
- qualiacomputing.com |
|
- **reports** |
|
- stampy |
|
- vkrakovna |
|
- waitbutwhy |
|
- yudkowsky.net |
|
|
|
2. `alignment_text`: This label is specific to the arXiv papers. We added papers to the dataset using Allen AI's SPECTER model, including every paper that received a confidence score of over 75%. However, since we could not verify with certainty that those papers were about alignment, we created the `alignment_text` key, whose value is `"pos"` when we have manually labeled a paper as an alignment text and `"unlabeled"` when we have not labeled it yet. Additionally, we only include the `text` for `"pos"` entries, not for `"unlabeled"` entries. A sketch of filtering on these fields follows below.
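

A hedged sketch of filtering on these fields, assuming the entries have been loaded as in the example above (the field names `source` and `alignment_text` and their values come from this card; everything else is illustrative):

```python
# Keep only arXiv papers manually confirmed as alignment texts.
# `alignment_text` exists only on arXiv entries, so guard with .get().
alignment_papers = [
    entry for entry in ds
    if entry["source"] == "arxiv_papers"
    and entry.get("alignment_text") == "pos"
]

print(len(alignment_papers), "confirmed alignment papers")
```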
|
|
|
## Contributing |
|
|
|
Join us at [StampyAI](https://coda.io/d/AI-Safety-Info_dfau7sl2hmG/Get-involved_susRF#_lufSr). |
|
|
|
## Citing the Dataset |
|
|
|
Please use the following citation when using our dataset: |
|
|
|
Kirchner, J. H., Smith, L., Thibodeau, J., McDonnell, K., and Reynolds, L. "Understanding AI alignment research: A Systematic Analysis." arXiv preprint arXiv:2206.02841 (2022).
|
|
|
|