|
---
language:
- en
size_categories:
- 1B<n<10B
task_categories:
- text-generation
pretty_name: AgentSearch-V1
configs:
- config_name: default
  data_files:
  - split: train
    path: "**/*.parquet"
---
|
### Getting Started |
|
|
|
The AgentSearch-V1 dataset contains over one billion passage embeddings, produced with [jina-v2-base](https://huggingface.co/jinaai/jina-embeddings-v2-base-en), spanning more than 50 million high-quality documents. The corpus covers a wide range of content from sources such as Arxiv, Wikipedia, Project Gutenberg, and carefully filtered Creative Commons (CC) web data. We are continuously expanding and improving this corpus to make for a better search experience, and we welcome your thoughts and suggestions. Please feel free to reach out with your ideas!
|
|
|
To access the AgentSearch-V1 dataset, stream it from Hugging Face with the following Python code:
|
|
|
```python
from datasets import load_dataset
import json
import numpy as np

# Stream the entire dataset:
ds = load_dataset("SciPhi/AgentSearch-V1", data_files="**/*", split="train", streaming=True)

# Optional: stream just the "arxiv" subset
# ds = load_dataset("SciPhi/AgentSearch-V1", data_files="arxiv/*", split="train", streaming=True)

# Process the entries:
for entry in ds:
    # Each row packs all of its chunk embeddings into a single float32 buffer,
    # one 768-dimensional vector per chunk.
    embeddings = np.frombuffer(
        entry['embeddings'], dtype=np.float32
    ).reshape(-1, 768)
    text_chunks = json.loads(entry['text_chunks'])
    metadata = json.loads(entry['metadata'])
    print(f'Embeddings:\n{embeddings}\n\nChunks:\n{text_chunks}\n\nMetadata:\n{metadata}')
    break
```
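
Because the passage vectors are pre-computed, a basic search reduces to embedding a query with the same model and ranking chunks by cosine similarity. The sketch below scans a small window of streamed rows and keeps the best-scoring chunk. It assumes `sentence-transformers` is installed, that jina-v2-base loads with `trust_remote_code=True` (per the model card), and that a `wikipedia/*` data-files pattern exists analogous to the `arxiv/*` pattern above; it is an illustration, not the project's search pipeline.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
import json
import numpy as np

# Embed the query with the same model used to produce the dataset's vectors.
model = SentenceTransformer("jinaai/jina-embeddings-v2-base-en", trust_remote_code=True)
query_vec = model.encode("What is a transformer language model?")
query_vec = query_vec / np.linalg.norm(query_vec)

ds = load_dataset("SciPhi/AgentSearch-V1", data_files="wikipedia/*", split="train", streaming=True)

best_score, best_chunk = -1.0, None
for i, entry in enumerate(ds):
    vecs = np.frombuffer(entry["embeddings"], dtype=np.float32).reshape(-1, 768)
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # unit-normalize rows
    scores = vecs @ query_vec  # cosine similarity of every chunk vs. the query
    chunks = json.loads(entry["text_chunks"])
    j = int(np.argmax(scores))
    if scores[j] > best_score:
        best_score, best_chunk = float(scores[j]), chunks[j]
    if i >= 100:  # scan only a small window for illustration
        break

print(f"score={best_score:.3f}\n{best_chunk}")
```

At scale you would load the vectors into an approximate-nearest-neighbor index rather than scanning rows; the agent-search repository linked below provides scripts for a full setup.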
|
|
|
--- |
|
|
|
A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/SciPhi-AI/agent-search). The docs in that repository also cover how to perform RAG over AgentSearch.
|
|
|
### Languages |
|
|
|
English. |
|
|
|
## Dataset Structure |
|
|
|
The raw dataset structure is as follows: |
|
|
|
```json
{
    "url": ...,
    "title": ...,
    "metadata": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...},
    "text_chunks": ...,
    "embeddings": ...,
    "dataset": "book" | "arxiv" | "wikipedia" | "stack-exchange" | "open-math" | "RedPajama-Data-V2"
}
```
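
Based on the loader example above, `text_chunks` decodes to a JSON list of strings and `embeddings` is a packed float32 buffer holding one 768-dimensional vector per chunk. A minimal helper that pairs each chunk with its vector, with the alignment assumption stated explicitly (`unpack_row` is an illustrative name, not part of the dataset tooling):

```python
import json
import numpy as np

def unpack_row(entry):
    """Pair each text chunk with its 768-dim embedding row.

    Assumes the layout shown above: `text_chunks` is a JSON-encoded list of
    strings and `embeddings` is a packed float32 buffer, one vector per chunk.
    """
    chunks = json.loads(entry["text_chunks"])
    vecs = np.frombuffer(entry["embeddings"], dtype=np.float32).reshape(-1, 768)
    assert len(chunks) == vecs.shape[0], "chunks and embeddings should align"
    return list(zip(chunks, vecs))
```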
|
|
|
|
|
## Dataset Creation |
|
|
|
This dataset was created as a step toward making humanity's most important knowledge openly searchable and optimized for LLM consumption. It was built by filtering, cleaning, and augmenting publicly available datasets.
|
|
|
To cite our work, please use the following: |
|
|
|
```
@software{SciPhi2023AgentSearch,
  author = {SciPhi},
  title = {AgentSearch [ΨΦ]: A Comprehensive Agent-First Framework and Dataset for Webscale Search},
  year = {2023},
  url = {https://github.com/SciPhi-AI/agent-search}
}
```
|
|
|
### Source Data |
|
|
|
```
@online{wikidump,
  author = {Wikimedia Foundation},
  title = {Wikimedia Downloads},
  url = {https://dumps.wikimedia.org}
}
```
|
|
|
```
@misc{paster2023openwebmath,
  title = {OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text},
  author = {Keiran Paster and Marco Dos Santos and Zhangir Azerbayev and Jimmy Ba},
  year = {2023},
  eprint = {2310.06786},
  archivePrefix = {arXiv},
  primaryClass = {cs.AI}
}
```
|
|
|
```
@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = apr,
  year = {2023},
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
|
|
|
### License |
|
Please refer to the licenses of the data subsets you use. |
|
|
|
* [Open-Web (Common Crawl Foundation Terms of Use)](https://commoncrawl.org/terms-of-use/full/) |
|
* Books: [the_pile_books3 license](https://huggingface.co/datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/pg19#licensing-information) |
|
* [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html) |
|
* [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information) |
|
* [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange) |
|
|
|