Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
emrgnt-cmplxty committed
Commit: e1d1861
Parent: caa6c42

Update README.md

Files changed (1):
  1. README.md +15 -24
README.md CHANGED
@@ -12,12 +12,9 @@ configs:
   - split: train
     path: "**/*.parquet"
 ---
-# Important Notice
-**This dataset is just a sample. The real dataset will be uploaded after New Year's 2024. This early release is intended for Agent Search launching today, but the data is not yet finalized.**
-
 ### Getting Started
 
-The AgentSearch-V1 dataset includes over one billion embeddings sourced from over 50 million high-quality documents. This extensive collection encompasses the majority of content from sources like Arxiv, Wikipedia, Project Gutenberg, and includes quality-filtered CC data.
+The AgentSearch-V1 dataset includes over one billion embeddings sourced from over 50 million high-quality documents. This extensive collection encompasses the majority of content from sources like Arxiv, Wikipedia, and Project Gutenberg, and includes quality-filtered CC data. We are actively working to expand the corpus and improve the search experience; if you have any thoughts or suggestions, please reach out!
 
 To access and utilize the AgentSearch-V1 dataset, you can stream it via HuggingFace with the following Python code:
 
@@ -27,27 +24,22 @@ from datasets import load_dataset
 ds = load_dataset("SciPhi/AgentSearch-V1", data_files="**/*", streaming=True)
 
 # Optional, stream just the "arxiv" dataset
-ds = load_dataset("SciPhi/AgentSearch-V1", data_files="arxiv/*", streaming=True)
+# ds = load_dataset("SciPhi/AgentSearch-V1", data_files="arxiv/*", streaming=True)
+
+# To process the entries (assumes `import json` and `import numpy as np` above):
+for entry in ds["train"]:
+    embeddings = np.frombuffer(
+        entry['embeddings'], dtype=np.float32
+    ).reshape(-1, 768)
+    text_chunks = json.loads(entry['text_chunks'])
+    metadata = json.loads(entry['metadata'])
+    print(f'Embeddings:\n{embeddings}\n\nChunks:\n{text_chunks}\n\nMetadata:\n{metadata}')
+    break
 ```
 
 ---
 
-A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/SciPhi-AI/agent-search). [Synthesizer](https://github.com/SciPhi-AI/synthesizer) offers direct integration with AgentSearch and top LLM providers.
-
-### Dataset Summary
-
-We take a similar approach to RedPajama-v1 and divide AgentSearch into a number of categories.
-
-| Dataset        | Token Count |
-|----------------|-------------|
-| Books          | TBD         |
-| ArXiv          | TBD         |
-| Wikipedia      | TBD         |
-| StackExchange  | TBD         |
-| OpenMath       | TBD         |
-| Filtered Crawl | TBD         |
-| Total          | TBD         |
+A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/SciPhi-AI/agent-search). Further, you may check the docs for details on how to perform RAG over AgentSearch.
 
 ### Languages
 
@@ -64,15 +56,14 @@ The raw dataset structure is as follows:
   "metadata": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...},
   "text_chunks": ...,
   "embeddings": ...,
-  "dataset": "github" | "books" | "arxiv" | "wikipedia" | "stackexchange" | "open-math" | "filtered-rp2"
+  "dataset": "book" | "arxiv" | "wikipedia" | "stack-exchange" | "open-math" | "RedPajama-Data-V2"
 }
 ```
 
-The indexed dataset can be downloaded directly and is structured as a qdrant database dump; each entry has metadata {"url", "vector"}. In addition, there is a corresponding sqlite dataset which contains the mapping from urls onto embeddings, text chunks, and other metadata.
 
 ## Dataset Creation
 
-This dataset was created as a step towards making humanities most important knowledge locally searchable and LLM optimal. It was created by filtering, cleaning, and augmenting locally publicly available datasets.
+This dataset was created as a step towards making humanity's most important knowledge openly searchable and LLM-optimal. It was created by filtering, cleaning, and augmenting publicly available datasets.
 
 To cite our work, please use the following:
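---

The last hunk drops the description of the prebuilt index: a qdrant database dump whose entries carry `{"url", "vector"}`, backed by a sqlite mapping from urls to embeddings, text chunks, and other metadata. For readers of the pre-commit README, here is a minimal sketch of that url lookup, assuming a hypothetical filename, table, and column layout (none of these names are documented by the dataset):

```python
# Hypothetical sketch of the url -> (embeddings, text chunks, metadata) lookup
# described in the removed paragraph. The filename, table name, and column
# names below are assumptions, not documented by the dataset.
import json
import sqlite3

import numpy as np

conn = sqlite3.connect("agent_search.db")  # assumed dump filename
row = conn.execute(
    "SELECT embeddings, text_chunks, metadata FROM documents WHERE url = ?",
    ("https://en.wikipedia.org/wiki/Albert_Einstein",),
).fetchone()
if row is not None:
    # Embeddings are stored as a raw float32 buffer, one 768-dim row per chunk.
    embeddings = np.frombuffer(row[0], dtype=np.float32).reshape(-1, 768)
    text_chunks = json.loads(row[1])
    metadata = json.loads(row[2])
    print(embeddings.shape, len(text_chunks), metadata.get("source"))
```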
 
 
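The updated README defers retrieval details to the docs. As a rough data-level illustration (not the agent-search API), the sketch below ranks streamed chunks against a query embedding by cosine similarity; `query_vec` is assumed to come from a 768-dimensional embedding model matching the corpus vectors, which this card does not provide:

```python
# Rough sketch: brute-force cosine-similarity retrieval over the streamed dataset.
# `query_vec` must come from a 768-dim embedding model matching the corpus
# vectors; producing it is outside the scope of this dataset card.
import heapq
import json

import numpy as np
from datasets import load_dataset


def top_chunks(query_vec: np.ndarray, n_docs: int = 1000, k: int = 5):
    """Scan up to n_docs streamed entries; return the k best-matching chunks."""
    q = query_vec / np.linalg.norm(query_vec)
    ds = load_dataset("SciPhi/AgentSearch-V1", data_files="**/*", streaming=True)
    best = []  # min-heap of (score, chunk, url); keeps the k highest scores seen
    for i, entry in enumerate(ds["train"]):
        if i >= n_docs:
            break
        vecs = np.frombuffer(entry["embeddings"], dtype=np.float32).reshape(-1, 768)
        vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
        chunks = json.loads(entry["text_chunks"])
        url = json.loads(entry["metadata"])["url"]
        for chunk, score in zip(chunks, vecs @ q):
            heapq.heappush(best, (float(score), chunk, url))
            if len(best) > k:
                heapq.heappop(best)
    return sorted(best, reverse=True)
```

A linear scan over the stream like this is only practical for small spot checks; real queries would go through a prebuilt vector index such as the qdrant dump mentioned above.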