Tasks: Text Retrieval · Sub-tasks: document-retrieval · Modalities: Text
Formats: parquet · Languages: English · Size: 100M - 1B · License:

Commit cd88b4f · 1 parent: f9c4fdc · Update documentation

README.md CHANGED
@@ -12,26 +12,15 @@ task_ids:

# abstracts-embeddings

-This is the embeddings of the titles and abstracts of
-
-1.
-2.
-3.
-4. If
-
-In the future, this dataset might become a parquet in order to admit all the features offered by Hugging Face Datasets, but it consists only of a text file and a numpy memmap for now. The memmap is an array of many length-384 `np.float16` vectors, and the i-th row vector in this array corresponds with the i-th line in the text file. The text file is just a list of ids that can be used to get more information from the OpenAlex API.
-
-```python
-import numpy as np
-
-with open('openalex_ids.txt', 'r') as f:
-    idxs = f.read().splitlines()
-
-embeddings = np.memmap('embeddings.memmap', dtype=np.float16, mode='r').reshape(-1, 384)
-```
-
-However, the memmap cannot be uploaded to Hugging Face as a single file, so it's split with the command `split -b 3221225472 -d --suffix-length=3 --additional-suffix=.memmap embeddings.memmap embeddings_`. It can be put back together with the command `cat embeddings_*.memmap > embeddings.memmap`.
+These are the embeddings of the titles and abstracts of 110 million academic publications taken from the [OpenAlex](https://openalex.org) dataset as of January 1, 2025. The embeddings are generated with a Unix pipeline chaining together the AWS CLI, gzip, `oa_jsonl` (a C parser tailored to the JSON Lines structure of the OpenAlex snapshot), and a Python embedding script. The source code of `oa_jsonl` and the Makefile that sets up the pipeline are available on [GitHub](https://github.com/colonelwatch/abstracts-search), but the general process is as follows:
+
+1. Decode the JSON entry of an individual work
+2. From the language field, determine if the abstract will be in English, and if not, go back to step 1
+3. From the abstract inverted index field, reconstruct the text of the abstract
+4. If there is a title field, construct a single document in the format `title + ' ' + abstract`, or if not, just use the abstract
+5. Compute an embedding with the [stella_en_1.5B_v5](https://huggingface.co/NovaSearch/stella_en_1.5B_v5) model
+6. Write it to a local SQLite3 database
+
+This database is then exported in parquet format as pairs of OpenAlex IDs and length-1024 float32 vectors. The model was run with bfloat16 quantization, yielding bfloat16 vectors, but the conversion from bfloat16 to float32 leaves the lower two bytes all zero. This was exploited with byte-stream compression to store the vectors in the parquet files with full precision but no wasted space. It does, however, mean that opening the parquets in the Hugging Face `datasets` library will lead to the cache using twice the space.
+
+Though the OpenAlex dataset records 240 million works, not all of them have abstracts or are in English. Works without abstracts are skipped outright, and because the stella_en_1.5B_v5 model was only trained on English texts, non-English works are filtered out as well.
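
For reference, steps 2–4 of the added pipeline description can be sketched in plain Python. This is only an illustration of the idea, not the actual `oa_jsonl` parser or embedding script; it assumes the OpenAlex work fields `language`, `title`, and `abstract_inverted_index`:

```python
import json

def reconstruct_document(line):
    """Rebuild `title + ' ' + abstract` from one JSON Lines entry of the OpenAlex snapshot."""
    work = json.loads(line)

    # Step 2: keep only works whose abstract will be in English
    if work.get("language") != "en":
        return None

    # Step 3: the abstract is stored as an inverted index ({word: [positions, ...]}),
    # so put each word back at its recorded positions
    inverted = work.get("abstract_inverted_index")
    if not inverted:
        return None
    positions = {pos: word for word, pos_list in inverted.items() for pos in pos_list}
    abstract = " ".join(positions[i] for i in sorted(positions))

    # Step 4: prepend the title when there is one
    title = work.get("title")
    return f"{title} {abstract}" if title else abstract
```

Step 5 would then feed the returned document to the embedding model, and step 6 would write the resulting vector to SQLite alongside the work's OpenAlex ID.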
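
The remark about the lower two bytes follows from bfloat16 being the upper 16 bits of a float32 bit pattern, so widening pads every value with two zero bytes. A small check, purely illustrative and not part of the pipeline, assuming PyTorch and NumPy on a little-endian machine:

```python
import numpy as np
import torch

# bfloat16 keeps only the sign, exponent, and top 7 mantissa bits of a float32,
# so converting back to float32 fills the low 16 bits with zeros.
vectors = torch.randn(4, dtype=torch.bfloat16).to(torch.float32).numpy()

raw = vectors.view(np.uint8).reshape(-1, 4)   # 4 bytes per float32 value
print((raw[:, :2] == 0).all())                # True: the two low-order bytes are always zero
```

Those all-zero byte positions are what the byte-stream compression mentioned above squeezes out, which is how full float32 precision is kept without wasted space on disk.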
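
Reading the exported ID/vector pairs back needs nothing more than a parquet reader. A minimal sketch with the Hugging Face `datasets` library; the shard pattern and the column names `id` and `embedding` are assumptions for illustration, not taken from the card:

```python
import numpy as np
from datasets import load_dataset

# Hypothetical shard pattern and column names; check the repository's file listing.
ds = load_dataset("parquet", data_files="*.parquet", split="train")

ids = ds["id"]                                              # OpenAlex IDs
embeddings = np.asarray(ds["embedding"], dtype=np.float32)  # shape (N, 1024)
```

Materializing the dataset this way writes the vectors into the Arrow cache as plain float32, presumably why the card warns that the cache uses twice the space of the parquet files.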