Update README.md
README.md CHANGED
@@ -25,3 +25,22 @@ configs:
 - split: train
   path: data/train-*
---
+
+This dataset is based on the English subset of the [Cohere/wikipedia-2023-11-embed-multilingual-v3](https://huggingface.co/datasets/Cohere/wikipedia-2023-11-embed-multilingual-v3)
+dataset and contains both `ubinary` and `int8` embeddings of the text, created with the [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1)
+embedding model. The dataset contains the following columns:
+
+- `_id`: unique identifier of the Wikipedia text chunk
+- `title`: title of the Wikipedia article
+- `url`: URL of the Wikipedia article
+- `text`: text chunk of the Wikipedia article
+- `emb_ubinary`: `ubinary` embeddings of the Wikipedia text chunk
+- `emb_int8`: `int8` embeddings of the Wikipedia text chunk
+
+You can load the dataset with:
+
+```python
+from datasets import load_dataset
+
+dataset = load_dataset("krasserm/wikipedia-2023-11-en-embed-mxbai-int8-binary", split="train")
+```
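
The committed README only shows how to load the dataset. As a hedged illustration of what the quantized columns are typically used for, below is a minimal sketch of binary retrieval over `emb_ubinary`. It assumes the column stores bit-packed `uint8` vectors (128 bytes for the model's 1024 dimensions), in the format produced by `sentence_transformers.quantization.quantize_embeddings` with `precision="ubinary"`; the sample size, query text, and retrieval logic are illustrative assumptions, not part of the commit.

```python
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.quantization import quantize_embeddings

# Stream a small sample instead of downloading the full dataset.
docs = list(
    load_dataset(
        "krasserm/wikipedia-2023-11-en-embed-mxbai-int8-binary",
        split="train",
        streaming=True,
    ).take(1000)
)
# Assumed layout: each `emb_ubinary` entry is a bit-packed uint8 vector.
doc_emb = np.array([d["emb_ubinary"] for d in docs], dtype=np.uint8)

# Embed a query with the same model the dataset was built with, using the
# retrieval prompt from the mxbai-embed-large-v1 model card, then quantize
# it to the same ubinary format.
model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")
query = "Represent this sentence for searching relevant passages: Who wrote the novel Dune?"
query_ubinary = quantize_embeddings(model.encode([query]), precision="ubinary")

# Hamming distance: XOR the packed bytes, then count the set bits.
dist = np.unpackbits(doc_emb ^ query_ubinary[0], axis=1).sum(axis=1)

# Titles of the 5 nearest chunks.
for i in np.argsort(dist)[:5]:
    print(int(dist[i]), docs[i]["title"])
```

The `emb_int8` column can be used analogously, e.g. loaded as `np.int8` vectors and scored with a dot product to rescore the binary candidates.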