---
license: cc-by-3.0
task_categories:
- question-answering
language:
- en
tags:
- vector search
- retrieval augmented generation
size_categories:
- <1K
---

## Overview

This dataset consists of chunked and embedded versions of a small subset of MongoDB's technical documentation.

## Dataset Structure

The dataset consists of the following fields:

- sourceName: The source of the document.
- url: Link to the article.
- action: Action taken on the article.
- body: Content of the article in Markdown format.
- format: Format of the content.
- metadata: Metadata associated with the document, such as tags and content type.
- title: Title of the document.
- updated: The date the document was last updated.
- embedding: The embedding of the chunk's content, created using the [thenlper/gte-small](https://huggingface.co/thenlper/gte-small) open-source model from Hugging Face.

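To see these fields in practice, you can load the dataset with the `datasets` library and inspect a single record, as in the short sketch below (the exact values depend on the record):

```python
from datasets import load_dataset

# Download the dataset from the Hugging Face Hub
dataset = load_dataset("MongoDB/mongodb-docs-embedded")

# Inspect one record
sample = dataset["train"][0]
print(sample.keys())             # field names listed above
print(sample["title"])           # title of the source document
print(len(sample["embedding"]))  # dimensionality of the embedding vector
```
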
## Usage

This dataset can be useful for prototyping RAG applications. It is a real sample of the data we used to build the AI chatbot on our official documentation website.

## Ingest Data

To experiment with this dataset using MongoDB Atlas, first [create a MongoDB Atlas account](https://www.mongodb.com/cloud/atlas/register?utm_campaign=devrel&utm_source=community&utm_medium=organic_social&utm_content=Hugging%20Face%20Dataset&utm_term=apoorva.joshi).

You can then use the following script to load this dataset into your MongoDB Atlas cluster. It requires the `datasets` and `pymongo` packages, and expects your Atlas connection string in the `MONGODB_ATLAS_URI` environment variable:

```python
import os

from bson import json_util
from datasets import load_dataset
from pymongo import MongoClient

# Connect to your Atlas cluster using the connection string
# stored in the MONGODB_ATLAS_URI environment variable
uri = os.environ.get('MONGODB_ATLAS_URI')
client = MongoClient(uri)
db_name = 'your_database_name'  # Change this to your actual database name
collection_name = 'mongodb_docs_embedded'

collection = client[db_name][collection_name]

# Download the dataset from the Hugging Face Hub
dataset = load_dataset("MongoDB/mongodb-docs-embedded")

insert_data = []

# Insert documents into the collection in batches of 1000
for item in dataset['train']:
    # Round-trip through JSON to get a BSON-compatible document
    doc = json_util.loads(json_util.dumps(item))
    insert_data.append(doc)

    if len(insert_data) == 1000:
        collection.insert_many(insert_data)
        print("1000 records ingested")
        insert_data = []

# Insert any remaining documents
if len(insert_data) > 0:
    collection.insert_many(insert_data)
    insert_data = []

print("Data ingested successfully!")
```
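
Once the data is ingested, one way to prototype retrieval for a RAG application is with an Atlas `$vectorSearch` aggregation. The sketch below is a minimal example, not part of this dataset: it assumes you have created an Atlas Vector Search index (named `vector_index` here) on the `embedding` field, that the `sentence-transformers` package is installed, and it reuses the `collection` handle from the script above.

```python
# Minimal retrieval sketch. Assumptions: an Atlas Vector Search index named
# "vector_index" exists on the "embedding" field, and `sentence-transformers`
# is installed. `collection` is the handle from the ingestion script above.
from sentence_transformers import SentenceTransformer

# Embed the query with the same model used to embed the documents
model = SentenceTransformer("thenlper/gte-small")
query_embedding = model.encode("How do I create an index in MongoDB?").tolist()

pipeline = [
    {
        "$vectorSearch": {
            "index": "vector_index",      # assumed index name
            "path": "embedding",          # field holding the embeddings
            "queryVector": query_embedding,
            "numCandidates": 100,
            "limit": 5,
        }
    },
    {"$project": {"_id": 0, "title": 1, "url": 1, "body": 1}},
]

# Print the titles and URLs of the most similar chunks
for doc in collection.aggregate(pipeline):
    print(doc["title"], doc["url"])
```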