Weaviate#
Weaviate is an open-source vector database. It allows you to store data objects and vector embeddings from your favorite ML models, and scale seamlessly into billions of data objects.
This notebook shows how to use functionality related to the Weaviate vector database.
See the Weaviate installation instructions.
!pip install weaviate-client
Requirement already satisfied: weaviate-client in /workspaces/langchain/.venv/lib/python3.9/site-packages (3.19.1)
Requirement already satisfied: requests<2.29.0,>=2.28.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (2.28.2)
Requirement already satisfied: validators<=0.21.0,>=0.18.2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (0.20.0)
Requirement already satisfied: tqdm<5.0.0,>=4.59.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (4.65.0)
Requirement already satisfied: authlib>=1.1.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from weaviate-client) (1.2.0)
Requirement already satisfied: cryptography>=3.2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from authlib>=1.1.0->weaviate-client) (40.0.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (1.26.15)
Requirement already satisfied: certifi>=2017.4.17 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from requests<2.29.0,>=2.28.0->weaviate-client) (2023.5.7)
Requirement already satisfied: decorator>=3.4.0 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from validators<=0.21.0,>=0.18.2->weaviate-client) (5.1.1)
Requirement already satisfied: cffi>=1.12 in /workspaces/langchain/.venv/lib/python3.9/site-packages (from cryptography>=3.2->authlib>=1.1.0->weaviate-client) (1.15.1)
Requirement already satisfied: pycparser in /workspaces/langchain/.venv/lib/python3.9/site-packages (from cffi>=1.12->cryptography>=3.2->authlib>=1.1.0->weaviate-client) (2.21)
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
WEAVIATE_URL = getpass.getpass("WEAVIATE_URL:")
os.environ["WEAVIATE_API_KEY"] = getpass.getpass("WEAVIATE_API_KEY:")
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Weaviate
from langchain.document_loaders import TextLoader
loader = TextLoader("../../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = Weaviate.from_documents(docs, embeddings, weaviate_url=WEAVIATE_URL, by_text=False)
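If your Weaviate instance requires authentication, you can also pass along the API key collected above. This is only a hedged sketch: it assumes this version of the integration accepts a weaviate_api_key keyword and forwards it to the underlying client, so adjust it to your setup.
# db = Weaviate.from_documents(
#     docs,
#     embeddings,
#     weaviate_url=WEAVIATE_URL,
#     weaviate_api_key=os.environ["WEAVIATE_API_KEY"],  # assumption: supported keyword
#     by_text=False,
# )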
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Similarity search with score#
docs = db.similarity_search_with_score(query, by_text=False)
docs[0]
(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'_additional': {'vector': [-0.015289668, -0.011418287, -0.018540842, 0.00274522, ..., -0.01808385]}, 'source': '../../../state_of_the_union.txt'}),
 0.8154189703772676)
Persistence#
Anything uploaded to Weaviate is automatically persisted into the database. You do not need to call any specific method or pass any parameter for this to happen.
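As an illustration, here is a minimal sketch of reconnecting to the same data later. It assumes the index was created with an explicit index_name (the "LangChain" name below is hypothetical) and that the text is stored under the "text" property; adjust both names to your schema.
import weaviate
client = weaviate.Client(url=WEAVIATE_URL)
db_reloaded = Weaviate(client, index_name="LangChain", text_key="text", embedding=embeddings, by_text=False)
print(db_reloaded.similarity_search(query)[0].page_content)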
Retriever options#
This section goes over different options for how to use Weaviate as a retriever.
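For example, the default similarity retriever can be tuned with standard retriever options; the small sketch below (search_kwargs and k are generic VectorStoreRetriever options, not Weaviate-specific) limits results to the top two documents.
retriever = db.as_retriever(search_kwargs={"k": 2})
retriever.get_relevant_documents(query)[0]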
MMR#
In addition to using similarity search in the retriever object, you can also use MMR (maximal marginal relevance).
retriever = db.as_retriever(search_type="mmr")
retriever.get_relevant_documents(query)[0]
Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})
Question Answering with Sources#
This section goes over how to do question answering with sources over an index. It does this by using the RetrievalQAWithSourcesChain, which looks up the relevant documents from the index.
from langchain.chains import RetrievalQAWithSourcesChain
from langchain import OpenAI
with open("../../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
docsearch = Weaviate.from_texts(
    texts,
    embeddings,
    weaviate_url=WEAVIATE_URL,
    by_text=False,
    metadatas=[{"source": f"{i}-pl"} for i in range(len(texts))],
)
chain = RetrievalQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever()
)
chain(
    {"question": "What did the president say about Justice Breyer"},
    return_only_outputs=True,
)
{'answer': " The president honored Justice Breyer for his service and mentioned his legacy of excellence. He also nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to continue Justice Breyer's legacy.\n",
'sources': '31-pl, 34-pl'}
OpenSearch#
OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene.
This notebook shows how to use functionality related to the OpenSearch database.
To run, you should have an OpenSearch instance up and running: see here for an easy Docker installation.
similarity_search by default performs an Approximate k-NN search, which uses one of several algorithms (lucene, nmslib, faiss) recommended for large datasets. To perform a brute-force search, you can use the other search methods, known as Script Scoring and Painless Scripting.
Check this for more details.
Installation#
Install the Python client.
!pip install opensearch-py
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import OpenSearchVectorSearch
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
similarity_search using Approximate k-NN#
similarity_search using Approximate k-NN Search with Custom Parameters
docsearch = OpenSearchVectorSearch.from_documents(
    docs,
    embeddings,
    opensearch_url="http://localhost:9200"
)
# If using the default Docker installation, use this instantiation instead:
# docsearch = OpenSearchVectorSearch.from_documents(
# docs,
# embeddings,
# opensearch_url="https://localhost:9200",
# http_auth=("admin", "admin"),
# use_ssl = False,
# verify_certs = False,
# ssl_assert_hostname = False,
# ssl_show_warn = False,
# )
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query, k=10)
print(docs[0].page_content)
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", engine="faiss", space_type="innerproduct", ef_construction=256, m=48)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
similarity_search using Script Scoring#
similarity_search using Script Scoring with Custom Parameters
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search("What did the president say about Ketanji Brown Jackson", k=1, search_type="script_scoring")
print(docs[0].page_content)
similarity_search using Painless Scripting#
similarity_search using Painless Scripting with Custom Parameters
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False)
filter = {"bool": {"filter": {"term": {"text": "smuggling"}}}}
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search("What did the president say about Ketanji Brown Jackson", search_type="painless_scripting", space_type="cosineSimilarity", pre_filter=filter)
print(docs[0].page_content)
Using a preexisting OpenSearch instance#
It’s also possible to use a preexisting OpenSearch instance with documents that already have vectors present.
# this is just an example, you would need to change these values to point to another opensearch instance
docsearch = OpenSearchVectorSearch(index_name="index-*", embedding_function=embeddings, opensearch_url="http://localhost:9200")
# you can specify custom field names to match the fields you're using to store your embedding, document text value, and metadata
docs = docsearch.similarity_search("Who was asking about getting lunch today?", search_type="script_scoring", space_type="cosinesimil", vector_field="message_embedding", text_field="message", metadata_field="message_metadata")
Typesense#
Typesense is an open source, in-memory search engine that you can either self-host or run on Typesense Cloud.
Typesense focuses on performance by storing the entire index in RAM (with a backup on disk) and on providing an out-of-the-box developer experience by simplifying available options and setting good defaults.
It also lets you combine attribute-based filtering with vector queries to fetch the most relevant documents.
This notebook shows you how to use Typesense as your VectorStore.
Let’s first install our dependencies:
!pip install typesense openapi-schema-pydantic openai tiktoken
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Typesense
from langchain.document_loaders import TextLoader
Let’s import our test dataset:
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
docsearch = Typesense.from_documents(
    docs,
    embeddings,
    typesense_client_params={
        'host': 'localhost',  # Use xxx.a1.typesense.net for Typesense Cloud
        'port': '8108',  # Use 443 for Typesense Cloud
        'protocol': 'http',  # Use https for Typesense Cloud
        'typesense_api_key': 'xyz',
        'typesense_collection_name': 'lang-chain'
    })
Similarity Search#
query = "What did the president say about Ketanji Brown Jackson"
found_docs = docsearch.similarity_search(query)
print(found_docs[0].page_content)
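As noted above, Typesense can combine attribute-based filtering with the vector query. The commented lines below are a hypothetical sketch only: they assume this integration exposes Typesense's filter_by syntax through a filter argument and that your documents carry a num_words attribute; treat both as assumptions and adapt them to your schema.
# found_docs = docsearch.similarity_search(query, k=4, filter='num_words:>10')
# print(found_docs[0].page_content)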
Typesense as a Retriever#
Typesense, like all the other vector stores, can be used as a LangChain Retriever; it uses cosine similarity.
retriever = docsearch.as_retriever()
retriever
query = "What did the president say about Ketanji Brown Jackson"
retriever.get_relevant_documents(query)[0]
AnalyticDB#
AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online.
AnalyticDB for PostgreSQL is developed based on the open source Greenplum Database project and is enhanced with in-depth extensions by Alibaba Cloud. AnalyticDB for PostgreSQL is compatible with the ANSI SQL 2003 syntax and the PostgreSQL and Oracle database ecosystems. AnalyticDB for PostgreSQL also supports row store and column store. AnalyticDB for PostgreSQL processes petabytes of data offline at a high performance level and supports highly concurrent online queries.
This notebook shows how to use functionality related to the AnalyticDB vector database.
To run, you should have an AnalyticDB instance up and running:
Use the AnalyticDB Cloud Vector Database. Click here to deploy it quickly.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import AnalyticDB
Split documents and get embeddings by calling the OpenAI API.
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
Connect to AnalyticDB by setting the related environment variables.
export PG_HOST={your_analyticdb_hostname}
export PG_PORT={your_analyticdb_port} # Optional, default is 5432
export PG_DATABASE={your_database} # Optional, default is postgres
export PG_USER={database_username}
export PG_PASSWORD={database_password}
Then store your embeddings and documents into AnalyticDB
import os
connection_string = AnalyticDB.connection_string_from_db_params(
    driver=os.environ.get("PG_DRIVER", "psycopg2cffi"),
    host=os.environ.get("PG_HOST", "localhost"),
    port=int(os.environ.get("PG_PORT", "5432")),
    database=os.environ.get("PG_DATABASE", "postgres"),
    user=os.environ.get("PG_USER", "postgres"),
    password=os.environ.get("PG_PASSWORD", "postgres"),
)
vector_db = AnalyticDB.from_documents(
    docs,
    embeddings,
    connection_string=connection_string,
)
Query and retrieve data
query = "What did the president say about Ketanji Brown Jackson"
docs = vector_db.similarity_search(query)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
DocArrayInMemorySearch#
DocArrayInMemorySearch is a document index provided by Docarray that stores documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server.
This notebook shows how to use functionality related to the DocArrayInMemorySearch.
Setup#
Uncomment the below cells to install docarray and get/set your OpenAI api key if you haven’t already done so.
# !pip install "docarray"
# Get an OpenAI token: https://platform.openai.com/account/api-keys
# import os
# from getpass import getpass
# OPENAI_API_KEY = getpass()
# os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
Using DocArrayInMemorySearch#
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.document_loaders import TextLoader
documents = TextLoader('../../../state_of_the_union.txt').load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = DocArrayInMemorySearch.from_documents(docs, embeddings)
Similarity search#
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Similarity search with score#
docs = db.similarity_search_with_score(query)
docs[0]
(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={}),
0.8154190158347903)
Pinecone#
Pinecone is a vector database with broad functionality.
This notebook shows how to use functionality related to the Pinecone vector database.
To use Pinecone, you must have an API key.
Here are the installation instructions.
!pip install pinecone-client
import os
import getpass
PINECONE_API_KEY = getpass.getpass('Pinecone API Key:')
PINECONE_ENV = getpass.getpass('Pinecone Environment:')
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Pinecone
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
import pinecone
# initialize pinecone
pinecone.init(
    api_key=PINECONE_API_KEY,  # find at app.pinecone.io
    environment=PINECONE_ENV,  # next to api key in console
)
index_name = "langchain-demo"
docsearch = Pinecone.from_documents(docs, embeddings, index_name=index_name)
# if you already have an index, you can load it like this
# docsearch = Pinecone.from_existing_index(index_name, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
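You can also append more documents to an existing index later with the vector store's standard add_texts method; a small sketch (the text below is just an example):
docsearch.add_texts(["More text to index!"])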
MyScale#
MyScale is a cloud-based database optimized for AI applications and solutions, built on the open-source ClickHouse.
This notebook shows how to use functionality related to the MyScale vector database.
Setting up environments#
!pip install clickhouse-connect
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
There are two ways to set up parameters for the MyScale index.
Environment Variables
Before you run the app, set the environment variables with export:
export MYSCALE_URL='<your-endpoints-url>' MYSCALE_PORT=<your-endpoints-port> MYSCALE_USERNAME=<your-username> MYSCALE_PASSWORD=<your-password> ...
You can easily find your account, password and other info on our SaaS. For details please refer to this document.
Every attribute under MyScaleSettings can be set with the prefix MYSCALE_ and is case insensitive.
Create a MyScaleSettings object with parameters
from langchain.vectorstores import MyScale, MyScaleSettings
config = MyScaleSettings(host="<your-backend-url>", port=8443, ...)
index = MyScale(embedding_function, config)
index.add_documents(...)
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import MyScale
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
for d in docs:
    d.metadata = {'some': 'metadata'}
docsearch = MyScale.from_documents(docs, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
Inserting data...: 100%|██████████| 42/42 [00:18<00:00, 2.21it/s]
print(docs[0].page_content)
As Frances Haugen, who is here with us tonight, has shown, we must hold social media platforms accountable for the national experiment they’re conducting on our children for profit.
It’s time to strengthen privacy protections, ban targeted advertising to children, demand tech companies stop collecting personal data on our children.
And let’s get all Americans the mental health services they need. More people they can turn to for help, and full parity between physical and mental health care.
Third, support our veterans.
Veterans are the best of us.
I’ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home.
My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free.
Our troops in Iraq and Afghanistan faced many dangers.
Get connection info and data schema#
print(str(docsearch))
Filtering#
You have direct access to the MyScale SQL WHERE statement. You can write a WHERE clause following standard SQL.
NOTE: Please be aware of SQL injection; this interface must not be called directly by the end user.
If you customized your column_map under your setting, you can search with a filter like this:
from langchain.vectorstores import MyScale, MyScaleSettings
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
for i, d in enumerate(docs):
    d.metadata = {'doc_id': i}
docsearch = MyScale.from_documents(docs, embeddings)
Inserting data...: 100%|██████████| 42/42 [00:15<00:00, 2.69it/s]
meta = docsearch.metadata_column
output = docsearch.similarity_search_with_relevance_scores(
    'What did the president say about Ketanji Brown Jackson?',
    k=4, where_str=f"{meta}.doc_id<10")
for d, dist in output:
    print(dist, d.metadata, d.page_content[:20] + '...')
0.252379834651947 {'doc_id': 6, 'some': ''} And I’m taking robus...
0.25022566318511963 {'doc_id': 1, 'some': ''} Groups of citizens b...
0.2469480037689209 {'doc_id': 8, 'some': ''} And so many families...
0.2428302764892578 {'doc_id': 0, 'some': 'metadata'} As Frances Haugen, w...
Deleting your data#
docsearch.drop()
DocArrayHnswSearch#
DocArrayHnswSearch is a lightweight Document Index implementation provided by DocArray that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite.
This notebook shows how to use functionality related to the DocArrayHnswSearch.
Setup#
Uncomment the cells below to install docarray and get/set your OpenAI API key if you haven’t already done so.
# !pip install "docarray[hnswlib]"
# Get an OpenAI token: https://platform.openai.com/account/api-keys
# import os
# from getpass import getpass
# OPENAI_API_KEY = getpass()
# os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
Using DocArrayHnswSearch#
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import DocArrayHnswSearch
from langchain.document_loaders import TextLoader
documents = TextLoader('../../../state_of_the_union.txt').load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = DocArrayHnswSearch.from_documents(docs, embeddings, work_dir='hnswlib_store/', n_dim=1536)
Similarity search#
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
|
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/docarray_hnsw.html
|
cd48f4d19bb5-1
|
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Similarity search with score#
docs = db.similarity_search_with_score(query)
docs[0]
(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={}),
0.36962226)
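You can also wrap the index in a retriever if you want to control how many chunks are returned downstream. A minimal sketch, not part of the original notebook; k=2 is arbitrary:
# expose the index as a retriever that returns the top 2 chunks
retriever = db.as_retriever(search_kwargs={"k": 2})
relevant_docs = retriever.get_relevant_documents(query)
len(relevant_docs)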
|
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/docarray_hnsw.html
|
cd48f4d19bb5-2
|
import shutil
# delete the dir
shutil.rmtree('hnswlib_store')
previous
Deep Lake
next
DocArrayInMemorySearch
Contents
Setup
Using DocArrayHnswSearch
Similarity search
Similarity search with score
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/docarray_hnsw.html
|
245214cff820-0
|
.ipynb
.pdf
LanceDB
LanceDB#
LanceDB is an open-source database for vector search built with persistent storage, which greatly simplifies retrieval, filtering, and management of embeddings.
This notebook shows how to use functionality related to the LanceDB vector database based on the Lance data format.
!pip install lancedb
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import LanceDB
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
documents = CharacterTextSplitter().split_documents(documents)
embeddings = OpenAIEmbeddings()
import lancedb
db = lancedb.connect('/tmp/lancedb')
table = db.create_table("my_table", data=[
{"vector": embeddings.embed_query("Hello World"), "text": "Hello World", "id": "1"}
], mode="overwrite")
docsearch = LanceDB.from_documents(documents, embeddings, connection=table)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun.
Officer Mora was 27 years old.
Officer Rivera was 22.
Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers.
|
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/lanecdb.html
|
245214cff820-1
|
I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
I’ve worked on these issues a long time.
I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
So let’s not abandon our streets. Or choose between safety and equal justice.
Let’s come together to protect our communities, restore trust, and hold law enforcement accountable.
That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.
That’s why the American Rescue Plan provided $350 Billion that cities, states, and counties can use to hire more police and invest in proven strategies like community violence interruption—trusted messengers breaking the cycle of violence and trauma and giving young people hope.
We should all agree: The answer is not to Defund the police. The answer is to FUND the police with the resources and training they need to protect our communities.
I ask Democrats and Republicans alike: Pass my budget and keep our neighborhoods safe.
And I will keep doing everything in my power to crack down on gun trafficking and ghost guns you can buy online and make at home—they have no serial numbers and can’t be traced.
And I ask Congress to pass proven measures to reduce gun violence. Pass universal background checks. Why should anyone on a terrorist list be able to purchase a weapon?
Ban assault weapons and high-capacity magazines.
Repeal the liability shield that makes gun manufacturers the only industry in America that can’t be sued.
These laws don’t infringe on the Second Amendment. They save lives.
|
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/lanecdb.html
|
245214cff820-2
|
The most fundamental right in America is the right to vote – and to have it counted. And it’s under assault.
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
|
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/lanecdb.html
|
245214cff820-3
|
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
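Since the store wraps an existing LanceDB table, you can keep appending to it after the initial load. A minimal sketch using the generic add_texts interface; the appended text is a placeholder:
# append one more snippet to the same table and query again
docsearch.add_texts(["An extra note added after the initial import."])
docs = docsearch.similarity_search(query, k=2)
len(docs)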
previous
FAISS
next
Milvus
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/lanecdb.html
|
930ebc15ba80-0
|
.ipynb
.pdf
ElasticSearch
Contents
Installation
Example
ElasticSearch#
Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
This notebook shows how to use functionality related to the Elasticsearch database.
Installation#
Check out Elasticsearch installation instructions.
To connect to an Elasticsearch instance that does not require
login credentials, pass the Elasticsearch URL and index name along with the
embedding object to the constructor.
Example:
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url="http://localhost:9200",
index_name="test_index",
embedding=embedding
)
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticVectorSearch constructor as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the “Deployments” page.
To obtain your Elastic Cloud password for the default “elastic” user:
Log in to the Elastic Cloud console at https://cloud.elastic.co
Go to “Security” > “Users”
Locate the “elastic” user and click “Edit”
Click “Reset password”
Follow the prompts to reset the password
Format for Elastic Cloud URLs is
|
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/elasticsearch.html
|
930ebc15ba80-1
|
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
Example:
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_host = "cluster_id.region_id.gcp.cloud.es.io"
elasticsearch_url = f"https://username:password@{elastic_host}:9243"
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url=elasticsearch_url,
index_name="test_index",
embedding=embedding
)
!pip install elasticsearch
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
Example#
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import ElasticVectorSearch
from langchain.document_loaders import TextLoader
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = ElasticVectorSearch.from_documents(docs, embeddings, elasticsearch_url="http://localhost:9200")
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
|
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/elasticsearch.html
|
930ebc15ba80-2
|
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
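To return the source chunks alongside an answer, the store can be combined with a question-answering-with-sources chain. This is a minimal sketch rather than part of the original notebook; it assumes an OpenAI key is set and the question is illustrative.
from langchain.llms import OpenAI
from langchain.chains import RetrievalQAWithSourcesChain
chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=db.as_retriever())
chain({"question": "What did the president say about Justice Breyer?"}, return_only_outputs=True)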
previous
DocArrayInMemorySearch
next
FAISS
Contents
Installation
Example
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/elasticsearch.html
|
4ba7c6558573-0
|
.ipynb
.pdf
Tair
Tair#
Tair is a cloud native in-memory database service developed by Alibaba Cloud.
It provides rich data models and enterprise-grade capabilities to support your real-time online scenarios while maintaining full compatibility with open source Redis. Tair also introduces persistent memory-optimized instances that are based on the new non-volatile memory (NVM) storage medium.
This notebook shows how to use functionality related to the Tair vector database.
To run, you should have a Tair instance up and running.
from langchain.embeddings.fake import FakeEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Tair
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = FakeEmbeddings(size=128)
Connect to Tair using the TAIR_URL environment variable
export TAIR_URL="redis://{username}:{password}@{tair_address}:{tair_port}"
or the keyword argument tair_url.
Then store documents and embeddings into Tair.
tair_url = "redis://localhost:6379"
# drop first if index already exists
Tair.drop_index(tair_url=tair_url)
vector_store = Tair.from_documents(
docs,
embeddings,
tair_url=tair_url
)
Query similar documents.
query = "What did the president say about Ketanji Brown Jackson"
docs = vector_store.similarity_search(query)
docs[0]
|
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/tair.html
|
4ba7c6558573-1
|
Document(page_content='We’re going after the criminals who stole billions in relief money meant for small businesses and millions of Americans. \n\nAnd tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nBy the end of this year, the deficit will be down to less than half what it was before I took office. \n\nThe only president ever to cut the deficit by more than one trillion dollars in a single year. \n\nLowering your costs also means demanding more competition. \n\nI’m a capitalist, but capitalism without competition isn’t capitalism. \n\nIt’s exploitation—and it drives up prices. \n\nWhen corporations don’t have to compete, their profits go up, your prices go up, and small businesses and family farmers and ranchers go under. \n\nWe see it happening with ocean carriers moving goods in and out of America. \n\nDuring the pandemic, these foreign-owned companies raised prices by as much as 1,000% and made record profits.', metadata={'source': '../../../state_of_the_union.txt'})
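Because FakeEmbeddings are used above, the returned document is not semantically ranked. A minimal sketch of the same flow with real embeddings (assumes an OpenAI key is set):
from langchain.embeddings.openai import OpenAIEmbeddings
# rebuild the index with real embeddings for meaningful similarity results
Tair.drop_index(tair_url=tair_url)
vector_store = Tair.from_documents(docs, OpenAIEmbeddings(), tair_url=tair_url)
docs_real = vector_store.similarity_search(query)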
previous
Supabase (Postgres)
next
Typesense
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/tair.html
|
fc60e94dfe7a-0
|
.ipynb
.pdf
Airbyte JSON
Airbyte JSON#
Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
This covers how to load any source from Airbyte into a local JSON file that can be read in as a document
Prereqs:
Have docker desktop installed
Steps:
Clone Airbyte from GitHub - git clone https://github.com/airbytehq/airbyte.git
Switch into Airbyte directory - cd airbyte
Start Airbyte - docker compose up
In your browser, just visit http://localhost:8000. You will be asked for a username and password. By default, that’s username airbyte and password password.
Setup any source you wish.
Set the destination as Local JSON, with a specified destination path - let’s say /json_data. Set up a manual sync.
Run the connection.
To see what files are created, you can navigate to: file:///tmp/airbyte_local
Find your data and copy its path. That path should be saved in the file variable below. It should start with /tmp/airbyte_local
from langchain.document_loaders import AirbyteJSONLoader
!ls /tmp/airbyte_local/json_data/
_airbyte_raw_pokemon.jsonl
loader = AirbyteJSONLoader('/tmp/airbyte_local/json_data/_airbyte_raw_pokemon.jsonl')
data = loader.load()
print(data[0].page_content[:500])
abilities:
ability:
name: blaze
url: https://pokeapi.co/api/v2/ability/66/
is_hidden: False
slot: 1
ability:
name: solar-power
url: https://pokeapi.co/api/v2/ability/94/
is_hidden: True
slot: 3
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/airbyte_json.html
|
fc60e94dfe7a-1
|
base_experience: 267
forms:
name: charizard
url: https://pokeapi.co/api/v2/pokemon-form/6/
game_indices:
game_index: 180
version:
name: red
url: https://pokeapi.co/api/v2/version/1/
game_index: 180
version:
name: blue
url: https://pokeapi.co/api/v2/version/2/
game_index: 180
version:
n
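Once loaded, these documents can be indexed and queried like any other. A minimal sketch, assuming an OpenAI key is available; the question is illustrative:
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])
index.query("Which abilities does charizard have?")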
previous
YouTube transcripts
next
Apify Dataset
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/airbyte_json.html
|
6e8855dd4153-0
|
.ipynb
.pdf
Diffbot
Diffbot#
Unlike traditional web scraping tools, Diffbot doesn’t require any rules to read the content on a page.
It starts with computer vision, which classifies a page into one of 20 possible types. Content is then interpreted by a machine learning model trained to identify the key attributes on a page based on its type.
The result is a website transformed into clean structured data (like JSON or CSV), ready for your application.
This covers how to extract HTML documents from a list of URLs using the Diffbot extract API, into a document format that we can use downstream.
urls = [
"https://python.langchain.com/en/latest/index.html",
]
The Diffbot Extract API requires an API token. Once you have it, you can extract the data from the URLs above.
import os
from langchain.document_loaders import DiffbotLoader
loader = DiffbotLoader(urls=urls, api_token=os.environ.get("DIFFBOT_API_TOKEN"))
With the .load() method, you can see the documents loaded
loader.load()
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html
|
6e8855dd4153-1
|
[Document(page_content='LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:\nBe data-aware: connect a language model to other sources of data\nBe agentic: allow a language model to interact with its environment\nThe LangChain framework is designed with the above principles in mind.\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\nGetting Started\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\nGetting Started Documentation\nModules\nThere are several main modules that LangChain provides support for. For each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides. These modules are, in increasing order of complexity:\nModels: The various model types and model integrations LangChain supports.\nPrompts: This includes prompt management, prompt optimization,
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html
|
6e8855dd4153-2
|
This includes prompt management, prompt optimization, and prompt serialization.\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\nUse Cases\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html
|
6e8855dd4153-3
|
ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\nPersonal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\nQuestion Answering: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nInteracting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.\nExtraction: Extract structured information from text.\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nEvaluation: Generative models
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html
|
6e8855dd4153-4
|
type of Data Augmented Generation.\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\nReference Docs\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\nReference Documentation\nLangChain Ecosystem\nGuides for how other companies/products can be used with LangChain\nLangChain Ecosystem\nAdditional Resources\nAdditional collection of resources we think may be useful as you develop your application!\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nTracing: A
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html
|
6e8855dd4153-5
|
template repositories for deploying LangChain apps.\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\nDiscord: Join us on our Discord to discuss all things LangChain!\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.', metadata={'source': 'https://python.langchain.com/en/latest/index.html'})]
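Extracted pages can be long, so it is often useful to split them before embedding. A minimal sketch; the chunk sizes are arbitrary:
from langchain.text_splitter import RecursiveCharacterTextSplitter
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
splits = splitter.split_documents(loader.load())
len(splits)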
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html
|
6e8855dd4153-6
|
previous
Confluence
next
Discord
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html
|
40c27612a38f-0
|
.ipynb
.pdf
Modern Treasury
Modern Treasury#
Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money.
Connect to banks and payment systems
Track transactions and balances in real-time
Automate payment operations for scale
This notebook covers how to load data from the Modern Treasury REST API into a format that can be ingested into LangChain, along with example usage for vectorization.
import os
from langchain.document_loaders import ModernTreasuryLoader
from langchain.indexes import VectorstoreIndexCreator
The Modern Treasury API requires an organization ID and API key, which can be found in the Modern Treasury dashboard within developer settings.
This document loader also requires a resource option which defines what data you want to load.
The following resources are available:
payment_orders Documentation
expected_payments Documentation
returns Documentation
incoming_payment_details Documentation
counterparties Documentation
internal_accounts Documentation
external_accounts Documentation
transactions Documentation
ledgers Documentation
ledger_accounts Documentation
ledger_transactions Documentation
events Documentation
invoices Documentation
modern_treasury_loader = ModernTreasuryLoader("payment_orders")
# Create a vectorstore retriever from the loader
# see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([modern_treasury_loader])
modern_treasury_doc_retriever = index.vectorstore.as_retriever()
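The retriever can then be queried with free text. A minimal sketch; the question is illustrative:
# fetch payment-order documents relevant to a natural-language question
relevant_docs = modern_treasury_doc_retriever.get_relevant_documents("Which payment orders are still pending?")
len(relevant_docs)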
previous
Microsoft OneDrive
next
Notion DB 2/2
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/modern_treasury.html
|
d7a58e0f4118-0
|
.ipynb
.pdf
Figma
Figma#
Figma is a collaborative web application for interface design.
This notebook covers how to load data from the Figma REST API into a format that can be ingested into LangChain, along with example usage for code generation.
import os
from langchain.document_loaders.figma import FigmaFileLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.chat_models import ChatOpenAI
from langchain.indexes import VectorstoreIndexCreator
from langchain.chains import ConversationChain, LLMChain
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
)
The Figma API requires an access token, node_ids, and a file key.
The file key can be pulled from the URL. https://www.figma.com/file/{filekey}/sampleFilename
Node IDs are also available in the URL. Click on anything and look for the ‘?node-id={node_id}’ param.
Access token instructions are in the Figma help center article: https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokens
figma_loader = FigmaFileLoader(
os.environ.get('ACCESS_TOKEN'),
os.environ.get('NODE_IDS'),
os.environ.get('FILE_KEY')
)
# see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([figma_loader])
figma_doc_retriever = index.vectorstore.as_retriever()
def generate_code(human_input):
# I have no idea if the Jon Carmack thing makes for better code. YMMV.
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html
|
d7a58e0f4118-1
|
# See https://python.langchain.com/en/latest/modules/models/chat/getting_started.html for chat info
system_prompt_template = """You are the expert coder Jon Carmack. Use the provided design context to create idiomatic HTML/CSS code based on the user request.
Everything must be inline in one file and your response must be directly renderable by the browser.
Figma file nodes and metadata: {context}"""
human_prompt_template = "Code the {text}. Ensure it's mobile responsive"
system_message_prompt = SystemMessagePromptTemplate.from_template(system_prompt_template)
human_message_prompt = HumanMessagePromptTemplate.from_template(human_prompt_template)
# delete the gpt-4 model_name to use the default gpt-3.5-turbo for faster results
gpt_4 = ChatOpenAI(temperature=.02, model_name='gpt-4')
# Use the retriever's 'get_relevant_documents' method if needed to filter down longer docs
relevant_nodes = figma_doc_retriever.get_relevant_documents(human_input)
conversation = [system_message_prompt, human_message_prompt]
chat_prompt = ChatPromptTemplate.from_messages(conversation)
response = gpt_4(chat_prompt.format_prompt(
context=relevant_nodes,
text=human_input).to_messages())
return response
response = generate_code("page top header")
Returns the following in response.content:
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html
|
d7a58e0f4118-2
|
<!DOCTYPE html>\n<html lang="en">\n<head>\n <meta charset="UTF-8">\n <meta name="viewport" content="width=device-width, initial-scale=1.0">\n <style>\n @import url(\'https://fonts.googleapis.com/css2?family=DM+Sans:wght@500;700&family=Inter:wght@600&display=swap\');\n\n body {\n margin: 0;\n font-family: \'DM Sans\', sans-serif;\n }\n\n .header {\n display: flex;\n justify-content: space-between;\n align-items: center;\n padding: 20px;\n background-color: #fff;\n box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);\n }\n\n .header h1 {\n font-size: 16px;\n font-weight: 700;\n
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html
|
d7a58e0f4118-3
|
font-weight: 700;\n margin: 0;\n }\n\n .header nav {\n display: flex;\n align-items: center;\n }\n\n .header nav a {\n font-size: 14px;\n font-weight: 500;\n text-decoration: none;\n color: #000;\n margin-left: 20px;\n }\n\n @media (max-width: 768px) {\n .header nav {\n display: none;\n }\n }\n </style>\n</head>\n<body>\n <header class="header">\n <h1>Company Contact</h1>\n <nav>\n <a href="#">Lorem Ipsum</a>\n
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html
|
d7a58e0f4118-4
|
Ipsum</a>\n <a href="#">Lorem Ipsum</a>\n <a href="#">Lorem Ipsum</a>\n </nav>\n </header>\n</body>\n</html>
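Since the response is a chat message whose content is a complete HTML document, you can write it straight to disk and open it in a browser. A minimal sketch; the file name is arbitrary:
# save the generated markup so it can be previewed locally
with open("generated_header.html", "w") as f:
    f.write(response.content)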
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html
|
d7a58e0f4118-5
|
previous
DuckDB
next
GitBook
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html
|
b5e4f33f4a25-0
|
.ipynb
.pdf
Discord
Discord#
Discord is a VoIP and instant messaging social platform. Users have the ability to communicate with voice calls, video calls, text messaging, media and files in private chats or as part of communities called “servers”. A server is a collection of persistent chat rooms and voice channels which can be accessed via invite links.
Follow these steps to download your Discord data:
Go to your User Settings
Then go to Privacy and Safety
Head over to Request all of my Data and click the Request Data button
It might take 30 days for you to receive your data. You’ll receive an email at the address registered with Discord. That email will have a download button that you can use to download your personal Discord data.
import pandas as pd
import os
path = input("Please enter the path to the contents of the Discord \"messages\" folder: ")
li = []
for f in os.listdir(path):
expected_csv_path = os.path.join(path, f, 'messages.csv')
csv_exists = os.path.isfile(expected_csv_path)
if csv_exists:
df = pd.read_csv(expected_csv_path, index_col=None, header=0)
li.append(df)
df = pd.concat(li, axis=0, ignore_index=True, sort=False)
from langchain.document_loaders.discord import DiscordChatLoader
loader = DiscordChatLoader(df, user_id_col="ID")
print(loader.load())
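Individual chat messages are short, so it usually helps to merge them into larger chunks before embedding. A minimal sketch; the chunk size is arbitrary:
from langchain.text_splitter import CharacterTextSplitter
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(loader.load())
len(chunks)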
previous
Diffbot
next
Docugami
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/discord_loader.html
|
c45309835645-0
|
.ipynb
.pdf
EPub
Contents
Retain Elements
EPub#
EPUB is an e-book file format that uses the “.epub” file extension. The term is short for electronic publication and is sometimes styled ePub. EPUB is supported by many e-readers, and compatible software is available for most smartphones, tablets, and computers.
This covers how to load .epub documents into the Document format that we can use downstream. You’ll need to install the pandoc package for this loader to work.
#!pip install pandoc
from langchain.document_loaders import UnstructuredEPubLoader
loader = UnstructuredEPubLoader("winter-sports.epub")
data = loader.load()
Retain Elements#
Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".
loader = UnstructuredEPubLoader("winter-sports.epub", mode="elements")
data = loader.load()
data[0]
Document(page_content='The Project Gutenberg eBook of Winter Sports in\nSwitzerland, by E. F. Benson', lookup_str='', metadata={'source': 'winter-sports.epub', 'page_number': 1, 'category': 'Title'}, lookup_index=0)
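With mode="elements" you can filter on the element category in the metadata, for example to keep only body text. A minimal sketch; the category name follows unstructured's element types:
# keep only narrative text elements, dropping titles and other structural elements
narrative = [d for d in data if d.metadata.get("category") == "NarrativeText"]
len(narrative)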
previous
Email
next
EverNote
Contents
Retain Elements
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/epub.html
|
1cf30ce58d32-0
|
.ipynb
.pdf
Azure Blob Storage File
Azure Blob Storage File#
Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol, Network File System (NFS) protocol, and Azure Files REST API.
This covers how to load document objects from Azure Files.
#!pip install azure-storage-blob
from langchain.document_loaders import AzureBlobStorageFileLoader
loader = AzureBlobStorageFileLoader(conn_str='<connection string>', container='<container name>', blob_name='<blob name>')
loader.load()
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpxvave6wl/fake.docx'}, lookup_index=0)]
previous
Azure Blob Storage Container
next
Blackboard
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/azure_blob_storage_file.html
|
340908b9143a-0
|
.ipynb
.pdf
Google Cloud Storage File
Google Cloud Storage File#
Google Cloud Storage is a managed service for storing unstructured data.
This covers how to load document objects from a Google Cloud Storage (GCS) file object (blob).
# !pip install google-cloud-storage
from langchain.document_loaders import GCSFileLoader
loader = GCSFileLoader(project_name="aist", bucket="testing-hwc", blob="fake.docx")
loader.load()
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmp3srlf8n8/fake.docx'}, lookup_index=0)]
previous
Google Cloud Storage Directory
next
Google Drive
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_cloud_storage_file.html
|
5b5f0ecc00e7-0
|
.ipynb
.pdf
Git
Contents
Load existing repository from disk
Clone repository from url
Filtering files to load
Git#
Git is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers collaboratively developing source code during software development.
This notebook shows how to load text files from a Git repository.
Load existing repository from disk#
!pip install GitPython
from git import Repo
repo = Repo.clone_from(
"https://github.com/hwchase17/langchain", to_path="./example_data/test_repo1"
)
branch = repo.head.reference
from langchain.document_loaders import GitLoader
loader = GitLoader(repo_path="./example_data/test_repo1/", branch=branch)
data = loader.load()
len(data)
print(data[0])
page_content='.venv\n.github\n.git\n.mypy_cache\n.pytest_cache\nDockerfile' metadata={'file_path': '.dockerignore', 'file_name': '.dockerignore', 'file_type': ''}
Clone repository from url#
from langchain.document_loaders import GitLoader
loader = GitLoader(
clone_url="https://github.com/hwchase17/langchain",
repo_path="./example_data/test_repo2/",
branch="master",
)
data = loader.load()
len(data)
1074
Filtering files to load#
from langchain.document_loaders import GitLoader
# eg. loading only python files
loader = GitLoader(repo_path="./example_data/test_repo1/", file_filter=lambda file_path: file_path.endswith(".py"))
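Loading then returns only the matching files. A minimal sketch continuing the example above:
data = loader.load()
# every document now corresponds to a .py file in the repository
len(data)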
previous
GitBook
next
Google BigQuery
Contents
Load existing repository from disk
Clone repository from url
Filtering files to load
By Harrison Chase
© Copyright 2023, Harrison Chase.
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/git.html
|
5b5f0ecc00e7-1
|
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/git.html
|
a46fbb192a02-0
|
.ipynb
.pdf
Images
Contents
Using Unstructured
Retain Elements
Images#
This covers how to load images such as JPG or PNG into a document format that we can use downstream.
Using Unstructured#
#!pip install pdfminer
from langchain.document_loaders.image import UnstructuredImageLoader
loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg")
data = loader.load()
data[0]
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image.html
|
a46fbb192a02-1
|
Document(page_content="LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n\n\n‘Zxjiang Shen' (F3}, Ruochen Zhang”, Melissa Dell*, Benjamin Charles Germain\nLeet, Jacob Carlson, and Weining LiF\n\n\nsugehen\n\nshangthrows, et\n\n“Abstract. Recent advanocs in document image analysis (DIA) have been\n‘pimarliy driven bythe application of neural networks dell roar\n{uteomer could be aly deployed in production and extended fo farther\n[nvetigtion. However, various factory ke lcely organize codebanee\nsnd sophisticated modal cnigurations compat the ey ree of\n‘erin! innovation by wide sence, Though there have been sng\n‘Hors to improve reuablty and simplify deep lees (DL) mode\n‘aon, sone of them ae optimized for challenge inthe demain of DIA,\nThis roprscte a major gap in the extng fol, sw DIA i eal to\nscademic research acon wie range of dpi in the social ssencee\n[rary for streamlining the sage of DL in DIA research and
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image.html
|
a46fbb192a02-2
|
streamlining the sage of DL in DIA research and appicn\n‘tons The core LayoutFaraer brary comes with a sch of simple and\nIntative interfaee or applying and eutomiing DI. odel fr Inyo de\npltfom for sharing both protrined modes an fal document dist\n{ation pipeline We demonutate that LayootPareer shea fr both\nlightweight and lrgeseledgtieation pipelines in eal-word uae ces\nThe leary pblely smal at Btspe://layost-pareergsthab So\n\n\n\n‘Keywords: Document Image Analysis» Deep Learning Layout Analysis\n‘Character Renguition - Open Serres dary « Tol\n\n\nIntroduction\n\n\n‘Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndoctiment image analysis (DIA) tea including document image clasiffeation [I]\n", lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg'}, lookup_index=0)
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image.html
|
a46fbb192a02-3
|
Retain Elements#
Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".
loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg", mode="elements")
data = loader.load()
data[0]
Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg', 'filename': 'layout-parser-paper-fast.jpg', 'page_number': 1, 'category': 'Title'}, lookup_index=0)
previous
HTML
next
Jupyter Notebook
Contents
Using Unstructured
Retain Elements
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image.html
|
cd7b62d2af0e-0
|
.ipynb
.pdf
YouTube transcripts
Contents
Add video info
YouTube loader from Google Cloud
Prerequisites
🧑 Instructions for ingesting your Google Docs data
YouTube transcripts#
YouTube is an online video sharing and social media platform created by Google.
This notebook covers how to load documents from YouTube transcripts.
from langchain.document_loaders import YoutubeLoader
# !pip install youtube-transcript-api
loader = YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True)
loader.load()
Add video info#
# ! pip install pytube
loader = YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True)
loader.load()
YouTube loader from Google Cloud#
Prerequisites#
Create a Google Cloud project or use an existing project
Enable the YouTube API
Authorize credentials for desktop app
pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib youtube-transcript-api
🧑 Instructions for ingesting your Google Docs data#
By default, the GoogleDriveLoader expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the credentials_file keyword argument. Same thing with token.json. Note that token.json will be created automatically the first time you use the loader.
GoogleApiYoutubeLoader can load from a list of YouTube video ids or from a channel name, as shown in the example below.
Note depending on your set up, the service_account_path needs to be set up. See here for more details.
from langchain.document_loaders import GoogleApiClient, GoogleApiYoutubeLoader
# Init the GoogleApiClient
from pathlib import Path
google_api_client = GoogleApiClient(credentials_path=Path("your_path_creds.json"))
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/youtube_transcript.html
|
cd7b62d2af0e-1
|
google_api_client = GoogleApiClient(credentials_path=Path("your_path_creds.json"))
# Use a Channel
youtube_loader_channel = GoogleApiYoutubeLoader(google_api_client=google_api_client, channel_name="Reducible",captions_language="en")
# Use Youtube Ids
youtube_loader_ids = GoogleApiYoutubeLoader(google_api_client=google_api_client, video_ids=["TrdevFK_am4"], add_video_info=True)
# returns a list of Documents
youtube_loader_channel.load()
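The loaded transcripts are ordinary documents, so they can be fed into other chains, for example a summarization chain. A minimal sketch, not part of the original notebook; it assumes an OpenAI key is set:
from langchain.llms import OpenAI
from langchain.chains.summarize import load_summarize_chain
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce")
chain.run(youtube_loader_ids.load())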
previous
Wikipedia
next
Airbyte JSON
Contents
Add video info
YouTube loader from Google Cloud
Prerequisites
🧑 Instructions for ingesting your Google Docs data
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/youtube_transcript.html
|
72ef88508fdc-0
|
.ipynb
.pdf
Notion DB 1/2
Contents
🧑 Instructions for ingesting your own dataset
Notion DB 1/2#
Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.
This notebook covers how to load documents from a Notion database dump.
In order to get this notion dump, follow these instructions:
🧑 Instructions for ingesting your own dataset#
Export your dataset from Notion. You can do this by clicking on the three dots in the upper right hand corner and then clicking Export.
When exporting, make sure to select the Markdown & CSV format option.
This will produce a .zip file in your Downloads folder. Move the .zip file into this repository.
Run the following command to unzip the zip file (replace the Export... with your own file name as needed).
unzip Export-d3adfe0f-3131-4bf3-8987-a52017fc1bae.zip -d Notion_DB
Run the following command to ingest the data.
from langchain.document_loaders import NotionDirectoryLoader
loader = NotionDirectoryLoader("Notion_DB")
docs = loader.load()
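Notion exports Markdown, so a Markdown-aware splitter is a natural next step before embedding. A minimal sketch; the chunk size is arbitrary:
from langchain.text_splitter import MarkdownTextSplitter
md_splitter = MarkdownTextSplitter(chunk_size=1000, chunk_overlap=0)
md_chunks = md_splitter.split_documents(docs)
len(md_chunks)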
previous
Notion DB 2/2
next
Obsidian
Contents
🧑 Instructions for ingesting your own dataset
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/notion.html
|
e1b49214ca6f-0
|
.ipynb
.pdf
EverNote
EverNote#
EverNote is intended for archiving and creating notes in which photos, audio and saved web content can be embedded. Notes are stored in virtual “notebooks” and can be tagged, annotated, edited, searched, and exported.
This notebook shows how to load an Evernote export file (.enex) from disk.
By default, all notes in the export are combined into a single Document; you can instead create one Document per note, as shown below.
# lxml and html2text are required to parse EverNote notes
# !pip install lxml
# !pip install html2text
from langchain.document_loaders import EverNoteLoader
# By default all notes are combined into a single Document
loader = EverNoteLoader("example_data/testing.enex")
loader.load()
[Document(page_content='testing this\n\nwhat happens?\n\nto the world?**Jan - March 2022**', metadata={'source': 'example_data/testing.enex'})]
# It's likely more useful to return a Document for each note
loader = EverNoteLoader("example_data/testing.enex", load_single_document=False)
loader.load()
[Document(page_content='testing this\n\nwhat happens?\n\nto the world?', metadata={'title': 'testing', 'created': time.struct_time(tm_year=2023, tm_mon=2, tm_mday=9, tm_hour=3, tm_min=47, tm_sec=46, tm_wday=3, tm_yday=40, tm_isdst=-1), 'updated': time.struct_time(tm_year=2023, tm_mon=2, tm_mday=9, tm_hour=3, tm_min=53, tm_sec=28, tm_wday=3, tm_yday=40, tm_isdst=-1), 'note-attributes.author': 'Harrison Chase', 'source': 'example_data/testing.enex'}),
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/evernote.html
|
e1b49214ca6f-1
|
Document(page_content='**Jan - March 2022**', metadata={'title': 'Summer Training Program', 'created': time.struct_time(tm_year=2022, tm_mon=12, tm_mday=27, tm_hour=1, tm_min=59, tm_sec=48, tm_wday=1, tm_yday=361, tm_isdst=-1), 'note-attributes.author': 'Mike McGarry', 'note-attributes.source': 'mobile.iphone', 'source': 'example_data/testing.enex'})]
previous
EPub
next
Facebook Chat
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023.
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/evernote.html
|
7bc39783f903-0
|
.ipynb
.pdf
Google Cloud Storage Directory
Contents
Specifying a prefix
Google Cloud Storage Directory#
Google Cloud Storage is a managed service for storing unstructured data.
This covers how to load document objects from a Google Cloud Storage (GCS) directory (bucket).
# !pip install google-cloud-storage
from langchain.document_loaders import GCSDirectoryLoader
loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc")
loader.load()
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_cloud_storage_directory.html
|
7bc39783f903-1
|
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpz37njh7u/fake.docx'}, lookup_index=0)]
Specifying a prefix#
You can also specify a prefix for more fine-grained control over what files to load.
loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc", prefix="fake")
loader.load()
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_cloud_storage_directory.html
|