Returns
Document results for search.
Return type
List[Document]
similarity_search_by_vector(embedding: List[float], k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Perform a similarity search against the given embedding vector.
Parameters
embedding (List[float]) – The embedding vector to search.
k (int, optional) – How many results to return. Defaults to 4.
param (dict, optional) – The search params for the index type.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error.
Defaults to None.
kwargs – Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
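A minimal usage sketch for this method, assuming an existing Milvus `vectorstore` and an `embeddings` model (both names are illustrative, not part of the API above):

```python
# Hypothetical setup: `vectorstore` is a Milvus instance and `embeddings`
# is any langchain Embeddings implementation.
query_vector = embeddings.embed_query("What did the president say?")

docs = vectorstore.similarity_search_by_vector(
    embedding=query_vector,
    k=4,
    expr='source == "state_of_the_union"',  # optional Milvus filter expression
)
for doc in docs:
    print(doc.page_content[:80])
```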
similarity_search_with_score(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Perform a search on a query string and return results with score.
For more information about the search parameters, take a look at the pymilvus
documentation found here:
https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md
Parameters
query (str) – The text being searched.
k (int, optional) – The number of results to return. Defaults to 4.
param (dict) – The search params for the specified index.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error.
Defaults to None.
kwargs – Collection.search() keyword arguments.
Return type
List[Tuple[Document, float]]
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Perform a search on an embedding vector and return results with score.
For more information about the search parameters, take a look at the pymilvus
documentation found here:
https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md
Parameters
embedding (List[float]) – The embedding vector being searched.
k (int, optional) – The number of results to return. Defaults to 4.
param (dict) – The search params for the specified index.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error.
Defaults to None.
kwargs – Collection.search() keyword arguments.
Returns
Result doc and score.
Return type
List[Tuple[Document, float]]
class langchain.vectorstores.MongoDBAtlasVectorSearch(collection: Collection[MongoDBDocumentType], embedding: Embeddings, *, index_name: str = 'default', text_key: str = 'text', embedding_key: str = 'embedding')[source]#
Wrapper around MongoDB Atlas Vector Search.
To use, you should have both:
- the pymongo python package installed
- a connection string associated with a MongoDB Atlas Cluster having deployed an
Atlas Search index
Example
from langchain.vectorstores import MongoDBAtlasVectorSearch
from langchain.embeddings.openai import OpenAIEmbeddings
from pymongo import MongoClient
mongo_client = MongoClient("<YOUR-CONNECTION-STRING>")
collection = mongo_client["<db_name>"]["<collection_name>"]
embeddings = OpenAIEmbeddings()
vectorstore = MongoDBAtlasVectorSearch(collection, embeddings)
add_texts(texts: Iterable[str], metadatas: Optional[List[Dict[str, Any]]] = None, **kwargs: Any) → List[source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_connection_string(connection_string: str, namespace: str, embedding: langchain.embeddings.base.Embeddings, **kwargs: Any) → langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch[source]#
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, collection: Optional[Collection[MongoDBDocumentType]] = None, **kwargs: Any) → MongoDBAtlasVectorSearch[source]#
Construct MongoDBAtlasVectorSearch wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Adds the documents to a provided MongoDB Atlas Vector Search index (Lucene)
This is intended to be a quick way to get started.
Example
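The source leaves this Example empty; a plausible sketch, assuming a reachable Atlas cluster with a deployed Atlas Search index and placeholder connection details:

```python
from pymongo import MongoClient
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import MongoDBAtlasVectorSearch

# Placeholder connection string and namespace -- substitute your own.
client = MongoClient("<YOUR-CONNECTION-STRING>")
collection = client["<db_name>"]["<collection_name>"]

vectorstore = MongoDBAtlasVectorSearch.from_texts(
    texts=["hello world", "goodbye world"],
    embedding=OpenAIEmbeddings(),
    collection=collection,
)
```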
similarity_search(query: str, k: int = 4, pre_filter: Optional[dict] = None, post_filter_pipeline: Optional[List[Dict]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return MongoDB documents most similar to query.
Use the knnBeta Operator available in MongoDB Atlas Search
This feature is in early access and available only for evaluation purposes, to
validate functionality, and to gather feedback from a small closed group of
early access users. It is not recommended for production deployments as we may
introduce breaking changes.
For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta
Parameters
query – Text to look up documents similar to.
k – Optional Number of Documents to return. Defaults to 4.
pre_filter – Optional Dictionary of argument(s) to prefilter on document
fields.
post_filter_pipeline – Optional Pipeline of MongoDB aggregation stages
following the knnBeta search.
Returns
List of Documents most similar to the query
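A hedged sketch of how the filtering parameters might be combined, assuming `vectorstore` is a MongoDBAtlasVectorSearch instance and documents carry a numeric `year` field (both are assumptions, not guaranteed by the API above):

```python
# pre_filter uses Atlas Search filter syntax (applied before the k-NN step);
# post_filter_pipeline takes ordinary aggregation stages (applied after it).
docs = vectorstore.similarity_search(
    query="What did the president say?",
    k=4,
    pre_filter={"range": {"path": "year", "gte": 2020}},
    post_filter_pipeline=[{"$limit": 2}],
)
```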
similarity_search_with_score(query: str, *, k: int = 4, pre_filter: Optional[dict] = None, post_filter_pipeline: Optional[List[Dict]] = None) → List[Tuple[langchain.schema.Document, float]][source]#
Return MongoDB documents most similar to query, along with scores.
Use the knnBeta Operator available in MongoDB Atlas Search
This feature is in early access and available only for evaluation purposes, to
validate functionality, and to gather feedback from a small closed group of
early access users. It is not recommended for production deployments as we
may introduce breaking changes.
For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta
Parameters
query – Text to look up documents similar to.
k – Optional Number of Documents to return. Defaults to 4.
pre_filter – Optional Dictionary of argument(s) to prefilter on document
fields.
post_filter_pipeline – Optional Pipeline of MongoDB aggregation stages
following the knnBeta search.
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.MyScale(embedding: langchain.embeddings.base.Embeddings, config: Optional[langchain.vectorstores.myscale.MyScaleSettings] = None, **kwargs: Any)[source]#
Wrapper around MyScale vector database.
You need the clickhouse-connect python package and a valid account
to connect to MyScale.
MyScale can not only search with simple vector indexes;
it also supports complex queries with multiple conditions,
constraints, and even sub-queries.
For more information, please visit the [MyScale official site](https://docs.myscale.com/en/overview/).
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, batch_size: int = 32, ids: Optional[Iterable[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
ids – Optional list of ids to associate with the texts.
batch_size – Batch size of insertion
metadatas – Optional column data to be inserted
Returns
List of ids from adding the texts into the vectorstore.
drop() → None[source]#
Helper function: Drop data
escape_str(value: str) → str[source]#
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, config: Optional[langchain.vectorstores.myscale.MyScaleSettings] = None, text_ids: Optional[Iterable[str]] = None, batch_size: int = 32, **kwargs: Any) → langchain.vectorstores.myscale.MyScale[source]#
Create MyScale wrapper with existing texts
Parameters
embedding (Embeddings) – Function to extract text embeddings
texts (Iterable[str]) – List or tuple of strings to be added
config (MyScaleSettings, Optional) – MyScale configuration
text_ids (Optional[Iterable], optional) – IDs for the texts.
Defaults to None.
batch_size (int, optional) – Batch size when transmitting data to MyScale.
Defaults to 32.
metadatas (List[dict], optional) – Metadata for the texts. Defaults to None.
Other keyword arguments will be passed into [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)
Returns
MyScale Index
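A minimal construction sketch, assuming a reachable MyScale cluster; the credentials are placeholders:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import MyScale, MyScaleSettings

# Placeholder connection settings -- substitute your own cluster values.
config = MyScaleSettings(
    host="<your-host>",
    port=8443,
    username="<user>",
    password="<password>",
)
vectorstore = MyScale.from_texts(
    texts=["hello world", "goodbye world"],
    embedding=OpenAIEmbeddings(),
    config=config,
)
```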
property metadata_column: str#
similarity_search(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Perform a similarity search with MyScale
Parameters
query (str) – query string
k (int, optional) – Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) – where condition string.
Defaults to None.
NOTE – Please do not let end users fill this directly, and always be aware
of SQL injection. When dealing with metadatas, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for it is metadata.
Returns
List of Documents
Return type
List[Document]
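Given the SQL-injection caveat above, a hedged sketch of a server-side-built filter, assuming `vectorstore` is a MyScale instance and a `source` metadata attribute exists:

```python
# Build where_str on the server side, never from raw end-user input.
# Metadata attributes are addressed through the metadata column
# (named "metadata" by default), as the NOTE above describes.
where_str = f"{vectorstore.metadata_column}.source = 'news'"
docs = vectorstore.similarity_search("economic outlook", k=4, where_str=where_str)
```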
similarity_search_by_vector(embedding: List[float], k: int = 4, where_str: Optional[str] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Perform a similarity search with MyScale by vector
Parameters
embedding (List[float]) – The embedding vector to search.
k (int, optional) – Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) – where condition string.
Defaults to None.
NOTE – Please do not let end users fill this directly, and always be aware
of SQL injection. When dealing with metadatas, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for it is metadata.
Returns
List of Documents
Return type
List[Document]
similarity_search_with_relevance_scores(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Perform a similarity search with MyScale
Parameters
query (str) – query string
k (int, optional) – Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) – where condition string.
Defaults to None.
NOTE – Please do not let end users fill this directly, and always be aware
of SQL injection. When dealing with metadatas, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for it is metadata.
Returns
List of documents most similar to the query text
and cosine distance in float for each.
Lower score represents more similarity.
Return type
List[Tuple[Document, float]]
pydantic settings langchain.vectorstores.MyScaleSettings[source]#
MyScale Client Configuration
Attributes:
myscale_host (str) : URL to connect to the MyScale backend. Defaults to ‘localhost’.
myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.
username (str) : Username to login. Defaults to None.
password (str) : Password to login. Defaults to None.
index_type (str): index type string.
index_param (dict): index build parameter.
database (str) : Database name to find the table. Defaults to ‘default’.
table (str) : Table name to operate on.
Defaults to ‘vector_table’.
metric (str) : Metric to compute distance, supported are (‘l2’, ‘cosine’, ‘ip’). Defaults to ‘cosine’.
column_map (Dict) : Column type map to project column names onto langchain
semantics. Must have keys: text, id, vector,
and must be the same size as the number of columns. For example:
.. code-block:: python
{'id': 'text_id',
'vector': 'text_embedding',
'text': 'text_plain',
'metadata': 'metadata_dictionary_in_json',
}
Defaults to identity map.
JSON schema:
{
"title": "MyScaleSettings",
"description": "MyScale Client Configuration\n\nAttribute:\n myscale_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (dict): index build parameter.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,\n supported are ('l2', 'cosine', 'ip'). Defaults to 'cosine'.\n column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be same size to number of columns. For example:\n .. code-block:: python\n\n {\n 'id': 'text_id',\n 'vector': 'text_embedding',\n 'text': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n\n Defaults to identity map.",
"type": "object",
"properties": {
"host": {
"title": "Host",
"default": "localhost",
"env_names": "{'myscale_host'}",
"type": "string"
},
"port": {
"title": "Port", | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
fa678718cb2d-56 | },
"port": {
"title": "Port",
"default": 8443,
"env_names": "{'myscale_port'}",
"type": "integer"
},
"username": {
"title": "Username",
"env_names": "{'myscale_username'}",
"type": "string"
},
"password": {
"title": "Password",
"env_names": "{'myscale_password'}",
"type": "string"
},
"index_type": {
"title": "Index Type",
"default": "IVFFLAT",
"env_names": "{'myscale_index_type'}",
"type": "string"
},
"index_param": {
"title": "Index Param",
"env_names": "{'myscale_index_param'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"column_map": {
"title": "Column Map",
"default": {
"id": "id",
"text": "text",
"vector": "vector",
"metadata": "metadata"
},
"env_names": "{'myscale_column_map'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"database": {
"title": "Database",
"default": "default",
"env_names": "{'myscale_database'}",
"type": "string"
},
"table": {
"title": "Table", | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
fa678718cb2d-57 | },
"table": {
"title": "Table",
"default": "langchain",
"env_names": "{'myscale_table'}",
"type": "string"
},
"metric": {
"title": "Metric",
"default": "cosine",
"env_names": "{'myscale_metric'}",
"type": "string"
}
},
"additionalProperties": false
}
Config
env_file: str = .env
env_file_encoding: str = utf-8
env_prefix: str = myscale_
Fields
column_map (Dict[str, str])
database (str)
host (str)
index_param (Optional[Dict[str, str]])
index_type (str)
metric (str)
password (Optional[str])
port (int)
table (str)
username (Optional[str])
field column_map: Dict[str, str] = {'id': 'id', 'metadata': 'metadata', 'text': 'text', 'vector': 'vector'}#
field database: str = 'default'#
field host: str = 'localhost'#
field index_param: Optional[Dict[str, str]] = None#
field index_type: str = 'IVFFLAT'#
field metric: str = 'cosine'#
field password: Optional[str] = None#
field port: int = 8443#
field table: str = 'langchain'#
field username: Optional[str] = None#
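Since the settings class uses the env_prefix myscale_, fields can also come from the environment; a sketch with illustrative values:

```python
import os
from langchain.vectorstores import MyScaleSettings

# Illustrative values only; pydantic settings match environment variables
# case-insensitively against the myscale_ prefix.
os.environ["MYSCALE_HOST"] = "<your-host>"
os.environ["MYSCALE_PORT"] = "8443"

settings = MyScaleSettings()  # picks up the myscale_* environment variables
print(settings.host, settings.port)
```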
class langchain.vectorstores.OpenSearchVectorSearch(opensearch_url: str, index_name: str, embedding_function: langchain.embeddings.base.Embeddings, **kwargs: Any)[source]#
Wrapper around OpenSearch as a vector database.
Example
from langchain import OpenSearchVectorSearch
opensearch_vector_search = OpenSearchVectorSearch(
"http://localhost:9200",
"embeddings",
embedding_function
)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, bulk_size: int = 500, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
bulk_size – Bulk API request count; Default: 500
Returns
List of ids from adding the texts into the vectorstore.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to “vector_field”.
text_field: Document field the text of the document is stored in. Defaults to “text”.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, bulk_size: int = 500, **kwargs: Any) → langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch[source]#
Construct OpenSearchVectorSearch wrapper from raw documents.
Example
from langchain import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
opensearch_vector_search = OpenSearchVectorSearch.from_texts(
texts,
embeddings,
opensearch_url="http://localhost:9200"
)
OpenSearch by default supports Approximate Search powered by nmslib, faiss
and lucene engines recommended for large datasets. Also supports brute force
search through Script Scoring and Painless Scripting.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to “vector_field”.
text_field: Document field the text of the document is stored in. Defaults to “text”.
Optional Keyword Args for Approximate Search:
engine: “nmslib”, “faiss”, “lucene”; default: “nmslib”
space_type: “l2”, “l1”, “cosinesimil”, “linf”, “innerproduct”; default: “l2”
ef_search: Size of the dynamic list used during k-NN searches. Higher values
lead to more accurate but slower searches; default: 512
ef_construction: Size of the dynamic list used during k-NN graph creation.
Higher values lead to more accurate graph but slower indexing speed;
default: 512
m: Number of bidirectional links created for each new element. Large impact
on memory consumption. Between 2 and 100; default: 16
Keyword Args for Script Scoring or Painless Scripting:
is_appx_search: False
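A sketch combining from_texts with the approximate-search keyword arguments listed above, assuming a local OpenSearch node; the values are illustrative:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

docsearch = OpenSearchVectorSearch.from_texts(
    texts=["hello world", "goodbye world"],
    embedding=OpenAIEmbeddings(),
    opensearch_url="http://localhost:9200",
    engine="faiss",             # ANN engine
    space_type="innerproduct",  # distance metric
    ef_construction=256,        # graph-build accuracy/speed trade-off
    m=48,                       # links per element
)
```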
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
By default supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to
“vector_field”.
text_field: Document field the text of the document is stored in. Defaults
to “text”.
metadata_field: Document field that metadata is stored in. Defaults to
“metadata”.
Can be set to a special value “*” to include the entire document.
Optional Args for Approximate Search:
search_type: “approximate_search”; default: “approximate_search”
boolean_filter: A Boolean filter consists of a Boolean query that
contains a k-NN query and a filter.
subquery_clause: Query clause on the knn vector field; default: “must”
lucene_filter: the Lucene algorithm decides whether to perform an exact
k-NN search with pre-filtering or an approximate search with modified
post-filtering.
Optional Args for Script Scoring Search:
search_type: “script_scoring”; default: “approximate_search”
space_type: “l2”, “l1”, “linf”, “cosinesimil”, “innerproduct”,
“hammingbit”; default: “l2”
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {“match_all”: {}}
Optional Args for Painless Scripting Search:
search_type: “painless_scripting”; default: “approximate_search”
space_type: “l2Squared”, “l1Norm”, “cosineSimilarity”; default: “l2Squared”
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {“match_all”: {}}
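A sketch of a Script Scoring search with a pre-filter, assuming `docsearch` is an OpenSearchVectorSearch instance and the index holds a `text` field:

```python
# Brute-force Script Scoring search restricted by an OpenSearch bool filter.
filter = {"bool": {"filter": {"term": {"text": "smuggling"}}}}
docs = docsearch.similarity_search(
    "What did the president say about Ketanji Brown Jackson",
    search_type="script_scoring",
    space_type="cosinesimil",
    pre_filter=filter,
)
```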
similarity_search_with_score(query: str, k: int = 4, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs and their scores most similar to query.
By default supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents along with their scores most similar to the query.
Optional Args: same as similarity_search
class langchain.vectorstores.Pinecone(index: Any, embedding_function: Callable, text_key: str, namespace: Optional[str] = None)[source]#
Wrapper around Pinecone vector database.
To use, you should have the pinecone-client python package installed.
Example
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
import pinecone
# The environment should be the one specified next to the API key
# in your Pinecone console
pinecone.init(api_key="***", environment="...")
index = pinecone.Index("langchain-demo")
embeddings = OpenAIEmbeddings()
vectorstore = Pinecone(index, embeddings.embed_query, "text")
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, namespace: Optional[str] = None, batch_size: int = 32, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of ids to associate with the texts.
namespace – Optional pinecone namespace to add the texts to.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_existing_index(index_name: str, embedding: langchain.embeddings.base.Embeddings, text_key: str = 'text', namespace: Optional[str] = None) → langchain.vectorstores.pinecone.Pinecone[source]#
Load Pinecone vectorstore from index name.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, batch_size: int = 32, text_key: str = 'text', index_name: Optional[str] = None, namespace: Optional[str] = None, **kwargs: Any) → langchain.vectorstores.pinecone.Pinecone[source]#
Construct Pinecone wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Adds the documents to a provided Pinecone index
This is intended to be a quick way to get started.
Example
from langchain import Pinecone
from langchain.embeddings import OpenAIEmbeddings
import pinecone
# The environment should be the one specified next to the API key
# in your Pinecone console
pinecone.init(api_key="***", environment="...")
embeddings = OpenAIEmbeddings()
pinecone = Pinecone.from_texts(
texts,
embeddings,
index_name="langchain-demo"
)
similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return pinecone documents most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Dictionary of argument(s) to filter on metadata
namespace – Namespace to search in. Default will search in ‘’ namespace.
Returns
List of Documents most similar to the query
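A hedged sketch of a filtered query, assuming `vectorstore` is the Pinecone wrapper from the class example and documents carry a `source` metadata field:

```python
docs = vectorstore.similarity_search(
    "What did the president say?",
    k=4,
    filter={"source": "state_of_the_union"},  # Pinecone metadata filter
)
```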
similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None) → List[Tuple[langchain.schema.Document, float]][source]#
Return pinecone documents most similar to query, along with scores.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Dictionary of argument(s) to filter on metadata
namespace – Namespace to search in. Default will search in ‘’ namespace.
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.Qdrant(client: Any, collection_name: str, embeddings: Optional[langchain.embeddings.base.Embeddings] = None, content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', embedding_function: Optional[Callable] = None)[source]#
Wrapper around Qdrant vector database.
To use you should have the qdrant-client package installed.
Example
from qdrant_client import QdrantClient
from langchain import Qdrant
client = QdrantClient()
collection_name = "MyCollection"
qdrant = Qdrant(client, collection_name, embedding_function)
CONTENT_KEY = 'page_content'#
METADATA_KEY = 'metadata'#
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[Sequence[str]] = None, batch_size: int = 64, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of ids to associate with the texts. Ids have to be
uuid-like strings.
batch_size – How many vectors upload per-request.
Default: 64
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[Sequence[str]] = None, location: Optional[str] = None, url: Optional[str] = None, port: Optional[int] = 6333, grpc_port: int = 6334, prefer_grpc: bool = False, https: Optional[bool] = None, api_key: Optional[str] = None, prefix: Optional[str] = None, timeout: Optional[float] = None, host: Optional[str] = None, path: Optional[str] = None, collection_name: Optional[str] = None, distance_func: str = 'Cosine', content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', batch_size: int = 64, shard_number: Optional[int] = None, replication_factor: Optional[int] = None, write_consistency_factor: Optional[int] = None, on_disk_payload: Optional[bool] = None, hnsw_config: Optional[common_types.HnswConfigDiff] = None, optimizers_config: Optional[common_types.OptimizersConfigDiff] = None, wal_config: Optional[common_types.WalConfigDiff] = None, quantization_config: Optional[common_types.QuantizationConfig] = None, init_from: Optional[common_types.InitFrom] = None, **kwargs: Any) → Qdrant[source]#
Construct Qdrant wrapper from a list of texts.
Parameters
texts – A list of texts to be indexed in Qdrant.
embedding – A subclass of Embeddings, responsible for text vectorization.
metadatas – An optional list of metadata. If provided it has to be of the same
length as a list of texts.
ids – Optional list of ids to associate with the texts. Ids have to be
uuid-like strings.
location – If :memory: - use in-memory Qdrant instance.
If str - use it as a url parameter.
If None - fallback to relying on host and port parameters.
url – either host or str of “Optional[scheme], host, Optional[port],
Optional[prefix]”. Default: None
port – Port of the REST API interface. Default: 6333
grpc_port – Port of the gRPC interface. Default: 6334
prefer_grpc – If true - use gRPC interface whenever possible in custom methods.
Default: False
https – If true - use HTTPS(SSL) protocol. Default: None
api_key – API key for authentication in Qdrant Cloud. Default: None
prefix – If not None - add prefix to the REST URL path.
Example: service/v1 will result in
http://localhost:6333/service/v1/{qdrant-endpoint} for REST API.
Default: None
timeout – Timeout for REST and gRPC API requests.
Default: 5.0 seconds for REST and unlimited for gRPC
host – Host name of Qdrant service. If url and host are None, set to
‘localhost’. Default: None
path – Path in which the vectors will be stored while using local mode.
Default: None
collection_name – Name of the Qdrant collection to be used. If not provided,
it will be created randomly. Default: None
distance_func – Distance function. One of: “Cosine” / “Euclid” / “Dot”.
Default: “Cosine”
content_payload_key – A payload key used to store the content of the document.
Default: “page_content”
metadata_payload_key – A payload key used to store the metadata of the document.
Default: “metadata”
batch_size – How many vectors upload per-request.
Default: 64
shard_number – Number of shards in collection. Default is 1, minimum is 1.
replication_factor – Replication factor for collection. Default is 1, minimum is 1.
Defines how many copies of each shard will be created.
Has effect only in distributed mode.
write_consistency_factor – Write consistency factor for collection. Default is 1, minimum is 1.
Defines how many replicas should apply the operation for us to consider
it successful. Increasing this number will make the collection more
resilient to inconsistencies, but will also make it fail if not enough
replicas are available.
Does not have any performance impact.
Has effect only in distributed mode.
on_disk_payload – If true - point's payload will not be stored in memory.
It will be read from the disk every time it is requested.
This setting saves RAM by (slightly) increasing the response time.
Note: those payload values that are involved in filtering and are
indexed - remain in RAM.
hnsw_config – Params for HNSW index
optimizers_config – Params for optimizer
wal_config – Params for Write-Ahead-Log
quantization_config – Params for quantization, if None - quantization will be disabled
init_from – Use data stored in another collection to initialize this collection
**kwargs – Additional arguments passed directly into REST client initialization
This is a user-friendly interface that:
1. Creates embeddings, one for each text
2. Initializes the Qdrant database as an in-memory docstore by default
(and overridable to a remote docstore)
3. Adds the text embeddings to the Qdrant database
This is intended to be a quick way to get started.
Example
from langchain import Qdrant
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
qdrant = Qdrant.from_texts(texts, embeddings, "localhost")
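Given the location parameter documented above, a sketch of an ephemeral, in-process instance that needs no server; the names are illustrative:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Qdrant

embeddings = OpenAIEmbeddings()
# ":memory:" creates a throwaway in-process Qdrant instance.
qdrant = Qdrant.from_texts(
    ["hello world", "goodbye world"],
    embeddings,
    location=":memory:",
    collection_name="demo",
)
```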
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
similarity_search(query: str, k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) → List[Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Filter by metadata. Defaults to None.
search_params – Additional search params
offset – Offset of the first result to return.
May be used to paginate results.
Note: large offset values may cause performance issues.
score_threshold – Define a minimal score threshold for the result.
If defined, less similar results will not be returned.
Score of the returned result might be higher or smaller than the
threshold depending on the Distance function used.
E.g. for cosine similarity only higher scores will be returned.
consistency – Read consistency of the search. Defines how many replicas should be
queried before returning the result.
Values:
- int - number of replicas to query, values should be present in all
queried replicas
- ‘majority’ - query all replicas, but return values present in the majority of replicas
- ‘quorum’ - query the majority of replicas, return values present in all of them
- ‘all’ - query all replicas, and return values present in all replicas
Returns
List of Documents most similar to the query.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[MetadataFilter] = None, search_params: Optional[common_types.SearchParams] = None, offset: int = 0, score_threshold: Optional[float] = None, consistency: Optional[common_types.ReadConsistency] = None, **kwargs: Any) → List[Tuple[Document, float]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Filter by metadata. Defaults to None.
search_params – Additional search params
offset – Offset of the first result to return.
May be used to paginate results.
Note: large offset values may cause performance issues.
score_threshold – Define a minimal score threshold for the result.
If defined, less similar results will not be returned.
Score of the returned result might be higher or smaller than the
threshold depending on the Distance function used.
E.g. for cosine similarity only higher scores will be returned.
consistency – Read consistency of the search. Defines how many replicas should be
queried before returning the result.
Values:
- int - number of replicas to query, values should be present in all
queried replicas
- ‘majority’ - query all replicas, but return values present in the majority of replicas
- ‘quorum’ - query the majority of replicas, return values present in all of them
- ‘all’ - query all replicas, and return values present in all replicas
Returns
List of documents most similar to the query text and cosine
distance in float for each.
Lower score represents more similarity.
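A sketch of thresholded scoring, assuming `qdrant` is a Qdrant wrapper whose collection uses cosine distance (so higher scores mean more similar, per the score_threshold note above):

```python
results = qdrant.similarity_search_with_score(
    "hello world",
    k=4,
    score_threshold=0.8,  # drop results whose score falls below the threshold
)
for doc, score in results:
    print(score, doc.page_content)
```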
class langchain.vectorstores.Redis(redis_url: str, index_name: str, embedding_function: typing.Callable, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', relevance_score_fn: typing.Optional[typing.Callable[[float], float]] = <function _default_relevance_score>, **kwargs: typing.Any)[source]#
Wrapper around Redis vector database.
To use, you should have the redis python package installed.
Example
from langchain.vectorstores import Redis
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Redis(
redis_url="redis://username:password@localhost:6379"
index_name="my-index",
embedding_function=embeddings.embed_query,
)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, embeddings: Optional[List[List[float]]] = None, keys: Optional[List[str]] = None, batch_size: int = 1000, **kwargs: Any) → List[str][source]#
Add more texts to the vectorstore.
Parameters
texts (Iterable[str]) – Iterable of strings/text to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
Defaults to None.
embeddings (Optional[List[List[float]]], optional) – Optional pre-generated
embeddings. Defaults to None.
keys (Optional[List[str]], optional) – Optional key values to use as ids.
Defaults to None.
batch_size (int, optional) – Batch size to use for writes. Defaults to 1000.
Returns
List of ids added to the vectorstore
Return type
List[str]
as_retriever(**kwargs: Any) → langchain.vectorstores.redis.RedisVectorStoreRetriever[source]#
static drop_index(index_name: str, delete_documents: bool, **kwargs: Any) → bool[source]#
Drop a Redis search index.
Parameters
index_name (str) – Name of the index to drop.
delete_documents (bool) – Whether to drop the associated documents.
Returns
Whether or not the drop was successful.
Return type
bool
classmethod from_existing_index(embedding: langchain.embeddings.base.Embeddings, index_name: str, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', **kwargs: Any) → langchain.vectorstores.redis.Redis[source]#
Connect to an existing Redis index.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name: Optional[str] = None, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', **kwargs: Any) → langchain.vectorstores.redis.Redis[source]#
Create a Redis vectorstore from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in Redis.
Adds the documents to the newly created Redis index.
This is intended to be a quick way to get started.
.. rubric:: Example
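The source leaves this Example empty; a plausible sketch with a placeholder Redis URL:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Redis

# Placeholder URL -- substitute your own Redis instance.
vectorstore = Redis.from_texts(
    texts=["hello world", "goodbye world"],
    embedding=OpenAIEmbeddings(),
    redis_url="redis://localhost:6379",
    index_name="my-index",
)
```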
classmethod from_texts_return_keys(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name: Optional[str] = None, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', distance_metric: Literal['COSINE', 'IP', 'L2'] = 'COSINE', **kwargs: Any) → Tuple[langchain.vectorstores.redis.Redis, List[str]][source]#
Create a Redis vectorstore from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in Redis.
Adds the documents to the newly created Redis index.
This is intended to be a quick way to get started.
.. rubric:: Example
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Returns the most similar indexed documents to the query text.
Parameters
query (str) – The query text for which to find similar documents.
k (int) – The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
similarity_search_limit_score(query: str, k: int = 4, score_threshold: float = 0.2, **kwargs: Any) → List[langchain.schema.Document][source]#
Returns the most similar indexed documents to the query text within the
score_threshold range.
Parameters
query (str) – The query text for which to find similar documents.
k (int) – The number of documents to return. Default is 4.
score_threshold (float) – The minimum matching score required for a document
to be considered a match. Defaults to 0.2.
Because the similarity calculation algorithm is based on cosine similarity,
the smaller the angle, the higher the similarity.
Returns
A list of documents that are most similar to the query text,
including the match score for each document.
Return type
List[Document]
Note
If there are no documents that satisfy the score_threshold value,
an empty list is returned.
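A short sketch, assuming `vectorstore` is a Redis wrapper instance:

```python
# Only documents whose match score clears score_threshold are returned;
# an empty list means nothing satisfied the threshold (see Note above).
docs = vectorstore.similarity_search_limit_score(
    "hello world",
    k=4,
    score_threshold=0.2,
)
```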
similarity_search_with_score(query: str, k: int = 4) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.SKLearnVectorStore(embedding: langchain.embeddings.base.Embeddings, *, persist_path: Optional[str] = None, serializer: Literal['json', 'bson', 'parquet'] = 'json', metric: str = 'cosine', **kwargs: Any)[source]#
A simple in-memory vector store based on the scikit-learn library
NearestNeighbors implementation.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, persist_path: Optional[str] = None, **kwargs: Any) → langchain.vectorstores.sklearn.SKLearnVectorStore[source]#
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
:param query: Text to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.
:param lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
:param embedding: Embedding to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.
:param lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
persist() → None[source]#
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
similarity_search_with_score(query: str, *, k: int = 4, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
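A sketch of building and persisting the store, with an illustrative path and the default JSON serializer:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import SKLearnVectorStore

store = SKLearnVectorStore.from_texts(
    texts=["hello world", "goodbye world"],
    embedding=OpenAIEmbeddings(),
    persist_path="/tmp/sklearn_vectorstore.json",  # illustrative location
    serializer="json",
)
store.persist()  # write the vectors and metadata to persist_path
```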
class langchain.vectorstores.SingleStoreDB(embedding: langchain.embeddings.base.Embeddings, *, table_name: str = 'embeddings', content_field: str = 'content', metadata_field: str = 'metadata', vector_field: str = 'vector', pool_size: int = 5, max_overflow: int = 10, timeout: float = 30, **kwargs: Any)[source]#
This class serves as a Pythonic interface to the SingleStore DB database.
The prerequisite for using this class is the installation of the singlestoredb
Python package.
The SingleStoreDB vectorstore can be created by providing an embedding function and
the relevant parameters for the database connection, connection pool, and
optionally, the names of the table and the fields to use.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, embeddings: Optional[List[List[float]]] = None, **kwargs: Any) → List[str][source]#
Add more texts to the vectorstore.
Parameters
texts (Iterable[str]) – Iterable of strings/text to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
Defaults to None.
embeddings (Optional[List[List[float]]], optional) – Optional pre-generated
embeddings. Defaults to None.
Returns
empty list
Return type
List[str]
as_retriever(**kwargs: Any) → langchain.vectorstores.singlestoredb.SingleStoreDBRetriever[source]#
connection_kwargs#
Create connection pool.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, table_name: str = 'embeddings', content_field: str = 'content', metadata_field: str = 'metadata', vector_field: str = 'vector', pool_size: int = 5, max_overflow: int = 10, timeout: float = 30, **kwargs: Any) → langchain.vectorstores.singlestoredb.SingleStoreDB[source]#
Create a SingleStoreDB vectorstore from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new table for the embeddings in SingleStoreDB.
Adds the documents to the newly created table.
This is intended to be a quick way to get started.
.. rubric:: Example
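The source leaves this Example empty; a plausible sketch, with connection details as placeholders forwarded to the singlestoredb client:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import SingleStoreDB

# Placeholder connection kwargs -- forwarded to the connection pool.
vectorstore = SingleStoreDB.from_texts(
    texts=["hello world", "goodbye world"],
    embedding=OpenAIEmbeddings(),
    table_name="embeddings",
    host="<your-host>",
)
```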
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Returns the most similar indexed documents to the query text.
Uses cosine similarity.
Parameters
query (str) – The query text for which to find similar documents.
k (int) – The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
similarity_search_with_score(query: str, k: int = 4) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query. Uses cosine similarity.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
vector_field#
Pass the rest of the kwargs to the connection.
class langchain.vectorstores.SupabaseVectorStore(client: supabase.client.Client, embedding: Embeddings, table_name: str, query_name: Union[str, None] = None)[source]#
VectorStore for a Supabase postgres database. Assumes you have the pgvector
extension installed and a match_documents (or similar) function. For more details:
https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
You can implement your own match_documents function in order to limit the search
space to a subset of documents based on your own authorization or business logic.
Note that the Supabase Python client does not yet support async operations.
If you’d like to use max_marginal_relevance_search, please review the instructions
below on modifying the match_documents function to return matched embeddings.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict[Any, Any]]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
add_vectors(vectors: List[List[float]], documents: List[langchain.schema.Document]) → List[str][source]#
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, client: Optional[supabase.client.Client] = None, table_name: Optional[str] = 'documents', query_name: Union[str, None] = 'match_documents', **kwargs: Any) → SupabaseVectorStore[source]#
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search requires that query_name returns matched
embeddings alongside the match documents. The following function
demonstrates how to do this:
```sql
CREATE FUNCTION match_documents_embeddings(query_embedding vector(1536),
match_count int)
RETURNS TABLE(
    id bigint,
    content text,
    metadata jsonb,
    embedding vector(1536),
    similarity float)
LANGUAGE plpgsql
AS $$
#variable_conflict use_column
BEGIN
    RETURN QUERY
    SELECT
        id,
        content,
        metadata,
        embedding,
        1 - (docstore.embedding <=> query_embedding) AS similarity
    FROM docstore
    ORDER BY docstore.embedding <=> query_embedding
    LIMIT match_count;
END;
$$;
```
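Once a function like the one above exists, MMR search is a plain method call; a sketch assuming `vector_store` is a SupabaseVectorStore constructed with query_name="match_documents_embeddings":

```python
docs = vector_store.max_marginal_relevance_search(
    "What did the president say?",
    k=4,        # documents to return
    fetch_k=20  # candidates passed to the MMR re-ranking step
)
```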
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
query_name: str#
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_by_vector_returning_embeddings(query: List[float], k: int) → List[Tuple[langchain.schema.Document, float, numpy.ndarray[numpy.float32, Any]]][source]#
similarity_search_by_vector_with_relevance_scores(query: List[float], k: int) → List[Tuple[langchain.schema.Document, float]][source]#
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
table_name: str#
class langchain.vectorstores.Tair(embedding_function: langchain.embeddings.base.Embeddings, url: str, index_name: str, content_key: str = 'content', metadata_key: str = 'metadata', search_params: Optional[dict] = None, **kwargs: Any)[source]#
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Add texts data to an existing index.
create_index_if_not_exist(dim: int, distance_type: str, index_type: str, data_type: str, **kwargs: Any) → bool[source]#
static drop_index(index_name: str = 'langchain', **kwargs: Any) → bool[source]#
Drop an existing index.
Parameters
index_name (str) – Name of the index to drop.
Returns
True if the index is dropped successfully.
Return type
bool
classmethod from_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name: str = 'langchain', content_key: str = 'content', metadata_key: str = 'metadata', **kwargs: Any) → langchain.vectorstores.tair.Tair[source]#
Return VectorStore initialized from documents and embeddings.
classmethod from_existing_index(embedding: langchain.embeddings.base.Embeddings, index_name: str = 'langchain', content_key: str = 'content', metadata_key: str = 'metadata', **kwargs: Any) → langchain.vectorstores.tair.Tair[source]#
Connect to an existing Tair index.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name: str = 'langchain', content_key: str = 'content', metadata_key: str = 'metadata', **kwargs: Any) → langchain.vectorstores.tair.Tair[source]#
Return VectorStore initialized from texts and embeddings.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Returns the most similar indexed documents to the query text.
Parameters
query (str) – The query text for which to find similar documents.
k (int) – The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
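A hedged end-to-end sketch, assuming a reachable Tair instance; the tair_url value is a placeholder passed through kwargs:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Tair

vectorstore = Tair.from_texts(
    texts=["hello world", "goodbye world"],
    embedding=OpenAIEmbeddings(),
    tair_url="redis://<user>:<password>@<host>:6379",  # placeholder
    index_name="langchain",
)
docs = vectorstore.similarity_search("hello", k=2)
```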
class langchain.vectorstores.Tigris(client: TigrisClient, embeddings: Embeddings, index_name: str)[source]#
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of ids for documents.
Ids will be autogenerated if not provided.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, client: Optional[TigrisClient] = None, index_name: Optional[str] = None, **kwargs: Any) → Tigris[source]#
Return VectorStore initialized from texts and embeddings.
property search_index: TigrisVectorStore#
similarity_search(query: str, k: int = 4, filter: Optional[TigrisFilter] = None, **kwargs: Any) → List[Document][source]#
Return docs most similar to query.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[TigrisFilter] = None) → List[Tuple[Document, float]][source]#
Run similarity search with Tigris with distance.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[TigrisFilter]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text with distance in float.
Return type
List[Tuple[Document, float]]
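A minimal usage sketch, assuming an already-configured TigrisClient and OpenAI embeddings (client construction is elided because it depends on your Tigris project settings):
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Tigris

# `client` is assumed to be a configured tigrisdb TigrisClient instance.
store = Tigris(client, OpenAIEmbeddings(), index_name="my_index")
store.add_texts(["hello tigris"], metadatas=[{"lang": "en"}])
results = store.similarity_search_with_score("hello", k=4)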
class langchain.vectorstores.Typesense(typesense_client: Client, embedding: Embeddings, *, typesense_collection_name: Optional[str] = None, text_key: str = 'text')[source]#
Wrapper around Typesense vector search.
To use, you should have the typesense python package installed.
Example
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Typesense
import typesense
node = {
"host": "localhost", # For Typesense Cloud use xxx.a1.typesense.net
"port": "8108", # For Typesense Cloud use 443
"protocol": "http" # For Typesense Cloud use https
}
typesense_client = typesense.Client(
{
"nodes": [node],
"api_key": "<API_KEY>",
"connection_timeout_seconds": 2
}
)
typesense_collection_name = "langchain-memory"
embedding = OpenAIEmbeddings()
vectorstore = Typesense(
    typesense_client,
    embedding,
    typesense_collection_name=typesense_collection_name,
    text_key="text",
)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embedding and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of ids to associate with the texts.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_client_params(embedding: langchain.embeddings.base.Embeddings, *, host: str = 'localhost', port: Union[str, int] = '8108', protocol: str = 'http', typesense_api_key: Optional[str] = None, connection_timeout_seconds: int = 2, **kwargs: Any) → langchain.vectorstores.typesense.Typesense[source]#
Initialize Typesense directly from client parameters.
Example
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Typesense
# Pass in typesense_api_key as kwarg or set env var "TYPESENSE_API_KEY".
vectorstore = Typesense.from_client_params(
OpenAIEmbeddings(),
host="localhost",
port="8108",
protocol="http",
typesense_collection_name="langchain-memory",
)
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, typesense_client: Optional[Client] = None, typesense_client_params: Optional[dict] = None, typesense_collection_name: Optional[str] = None, text_key: str = 'text', **kwargs: Any) → Typesense[source]#
Construct Typesense wrapper from raw text.
similarity_search(query: str, k: int = 4, filter: Optional[str] = '', **kwargs: Any) → List[langchain.schema.Document][source]#
Return typesense documents most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – typesense filter_by expression to filter documents on
Returns
List of Documents most similar to the query
similarity_search_with_score(query: str, k: int = 4, filter: Optional[str] = '') → List[Tuple[langchain.schema.Document, float]][source]#
Return typesense documents most similar to query, along with scores.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – typesense filter_by expression to filter documents on
Returns
List of Documents most similar to the query and score for each
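Continuing the construction example above, a query sketch; the filter_by expression is illustrative and assumes a numeric num_lines field in your schema:
# Plain similarity search.
docs = vectorstore.similarity_search("What did the president say?", k=4)
# With scores and a hypothetical Typesense filter_by expression.
docs_and_scores = vectorstore.similarity_search_with_score(
    "What did the president say?", k=4, filter="num_lines:>5"
)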
class langchain.vectorstores.Vectara(vectara_customer_id: Optional[str] = None, vectara_corpus_id: Optional[str] = None, vectara_api_key: Optional[str] = None)[source]#
Implementation of Vector Store using Vectara (https://vectara.com).
Example
from langchain.vectorstores import Vectara
vectorstore = Vectara(
vectara_customer_id=vectara_customer_id,
vectara_corpus_id=vectara_corpus_id,
vectara_api_key=vectara_api_key
)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
as_retriever(**kwargs: Any) → langchain.vectorstores.vectara.VectaraRetriever[source]#
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.vectara.Vectara[source]#
Construct Vectara wrapper from raw documents.
This is intended to be a quick way to get started.
Example
from langchain import Vectara
vectara = Vectara.from_texts(
texts,
vectara_customer_id=customer_id,
vectara_corpus_id=corpus_id,
vectara_api_key=api_key,
)
similarity_search(query: str, k: int = 5, lambda_val: float = 0.025, filter: Optional[str] = None, n_sentence_context: int = 0, **kwargs: Any) → List[langchain.schema.Document][source]#
Return Vectara documents most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 5.
lambda_val – lexical match parameter for hybrid search.
filter – Filter expression to apply on metadata. For example, a filter can be
"doc.rating > 3.0 and part.lang = 'deu'"; see
https://docs.vectara.com/docs/search-apis/sql/filter-overview for more
details.
n_sentence_context – number of sentences before/after the matching segment
to add
Returns
List of Documents most similar to the query
similarity_search_with_score(query: str, k: int = 5, lambda_val: float = 0.025, filter: Optional[str] = None, n_sentence_context: int = 0, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Return Vectara documents most similar to query, along with scores.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 5.
lambda_val – lexical match parameter for hybrid search.
filter – Filter expression to apply on metadata. For example, a filter can be
"doc.rating > 3.0 and part.lang = 'deu'"; see
https://docs.vectara.com/docs/search-apis/sql/filter-overview
for more details.
n_sentence_context – number of sentences before/after the matching segment
to add
Returns
List of Documents most similar to the query and score for each.
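Putting these parameters together, a hedged usage sketch (the filter expression is illustrative and must match metadata actually present in your corpus):
docs = vectorstore.similarity_search("What is Vectara?", k=5)
docs_and_scores = vectorstore.similarity_search_with_score(
    "What is Vectara?",
    k=5,
    lambda_val=0.025,  # lexical-match weight for hybrid search
    filter="part.lang = 'eng'",  # illustrative metadata filter
    n_sentence_context=2,  # include surrounding sentences for context
)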
class langchain.vectorstores.VectorStore[source]#
Interface for vector stores.
async aadd_documents(documents: List[langchain.schema.Document], **kwargs: Any) → List[str][source]#
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[langchain.schema.Document], **kwargs: Any) → List[str][source]#
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
abstract add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, **kwargs: Any) → langchain.vectorstores.base.VST[source]#
Return VectorStore initialized from documents and embeddings.
async classmethod afrom_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.base.VST[source]#
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) → langchain.vectorstores.base.VectorStoreRetriever[source]#
async asearch(query: str, search_type: str, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
classmethod from_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, **kwargs: Any) → langchain.vectorstores.base.VST[source]#
Return VectorStore initialized from documents and embeddings.
abstract classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.base.VST[source]#
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
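A short sketch of the diversity trade-off, on any concrete VectorStore subclass (vectorstore is assumed to be an instantiated, populated store):
# fetch_k candidates are retrieved first, then k of them are selected to
# balance query relevance against diversity; lambda_mult=1 would be pure
# similarity, lambda_mult=0 maximum diversity.
docs = vectorstore.max_marginal_relevance_search(
    "renewable energy policy", k=4, fetch_k=20, lambda_mult=0.5
)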
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query using specified search type.
abstract similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query – input text
k – Number of Documents to return. Defaults to 4.
**kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 used to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
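A sketch of threshold filtering; score_threshold is the documented optional kwarg, and 0.8 is an arbitrary cutoff:
results = vectorstore.similarity_search_with_relevance_scores(
    "query text", k=4, score_threshold=0.8  # keep docs scoring >= 0.8
)
for doc, score in results:
    print(score, doc.page_content[:80])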
class langchain.vectorstores.Weaviate(client: typing.Any, index_name: str, text_key: str, embedding: typing.Optional[langchain.embeddings.base.Embeddings] = None, attributes: typing.Optional[typing.List[str]] = None, relevance_score_fn: typing.Optional[typing.Callable[[float], float]] = <function _default_score_normalizer>, by_text: bool = True)[source]#
Wrapper around Weaviate vector database.
To use, you should have the weaviate-client python package installed.
Example
import os
import weaviate
from langchain.vectorstores import Weaviate
client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...)
weaviate = Weaviate(client, index_name, text_key)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Upload texts with metadata (properties) to Weaviate.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.weaviate.Weaviate[source]#
Construct Weaviate wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in the Weaviate instance.
Adds the documents to the newly created Weaviate index.
This is intended to be a quick way to get started.
Example
from langchain.vectorstores.weaviate import Weaviate
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
weaviate = Weaviate.from_texts(
texts,
embeddings,
weaviate_url="http://localhost:8080"
)
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_text(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Look up similar documents by embedding vector in Weaviate.
similarity_search_with_score(query: str, k: int = 4, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Return list of documents most similar to the query
text and cosine distance in float for each.
Lower score represents more similarity.
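Continuing the from_texts example above, a scored-search sketch; note that for this class a lower score means more similar:
docs_and_scores = weaviate.similarity_search_with_score(
    "What did the president say?", k=4
)
# Searching by an explicit vector instead of text (embeddings is assumed to
# be the OpenAIEmbeddings instance from the example above).
query_vector = embeddings.embed_query("What did the president say?")
docs = weaviate.similarity_search_by_vector(query_vector, k=4)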
class langchain.vectorstores.Zilliz(embedding_function: langchain.embeddings.base.Embeddings, collection_name: str = 'LangChainCollection', connection_args: Optional[dict[str, Any]] = None, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: Optional[bool] = False)[source]#
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'LangChainCollection', connection_args: dict[str, Any] = {}, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: bool = False, **kwargs: Any) → langchain.vectorstores.zilliz.Zilliz[source]#
Create a Zilliz collection, index it with HNSW, and insert data.
Parameters
texts (List[str]) – Text data.
embedding (Embeddings) – Embedding function.
metadatas (Optional[List[dict]]) – Metadata for each text if it exists.
Defaults to None.
collection_name (str, optional) – Collection name to use. Defaults to
“LangChainCollection”.
connection_args (dict[str, Any], optional) – Connection args to use. Defaults
to DEFAULT_MILVUS_CONNECTION.
consistency_level (str, optional) – Which consistency level to use. Defaults
to “Session”.
index_params (Optional[dict], optional) – Which index_params to use.
Defaults to None.
search_params (Optional[dict], optional) – Which search params to use.
Defaults to None.
drop_old (Optional[bool], optional) – Whether to drop the collection with
that name if it exists. Defaults to False.
Returns
Zilliz Vector Store
Return type
Zilliz
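A construction sketch; the connection_args keys below ("uri", "token") follow the common Zilliz Cloud convention but are assumptions here, as are the placeholder values:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Zilliz

vector_db = Zilliz.from_texts(
    ["hello zilliz"],
    OpenAIEmbeddings(),
    collection_name="LangChainCollection",
    connection_args={
        "uri": "https://<cluster>.zillizcloud.com",  # placeholder endpoint
        "token": "<API_TOKEN>",  # placeholder credential
    },
    drop_old=False,
)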
Agent Toolkits#
Agent toolkits.
pydantic model langchain.agents.agent_toolkits.AzureCognitiveServicesToolkit[source]#
Toolkit for Azure Cognitive Services.
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
pydantic model langchain.agents.agent_toolkits.FileManagementToolkit[source]#
Toolkit for interacting with local files.
field root_dir: Optional[str] = None#
If specified, all file operations are made relative to root_dir.
field selected_tools: Optional[List[str]] = None#
If provided, only provide the selected tools. Defaults to all.
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
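A minimal sketch; the names passed to selected_tools are assumed tool names for the file operations, not confirmed constants:
from langchain.agents.agent_toolkits import FileManagementToolkit

# Scope all file operations to one directory and (optionally) a subset
# of tools; omit selected_tools to get every file tool.
toolkit = FileManagementToolkit(
    root_dir="/tmp/workdir",
    selected_tools=["read_file", "write_file"],  # assumed tool names
)
tools = toolkit.get_tools()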
pydantic model langchain.agents.agent_toolkits.GmailToolkit[source]#
Toolkit for interacting with Gmail.
field api_resource: Resource [Optional]#
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
pydantic model langchain.agents.agent_toolkits.JiraToolkit[source]#
Jira Toolkit.
field tools: List[langchain.tools.base.BaseTool] = []#
classmethod from_jira_api_wrapper(jira_api_wrapper: langchain.utilities.jira.JiraAPIWrapper) → langchain.agents.agent_toolkits.jira.toolkit.JiraToolkit[source]#
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
pydantic model langchain.agents.agent_toolkits.JsonToolkit[source]#
Toolkit for interacting with a JSON spec.
field spec: langchain.tools.json.tool.JsonSpec [Required]#
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
pydantic model langchain.agents.agent_toolkits.NLAToolkit[source]#
Natural Language API Toolkit Definition.
field nla_tools: Sequence[langchain.agents.agent_toolkits.nla.tool.NLATool] [Required]#
List of API Endpoint Tools.
classmethod from_llm_and_ai_plugin(llm: langchain.base_language.BaseLanguageModel, ai_plugin: langchain.tools.plugin.AIPlugin, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit[source]#
Instantiate the toolkit from an AIPlugin.
classmethod from_llm_and_ai_plugin_url(llm: langchain.base_language.BaseLanguageModel, ai_plugin_url: str, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit[source]#
Instantiate the toolkit from an AIPlugin URL.
classmethod from_llm_and_spec(llm: langchain.base_language.BaseLanguageModel, spec: langchain.tools.openapi.utils.openapi_utils.OpenAPISpec, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit[source]#
Instantiate the toolkit by creating tools for each operation.
classmethod from_llm_and_url(llm: langchain.base_language.BaseLanguageModel, open_api_url: str, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit[source]#
Instantiate the toolkit from an OpenAPI Spec URL
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools for all the API operations.
pydantic model langchain.agents.agent_toolkits.OpenAPIToolkit[source]#
Toolkit for interacting with an OpenAPI API.
field json_agent: langchain.agents.agent.AgentExecutor [Required]#
field requests_wrapper: langchain.requests.TextRequestsWrapper [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, json_spec: langchain.tools.json.tool.JsonSpec, requests_wrapper: langchain.requests.TextRequestsWrapper, **kwargs: Any) → langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit[source]#
Create json agent from llm, then initialize.
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
pydantic model langchain.agents.agent_toolkits.PlayWrightBrowserToolkit[source]#
Toolkit for web browser tools.
field async_browser: Optional['AsyncBrowser'] = None#
field sync_browser: Optional['SyncBrowser'] = None#
classmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) → PlayWrightBrowserToolkit[source]#
Instantiate the toolkit.
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
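A construction sketch using Playwright's own sync API to obtain the browser handle (the launch options are illustrative):
from playwright.sync_api import sync_playwright
from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit

playwright = sync_playwright().start()
browser = playwright.chromium.launch(headless=True)  # illustrative options
toolkit = PlayWrightBrowserToolkit.from_browser(sync_browser=browser)
tools = toolkit.get_tools()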
pydantic model langchain.agents.agent_toolkits.PowerBIToolkit[source]#
Toolkit for interacting with a PowerBI dataset.
field callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None#
field examples: Optional[str] = None#
field llm: langchain.base_language.BaseLanguageModel [Required]#
field max_iterations: int = 5#
field powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]#
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
pydantic model langchain.agents.agent_toolkits.SQLDatabaseToolkit[source]#
Toolkit for interacting with SQL databases.
field db: langchain.sql_database.SQLDatabase [Required]#
field llm: langchain.base_language.BaseLanguageModel [Required]#
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
property dialect: str#
Return string representation of dialect to use.
pydantic model langchain.agents.agent_toolkits.SparkSQLToolkit[source]#
Toolkit for interacting with Spark SQL.
field db: langchain.utilities.spark_sql.SparkSQL [Required]#
field llm: langchain.base_language.BaseLanguageModel [Required]#
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
pydantic model langchain.agents.agent_toolkits.VectorStoreInfo[source]#
Information about a vectorstore.
field description: str [Required]#
field name: str [Required]#
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
pydantic model langchain.agents.agent_toolkits.VectorStoreRouterToolkit[source]#
Toolkit for routing between vectorstores.
field llm: langchain.base_language.BaseLanguageModel [Optional]#
field vectorstores: List[langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo] [Required]#
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
pydantic model langchain.agents.agent_toolkits.VectorStoreToolkit[source]#
Toolkit for interacting with a vector store.
field llm: langchain.base_language.BaseLanguageModel [Optional]#
field vectorstore_info: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo [Required]#
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
pydantic model langchain.agents.agent_toolkits.ZapierToolkit[source]#
Zapier Toolkit.
field tools: List[langchain.tools.base.BaseTool] = []#
classmethod from_zapier_nla_wrapper(zapier_nla_wrapper: langchain.utilities.zapier.ZapierNLAWrapper) → langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit[source]#
Create a toolkit from a ZapierNLAWrapper.
get_tools() → List[langchain.tools.base.BaseTool][source]#
Get the tools in the toolkit.
langchain.agents.agent_toolkits.create_csv_agent(llm: langchain.base_language.BaseLanguageModel, path: Union[str, List[str]], pandas_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]#
Create a csv agent by loading the file into a dataframe and using the pandas agent.
langchain.agents.agent_toolkits.create_json_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.json.toolkit.JsonToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\n', suffix: str = 'Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a json agent from an LLM and tools.
langchain.agents.agent_toolkits.create_openapi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = "You are an agent designed to answer questions by making web requests to an API given the openapi spec.\n\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\nOnly use information provided by the tools to construct your response.\n\nFirst, find the base URL needed to make the request.\n\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\n\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\n\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\n", suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should explore the spec to find the base url for the API.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, return_intermediate_steps: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct an OpenAPI agent from an LLM and tools.
langchain.agents.agent_toolkits.create_pandas_dataframe_agent(llm: langchain.base_language.BaseLanguageModel, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: Optional[str] = None, suffix: Optional[str] = None, input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, include_df_in_prompt: Optional[bool] = True, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a pandas agent from an LLM and dataframe.
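A hedged usage sketch; the CSV path, model choice and question are placeholders:
import pandas as pd
from langchain.llms import OpenAI
from langchain.agents.agent_toolkits import create_pandas_dataframe_agent

df = pd.read_csv("titanic.csv")  # placeholder dataset
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("How many rows are there?")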
langchain.agents.agent_toolkits.create_pbi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to help users interact with a PowerBI Dataset.\n\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readible format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', examples: Optional[str] = None, input_variables: Optional[List[str]] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a pbi agent from an LLM and tools.
langchain.agents.agent_toolkits.create_pbi_chat_agent(llm: langchain.chat_models.base.BaseChatModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Assistant is a large language model built to help users interact with a PowerBI Dataset.\n\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readible format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}\n", examples: Optional[str] = None, input_variables: Optional[List[str]] = None, memory: Optional[langchain.memory.chat_memory.BaseChatMemory] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a pbi agent from a Chat LLM and tools.
If you supply only a toolkit and no powerbi dataset, the same LLM is used for both.
langchain.agents.agent_toolkits.create_python_agent(llm: langchain.base_language.BaseLanguageModel, tool: langchain.tools.python.tool.PythonREPLTool, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, verbose: bool = False, prefix: str = 'You are an agent designed to write and execute python code to answer questions.\nYou have access to a python REPL, which you can use to execute python code.\nIf you get an error, debug your code and try again.\nOnly use the output of your code to answer the question. \nYou might know the answer without running any code, but you should still run the code to get the answer.\nIf it does not seem like you can write code to answer the question, just return "I don\'t know" as the answer.\n', agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a python agent from an LLM and tool.
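A minimal sketch using the PythonREPLTool named in the signature (the model choice is a placeholder):
from langchain.llms import OpenAI
from langchain.agents.agent_toolkits import create_python_agent
from langchain.tools.python.tool import PythonREPLTool

agent = create_python_agent(OpenAI(temperature=0), tool=PythonREPLTool())
agent.run("What is the 10th Fibonacci number?")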
langchain.agents.agent_toolkits.create_spark_dataframe_agent(llm: langchain.llms.base.BaseLLM, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = '\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:', suffix: str = '\nThis is the result of `print(df.first())`:\n{df}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}', input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a spark agent from an LLM and dataframe.
langchain.agents.agent_toolkits.create_spark_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with Spark SQL.\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a Spark SQL agent from an LLM and tools.
langchain.agents.agent_toolkits.create_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query. Then I should query the schema of the most relevant tables.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a sql agent from an LLM and tools.
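Wiring the SQLDatabaseToolkit documented above into create_sql_agent, as a sketch; the SQLite URI is a placeholder and SQLDatabase.from_uri is assumed from langchain.sql_database:
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase
from langchain.agents.agent_toolkits import SQLDatabaseToolkit, create_sql_agent

db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder database
llm = OpenAI(temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
agent.run("How many users signed up this month?")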
langchain.agents.agent_toolkits.create_vectorstore_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a vectorstore agent from an LLM and tools.
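A sketch combining VectorStoreInfo, VectorStoreToolkit and create_vectorstore_agent; vectorstore is assumed to be any populated VectorStore instance:
from langchain.llms import OpenAI
from langchain.agents.agent_toolkits import (
    VectorStoreInfo,
    VectorStoreToolkit,
    create_vectorstore_agent,
)

info = VectorStoreInfo(
    name="state_of_union",
    description="the most recent state of the union address",
    vectorstore=vectorstore,  # assumed populated VectorStore
)
toolkit = VectorStoreToolkit(vectorstore_info=info)
agent = create_vectorstore_agent(OpenAI(temperature=0), toolkit, verbose=True)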
langchain.agents.agent_toolkits.create_vectorstore_router_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a vectorstore router agent from an LLM and tools.
SerpAPI#
For backwards compatibility.
pydantic model langchain.serpapi.SerpAPIWrapper[source]#
Wrapper around SerpAPI.
To use, you should have the google-search-results python package installed,
and the environment variable SERPAPI_API_KEY set with your API key, or pass
serpapi_api_key as a named parameter to the constructor.
Example
from langchain import SerpAPIWrapper
serpapi = SerpAPIWrapper()
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}#
field serpapi_api_key: Optional[str] = None#
async aresults(query: str) → dict[source]#
Use aiohttp to run query through SerpAPI and return the results async.
async arun(query: str, **kwargs: Any) → str[source]#
Run query through SerpAPI and parse result async.
get_params(query: str) → Dict[str, str][source]#
Get parameters for SerpAPI.
results(query: str) → dict[source]#
Run query through SerpAPI and return the raw result.
run(query: str, **kwargs: Any) → str[source]#
Run query through SerpAPI and parse result.
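Extending the example above, a usage sketch (requires SERPAPI_API_KEY in the environment; the query is arbitrary):
serpapi = SerpAPIWrapper()
answer = serpapi.run("What is the capital of France?")  # parsed string
raw = serpapi.results("What is the capital of France?")  # raw result dict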
Retrievers#
pydantic model langchain.retrievers.ArxivRetriever[source]#
It is effectively a wrapper around ArxivAPIWrapper.
It wraps load() as get_relevant_documents().
It uses all ArxivAPIWrapper arguments without any change.
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
class langchain.retrievers.AwsKendraIndexRetriever(kclient: Any, kendraindex: str, k: int = 3, languagecode: str = 'en')[source]#
Wrapper around AWS Kendra.
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Run search on Kendra index and get top k documents.
Example: docs = get_relevant_documents('This is my query')
k: int#
Number of documents to query for.
kclient: Any#
boto3 client for Kendra.
kendraindex: str#
Kendra index id
languagecode: str#
Language code used for querying.
pydantic model langchain.retrievers.AzureCognitiveSearchRetriever[source]#
Wrapper around Azure Cognitive Search.
field aiosession: Optional[aiohttp.client.ClientSession] = None#
ClientSession, in case we want to reuse connection for better performance.
field api_key: str = ''#
API Key. Both Admin and Query keys work, but for reading data it’s
recommended to use a Query key.
field api_version: str = '2020-06-30'#
API version
field content_key: str = 'content'#
Key in a retrieved result to set as the Document page_content.
field index_name: str = ''#
Name of Index inside Azure Cognitive Search service
field service_name: str = ''#
Name of Azure Cognitive Search service
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
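A construction sketch from the documented fields; the service, index and key values are placeholders:
from langchain.retrievers import AzureCognitiveSearchRetriever

retriever = AzureCognitiveSearchRetriever(
    service_name="my-search-service",  # placeholder
    index_name="my-index",  # placeholder
    api_key="<QUERY_KEY>",  # a Query key is recommended for reads
    content_key="content",
)
docs = retriever.get_relevant_documents("what is langchain?")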
pydantic model langchain.retrievers.ChatGPTPluginRetriever[source]#
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field bearer_token: str [Required]#
field filter: Optional[dict] = None#
field top_k: int = 3#
field url: str [Required]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.ContextualCompressionRetriever[source]#
Retriever that wraps a base retriever and compresses the results.
field base_compressor: langchain.retrievers.document_compressors.base.BaseDocumentCompressor [Required]#
Compressor for compressing retrieved documents.
field base_retriever: langchain.schema.BaseRetriever [Required]#
Base Retriever to use for getting relevant documents.
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
Sequence of relevant documents
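A wiring sketch; LLMChainExtractor is one possible BaseDocumentCompressor and its import path is an assumption here, as is the pre-existing vectorstore:
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor

compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))
retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=vectorstore.as_retriever(),  # assumed existing store
)
docs = retriever.get_relevant_documents("query")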
class langchain.retrievers.DataberryRetriever(datastore_url: str, top_k: Optional[int] = None, api_key: Optional[str] = None)[source]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
api_key: Optional[str]#
datastore_url: str#
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
top_k: Optional[int]# | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
class langchain.retrievers.ElasticSearchBM25Retriever(client: Any, index_name: str)[source]#
Wrapper around Elasticsearch using BM25 as a retrieval method.
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticVectorSearch constructor as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the “Deployments” page.
To obtain your Elastic Cloud password for the default “elastic” user:
Log in to the Elastic Cloud console at https://cloud.elastic.co
Go to “Security” > “Users”
Locate the “elastic” user and click “Edit”
Click “Reset password”
Follow the prompts to reset the password
The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
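Example (a sketch; the URL and index name are placeholders):
from langchain.retrievers import ElasticSearchBM25Retriever
retriever = ElasticSearchBM25Retriever.create(
    elasticsearch_url="https://elastic:password@cluster_id.region_id.gcp.cloud.es.io:9243",
    index_name="my-bm25-index",  # placeholder index name
)
retriever.add_texts(["foo", "bar", "world"])
docs = retriever.get_relevant_documents("foo")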
add_texts(texts: Iterable[str], refresh_indices: bool = True) → List[str][source]#
Run more texts through the embeddings and add to the retriever.
Parameters
texts – Iterable of strings to add to the retriever.
refresh_indices – Whether to refresh the Elasticsearch indices.
Returns
List of ids from adding the texts into the retriever.
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
classmethod create(elasticsearch_url: str, index_name: str, k1: float = 2.0, b: float = 0.75) → langchain.retrievers.elastic_search_bm25.ElasticSearchBM25Retriever[source]#
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.KNNRetriever[source]#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
field index: Any = None#
field k: int = 4#
field relevancy_threshold: Optional[float] = None#
field texts: List[str] [Required]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
classmethod from_texts(texts: List[str], embeddings: langchain.embeddings.base.Embeddings, **kwargs: Any) → langchain.retrievers.knn.KNNRetriever[source]#
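Example (a minimal sketch assuming an OpenAI key is configured):
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import KNNRetriever
retriever = KNNRetriever.from_texts(["foo", "bar", "world"], OpenAIEmbeddings())
docs = retriever.get_relevant_documents("foo")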
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
class langchain.retrievers.MergerRetriever(retrievers: List[langchain.schema.BaseRetriever])[source]#
This class merges the results of multiple retrievers.
Parameters
retrievers – A list of retrievers to merge. | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
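Example (a sketch assuming retriever_a and retriever_b are existing BaseRetriever instances):
from langchain.retrievers import MergerRetriever
merger = MergerRetriever(retrievers=[retriever_a, retriever_b])
docs = merger.get_relevant_documents("my query")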
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Asynchronously get the relevant documents for a given query.
Parameters
query – The query to search for.
Returns
A list of relevant documents.
async amerge_documents(query: str) → List[langchain.schema.Document][source]#
Asynchronously merge the results of the retrievers.
Parameters
query – The query to search for.
Returns
A list of merged documents.
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get the relevant documents for a given query.
Parameters
query – The query to search for.
Returns
A list of relevant documents.
merge_documents(query: str) → List[langchain.schema.Document][source]#
Merge the results of the retrievers.
Parameters
query – The query to search for.
Returns
A list of merged documents.
class langchain.retrievers.MetalRetriever(client: Any, params: Optional[dict] = None)[source]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.PineconeHybridSearchRetriever[source]#
field alpha: float = 0.5#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
field index: Any = None# | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
field sparse_encoder: Any = None#
field top_k: int = 4#
add_texts(texts: List[str], ids: Optional[List[str]] = None, metadatas: Optional[List[dict]] = None) → None[source]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.PubMedRetriever[source]#
Effectively a wrapper around PubMedAPIWrapper: it exposes load() as
get_relevant_documents() and accepts all PubMedAPIWrapper arguments unchanged.
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.RemoteLangChainRetriever[source]#
field headers: Optional[dict] = None#
field input_key: str = 'message'#
field metadata_key: str = 'metadata'#
field page_content_key: str = 'page_content'#
field response_key: str = 'response'#
field url: str [Required]# | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.SVMRetriever[source]#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
field index: Any = None#
field k: int = 4#
field relevancy_threshold: Optional[float] = None#
field texts: List[str] [Required]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
classmethod from_texts(texts: List[str], embeddings: langchain.embeddings.base.Embeddings, **kwargs: Any) → langchain.retrievers.svm.SVMRetriever[source]#
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.SelfQueryRetriever[source]#
Retriever that wraps around a vector store and uses an LLM to generate
the vector store queries.
field llm_chain: langchain.chains.llm.LLMChain [Required]#
The LLMChain for generating the vector store queries. | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
field search_kwargs: dict [Optional]#
Keyword arguments to pass in to the vector store search.
field search_type: str = 'similarity'#
The search type to perform on the vector store.
field structured_query_translator: langchain.chains.query_constructor.ir.Visitor [Required]#
Translator for turning internal query language into vectorstore search params.
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
The underlying vector store from which documents will be retrieved.
field verbose: bool = False#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, document_contents: str, metadata_field_info: List[langchain.chains.query_constructor.schema.AttributeInfo], structured_query_translator: Optional[langchain.chains.query_constructor.ir.Visitor] = None, chain_kwargs: Optional[Dict] = None, enable_limit: bool = False, **kwargs: Any) → langchain.retrievers.self_query.base.SelfQueryRetriever[source]#
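Example (a sketch assuming vectorstore is an existing, supported vector store such as Chroma or Pinecone, and that the stored documents carry a year metadata field; the import path for AttributeInfo follows the signature above):
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.llms import OpenAI
from langchain.retrievers import SelfQueryRetriever
metadata_field_info = [
    AttributeInfo(name="year", description="The year the film was released", type="integer"),
]
retriever = SelfQueryRetriever.from_llm(
    llm=OpenAI(temperature=0),
    vectorstore=vectorstore,
    document_contents="Brief summaries of movies",
    metadata_field_info=metadata_field_info,
)
# the LLM translates the natural-language filter into a structured query
docs = retriever.get_relevant_documents("movies released after 1990")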
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.TFIDFRetriever[source]#
field docs: List[langchain.schema.Document] [Required]#
field k: int = 4#
field tfidf_array: Any = None#
field vectorizer: Any = None# | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
classmethod from_documents(documents: Iterable[langchain.schema.Document], *, tfidf_params: Optional[Dict[str, Any]] = None, **kwargs: Any) → langchain.retrievers.tfidf.TFIDFRetriever[source]#
classmethod from_texts(texts: Iterable[str], metadatas: Optional[Iterable[dict]] = None, tfidf_params: Optional[Dict[str, Any]] = None, **kwargs: Any) → langchain.retrievers.tfidf.TFIDFRetriever[source]#
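Example (a minimal sketch; requires scikit-learn):
from langchain.retrievers import TFIDFRetriever
retriever = TFIDFRetriever.from_texts(["foo", "bar", "world"])
docs = retriever.get_relevant_documents("foo")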
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.TimeWeightedVectorStoreRetriever[source]#
Retriever combining embedding similarity with recency.
field decay_rate: float = 0.01#
The exponential decay factor used as (1.0-decay_rate)**(hrs_passed).
field default_salience: Optional[float] = None#
The salience to assign memories not retrieved from the vector store.
None assigns no salience to documents not fetched from the vector store.
field k: int = 4#
The maximum number of documents to retrieve in a given call.
field memory_stream: List[langchain.schema.Document] [Optional]#
The memory_stream of documents to search through.
field other_score_keys: List[str] = []# | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
Other keys in the metadata to factor into the score, e.g. ‘importance’.
field search_kwargs: dict [Optional]#
Keyword arguments to pass to the vectorstore similarity search.
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
The vectorstore to store documents and determine salience.
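Example (a sketch assuming the faiss package and an OpenAI key; 1536 is the dimensionality of OpenAI embeddings). With decay_rate=0.01, a document last accessed 24 hours ago contributes a recency score of (1.0 - 0.01) ** 24 ≈ 0.79 before similarity and any other_score_keys are added:
import faiss
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.vectorstores import FAISS
embeddings = OpenAIEmbeddings()
# empty FAISS index backed by an in-memory docstore
vectorstore = FAISS(embeddings.embed_query, faiss.IndexFlatL2(1536), InMemoryDocstore({}), {})
retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=0.01, k=4)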
async aadd_documents(documents: List[langchain.schema.Document], **kwargs: Any) → List[str][source]#
Add documents to vectorstore.
add_documents(documents: List[langchain.schema.Document], **kwargs: Any) → List[str][source]#
Add documents to vectorstore.
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Return documents that are relevant to the query.
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Return documents that are relevant to the query.
get_salient_docs(query: str) → Dict[int, Tuple[langchain.schema.Document, float]][source]#
Return documents that are salient to the query.
class langchain.retrievers.VespaRetriever(app: Vespa, body: Dict, content_field: str, metadata_fields: Optional[Sequence[str]] = None)[source]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
classmethod from_params(url: str, content_field: str, *, k: Optional[int] = None, metadata_fields: Union[Sequence[str], Literal['*']] = (), sources: Optional[Union[Sequence[str], Literal['*']]] = None, _filter: Optional[str] = None, yql: Optional[str] = None, **kwargs: Any) → langchain.retrievers.vespa_retriever.VespaRetriever[source]#
Instantiate retriever from params.
Parameters
url (str) – Vespa app URL.
content_field (str) – Field in results to return as Document page_content.
k (Optional[int]) – Number of Documents to return. Defaults to None.
metadata_fields (Sequence[str] or "*") – Fields in results to include in
document metadata. Defaults to empty tuple ().
sources (Sequence[str] or "*" or None) – Sources to retrieve
from. Defaults to None.
_filter (Optional[str]) – Document filter condition expressed in YQL.
Defaults to None.
yql (Optional[str]) – Full YQL query to be used. Should not be specified
if _filter or sources are specified. Defaults to None.
kwargs (Any) – Keyword arguments added to query body.
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents_with_filter(query: str, *, _filter: Optional[str] = None) → List[langchain.schema.Document][source]# | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
class langchain.retrievers.WeaviateHybridSearchRetriever(client: Any, index_name: str, text_key: str, alpha: float = 0.5, k: int = 4, attributes: Optional[List[str]] = None, create_schema_if_missing: bool = True)[source]#
class Config[source]#
Configuration for this pydantic object.
arbitrary_types_allowed = True#
extra = 'forbid'#
add_documents(docs: List[langchain.schema.Document], **kwargs: Any) → List[str][source]#
Upload documents to Weaviate.
async aget_relevant_documents(query: str, where_filter: Optional[Dict[str, object]] = None) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str, where_filter: Optional[Dict[str, object]] = None) → List[langchain.schema.Document][source]#
Look up similar documents in Weaviate.
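Example (a sketch; the Weaviate URL and index name are placeholders):
import weaviate
from langchain.retrievers import WeaviateHybridSearchRetriever
client = weaviate.Client(url="http://localhost:8080")  # placeholder URL
retriever = WeaviateHybridSearchRetriever(
    client=client, index_name="LangChain", text_key="text"
)
docs = retriever.get_relevant_documents("my query")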
pydantic model langchain.retrievers.WikipediaRetriever[source]#
Effectively a wrapper around WikipediaAPIWrapper: it exposes load() as
get_relevant_documents() and accepts all WikipediaAPIWrapper arguments unchanged.
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents | https://python.langchain.com/en/latest/reference/modules/retrievers.html |
class langchain.retrievers.ZepRetriever(session_id: str, url: str, top_k: Optional[int] = None)[source]#
A Retriever implementation for the Zep long-term memory store. Search your
user’s long-term chat history with Zep.
Note: You will need to provide the user’s session_id to use this retriever.
More on Zep:
Zep provides long-term conversation storage for LLM apps. The server stores,
summarizes, embeds, indexes, and enriches conversational AI chat
histories, and exposes them via simple, low-latency APIs.
For server installation instructions, see:
https://getzep.github.io/deployment/quickstart/
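Example (a sketch; the session id and server URL are placeholders):
from langchain.retrievers import ZepRetriever
retriever = ZepRetriever(
    session_id="user-123-session",  # placeholder session id
    url="http://localhost:8000",    # placeholder Zep server URL
    top_k=5,
)
docs = retriever.get_relevant_documents("What did we discuss about pricing?")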
async aget_relevant_documents(query: str, metadata: Optional[Dict] = None) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str, metadata: Optional[Dict] = None) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
Text Splitter#
Functionality for splitting text.
class langchain.text_splitter.CharacterTextSplitter(separator: str = '\n\n', **kwargs: Any)[source]#
Implementation of splitting text that looks at characters.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
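Example (a minimal sketch assuming long_text is an existing string):
from langchain.text_splitter import CharacterTextSplitter
splitter = CharacterTextSplitter(separator="\n\n", chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_text(long_text)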
class langchain.text_splitter.Language(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]#
CPP = 'cpp'#
GO = 'go'#
HTML = 'html'#
JAVA = 'java'#
JS = 'js'#
LATEX = 'latex'#
MARKDOWN = 'markdown'#
PHP = 'php'#
PROTO = 'proto'#
PYTHON = 'python'#
RST = 'rst'#
RUBY = 'ruby'#
RUST = 'rust'#
SCALA = 'scala'#
SWIFT = 'swift'#
class langchain.text_splitter.LatexTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Latex-formatted layout elements.
class langchain.text_splitter.MarkdownTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Markdown-formatted headings.
class langchain.text_splitter.NLTKTextSplitter(separator: str = '\n\n', **kwargs: Any)[source]#
Implementation of splitting text that looks at sentences using NLTK.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.PythonCodeTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Python syntax. | https://python.langchain.com/en/latest/reference/modules/text_splitter.html |
class langchain.text_splitter.RecursiveCharacterTextSplitter(separators: Optional[List[str]] = None, keep_separator: bool = True, **kwargs: Any)[source]#
Implementation of splitting text that looks at characters.
Recursively tries to split by different characters to find one
that works.
classmethod from_language(language: langchain.text_splitter.Language, **kwargs: Any) → langchain.text_splitter.RecursiveCharacterTextSplitter[source]#
static get_separators_for_language(language: langchain.text_splitter.Language) → List[str][source]#
split_text(text: str) → List[str][source]#
Split text into multiple components.
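Example (a sketch assuming python_source is a string of Python code; from_language picks language-aware separators such as class and def boundaries):
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter
python_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=400, chunk_overlap=0
)
docs = python_splitter.create_documents([python_source])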
class langchain.text_splitter.SentenceTransformersTokenTextSplitter(chunk_overlap: int = 50, model_name: str = 'sentence-transformers/all-mpnet-base-v2', tokens_per_chunk: Optional[int] = None, **kwargs: Any)[source]#
Implementation of splitting text that looks at tokens.
count_tokens(*, text: str) → int[source]#
split_text(text: str) → List[str][source]#
Split text into multiple components.
class langchain.text_splitter.SpacyTextSplitter(separator: str = '\n\n', pipeline: str = 'en_core_web_sm', **kwargs: Any)[source]#
Implementation of splitting text that looks at sentences using Spacy.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.TextSplitter(chunk_size: int = 4000, chunk_overlap: int = 200, length_function: typing.Callable[[str], int] = <built-in function len>, keep_separator: bool = False, add_start_index: bool = False)[source]#
Interface for splitting text into chunks. | https://python.langchain.com/en/latest/reference/modules/text_splitter.html |
async atransform_documents(documents: Sequence[langchain.schema.Document], **kwargs: Any) → Sequence[langchain.schema.Document][source]#
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[langchain.schema.Document][source]#
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → langchain.text_splitter.TextSplitter[source]#
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → langchain.text_splitter.TS[source]#
Text splitter that uses tiktoken encoder to count length.
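Example (a sketch; requires the tiktoken package, and chunk sizes are then measured in tokens rather than characters):
from langchain.text_splitter import CharacterTextSplitter
splitter = CharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="gpt2", chunk_size=500, chunk_overlap=50
)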
split_documents(documents: Iterable[langchain.schema.Document]) → List[langchain.schema.Document][source]#
Split documents.
abstract split_text(text: str) → List[str][source]#
Split text into multiple components.
transform_documents(documents: Sequence[langchain.schema.Document], **kwargs: Any) → Sequence[langchain.schema.Document][source]#
Transform sequence of documents by splitting them.
class langchain.text_splitter.TokenTextSplitter(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any)[source]#
Implementation of splitting text that looks at tokens. | https://python.langchain.com/en/latest/reference/modules/text_splitter.html |
split_text(text: str) → List[str][source]#
Split text into multiple components.
class langchain.text_splitter.Tokenizer(chunk_overlap: 'int', tokens_per_chunk: 'int', decode: 'Callable[[list[int]], str]', encode: 'Callable[[str], List[int]]')[source]#
chunk_overlap: int#
decode: Callable[[list[int]], str]#
encode: Callable[[str], List[int]]#
tokens_per_chunk: int#
langchain.text_splitter.split_text_on_tokens(*, text: str, tokenizer: langchain.text_splitter.Tokenizer) → List[str][source]#
Split incoming text and return chunks.
PromptTemplates#
Prompt template classes.
pydantic model langchain.prompts.BaseChatPromptTemplate[source]#
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
abstract format_messages(**kwargs: Any) → List[langchain.schema.BaseMessage][source]#
Format kwargs into a list of messages.
format_prompt(**kwargs: Any) → langchain.schema.PromptValue[source]#
Create Chat Messages.
pydantic model langchain.prompts.BasePromptTemplate[source]#
Base class for all prompt templates, returning a prompt.
field input_variables: List[str] [Required]#
A list of the names of the variables the prompt template expects.
field output_parser: Optional[langchain.schema.BaseOutputParser] = None#
How to parse the output of calling an LLM on this formatted prompt.
dict(**kwargs: Any) → Dict[source]#
Return dictionary representation of prompt.
abstract format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
abstract format_prompt(**kwargs: Any) → langchain.schema.PromptValue[source]#
Create Chat Messages.
partial(**kwargs: Union[str, Callable[[], str]]) → langchain.prompts.base.BasePromptTemplate[source]#
Return a partial of the prompt template.
save(file_path: Union[pathlib.Path, str]) → None[source]#
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
prompt.save(file_path="path/prompt.yaml")
pydantic model langchain.prompts.ChatPromptTemplate[source]#
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
format_messages(**kwargs: Any) → List[langchain.schema.BaseMessage][source]#
Format kwargs into a list of messages.
partial(**kwargs: Union[str, Callable[[], str]]) → langchain.prompts.base.BasePromptTemplate[source]#
Return a partial of the prompt template.
save(file_path: Union[pathlib.Path, str]) → None[source]#
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
prompt.save(file_path="path/prompt.yaml")
pydantic model langchain.prompts.FewShotPromptTemplate[source]#
Prompt template that contains few shot examples.
field example_prompt: langchain.prompts.prompt.PromptTemplate [Required]#
PromptTemplate used to format an individual example.
field example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None#
ExampleSelector to choose the examples to format into the prompt.
Either this or examples should be provided.
field example_separator: str = '\n\n'#
String separator used to join the prefix, the examples, and suffix.
field examples: Optional[List[dict]] = None#
Examples to format into the prompt.
Either this or example_selector should be provided.
field input_variables: List[str] [Required]#
A list of the names of the variables the prompt template expects.
field prefix: str = ''# | https://python.langchain.com/en/latest/reference/modules/prompts.html |
A prompt template string to put before the examples.
field suffix: str [Required]#
A prompt template string to put after the examples.
field template_format: str = 'f-string'#
The format of the prompt template. Options are: ‘f-string’, ‘jinja2’.
field validate_template: bool = True#
Whether or not to try validating the template.
dict(**kwargs: Any) → Dict[source]#
Return a dictionary of the prompt.
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
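A fuller usage sketch for FewShotPromptTemplate (the examples and template strings are illustrative):
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate(
    input_variables=["word", "antonym"],
    template="Word: {word}\nAntonym: {antonym}",
)
examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
]
prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input.",
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"],
)
print(prompt.format(input="big"))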
pydantic model langchain.prompts.FewShotPromptWithTemplates[source]#
Prompt template that contains few shot examples.
field example_prompt: langchain.prompts.prompt.PromptTemplate [Required]#
PromptTemplate used to format an individual example.
field example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None#
ExampleSelector to choose the examples to format into the prompt.
Either this or examples should be provided.
field example_separator: str = '\n\n'#
String separator used to join the prefix, the examples, and suffix.
field examples: Optional[List[dict]] = None#
Examples to format into the prompt.
Either this or example_selector should be provided.
field input_variables: List[str] [Required]#
A list of the names of the variables the prompt template expects.
field prefix: Optional[langchain.prompts.base.StringPromptTemplate] = None#
A PromptTemplate to put before the examples.
field suffix: langchain.prompts.base.StringPromptTemplate [Required]#
A PromptTemplate to put after the examples. | https://python.langchain.com/en/latest/reference/modules/prompts.html |
field template_format: str = 'f-string'#
The format of the prompt template. Options are: ‘f-string’, ‘jinja2’.
field validate_template: bool = True#
Whether or not to try validating the template.
dict(**kwargs: Any) → Dict[source]#
Return a dictionary of the prompt.
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
pydantic model langchain.prompts.MessagesPlaceholder[source]#
Prompt template that assumes variable is already list of messages.
format_messages(**kwargs: Any) → List[langchain.schema.BaseMessage][source]#
Format the messages variable into a list of BaseMessages.
property input_variables: List[str]#
Input variables for this prompt template.
langchain.prompts.Prompt#
alias of langchain.prompts.prompt.PromptTemplate
pydantic model langchain.prompts.PromptTemplate[source]#
Schema to represent a prompt for an LLM.
Example
from langchain import PromptTemplate
prompt = PromptTemplate(input_variables=["foo"], template="Say {foo}")
field input_variables: List[str] [Required]#
A list of the names of the variables the prompt template expects.
field template: str [Required]#
The prompt template.
field template_format: str = 'f-string'#
The format of the prompt template. Options are: ‘f-string’, ‘jinja2’.
field validate_template: bool = True#
Whether or not to try validating the template.
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns | https://python.langchain.com/en/latest/reference/modules/prompts.html |
A formatted string.
Example:
prompt.format(variable1="foo")
classmethod from_examples(examples: List[str], suffix: str, input_variables: List[str], example_separator: str = '\n\n', prefix: str = '', **kwargs: Any) → langchain.prompts.prompt.PromptTemplate[source]#
Take examples in list format with prefix and suffix to create a prompt.
Intended to be used as a way to dynamically create a prompt from examples.
Parameters
examples – List of examples to use in the prompt.
suffix – String to go after the list of examples. Should generally
set up the user’s input.
input_variables – A list of variable names the final prompt template
will expect.
example_separator – The separator to use in between examples. Defaults
to two new line characters.
prefix – String that should go before any examples. Generally includes
instructions. Defaults to an empty string.
Returns
The final prompt generated.
classmethod from_file(template_file: Union[str, pathlib.Path], input_variables: List[str], **kwargs: Any) → langchain.prompts.prompt.PromptTemplate[source]#
Load a prompt from a file.
Parameters
template_file – The path to the file containing the prompt template.
input_variables – A list of variable names the final prompt template
will expect.
Returns
The prompt loaded from the file.
classmethod from_template(template: str, **kwargs: Any) → langchain.prompts.prompt.PromptTemplate[source]#
Load a prompt template from a template.
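Example (input variables are inferred from the template string):
prompt = PromptTemplate.from_template("Tell me a joke about {topic}")
prompt.format(topic="bears")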
pydantic model langchain.prompts.StringPromptTemplate[source]#
String prompt should expose the format method, returning a prompt.
format_prompt(**kwargs: Any) → langchain.schema.PromptValue[source]#
Create Chat Messages. | https://python.langchain.com/en/latest/reference/modules/prompts.html |
langchain.prompts.load_prompt(path: Union[str, pathlib.Path]) → langchain.prompts.base.BasePromptTemplate[source]#
Unified method for loading a prompt from LangChainHub or local fs.
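Example (a sketch assuming a prompt was previously saved with prompt.save()):
from langchain.prompts import load_prompt
prompt = load_prompt("path/prompt.yaml")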
Contents
Autonomous Agents
Generative Agents
Experimental Modules#
This module contains experimental modules and reproductions of existing work using LangChain primitives.
Autonomous Agents#
Here, we document the BabyAGI and AutoGPT classes from the langchain.experimental module.
class langchain.experimental.BabyAGI(*, memory: Optional[langchain.schema.BaseMemory] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, verbose: bool = None, task_list: collections.deque = None, task_creation_chain: langchain.chains.base.Chain, task_prioritization_chain: langchain.chains.base.Chain, execution_chain: langchain.chains.base.Chain, task_id_counter: int = 1, vectorstore: langchain.vectorstores.base.VectorStore, max_iterations: Optional[int] = None)[source]#
Controller model for the BabyAGI agent.
model Config[source]#
Configuration for this pydantic object.
arbitrary_types_allowed = True#
execute_task(objective: str, task: str, k: int = 5) → str[source]#
Execute a task.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, verbose: bool = False, task_execution_chain: Optional[langchain.chains.base.Chain] = None, **kwargs: Dict[str, Any]) → langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI[source]#
Initialize the BabyAGI Controller.
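Example (a sketch assuming vectorstore is an existing VectorStore used to store task results, and an OpenAI key is configured):
from langchain.experimental import BabyAGI
from langchain.llms import OpenAI
baby_agi = BabyAGI.from_llm(
    llm=OpenAI(temperature=0), vectorstore=vectorstore, verbose=False, max_iterations=3
)
# BabyAGI is a Chain, so it is invoked with an inputs dict
baby_agi({"objective": "Write a weather report for SF today"})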
get_next_task(result: str, task_description: str, objective: str) → List[Dict][source]#
Get the next task. | https://python.langchain.com/en/latest/reference/modules/experimental.html |