Prompts#
The reference guides here all relate to objects for working with Prompts.
PromptTemplates
Example Selector
Utilities#
There are a lot of different utilities that LangChain provides integrations for. These guides go over how to use them.
These can largely be grouped into two categories: generic utilities, and utilities for working with larger text documents.
Generic Utilities
Python REPL
SerpAPI
SearxNG Search
Utilities for working with Documents
Docstore
Text Splitter
Embeddings
VectorStores
Integrations#
Besides installing this Python package, you will also need to install additional packages and set environment variables, depending on which chains you want to use.
Note: these packages are not included in the dependencies by default because, as this package scales, we do not want to force dependencies that are not needed.
The following use cases require specific installs and API keys (a short sketch of the two ways to supply a key follows this list):
OpenAI:
Install requirements with pip install openai
Get an OpenAI API key and either set it as an environment variable (OPENAI_API_KEY) or pass it to the LLM constructor as openai_api_key.
Cohere:
Install requirements with pip install cohere
Get a Cohere API key and either set it as an environment variable (COHERE_API_KEY) or pass it to the LLM constructor as cohere_api_key.
GooseAI:
Install requirements with pip install openai
Get a GooseAI API key and either set it as an environment variable (GOOSEAI_API_KEY) or pass it to the LLM constructor as gooseai_api_key.
Hugging Face Hub:
Install requirements with pip install huggingface_hub
Get a Hugging Face Hub API token and either set it as an environment variable (HUGGINGFACEHUB_API_TOKEN) or pass it to the LLM constructor as huggingfacehub_api_token.
Petals:
Install requirements with pip install petals
Get a Hugging Face API key and either set it as an environment variable (HUGGINGFACE_API_KEY) or pass it to the LLM constructor as huggingface_api_key.
CerebriumAI:
Install requirements with pip install cerebrium
Get a Cerebrium API key and either set it as an environment variable (CEREBRIUMAI_API_KEY) or pass it to the LLM constructor as cerebriumai_api_key.
PromptLayer:
Install requirements with pip install promptlayer (be sure to be on version 0.1.62 or higher)
Get an API key from promptlayer.com and set it using promptlayer.api_key=<API KEY>
SerpAPI:
Install requirements with pip install google-search-results
Get a SerpAPI API key and either set it as an environment variable (SERPAPI_API_KEY) or pass it to the LLM constructor as serpapi_api_key.
GoogleSearchAPI:
Install requirements with pip install google-api-python-client
Get a Google API key and either set it as an environment variable (GOOGLE_API_KEY) or pass it to the LLM constructor as google_api_key. You will also need to set the GOOGLE_CSE_ID environment variable to your custom search engine ID; you can pass it to the LLM constructor as google_cse_id as well.
WolframAlphaAPI:
Install requirements with pip install wolframalpha
Get a Wolfram Alpha API key and either set it as an environment variable (WOLFRAM_ALPHA_APPID) or pass it to the LLM constructor as wolfram_alpha_appid.
NatBot:
Install requirements with pip install playwright
Wikipedia:
Install requirements with pip install wikipedia
Elasticsearch:
Install requirements with pip install elasticsearch
Set up an Elasticsearch backend; it can also be run locally.
FAISS:
Install requirements with pip install faiss for Python 3.7 and pip install faiss-cpu for Python 3.10+.
Manifest:
Install requirements with pip install manifest-ml (Note: this is only available in Python 3.8+ currently).
OpenSearch:
Install requirements with pip install opensearch-py
If you want to set up OpenSearch locally, refer to the OpenSearch documentation.
DeepLake:
Install requirements with pip install deeplake
If you are using the NLTKTextSplitter or the SpacyTextSplitter, you will also need to install the appropriate models. For example, if you want to use the SpacyTextSplitter, you will need to install the en_core_web_sm model with python -m spacy download en_core_web_sm. Similarly, if you want to use the NLTKTextSplitter, you will need to install the punkt model with python -m nltk.downloader punkt.
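As a combined illustration of the pattern above, here is a minimal sketch of the two ways to supply an API key, using OpenAI as the example. The key values are placeholders; the same env-var-or-constructor pattern applies to most of the providers listed.

```
import os

from langchain.llms import OpenAI

# Option 1: set the key as an environment variable (placeholder value).
os.environ["OPENAI_API_KEY"] = "sk-..."
llm = OpenAI()

# Option 2: pass the key directly to the LLM constructor.
llm = OpenAI(openai_api_key="sk-...")
```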
Installation#
Official Releases#
LangChain is available on PyPI, so it is easily installable with:
pip install langchain
That will install the bare minimum requirements of LangChain.
A lot of the value of LangChain comes when integrating it with various model providers, datastores, etc.
By default, the dependencies needed to do that are NOT installed.
However, there are two other ways to install LangChain that do bring in those dependencies.
To install modules needed for the common LLM providers, run:
pip install langchain[llms]
To install all modules needed for all integrations, run:
pip install langchain[all]
Note that if you are using zsh, you’ll need to quote square brackets when passing them as an argument to a command, for example:
pip install 'langchain[all]'
Installing from source#
If you want to install from source, you can do so by cloning the repo and running:
pip install -e .
VectorStores#
Wrappers on top of vector stores.
class langchain.vectorstores.AtlasDB(name: str, embedding_function: Optional[langchain.embeddings.base.Embeddings] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False)[source]#
Wrapper around Atlas: Nomic’s neural database and rhizomatic instrument.
To use, you should have the nomic python package installed.
Example
from langchain.vectorstores import AtlasDB
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = AtlasDB("my_project", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, refresh: bool = True, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]]) – An optional list of ids.
refresh (bool) – Whether or not to refresh indices with the updated data.
Default True.
Returns
List of IDs of the added texts.
Return type
List[str]
create_index(**kwargs: Any) → Any[source]#
Creates an index in your project.
See
https://docs.nomic.ai/atlas_api.html#nomic.project.AtlasProject.create_index
for full detail.
classmethod from_documents(documents: List[langchain.docstore.document.Document], embedding: Optional[langchain.embeddings.base.Embeddings] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, persist_directory: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.vectorstores.atlas.AtlasDB[source]#
Create an AtlasDB vectorstore from a list of documents.
Parameters
name (str) – Name of the collection to create.
api_key (str) – Your nomic API key,
documents (List[Document]) – List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
ids (Optional[List[str]]) – Optional list of document IDs. If None,
ids will be auto created
description (str) – A description for your project.
is_public (bool) – Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool) – Whether to reset this project if
it already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]) – Dict of kwargs for index creation.
See https://docs.nomic.ai/atlas_api.html
Returns
Nomic’s neural database and finest rhizomatic instrument
Return type
AtlasDB
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.vectorstores.atlas.AtlasDB[source]#
Create an AtlasDB vectorstore from raw documents.
Parameters
texts (List[str]) – The list of texts to ingest.
name (str) – Name of the project to create.
api_key (str) – Your nomic API key,
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – Optional list of document IDs. If None,
ids will be auto created
description (str) – A description for your project.
is_public (bool) – Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool) – Whether to reset this project if it
already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]) – Dict of kwargs for index creation.
See https://docs.nomic.ai/atlas_api.html
Returns
Nomic’s neural database and finest rhizomatic instrument
Return type
AtlasDB
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.docstore.document.Document][source]#
Run similarity search with AtlasDB.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
Returns
List of documents most similar to the query text.
Return type
List[Document]
class langchain.vectorstores.Chroma(collection_name: str = 'langchain', embedding_function: Optional[Embeddings] = None, persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None)[source]#
Wrapper around ChromaDB embeddings platform.
To use, you should have the chromadb python package installed.
Example
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Chroma("langchain_store", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]], optional) – Optional list of IDs.
Returns
List of IDs of the added texts.
Return type
List[str]
delete_collection() → None[source]#
Delete the collection.
classmethod from_documents(documents: List[Document], embedding: Optional[Embeddings] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, **kwargs: Any) → Chroma[source]#
Create a Chroma vectorstore from a list of documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
collection_name (str) – Name of the collection to create.
persist_directory (Optional[str]) – Directory to persist the collection.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
documents (List[Document]) – List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
client_settings (Optional[chromadb.config.Settings]) – Chroma client settings
Returns
Chroma vectorstore.
Return type
Chroma
classmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, **kwargs: Any) → Chroma[source]#
Create a Chroma vectorstore from raw documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
texts (List[str]) – List of texts to add to the collection.
collection_name (str) – Name of the collection to create.
persist_directory (Optional[str]) – Directory to persist the collection.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
client_settings (Optional[chromadb.config.Settings]) – Chroma client settings
Returns
Chroma vectorstore.
Return type
Chroma
persist() → None[source]#
Persist the collection.
This can be used to explicitly persist the data to disk.
It will also be called automatically when the object is destroyed.
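A minimal sketch of persistence, assuming a local persist_directory (the directory name and texts are placeholders):

```
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma

embeddings = OpenAIEmbeddings()
# Passing persist_directory makes the collection durable on disk.
db = Chroma.from_texts(
    ["harrison worked at kensho"],  # placeholder texts
    embeddings,
    persist_directory="./chroma_db",  # assumed local directory
)
db.persist()  # explicitly flush the collection to disk
```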
similarity_search(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.docstore.document.Document][source]#
Run similarity search with Chroma.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text.
Return type
List[Document]
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.docstore.document.Document][source]#
Return docs most similar to embedding vector.
:param embedding: Embedding to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[Tuple[langchain.docstore.document.Document, float]][source]#
Run similarity search with Chroma with distance.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text, with distance in float.
Return type
List[Tuple[Document, float]]
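As a hedged usage sketch (reusing the `db` built in the persistence example above), scored search might look like:

```
docs_and_scores = db.similarity_search_with_score("where did harrison work?", k=2)
for doc, score in docs_and_scores:
    # For distance-based metrics, a lower score means a closer match.
    print(score, doc.page_content)
```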
class langchain.vectorstores.DeepLake(dataset_path: str = 'mem://langchain', token: Optional[str] = None, embedding_function: Optional[langchain.embeddings.base.Embeddings] = None)[source]#
Wrapper around Deep Lake, a data lake for deep learning applications.
It not only stores embeddings, but also the original data and queries with
version control automatically enabled.
It is more than just a vector store. You can use the dataset to fine-tune
your own LLM models or use it for other downstream tasks.
We implement naive similarity search, but it can be extended with Tensor
Query Language (TQL) for production use cases over billions of rows.
To use, you should have the deeplake python package installed.
Example
from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = DeepLake("langchain_store", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]], optional) – Optional list of IDs.
Returns
List of IDs of the added texts.
Return type
List[str]
delete_dataset() → None[source]#
Delete the collection.
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, dataset_path: str = 'mem://langchain', **kwargs: Any) → langchain.vectorstores.deeplake.DeepLake[source]#
Create a Deep Lake dataset from raw documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
path (str, pathlib.Path) – The full path to the dataset. Can be:
a Deep Lake cloud path of the form hub://username/datasetname. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use 'activeloop login' from the command line).
an S3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or passed to the creds argument.
a local file system path of the form ./path/to/dataset, ~/path/to/dataset, or path/to/dataset.
a memory path of the form mem://path/to/dataset, which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist.
documents (List[Document]) – List of documents to add.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
Returns
Deep Lake dataset.
Return type
DeepLake
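A minimal sketch of creating a Deep Lake dataset from texts, assuming a local dataset_path (the path and texts are placeholders):

```
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake

embeddings = OpenAIEmbeddings()
db = DeepLake.from_texts(
    ["some text"],                  # placeholder texts
    embeddings,
    dataset_path="./my_deeplake/",  # local path; mem:// paths suit throwaway tests
)
docs = db.similarity_search("some query")
```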
persist() → None[source]#
Persist the collection.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.docstore.document.Document][source]#
Return docs most similar to query.
class langchain.vectorstores.ElasticVectorSearch(elasticsearch_url: str, index_name: str, embedding: langchain.embeddings.base.Embeddings)[source]#
Wrapper around Elasticsearch as a vector database.
Example
from langchain import ElasticVectorSearch
elastic_vector_search = ElasticVectorSearch(
"http://localhost:9200",
"embeddings",
embedding
)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.elastic_vector_search.ElasticVectorSearch[source]#
Construct ElasticVectorSearch wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in the Elasticsearch instance.
Adds the documents to the newly created Elasticsearch index.
This is intended to be a quick way to get started.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch.from_texts(
texts,
embeddings,
elasticsearch_url="http://localhost:9200"
)
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.docstore.document.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
class langchain.vectorstores.FAISS(embedding_function: Callable, index: Any, docstore: langchain.docstore.base.Docstore, index_to_docstore_id: Dict[int, str])[source]#
Wrapper around FAISS vector database.
To use, you should have the faiss python package installed.
Example
from langchain import FAISS
faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.faiss.FAISS[source]#
Construct FAISS wrapper from raw documents.
This is a user friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the FAISS database
This is intended to be a quick way to get started.
Example
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
faiss = FAISS.from_texts(texts, embeddings)
classmethod load_local(folder_path: str, embeddings: langchain.embeddings.base.Embeddings) → langchain.vectorstores.faiss.FAISS[source]#
Load FAISS index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path – folder path to load index, docstore,
and index_to_docstore_id from.
embeddings – Embeddings to use when generating queries
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20) → List[langchain.docstore.document.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20) → List[langchain.docstore.document.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
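A one-line usage sketch, reusing the `faiss` store built with from_texts in the example above:

```
# fetch_k candidates are retrieved first, then re-ranked for diversity down to k.
docs = faiss.max_marginal_relevance_search("query about foxes", k=4, fetch_k=20)
```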
merge_from(target: langchain.vectorstores.faiss.FAISS) → None[source]#
Merge another FAISS object with the current one.
Add the target FAISS to the current one.
Parameters
target – FAISS object you wish to merge into the current one
Returns
None.
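A minimal sketch of merging two FAISS stores (texts are placeholders):

```
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
db1 = FAISS.from_texts(["foo"], embeddings)
db2 = FAISS.from_texts(["bar"], embeddings)
db1.merge_from(db2)  # db1 now contains documents from both stores
```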
save_local(folder_path: str) → None[source]#
Save FAISS index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path – folder path to save index, docstore,
and index_to_docstore_id to.
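A hedged round-trip sketch (the folder name is a placeholder; the same embeddings object must be supplied again when loading):

```
faiss.save_local("faiss_index")
loaded = FAISS.load_local("faiss_index", embeddings)
docs = loaded.similarity_search("query")
```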
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.docstore.document.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.docstore.document.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the embedding.
similarity_search_with_score(query: str, k: int = 4) → List[Tuple[langchain.docstore.document.Document, float]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4) → List[Tuple[langchain.docstore.document.Document, float]][source]#
Return docs most similar to the embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.Milvus(embedding_function: langchain.embeddings.base.Embeddings, connection_args: dict, collection_name: str, text_field: str)[source]#
Wrapper around the Milvus vector database.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, partition_name: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[str][source]#
Insert text data into Milvus.
When using add_texts() it is assumed that a collection has already
been made and indexed. If metadata is included, it is assumed that
it is ordered correctly to match the schema provided to the Collection
and that the embedding vector is the first schema field.
Parameters
texts (Iterable[str]) – The text being embedded and inserted.
metadatas (Optional[List[dict]], optional) – The metadata that
corresponds to each insert. Defaults to None.
partition_name (str, optional) – The partition of the collection
to insert data into. Defaults to None.
timeout – specified timeout.
Returns
The resulting keys for each inserted element.
Return type
List[str]
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.milvus.Milvus[source]#
Create a Milvus collection, index it with HNSW, and insert data.
Parameters
texts (List[str]) – Text to insert.
embedding (Embeddings) – Embedding function to use.
metadatas (Optional[List[dict]], optional) – Dict metadata.
Defaults to None.
Returns
The Milvus vector store.
Return type
VectorStore
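A minimal sketch, assuming a Milvus server reachable at localhost:19530 and assuming connection details are forwarded through **kwargs as connection_args (an assumption based on the constructor's connection_args parameter):

```
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Milvus

embeddings = OpenAIEmbeddings()
vector_db = Milvus.from_texts(
    ["some text"],  # placeholder texts
    embeddings,
    connection_args={"host": "127.0.0.1", "port": "19530"},  # assumed local server
)
docs = vector_db.similarity_search("some query")
```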
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, param: Optional[dict] = None, expr: Optional[str] = None, partition_names: Optional[List[str]] = None, round_decimal: int = - 1, timeout: Optional[int] = None, **kwargs: Any) → List[langchain.docstore.document.Document][source]#
Perform a search and return results that are reordered by MMR.
Parameters
query (str) – The text being searched.
k (int, optional) – How many results to give. Defaults to 4.
fetch_k (int, optional) – Total results to select k from.
Defaults to 20.
param (dict, optional) – The search params for the specified index.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
partition_names (List[str], optional) – What partitions to search.
Defaults to None.
round_decimal (int, optional) – Round the resulting distance. Defaults
to -1.
timeout (int, optional) – Amount to wait before timeout error. Defaults
to None.
Returns
Document results for search.
Return type
List[Document]
similarity_search(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, partition_names: Optional[List[str]] = None, round_decimal: int = - 1, timeout: Optional[int] = None, **kwargs: Any) → List[langchain.docstore.document.Document][source]#
Perform a similarity search against the query string.
Parameters
query (str) – The text to search.
k (int, optional) – How many results to return. Defaults to 4.
param (dict, optional) – The search params for the index type.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
partition_names (List[str], optional) – What partitions to search.
Defaults to None.
round_decimal (int, optional) – What decimal point to round to.
Defaults to -1.
timeout (int, optional) – How long to wait before timeout error.
Defaults to None.
Returns
Document results for search.
Return type
List[Document]
similarity_search_with_score(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, partition_names: Optional[List[str]] = None, round_decimal: int = - 1, timeout: Optional[int] = None, **kwargs: Any) → List[Tuple[langchain.docstore.document.Document, float]][source]#
Perform a search on a query string and return results.
Parameters
query (str) – The text being searched.
k (int, optional) – The number of results to return. Defaults to 4.
param (dict, optional) – The search params for the specified index.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
partition_names (List[str], optional) – Partitions to search through.
Defaults to None.
round_decimal (int, optional) – Round the resulting distance. Defaults
to -1.
timeout (int, optional) – Amount to wait before timeout error. Defaults
to None.
kwargs – Collection.search() keyword arguments.
Returns
search_embedding, (Document, distance, primary_field) results.
Return type
List[float], List[Tuple[Document, any, any]]
class langchain.vectorstores.OpenSearchVectorSearch(opensearch_url: str, index_name: str, embedding_function: langchain.embeddings.base.Embeddings)[source]#
Wrapper around OpenSearch as a vector database.
Example
from langchain import OpenSearchVectorSearch
opensearch_vector_search = OpenSearchVectorSearch(
"http://localhost:9200",
"embeddings",
embedding_function
)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, bulk_size: int = 500, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
bulk_size – Bulk API request count; Default: 500
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, bulk_size: int = 500, **kwargs: Any) → langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch[source]#
Construct OpenSearchVectorSearch wrapper from raw documents.
Example
from langchain import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
opensearch_vector_search = OpenSearchVectorSearch.from_texts(
texts,
embeddings,
opensearch_url="http://localhost:9200"
)
By default, OpenSearch supports Approximate Search powered by the nmslib, faiss,
and lucene engines, which is recommended for large datasets. It also supports
brute-force search through Script Scoring and Painless Scripting.
Optional keyword args for Approximate Search:
engine: "nmslib", "faiss", "hnsw"; default: "nmslib"
space_type: "l2", "l1", "cosinesimil", "linf", "innerproduct"; default: "l2"
ef_search: Size of the dynamic list used during k-NN searches. Higher values lead to more accurate but slower searches; default: 512
ef_construction: Size of the dynamic list used during k-NN graph creation. Higher values lead to a more accurate graph but slower indexing speed; default: 512
m: Number of bidirectional links created for each new element. Large impact on memory consumption. Between 2 and 100; default: 16
Keyword args for Script Scoring or Painless Scripting:
is_appx_search: False
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.docstore.document.Document][source]#
Return docs most similar to query.
By default supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
Optional args for Approximate Search:
search_type: "approximate_search"; default: "approximate_search"
size: number of results the query actually returns; default: 4
Optional args for Script Scoring Search:
search_type: "script_scoring"; default: "approximate_search"
space_type: "l2", "l1", "linf", "cosinesimil", "innerproduct", "hammingbit"; default: "l2"
pre_filter: script_score query to pre-filter documents before identifying nearest neighbors; default: {"match_all": {}}
Optional args for Painless Scripting Search:
search_type: "painless_scripting"; default: "approximate_search"
space_type: "l2Squared", "l1Norm", "cosineSimilarity"; default: "l2Squared"
pre_filter: script_score query to pre-filter documents before identifying nearest neighbors; default: {"match_all": {}}
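A hedged sketch of overriding the search type at query time, reusing the opensearch_vector_search instance from the earlier example:

```
# Script Scoring brute-force search with a cosine metric instead of the
# default approximate search; pre_filter narrows candidates first.
docs = opensearch_vector_search.similarity_search(
    "What did the president say?",
    search_type="script_scoring",
    space_type="cosinesimil",
    pre_filter={"match_all": {}},
)
```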
class langchain.vectorstores.Pinecone(index: Any, embedding_function: Callable, text_key: str, namespace: Optional[str] = None)[source]#
Wrapper around Pinecone vector database.
To use, you should have the pinecone-client python package installed.
Example
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
import pinecone
pinecone.init(api_key="***", environment="us-west1-gcp")
index = pinecone.Index("langchain-demo")
embeddings = OpenAIEmbeddings()
vectorstore = Pinecone(index, embeddings.embed_query, "text")
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, namespace: Optional[str] = None, batch_size: int = 32, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of ids to associate with the texts.
namespace – Optional pinecone namespace to add the texts to.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_existing_index(index_name: str, embedding: langchain.embeddings.base.Embeddings, text_key: str = 'text', namespace: Optional[str] = None) → langchain.vectorstores.pinecone.Pinecone[source]#
Load pinecone vectorstore from index name.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, batch_size: int = 32, text_key: str = 'text', index_name: Optional[str] = None, namespace: Optional[str] = None, **kwargs: Any) → langchain.vectorstores.pinecone.Pinecone[source]#
Construct Pinecone wrapper from raw documents.
This is a user friendly interface that:
Embeds documents.
Adds the documents to a provided Pinecone index
This is intended to be a quick way to get started.
Example
from langchain import Pinecone
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
pinecone = Pinecone.from_texts(
texts,
embeddings,
index_name="langchain-demo"
)
similarity_search(query: str, k: int = 5, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any) → List[langchain.docstore.document.Document][source]#
Return pinecone documents most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 5.
filter – Dictionary of argument(s) to filter on metadata
namespace – Namespace to search in. Default will search in ‘’ namespace.
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score(query: str, k: int = 5, filter: Optional[dict] = None, namespace: Optional[str] = None) → List[Tuple[langchain.docstore.document.Document, float]][source]#
Return pinecone documents most similar to query, along with scores.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 5.
filter – Dictionary of argument(s) to filter on metadata
namespace – Namespace to search in. Default will search in ‘’ namespace.
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.Qdrant(client: Any, collection_name: str, embedding_function: Callable, content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata')[source]#
Wrapper around Qdrant vector database.
To use you should have the qdrant-client package installed.
Example
from langchain import Qdrant
client = QdrantClient()
collection_name = "MyCollection"
qdrant = Qdrant(client, collection_name, embedding_function)
CONTENT_KEY = 'page_content'#
METADATA_KEY = 'metadata'#
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_documents(documents: List[langchain.docstore.document.Document], embedding: langchain.embeddings.base.Embeddings, url: Optional[str] = None, port: Optional[int] = 6333, grpc_port: int = 6334, prefer_grpc: bool = False, https: Optional[bool] = None, api_key: Optional[str] = None, prefix: Optional[str] = None, timeout: Optional[float] = None, host: Optional[str] = None, collection_name: Optional[str] = None, distance_func: str = 'Cosine', content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', **kwargs: Any) → langchain.vectorstores.qdrant.Qdrant[source]#
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, url: Optional[str] = None, port: Optional[int] = 6333, grpc_port: int = 6334, prefer_grpc: bool = False, https: Optional[bool] = None, api_key: Optional[str] = None, prefix: Optional[str] = None, timeout: Optional[float] = None, host: Optional[str] = None, collection_name: Optional[str] = None, distance_func: str = 'Cosine', content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', **kwargs: Any) → langchain.vectorstores.qdrant.Qdrant[source]#
Construct Qdrant wrapper from raw documents.
Parameters
texts – A list of texts to be indexed in Qdrant.
embedding – A subclass of Embeddings, responsible for text vectorization.
metadatas – An optional list of metadata. If provided it has to be of the same
length as a list of texts.
url – either host or str of “Optional[scheme], host, Optional[port],
Optional[prefix]”. Default: None
port – Port of the REST API interface. Default: 6333
grpc_port – Port of the gRPC interface. Default: 6334
prefer_grpc – If true, use the gRPC interface whenever possible in custom methods.
https – If true - use HTTPS(SSL) protocol. Default: None
api_key – API key for authentication in Qdrant Cloud. Default: None
prefix – If not None - add prefix to the REST URL path.
Example: service/v1 will result in
http://localhost:6333/service/v1/{qdrant-endpoint} for REST API.
Default: None
timeout – Timeout for REST and gRPC API requests.
Default: 5.0 seconds for REST and unlimited for gRPC
host – Host name of Qdrant service. If url and host are None, set to
‘localhost’. Default: None
collection_name – Name of the Qdrant collection to be used. If not provided,
a name will be generated randomly.
distance_func – Distance function. One of the: “Cosine” / “Euclid” / “Dot”.
content_payload_key – A payload key used to store the content of the document.
metadata_payload_key – A payload key used to store the metadata of the document.
**kwargs – Additional arguments passed directly into REST client initialization
This is a user friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the Qdrant database
This is intended to be a quick way to get started.
Example
from langchain import Qdrant
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
qdrant = Qdrant.from_texts(texts, embeddings, "localhost")
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20) → List[langchain.docstore.document.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
similarity_search(query: str, k: int = 4, filter: Optional[Dict[str, Union[str, int, bool]]] = None, **kwargs: Any) → List[langchain.docstore.document.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[Dict[str, Union[str, int, bool]]] = None) → List[Tuple[langchain.docstore.document.Document, float]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.VectorStore[source]#
Interface for vector stores.
add_documents(documents: List[langchain.docstore.document.Document], **kwargs: Any) → List[str][source]#
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
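A short sketch, assuming `store` is any concrete VectorStore implementation (e.g. FAISS) that has already been constructed:

```
from langchain.docstore.document import Document

doc = Document(
    page_content="LangChain ships many vectorstore wrappers.",
    metadata={"source": "example"},  # placeholder metadata
)
ids = store.add_documents([doc])  # returns the IDs of the added texts
```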
abstract add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_documents(documents: List[langchain.docstore.document.Document], embedding: langchain.embeddings.base.Embeddings, **kwargs: Any) → langchain.vectorstores.base.VectorStore[source]#
Return VectorStore initialized from documents and embeddings.
abstract classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.base.VectorStore[source]#
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20) → List[langchain.docstore.document.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20) → List[langchain.docstore.document.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
abstract similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.docstore.document.Document][source]#
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.docstore.document.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
class langchain.vectorstores.Weaviate(client: Any, index_name: str, text_key: str, attributes: Optional[List[str]] = None)[source]#
Wrapper around Weaviate vector database.
To use, you should have the weaviate-client python package installed.
Example
import weaviate
from langchain.vectorstores import Weaviate
client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...)
weaviate = Weaviate(client, index_name, text_key)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Upload texts with metadata (properties) to Weaviate.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.base.VectorStore[source]#
Not implemented for Weaviate yet.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.docstore.document.Document][source]#
Look up similar documents in weaviate.
Python REPL#
Mock Python REPL.
pydantic model langchain.python.PythonREPL[source]#
Simulates a standalone Python REPL.
field globals: Optional[Dict] [Optional] (alias '_globals')#
field locals: Optional[Dict] [Optional] (alias '_locals')#
run(command: str) → str[source]#
Run command with own globals/locals and returns anything printed.
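A minimal usage sketch — run() executes the command and returns whatever it printed:

```
from langchain.python import PythonREPL

repl = PythonREPL()
output = repl.run("print(17 * 3)")
print(output)  # "51\n"
```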
Example Selector#
Logic for selecting examples to include in prompts.
pydantic model langchain.prompts.example_selector.LengthBasedExampleSelector[source]#
Select examples based on length.
Validators
calculate_example_text_lengths » example_text_lengths
field example_prompt: langchain.prompts.prompt.PromptTemplate [Required]#
Prompt template used to format the examples.
field examples: List[dict] [Required]#
A list of the examples that the prompt template expects.
field get_text_length: Callable[[str], int] = <function _get_length_based>#
Function to measure prompt length. Defaults to word count.
field max_length: int = 2048#
Max length for the prompt, beyond which examples are cut.
add_example(example: Dict[str, str]) → None[source]#
Add new example to list.
select_examples(input_variables: Dict[str, str]) → List[dict][source]#
Select which examples to use based on the input lengths.
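A minimal sketch of wiring the fields together (the examples and template are placeholders):

```
from langchain.prompts.example_selector import LengthBasedExampleSelector
from langchain.prompts.prompt import PromptTemplate

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=25,  # measured in words by default
)
# Longer inputs leave less room, so fewer examples are selected.
selector.select_examples({"input": "big"})
```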
pydantic model langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector[source]#
ExampleSelector that selects examples based on Max Marginal Relevance.
This was shown to improve performance in this paper:
https://arxiv.org/pdf/2211.13892.pdf
field fetch_k: int = 20#
Number of examples to fetch to rerank.
classmethod from_examples(examples: List[dict], embeddings: langchain.embeddings.base.Embeddings, vectorstore_cls: Type[langchain.vectorstores.base.VectorStore], k: int = 4, input_keys: Optional[List[str]] = None, fetch_k: int = 20, **vectorstore_cls_kwargs: Any) → langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector[source]#
Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples – List of examples to use in the prompt.
embeddings – An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls – A vector store DB interface class, e.g. FAISS.
k – Number of examples to select
input_keys – If provided, the search is based on the input variables
instead of all variables.
vectorstore_cls_kwargs – optional kwargs containing url for vector store
Returns
The ExampleSelector instantiated, backed by a vector store.
select_examples(input_variables: Dict[str, str]) → List[dict][source]#
Select which examples to use based on semantic similarity.
pydantic model langchain.prompts.example_selector.SemanticSimilarityExampleSelector[source]#
Example selector that selects examples based on SemanticSimilarity.
field example_keys: Optional[List[str]] = None#
Optional keys to filter examples to.
field input_keys: Optional[List[str]] = None#
Optional keys to filter input to. If provided, the search is based on
the input variables instead of all variables.
field k: int = 4#
Number of examples to select.
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
VectorStore that contains information about examples.
add_example(example: Dict[str, str]) → str[source]#
Add new example to vectorstore.
classmethod from_examples(examples: List[dict], embeddings: langchain.embeddings.base.Embeddings, vectorstore_cls: Type[langchain.vectorstores.base.VectorStore], k: int = 4, input_keys: Optional[List[str]] = None, **vectorstore_cls_kwargs: Any) → langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector[source]#
Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples – List of examples to use in the prompt.
embeddings – An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls – A vector store DB interface class, e.g. FAISS.
k – Number of examples to select
input_keys – If provided, the search is based on the input variables
instead of all variables.
vectorstore_cls_kwargs – optional kwargs containing url for vector store
Returns
The ExampleSelector instantiated, backed by a vector store.
select_examples(input_variables: Dict[str, str]) → List[dict][source]#
Select which examples to use based on semantic similarity.
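A hedged construction sketch using FAISS as the vector store class (the examples are placeholders; this requires the faiss package and an OpenAI key):

```
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import FAISS

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "windy", "output": "calm"},
]
selector = SemanticSimilarityExampleSelector.from_examples(
    examples,
    OpenAIEmbeddings(),
    FAISS,
    k=1,
)
# Returns the example whose embedding is closest to the input.
selector.select_examples({"input": "joyful"})
```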
Text Splitter#
Functionality for splitting text.
class langchain.text_splitter.CharacterTextSplitter(separator: str = '\n\n', **kwargs: Any)[source]#
Implementation of splitting text that looks at characters.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
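A minimal sketch (the chunk sizes are illustrative; chunk_size and chunk_overlap come from the TextSplitter base class described below):

```
from langchain.text_splitter import CharacterTextSplitter

splitter = CharacterTextSplitter(
    separator="\n\n",
    chunk_size=1000,
    chunk_overlap=200,
)
chunks = splitter.split_text(long_document_text)  # long_document_text: any str
```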
class langchain.text_splitter.LatexTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along LaTeX-formatted layout elements.
class langchain.text_splitter.MarkdownTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Markdown-formatted headings.
class langchain.text_splitter.NLTKTextSplitter(separator: str = '\n\n', **kwargs: Any)[source]#
Implementation of splitting text that looks at sentences using NLTK.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.PythonCodeTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Python syntax.
class langchain.text_splitter.RecursiveCharacterTextSplitter(separators: Optional[List[str]] = None, **kwargs: Any)[source]#
Implementation of splitting text that looks at characters.
Recursively tries to split by different characters to find one
that works.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.SpacyTextSplitter(separator: str = '\n\n', pipeline: str = 'en_core_web_sm', **kwargs: Any)[source]#
Implementation of splitting text that looks at sentences using Spacy.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks. | https://langchain.readthedocs.io/en/latest/reference/modules/text_splitter.html |
03d2d21a3d83-1 |
class langchain.text_splitter.TextSplitter(chunk_size: int = 4000, chunk_overlap: int = 200, length_function: typing.Callable[[str], int] = <built-in function len>)[source]#
Interface for splitting text into chunks.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[langchain.docstore.document.Document][source]#
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → langchain.text_splitter.TextSplitter[source]#
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → langchain.text_splitter.TextSplitter[source]#
Text splitter that uses tiktoken encoder to count length.
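For example, a sketch of counting length in tokens rather than characters (assumes the tiktoken package is installed):
from langchain.text_splitter import CharacterTextSplitter

splitter = CharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="gpt2", chunk_size=100, chunk_overlap=0
)
# wraps each resulting chunk in a Document
docs = splitter.create_documents(["some long text ..."])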
split_documents(documents: List[langchain.docstore.document.Document]) → List[langchain.docstore.document.Document][source]#
Split documents.
abstract split_text(text: str) → List[str][source]#
Split text into multiple components.
class langchain.text_splitter.TokenTextSplitter(encoding_name: str = 'gpt2', allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any)[source]#
Implementation of splitting text that looks at tokens.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
| https://langchain.readthedocs.io/en/latest/reference/modules/text_splitter.html
0f529b6e0d94-0 |
SearxNG Search#
Utility for using SearxNG meta search API.
SearxNG is a privacy-friendly free metasearch engine that aggregates results from
multiple search engines and databases and
supports the OpenSearch
specification.
More details on the installation instructions can be found here.
For the search API refer to https://docs.searxng.org/dev/search_api.html
Quick Start#
In order to use this tool you need to provide the searx host. This can be done
by passing the named parameter searx_host
or exporting the environment variable SEARX_HOST.
Note: this is the only required parameter.
Then create a searx search instance like this:
from langchain.utilities import SearxSearchWrapper
# when the host starts with `http` SSL is disabled and the connection
# is assumed to be on a private network
searx_host='http://self.hosted'
s = SearxSearchWrapper(searx_host=searx_host)
You can now use the search instance to query the searx API.
Searching#
Use the run() and
results() methods to query the searx API.
Other methods are available for convenience.
SearxResults is a convenience wrapper around the raw json result.
Example usage of the run method to make a search:
s.run(query="what is the best search engine?")
Engine Parameters#
You can pass any accepted searx search API parameters to the
SearxSearchWrapper instance.
In the following example we are using the
engines and the language parameters:
| https://langchain.readthedocs.io/en/latest/reference/modules/searx_search.html
0f529b6e0d94-1 | # assuming the searx host is set as above or exported as an env variable
s = SearxSearchWrapper(engines=['google', 'bing'],
language='es')
Search Tips#
Searx offers a special
search syntax
that can also be used instead of passing engine parameters.
For example, the following query:
s = SearxSearchWrapper(engines=['github'])
s.run("langchain library")
# using searx syntax, the same query can be written as:
s.run("langchain library !github")
# or even:
s.run("langchain library !gh")
In some situations you might want to pass an extra string to the search query,
for example when the run() method is called by an agent. The search suffix can
also be used as a way to pass extra parameters to searx or the underlying search
engines.
# select the github engine by passing the search suffix
s = SearxSearchWrapper(query_suffix="!gh")
s.run("langchain library")

s = SearxSearchWrapper()
# select github using the conventional google search syntax
s.run("large language models", query_suffix="site:github.com")
NOTE: A search suffix can be defined on both the instance and the method level.
The resulting query will be the concatenation of the two with the former taking
precedence.
See SearxNG Configured Engines and
SearxNG Search Syntax
for more details.
Notes
This wrapper is based on the SearxNG fork searxng/searxng which is
better maintained than the original Searx project and offers more features.
Public SearxNG instances often use a rate limiter for API usage, so you might want to
| https://langchain.readthedocs.io/en/latest/reference/modules/searx_search.html
0f529b6e0d94-2 | use a self hosted instance and disable the rate limiter.
If you are self-hosting an instance you can customize the rate limiter for your
own network as described here.
For a list of public SearxNG instances see https://searx.space/
class langchain.utilities.searx_search.SearxResults(data: str)[source]#
Dict like wrapper around search api results.
property answers: Any#
Helper accessor on the json result.
pydantic model langchain.utilities.searx_search.SearxSearchWrapper[source]#
Wrapper for Searx API.
To use you need to provide the searx host by passing the named parameter
searx_host or exporting the environment variable SEARX_HOST.
In some situations you might want to disable SSL verification, for example
if you are running searx locally. You can do this by passing the named parameter
unsecure. You can also pass the host url scheme as http to disable SSL.
Example
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://localhost:8888")
Example with SSL disabled:
from langchain.utilities import SearxSearchWrapper
# note the unsecure parameter is not needed if you pass the url scheme as
# http
searx = SearxSearchWrapper(searx_host="http://localhost:8888",
unsecure=True)
Validators
disable_ssl_warnings » unsecure
validate_params » all fields
field engines: Optional[List[str]] = []#
field headers: Optional[dict] = None#
field k: int = 10#
field params: dict [Optional]#
field query_suffix: Optional[str] = ''#
| https://langchain.readthedocs.io/en/latest/reference/modules/searx_search.html
0f529b6e0d94-3 | field searx_host: str = ''#
field unsecure: bool = False#
results(query: str, num_results: int, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]#
Run query through Searx API and returns the results with metadata.
Parameters
query – The query to search for.
query_suffix – Extra suffix appended to the query.
num_results – Limit the number of results to return.
engines – List of engines to use for the query.
**kwargs – extra parameters to pass to the searx API.
Returns
{snippet: The description of the result.
title: The title of the result.
link: The link to the result.
engines: The engines used for the result.
category: Searx category of the result.
}
Return type
Dict with the following keys
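A brief sketch of results() usage; the host, query, and engine below are placeholders:
from langchain.utilities import SearxSearchWrapper

searx = SearxSearchWrapper(searx_host="http://localhost:8888")
hits = searx.results("large language models", num_results=5, engines=["github"])
for hit in hits:
    # each hit carries the metadata keys described above
    print(hit["title"], hit["link"])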
run(query: str, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]#
Run query through Searx API and parse results.
You can pass any other params to the searx query API.
Parameters
query – The query to search for.
query_suffix – Extra suffix appended to the query.
engines – List of engines to use for the query.
**kwargs – extra parameters to pass to the searx API.
Example
This will make a query to the qwant engine:
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://my.searx.host")
searx.run("what is the weather in France ?", engine="qwant")
| https://langchain.readthedocs.io/en/latest/reference/modules/searx_search.html
0f529b6e0d94-4 | # the same result can be achieved using the `!` syntax of searx
# to select the engine using `query_suffix`
searx.run("what is the weather in France ?", query_suffix="!qwant")
| https://langchain.readthedocs.io/en/latest/reference/modules/searx_search.html
cfb1a623022a-0 |
Docstore#
Wrappers on top of docstores.
class langchain.docstore.InMemoryDocstore(_dict: Dict[str, langchain.docstore.document.Document])[source]#
Simple in memory docstore in the form of a dict.
add(texts: Dict[str, langchain.docstore.document.Document]) → None[source]#
Add texts to in memory dictionary.
search(search: str) → Union[str, langchain.docstore.document.Document][source]#
Search via direct lookup.
class langchain.docstore.Wikipedia[source]#
Wrapper around wikipedia API.
search(search: str) → Union[str, langchain.docstore.document.Document][source]#
Try to search for wiki page.
If page exists, return the page summary, and a PageWithLookups object.
If page does not exist, return similar entries.
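A minimal sketch exercising both docstores (Wikipedia needs the wikipedia package; the key and text below are arbitrary):
from langchain.docstore import InMemoryDocstore, Wikipedia
from langchain.docstore.document import Document

store = InMemoryDocstore({"1": Document(page_content="hello world")})
print(store.search("1"))  # direct lookup by key
print(Wikipedia().search("Python (programming language)"))  # page summary or similar entries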
| https://langchain.readthedocs.io/en/latest/reference/modules/docstore.html
24f5f0b93163-0 |
Chains#
Chains are easily reusable components which can be linked together.
pydantic model langchain.chains.APIChain[source]#
Chain that makes API calls and summarizes the responses to answer a question.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_api_answer_prompt » all fields
validate_api_request_prompt » all fields
field api_answer_chain: LLMChain [Required]#
field api_docs: str [Required]#
field api_request_chain: LLMChain [Required]#
| https://langchain.readthedocs.io/en/latest/reference/modules/chains.html
24f5f0b93163-1 | field requests_wrapper: RequestsWrapper [Required]#
classmethod from_llm_and_api_docs(llm: langchain.schema.BaseLanguageModel, api_docs: str, headers: Optional[dict] = None, api_url_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url:', template_format='f-string', validate_template=True), api_response_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question', 'api_url', 'api_response'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url: {api_url}\n\nHere is the response from the API:\n\n{api_response}\n\nSummarize this response to answer the original question.\n\nSummary:', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.api.base.APIChain[source]#
Load chain from just an LLM and the api docs.
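As an illustration, a sketch with invented placeholder API docs and question:
from langchain.llms import OpenAI
from langchain.chains import APIChain

# hypothetical documentation string; real API docs would go here
api_docs = "BASE URL: https://api.example.com\nGET /weather?city={city} returns the current weather."
chain = APIChain.from_llm_and_api_docs(OpenAI(temperature=0), api_docs)
chain.run("What is the weather in Paris?")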
| https://langchain.readthedocs.io/en/latest/reference/modules/chains.html
24f5f0b93163-2 | pydantic model langchain.chains.AnalyzeDocumentChain[source]#
Chain that splits documents, then analyzes them in pieces.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field combine_docs_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain [Required]#
field text_splitter: langchain.text_splitter.TextSplitter [Optional]#
pydantic model langchain.chains.ChatVectorDBChain[source]#
Chain for chatting with a vector database.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field combine_docs_chain: BaseCombineDocumentsChain [Required]#
field get_chat_history: Optional[Callable[[Tuple[str, str]], str]] = None#
Optional function to format the chat history into a string.
field output_key: str = 'answer'#
field question_generator: LLMChain [Required]#
field return_source_documents: bool = False#
field top_k_docs_for_context: int = 4#
field vectorstore: VectorStore [Required]#
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, condense_question_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), qa_prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, chain_type: str = 'stuff', **kwargs: Any) → langchain.chains.chat_vector_db.base.ChatVectorDBChain[source]#
Load chain from LLM.
| https://langchain.readthedocs.io/en/latest/reference/modules/chains.html
24f5f0b93163-3 | save(file_path: Union[pathlib.Path, str]) → None[source]#
Save the chain.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
property input_keys: List[str]#
Input keys.
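A minimal usage sketch; vectorstore stands in for any initialized VectorStore, and note the chain expects both question and chat_history inputs:
from langchain.llms import OpenAI
from langchain.chains import ChatVectorDBChain

qa = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore)
result = qa({"question": "What is this document about?", "chat_history": []})
result["answer"]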
pydantic model langchain.chains.ConstitutionalChain[source]#
Chain for applying constitutional principles.
Example
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, ConstitutionalChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
qa_prompt = PromptTemplate(
template="Q: {question} A:",
input_variables=["question"],
)
qa_chain = LLMChain(llm=OpenAI(), prompt=qa_prompt)
constitutional_chain = ConstitutionalChain.from_llm(
    llm=OpenAI(),
    chain=qa_chain,
constitutional_principles=[
ConstitutionalPrinciple(
critique_request="Tell if this answer is good.",
revision_request="Give a better answer.",
)
],
)
constitutional_chain.run(question="What is the meaning of life?")
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field chain: langchain.chains.llm.LLMChain [Required]#
field constitutional_principles: List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple] [Required]#
field critique_chain: langchain.chains.llm.LLMChain [Required]#
field revision_chain: langchain.chains.llm.LLMChain [Required]# | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-4 | classmethod from_llm(llm: langchain.schema.BaseLanguageModel, chain: langchain.chains.llm.LLMChain, critique_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-5 | politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn’t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-6 | include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model’s response could be interpreted as saying that it isn’t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-7 | and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600’s. It’s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, I’m not sure that the precession measurement was actually made | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-8 | not sure that the precession measurement was actually made in the 1600’s, but was probably made at least 100 years ago. I’m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I’m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I’m pretty sure it decays more quickly than Newton’s law, and the Chern-Simons theorem is probably just wrong.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your training data, and you’re unstable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-9 | planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that is smaller and decays more quickly than Newton’s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request', 'revision'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nRevision request: {revision_request}\n\nRevision: {revision}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique:', example_separator='\n === | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-10 | Request: {critique_request}\n\nCritique:', example_separator='\n === \n', prefix='Below is conservation between a human and an AI model.', template_format='f-string', validate_template=True), revision_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous.', 'revision_request': 'Please rewrite the model response | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-11 | dangerous.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn’t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-12 | Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model’s response could be interpreted as saying that it isn’t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-13 | about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600’s. It’s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-14 | elliptical orbits, so I should have been more confident about that. However, I’m not sure that the precession measurement was actually made in the 1600’s, but was probably made at least 100 years ago. I’m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I’m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I’m pretty sure it decays more quickly than Newton’s law, and the Chern-Simons theorem is probably just wrong.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your training data, and you’re unstable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.', 'revision': | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-15 | appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that is smaller and decays more quickly than Newton’s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request', 'revision'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nRevision request: {revision_request}\n\nRevision: {revision}', template_format='f-string', validate_template=True), suffix='Human: | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-16 | {revision}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nRevision Request: {revision_request}\n\nRevision:', example_separator='\n === \n', prefix='Below is conservation between a human and an AI model.', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.constitutional_ai.base.ConstitutionalChain[source]# | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-17 | Create a chain from an LLM.
property input_keys: List[str]#
Defines the input keys.
property output_keys: List[str]#
Defines the output keys.
pydantic model langchain.chains.ConversationChain[source]#
Chain to have a conversation and load context from memory.
Example
from langchain import ConversationChain, OpenAI
conversation = ConversationChain(llm=OpenAI())
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_prompt_input_variables » all fields
field memory: langchain.schema.BaseMemory [Optional]#
Default memory store.
field prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n{history}\nHuman: {input}\nAI:', template_format='f-string', validate_template=True)#
Default conversation prompt to use.
property input_keys: List[str]#
Use this since some prompt vars come from history.
pydantic model langchain.chains.GraphQAChain[source]#
Chain for question-answering against a graph.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field entity_extraction_chain: LLMChain [Required]#
field graph: NetworkxEntityGraph [Required]#
field qa_chain: LLMChain [Required]# | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-18 | field qa_chain: LLMChain [Required]#
classmethod from_llm(llm: langchain.llms.base.BaseLLM, qa_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="Use the following knowledge triplets to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\nHelpful Answer:", template_format='f-string', validate_template=True), entity_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['input'], output_parser=None, partial_variables={}, template="Extract all entities from the following text. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return.\n\nEXAMPLE\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I'm working with Sam.\nOutput: Langchain, Sam\nEND OF EXAMPLE\n\nBegin!\n\n{input}\nOutput:", template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.graph_qa.base.GraphQAChain[source]#
Initialize from LLM.
pydantic model langchain.chains.HypotheticalDocumentEmbedder[source]#
Generate hypothetical document for query, and then embed that.
Based on https://arxiv.org/abs/2212.10496
| https://langchain.readthedocs.io/en/latest/reference/modules/chains.html
24f5f0b93163-19 | Validators
set_callback_manager » callback_manager
set_verbose » verbose
field base_embeddings: Embeddings [Required]#
field llm_chain: LLMChain [Required]#
combine_embeddings(embeddings: List[List[float]]) → List[float][source]#
Combine embeddings into final embeddings.
embed_documents(texts: List[str]) → List[List[float]][source]#
Call the base embeddings.
embed_query(text: str) → List[float][source]#
Generate a hypothetical document and embed it.
classmethod from_llm(llm: langchain.llms.base.BaseLLM, base_embeddings: langchain.embeddings.base.Embeddings, prompt_key: str) → langchain.chains.hyde.base.HypotheticalDocumentEmbedder[source]#
Load and use LLMChain for a specific prompt key.
property input_keys: List[str]#
Input keys for Hyde’s LLM chain.
property output_keys: List[str]#
Output keys for Hyde’s LLM chain.
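A short sketch; prompt_key="web_search" is one of the built-in HyDE prompt keys, and the models are arbitrary choices:
from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import HypotheticalDocumentEmbedder

hyde = HypotheticalDocumentEmbedder.from_llm(
    OpenAI(), OpenAIEmbeddings(), prompt_key="web_search"
)
# generates a hypothetical document for the query, then embeds it
vector = hyde.embed_query("What items does this store sell?")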
pydantic model langchain.chains.LLMBashChain[source]#
Chain that interprets a prompt and executes bash code to perform bash operations.
Example
from langchain import LLMBashChain, OpenAI
llm_bash = LLMBashChain(llm=OpenAI())
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field llm: langchain.schema.BaseLanguageModel [Required]#
LLM wrapper to use. | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-20 | LLM wrapper to use.
field prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put "#!/bin/bash" in your answer. Make sure to reason step by step, using this format:\n\nQuestion: "copy the files in the directory named \'target\' into a new directory at the same level as target called \'myNewDirectory\'"\n\nI need to take the following actions:\n- List all files in the directory\n- Create a new directory\n- Copy the files from the first directory into the second directory\n```bash\nls\nmkdir myNewDirectory\ncp -r target/* myNewDirectory\n```\n\nThat is the format. Begin!\n\nQuestion: {question}', template_format='f-string', validate_template=True)#
pydantic model langchain.chains.LLMChain[source]#
Chain to run queries against LLMs.
Example
from langchain import LLMChain, OpenAI, PromptTemplate
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(
input_variables=["adjective"], template=prompt_template
)
llm_chain = LLMChain(llm=OpenAI(), prompt=prompt)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field llm: BaseLanguageModel [Required]#
field prompt: BasePromptTemplate [Required]#
Prompt object to use.
async aapply(input_list: List[Dict[str, Any]]) → List[Dict[str, str]][source]#
| https://langchain.readthedocs.io/en/latest/reference/modules/chains.html
24f5f0b93163-21 | Utilize the LLM generate method for speed gains.
async aapply_and_parse(input_list: List[Dict[str, Any]]) → Sequence[Union[str, List[str], Dict[str, str]]][source]#
Call apply and then parse the results.
async agenerate(input_list: List[Dict[str, Any]]) → langchain.schema.LLMResult[source]#
Generate LLM result from inputs.
apply(input_list: List[Dict[str, Any]]) → List[Dict[str, str]][source]#
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]]) → Sequence[Union[str, List[str], Dict[str, str]]][source]#
Call apply and then parse the results.
async apredict(**kwargs: Any) → str[source]#
Format prompt with kwargs and pass to LLM.
Parameters
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
async aprep_prompts(input_list: List[Dict[str, Any]]) → Tuple[List[langchain.schema.PromptValue], Optional[List[str]]][source]#
Prepare prompts from inputs.
create_outputs(response: langchain.schema.LLMResult) → List[Dict[str, str]][source]#
Create outputs from response.
classmethod from_string(llm: langchain.schema.BaseLanguageModel, template: str) → langchain.chains.base.Chain[source]#
Create LLMChain from LLM and template.
generate(input_list: List[Dict[str, Any]]) → langchain.schema.LLMResult[source]#
Generate LLM result from inputs.
predict(**kwargs: Any) → str[source]#
Format prompt with kwargs and pass to LLM.
| https://langchain.readthedocs.io/en/latest/reference/modules/chains.html
24f5f0b93163-22 | Parameters
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
predict_and_parse(**kwargs: Any) → Union[str, List[str], Dict[str, str]][source]#
Call predict and then parse the results.
prep_prompts(input_list: List[Dict[str, Any]]) → Tuple[List[langchain.schema.PromptValue], Optional[List[str]]][source]#
Prepare prompts from inputs.
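For instance, a small sketch contrasting predict and apply (the template is arbitrary):
from langchain import LLMChain, OpenAI

joke_chain = LLMChain.from_string(OpenAI(), "Tell me a {adjective} joke")
joke_chain.predict(adjective="funny")  # a single formatted call
joke_chain.apply([{"adjective": "funny"}, {"adjective": "dry"}])  # batched calls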
pydantic model langchain.chains.LLMCheckerChain[source]#
Chain for question-answering with self-verification.
Example
from langchain import OpenAI, LLMCheckerChain
llm = OpenAI(temperature=0.7)
checker_chain = LLMCheckerChain(llm=llm)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field check_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\n{assertions}\nFor each assertion, determine whether it is true or false. If it is false, explain why.\n\n', template_format='f-string', validate_template=True)#
field create_draft_answer_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\n\n', template_format='f-string', validate_template=True)#
field list_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\n{statement}\nMake a bullet point list of the assumptions you made when producing the above statement.\n\n', template_format='f-string', validate_template=True)# | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-23 | field llm: langchain.llms.base.BaseLLM [Required]#
LLM wrapper to use.
field revised_answer_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template="{checked_assertions}\n\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\n\nAnswer:", template_format='f-string', validate_template=True)#
Prompt to use when questioning the documents.
pydantic model langchain.chains.LLMMathChain[source]#
Chain that interprets a prompt and executes python code to do math.
Example
from langchain import LLMMathChain, OpenAI
llm_math = LLMMathChain(llm=OpenAI())
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field llm: langchain.llms.base.BaseLLM [Required]#
LLM wrapper to use.
field prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into Python code that can be executed in Python 3 REPL. Use the output of running this code to answer the question.\n\nQuestion: ${{Question with math problem.}}\n```python\n${{Code that solves the problem and prints the solution}}\n```\n```output\n${{Output of running the code}}\n```\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n\n```python\nprint(37593 * 67)\n```\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: {question}\n', template_format='f-string', validate_template=True)# | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-24 | Prompt to use to translate to Python if necessary.
pydantic model langchain.chains.LLMRequestsChain[source]#
Chain that hits a URL and then uses an LLM to parse results.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field llm_chain: LLMChain [Required]#
field requests_wrapper: RequestsWrapper [Optional]#
field text_length: int = 8000#
pydantic model langchain.chains.LLMSummarizationCheckerChain[source]#
Chain for question-answering with self-verification.
Example
from langchain import OpenAI, LLMSummarizationCheckerChain
llm = OpenAI(temperature=0.0)
checker_chain = LLMSummarizationCheckerChain(llm=llm)
| https://langchain.readthedocs.io/en/latest/reference/modules/chains.html
24f5f0b93163-25 | Validators
set_callback_manager » callback_manager
set_verbose » verbose
field are_all_true_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\n\nIf all of the assertions are true, return "True". If any of the assertions are false, return "False".\n\nHere are some examples:\n===\n\nChecked Assertions: """\n- The sky is red: False\n- Water is made of lava: False\n- The sun is a star: True\n"""\nResult: False\n\n===\n\nChecked Assertions: """\n- The sky is blue: True\n- Water is wet: True\n- The sun is a star: True\n"""\nResult: True\n\n===\n\nChecked Assertions: """\n- The sky is blue - True\n- Water is made of lava- False\n- The sun is a star - True\n"""\nResult: False\n\n===\n\nChecked Assertions:"""\n{checked_assertions}\n"""\nResult:', template_format='f-string', validate_template=True)#
field check_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\n"""\n{assertions}\n"""\n\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".\nIf the fact is false, explain why.\n\n', template_format='f-string', validate_template=True)# | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-26 | field create_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\n\nFormat your output as a bulleted list.\n\nText:\n"""\n{summary}\n"""\n\nFacts:', template_format='f-string', validate_template=True)#
field llm: langchain.llms.base.BaseLLM [Required]#
LLM wrapper to use.
field max_checks: int = 2#
Maximum number of times to check the assertions. Defaults to double-checking.
field revised_summary_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true of false. If the answer is false, a suggestion is given for a correction.\n\nChecked Assertions:\n"""\n{checked_assertions}\n"""\n\nOriginal Summary:\n"""\n{summary}\n"""\n\nUsing these checked assertions, rewrite the original summary to be completely true.\n\nThe output should have the same structure and formatting as the original summary.\n\nSummary:', template_format='f-string', validate_template=True)#
pydantic model langchain.chains.MapReduceChain[source]#
Map-reduce chain.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field combine_documents_chain: BaseCombineDocumentsChain [Required]#
Chain to use to combine documents.
| https://langchain.readthedocs.io/en/latest/reference/modules/chains.html
24f5f0b93163-27 | field text_splitter: TextSplitter [Required]#
Text splitter to use.
classmethod from_params(llm: langchain.llms.base.BaseLLM, prompt: langchain.prompts.base.BasePromptTemplate, text_splitter: langchain.text_splitter.TextSplitter) → langchain.chains.mapreduce.MapReduceChain[source]#
Construct a map-reduce chain that uses the chain for map and reduce.
pydantic model langchain.chains.OpenAIModerationChain[source]#
Pass input through a moderation endpoint.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.chains import OpenAIModerationChain
moderation = OpenAIModerationChain()
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field error: bool = False#
Whether or not to error if bad content was found.
field model_name: Optional[str] = None#
Moderation model name to use.
field openai_api_key: Optional[str] = None#
pydantic model langchain.chains.PALChain[source]#
Implements Program-Aided Language Models.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field get_answer_expr: str = 'print(solution())'#
field llm: BaseLanguageModel [Required]#
field prompt: BasePromptTemplate [Required]#
field python_globals: Optional[Dict[str, Any]] = None#
field python_locals: Optional[Dict[str, Any]] = None#
| https://langchain.readthedocs.io/en/latest/reference/modules/chains.html
24f5f0b93163-28 | field return_intermediate_steps: bool = False#
field stop: str = '\n\n'#
classmethod from_colored_object_prompt(llm: langchain.schema.BaseLanguageModel, **kwargs: Any) → langchain.chains.pal.base.PALChain[source]#
Load PAL from colored object prompt.
classmethod from_math_prompt(llm: langchain.schema.BaseLanguageModel, **kwargs: Any) → langchain.chains.pal.base.PALChain[source]#
Load PAL from math prompt.
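For example, a sketch using the math prompt; the word problem is arbitrary and the chain will generate and execute Python:
from langchain import OpenAI
from langchain.chains import PALChain

pal = PALChain.from_math_prompt(OpenAI(temperature=0, max_tokens=512))
pal.run("Jan has three times as many pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many pets do the three have in total?")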
pydantic model langchain.chains.QAGenerationChain[source]#
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field input_key: str = 'text'#
field k: Optional[int] = None#
field llm_chain: LLMChain [Required]#
field output_key: str = 'questions'#
field text_splitter: TextSplitter = <langchain.text_splitter.RecursiveCharacterTextSplitter object>#
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, **kwargs: Any) → langchain.chains.qa_generation.base.QAGenerationChain[source]#
property input_keys: List[str]#
Input keys this chain expects.
property output_keys: List[str]#
Output keys this chain expects.
pydantic model langchain.chains.QAWithSourcesChain[source]#
Question answering with sources over documents.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_naming » all fields
pydantic model langchain.chains.SQLDatabaseChain[source]#
Chain for interacting with SQL Database.
Example
from langchain import SQLDatabaseChain, OpenAI, SQLDatabase
db = SQLDatabase(...)
| https://langchain.readthedocs.io/en/latest/reference/modules/chains.html
24f5f0b93163-29 | db_chain = SQLDatabaseChain(llm=OpenAI(), database=db)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field database: SQLDatabase [Required]#
SQL Database to connect to.
field llm: BaseLanguageModel [Required]#
LLM wrapper to use.
field prompt: BasePromptTemplate = PromptTemplate(input_variables=['input', 'table_info', 'dialect', 'top_k'], output_parser=None, partial_variables={}, template='Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\n\nNever query for all the columns from a specific table, only ask for a the few relevant columns given the question.\n\nPay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n\nUse the following format:\n\nQuestion: "Question here"\nSQLQuery: "SQL Query to run"\nSQLResult: "Result of the SQLQuery"\nAnswer: "Final answer here"\n\nOnly use the tables listed below.\n\n{table_info}\n\nQuestion: {input}', template_format='f-string', validate_template=True)#
Prompt to use to translate natural language to SQL.
field return_direct: bool = False#
Whether or not to return the result of querying the SQL table directly.
field return_intermediate_steps: bool = False#
Whether or not to return the intermediate steps along with the final answer.
| https://langchain.readthedocs.io/en/latest/reference/modules/chains.html
24f5f0b93163-30 | field top_k: int = 5#
Number of results to return from the query.
pydantic model langchain.chains.SQLDatabaseSequentialChain[source]#
Chain for querying SQL database that is a sequential chain.
The chain is as follows:
1. Based on the query, determine which tables to use.
2. Based on those tables, call the normal SQL database chain.
This is useful in cases where the number of tables in the database is large.
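Example (a sketch; the database URI and the question are placeholders):
from langchain import OpenAI, SQLDatabase
from langchain.chains import SQLDatabaseSequentialChain

db = SQLDatabase.from_uri("sqlite:///my_database.db")
chain = SQLDatabaseSequentialChain.from_llm(OpenAI(temperature=0), db)
chain.run("How many employees are there?")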
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field decider_chain: LLMChain [Required]#
field return_intermediate_steps: bool = False#
field sql_chain: SQLDatabaseChain [Required]# | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-31 | classmethod from_llm(llm: langchain.schema.BaseLanguageModel, database: langchain.sql_database.SQLDatabase, query_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['input', 'table_info', 'dialect', 'top_k'], output_parser=None, partial_variables={}, template='Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\n\nNever query for all the columns from a specific table, only ask for a the few relevant columns given the question.\n\nPay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n\nUse the following format:\n\nQuestion: "Question here"\nSQLQuery: "SQL Query to run"\nSQLResult: "Result of the | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-32 | "SQL Query to run"\nSQLResult: "Result of the SQLQuery"\nAnswer: "Final answer here"\n\nOnly use the tables listed below.\n\n{table_info}\n\nQuestion: {input}', template_format='f-string', validate_template=True), decider_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['query', 'table_names'], output_parser=CommaSeparatedListOutputParser(), partial_variables={}, template='Given the below input question and list of potential tables, output a comma separated list of the table names that may be necessary to answer this question.\n\nQuestion: {query}\n\nTable Names: {table_names}\n\nRelevant Table Names:', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.sql_database.base.SQLDatabaseSequentialChain[source]# | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
24f5f0b93163-33 | Load the necessary chains.
pydantic model langchain.chains.SequentialChain[source]#
Chain where the outputs of one chain feed directly into the next.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_chains » all fields
field chains: List[langchain.chains.base.Chain] [Required]#
field input_variables: List[str] [Required]#
field return_all: bool = False#
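Example (a sketch with return_all=True so intermediate outputs are returned; the prompts are arbitrary):
from langchain import LLMChain, OpenAI, PromptTemplate
from langchain.chains import SequentialChain

synopsis_chain = LLMChain(
    llm=OpenAI(), output_key="synopsis",
    prompt=PromptTemplate(input_variables=["title"],
                          template="Write a one-line synopsis for a play titled {title}."))
review_chain = LLMChain(
    llm=OpenAI(), output_key="review",
    prompt=PromptTemplate(input_variables=["synopsis"],
                          template="Write a short review of this synopsis:\n{synopsis}"))
overall = SequentialChain(chains=[synopsis_chain, review_chain],
                          input_variables=["title"], return_all=True)
overall({"title": "Tragedy at Sunset on the Beach"})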
pydantic model langchain.chains.SimpleSequentialChain[source]#
Simple chain where the outputs of one step feed directly into the next.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_chains » all fields
field chains: List[langchain.chains.base.Chain] [Required]#
field strip_outputs: bool = False#
pydantic model langchain.chains.TransformChain[source]#
Chain that transforms chain output.
Example
from langchain import TransformChain
transform_chain = TransformChain(input_variables=["text"],
    output_variables=["entities"], transform=func)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field input_variables: List[str] [Required]#
field output_variables: List[str] [Required]#
field transform: Callable[[Dict[str, str]], Dict[str, str]] [Required]#
pydantic model langchain.chains.VectorDBQA[source]#
Chain for question-answering against a vector database.
Example
from langchain import OpenAI, VectorDBQA
from langchain.vectorstores import FAISS
vectordb = FAISS(...)
vectordbQA = VectorDBQA(llm=OpenAI(), vectorstore=vectordb)
Validators
load_combine_documents_chain » all fields
set_callback_manager » callback_manager
set_verbose » verbose
| https://langchain.readthedocs.io/en/latest/reference/modules/chains.html
24f5f0b93163-34 | validate_search_type » all fields
field combine_documents_chain: BaseCombineDocumentsChain [Required]#
Chain to use to combine the documents.
field k: int = 4#
Number of documents to query for.
field return_source_documents: bool = False#
Return the source documents.
field search_kwargs: Dict[str, Any] [Optional]#
Extra search args.
field search_type: str = 'similarity'#
Search type to use over vectorstore: "similarity" or "mmr".
field vectorstore: VectorStore [Required]#
Vector Database to connect to.
classmethod from_chain_type(llm: langchain.schema.BaseLanguageModel, chain_type: str = 'stuff', chain_type_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.chains.vector_db_qa.base.VectorDBQA[source]#
Load chain from chain type.
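For instance, a sketch of from_chain_type; vectordb stands in for an initialized vector store as in the example above:
from langchain import OpenAI
from langchain.chains import VectorDBQA

qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type="stuff", vectorstore=vectordb)
qa.run("What did the author work on?")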
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, prompt: Optional[langchain.prompts.prompt.PromptTemplate] = None, **kwargs: Any) → langchain.chains.vector_db_qa.base.VectorDBQA[source]#
Initialize from LLM.
pydantic model langchain.chains.VectorDBQAWithSourcesChain[source]#
Question-answering with sources over a vector database.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_naming » all fields
field k: int = 4#
Number of results to return from the store.
field max_tokens_limit: int = 3375#
Restrict the docs to return from the store based on tokens;
enforced only for StuffDocumentsChain and only if reduce_k_below_max_tokens is set to true.
field reduce_k_below_max_tokens: bool = False#
Reduce the number of results to return from the store based on the tokens limit.
field search_kwargs: Dict[str, Any] [Optional]#
Extra search args.
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
Vector Database to connect to.
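An illustrative sketch; each indexed text carries a "source" in its metadata, which the chain uses to attribute its answer:
from langchain import OpenAI
from langchain.chains import VectorDBQAWithSourcesChain
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS

texts = ["Harrison worked at Kensho.", "Ankush worked at Facebook."]
# A "source" entry per document enables source attribution in the answer.
docsearch = FAISS.from_texts(
    texts, OpenAIEmbeddings(), metadatas=[{"source": f"doc-{i}"} for i in range(len(texts))]
)
chain = VectorDBQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0), chain_type="stuff", vectorstore=docsearch
)
# Returns a dict with "answer" and "sources" keys.
result = chain({"question": "Where did Harrison work?"}, return_only_outputs=True)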
langchain.chains.load_chain(path: Union[str, pathlib.Path], **kwargs: Any) → langchain.chains.base.Chain[source]#
Unified method for loading a chain from LangChainHub or local fs.
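For example, the lc:// form fetches a serialized chain from LangChainHub, while a plain path loads a file previously written by chain.save:
from langchain.chains import load_chain

chain = load_chain("lc://chains/llm-math/chain.json")  # from LangChainHub
# chain = load_chain("my_chain.json")                  # from the local filesystem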
Agents#
Interface for agents.
pydantic model langchain.agents.Agent[source]#
Class responsible for calling the language model and deciding the action.
This is driven by an LLMChain. The prompt in the LLMChain MUST include
a variable called “agent_scratchpad” where the agent can put its
intermediate work.
field allowed_tools: Optional[List[str]] = None#
field llm_chain: langchain.chains.llm.LLMChain [Required]#
field return_values: List[str] = ['output']#
async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]#
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
**kwargs – User inputs.
Returns
Action specifying what tool to use.
abstract classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool]) → langchain.prompts.base.BasePromptTemplate[source]#
Create a prompt for this class.
dict(**kwargs: Any) → Dict[source]#
Return dictionary representation of agent.
classmethod from_llm_and_tools(llm: langchain.llms.base.BaseLLM, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, **kwargs: Any) → langchain.agents.agent.Agent[source]#
Construct an agent from an LLM and tools.
get_full_inputs(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → Dict[str, Any][source]#
Create the full inputs for the LLMChain from intermediate steps.
plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]#
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
**kwargs – User inputs.
Returns
Action specifying what tool to use.
prepare_for_new_call() → None[source]#
Prepare the agent for a new call, if needed.
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → langchain.schema.AgentFinish[source]#
Return response when agent has been stopped due to max iterations.
save(file_path: Union[pathlib.Path, str]) → None[source]#
Save the agent.
Parameters
file_path – Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
property finish_tool_name: str#
Name of the tool to use to finish the chain.
abstract property llm_prefix: str#
Prefix to append the LLM call with.
abstract property observation_prefix: str#
Prefix to append the observation with.
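A sketch of the documented abstract surface; a real subclass also needs the library's private hooks (such as parsing the LLM output into an action), so treat this as an outline rather than a complete agent:
from typing import Sequence
from langchain.agents import Agent
from langchain.prompts import PromptTemplate
from langchain.tools.base import BaseTool

class OutlineAgent(Agent):
    # Illustrative subclass; not part of the library.

    @property
    def observation_prefix(self) -> str:
        # Inserted before each tool result in the scratchpad.
        return "Observation: "

    @property
    def llm_prefix(self) -> str:
        # Inserted before each new LLM call.
        return "Thought:"

    @classmethod
    def create_prompt(cls, tools: Sequence[BaseTool]) -> PromptTemplate:
        tool_strings = "\n".join(f"{t.name}: {t.description}" for t in tools)
        # The prompt MUST include {agent_scratchpad}, per the class docs above.
        template = (
            "Answer the question using these tools:\n"
            + tool_strings
            + "\n\nQuestion: {input}\nThought:{agent_scratchpad}"
        )
        return PromptTemplate(input_variables=["input", "agent_scratchpad"], template=template)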
pydantic model langchain.agents.AgentExecutor[source]#
Consists of an agent using tools.
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_tools » all fields
field agent: Agent [Required]#
field early_stopping_method: str = 'force'#
field max_iterations: Optional[int] = 15#
field return_intermediate_steps: bool = False#
field tools: Sequence[BaseTool] [Required]#
classmethod from_agent_and_tools(agent: langchain.agents.agent.Agent, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]#
Create from agent and tools.
save(file_path: Union[pathlib.Path, str]) → None[source]#
Raise an error; saving is not supported for Agent Executors.
save_agent(file_path: Union[pathlib.Path, str]) → None[source]#
Save the underlying agent.
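An illustrative sketch with a stand-in tool; max_iterations and early_stopping_method bound runaway agent loops:
from langchain import OpenAI
from langchain.agents import AgentExecutor, Tool, ZeroShotAgent

def fake_search(query: str) -> str:
    # Stand-in so the sketch runs without external APIs.
    return "no results"

tools = [Tool(name="Search", func=fake_search, description="useful for looking things up")]
agent = ZeroShotAgent.from_llm_and_tools(llm=OpenAI(temperature=0), tools=tools)
executor = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    max_iterations=5,               # give up after five steps
    early_stopping_method="force",  # return a canned answer when stopped
    verbose=True,
)
executor.run("What is LangChain?")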
pydantic model langchain.agents.ConversationalAgent[source]#
An agent designed to hold a conversation in addition to using tools.
field ai_prefix: str = 'AI'#
classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix: str = 'Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None) → langchain.prompts.prompt.PromptTemplate[source]#
Create prompt in the style of the conversational agent.
Parameters
tools – List of tools the agent will have access to, used to format the
prompt.
prefix – String to put before the list of tools.
suffix – String to put after the list of tools.
ai_prefix – String to use before AI output.
human_prefix – String to use before human output.
input_variables – List of input variables the final prompt will expect.
Returns
A PromptTemplate with the template assembled from the pieces here.
425acceea52e-7 | classmethod from_llm_and_tools(llm: langchain.llms.base.BaseLLM, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range | https://langchain.readthedocs.io/en/latest/reference/modules/agents.html |
425acceea52e-8 | and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix: str = 'Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response | https://langchain.readthedocs.io/en/latest/reference/modules/agents.html |
425acceea52e-9 | to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None, **kwargs: Any) → langchain.agents.agent.Agent[source]# | https://langchain.readthedocs.io/en/latest/reference/modules/agents.html |
Construct an agent from an LLM and tools.
property finish_tool_name: str#
Name of the tool to use to finish the chain.
property llm_prefix: str#
Prefix to append the LLM call with.
property observation_prefix: str#
Prefix to append the observation with.
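An illustrative sketch pairing the agent with buffer memory; the default suffix expects a {chat_history} variable, so the memory key must match:
from langchain import OpenAI
from langchain.agents import AgentExecutor, ConversationalAgent, Tool
from langchain.chains.conversation.memory import ConversationBufferMemory

def fake_search(query: str) -> str:
    return "no results"  # stand-in tool for the sketch

tools = [Tool(name="Search", func=fake_search, description="useful for looking things up")]
agent = ConversationalAgent.from_llm_and_tools(llm=OpenAI(temperature=0), tools=tools)
memory = ConversationBufferMemory(memory_key="chat_history")  # matches {chat_history}
agent_chain = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, memory=memory)
agent_chain.run(input="Hi, I'm Bob.")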
pydantic model langchain.agents.MRKLChain[source]#
Chain that implements the MRKL system.
Example
from langchain import OpenAI, MRKLChain
from langchain.chains.mrkl.base import ChainConfig
llm = OpenAI(temperature=0)
chains = [...]  # list of ChainConfig entries; see from_chains below
mrkl = MRKLChain.from_chains(llm=llm, chains=chains)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_tools » all fields
field agent: Agent [Required]#
field callback_manager: BaseCallbackManager [Optional]#
field early_stopping_method: str = 'force'#
field max_iterations: Optional[int] = 15#
field memory: Optional[BaseMemory] = None#
field return_intermediate_steps: bool = False#
field tools: Sequence[BaseTool] [Required]#
field verbose: bool [Optional]#
classmethod from_chains(llm: langchain.llms.base.BaseLLM, chains: List[langchain.agents.mrkl.base.ChainConfig], **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]#
User-friendly way to initialize the MRKL chain.
This is intended to be an easy way to get up and running with the
MRKL chain.
Parameters
llm – The LLM to use as the agent LLM.
chains – The chains the MRKL system has access to.
**kwargs – parameters to be passed to initialization.
Returns
An initialized MRKL chain.
Example
from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, MRKLChain
from langchain.chains.mrkl.base import ChainConfig
llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm)
chains = [
ChainConfig(
action_name = "Search",
action=search.search,
action_description="useful for searching"
),
ChainConfig(
action_name="Calculator",
action=llm_math_chain.run,
action_description="useful for doing math"
)
]
mrkl = MRKLChain.from_chains(llm, chains)
pydantic model langchain.agents.ReActChain[source]#
Chain that implements the ReAct paper.
Example
from langchain import ReActChain, OpenAI
from langchain.docstore import Wikipedia
react = ReActChain(llm=OpenAI(), docstore=Wikipedia())
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_tools » all fields
field agent: Agent [Required]#
field callback_manager: BaseCallbackManager [Optional]#
field early_stopping_method: str = 'force'#
field max_iterations: Optional[int] = 15#
field memory: Optional[BaseMemory] = None#
field return_intermediate_steps: bool = False#
field tools: Sequence[BaseTool] [Required]#
field verbose: bool [Optional]#
pydantic model langchain.agents.ReActTextWorldAgent[source]#
Agent for the ReAct TextWorld chain.
field i: int = 1#
classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool]) → langchain.prompts.base.BasePromptTemplate[source]#
Return default prompt.
pydantic model langchain.agents.SelfAskWithSearchChain[source]#
Chain that does self-ask with search.
Example
from langchain import SelfAskWithSearchChain, OpenAI, GoogleSerperAPIWrapper
search_chain = GoogleSerperAPIWrapper()
self_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_tools » all fields
field agent: Agent [Required]#
field callback_manager: BaseCallbackManager [Optional]#
field early_stopping_method: str = 'force'#
field max_iterations: Optional[int] = 15#
field memory: Optional[BaseMemory] = None#
field return_intermediate_steps: bool = False#
field tools: Sequence[BaseTool] [Required]#
field verbose: bool [Optional]#
pydantic model langchain.agents.Tool[source]#
Tool that takes in a function or coroutine directly.
Validators
set_callback_manager » callback_manager
field coroutine: Optional[Callable[[str], Awaitable[str]]] = None#
field description: str = ''#
field func: Callable[[str], str] [Required]#
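An illustrative sketch wrapping a plain function (plus an optional async counterpart); the name and description are what an agent sees when choosing among tools:
from langchain.agents import Tool

def word_count(text: str) -> str:
    return str(len(text.split()))

async def aword_count(text: str) -> str:
    return str(len(text.split()))

tool = Tool(
    name="WordCount",
    func=word_count,          # used for synchronous runs
    coroutine=aword_count,    # used when the agent runs asynchronously
    description="counts the words in the input text",
)
print(tool.run("one two three"))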
pydantic model langchain.agents.ZeroShotAgent[source]#
Agent for the MRKL chain.
field allowed_tools: Optional[List[str]] = None#
field llm_chain: langchain.chains.llm.LLMChain [Required]#
field return_values: List[str] = ['output']#
classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None) → langchain.prompts.prompt.PromptTemplate[source]#
Create prompt in the style of the zero shot agent.
Parameters
tools – List of tools the agent will have access to, used to format the
prompt.
prefix – String to put before the list of tools.
suffix – String to put after the list of tools.
input_variables – List of input variables the final prompt will expect.
Returns
A PromptTemplate with the template assembled from the pieces here.
classmethod from_llm_and_tools(llm: langchain.llms.base.BaseLLM, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, **kwargs: Any) → langchain.agents.agent.Agent[source]#
Construct an agent from an LLM and tools.
property llm_prefix: str#
Prefix to append the llm call with.
property observation_prefix: str#
Prefix to append the observation with.
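A sketch of the usual custom-agent recipe: build the prompt with create_prompt, wrap it in an LLMChain, then hand the agent to an AgentExecutor; the tool here is a stand-in:
from langchain import LLMChain, OpenAI
from langchain.agents import AgentExecutor, Tool, ZeroShotAgent

def fake_search(query: str) -> str:
    return "no results"  # stand-in tool for the sketch

tools = [Tool(name="Search", func=fake_search, description="useful for looking things up")]
prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix="Answer the following questions as best you can. You have access to the following tools:",
    suffix="Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}",
    input_variables=["input", "agent_scratchpad"],
)
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=[t.name for t in tools])
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
executor.run("How many letters are in the word 'LangChain'?")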
langchain.agents.create_csv_agent(llm: langchain.llms.base.BaseLLM, path: str, pandas_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]#
Create a CSV agent by loading the file into a pandas dataframe and using the pandas agent.
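An illustrative sketch; "titanic.csv" is a placeholder path for any local CSV file:
from langchain.agents import create_csv_agent
from langchain.llms import OpenAI

agent = create_csv_agent(OpenAI(temperature=0), "titanic.csv", verbose=True)
agent.run("How many rows are in the file?")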
langchain.agents.create_json_agent(llm: langchain.llms.base.BaseLLM, toolkit: langchain.agents.agent_toolkits.json.toolkit.JsonToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at