Embed documents using a MiniMax embedding endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Embed a query using a MiniMax embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
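A minimal usage sketch (assuming the MINIMAX_GROUP_ID and MINIMAX_API_KEY environment variables hold your MiniMax credentials):
from langchain.embeddings import MiniMaxEmbeddings
embeddings = MiniMaxEmbeddings()
query_vector = embeddings.embed_query("What is the capital of France?")
doc_vectors = embeddings.embed_documents(["Paris is the capital of France."])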
pydantic model langchain.embeddings.ModelScopeEmbeddings[source]#
Wrapper around modelscope_hub embedding models.
To use, you should have the modelscope python package installed.
Example
from langchain.embeddings import ModelScopeEmbeddings
model_id = "damo/nlp_corom_sentence-embedding_english-base"
embed = ModelScopeEmbeddings(model_id=model_id)
field model_id: str = 'damo/nlp_corom_sentence-embedding_english-base'#
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a modelscope embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a modelscope embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
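Continuing the example above, both methods can be called on the embed instance (the texts here are placeholders):
doc_vectors = embed.embed_documents(["Hello world", "How are you?"])
query_vector = embed.embed_query("What is the weather today?")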
pydantic model langchain.embeddings.MosaicMLInstructorEmbeddings[source]#
Wrapper around MosaicML’s embedding inference service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import MosaicMLInstructorEmbeddings
endpoint_url = (
"https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict"
)
mosaic_llm = MosaicMLInstructorEmbeddings(
endpoint_url=endpoint_url,
mosaicml_api_token="my-api-key"
)
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction used to embed documents.
field endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict'#
Endpoint URL to use.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction used to embed the query.
field retry_sleep: float = 1.0#
How long to sleep (in seconds) if a rate limit is encountered
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed documents using a MosaicML deployed instructor embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Embed a query using a MosaicML deployed instructor embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
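Continuing the example above, the embed and query instructions configured on the class are applied automatically when the two methods are called (the texts are placeholders):
doc_vectors = mosaic_llm.embed_documents(["First document", "Second document"])
query_vector = mosaic_llm.embed_query("What is MosaicML?")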
pydantic model langchain.embeddings.OpenAIEmbeddings[source]#
Wrapper around OpenAI embedding models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import OpenAIEmbeddings
openai = OpenAIEmbeddings(openai_api_key="my-api-key")
In order to use the library with Microsoft Azure endpoints, you need to set
the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION.
The OPENAI_API_TYPE must be set to ‘azure’ and the others correspond to
the properties of your endpoint.
In addition, the deployment name must be passed as the model parameter.
Example
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080"
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
deployment="your-embeddings-deployment-name",
model="your-embeddings-model-name",
openai_api_base="https://your-endpoint.openai.azure.com/",
openai_api_type="azure",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
field chunk_size: int = 1000#
Maximum number of texts to embed in each batch
field max_retries: int = 6#
Maximum number of retries to make when generating.
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout in seconds for the OpenAI request.
embed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]][source]#
Call out to OpenAI’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to OpenAI’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
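A short sketch of batched document embedding with a non-default chunk size (the API key and texts are placeholders):
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(openai_api_key="my-api-key")
texts = ["First document", "Second document", "Third document"]
doc_vectors = embeddings.embed_documents(texts, chunk_size=500)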
pydantic model langchain.embeddings.SagemakerEndpointEmbeddings[source]#
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
field content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]#
The content handler class that provides input and
output transform functions to handle formats between the LLM
and the endpoint.
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
field endpoint_kwargs: Optional[Dict] = None#
Optional attributes passed to the invoke_endpoint
function. See the boto3 docs for more info:
https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
field endpoint_name: str = ''#
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
field model_kwargs: Optional[Dict] = None#
Key word arguments to pass to the model.
field region_name: str = ''#
The AWS region where the Sagemaker model is deployed, e.g. us-west-2.
embed_documents(texts: List[str], chunk_size: int = 64) → List[List[float]][source]#
Compute doc embeddings using a SageMaker Inference Endpoint.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size defines how many input texts will
be grouped together as one request. If None, will use the
chunk size specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a SageMaker inference endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
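A minimal sketch of wiring up a content handler (the endpoint name, region, and the "vectors" response key are assumptions that depend on how your model container serializes requests and responses):
from typing import Dict, List
import json
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler

class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompts: List[str], model_kwargs: Dict) -> bytes:
        # Serialize the batch of texts into the JSON body the endpoint expects.
        input_str = json.dumps({"inputs": prompts, **model_kwargs})
        return input_str.encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        # "vectors" is a hypothetical response key; adjust to your container's schema.
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["vectors"]

se_embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="my-embeddings-endpoint",  # hypothetical endpoint name
    region_name="us-west-2",
    content_handler=ContentHandler(),
)
query_vector = se_embeddings.embed_query("What is SageMaker?")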
pydantic model langchain.embeddings.SelfHostedEmbeddings[source]#
Runs custom embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example using a model load function:
from langchain.embeddings import SelfHostedEmbeddings
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
def get_pipeline():
model_id = "facebook/bart-large" | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
7487ca26a34d-17 | def get_pipeline():
model_id = "facebook/bart-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
embeddings = SelfHostedEmbeddings(
model_load_fn=get_pipeline,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Example passing in a pipeline path:
from langchain.embeddings import SelfHostedHFEmbeddings
import pickle
import runhouse as rh
from transformers import pipeline
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
pipeline = pipeline(model="bert-base-uncased", task="feature-extraction")
rh.blob(pickle.dumps(pipeline),
path="models/pipeline.pkl").save().to(gpu, path="models")
embeddings = SelfHostedHFEmbeddings.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Validators
raise_deprecation » all fields
set_verbose » verbose
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddings on the remote hardware.
field inference_kwargs: Any = None#
Any kwargs to pass to the model’s inference function.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace transformer model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.SelfHostedHuggingFaceEmbeddings[source]#
Runs sentence_transformers embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)
Validators
raise_deprecation » all fields
set_verbose » verbose
field hardware: Any = None#
Remote hardware to send the inference function to.
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddings.
field load_fn_kwargs: Optional[dict] = None#
Key word arguments to pass to the model load function.
field model_id: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
field model_load_fn: Callable = <function load_embedding_model>#
Function to load the model remotely on the server.
field model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']#
Requirements to install on the hardware to run inference with the model.
pydantic model langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings[source]#
Runs InstructorEmbedding embedding models on self-hosted remote hardware. | https://python.langchain.com/en/latest/reference/modules/embeddings.html |
7487ca26a34d-19 | Runs InstructorEmbedding embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings
import runhouse as rh
model_name = "hkunlp/instructor-large"
gpu = rh.cluster(name='rh-a10x', instance_type='A100:1')
hf = SelfHostedHuggingFaceInstructEmbeddings(
model_name=model_name, hardware=gpu)
Validators
raise_deprecation » all fields
set_verbose » verbose
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction to use for embedding documents.
field model_id: str = 'hkunlp/instructor-large'#
Model name to use.
field model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']#
Requirements to install on the hardware to run inference with the model.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction to use for embedding query.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace instruct model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
langchain.embeddings.SentenceTransformerEmbeddings#
alias of langchain.embeddings.huggingface.HuggingFaceEmbeddings
pydantic model langchain.embeddings.TensorflowHubEmbeddings[source]#
Wrapper around tensorflow_hub embedding models.
To use, you should have the tensorflow_text python package installed.
Example
from langchain.embeddings import TensorflowHubEmbeddings
url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
tf = TensorflowHubEmbeddings(model_url=url)
field model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3'#
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a TensorflowHub embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a TensorflowHub embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
Document Loaders#
All different types of document loaders.
class langchain.document_loaders.AZLyricsLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that loads AZLyrics webpages.
load() → List[langchain.schema.Document][source]#
Load webpage.
class langchain.document_loaders.AirbyteJSONLoader(file_path: str)[source]#
Loader that loads local airbyte json files.
load() → List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.AirtableLoader(api_token: str, table_id: str, base_id: str)[source]#
Loader that loads rows from an Airtable table.
lazy_load() → Iterator[langchain.schema.Document][source]#
Load Table.
load() → List[langchain.schema.Document][source]#
Load Table.
pydantic model langchain.document_loaders.ApifyDatasetLoader[source]#
Logic for loading documents from Apify datasets.
field apify_client: Any = None#
field dataset_id: str [Required]#
The ID of the dataset on the Apify platform.
field dataset_mapping_function: Callable[[Dict], langchain.schema.Document] [Required]#
A custom function that takes a single dictionary (an Apify dataset item)
and converts it to an instance of the Document class.
load() → List[langchain.schema.Document][source]#
Load documents.
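A minimal sketch (the dataset ID and item fields are hypothetical; the mapping function decides how each dataset item becomes a Document):
from langchain.document_loaders import ApifyDatasetLoader
from langchain.schema import Document

loader = ApifyDatasetLoader(
    dataset_id="your-dataset-id",  # hypothetical dataset ID
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"], metadata={"source": item["url"]}
    ),
)
docs = loader.load()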
class langchain.document_loaders.ArxivLoader(query: str, load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False)[source]#
Loads a query result from arxiv.org into a list of Documents.
Each Document corresponds to one arXiv result.
The loader converts the original PDF format into text.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
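A minimal usage sketch (the query is a placeholder):
from langchain.document_loaders import ArxivLoader
loader = ArxivLoader(query="quantum computing", load_max_docs=2)
docs = loader.load()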
class langchain.document_loaders.AzureBlobStorageContainerLoader(conn_str: str, container: str, prefix: str = '')[source]#
Loading logic for loading documents from Azure Blob Storage.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.AzureBlobStorageFileLoader(conn_str: str, container: str, blob_name: str)[source]#
Loading logic for loading documents from Azure Blob Storage.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.BSHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]#
Loader that uses beautiful soup to parse HTML files.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.BibtexLoader(file_path: str, *, parser: Optional[langchain.utilities.bibtex.BibtexparserWrapper] = None, max_docs: Optional[int] = None, max_content_chars: Optional[int] = 4000, load_extra_metadata: bool = False, file_pattern: str = '[^:]+\\.pdf')[source]#
Loads a bibtex file into a list of Documents.
Each document represents one entry from the bibtex file.
If a PDF file is present in the file bibtex field, the original PDF
is loaded into the document text. If no such file entry is present,
the abstract field is used instead.
lazy_load() → Iterator[langchain.schema.Document][source]#
Load bibtex file using bibtexparser and get the article texts plus the
article metadata.
See https://bibtexparser.readthedocs.io/en/master/
Returns
a list of documents with the document.page_content in text format
load() → List[langchain.schema.Document][source]#
Load bibtex file documents from the given bibtex file path.
See https://bibtexparser.readthedocs.io/en/master/
Parameters
file_path – the path to the bibtex file
Returns
a list of documents with the document.page_content in text format
class langchain.document_loaders.BigQueryLoader(query: str, project: Optional[str] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None, credentials: Optional[Credentials] = None)[source]#
Loads a query result from BigQuery into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
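A minimal sketch (the project, dataset, table, and column names are hypothetical):
from langchain.document_loaders import BigQueryLoader
query = "SELECT title, body, url FROM `my_project.my_dataset.articles`"
loader = BigQueryLoader(query, page_content_columns=["title", "body"], metadata_columns=["url"])
docs = loader.load()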
class langchain.document_loaders.BiliBiliLoader(video_urls: List[str])[source]#
Loader that loads bilibili transcripts.
load() → List[langchain.schema.Document][source]#
Load from bilibili url.
class langchain.document_loaders.BlackboardLoader(blackboard_course_url: str, bbrouter: str, load_all_recursively: bool = True, basic_auth: Optional[Tuple[str, str]] = None, cookies: Optional[dict] = None)[source]#
Loader that loads all documents from a Blackboard course.
This loader is not compatible with all Blackboard courses. It is only
compatible with courses that use the new Blackboard interface.
To use this loader, you must have the BbRouter cookie. You can get this
cookie by logging into the course and then copying the value of the
BbRouter cookie from the browser’s developer tools.
Example
from langchain.document_loaders import BlackboardLoader
loader = BlackboardLoader(
blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",
bbrouter="expires:12345...",
)
documents = loader.load()
base_url: str#
check_bs4() → None[source]#
Check if BeautifulSoup4 is installed.
Raises
ImportError – If BeautifulSoup4 is not installed.
download(path: str) → None[source]#
Download a file from a url.
Parameters
path – Path to the file.
folder_path: str#
load() → List[langchain.schema.Document][source]#
Load data into document objects.
Returns
List of documents.
load_all_recursively: bool#
parse_filename(url: str) → str[source]#
Parse the filename from a url.
Parameters
url – Url to parse the filename from.
Returns
The filename.
class langchain.document_loaders.BlockchainDocumentLoader(contract_address: str, blockchainType: langchain.document_loaders.blockchain.BlockchainType = BlockchainType.ETH_MAINNET, api_key: str = 'docs-demo', startToken: str = '', get_all_tokens: bool = False, max_execution_time: Optional[int] = None)[source]#
Loads elements from a blockchain smart contract into Langchain documents.
The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,
Polygon mainnet, and Polygon Mumbai testnet.
If no BlockchainType is specified, the default is Ethereum mainnet.
The Loader uses the Alchemy API to interact with the blockchain.
ALCHEMY_API_KEY environment variable must be set to use this loader.
The API returns 100 NFTs per request and can be paginated using the
startToken parameter.
If get_all_tokens is set to True, the loader will get all tokens
on the contract. Note that for contracts with a large number of tokens,
this may take a long time (e.g. 10k tokens is 100 requests).
Default value is false for this reason.
The max_execution_time (sec) can be set to limit the execution time
of the loader.
Future versions of this loader can:
Support additional Alchemy APIs (e.g. getTransactions, etc.)
Support additional blockchain APIs (e.g. Infura, Opensea, etc.)
load() → List[langchain.schema.Document][source]#
Load data into document objects.
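A minimal sketch (the contract address and API key are placeholders):
from langchain.document_loaders import BlockchainDocumentLoader
from langchain.document_loaders.blockchain import BlockchainType

loader = BlockchainDocumentLoader(
    contract_address="0x1234567890abcdef1234567890abcdef12345678",  # hypothetical contract
    blockchainType=BlockchainType.ETH_MAINNET,
    api_key="your-alchemy-api-key",
)
docs = loader.load()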
class langchain.document_loaders.CSVLoader(file_path: str, source_column: Optional[str] = None, csv_args: Optional[Dict] = None, encoding: Optional[str] = None)[source]#
Loads a CSV file into a list of documents.
Each document represents one row of the CSV file. Every row is converted into a
key/value pair and outputted to a new line in the document’s page_content.
The source for each document loaded from csv is set to the value of the
file_path argument for all documents by default.
You can override this by setting the source_column argument to the
name of a column in the CSV file.
The source of each document will then be set to the value of the column
with the name specified in source_column.
Output Example:
column1: value1
column2: value2
column3: value3
load() → List[langchain.schema.Document][source]#
Load data into document objects.
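A minimal sketch (the file path and column name are hypothetical):
from langchain.document_loaders import CSVLoader
loader = CSVLoader(file_path="data/products.csv", source_column="url")
docs = loader.load()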
class langchain.document_loaders.ChatGPTLoader(log_file: str, num_logs: int = -1)[source]#
Loader that loads conversations from exported ChatGPT data.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.CoNLLULoader(file_path: str)[source]#
Load CoNLL-U files.
load() → List[langchain.schema.Document][source]#
Load from file path.
class langchain.document_loaders.CollegeConfidentialLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that loads College Confidential webpages.
load() → List[langchain.schema.Document][source]#
Load webpage.
class langchain.document_loaders.ConfluenceLoader(url: str, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None, cloud: Optional[bool] = True, number_of_retries: Optional[int] = 3, min_retry_seconds: Optional[int] = 2, max_retry_seconds: Optional[int] = 10, confluence_kwargs: Optional[dict] = None)[source]#
Load Confluence pages. Port of https://llamahub.ai/l/confluence
This currently supports username/api_key, Oauth2 login or personal access token
authentication.
Specify a list page_ids and/or space_key to load in the corresponding pages into
Document objects, if both are specified the union of both sets will be returned.
You can also specify a boolean include_attachments to include attachments, this
is set to False by default, if set to True all attachments will be downloaded and
ConfluenceReader will extract the text from the attachments and add it to the
Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG,
SVG, Word and Excel.
Hint: space_key and page_id can both be found in the URL of a page in Confluence
- https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
Example
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="https://yoursite.atlassian.com/wiki",
username="me",
api_key="12345"
)
documents = loader.load(space_key="SPACE",limit=50)
Parameters
url (str) – Base URL of the Confluence instance
api_key (str, optional) – API key used for authentication, defaults to None
username (str, optional) – Username used for authentication, defaults to None
oauth2 (dict, optional) – OAuth2 credentials used for authentication, defaults to {}
token (str, optional) – Personal access token used for authentication, defaults to None
cloud (bool, optional) – Whether the Confluence instance is hosted on Atlassian Cloud, defaults to True
number_of_retries (Optional[int], optional) – How many times to retry, defaults to 3
min_retry_seconds (Optional[int], optional) – defaults to 2
max_retry_seconds (Optional[int], optional) – defaults to 10
confluence_kwargs (dict, optional) – additional kwargs to initialize confluence with
Raises
ValueError – Errors while validating input
ImportError – Required dependencies not installed.
is_public_page(page: dict) → bool[source]#
Check if a page is publicly accessible.
load(space_key: Optional[str] = None, page_ids: Optional[List[str]] = None, label: Optional[str] = None, cql: Optional[str] = None, include_restricted_content: bool = False, include_archived_content: bool = False, include_attachments: bool = False, include_comments: bool = False, limit: Optional[int] = 50, max_pages: Optional[int] = 1000, ocr_languages: Optional[str] = None) → List[langchain.schema.Document][source]#
Parameters
space_key (Optional[str], optional) – Space key retrieved from a confluence URL, defaults to None
page_ids (Optional[List[str]], optional) – List of specific page IDs to load, defaults to None
label (Optional[str], optional) – Get all pages with this label, defaults to None
cql (Optional[str], optional) – CQL Expression, defaults to None
include_restricted_content (bool, optional) – Whether to include restricted content, defaults to False
include_archived_content (bool, optional) – Whether to include archived content,
defaults to False
include_attachments (bool, optional) – Whether to include attachments, defaults to False
include_comments (bool, optional) – Whether to include comments, defaults to False
limit (int, optional) – Maximum number of pages to retrieve per request, defaults to 50
max_pages (int, optional) – Maximum number of pages to retrieve in total, defaults to 1000
ocr_languages (str, optional) – The languages to use for the Tesseract agent. To use a
language, you’ll first need to install the appropriate
Tesseract language pack.
Raises
ValueError – Errors while validating input
ImportError – Required dependencies not installed.
Returns
A list of loaded Documents
Return type
List[Document]
paginate_request(retrieval_method: Callable, **kwargs: Any) → List[source]#
Paginate the various methods to retrieve groups of pages.
Unfortunately, due to page size, sometimes the Confluence API
doesn’t match the limit value. If limit is >100 confluence
seems to cap the response to 100. Also, due to the Atlassian Python
package, we don’t get the “next” values from the “_links” key because
they only return the value from the results key. So here, the pagination
starts from 0 and goes until the max_pages, getting the limit number
of pages with each request. We have to manually check if there
are more docs based on the length of the returned list of pages, rather than
just checking for the presence of a next key in the response like this page
would have you do:
https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/
Parameters
retrieval_method (callable) – Function used to retrieve docs
Returns
List of documents
Return type
List
process_attachment(page_id: str, ocr_languages: Optional[str] = None) → List[str][source]#
process_doc(link: str) → str[source]#
process_image(link: str, ocr_languages: Optional[str] = None) → str[source]#
process_page(page: dict, include_attachments: bool, include_comments: bool, ocr_languages: Optional[str] = None) → langchain.schema.Document[source]#
process_pages(pages: List[dict], include_restricted_content: bool, include_attachments: bool, include_comments: bool, ocr_languages: Optional[str] = None) → List[langchain.schema.Document][source]#
Process a list of pages into a list of documents.
process_pdf(link: str, ocr_languages: Optional[str] = None) → str[source]#
process_svg(link: str, ocr_languages: Optional[str] = None) → str[source]#
process_xls(link: str) → str[source]#
static validate_init_args(url: Optional[str] = None, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None) → Optional[List][source]#
Validates proper combinations of init arguments
class langchain.document_loaders.DataFrameLoader(data_frame: Any, page_content_column: str = 'text')[source]#
Load Pandas DataFrames.
load() → List[langchain.schema.Document][source]#
Load from the dataframe.
class langchain.document_loaders.DiffbotLoader(api_token: str, urls: List[str], continue_on_failure: bool = True)[source]#
Loader that loads Diffbot file json.
load() → List[langchain.schema.Document][source]#
Extract text from Diffbot on all the URLs and return Document instances
class langchain.document_loaders.DirectoryLoader(path: str, glob: str = '**/[!.]*', silent_errors: bool = False, load_hidden: bool = False, loader_cls: typing.Union[typing.Type[langchain.document_loaders.unstructured.UnstructuredFileLoader], typing.Type[langchain.document_loaders.text.TextLoader], typing.Type[langchain.document_loaders.html_bs.BSHTMLLoader]] = <class 'langchain.document_loaders.unstructured.UnstructuredFileLoader'>, loader_kwargs: typing.Optional[dict] = None, recursive: bool = False, show_progress: bool = False, use_multithreading: bool = False, max_concurrency: int = 4)[source]#
Loading logic for loading documents from a directory.
load() → List[langchain.schema.Document][source]#
Load documents.
load_file(item: pathlib.Path, path: pathlib.Path, docs: List[langchain.schema.Document], pbar: Optional[Any]) → None[source]#
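A minimal sketch (the directory and glob pattern are placeholders; loader_cls defaults to UnstructuredFileLoader):
from langchain.document_loaders import DirectoryLoader, TextLoader
loader = DirectoryLoader("docs/", glob="**/*.md", loader_cls=TextLoader, show_progress=True)
docs = loader.load()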
class langchain.document_loaders.DiscordChatLoader(chat_log: pd.DataFrame, user_id_col: str = 'ID')[source]#
Load Discord chat logs.
load() → List[langchain.schema.Document][source]#
Load all chat messages.
pydantic model langchain.document_loaders.DocugamiLoader[source]#
Loader that loads processed docs from Docugami.
To use, you should have the lxml python package installed.
field access_token: Optional[str] = None#
field api: str = 'https://api.docugami.com/v1preview1'#
field docset_id: Optional[str] = None#
field document_ids: Optional[Sequence[str]] = None#
field file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = None#
field min_chunk_size: int = 32#
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.Docx2txtLoader(file_path: str)[source]#
Loads a DOCX with docx2txt and chunks at character level.
Defaults to check for local file, but if the file is a web path, it will download it
to a temporary file, and use that, then clean up the temporary file after completion
load() → List[langchain.schema.Document][source]#
Load given path as single page.
class langchain.document_loaders.DuckDBLoader(query: str, database: str = ':memory:', read_only: bool = False, config: Optional[Dict[str, str]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]#
Loads a query result from DuckDB into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
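A minimal sketch (the query and column names are hypothetical):
from langchain.document_loaders import DuckDBLoader
loader = DuckDBLoader(
    "SELECT text, id FROM read_csv_auto('data.csv')",
    page_content_columns=["text"],
    metadata_columns=["id"],
)
docs = loader.load()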
class langchain.document_loaders.EverNoteLoader(file_path: str, load_single_document: bool = True)[source]#
EverNote Loader.
Loads an EverNote notebook export file e.g. my_notebook.enex into Documents.
Instructions on producing this file can be found at
https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML
Currently only the plain text in the note is extracted and stored as the contents
of the Document; any non-content metadata (e.g. ‘author’, ‘created’, ‘updated’, etc.,
but not ‘content-raw’ or ‘resource’) tags on the note will be extracted and stored
as metadata on the Document.
Parameters
file_path (str) – The path to the notebook export with a .enex extension
load_single_document (bool) – Whether or not to concatenate the content of all
notes into a single long Document. If this is set to True, the only metadata on
the document will be the ‘source’, which contains the file name of the export.
load() → List[langchain.schema.Document][source]#
Load documents from EverNote export file.
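A minimal usage sketch (the export file name is a placeholder):
from langchain.document_loaders import EverNoteLoader
loader = EverNoteLoader("my_notebook.enex", load_single_document=True)
docs = loader.load()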
class langchain.document_loaders.FacebookChatLoader(path: str)[source]#
Loader that loads Facebook messages json directory dump.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.FaunaLoader(query: str, page_content_field: str, secret: str, metadata_fields: Optional[Sequence[str]] = None)[source]#
query#
The FQL query string to execute.
Type
str
page_content_field#
The field that contains the content of each page.
Type
str
secret#
The secret key for authenticating to FaunaDB.
Type
str
metadata_fields#
Optional list of field names to include in metadata.
Type
Optional[Sequence[str]]
lazy_load() → Iterator[langchain.schema.Document][source]#
A lazy loader for document content.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.FigmaFileLoader(access_token: str, ids: str, key: str)[source]#
Loader that loads Figma file json.
load() → List[langchain.schema.Document][source]#
Load file
class langchain.document_loaders.GCSDirectoryLoader(project_name: str, bucket: str, prefix: str = '')[source]#
Loading logic for loading documents from GCS.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.GCSFileLoader(project_name: str, bucket: str, blob: str)[source]#
Loading logic for loading documents from GCS.
load() → List[langchain.schema.Document][source]#
Load documents.
pydantic model langchain.document_loaders.GitHubIssuesLoader[source]#
Validators
validate_environment » all fields
validate_since » since
field assignee: Optional[str] = None#
Filter on assigned user. Pass ‘none’ for no user and ‘*’ for any user.
field creator: Optional[str] = None#
Filter on the user that created the issue.
field direction: Optional[Literal['asc', 'desc']] = None#
The direction to sort the results by. Can be one of: ‘asc’, ‘desc’.
field include_prs: bool = True#
If True include Pull Requests in results, otherwise ignore them.
field labels: Optional[List[str]] = None#
Label names to filter on. Example: bug,ui,@high.
field mentioned: Optional[str] = None#
Filter on a user that’s mentioned in the issue.
field milestone: Optional[Union[int, Literal['*', 'none']]] = None#
If integer is passed, it should be a milestone’s number field.
If the string ‘*’ is passed, issues with any milestone are accepted.
If the string ‘none’ is passed, issues without milestones are returned.
field since: Optional[str] = None#
Only show notifications updated after the given time.
This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.
field sort: Optional[Literal['created', 'updated', 'comments']] = None#
What to sort results by. Can be one of: ‘created’, ‘updated’, ‘comments’.
Default is ‘created’.
field state: Optional[Literal['open', 'closed', 'all']] = None#
Filter on issue state. Can be one of: ‘open’, ‘closed’, ‘all’.
lazy_load() → Iterator[langchain.schema.Document][source]#
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
load() → List[langchain.schema.Document][source]#
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
parse_issue(issue: dict) → langchain.schema.Document[source]#
Create Document objects from a list of GitHub issues.
property query_params: str#
property url: str#
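A minimal sketch (the repo and token are placeholders; the repo and access_token fields are assumed from the loader's validators, and the token can presumably also come from the GITHUB_PERSONAL_ACCESS_TOKEN environment variable):
from langchain.document_loaders import GitHubIssuesLoader
loader = GitHubIssuesLoader(
    repo="hwchase17/langchain",      # owner/repo
    access_token="ghp_your_token",   # hypothetical token
    state="closed",
    include_prs=False,
)
docs = loader.load()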
class langchain.document_loaders.GitLoader(repo_path: str, clone_url: Optional[str] = None, branch: Optional[str] = 'main', file_filter: Optional[Callable[[str], bool]] = None)[source]#
Loads files from a Git repository into a list of documents.
Repository can be local on disk available at repo_path,
or remote at clone_url that will be cloned to repo_path.
Currently supports only text files.
Each document represents one file in the repository. The path points to
the local Git repository, and the branch specifies the branch to load
files from. By default, it loads from the main branch.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
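A minimal sketch (the paths and filter are placeholders):
from langchain.document_loaders import GitLoader
loader = GitLoader(
    repo_path="./example_repo",  # local checkout; cloned from clone_url if missing
    clone_url="https://github.com/hwchase17/langchain",
    branch="master",
    file_filter=lambda file_path: file_path.endswith(".py"),
)
docs = loader.load()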
class langchain.document_loaders.GitbookLoader(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main')[source]#
Load GitBook data.
load from either a single page, or
load all (relative) paths in the navbar.
load() → List[langchain.schema.Document][source]#
Fetch text from one single GitBook page.
class langchain.document_loaders.GoogleApiClient(credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json'), service_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json'), token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json'))[source]#
A Generic Google Api Client.
To use, you should have the google_auth_oauthlib,youtube_transcript_api,google
python package installed.
As the google api expects credentials you need to set up a google account and
register your Service. “https://developers.google.com/docs/api/quickstart/python”
Example
from langchain.document_loaders import GoogleApiClient
google_api_client = GoogleApiClient(
service_account_path=Path("path_to_your_sec_file.json")
)
credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')#
service_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')#
token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')#
classmethod validate_channel_or_videoIds_is_set(values: Dict[str, Any]) → Dict[str, Any][source]#
Validate that either channel_name or video_ids is set, but not both.
class langchain.document_loaders.GoogleApiYoutubeLoader(google_api_client: langchain.document_loaders.youtube.GoogleApiClient, channel_name: Optional[str] = None, video_ids: Optional[List[str]] = None, add_video_info: bool = True, captions_language: str = 'en', continue_on_failure: bool = False)[source]#
Loader that loads all Videos from a Channel
To use, you should have the googleapiclient,youtube_transcript_api
python package installed.
As the service needs a google_api_client, you first have to initialize
the GoogleApiClient.
Additionally you have to either provide a channel name or a list of videoids
“https://developers.google.com/docs/api/quickstart/python”
Example
from langchain.document_loaders import GoogleApiClient
from langchain.document_loaders import GoogleApiYoutubeLoader
google_api_client = GoogleApiClient(
service_account_path=Path("path_to_your_sec_file.json")
)
loader = GoogleApiYoutubeLoader(
google_api_client=google_api_client,
channel_name = "CodeAesthetic"
)
loader.load()
add_video_info: bool = True#
captions_language: str = 'en'#
channel_name: Optional[str] = None#
continue_on_failure: bool = False#
google_api_client: langchain.document_loaders.youtube.GoogleApiClient#
load() → List[langchain.schema.Document][source]#
Load documents.
classmethod validate_channel_or_videoIds_is_set(values: Dict[str, Any]) → Dict[str, Any][source]#
Validate that either channel_name or video_ids is set, but not both.
video_ids: Optional[List[str]] = None#
pydantic model langchain.document_loaders.GoogleDriveLoader[source]#
Loader that loads Google Docs from Google Drive.
Validators
validate_credentials_path » credentials_path
validate_inputs » all fields
field credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')#
field document_ids: Optional[List[str]] = None#
field file_ids: Optional[List[str]] = None#
field file_types: Optional[Sequence[str]] = None#
field folder_id: Optional[str] = None#
field load_trashed_files: bool = False#
field recursive: bool = False#
field service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json')#
field token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')#
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.GutenbergLoader(file_path: str)[source]#
Loader that uses urllib to load .txt web files.
load() → List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.HNLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Load Hacker News data from either main page results or the comments page.
load() → List[langchain.schema.Document][source]#
Get important HN webpage information.
Components are:
title
content
source url,
time of post
author of the post
number of comments
rank of the post
load_comments(soup_info: Any) → List[langchain.schema.Document][source]#
Load comments from a HN post.
load_results(soup: Any) → List[langchain.schema.Document][source]#
Load items from an HN page.
class langchain.document_loaders.HuggingFaceDatasetLoader(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)[source]#
Loading logic for loading documents from the Hugging Face Hub.
lazy_load() → Iterator[langchain.schema.Document][source]#
Load documents lazily.
load() → List[langchain.schema.Document][source]#
Load documents.
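A minimal sketch using a public dataset (assuming the imdb dataset's text column):
from langchain.document_loaders import HuggingFaceDatasetLoader
loader = HuggingFaceDatasetLoader(path="imdb", page_content_column="text")
docs = loader.load()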
class langchain.document_loaders.IFixitLoader(web_path: str)[source]#
Load iFixit repair guides, device wikis and answers.
iFixit is the largest, open repair community on the web. The site contains nearly
100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is
licensed under CC-BY.
This loader will allow you to download the text of a repair guide, text of Q&A’s
and wikis from devices on iFixit using their open APIs and web scraping.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
load_device(url_override: Optional[str] = None, include_guides: bool = True) → List[langchain.schema.Document][source]#
load_guide(url_override: Optional[str] = None) → List[langchain.schema.Document][source]#
load_questions_and_answers(url_override: Optional[str] = None) → List[langchain.schema.Document][source]#
static load_suggestions(query: str = '', doc_type: str = 'all') → List[langchain.schema.Document][source]#
class langchain.document_loaders.IMSDbLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that loads IMSDb webpages.
load() → List[langchain.schema.Document][source]#
Load webpage.
class langchain.document_loaders.ImageCaptionLoader(path_images: Union[str, List[str]], blip_processor: str = 'Salesforce/blip-image-captioning-base', blip_model: str = 'Salesforce/blip-image-captioning-base')[source]#
Loader that loads the captions of an image
load() → List[langchain.schema.Document][source]#
Load from a list of image files
class langchain.document_loaders.IuguLoader(resource: str, api_token: Optional[str] = None)[source]#
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.JSONLoader(file_path: Union[str, pathlib.Path], jq_schema: str, content_key: Optional[str] = None, metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None, text_content: bool = True)[source]#
Loads a JSON file, using a provided jq schema to extract the text into
documents.
Example
[{"text": ...}, {"text": ...}, {"text": ...}] -> schema = .[].text
{"key": [{"text": ...}, {"text": ...}, {"text": ...}]} -> schema = .key[].text
["", "", ""] -> schema = .[]
load() → List[langchain.schema.Document][source]#
Load and return documents from the JSON file.
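A minimal sketch matching the second schema pattern above (the file and keys are hypothetical):
from langchain.document_loaders import JSONLoader
loader = JSONLoader(file_path="chat.json", jq_schema=".messages[].text")
docs = loader.load()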
class langchain.document_loaders.JoplinLoader(access_token: Optional[str] = None, port: int = 41184, host: str = 'localhost')[source]#
Loader that fetches notes from Joplin.
In order to use this loader, you need to have Joplin running with the
Web Clipper enabled (look for “Web Clipper” in the app settings).
To get the access token, you need to go to the Web Clipper options and
under “Advanced Options” you will find the access token.
You can find more information about the Web Clipper service here:
https://joplinapp.org/clipper/
lazy_load() → Iterator[langchain.schema.Document][source]#
A lazy loader for document content.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
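A minimal sketch (the token is a placeholder copied from the Web Clipper options):
from langchain.document_loaders import JoplinLoader
loader = JoplinLoader(access_token="your-joplin-token", port=41184, host="localhost")
docs = loader.load()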
class langchain.document_loaders.MWDumpLoader(file_path: str, encoding: Optional[str] = 'utf8')[source]#
Load MediaWiki dump from XML file.
Example
from langchain.document_loaders import MWDumpLoader
loader = MWDumpLoader(
file_path="myWiki.xml",
encoding="utf8"
)
docs = loader.load()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000, chunk_overlap=0
)
texts = text_splitter.split_documents(docs)
Parameters
file_path (str) – XML local file path
encoding (str, optional) – Charset encoding, defaults to “utf8”
load() → List[langchain.schema.Document][source]#
Load from file path.
class langchain.document_loaders.MastodonTootsLoader(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]#
Mastodon toots loader.
load() → List[langchain.schema.Document][source]#
Load toots into documents.
class langchain.document_loaders.MathpixPDFLoader(file_path: str, processed_file_format: str = 'mmd', max_wait_time_seconds: int = 500, should_clean_pdf: bool = False, **kwargs: Any)[source]#
clean_pdf(contents: str) → str[source]#
property data: dict#
get_processed_pdf(pdf_id: str) → str[source]#
property headers: dict#
load() → List[langchain.schema.Document][source]#
Load data into document objects.
send_pdf() → str[source]#
property url: str#
wait_for_processing(pdf_id: str) → None[source]#
class langchain.document_loaders.MaxComputeLoader(query: str, api_wrapper: langchain.utilities.max_compute.MaxComputeAPIWrapper, *, page_content_columns: Optional[Sequence[str]] = None, metadata_columns: Optional[Sequence[str]] = None)[source]#
Loads a query result from Alibaba Cloud MaxCompute table into documents.
classmethod from_params(query: str, endpoint: str, project: str, *, access_id: Optional[str] = None, secret_access_key: Optional[str] = None, **kwargs: Any) → langchain.document_loaders.max_compute.MaxComputeLoader[source]#
Convenience constructor that builds the MaxCompute API wrapper from given parameters.
Parameters
query – SQL query to execute.
endpoint – MaxCompute endpoint.
project – A project is a basic organizational unit of MaxCompute, which is
similar to a database.
access_id – MaxCompute access ID. Should be passed in directly or set as the
environment variable MAX_COMPUTE_ACCESS_ID.
secret_access_key – MaxCompute secret access key. Should be passed in
directly or set as the environment variable
MAX_COMPUTE_SECRET_ACCESS_KEY.
lazy_load() → Iterator[langchain.schema.Document][source]#
A lazy loader for document content.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.ModernTreasuryLoader(resource: str, organization_id: Optional[str] = None, api_key: Optional[str] = None)[source]#
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.NotebookLoader(path: str, include_outputs: bool = False, max_output_length: int = 10, remove_newline: bool = False, traceback: bool = False)[source]#
Loader that loads .ipynb notebook files.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.NotionDBLoader(integration_token: str, database_id: str, request_timeout_sec: Optional[int] = 10)[source]#
Notion DB Loader.
Reads content from pages within a Notion Database.
:param integration_token: Notion integration token.
:type integration_token: str
:param database_id: Notion database id.
:type database_id: str
:param request_timeout_sec: Timeout for Notion requests in seconds.
:type request_timeout_sec: int
load() → List[langchain.schema.Document][source]#
Load documents from the Notion database.
:returns: List of documents.
:rtype: List[Document]
load_page(page_id: str) → langchain.schema.Document[source]#
Read a page.
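A minimal sketch (the token and database id are placeholders):
from langchain.document_loaders import NotionDBLoader
loader = NotionDBLoader(
    integration_token="secret_your_token",
    database_id="your-database-id",
    request_timeout_sec=30,
)
docs = loader.load()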
class langchain.document_loaders.NotionDirectoryLoader(path: str)[source]#
Loader that loads Notion directory dump.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.ObsidianLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]#
Loader that loads Obsidian files from disk.
FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL)#
load() → List[langchain.schema.Document][source]#
Load documents.
pydantic model langchain.document_loaders.OneDriveFileLoader[source]#
field file: File [Required]#
load() → List[langchain.schema.Document][source]#
Load Documents
pydantic model langchain.document_loaders.OneDriveLoader[source]#
field auth_with_token: bool = False#
field drive_id: str [Required]#
field folder_path: Optional[str] = None#
field object_ids: Optional[List[str]] = None#
field settings: langchain.document_loaders.onedrive._OneDriveSettings [Optional]#
load() → List[langchain.schema.Document][source]#
Loads all supported document files from the specified OneDrive drive
and returns a list of Document objects.
Returns
A list of Document objects
representing the loaded documents.
Return type
List[Document]
Raises
ValueError – If the specified drive ID does not correspond to a drive in the OneDrive storage.
class langchain.document_loaders.OnlinePDFLoader(file_path: str)[source]#
Loader that loads online PDFs.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.OutlookMessageLoader(file_path: str)[source]#
Loader that loads Outlook Message files using extract_msg.
See https://github.com/TeamMsgExtractor/msg-extractor
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.PDFMinerLoader(file_path: str)[source]#
Loader that uses PDFMiner to load PDF files.
lazy_load() → Iterator[langchain.schema.Document][source]#
Lazily load documents.
load() → List[langchain.schema.Document][source]#
Eagerly load the content.
class langchain.document_loaders.PDFMinerPDFasHTMLLoader(file_path: str)[source]#
Loader that uses PDFMiner to load PDF files as HTML content.
load() → List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.PDFPlumberLoader(file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None)[source]#
Loader that uses pdfplumber to load PDF files.
load() → List[langchain.schema.Document][source]#
Load file.
langchain.document_loaders.PagedPDFSplitter#
alias of langchain.document_loaders.pdf.PyPDFLoader
class langchain.document_loaders.PlaywrightURLLoader(urls: List[str], continue_on_failure: bool = True, headless: bool = True, remove_selectors: Optional[List[str]] = None)[source]#
Loader that uses Playwright to load a page and unstructured to parse the resulting html.
This is useful for loading pages that require javascript to render.
urls#
List of URLs to load.
Type
List[str]
continue_on_failure#
If True, continue loading other URLs on failure.
Type
bool
headless#
If True, the browser will run in headless mode.
Type
bool
load() → List[langchain.schema.Document][source]#
Load the specified URLs using Playwright and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
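A minimal sketch (the URL and selectors are placeholders; remove_selectors drops page chrome before parsing):
from langchain.document_loaders import PlaywrightURLLoader
loader = PlaywrightURLLoader(
    urls=["https://example.com"],
    remove_selectors=["header", "footer"],
)
docs = loader.load()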
class langchain.document_loaders.PsychicLoader(api_key: str, connector_id: str, connection_id: str)[source]#
Loader that loads documents from Psychic.dev.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.PyMuPDFLoader(file_path: str)[source]#
Loader that uses PyMuPDF to load PDF files.
load(**kwargs: Optional[Any]) → List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.PyPDFDirectoryLoader(path: str, glob: str = '**/[!.]*.pdf', silent_errors: bool = False, load_hidden: bool = False, recursive: bool = False)[source]#
Loads a directory of PDF files with pypdf and chunks at character level.
Loader also stores page numbers in metadatas.
load() → List[langchain.schema.Document][source]# | https://python.langchain.com/en/latest/reference/modules/document_loaders.html |
Load data into document objects.
class langchain.document_loaders.PyPDFLoader(file_path: str)[source]#
Loads a PDF with pypdf and chunks at character level.
Loader also stores page numbers in metadatas.
lazy_load() → Iterator[langchain.schema.Document][source]#
Lazy load given path as pages.
load() → List[langchain.schema.Document][source]#
Load given path as pages.
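A minimal sketch; example.pdf is a hypothetical path. Each returned Document is one page, with the page number stored in its metadata.
Example
from langchain.document_loaders import PyPDFLoader

loader = PyPDFLoader("example.pdf")
pages = loader.load()
print(pages[0].metadata)  # e.g. {'source': 'example.pdf', 'page': 0}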
class langchain.document_loaders.PyPDFium2Loader(file_path: str)[source]#
Loads a PDF with pypdfium2 and chunks at character level.
lazy_load() → Iterator[langchain.schema.Document][source]#
Lazy load given path as pages.
load() → List[langchain.schema.Document][source]#
Load given path as pages.
class langchain.document_loaders.PySparkDataFrameLoader(spark_session: Optional[SparkSession] = None, df: Optional[Any] = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]#
Load PySpark DataFrames.
get_num_rows() → Tuple[int, int][source]#
Gets the number of “feasible” rows for the DataFrame
lazy_load() → Iterator[langchain.schema.Document][source]#
A lazy loader for document content.
load() → List[langchain.schema.Document][source]#
Load from the dataframe.
class langchain.document_loaders.PythonLoader(file_path: str)[source]#
Load Python files, respecting any non-default encoding if specified. | https://python.langchain.com/en/latest/reference/modules/document_loaders.html |
class langchain.document_loaders.ReadTheDocsLoader(path: Union[str, pathlib.Path], encoding: Optional[str] = None, errors: Optional[str] = None, custom_html_tag: Optional[Tuple[str, dict]] = None, **kwargs: Optional[Any])[source]#
Loader that loads ReadTheDocs documentation directory dump.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.RedditPostsLoader(client_id: str, client_secret: str, user_agent: str, search_queries: Sequence[str], mode: str, categories: Sequence[str] = ['new'], number_posts: Optional[int] = 10)[source]#
Reddit posts loader.
Read posts on a subreddit.
First you need to go to https://www.reddit.com/prefs/apps/ and create your application.
load() → List[langchain.schema.Document][source]#
Load Reddit posts.
class langchain.document_loaders.RoamLoader(path: str)[source]#
Loader that loads Roam files from disk.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.S3DirectoryLoader(bucket: str, prefix: str = '')[source]#
Loading logic for documents stored in an S3 directory.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.S3FileLoader(bucket: str, key: str)[source]#
Loading logic for a document stored in an S3 file.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.SRTLoader(file_path: str)[source]#
Loader for .srt (subtitle) files. | https://python.langchain.com/en/latest/reference/modules/document_loaders.html |
load() → List[langchain.schema.Document][source]#
Load the file using pysrt.
class langchain.document_loaders.SeleniumURLLoader(urls: List[str], continue_on_failure: bool = True, browser: Literal['chrome', 'firefox'] = 'chrome', binary_location: Optional[str] = None, executable_path: Optional[str] = None, headless: bool = True, arguments: List[str] = [])[source]#
Loader that uses Selenium to load a page and unstructured to parse the HTML.
This is useful for loading pages that require javascript to render.
urls#
List of URLs to load.
Type
List[str]
continue_on_failure#
If True, continue loading other URLs on failure.
Type
bool
browser#
The browser to use, either ‘chrome’ or ‘firefox’.
Type
str
binary_location#
The location of the browser binary.
Type
Optional[str]
executable_path#
The path to the browser executable.
Type
Optional[str]
headless#
If True, the browser will run in headless mode.
Type
bool
arguments#
List of arguments to pass to the browser.
Type
List[str]
load() → List[langchain.schema.Document][source]#
Load the specified URLs using Selenium and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
class langchain.document_loaders.SitemapLoader(web_path: str, filter_urls: Optional[List[str]] = None, parsing_function: Optional[Callable] = None, blocksize: Optional[int] = None, blocknum: int = 0, meta_function: Optional[Callable] = None, is_local: bool = False)[source]#
Loader that fetches a sitemap and loads those URLs. | https://python.langchain.com/en/latest/reference/modules/document_loaders.html |
load() → List[langchain.schema.Document][source]#
Load sitemap.
parse_sitemap(soup: Any) → List[dict][source]#
Parse sitemap xml and load into a list of dicts.
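A minimal sketch, assuming a hypothetical sitemap URL; filter_urls is a list of regex patterns restricting which sitemap entries are fetched.
Example
from langchain.document_loaders import SitemapLoader

loader = SitemapLoader(
    "https://example.com/sitemap.xml",
    filter_urls=["https://example.com/blog/.*"],
)
docs = loader.load()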
class langchain.document_loaders.SlackDirectoryLoader(zip_path: str, workspace_url: Optional[str] = None)[source]#
Loader for loading documents from a Slack directory dump.
load() → List[langchain.schema.Document][source]#
Load and return documents from the Slack directory dump.
class langchain.document_loaders.SnowflakeLoader(query: str, user: str, password: str, account: str, warehouse: str, role: str, database: str, schema: str, parameters: Optional[Dict[str, Any]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]#
Loads a query result from Snowflake into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
lazy_load() → Iterator[langchain.schema.Document][source]#
A lazy loader for document content.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.SpreedlyLoader(access_token: str, resource: str)[source]#
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.StripeLoader(resource: str, access_token: Optional[str] = None)[source]# | https://python.langchain.com/en/latest/reference/modules/document_loaders.html |
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.TelegramChatApiLoader(chat_entity: Optional[EntityLike] = None, api_id: Optional[int] = None, api_hash: Optional[str] = None, username: Optional[str] = None, file_path: str = 'telegram_data.json')[source]#
Loader that loads a Telegram chat JSON dump, fetching the data via the Telegram API.
async fetch_data_from_telegram() → None[source]#
Fetch data from Telegram API and save it as a JSON file.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.TelegramChatFileLoader(path: str)[source]#
Loader that loads a Telegram chat JSON directory dump.
load() → List[langchain.schema.Document][source]#
Load documents.
langchain.document_loaders.TelegramChatLoader#
alias of langchain.document_loaders.telegram.TelegramChatFileLoader
class langchain.document_loaders.TextLoader(file_path: str, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]#
Load text files.
Parameters
file_path – Path to the file to load.
encoding – File encoding to use. If None, the file will be loaded with the default system encoding.
autodetect_encoding – Whether to try to autodetect the file encoding
if the specified encoding fails.
load() → List[langchain.schema.Document][source]#
Load from file path.
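A minimal sketch; notes.txt is a hypothetical file. With autodetect_encoding=True the loader retries with detected encodings if the specified one fails.
Example
from langchain.document_loaders import TextLoader

loader = TextLoader("notes.txt", autodetect_encoding=True)
docs = loader.load()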
class langchain.document_loaders.ToMarkdownLoader(url: str, api_key: str)[source]#
Loader that loads HTML to markdown using 2markdown.
lazy_load() → Iterator[langchain.schema.Document][source]#
Lazily load the file. | https://python.langchain.com/en/latest/reference/modules/document_loaders.html |
load() → List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.TomlLoader(source: Union[str, pathlib.Path])[source]#
A TOML document loader that inherits from the BaseLoader class.
This class can be initialized with either a single source file or a source
directory containing TOML files.
lazy_load() → Iterator[langchain.schema.Document][source]#
Lazily load the TOML documents from the source file or directory.
load() → List[langchain.schema.Document][source]#
Load and return all documents.
class langchain.document_loaders.TrelloLoader(client: TrelloClient, board_name: str, *, include_card_name: bool = True, include_comments: bool = True, include_checklist: bool = True, card_filter: Literal['closed', 'open', 'all'] = 'all', extra_metadata: Tuple[str, ...] = ('due_date', 'labels', 'list', 'closed'))[source]#
Trello loader. Reads all cards from a Trello board.
classmethod from_credentials(board_name: str, *, api_key: Optional[str] = None, token: Optional[str] = None, **kwargs: Any) → langchain.document_loaders.trello.TrelloLoader[source]#
Convenience constructor that builds TrelloClient init param for you.
Parameters
board_name – The name of the Trello board.
api_key – Trello API key. Can also be specified as environment variable
TRELLO_API_KEY.
token – Trello token. Can also be specified as environment variable
TRELLO_TOKEN.
include_card_name – Whether to include the name of the card in the document.
include_comments – Whether to include the comments on the card in the
document. | https://python.langchain.com/en/latest/reference/modules/document_loaders.html |
include_checklist – Whether to include the checklist on the card in the
document.
card_filter – Filter on card status. Valid values are “closed”, “open”,
“all”.
extra_metadata – List of additional metadata fields to include as document
metadata. Valid values are “due_date”, “labels”, “list”, “closed”.
load() → List[langchain.schema.Document][source]#
Loads all cards from the specified Trello board.
You can filter the cards, metadata and text included by using the optional
parameters.
Returns
A list of documents, one for each card in the board.
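A minimal sketch using the convenience constructor; the board name is hypothetical, the py-trello package is assumed to be installed, and credentials are assumed to be set via the TRELLO_API_KEY and TRELLO_TOKEN environment variables.
Example
from langchain.document_loaders import TrelloLoader

loader = TrelloLoader.from_credentials(
    "My Board",
    card_filter="open",
    extra_metadata=("due_date", "labels"),
)
docs = loader.load()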
class langchain.document_loaders.TwitterTweetLoader(auth_handler: Union[OAuthHandler, OAuth2BearerHandler], twitter_users: Sequence[str], number_tweets: Optional[int] = 100)[source]#
Twitter tweets loader.
Read tweets from a user’s Twitter handle.
First you need to go to https://developer.twitter.com/en/docs/twitter-api/getting-started/getting-access-to-the-twitter-api to get your token, and create a v2 version of the app.
classmethod from_bearer_token(oauth2_bearer_token: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) → langchain.document_loaders.twitter.TwitterTweetLoader[source]#
Create a TwitterTweetLoader from OAuth2 bearer token.
classmethod from_secrets(access_token: str, access_token_secret: str, consumer_key: str, consumer_secret: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) → langchain.document_loaders.twitter.TwitterTweetLoader[source]#
Create a TwitterTweetLoader from access tokens and secrets.
load() → List[langchain.schema.Document][source]#
Load tweets. | https://python.langchain.com/en/latest/reference/modules/document_loaders.html |
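A minimal sketch using a bearer token; the token value and handle are hypothetical placeholders.
Example
from langchain.document_loaders import TwitterTweetLoader

loader = TwitterTweetLoader.from_bearer_token(
    oauth2_bearer_token="YOUR_BEARER_TOKEN",
    twitter_users=["hwchase17"],
    number_tweets=50,
)
docs = loader.load()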
class langchain.document_loaders.UnstructuredAPIFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]#
Loader that uses the unstructured web API to load file IO objects.
class langchain.document_loaders.UnstructuredAPIFileLoader(file_path: Union[str, List[str]] = '', mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]#
Loader that uses the unstructured web API to load files.
class langchain.document_loaders.UnstructuredCSVLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load CSV files.
class langchain.document_loaders.UnstructuredEPubLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load epub files.
class langchain.document_loaders.UnstructuredEmailLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load email files.
class langchain.document_loaders.UnstructuredExcelLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load Microsoft Excel files.
class langchain.document_loaders.UnstructuredFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', **unstructured_kwargs: Any)[source]# | https://python.langchain.com/en/latest/reference/modules/document_loaders.html |
Loader that uses unstructured to load file IO objects.
class langchain.document_loaders.UnstructuredFileLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load files.
class langchain.document_loaders.UnstructuredHTMLLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load HTML files.
class langchain.document_loaders.UnstructuredImageLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load image files, such as PNGs and JPGs.
class langchain.document_loaders.UnstructuredMarkdownLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load markdown files.
class langchain.document_loaders.UnstructuredODTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load open office ODT files.
class langchain.document_loaders.UnstructuredPDFLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load PDF files.
class langchain.document_loaders.UnstructuredPowerPointLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load powerpoint files.
class langchain.document_loaders.UnstructuredRTFLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]# | https://python.langchain.com/en/latest/reference/modules/document_loaders.html |
Loader that uses unstructured to load RTF files.
class langchain.document_loaders.UnstructuredURLLoader(urls: List[str], continue_on_failure: bool = True, mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load HTML pages from URLs.
load() → List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.UnstructuredWordDocumentLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load word documents.
class langchain.document_loaders.UnstructuredXMLLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load XML files.
class langchain.document_loaders.WeatherDataLoader(client: langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper, places: Sequence[str])[source]#
Weather Reader.
Reads the forecast & current weather of any location using OpenWeatherMap’s free API.
Check out https://openweathermap.org/appid for more on how to generate a free OpenWeatherMap API key.
classmethod from_params(places: Sequence[str], *, openweathermap_api_key: Optional[str] = None) → langchain.document_loaders.weather.WeatherDataLoader[source]#
lazy_load() → Iterator[langchain.schema.Document][source]#
Lazily load weather data for the given locations.
load() → List[langchain.schema.Document][source]#
Load weather data for the given locations.
class langchain.document_loaders.WebBaseLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that uses urllib and beautiful soup to load webpages.
aload() → List[langchain.schema.Document][source]# | https://python.langchain.com/en/latest/reference/modules/document_loaders.html |
Asynchronously load text from the URLs in web_path into Documents.
default_parser: str = 'html.parser'#
Default parser to use for BeautifulSoup.
async fetch_all(urls: List[str]) → Any[source]#
Fetch all urls concurrently with rate limiting.
load() → List[langchain.schema.Document][source]#
Load text from the url(s) in web_path.
requests_kwargs: Dict[str, Any] = {}#
kwargs for requests
requests_per_second: int = 2#
Max number of concurrent requests to make.
scrape(parser: Optional[str] = None) → Any[source]#
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any][source]#
Fetch all urls, then return soups for all results.
property web_path: str#
web_paths: List[str]#
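A minimal sketch; the URLs are hypothetical. aload() fetches all pages concurrently, throttled by requests_per_second.
Example
from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader(["https://example.com", "https://example.org"])
loader.requests_per_second = 1  # throttle concurrent fetches
docs = loader.aload()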
class langchain.document_loaders.WhatsAppChatLoader(path: str)[source]#
Loader that loads a WhatsApp messages text file.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.WikipediaLoader(query: str, lang: str = 'en', load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False)[source]#
Loads a query result from www.wikipedia.org into a list of Documents.
The hard limit on the number of downloaded Documents is 300 for now.
Each wiki page represents one Document.
load() → List[langchain.schema.Document][source]#
Load data into document objects. | https://python.langchain.com/en/latest/reference/modules/document_loaders.html |
class langchain.document_loaders.YoutubeLoader(video_id: str, add_video_info: bool = False, language: Union[str, Sequence[str]] = 'en', translation: str = 'en', continue_on_failure: bool = False)[source]#
Loader that loads Youtube transcripts.
static extract_video_id(youtube_url: str) → str[source]#
Extract the video ID from common YouTube URLs.
classmethod from_youtube_url(youtube_url: str, **kwargs: Any) → langchain.document_loaders.youtube.YoutubeLoader[source]#
Given a YouTube URL, load the video.
load() → List[langchain.schema.Document][source]#
Load documents.
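A minimal sketch; the video URL points at a public example video, and add_video_info=True assumes the pytube package is installed.
Example
from langchain.document_loaders import YoutubeLoader

loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    add_video_info=True,
)
docs = loader.load()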
Output Parsers#
pydantic model langchain.output_parsers.CommaSeparatedListOutputParser[source]#
Parse out comma-separated lists.
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → List[str][source]#
Parse the output of an LLM call.
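A minimal sketch of the round trip: the format instructions go into the prompt, and parse splits the model’s reply.
Example
from langchain.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
instructions = parser.get_format_instructions()  # embed this in your prompt
parser.parse("red, green, blue")  # -> ['red', 'green', 'blue']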
pydantic model langchain.output_parsers.DatetimeOutputParser[source]#
field format: str = '%Y-%m-%dT%H:%M:%S.%fZ'#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(response: str) → datetime.datetime[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
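A minimal sketch; the timestamp string stands in for an LLM reply that follows the default format.
Example
from langchain.output_parsers import DatetimeOutputParser

parser = DatetimeOutputParser()
parser.parse("2023-06-11T08:30:00.000000Z")
# -> datetime.datetime(2023, 6, 11, 8, 30)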
pydantic model langchain.output_parsers.GuardrailsOutputParser[source]#
field guard: Any = None#
classmethod from_rail(rail_file: str, num_reasks: int = 1) → langchain.output_parsers.rail_parser.GuardrailsOutputParser[source]#
classmethod from_rail_string(rail_str: str, num_reasks: int = 1) → langchain.output_parsers.rail_parser.GuardrailsOutputParser[source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → Dict[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output | https://python.langchain.com/en/latest/reference/modules/output_parsers.html |
pydantic model langchain.output_parsers.ListOutputParser[source]#
Class to parse the output of an LLM call to a list.
abstract parse(text: str) → List[str][source]#
Parse the output of an LLM call.
pydantic model langchain.output_parsers.OutputFixingParser[source]#
Wraps a parser and tries to fix parsing errors.
field parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T] [Required]#
field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'instructions'], output_parser=None, partial_variables={}, template='Instructions:\n--------------\n{instructions}\n--------------\nCompletion:\n--------------\n{completion}\n--------------\n\nAbove, the Completion did not satisfy the constraints given in the Instructions.\nError:\n--------------\n{error}\n--------------\n\nPlease try again. Please only respond with an answer that satisfies the constraints laid out in the Instructions:', template_format='f-string', validate_template=True)) → langchain.output_parsers.fix.OutputFixingParser[langchain.output_parsers.fix.T][source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(completion: str) → langchain.output_parsers.fix.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns | https://python.langchain.com/en/latest/reference/modules/output_parsers.html |
structured output
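A minimal sketch of wrapping a base parser so malformed output is sent back to an LLM for repair; the Answer schema is a hypothetical example, and an OPENAI_API_KEY is assumed to be set.
Example
from pydantic import BaseModel
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import OutputFixingParser, PydanticOutputParser

class Answer(BaseModel):
    text: str

base_parser = PydanticOutputParser(pydantic_object=Answer)
fixing_parser = OutputFixingParser.from_llm(llm=ChatOpenAI(), parser=base_parser)
# The trailing brace is missing, so base_parser alone would raise;
# the wrapper asks the LLM to repair the completion and re-parses it.
answer = fixing_parser.parse('{"text": "forty-two"')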
pydantic model langchain.output_parsers.PydanticOutputParser[source]#
field pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → langchain.output_parsers.pydantic.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
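A minimal sketch; the Joke schema is a hypothetical example.
Example
from pydantic import BaseModel, Field
from langchain.output_parsers import PydanticOutputParser

class Joke(BaseModel):
    setup: str = Field(description="the setup of the joke")
    punchline: str = Field(description="the punchline")

parser = PydanticOutputParser(pydantic_object=Joke)
print(parser.get_format_instructions())  # JSON-schema instructions for the LLM
joke = parser.parse('{"setup": "Why did the chicken cross the road?", "punchline": "To get to the other side."}')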
pydantic model langchain.output_parsers.RegexDictParser[source]#
Class to parse the output into a dictionary.
field no_update_value: Optional[str] = None#
field output_key_to_format: Dict[str, str] [Required]#
field regex_pattern: str = "{}:\\s?([^.'\\n']*)\\.?"#
parse(text: str) → Dict[str, str][source]#
Parse the output of an LLM call.
pydantic model langchain.output_parsers.RegexParser[source]#
Class to parse the output into a dictionary.
field default_output_key: Optional[str] = None#
field output_keys: List[str] [Required]#
field regex: str [Required]#
parse(text: str) → Dict[str, str][source]#
Parse the output of an LLM call.
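A minimal sketch; the pattern and keys are hypothetical, with each capture group mapped to an output key in order.
Example
from langchain.output_parsers import RegexParser

parser = RegexParser(
    regex=r"Score: (\d+)\nReason: (.*)",
    output_keys=["score", "reason"],
)
parser.parse("Score: 8\nReason: concise and accurate")
# -> {'score': '8', 'reason': 'concise and accurate'}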
pydantic model langchain.output_parsers.ResponseSchema[source]#
field description: str [Required]#
field name: str [Required]#
field type: str = 'string'#
pydantic model langchain.output_parsers.RetryOutputParser[source]# | https://python.langchain.com/en/latest/reference/modules/output_parsers.html |
Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt and the completion to another
LLM, and telling it the completion did not satisfy criteria in the prompt.
field parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]#
field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nPlease try again:', template_format='f-string', validate_template=True)) → langchain.output_parsers.retry.RetryOutputParser[langchain.output_parsers.retry.T][source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(completion: str) → langchain.output_parsers.retry.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
parse_with_prompt(completion: str, prompt_value: langchain.schema.PromptValue) → langchain.output_parsers.retry.T[source]#
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters | https://python.langchain.com/en/latest/reference/modules/output_parsers.html |
completion – output of language model
prompt – prompt value
Returns
structured output
pydantic model langchain.output_parsers.RetryWithErrorOutputParser[source]#
Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt, the completion, AND the error
that was raised to another language model and telling it that the completion
did not work, and raised the given error. Differs from RetryOutputParser
in that this implementation provides the error that was raised back to the
LLM, which in theory should give it more information on how to fix it.
field parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]#
field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nDetails: {error}\nPlease try again:', template_format='f-string', validate_template=True)) → langchain.output_parsers.retry.RetryWithErrorOutputParser[langchain.output_parsers.retry.T][source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(completion: str) → langchain.output_parsers.retry.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns | https://python.langchain.com/en/latest/reference/modules/output_parsers.html |
structured output
parse_with_prompt(completion: str, prompt_value: langchain.schema.PromptValue) → langchain.output_parsers.retry.T[source]#
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – output of language model
prompt – prompt value
Returns
structured output
pydantic model langchain.output_parsers.StructuredOutputParser[source]#
field response_schemas: List[langchain.output_parsers.structured.ResponseSchema] [Required]#
classmethod from_response_schemas(response_schemas: List[langchain.output_parsers.structured.ResponseSchema]) → langchain.output_parsers.structured.StructuredOutputParser[source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → Any[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
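A minimal sketch pairing ResponseSchema with StructuredOutputParser; the schema names are hypothetical.
Example
from langchain.output_parsers import ResponseSchema, StructuredOutputParser

schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(name="source", description="source used to answer the question"),
]
parser = StructuredOutputParser.from_response_schemas(schemas)
format_instructions = parser.get_format_instructions()  # embed in the prompt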
Vector Stores#
Wrappers on top of vector stores.
class langchain.vectorstores.AnalyticDB(connection_string: str, embedding_function: langchain.embeddings.base.Embeddings, collection_name: str = 'langchain', collection_metadata: Optional[dict] = None, pre_delete_collection: bool = False, logger: Optional[logging.Logger] = None)[source]#
VectorStore implementation using AnalyticDB.
AnalyticDB is a distributed, cloud-native database with full PostgreSQL syntax support.
- connection_string is a Postgres connection string.
- embedding_function is any embedding function implementing the langchain.embeddings.base.Embeddings interface.
- collection_name is the name of the collection to use (default: langchain). NOTE: this is not the name of the table, but the name of the collection. The tables will be created when initializing the store (if they do not exist), so make sure the user has the right permissions to create tables.
- pre_delete_collection: if True, deletes the collection if it already exists (default: False). Useful for testing.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
connect() → sqlalchemy.engine.base.Connection[source]#
classmethod connection_string_from_db_params(driver: str, host: str, port: int, database: str, user: str, password: str) → str[source]#
Return connection string from database parameters. | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
create_collection() → None[source]#
create_tables_if_not_exists() → None[source]#
delete_collection() → None[source]#
drop_tables() → None[source]#
classmethod from_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, collection_name: str = 'langchain', ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) → langchain.vectorstores.analyticdb.AnalyticDB[source]#
Return VectorStore initialized from documents and embeddings.
Postgres connection string is required
Either pass it as a parameter
or set the PGVECTOR_CONNECTION_STRING environment variable.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'langchain', ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) → langchain.vectorstores.analyticdb.AnalyticDB[source]#
Return VectorStore initialized from texts and embeddings.
Postgres connection string is required
Either pass it as a parameter
or set the PGVECTOR_CONNECTION_STRING environment variable.
get_collection(session: sqlalchemy.orm.session.Session) → Optional[langchain.vectorstores.analyticdb.CollectionStore][source]#
classmethod get_connection_string(kwargs: Dict[str, Any]) → str[source]#
similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Run similarity search with AnalyticDB with distance.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4. | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query vector.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None) → List[Tuple[langchain.schema.Document, float]][source]#
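A minimal sketch, assuming the PGVECTOR_CONNECTION_STRING environment variable points at a reachable AnalyticDB instance and an OpenAI API key is configured.
Example
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import AnalyticDB

db = AnalyticDB.from_texts(
    texts=["hello world", "goodbye world"],
    embedding=OpenAIEmbeddings(),
    collection_name="langchain",
)
docs = db.similarity_search("hello", k=2)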
class langchain.vectorstores.Annoy(embedding_function: Callable, index: Any, metric: str, docstore: langchain.docstore.base.Docstore, index_to_docstore_id: Dict[int, str])[source]#
Wrapper around Annoy vector database.
To use, you should have the annoy python package installed.
Example
from langchain import Annoy | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
db = Annoy(embedding_function, index, docstore, index_to_docstore_id)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, metric: str = 'angular', trees: int = 100, n_jobs: int = - 1, **kwargs: Any) → langchain.vectorstores.annoy.Annoy[source]#
Construct Annoy wrapper from embeddings.
Parameters
text_embeddings – List of tuples of (text, embedding)
embedding – Embedding function to use.
metadatas – List of metadata dictionaries to associate with documents.
metric – Metric to use for indexing. Defaults to “angular”.
trees – Number of trees to use for indexing. Defaults to 100.
n_jobs – Number of jobs to use for indexing. Defaults to -1
This is a user friendly interface that:
Creates an in memory docstore with provided embeddings
Initializes the Annoy database
This is intended to be a quick way to get started.
Example
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings)) | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
db = Annoy.from_embeddings(text_embedding_pairs, embeddings)
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, metric: str = 'angular', trees: int = 100, n_jobs: int = - 1, **kwargs: Any) → langchain.vectorstores.annoy.Annoy[source]#
Construct Annoy wrapper from raw documents.
Parameters
texts – List of documents to index.
embedding – Embedding function to use.
metadatas – List of metadata dictionaries to associate with documents.
metric – Metric to use for indexing. Defaults to “angular”.
trees – Number of trees to use for indexing. Defaults to 100.
n_jobs – Number of jobs to use for indexing. Defaults to -1.
This is a user friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the Annoy database
This is intended to be a quick way to get started.
Example
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
index = Annoy.from_texts(texts, embeddings)
classmethod load_local(folder_path: str, embeddings: langchain.embeddings.base.Embeddings) → langchain.vectorstores.annoy.Annoy[source]#
Load Annoy index, docstore, and index_to_docstore_id from disk.
Parameters
folder_path – folder path to load index, docstore,
and index_to_docstore_id from.
embeddings – Embeddings to use when generating queries. | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
k – Number of Documents to return. Defaults to 4.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance. | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
process_index_results(idxs: List[int], dists: List[float]) → List[Tuple[langchain.schema.Document, float]][source]#
Turns annoy results into a list of documents and scores.
Parameters
idxs – List of indices of the documents in the index.
dists – List of distances of the documents in the index.
Returns
List of Documents and scores.
save_local(folder_path: str, prefault: bool = False) → None[source]#
Save Annoy index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path – folder path to save index, docstore,
and index_to_docstore_id to.
prefault – Whether to pre-load the index into memory.
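A minimal round-trip sketch; the folder name is hypothetical, and the same embeddings must be supplied when reloading.
Example
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Annoy

embeddings = OpenAIEmbeddings()
db = Annoy.from_texts(["foo", "bar"], embeddings)
db.save_local("my_annoy_index")
restored = Annoy.load_local("my_annoy_index", embeddings)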
similarity_search(query: str, k: int = 4, search_k: int = - 1, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query.
similarity_search_by_index(docstore_index: int, k: int = 4, search_k: int = - 1, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to docstore_index.
Parameters
docstore_index – Index of document in docstore
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
List of Documents most similar to the embedding.
similarity_search_by_vector(embedding: List[float], k: int = 4, search_k: int = - 1, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the embedding.
similarity_search_with_score(query: str, k: int = 4, search_k: int = - 1) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_index(docstore_index: int, k: int = 4, search_k: int = - 1) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to the document at docstore_index.
Parameters
docstore_index – Index of document in docstore
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, search_k: int = - 1) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to the embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.AtlasDB(name: str, embedding_function: Optional[langchain.embeddings.base.Embeddings] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False)[source]#
Wrapper around Atlas: Nomic’s neural database and rhizomatic instrument.
To use, you should have the nomic python package installed.
Example
from langchain.vectorstores import AtlasDB
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = AtlasDB("my_project", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, refresh: bool = True, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]]) – An optional list of ids. | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
refresh (bool) – Whether or not to refresh indices with the updated data.
Default True.
Returns
List of IDs of the added texts.
Return type
List[str]
create_index(**kwargs: Any) → Any[source]#
Creates an index in your project.
See
https://docs.nomic.ai/atlas_api.html#nomic.project.AtlasProject.create_index
for full detail.
classmethod from_documents(documents: List[langchain.schema.Document], embedding: Optional[langchain.embeddings.base.Embeddings] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, persist_directory: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.vectorstores.atlas.AtlasDB[source]#
Create an AtlasDB vectorstore from a list of documents.
Parameters
name (str) – Name of the collection to create.
api_key (str) – Your nomic API key,
documents (List[Document]) – List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
ids (Optional[List[str]]) – Optional list of document IDs. If None,
ids will be auto created
description (str) – A description for your project.
is_public (bool) – Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool) – Whether to reset this project if
it already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]) – Dict of kwargs for index creation. | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
See https://docs.nomic.ai/atlas_api.html
Returns
Nomic’s neural database and finest rhizomatic instrument
Return type
AtlasDB
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.vectorstores.atlas.AtlasDB[source]#
Create an AtlasDB vectorstore from a raw documents.
Parameters
texts (List[str]) – The list of texts to ingest.
name (str) – Name of the project to create.
api_key (str) – Your nomic API key,
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – Optional list of document IDs. If None,
ids will be auto created
description (str) – A description for your project.
is_public (bool) – Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool) – Whether to reset this project if it
already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]) – Dict of kwargs for index creation.
See https://docs.nomic.ai/atlas_api.html
Returns
Nomic’s neural database and finest rhizomatic instrument
Return type
AtlasDB | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Run similarity search with AtlasDB
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
Returns
List of documents most similar to the query text.
Return type
List[Document]
class langchain.vectorstores.AwaDB(table_name: str = 'langchain_awadb', embedding_model: Optional[Embeddings] = None, log_and_data_dir: Optional[str] = None, client: Optional[awadb.Client] = None)[source]#
Interface implemented by AwaDB vector stores.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, table_name: str = 'langchain_awadb', logging_and_data_dir: Optional[str] = None, client: Optional[awadb.Client] = None, **kwargs: Any) → AwaDB[source]#
Create an AwaDB vectorstore from a raw documents.
Parameters
texts (List[str]) – List of texts to add to the table. | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
table_name (str) – Name of the table to create.
logging_and_data_dir (Optional[str]) – Directory of logging and persistence.
client (Optional[awadb.Client]) – AwaDB client
Returns
AwaDB vectorstore.
Return type
AwaDB
load_local(table_name: str = 'langchain_awadb', **kwargs: Any) → bool[source]#
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, scores: Optional[list] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
similarity_search_with_score(query: str, k: int = 4, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs and relevance scores, normalized on a scale from 0 to 1. | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
0 is dissimilar, 1 is most similar.
class langchain.vectorstores.Chroma(collection_name: str = 'langchain', embedding_function: Optional[Embeddings] = None, persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, collection_metadata: Optional[Dict] = None, client: Optional[chromadb.Client] = None)[source]#
Wrapper around ChromaDB embeddings platform.
To use, you should have the chromadb python package installed.
Example
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Chroma("langchain_store", embeddings)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]], optional) – Optional list of IDs.
Returns
List of IDs of the added texts.
Return type
List[str]
delete_collection() → None[source]#
Delete the collection.
classmethod from_documents(documents: List[Document], embedding: Optional[Embeddings] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, client: Optional[chromadb.Client] = None, **kwargs: Any) → Chroma[source]# | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
Create a Chroma vectorstore from a list of documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
collection_name (str) – Name of the collection to create.
persist_directory (Optional[str]) – Directory to persist the collection.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
documents (List[Document]) – List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
client_settings (Optional[chromadb.config.Settings]) – Chroma client settings
Returns
Chroma vectorstore.
Return type
Chroma
classmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, client: Optional[chromadb.Client] = None, **kwargs: Any) → Chroma[source]#
Create a Chroma vectorstore from a raw documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
texts (List[str]) – List of texts to add to the collection.
collection_name (str) – Name of the collection to create.
persist_directory (Optional[str]) – Directory to persist the collection.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – List of document IDs. Defaults to None. | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
client_settings (Optional[chromadb.config.Settings]) – Chroma client settings
Returns
Chroma vectorstore.
Return type
Chroma
get(include: Optional[List[str]] = None) → Dict[str, Any][source]#
Gets the collection.
Parameters
include (Optional[List[str]]) – List of fields to include from db.
Defaults to None.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents selected by maximal marginal relevance.
persist() → None[source]#
Persist the collection.
This can be used to explicitly persist the data to disk.
It will also be called automatically when the object is destroyed.
similarity_search(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Run similarity search with Chroma.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text.
Return type
List[Document]
similarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) – Embedding to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query vector.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Run similarity search with Chroma with distance.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to
the query text and cosine distance in float for each.
Lower score represents more similarity.
Return type
List[Tuple[Document, float]]
update_document(document_id: str, document: langchain.schema.Document) → None[source]#
Update a document in the collection.
Parameters
document_id (str) – ID of the document to update.
document (Document) – Document to update.
class langchain.vectorstores.Clickhouse(embedding: langchain.embeddings.base.Embeddings, config: Optional[langchain.vectorstores.clickhouse.ClickhouseSettings] = None, **kwargs: Any)[source]#
Wrapper around the ClickHouse vector database.
You need the clickhouse-connect python package and a valid account
to connect to ClickHouse.
ClickHouse can not only search with simple vector indexes,
it also supports complex queries with multiple conditions,
constraints and even sub-queries.
For more information, please visit the [ClickHouse official site](https://clickhouse.com/clickhouse).
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, batch_size: int = 32, ids: Optional[Iterable[str]] = None, **kwargs: Any) → List[str][source]#
Insert more texts through the embeddings and add to the VectorStore.
Parameters
texts – Iterable of strings to add to the VectorStore.
ids – Optional list of ids to associate with the texts.
batch_size – Batch size of insertion
metadatas – Optional column data to be inserted.
Returns
List of ids from adding the texts into the VectorStore.
drop() → None[source]#
Helper function: Drop data
escape_str(value: str) → str[source]#
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, config: Optional[langchain.vectorstores.clickhouse.ClickhouseSettings] = None, text_ids: Optional[Iterable[str]] = None, batch_size: int = 32, **kwargs: Any) → langchain.vectorstores.clickhouse.Clickhouse[source]#
Create ClickHouse wrapper with existing texts
Parameters
embedding (Embeddings) – Function to extract text embeddings.
texts (Iterable[str]) – List or tuple of strings to be added.
config (ClickhouseSettings, optional) – ClickHouse configuration.
text_ids (Optional[Iterable], optional) – IDs for the texts.
Defaults to None.
batch_size (int, optional) – Batch size when transmitting data to ClickHouse.
Defaults to 32.
metadatas (List[dict], optional) – Metadata for the texts. Defaults to None.
Other keyword arguments will be passed into [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api).
Returns
ClickHouse Index
property metadata_column: str#
similarity_search(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Perform a similarity search with ClickHouse
Parameters
query (str) – query string
k (int, optional) – Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) – where condition string.
Defaults to None.
NOTE – Do not let end users fill this in, and always be aware of
SQL injection. When dealing with metadata, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for it is metadata.
Returns
List of Documents
Return type
List[Document]
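Example (a minimal sketch, assuming store is a configured Clickhouse instance whose texts were added with a {"source": ...} metadata dict; per the note above, build where_str server-side and never from raw end-user input):
meta = store.metadata_column  # defaults to "metadata"
docs = store.similarity_search(
    "what is clickhouse?",
    k=4,
    where_str=f"{meta}.source = 'docs'",
)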
similarity_search_by_vector(embedding: List[float], k: int = 4, where_str: Optional[str] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Perform a similarity search with ClickHouse by vectors
Parameters
embedding (List[float]) – embedding vector to look up documents similar to.
k (int, optional) – Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) – where condition string.
Defaults to None.
NOTE – Do not let end users fill this in, and always be aware of
SQL injection. When dealing with metadata, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for it is metadata.
Returns
List of documents most similar to the query vector
Return type
List[Document]
similarity_search_with_relevance_scores(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]# | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
fa678718cb2d-20 | Perform a similarity search with ClickHouse
Parameters
query (str) – query string
k (int, optional) – Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) – where condition string.
Defaults to None.
NOTE – Do not let end users fill this in, and always be aware of
SQL injection. When dealing with metadata, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for it is metadata.
Returns
List of documents
Return type
List[Document]
pydantic settings langchain.vectorstores.ClickhouseSettings[source]#
ClickHouse Client Configuration
Attribute:
clickhouse_host (str) : A URL to connect to the ClickHouse backend. Defaults to ‘localhost’.
clickhouse_port (int) : URL port to connect with HTTP. Defaults to 8123.
username (str) : Username to login. Defaults to None.
password (str) : Password to login. Defaults to None.
index_type (str): index type string.
index_param (list): index build parameter.
index_query_params(dict): index query parameters.
database (str) : Database name to find the table. Defaults to ‘default’.
table (str) : Table name to operate on. Defaults to ‘langchain’.
metric (str) : Metric to compute distance, supported are (‘angular’,
‘euclidean’, ‘manhattan’, ‘hamming’, ‘dot’). Defaults to ‘angular’.
https://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169
column_map (Dict) : Column type map to project column name onto langchain
semantics. Must have keys: text, id, vector; must be the same size as the
number of columns. For example:
.. code-block:: python

{
    'id': 'text_id',
    'uuid': 'global_unique_id',
    'embedding': 'text_embedding',
    'document': 'text_plain',
    'metadata': 'metadata_dictionary_in_json',
}

Defaults to identity map.
Show JSON schema{
"title": "ClickhouseSettings",
"description": "ClickHouse Client Configuration\n\nAttribute:\n clickhouse_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n clickhouse_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (list): index build parameter.\n index_query_params(dict): index query parameters.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,\n supported are ('angular', 'euclidean', 'manhattan', 'hamming',\n 'dot'). Defaults to 'angular'.\n https://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169\n\n column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be same size to number of columns. For example:\n .. code-block:: python\n\n {\n 'id': 'text_id',\n 'uuid': 'global_unique_id'\n 'embedding': 'text_embedding',\n 'document': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n\n Defaults to identity map.",
"type": "object",
"properties": {
"host": { | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
fa678718cb2d-23 | "type": "object",
"properties": {
"host": {
"title": "Host",
"default": "localhost",
"env_names": "{'clickhouse_host'}",
"type": "string"
},
"port": {
"title": "Port",
"default": 8123,
"env_names": "{'clickhouse_port'}",
"type": "integer"
},
"username": {
"title": "Username",
"env_names": "{'clickhouse_username'}",
"type": "string"
},
"password": {
"title": "Password",
"env_names": "{'clickhouse_password'}",
"type": "string"
},
"index_type": {
"title": "Index Type",
"default": "annoy",
"env_names": "{'clickhouse_index_type'}",
"type": "string"
},
"index_param": {
"title": "Index Param",
"default": [
100,
"'L2Distance'"
],
"env_names": "{'clickhouse_index_param'}",
"anyOf": [
{
"type": "array",
"items": {}
},
{
"type": "object"
}
]
},
"index_query_params": {
"title": "Index Query Params",
"default": {},
"env_names": "{'clickhouse_index_query_params'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"column_map": {
"title": "Column Map",
"default": {
"id": "id",
"uuid": "uuid",
"document": "document",
"embedding": "embedding",
"metadata": "metadata"
},
"env_names": "{'clickhouse_column_map'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"database": {
"title": "Database",
"default": "default",
"env_names": "{'clickhouse_database'}",
"type": "string"
},
"table": {
"title": "Table",
"default": "langchain",
"env_names": "{'clickhouse_table'}",
"type": "string"
},
"metric": {
"title": "Metric",
"default": "angular",
"env_names": "{'clickhouse_metric'}",
"type": "string"
}
},
"additionalProperties": false
}
Config
env_file: str = .env
env_file_encoding: str = utf-8
env_prefix: str = clickhouse_
Fields
column_map (Dict[str, str])
database (str)
host (str)
index_param (Optional[Union[List, Dict]])
index_query_params (Dict[str, str])
index_type (str)
metric (str)
password (Optional[str])
port (int)
table (str)
username (Optional[str])
field column_map: Dict[str, str] = {'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata': 'metadata', 'uuid': 'uuid'}#
field database: str = 'default'#
field host: str = 'localhost'#
field index_param: Optional[Union[List, Dict]] = [100, "'L2Distance'"]#
field index_query_params: Dict[str, str] = {}#
field index_type: str = 'annoy'#
field metric: str = 'angular'#
field password: Optional[str] = None#
field port: int = 8123#
field table: str = 'langchain'#
field username: Optional[str] = None#
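Example (a hedged sketch; host, port, and table are placeholders, and each field can also be set through an environment variable with the clickhouse_ prefix, e.g. CLICKHOUSE_HOST):
from langchain.vectorstores import Clickhouse, ClickhouseSettings
from langchain.embeddings.openai import OpenAIEmbeddings
settings = ClickhouseSettings(
    host="localhost",
    port=8123,
    database="default",
    table="langchain",
)
store = Clickhouse(embedding=OpenAIEmbeddings(), config=settings)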
class langchain.vectorstores.DeepLake(dataset_path: str = './deeplake/', token: Optional[str] = None, embedding_function: Optional[langchain.embeddings.base.Embeddings] = None, read_only: Optional[bool] = False, ingestion_batch_size: int = 1024, num_workers: int = 0, verbose: bool = True, **kwargs: Any)[source]#
Wrapper around Deep Lake, a data lake for deep learning applications.
We implement naive similarity search and filtering for fast prototyping,
but it can be extended with Tensor Query Language (TQL) for production use cases
over billions of rows.
Why Deep Lake?
Not only stores embeddings, but also the original data with version control.
Serverless, doesn’t require another service and can be used with major cloud providers (S3, GCS, etc.)
More than just a multi-modal vector store. You can use the dataset to fine-tune your own LLM models.
To use, you should have the deeplake python package installed.
Example
from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = DeepLake("langchain_store", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]], optional) – Optional list of IDs.
Returns
List of IDs of the added texts.
Return type
List[str]
delete(ids: Optional[List[str]] = None, filter: Optional[Dict[str, str]] = None, delete_all: Optional[bool] = None) → bool[source]#
Delete the entities in the dataset
Parameters
ids (Optional[List[str]], optional) – The document_ids to delete.
Defaults to None.
filter (Optional[Dict[str, str]], optional) – The filter to delete by.
Defaults to None.
delete_all (Optional[bool], optional) – Whether to drop the dataset.
Defaults to None.
delete_dataset() → None[source]#
Delete the collection.
classmethod force_delete_by_path(path: str) → None[source]#
Force delete dataset by path.
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, dataset_path: str = './deeplake/', **kwargs: Any) → langchain.vectorstores.deeplake.DeepLake[source]#
Create a Deep Lake dataset from a list of raw documents.
If a dataset_path is specified, the dataset will be persisted in that location,
otherwise by default at ./deeplake
Parameters
path (str, pathlib.Path) –
The full path to the dataset. Can be:
Deep Lake cloud path of the form hub://username/dataset_name. To write to Deep Lake cloud datasets,
ensure that you are logged in to Deep Lake
(use ‘activeloop login’ from command line)
AWS S3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment
Google Cloud Storage path of the form gcs://bucketname/path/to/dataset. Credentials are required
in either the environment
Local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset.
In-memory path of the form mem://path/to/dataset which doesn’t save the dataset, but keeps it in memory instead.
Should be used only for testing as it does not persist.
documents (List[Document]) – List of documents to add.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
Returns
Deep Lake dataset.
Return type
DeepLake
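Example (a minimal sketch; the texts and dataset path are placeholders, and a hub:// path would additionally require an Activeloop login):
from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings
db = DeepLake.from_texts(
    ["doc one", "doc two"],
    OpenAIEmbeddings(),
    dataset_path="./deeplake/",  # local path; use hub://<org>/<name> for cloud
)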
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
persist() → None[source]#
Persist the collection.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to embed and run the query on.
k – Number of Documents to return.
Defaults to 4.
embedding – Embedding function to use.
Defaults to None.
distance_metric – L2 for Euclidean, L1 for Nuclear, max for
L-infinity distance, cos for cosine similarity, ‘dot’ for dot product.
Defaults to L2.
filter – Attribute filter by metadata example {‘key’: ‘value’}.
Defaults to None.
maximal_marginal_relevance – Whether to use maximal marginal relevance.
Defaults to False.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
return_score – Whether to return the score. Defaults to False.
Returns
List of Documents most similar to the query vector.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_score(query: str, distance_metric: str = 'L2', k: int = 4, filter: Optional[Dict[str, str]] = None) → List[Tuple[langchain.schema.Document, float]][source]#
Run similarity search with Deep Lake with distance returned.
Parameters
query (str) – Query text to search for.
distance_metric – L2 for Euclidean, L1 for Nuclear, max for L-infinity
distance, cos for cosine similarity, ‘dot’ for dot product.
Defaults to L2.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text with distance in float.
Return type
List[Tuple[Document, float]]
class langchain.vectorstores.DocArrayHnswSearch(doc_index: BaseDocIndex, embedding: langchain.embeddings.base.Embeddings)[source]#
Wrapper around HnswLib storage.
To use it, you should have the docarray package with version >=0.32.0 installed.
You can install it with pip install “langchain[docarray]”.
classmethod from_params(embedding: langchain.embeddings.base.Embeddings, work_dir: str, n_dim: int, dist_metric: Literal['cosine', 'ip', 'l2'] = 'cosine', max_elements: int = 1024, index: bool = True, ef_construction: int = 200, ef: int = 10, M: int = 16, allow_replace_deleted: bool = True, num_threads: int = 1, **kwargs: Any) → langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch[source]#
Initialize DocArrayHnswSearch store.
Parameters
embedding (Embeddings) – Embedding function.
work_dir (str) – path to the location where all the data will be stored.
n_dim (int) – dimension of an embedding.
dist_metric (str) – Distance metric for DocArrayHnswSearch can be one of:
“cosine”, “ip”, and “l2”. Defaults to “cosine”.
max_elements (int) – Maximum number of vectors that can be stored.
Defaults to 1024.
index (bool) – Whether an index should be built for this field.
Defaults to True.
ef_construction (int) – defines a construction time/accuracy trade-off.
Defaults to 200.
ef (int) – parameter controlling query time/accuracy trade-off.
Defaults to 10.
M (int) – parameter that defines the maximum number of outgoing
connections in the graph. Defaults to 16.
allow_replace_deleted (bool) – Enables replacing of deleted elements
with new added ones. Defaults to True.
num_threads (int) – Sets the number of cpu threads to use. Defaults to 1.
**kwargs – Other keyword arguments to be passed to the get_doc_cls method.
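Example (a hedged sketch; the work_dir is a placeholder and n_dim=1536 assumes OpenAI text-embedding-ada-002 embeddings — match it to your own model's dimension):
from langchain.vectorstores import DocArrayHnswSearch
from langchain.embeddings.openai import OpenAIEmbeddings
store = DocArrayHnswSearch.from_params(
    embedding=OpenAIEmbeddings(),
    work_dir="./hnswlib_store",
    n_dim=1536,  # must match the dimension of the embedding model
    dist_metric="cosine",
)
store.add_texts(["hello world"])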
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, work_dir: Optional[str] = None, n_dim: Optional[int] = None, **kwargs: Any) → langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch[source]#
Create an DocArrayHnswSearch store and insert data.
Parameters
texts (List[str]) – Text data.
embedding (Embeddings) – Embedding function.
metadatas (Optional[List[dict]]) – Metadata for each text if it exists.
Defaults to None.
work_dir (str) – path to the location where all the data will be stored.
n_dim (int) – dimension of an embedding.
**kwargs – Other keyword arguments to be passed to the __init__ method.
Returns
DocArrayHnswSearch Vector Store
class langchain.vectorstores.DocArrayInMemorySearch(doc_index: BaseDocIndex, embedding: langchain.embeddings.base.Embeddings)[source]#
Wrapper around in-memory storage for exact search.
To use it, you should have the docarray package with version >=0.32.0 installed.
You can install it with pip install “langchain[docarray]”.
classmethod from_params(embedding: langchain.embeddings.base.Embeddings, metric: Literal['cosine_sim', 'euclidian_dist', 'sgeuclidean_dist'] = 'cosine_sim', **kwargs: Any) → langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch[source]#
Initialize DocArrayInMemorySearch store.
Parameters
embedding (Embeddings) – Embedding function.
metric (str) – metric for exact nearest-neighbor search.
Can be one of: “cosine_sim”, “euclidean_dist” and “sqeuclidean_dist”.
Defaults to “cosine_sim”.
**kwargs – Other keyword arguments to be passed to the get_doc_cls method.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, **kwargs: Any) → langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch[source]#
Create an DocArrayInMemorySearch store and insert data.
Parameters
texts (List[str]) – Text data.
embedding (Embeddings) – Embedding function.
metadatas (Optional[List[Dict[Any, Any]]]) – Metadata for each text
if it exists. Defaults to None.
metric (str) – metric for exact nearest-neighbor search.
Can be one of: “cosine_sim”, “euclidean_dist” and “sqeuclidean_dist”.
Defaults to “cosine_sim”.
Returns
DocArrayInMemorySearch Vector Store
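Example (a minimal sketch with placeholder texts; this store performs exact search in memory, so nothing is written to disk):
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.embeddings.openai import OpenAIEmbeddings
store = DocArrayInMemorySearch.from_texts(
    ["foo", "bar"],
    OpenAIEmbeddings(),
    metadatas=[{"source": "a"}, {"source": "b"}],
)
docs = store.similarity_search("foo", k=1)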
class langchain.vectorstores.ElasticVectorSearch(elasticsearch_url: str, index_name: str, embedding: langchain.embeddings.base.Embeddings, *, ssl_verify: Optional[Dict[str, Any]] = None)[source]#
Wrapper around Elasticsearch as a vector database.
To connect to an Elasticsearch instance that does not require
login credentials, pass the Elasticsearch URL and index name along with the
embedding object to the constructor.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url="http://localhost:9200",
index_name="test_index",
embedding=embedding
)
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticVectorSearch constructor as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the “Deployments” page.
To obtain your Elastic Cloud password for the default “elastic” user:
Log in to the Elastic Cloud console at https://cloud.elastic.co
Go to “Security” > “Users”
Locate the “elastic” user and click “Edit”
Click “Reset password”
Follow the prompts to reset the password
The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_host = "cluster_id.region_id.gcp.cloud.es.io"
elasticsearch_url = f"https://username:password@{elastic_host}:9243"
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url=elasticsearch_url,
index_name="test_index",
embedding=embedding
)
Parameters
elasticsearch_url (str) – The URL for the Elasticsearch instance.
index_name (str) – The name of the Elasticsearch index for the embeddings.
embedding (Embeddings) – An object that provides the ability to embed text.
It should be an instance of a class that subclasses the Embeddings
abstract base class, such as OpenAIEmbeddings()
Raises
ValueError – If the elasticsearch python package is not installed.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, refresh_indices: bool = True, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
refresh_indices – bool to refresh ElasticSearch indices
Returns
List of ids from adding the texts into the vectorstore.
client_search(client: Any, index_name: str, script_query: Dict, size: int) → Any[source]#
create_index(client: Any, index_name: str, mapping: Dict) → None[source]#
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, elasticsearch_url: Optional[str] = None, index_name: Optional[str] = None, refresh_indices: bool = True, **kwargs: Any) → langchain.vectorstores.elastic_vector_search.ElasticVectorSearch[source]#
Construct ElasticVectorSearch wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in the Elasticsearch instance.
Adds the documents to the newly created Elasticsearch index.
This is intended to be a quick way to get started.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch.from_texts(
texts,
embeddings,
elasticsearch_url="http://localhost:9200"
)
similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
class langchain.vectorstores.FAISS(embedding_function: typing.Callable, index: typing.Any, docstore: langchain.docstore.base.Docstore, index_to_docstore_id: typing.Dict[int, str], relevance_score_fn: typing.Optional[typing.Callable[[float], float]] = <function _default_relevance_score_fn>, normalize_L2: bool = False)[source]#
Wrapper around FAISS vector database.
To use, you should have the faiss python package installed.
Example
from langchain import FAISS
faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)
add_embeddings(text_embeddings: Iterable[Tuple[str, List[float]]], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
text_embeddings – Iterable pairs of string and embedding to
add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of unique IDs.
Returns
List of ids from adding the texts into the vectorstore.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of unique IDs.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → langchain.vectorstores.faiss.FAISS[source]#
Construct FAISS wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates an in-memory docstore
Initializes the FAISS database
This is intended to be a quick way to get started.
Example
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
faiss = FAISS.from_embeddings(text_embedding_pairs, embeddings)
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → langchain.vectorstores.faiss.FAISS[source]#
Construct FAISS wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates an in-memory docstore
Initializes the FAISS database
This is intended to be a quick way to get started.
Example
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
faiss = FAISS.from_texts(texts, embeddings)
classmethod load_local(folder_path: str, embeddings: langchain.embeddings.base.Embeddings, index_name: str = 'index') → langchain.vectorstores.faiss.FAISS[source]#
Load FAISS index, docstore, and index_to_docstore_id from disk.
Parameters
folder_path – folder path to load index, docstore,
and index_to_docstore_id from.
embeddings – Embeddings to use when generating queries
index_name – for saving with a specific index file name
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
merge_from(target: langchain.vectorstores.faiss.FAISS) → None[source]#
Merge another FAISS object with the current one.
Add the target FAISS to the current one.
Parameters
target – FAISS object you wish to merge into the current one
Returns
None.
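Example (a hedged sketch; texts are placeholders and both stores must have been built with the same embedding function):
db1 = FAISS.from_texts(["foo"], embeddings)
db2 = FAISS.from_texts(["bar"], embeddings)
db1.merge_from(db2)  # db1 now holds the documents of both stores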
save_local(folder_path: str, index_name: str = 'index') → None[source]#
Save FAISS index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path – folder path to save index, docstore,
and index_to_docstore_id to.
index_name – for saving with a specific index file name
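Example (a minimal sketch of a save/load round trip; the folder name is a placeholder, and texts/embeddings are assumed from the earlier examples):
db = FAISS.from_texts(texts, embeddings)
db.save_local("faiss_index")  # writes index.faiss and index.pkl to the folder
new_db = FAISS.load_local("faiss_index", embeddings)
docs = new_db.similarity_search("query")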
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the embedding.
similarity_search_with_score(query: str, k: int = 4) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of documents most similar to the query text with
L2 distance in float. Lower score represents more similarity.
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
embedding – Embedding vector to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of documents most similar to the query text and L2 distance
in float for each. Lower score represents more similarity.
class langchain.vectorstores.LanceDB(connection: Any, embedding: langchain.embeddings.base.Embeddings, vector_key: Optional[str] = 'vector', id_key: Optional[str] = 'id', text_key: Optional[str] = 'text')[source]#
Wrapper around LanceDB vector database.
To use, you should have lancedb python package installed.
Example
db = lancedb.connect('./lancedb')
table = db.open_table('my_table')
vectorstore = LanceDB(table, embedding_function)
vectorstore.add_texts(['text1', 'text2'])
result = vectorstore.similarity_search('text1')
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Turn texts into embeddings and add them to the database.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of ids to associate with the texts.
Returns
List of ids of the added texts.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, connection: Any = None, vector_key: Optional[str] = 'vector', id_key: Optional[str] = 'id', text_key: Optional[str] = 'text', **kwargs: Any) → langchain.vectorstores.lancedb.LanceDB[source]#
Return VectorStore initialized from texts and embeddings.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return documents most similar to the query
Parameters
query – String to query the vectorstore with.
k – Number of documents to return.
Returns
List of documents most similar to the query.
class langchain.vectorstores.MatchingEngine(project_id: str, index: MatchingEngineIndex, endpoint: MatchingEngineIndexEndpoint, embedding: Embeddings, gcs_client: storage.Client, gcs_bucket_name: str, credentials: Optional[Credentials] = None)[source]#
Vertex Matching Engine implementation of the vector store.
While the embeddings are stored in the Matching Engine, the embedded
documents will be stored in GCS.
An existing Index and corresponding Endpoint are preconditions for
using this module.
See usage in docs/modules/indexes/vectorstores/examples/matchingengine.ipynb
Note that this implementation is mostly meant for reading. While reading
is a real-time operation, updating the index takes close to one hour.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_components(project_id: str, region: str, gcs_bucket_name: str, index_id: str, endpoint_id: str, credentials_path: Optional[str] = None, embedding: Optional[langchain.embeddings.base.Embeddings] = None) → langchain.vectorstores.matching_engine.MatchingEngine[source]#
Takes the object creation out of the constructor.
Parameters
project_id – The GCP project id.
region – The default location making the API calls. It must have
the same location as the GCS bucket and must be regional.
gcs_bucket_name – The location where the vectors will be stored in
order for the index to be created.
index_id – The id of the created index.
endpoint_id – The id of the created endpoint.
credentials_path – (Optional) The path of the Google credentials on
the local file system.
embedding – The Embeddings that will be used for embedding the texts.
Returns
A configured MatchingEngine with the texts added to the index.
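Example (a hedged sketch; every id below is a placeholder for your own pre-created GCP resources, and embeddings can be any Embeddings instance):
from langchain.vectorstores import MatchingEngine
vector_store = MatchingEngine.from_components(
    project_id="my-gcp-project",
    region="us-central1",
    gcs_bucket_name="my-embeddings-bucket",
    index_id="1234567890",
    endpoint_id="0987654321",
    embedding=embeddings,
)
vector_store.add_texts(["a document to index"])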
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.matching_engine.MatchingEngine[source]#
Use from components instead.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – The string that will be used to search for similar documents.
k – The amount of neighbors that will be retrieved.
Returns
A list of k matching documents.
class langchain.vectorstores.Milvus(embedding_function: langchain.embeddings.base.Embeddings, collection_name: str = 'LangChainCollection', connection_args: Optional[dict[str, Any]] = None, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: Optional[bool] = False)[source]#
Wrapper around the Milvus vector database.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, timeout: Optional[int] = None, batch_size: int = 1000, **kwargs: Any) → List[str][source]#
Insert text data into Milvus.
Inserting data when the collection has not been made yet will result
in creating a new collection. The data of the first entity decides
the schema of the new collection: the dim is extracted from the first
embedding and the columns are decided by the first metadata dict.
Metadata keys will need to be present for all inserted values. At
the moment there is no None equivalent in Milvus.
Parameters
texts (Iterable[str]) – The texts to embed, it is assumed
that they all fit in memory.
metadatas (Optional[List[dict]]) – Metadata dicts attached to each of
the texts. Defaults to None.
timeout (Optional[int]) – Timeout for each batch insert. Defaults
to None.
batch_size (int, optional) – Batch size to use for insertion.
Defaults to 1000.
Raises
MilvusException – Failure to add texts
Returns
The resulting keys for each inserted element.
Return type
List[str]
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'LangChainCollection', connection_args: dict[str, Any] = {'host': 'localhost', 'password': '', 'port': '19530', 'secure': False, 'user': ''}, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: bool = False, **kwargs: Any) → langchain.vectorstores.milvus.Milvus[source]#
Create a Milvus collection, index it with HNSW, and insert data.
Parameters
texts (List[str]) – Text data.
embedding (Embeddings) – Embedding function.
metadatas (Optional[List[dict]]) – Metadata for each text if it exists.
Defaults to None.
collection_name (str, optional) – Collection name to use. Defaults to
“LangChainCollection”.
connection_args (dict[str, Any], optional) – Connection args to use. Defaults
to DEFAULT_MILVUS_CONNECTION.
consistency_level (str, optional) – Which consistency level to use. Defaults
to “Session”.
index_params (Optional[dict], optional) – Which index_params to use. Defaults
to None.
search_params (Optional[dict], optional) – Which search params to use.
Defaults to None.
drop_old (Optional[bool], optional) – Whether to drop the collection with
that name if it exists. Defaults to False.
Returns
Milvus Vector Store
Return type
Milvus
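Example (a minimal sketch, assuming a Milvus server reachable on localhost; the texts and collection name are placeholders):
from langchain.vectorstores import Milvus
from langchain.embeddings.openai import OpenAIEmbeddings
vector_db = Milvus.from_texts(
    texts,
    OpenAIEmbeddings(),
    connection_args={"host": "localhost", "port": "19530"},
    collection_name="LangChainCollection",
    drop_old=True,  # replace any existing collection with this name
)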
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Perform a search and return results that are reordered by MMR.
Parameters
query (str) – The text being searched.
k (int, optional) – How many results to give. Defaults to 4.
fetch_k (int, optional) – Total results to select k from.
Defaults to 20.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5
param (dict, optional) – The search params for the specified index.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error.
Defaults to None.
kwargs – Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
max_marginal_relevance_search_by_vector(embedding: list[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Perform a search and return results that are reordered by MMR.
Parameters
embedding (List[float]) – The embedding vector being searched.
k (int, optional) – How many results to give. Defaults to 4.
fetch_k (int, optional) – Total results to select k from.
Defaults to 20.
lambda_mult – Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5
param (dict, optional) – The search params for the specified index.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error.
Defaults to None.
kwargs – Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
similarity_search(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Perform a similarity search against the query string.
Parameters
query (str) – The text to search.
k (int, optional) – How many results to return. Defaults to 4.
param (dict, optional) – The search params for the index type.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
timeout (int, optional) – How long to wait before timeout error.
Defaults to None.
kwargs – Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document] | https://python.langchain.com/en/latest/reference/modules/vectorstores.html |
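Example (a hedged sketch reusing vector_db from the example above; source is a hypothetical metadata field that must exist in the collection schema):
docs = vector_db.similarity_search(
    "what is milvus?",
    k=4,
    expr='source == "docs"',  # Milvus boolean expression over metadata fields
)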