id | text | source
---|---|---
f6d7b5601415-1 | load_comments(soup_info)
Load comments from a HN post.
load_results(soup)
Load items from an HN page.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None) → None¶
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for BeautifulSoup4 get_text
bs_kwargs – kwargs for BeautifulSoup4 web page parsing
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Get important HN webpage information. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hn.HNLoader.html |
f6d7b5601415-2 | load() → List[Document][source]¶
Get important HN webpage information.
HN webpage components are:
title
content
source url
time of post
author of the post
number of comments
rank of the post
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
load_comments(soup_info: Any) → List[Document][source]¶
Load comments from a HN post.
load_results(soup: Any) → List[Document][source]¶
Load items from an HN page.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
Examples using HNLoader¶
Hacker News | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hn.HNLoader.html |
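A minimal usage sketch based on the signature above; the item URL is a hypothetical placeholder.

```python
from langchain.document_loaders.hn import HNLoader

# Load the post information and comments from a single Hacker News item page.
loader = HNLoader("https://news.ycombinator.com/item?id=<item-id>")
docs = loader.load()
```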
fac9a1db6db8-0 | langchain.document_loaders.figma.FigmaFileLoader¶
class langchain.document_loaders.figma.FigmaFileLoader(access_token: str, ids: str, key: str)[source]¶
Load Figma file.
Initialize with access token, ids, and key.
Parameters
access_token – The access token for the Figma REST API.
ids – The ids of the Figma file.
key – The key for the Figma file
Methods
__init__(access_token, ids, key)
Initialize with access token, ids, and key.
lazy_load()
A lazy loader for Documents.
load()
Load file
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(access_token: str, ids: str, key: str)[source]¶
Initialize with access token, ids, and key.
Parameters
access_token – The access token for the Figma REST API.
ids – The ids of the Figma file.
key – The key for the Figma file
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using FigmaFileLoader¶
Figma | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.figma.FigmaFileLoader.html |
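A minimal usage sketch based on the documented constructor; all credentials below are placeholders.

```python
from langchain.document_loaders.figma import FigmaFileLoader

loader = FigmaFileLoader(
    access_token="<figma-access-token>",  # placeholder REST API token
    ids="<node-ids>",                     # placeholder node ids
    key="<file-key>",                     # placeholder file key
)
docs = loader.load()
```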
3593302afccc-0 | langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload¶
class langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload[source]¶
Payload for the Embaas document extraction API.
Attributes
bytes
The base64 encoded bytes of the document to extract text from.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
__init__(*args, **kwargs)¶
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload.html |
3593302afccc-1 | items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionPayload.html |
043c484aacf0-0 | langchain.document_loaders.bibtex.BibtexLoader¶
class langchain.document_loaders.bibtex.BibtexLoader(file_path: str, *, parser: Optional[BibtexparserWrapper] = None, max_docs: Optional[int] = None, max_content_chars: Optional[int] = 4000, load_extra_metadata: bool = False, file_pattern: str = '[^:]+\\.pdf')[source]¶
Load a bibtex file.
Each document represents one entry from the bibtex file.
If a PDF file is present in the file bibtex field, the original PDF
is loaded into the document text. If no such file entry is present,
the abstract field is used instead.
Initialize the BibtexLoader.
Parameters
file_path – Path to the bibtex file.
parser – The parser to use. If None, a default parser is used.
max_docs – Max number of associated documents to load. Use -1 for
no limit.
max_content_chars – Maximum number of characters to load from the PDF.
load_extra_metadata – Whether to load extra metadata from the PDF.
file_pattern – Regex pattern to match the file name in the bibtex.
Methods
__init__(file_path, *[, parser, max_docs, ...])
Initialize the BibtexLoader.
lazy_load()
Load bibtex file using bibtexparser and get the article texts plus the article metadata.
load()
Load bibtex file documents from the given bibtex file path.
load_and_split([text_splitter])
Load Documents and split into chunks. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bibtex.BibtexLoader.html |
043c484aacf0-1 | load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, *, parser: Optional[BibtexparserWrapper] = None, max_docs: Optional[int] = None, max_content_chars: Optional[int] = 4000, load_extra_metadata: bool = False, file_pattern: str = '[^:]+\\.pdf')[source]¶
Initialize the BibtexLoader.
Parameters
file_path – Path to the bibtex file.
parser – The parser to use. If None, a default parser is used.
max_docs – Max number of associated documents to load. Use -1 for
no limit.
max_content_chars – Maximum number of characters to load from the PDF.
load_extra_metadata – Whether to load extra metadata from the PDF.
file_pattern – Regex pattern to match the file name in the bibtex.
lazy_load() → Iterator[Document][source]¶
Load bibtex file using bibtexparser and get the article texts plus the
article metadata.
See https://bibtexparser.readthedocs.io/en/master/
Returns
a list of documents with the document.page_content in text format
load() → List[Document][source]¶
Load bibtex file documents from the given bibtex file path.
See https://bibtexparser.readthedocs.io/en/master/
Parameters
file_path – the path to the bibtex file
Returns
a list of documents with the document.page_content in text format
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bibtex.BibtexLoader.html |
043c484aacf0-2 | Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BibtexLoader¶
BibTeX | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bibtex.BibtexLoader.html |
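A minimal usage sketch based on the parameters above; the .bib path is a hypothetical placeholder.

```python
from langchain.document_loaders.bibtex import BibtexLoader

# One Document per bibtex entry; linked PDFs are used when present, otherwise the abstract.
loader = BibtexLoader("./references.bib", max_docs=10)
docs = loader.load()
```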
50dde75afd24-0 | langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter¶
class langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter(code: str)[source]¶
Abstract class for the code segmenter.
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
__init__(code: str)[source]¶
abstract extract_functions_classes() → List[str][source]¶
is_valid() → bool[source]¶
abstract simplify_code() → str[source]¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter.html |
4ddc8d5c4603-0 | langchain.document_loaders.email.OutlookMessageLoader¶
class langchain.document_loaders.email.OutlookMessageLoader(file_path: str)[source]¶
Loads Outlook Message files using extract_msg.
https://github.com/TeamMsgExtractor/msg-extractor
Initialize with a file path.
Parameters
file_path – The path to the Outlook Message file.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load data into document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path.
Parameters
file_path – The path to the Outlook Message file.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using OutlookMessageLoader¶
Email | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.email.OutlookMessageLoader.html |
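A minimal usage sketch; the .msg path is a hypothetical placeholder and the extract_msg package must be installed.

```python
from langchain.document_loaders.email import OutlookMessageLoader

loader = OutlookMessageLoader("example_message.msg")  # placeholder Outlook Message file
docs = loader.load()
```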
e38bd75b8301-0 | langchain.document_loaders.lakefs.UnstructuredLakeFSLoader¶
class langchain.document_loaders.lakefs.UnstructuredLakeFSLoader(url: str, repo: str, ref: str = 'main', path: str = '', presign: bool = True, **unstructured_kwargs: Any)[source]¶
Args:
Parameters
lakefs_access_key –
lakefs_secret_key –
lakefs_endpoint –
repo –
ref –
Methods
__init__(url, repo[, ref, path, presign])
Args:
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(url: str, repo: str, ref: str = 'main', path: str = '', presign: bool = True, **unstructured_kwargs: Any)[source]¶
Args:
Parameters
lakefs_access_key –
lakefs_secret_key –
lakefs_endpoint –
repo –
ref –
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.lakefs.UnstructuredLakeFSLoader.html |
07a92bd52b45-0 | langchain.document_loaders.obsidian.ObsidianLoader¶
class langchain.document_loaders.obsidian.ObsidianLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]¶
Load Obsidian files from directory.
Initialize with a path.
Parameters
path – Path to the directory containing the Obsidian files.
encoding – Charset encoding, defaults to “UTF-8”
collect_metadata – Whether to collect metadata from the front matter.
Defaults to True.
Attributes
DATAVIEW_INLINE_BRACKET_REGEX
DATAVIEW_INLINE_PAREN_REGEX
DATAVIEW_LINE_REGEX
FRONT_MATTER_REGEX
TAG_REGEX
Methods
__init__(path[, encoding, collect_metadata])
Initialize with a path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]¶
Initialize with a path.
Parameters
path – Path to the directory containing the Obsidian files.
encoding – Charset encoding, defaults to “UTF-8”
collect_metadata – Whether to collect metadata from the front matter.
Defaults to True.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ObsidianLoader¶
Obsidian | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obsidian.ObsidianLoader.html |
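A minimal usage sketch based on the parameters above; the vault directory is a hypothetical placeholder.

```python
from langchain.document_loaders.obsidian import ObsidianLoader

# Front-matter metadata is collected by default (collect_metadata=True).
loader = ObsidianLoader("/path/to/obsidian/vault")
docs = loader.load()
```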
bd93991eb6b1-0 | langchain.document_loaders.fauna.FaunaLoader¶
class langchain.document_loaders.fauna.FaunaLoader(query: str, page_content_field: str, secret: str, metadata_fields: Optional[Sequence[str]] = None)[source]¶
Load from FaunaDB.
query¶
The FQL query string to execute.
Type
str
page_content_field¶
The field that contains the content of each page.
Type
str
secret¶
The secret key for authenticating to FaunaDB.
Type
str
metadata_fields¶
Optional list of field names to include in metadata.
Type
Optional[Sequence[str]]
Methods
__init__(query, page_content_field, secret)
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, page_content_field: str, secret: str, metadata_fields: Optional[Sequence[str]] = None)[source]¶
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using FaunaLoader¶
Fauna | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.fauna.FaunaLoader.html |
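A minimal usage sketch based on the attributes above; the FQL query, field name, and secret are placeholders.

```python
from langchain.document_loaders.fauna import FaunaLoader

loader = FaunaLoader(
    query="Item.all()",             # placeholder FQL query
    page_content_field="text",      # placeholder content field
    secret="<fauna-secret-key>",    # placeholder secret
)
docs = loader.load()
```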
e24993933c48-0 | langchain.document_loaders.telegram.concatenate_rows¶
langchain.document_loaders.telegram.concatenate_rows(row: dict) → str[source]¶
Combine message information in a readable format ready to be used. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.concatenate_rows.html |
6f904581bca0-0 | langchain.document_loaders.srt.SRTLoader¶
class langchain.document_loaders.srt.SRTLoader(file_path: str)[source]¶
Load .srt (subtitle) files.
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load using pysrt file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load using pysrt file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using SRTLoader¶
Subtitle | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.srt.SRTLoader.html |
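A minimal usage sketch; the subtitle path is a hypothetical placeholder and the pysrt package must be installed.

```python
from langchain.document_loaders.srt import SRTLoader

loader = SRTLoader("example.srt")  # placeholder .srt file
docs = loader.load()
```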
037f3c3265d2-0 | langchain.document_loaders.blockchain.BlockchainDocumentLoader¶
class langchain.document_loaders.blockchain.BlockchainDocumentLoader(contract_address: str, blockchainType: BlockchainType = BlockchainType.ETH_MAINNET, api_key: str = 'docs-demo', startToken: str = '', get_all_tokens: bool = False, max_execution_time: Optional[int] = None)[source]¶
Load elements from a blockchain smart contract.
The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,
Polygon mainnet, and Polygon Mumbai testnet.
If no BlockchainType is specified, the default is Ethereum mainnet.
The Loader uses the Alchemy API to interact with the blockchain.
ALCHEMY_API_KEY environment variable must be set to use this loader.
The API returns 100 NFTs per request and can be paginated using the
startToken parameter.
If get_all_tokens is set to True, the loader will get all tokens
on the contract. Note that for contracts with a large number of tokens,
this may take a long time (e.g. 10k tokens is 100 requests).
Default value is false for this reason.
The max_execution_time (sec) can be set to limit the execution time
of the loader.
Future versions of this loader can:
Support additional Alchemy APIs (e.g. getTransactions, etc.)
Support additional blockchain APIs (e.g. Infura, Opensea, etc.)
Parameters
contract_address – The address of the smart contract.
blockchainType – The blockchain type.
api_key – The Alchemy API key.
startToken – The start token for pagination.
get_all_tokens – Whether to get all tokens on the contract.
max_execution_time – The maximum execution time (sec).
Methods
__init__(contract_address[, blockchainType, ...])
param contract_address | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainDocumentLoader.html |
037f3c3265d2-1 | __init__(contract_address[, blockchainType, ...])
param contract_address
The address of the smart contract.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(contract_address: str, blockchainType: BlockchainType = BlockchainType.ETH_MAINNET, api_key: str = 'docs-demo', startToken: str = '', get_all_tokens: bool = False, max_execution_time: Optional[int] = None)[source]¶
Parameters
contract_address – The address of the smart contract.
blockchainType – The blockchain type.
api_key – The Alchemy API key.
startToken – The start token for pagination.
get_all_tokens – Whether to get all tokens on the contract.
max_execution_time – The maximum execution time (sec).
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BlockchainDocumentLoader¶
Blockchain | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainDocumentLoader.html |
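A minimal usage sketch based on the parameters above; the contract address is a placeholder, and BlockchainType is assumed to be importable from the same module (the default value in the signature references it).

```python
from langchain.document_loaders.blockchain import (
    BlockchainDocumentLoader,
    BlockchainType,
)

loader = BlockchainDocumentLoader(
    contract_address="<contract-address>",      # placeholder NFT contract address
    blockchainType=BlockchainType.ETH_MAINNET,
    api_key="docs-demo",                        # demo key; use your own Alchemy API key
)
docs = loader.load()
```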
445f22acc369-0 | langchain.document_loaders.image_captions.ImageCaptionLoader¶
class langchain.document_loaders.image_captions.ImageCaptionLoader(images: Union[str, bytes, List[Union[str, bytes]]], blip_processor: str = 'Salesforce/blip-image-captioning-base', blip_model: str = 'Salesforce/blip-image-captioning-base')[source]¶
Load image captions.
By default, the loader utilizes the pre-trained
Salesforce BLIP image captioning model.
https://huggingface.co/Salesforce/blip-image-captioning-base
Initialize with a list of image data (bytes) or file paths
Parameters
images – Either a single image or a list of images. Accepts
image data (bytes) or file paths to images.
blip_processor – The name of the pre-trained BLIP processor.
blip_model – The name of the pre-trained BLIP model.
Methods
__init__(images[, blip_processor, blip_model])
Initialize with a list of image data (bytes) or file paths
lazy_load()
A lazy loader for Documents.
load()
Load from a list of image data or file paths
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(images: Union[str, bytes, List[Union[str, bytes]]], blip_processor: str = 'Salesforce/blip-image-captioning-base', blip_model: str = 'Salesforce/blip-image-captioning-base')[source]¶
Initialize with a list of image data (bytes) or file paths
Parameters
images – Either a single image or a list of images. Accepts
image data (bytes) or file paths to images.
blip_processor – The name of the pre-trained BLIP processor.
blip_model – The name of the pre-trained BLIP model. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.image_captions.ImageCaptionLoader.html |
445f22acc369-1 | blip_model – The name of the pre-trained BLIP model.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load from a list of image data or file paths
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ImageCaptionLoader¶
Image captions | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.image_captions.ImageCaptionLoader.html |
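A minimal usage sketch; the image paths are placeholders and the transformers package with the BLIP model must be available.

```python
from langchain.document_loaders.image_captions import ImageCaptionLoader

loader = ImageCaptionLoader(images=["photo1.jpg", "photo2.png"])  # placeholder image paths
docs = loader.load()  # one caption Document per image
```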
4c5910c251e9-0 | langchain.document_loaders.browserless.BrowserlessLoader¶
class langchain.document_loaders.browserless.BrowserlessLoader(api_token: str, urls: Union[str, List[str]], text_content: bool = True)[source]¶
Load webpages with Browserless /content endpoint.
Initialize with API token and the URLs to scrape
Attributes
api_token
Browserless API token.
urls
List of URLs to scrape.
Methods
__init__(api_token, urls[, text_content])
Initialize with API token and the URLs to scrape
lazy_load()
Lazy load Documents from URLs.
load()
Load Documents from URLs.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(api_token: str, urls: Union[str, List[str]], text_content: bool = True)[source]¶
Initialize with API token and the URLs to scrape
lazy_load() → Iterator[Document][source]¶
Lazy load Documents from URLs.
load() → List[Document][source]¶
Load Documents from URLs.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BrowserlessLoader¶
Browserless | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.browserless.BrowserlessLoader.html |
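A minimal usage sketch based on the attributes above; the API token and URL are placeholders.

```python
from langchain.document_loaders.browserless import BrowserlessLoader

loader = BrowserlessLoader(
    api_token="<browserless-api-token>",  # placeholder token
    urls=["https://example.com"],
    text_content=True,
)
docs = loader.load()
```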
392eceed0fef-0 | langchain.document_loaders.url.UnstructuredURLLoader¶
class langchain.document_loaders.url.UnstructuredURLLoader(urls: List[str], continue_on_failure: bool = True, mode: str = 'single', show_progress_bar: bool = False, **unstructured_kwargs: Any)[source]¶
Load files from remote URLs using Unstructured.
Use the unstructured partition function to detect the MIME type
and route the file to the appropriate partitioner.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredURLLoader
loader = UnstructuredURLLoader(
    urls=["<url-1>", "<url-2>"], mode="elements", strategy="fast",
)
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
Initialize with file path.
Methods
__init__(urls[, continue_on_failure, mode, ...])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(urls: List[str], continue_on_failure: bool = True, mode: str = 'single', show_progress_bar: bool = False, **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url.UnstructuredURLLoader.html |
392eceed0fef-1 | load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredURLLoader¶
URL | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url.UnstructuredURLLoader.html |
bb300dffbc11-0 | langchain.document_loaders.url_selenium.SeleniumURLLoader¶
class langchain.document_loaders.url_selenium.SeleniumURLLoader(urls: List[str], continue_on_failure: bool = True, browser: Literal['chrome', 'firefox'] = 'chrome', binary_location: Optional[str] = None, executable_path: Optional[str] = None, headless: bool = True, arguments: List[str] = [])[source]¶
Load HTML pages with Selenium and parse with Unstructured.
This is useful for loading pages that require javascript to render.
urls¶
List of URLs to load.
Type
List[str]
continue_on_failure¶
If True, continue loading other URLs on failure.
Type
bool
browser¶
The browser to use, either ‘chrome’ or ‘firefox’.
Type
str
binary_location¶
The location of the browser binary.
Type
Optional[str]
executable_path¶
The path to the browser executable.
Type
Optional[str]
headless¶
If True, the browser will run in headless mode.
Type
bool
arguments¶
List of arguments to pass to the browser.
Type
List[str]
Load a list of URLs using Selenium and unstructured.
Methods
__init__(urls[, continue_on_failure, ...])
Load a list of URLs using Selenium and unstructured.
lazy_load()
A lazy loader for Documents.
load()
Load the specified URLs using Selenium and create Document instances.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(urls: List[str], continue_on_failure: bool = True, browser: Literal['chrome', 'firefox'] = 'chrome', binary_location: Optional[str] = None, executable_path: Optional[str] = None, headless: bool = True, arguments: List[str] = [])[source]¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_selenium.SeleniumURLLoader.html |
bb300dffbc11-1 | Load a list of URLs using Selenium and unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load the specified URLs using Selenium and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using SeleniumURLLoader¶
URL | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_selenium.SeleniumURLLoader.html |
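A minimal usage sketch; requires a local Chrome or Firefox plus the selenium and unstructured packages.

```python
from langchain.document_loaders.url_selenium import SeleniumURLLoader

loader = SeleniumURLLoader(
    urls=["https://example.com"],  # placeholder URL
    browser="chrome",
    headless=True,
)
docs = loader.load()
```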
ba8bd9f78c84-0 | langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter¶
class langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter(code: str)[source]¶
Code segmenter for JavaScript.
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
__init__(code: str)[source]¶
extract_functions_classes() → List[str][source]¶
is_valid() → bool[source]¶
simplify_code() → str[source]¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.javascript.JavaScriptSegmenter.html |
9c9c15754a3b-0 | langchain.document_loaders.mastodon.MastodonTootsLoader¶
class langchain.document_loaders.mastodon.MastodonTootsLoader(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]¶
Load the Mastodon ‘toots’.
Instantiate Mastodon toots loader.
Parameters
mastodon_accounts – The list of Mastodon accounts to query.
number_toots – How many toots to pull for each account. Defaults to 100.
exclude_replies – Whether to exclude reply toots from the load.
Defaults to False.
access_token – An access token if toots are loaded as a Mastodon app. Can
also be specified via the environment variable “MASTODON_ACCESS_TOKEN”.
api_base_url – A Mastodon API base URL to talk to, if not using the default.
Defaults to “https://mastodon.social”.
Methods
__init__(mastodon_accounts[, number_toots, ...])
Instantiate Mastodon toots loader.
lazy_load()
A lazy loader for Documents.
load()
Load toots into documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]¶
Instantiate Mastodon toots loader.
Parameters
mastodon_accounts – The list of Mastodon accounts to query.
number_toots – How many toots to pull for each account. Defaults to 100. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mastodon.MastodonTootsLoader.html |
9c9c15754a3b-1 | exclude_replies – Whether to exclude reply toots from the load.
Defaults to False.
access_token – An access token if toots are loaded as a Mastodon app. Can
also be specified via the environment variable “MASTODON_ACCESS_TOKEN”.
api_base_url – A Mastodon API base URL to talk to, if not using the default.
Defaults to “https://mastodon.social”.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load toots into documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using MastodonTootsLoader¶
Mastodon | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mastodon.MastodonTootsLoader.html |
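A minimal usage sketch based on the parameters above; the account handle is a placeholder, and no access token is needed for public toots.

```python
from langchain.document_loaders.mastodon import MastodonTootsLoader

loader = MastodonTootsLoader(
    mastodon_accounts=["@someuser@mastodon.social"],  # placeholder handle
    number_toots=50,
)
docs = loader.load()
```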
f71bd88d25ab-0 | langchain.document_loaders.rspace.RSpaceLoader¶
class langchain.document_loaders.rspace.RSpaceLoader(global_id: str, api_key: Optional[str] = None, url: Optional[str] = None)[source]¶
Loads content from RSpace notebooks, folders, documents or PDF Gallery files into
Langchain documents.
Maps RSpace document <-> Langchain Document in 1-1. PDFs are imported using PyPDF.
Requirements are rspace_client (pip install rspace_client) and PyPDF if importing PDF docs (pip install pypdf).
api_key: RSpace API key - can also be supplied as environment variable
‘RSPACE_API_KEY’
url: str
The URL of your RSpace instance - can also be supplied as environment
variable ‘RSPACE_URL’
global_id: str
The global ID of the resource to load,
e.g. ‘SD12344’ (a single document); ‘GL12345’(A PDF file in the gallery);
‘NB4567’ (a notebook); ‘FL12244’ (a folder)
Methods
__init__(global_id[, api_key, url])
api_key: RSpace API key - can also be supplied as environment variable 'RSPACE_API_KEY' url: str The URL of your RSpace instance - can also be supplied as environment variable 'RSPACE_URL' global_id: str The global ID of the resource to load, e.g. 'SD12344' (a single document); 'GL12345'(A PDF file in the gallery); 'NB4567' (a notebook); 'FL12244' (a folder).
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
validate_environment(values) | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rspace.RSpaceLoader.html |
f71bd88d25ab-1 | Load Documents and split into chunks.
validate_environment(values)
Validate that API key and URL exist in environment.
__init__(global_id: str, api_key: Optional[str] = None, url: Optional[str] = None)[source]¶
api_key: RSpace API key - can also be supplied as environment variable
‘RSPACE_API_KEY’
url: str
The URL of your RSpace instance - can also be supplied as environment
variable ‘RSPACE_URL’
global_id: str
The global ID of the resource to load,
e.g. ‘SD12344’ (a single document); ‘GL12345’(A PDF file in the gallery);
‘NB4567’ (a notebook); ‘FL12244’ (a folder)
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod validate_environment(values: Dict) → Dict[source]¶
Validate that API key and URL exist in environment. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rspace.RSpaceLoader.html |
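A minimal usage sketch based on the parameters above; the global id, API key, and URL are placeholders (the key and URL may also come from RSPACE_API_KEY and RSPACE_URL).

```python
from langchain.document_loaders.rspace import RSpaceLoader

loader = RSpaceLoader(
    global_id="SD12344",                   # e.g. a single document id
    api_key="<rspace-api-key>",            # placeholder key
    url="https://<your-rspace-instance>",  # placeholder instance URL
)
docs = loader.load()
```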
390c2a93a239-0 | langchain.document_loaders.chatgpt.concatenate_rows¶
langchain.document_loaders.chatgpt.concatenate_rows(message: dict, title: str) → str[source]¶
Combine message information in a readable format ready to be used.
Parameters
message – Message to be concatenated
title – Title of the conversation
Returns
Concatenated message | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.chatgpt.concatenate_rows.html |
64dabe809a7d-0 | langchain.document_loaders.parsers.grobid.GrobidParser¶
class langchain.document_loaders.parsers.grobid.GrobidParser(segment_sentences: bool, grobid_server: str = 'http://localhost:8070/api/processFulltextDocument')[source]¶
Load article PDF files using Grobid.
Methods
__init__(segment_sentences[, grobid_server])
lazy_parse(blob)
Lazy parsing interface.
parse(blob)
Eagerly parse the blob into a document or documents.
process_xml(file_path, xml_data, ...)
Process the XML file from Grobid.
__init__(segment_sentences: bool, grobid_server: str = 'http://localhost:8070/api/processFulltextDocument') → None[source]¶
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
process_xml(file_path: str, xml_data: str, segment_sentences: bool) → Iterator[Document][source]¶
Process the XML file from Grobid.
Examples using GrobidParser¶
Grobid | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.grobid.GrobidParser.html |
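A minimal usage sketch assuming a Grobid server is running at the default endpoint and that Blob.from_path is available from langchain.document_loaders.blob_loaders; the PDF path is a placeholder.

```python
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.grobid import GrobidParser

parser = GrobidParser(segment_sentences=False)
blob = Blob.from_path("paper.pdf")  # placeholder article PDF
docs = parser.parse(blob)
```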
0703b99a37c1-0 | langchain.document_loaders.azlyrics.AZLyricsLoader¶
class langchain.document_loaders.azlyrics.AZLyricsLoader(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None)[source]¶
Load AZLyrics webpages.
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for BeautifulSoup4 get_text
bs_kwargs – kwargs for BeautifulSoup4 web page parsing
Attributes
web_path
Methods
__init__([web_path, header_template, ...])
Initialize loader.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load webpages into Documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azlyrics.AZLyricsLoader.html |
0703b99a37c1-1 | scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None) → None¶
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for BeautifulSoup4 get_text
bs_kwargs – kwargs for BeautifulSoup4 web page parsing
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load webpages into Documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azlyrics.AZLyricsLoader.html |
0703b99a37c1-2 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
Examples using AZLyricsLoader¶
AZLyrics | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azlyrics.AZLyricsLoader.html |
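A minimal usage sketch; the lyrics URL is a hypothetical placeholder.

```python
from langchain.document_loaders.azlyrics import AZLyricsLoader

loader = AZLyricsLoader("https://www.azlyrics.com/lyrics/<artist>/<song>.html")  # placeholder URL
docs = loader.load()
```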
5ea9bfc6a6bb-0 | langchain.document_loaders.parsers.language.python.PythonSegmenter¶
class langchain.document_loaders.parsers.language.python.PythonSegmenter(code: str)[source]¶
Code segmenter for Python.
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
__init__(code: str)[source]¶
extract_functions_classes() → List[str][source]¶
is_valid() → bool[source]¶
simplify_code() → str[source]¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.python.PythonSegmenter.html |
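A minimal usage sketch based on the methods above; the exact shape of the returned strings is determined by the segmenter.

```python
from langchain.document_loaders.parsers.language.python import PythonSegmenter

code = "def greet():\n    print('hi')\n\nclass Greeter:\n    pass\n"
segmenter = PythonSegmenter(code)
if segmenter.is_valid():
    functions_classes = segmenter.extract_functions_classes()  # per-definition source strings
    simplified = segmenter.simplify_code()                      # remaining code with definitions summarized
```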
5c3474a4c2e1-0 | langchain.document_loaders.facebook_chat.FacebookChatLoader¶
class langchain.document_loaders.facebook_chat.FacebookChatLoader(path: str)[source]¶
Load Facebook Chat messages directory dump.
Initialize with a path.
Methods
__init__(path)
Initialize with a path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str)[source]¶
Initialize with a path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using FacebookChatLoader¶
Facebook Chat | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.facebook_chat.FacebookChatLoader.html |
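A minimal usage sketch; the path to the exported chat file is a hypothetical placeholder.

```python
from langchain.document_loaders.facebook_chat import FacebookChatLoader

loader = FacebookChatLoader("path/to/facebook_chat.json")  # placeholder export path
docs = loader.load()
```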
057d5d0681ac-0 | langchain.document_loaders.parsers.generic.MimeTypeBasedParser¶
class langchain.document_loaders.parsers.generic.MimeTypeBasedParser(handlers: Mapping[str, BaseBlobParser], *, fallback_parser: Optional[BaseBlobParser] = None)[source]¶
Parser that uses mime-types to parse a blob.
This parser is useful for simple pipelines where the mime-type is sufficient
to determine how to parse a blob.
To use, configure handlers based on mime-types and pass them to the initializer.
Example
from langchain.document_loaders.parsers.generic import MimeTypeBasedParser
parser = MimeTypeBasedParser(
    handlers={"application/pdf": ...},
    fallback_parser=...,
)
Define a parser that uses mime-types to determine how to parse a blob.
Parameters
handlers – A mapping from mime-types to functions that take a blob, parse it
and return a document.
fallback_parser – A fallback_parser parser to use if the mime-type is not
found in the handlers. If provided, this parser will be
used to parse blobs with all mime-types not found in
the handlers.
If not provided, a ValueError will be raised if the
mime-type is not found in the handlers.
Methods
__init__(handlers, *[, fallback_parser])
Define a parser that uses mime-types to determine how to parse a blob.
lazy_parse(blob)
Load documents from a blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(handlers: Mapping[str, BaseBlobParser], *, fallback_parser: Optional[BaseBlobParser] = None) → None[source]¶
Define a parser that uses mime-types to determine how to parse a blob.
Parameters
handlers – A mapping from mime-types to functions that take a blob, parse it
and return a document. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.generic.MimeTypeBasedParser.html |
057d5d0681ac-1 | and return a document.
fallback_parser – A fallback_parser parser to use if the mime-type is not
found in the handlers. If provided, this parser will be
used to parse blobs with all mime-types not found in
the handlers.
If not provided, a ValueError will be raised if the
mime-type is not found in the handlers.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Load documents from a blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.generic.MimeTypeBasedParser.html |
5bfd5cddb292-0 | langchain.document_loaders.helpers.detect_file_encodings¶
langchain.document_loaders.helpers.detect_file_encodings(file_path: str, timeout: int = 5) → List[FileEncoding][source]¶
Try to detect the file encoding.
Returns a list of FileEncoding tuples with the detected encodings ordered
by confidence.
Parameters
file_path – The path to the file to detect the encoding for.
timeout – The timeout in seconds for the encoding detection. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.helpers.detect_file_encodings.html |
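A minimal usage sketch; the file path is a placeholder and each returned FileEncoding tuple is assumed to expose the detected encoding as a field.

```python
from langchain.document_loaders.helpers import detect_file_encodings

encodings = detect_file_encodings("data.txt", timeout=5)  # placeholder path
if encodings:
    best_guess = encodings[0].encoding  # highest-confidence detection first
```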
faf12e42e0cd-0 | langchain.document_loaders.youtube.YoutubeLoader¶
class langchain.document_loaders.youtube.YoutubeLoader(video_id: str, add_video_info: bool = False, language: Union[str, Sequence[str]] = 'en', translation: str = 'en', continue_on_failure: bool = False)[source]¶
Load YouTube transcripts.
Initialize with YouTube video ID.
Methods
__init__(video_id[, add_video_info, ...])
Initialize with YouTube video ID.
extract_video_id(youtube_url)
Extract video id from common YT urls.
from_youtube_url(youtube_url, **kwargs)
Given youtube URL, load video.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(video_id: str, add_video_info: bool = False, language: Union[str, Sequence[str]] = 'en', translation: str = 'en', continue_on_failure: bool = False)[source]¶
Initialize with YouTube video ID.
static extract_video_id(youtube_url: str) → str[source]¶
Extract video id from common YT urls.
classmethod from_youtube_url(youtube_url: str, **kwargs: Any) → YoutubeLoader[source]¶
Given youtube URL, load video.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using YoutubeLoader¶
YouTube
YouTube transcripts | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.youtube.YoutubeLoader.html |
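A minimal usage sketch using the documented from_youtube_url constructor; the video id is a placeholder.

```python
from langchain.document_loaders.youtube import YoutubeLoader

loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=<video-id>", add_video_info=False
)
docs = loader.load()
```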
0d15db966493-0 | langchain.document_loaders.bigquery.BigQueryLoader¶
class langchain.document_loaders.bigquery.BigQueryLoader(query: str, project: Optional[str] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None, credentials: Optional[Credentials] = None)[source]¶
Load from the Google Cloud Platform BigQuery.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Initialize BigQuery document loader.
Parameters
query – The query to run in BigQuery.
project – Optional. The project to run the query in.
page_content_columns – Optional. The columns to write into the page_content
of the document.
metadata_columns – Optional. The columns to write into the metadata of the
document.
credentials – google.auth.credentials.Credentials, optional
Credentials for accessing Google APIs. Use this parameter to override
default credentials, such as to use Compute Engine
(google.auth.compute_engine.Credentials) or Service Account
(google.oauth2.service_account.Credentials) credentials directly.
Methods
__init__(query[, project, ...])
Initialize BigQuery document loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, project: Optional[str] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None, credentials: Optional[Credentials] = None)[source]¶
Initialize BigQuery document loader.
Parameters
query – The query to run in BigQuery. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bigquery.BigQueryLoader.html |
0d15db966493-1 | Initialize BigQuery document loader.
Parameters
query – The query to run in BigQuery.
project – Optional. The project to run the query in.
page_content_columns – Optional. The columns to write into the page_content
of the document.
metadata_columns – Optional. The columns to write into the metadata of the
document.
credentials – google.auth.credentials.Credentials, optional
Credentials for accessing Google APIs. Use this parameter to override
default credentials, such as to use Compute Engine
(google.auth.compute_engine.Credentials) or Service Account
(google.oauth2.service_account.Credentials) credentials directly.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BigQueryLoader¶
Google BigQuery | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.bigquery.BigQueryLoader.html |
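A minimal usage sketch based on the parameters above; the project, dataset, table, and column names are placeholders, and default Google credentials are assumed.

```python
from langchain.document_loaders.bigquery import BigQueryLoader

loader = BigQueryLoader(
    query="SELECT title, body FROM `my_project.my_dataset.articles`",  # placeholder query
    page_content_columns=["body"],
    metadata_columns=["title"],
)
docs = loader.load()
```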
b98fb03f41a2-0 | langchain.document_loaders.polars_dataframe.PolarsDataFrameLoader¶
class langchain.document_loaders.polars_dataframe.PolarsDataFrameLoader(data_frame: Any, *, page_content_column: str = 'text')[source]¶
Load Polars DataFrame.
Initialize with dataframe object.
Parameters
data_frame – Polars DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
Methods
__init__(data_frame, *[, page_content_column])
Initialize with dataframe object.
lazy_load()
Lazy load records from dataframe.
load()
Load full dataframe.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(data_frame: Any, *, page_content_column: str = 'text')[source]¶
Initialize with dataframe object.
Parameters
data_frame – Polars DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
lazy_load() → Iterator[Document][source]¶
Lazy load records from dataframe.
load() → List[Document]¶
Load full dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using PolarsDataFrameLoader¶
Polars DataFrame | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.polars_dataframe.PolarsDataFrameLoader.html |
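A minimal usage sketch; the DataFrame contents are illustrative.

```python
import polars as pl

from langchain.document_loaders.polars_dataframe import PolarsDataFrameLoader

df = pl.DataFrame({"text": ["first row", "second row"], "source": ["a", "b"]})
loader = PolarsDataFrameLoader(df, page_content_column="text")
docs = loader.load()
```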
4d8379190c3f-0 | langchain.document_loaders.diffbot.DiffbotLoader¶
class langchain.document_loaders.diffbot.DiffbotLoader(api_token: str, urls: List[str], continue_on_failure: bool = True)[source]¶
Load Diffbot json file.
Initialize with API token, ids, and key.
Parameters
api_token – Diffbot API token.
urls – List of URLs to load.
continue_on_failure – Whether to continue loading other URLs if one fails.
Defaults to True.
Methods
__init__(api_token, urls[, continue_on_failure])
Initialize with API token, ids, and key.
lazy_load()
A lazy loader for Documents.
load()
Extract text from Diffbot on all the URLs and return Documents
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(api_token: str, urls: List[str], continue_on_failure: bool = True)[source]¶
Initialize with API token, ids, and key.
Parameters
api_token – Diffbot API token.
urls – List of URLs to load.
continue_on_failure – Whether to continue loading other URLs if one fails.
Defaults to True.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Extract text from Diffbot on all the URLs and return Documents
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using DiffbotLoader¶
Diffbot | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.diffbot.DiffbotLoader.html |
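A minimal usage sketch based on the parameters above; the token and URL are placeholders.

```python
from langchain.document_loaders.diffbot import DiffbotLoader

loader = DiffbotLoader(
    api_token="<diffbot-api-token>",       # placeholder token
    urls=["https://example.com/article"],  # placeholder URL
)
docs = loader.load()
```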
3f3431220501-0 | langchain.document_loaders.async_html.AsyncHtmlLoader¶
class langchain.document_loaders.async_html.AsyncHtmlLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None, autoset_encoding: bool = True, encoding: Optional[str] = None, default_parser: str = 'html.parser', requests_per_second: int = 2, requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, ignore_load_errors: bool = False)[source]¶
Load HTML asynchronously.
Initialize with a webpage path.
Methods
__init__(web_path[, header_template, ...])
Initialize with a webpage path.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load text from the url(s) in web_path.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None, autoset_encoding: bool = True, encoding: Optional[str] = None, default_parser: str = 'html.parser', requests_per_second: int = 2, requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, ignore_load_errors: bool = False)[source]¶
Initialize with a webpage path.
async fetch_all(urls: List[str]) → Any[source]¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document][source]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.async_html.AsyncHtmlLoader.html |
3f3431220501-1 | load() → List[Document][source]¶
Load text from the url(s) in web_path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AsyncHtmlLoader¶
html2text
AsyncHtmlLoader
Set env var OPENAI_API_KEY or load from a .env file: | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.async_html.AsyncHtmlLoader.html |
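A minimal usage sketch; the URLs are placeholders.

```python
from langchain.document_loaders.async_html import AsyncHtmlLoader

loader = AsyncHtmlLoader(["https://example.com", "https://example.org"])
docs = loader.load()  # pages are fetched concurrently, limited by requests_per_second
```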
1815d4498c1e-0 | langchain.document_loaders.assemblyai.TranscriptFormat¶
class langchain.document_loaders.assemblyai.TranscriptFormat(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Transcript format to use for the document loader.
TEXT = 'text'¶
One document with the transcription text
SENTENCES = 'sentences'¶
Multiple documents, splits the transcription by each sentence
PARAGRAPHS = 'paragraphs'¶
Multiple documents, splits the transcription by each paragraph
SUBTITLES_SRT = 'subtitles_srt'¶
One document with the transcript exported in SRT subtitles format
SUBTITLES_VTT = 'subtitles_vtt'¶
One document with the transcript exported in VTT subtitles format
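Example (a sketch of how the enum is typically passed to the AssemblyAI loader; the file path is a placeholder, an AssemblyAI API key is assumed to be configured, and AssemblyAIAudioTranscriptLoader is documented on its own page):
from langchain.document_loaders import AssemblyAIAudioTranscriptLoader
from langchain.document_loaders.assemblyai import TranscriptFormat
loader = AssemblyAIAudioTranscriptLoader(
    file_path="./my_audio.mp3",  # placeholder local path or URL
    transcript_format=TranscriptFormat.SENTENCES,
)
docs = loader.load()  # with SENTENCES, one Document per sentence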
Examples using TranscriptFormat¶
AssemblyAI Audio Transcripts | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.assemblyai.TranscriptFormat.html |
3728042394e3-0 | langchain.document_loaders.duckdb_loader.DuckDBLoader¶
class langchain.document_loaders.duckdb_loader.DuckDBLoader(query: str, database: str = ':memory:', read_only: bool = False, config: Optional[Dict[str, str]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]¶
Load from DuckDB.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Parameters
query – The query to execute.
database – The database to connect to. Defaults to “:memory:”.
read_only – Whether to open the database in read-only mode.
Defaults to False.
config – A dictionary of configuration options to pass to the database.
Optional.
page_content_columns – The columns to write into the page_content
of the document. Optional.
metadata_columns – The columns to write into the metadata of the document.
Optional.
Methods
__init__(query[, database, read_only, ...])
param query
The query to execute.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, database: str = ':memory:', read_only: bool = False, config: Optional[Dict[str, str]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]¶
Parameters
query – The query to execute.
database – The database to connect to. Defaults to “:memory:”. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.duckdb_loader.DuckDBLoader.html |
read_only – Whether to open the database in read-only mode.
Defaults to False.
config – A dictionary of configuration options to pass to the database.
Optional.
page_content_columns – The columns to write into the page_content
of the document. Optional.
metadata_columns – The columns to write into the metadata of the document.
Optional.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
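Example (a minimal sketch; posts.duckdb and its posts table with title, body and author columns are made up for illustration):
from langchain.document_loaders import DuckDBLoader
loader = DuckDBLoader(
    query="SELECT title, body, author FROM posts",
    database="posts.duckdb",
    page_content_columns=["title", "body"],
    metadata_columns=["author"],
)
docs = loader.load()  # one Document per row; author lands in metadata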
Examples using DuckDBLoader¶
DuckDB | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.duckdb_loader.DuckDBLoader.html |
53f68f772bc0-0 | langchain.document_loaders.blackboard.BlackboardLoader¶
class langchain.document_loaders.blackboard.BlackboardLoader(blackboard_course_url: str, bbrouter: str, load_all_recursively: bool = True, basic_auth: Optional[Tuple[str, str]] = None, cookies: Optional[dict] = None, continue_on_failure: bool = False)[source]¶
Load a Blackboard course.
This loader is not compatible with all Blackboard courses. It is only
compatible with courses that use the new Blackboard interface.
To use this loader, you must have the BbRouter cookie. You can get this
cookie by logging into the course and then copying the value of the
BbRouter cookie from the browser’s developer tools.
Example
from langchain.document_loaders import BlackboardLoader
loader = BlackboardLoader(
blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",
bbrouter="expires:12345...",
)
documents = loader.load()
Initialize with blackboard course url.
The BbRouter cookie is required for most blackboard courses.
Parameters
blackboard_course_url – Blackboard course url.
bbrouter – BbRouter cookie.
load_all_recursively – If True, load all documents recursively.
basic_auth – Basic auth credentials.
cookies – Cookies.
continue_on_failure – whether to continue loading the sitemap if an error
occurs loading a url, emitting a warning instead of raising an
exception. Setting this to True makes the loader more robust, but also
may result in missing data. Default: False
Raises
ValueError – If blackboard course url is invalid.
Attributes
web_path
Methods
__init__(blackboard_course_url, bbrouter[, ...]) | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blackboard.BlackboardLoader.html |
Initialize with blackboard course url.
aload()
Load text from the urls in web_path async into Documents.
check_bs4()
Check if BeautifulSoup4 is installed.
download(path)
Download a file from an url.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
parse_filename(url)
Parse the filename from an url.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(blackboard_course_url: str, bbrouter: str, load_all_recursively: bool = True, basic_auth: Optional[Tuple[str, str]] = None, cookies: Optional[dict] = None, continue_on_failure: bool = False)[source]¶
Initialize with blackboard course url.
The BbRouter cookie is required for most blackboard courses.
Parameters
blackboard_course_url – Blackboard course url.
bbrouter – BbRouter cookie.
load_all_recursively – If True, load all documents recursively.
basic_auth – Basic auth credentials.
cookies – Cookies.
continue_on_failure – whether to continue loading the sitemap if an error
occurs loading a url, emitting a warning instead of raising an
exception. Setting this to True makes the loader more robust, but also
may result in missing data. Default: False
Raises
ValueError – If blackboard course url is invalid.
aload() → List[Document]¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blackboard.BlackboardLoader.html |
Load text from the urls in web_path async into Documents.
check_bs4() → None[source]¶
Check if BeautifulSoup4 is installed.
Raises
ImportError – If BeautifulSoup4 is not installed.
download(path: str) → None[source]¶
Download a file from an url.
Parameters
path – Path to the file.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load data into Document objects.
Returns
List of Documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
parse_filename(url: str) → str[source]¶
Parse the filename from an url.
Parameters
url – Url to parse the filename from.
Returns
The filename.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
Examples using BlackboardLoader¶
Blackboard | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blackboard.BlackboardLoader.html |
30d2a514a90a-0 | langchain.document_loaders.embaas.EmbaasLoader¶
class langchain.document_loaders.embaas.EmbaasLoader[source]¶
Bases: BaseEmbaasLoader, BaseLoader
Load from Embaas.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasLoader
loader = EmbaasLoader(file_path="example.mp3")
documents = loader.load()
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
file_path="example.pdf",
params={
"should_embed": True,
"model": "e5-large-v2",
"chunk_size": 256,
"chunk_splitter": "CharacterTextSplitter"
}
)
documents = loader.load()
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/'¶
The URL of the Embaas document extraction API.
param blob_loader: Optional[langchain.document_loaders.embaas.EmbaasBlobLoader] = None¶
The blob loader to use. If not provided, a default one will be created.
param embaas_api_key: Optional[str] = None¶
The API key for the Embaas document extraction API.
param file_path: str [Required]¶
The path to the file to load. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasLoader.html |
param params: langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters = {}¶
Additional parameters to pass to the Embaas document extraction API.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasLoader.html |
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document][source]¶
Load the documents from the file path lazily.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document][source]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasLoader.html |
30d2a514a90a-3 | classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using EmbaasLoader¶
Embaas | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasLoader.html |
fba2fe08fb81-0 | langchain.load.serializable.try_neq_default¶
langchain.load.serializable.try_neq_default(value: Any, key: str, model: BaseModel) → bool[source]¶ | lang/api.python.langchain.com/en/latest/load/langchain.load.serializable.try_neq_default.html |
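Only the signature is given above; a plausible reading, stated here as an assumption, is that the function reports whether a value differs from the declared default of the named field on the model. A sketch under that assumption:
from pydantic import BaseModel  # langchain uses the pydantic v1 API
from langchain.load.serializable import try_neq_default
class Settings(BaseModel):
    temperature: float = 0.7
s = Settings(temperature=0.9)
try_neq_default(s.temperature, "temperature", s)  # True under the assumed semantics
try_neq_default(0.7, "temperature", s)  # False under the assumed semantics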
f1ce8680452f-0 | langchain.load.dump.dumps¶
langchain.load.dump.dumps(obj: Any, *, pretty: bool = False) → str[source]¶
Return a json string representation of an object. | lang/api.python.langchain.com/en/latest/load/langchain.load.dump.dumps.html |
90800be93f83-0 | langchain.load.load.loads¶
langchain.load.load.loads(text: str, *, secrets_map: Optional[Dict[str, str]] = None, valid_namespaces: Optional[List[str]] = None) → Any[source]¶
Revive a LangChain class from a JSON string.
Equivalent to load(json.loads(text)).
Parameters
text – The string to load.
secrets_map – A map of secrets to load.
valid_namespaces – A list of additional namespaces (modules)
to allow to be deserialized.
Returns
Revived LangChain objects. | lang/api.python.langchain.com/en/latest/load/langchain.load.load.loads.html |
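Example (a minimal round-trip sketch combining dumps and loads; it assumes PromptTemplate, a serializable LangChain object, is importable from langchain.prompts and that no secrets are involved):
from langchain.load.dump import dumps
from langchain.load.load import loads
from langchain.prompts import PromptTemplate
prompt = PromptTemplate.from_template("Tell me a joke about {topic}")
text = dumps(prompt, pretty=True)  # JSON string
revived = loads(text)  # back to an equivalent PromptTemplate
assert revived.format(topic="ducks") == prompt.format(topic="ducks")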
3d1fd99a7d7d-0 | langchain.load.serializable.Serializable¶
class langchain.load.serializable.Serializable[source]¶
Bases: BaseModel, ABC
Serializable base class.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. | lang/api.python.langchain.com/en/latest/load/langchain.load.serializable.Serializable.html |
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool[source]¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str][source]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ | lang/api.python.langchain.com/en/latest/load/langchain.load.serializable.Serializable.html |
3d1fd99a7d7d-2 | classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented][source]¶
to_json_not_implemented() → SerializedNotImplemented[source]¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example,{“openai_api_key”: “OPENAI_API_KEY”} | lang/api.python.langchain.com/en/latest/load/langchain.load.serializable.Serializable.html |
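A sketch of how a subclass typically opts in to serialization (the class name and environment-variable id are made up for illustration):
from langchain.load.serializable import Serializable
class MyComponent(Serializable):
    api_key: str
    temperature: float = 0.7
    @classmethod
    def is_lc_serializable(cls) -> bool:
        # Opt in explicitly; the base class returns False by default.
        return True
    @property
    def lc_secrets(self) -> dict:
        # When dumped, the raw api_key is replaced by this secret id.
        return {"api_key": "MY_COMPONENT_API_KEY"}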
f0e6ced90542-0 | langchain.load.serializable.SerializedSecret¶
class langchain.load.serializable.SerializedSecret[source]¶
Serialized secret.
lc: int¶
id: List[str]¶
type: Literal['secret']¶ | lang/api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedSecret.html |
023f79269240-0 | langchain.load.serializable.SerializedConstructor¶
class langchain.load.serializable.SerializedConstructor[source]¶
Serialized constructor.
lc: int¶
id: List[str]¶
type: Literal['constructor']¶
kwargs: Dict[str, Any]¶ | lang/api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedConstructor.html |
186acdb653da-0 | langchain.load.serializable.SerializedNotImplemented¶
class langchain.load.serializable.SerializedNotImplemented[source]¶
Serialized not implemented.
lc: int¶
id: List[str]¶
type: Literal['not_implemented']¶
repr: Optional[str]¶ | lang/api.python.langchain.com/en/latest/load/langchain.load.serializable.SerializedNotImplemented.html |
04fd5307c084-0 | langchain.load.load.Reviver¶
class langchain.load.load.Reviver(secrets_map: Optional[Dict[str, str]] = None, valid_namespaces: Optional[List[str]] = None)[source]¶
Reviver for JSON objects.
Methods
__init__([secrets_map, valid_namespaces])
__init__(secrets_map: Optional[Dict[str, str]] = None, valid_namespaces: Optional[List[str]] = None) → None[source]¶ | lang/api.python.langchain.com/en/latest/load/langchain.load.load.Reviver.html |
5f043825a3fe-0 | langchain.load.load.load¶
langchain.load.load.load(obj: Any, *, secrets_map: Optional[Dict[str, str]] = None, valid_namespaces: Optional[List[str]] = None) → Any[source]¶
Revive a LangChain class from a JSON object. Use this if you already
have a parsed JSON object, e.g. from json.load or orjson.loads.
Parameters
obj – The object to load.
secrets_map – A map of secrets to load.
valid_namespaces – A list of additional namespaces (modules)
to allow to be deserialized.
Returns
Revived LangChain objects. | lang/api.python.langchain.com/en/latest/load/langchain.load.load.load.html |
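A sketch of reviving an object that carries a secret, assuming the OpenAI chat model integration is installed; the key value is a placeholder. Dumping replaces the secret with its id, and secrets_map supplies the real value on revival:
from langchain.chat_models import ChatOpenAI
from langchain.load.dump import dumpd
from langchain.load.load import load
llm = ChatOpenAI(openai_api_key="sk-placeholder", temperature=0)
as_dict = dumpd(llm)  # the key is stored as a secret id, not in clear text
revived = load(as_dict, secrets_map={"OPENAI_API_KEY": "sk-placeholder"})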
12f802aff75a-0 | langchain.load.serializable.BaseSerialized¶
class langchain.load.serializable.BaseSerialized[source]¶
Base class for serialized objects.
lc: int¶
id: List[str]¶ | lang/api.python.langchain.com/en/latest/load/langchain.load.serializable.BaseSerialized.html |
5192a4d0223f-0 | langchain.load.dump.dumpd¶
langchain.load.dump.dumpd(obj: Any) → Dict[str, Any][source]¶
Return a json dict representation of an object. | lang/api.python.langchain.com/en/latest/load/langchain.load.dump.dumpd.html |
046fc542793b-0 | langchain.load.serializable.to_json_not_implemented¶
langchain.load.serializable.to_json_not_implemented(obj: object) → SerializedNotImplemented[source]¶
Serialize a “not implemented” object.
Parameters
obj – object to serialize
Returns
SerializedNotImplemented | lang/api.python.langchain.com/en/latest/load/langchain.load.serializable.to_json_not_implemented.html |
36957705504c-0 | langchain.load.dump.default¶
langchain.load.dump.default(obj: Any) → Any[source]¶
Return a default value for a Serializable object or
a SerializedNotImplemented object. | lang/api.python.langchain.com/en/latest/load/langchain.load.dump.default.html |
6e03e692b64e-0 | langchain.output_parsers.fix.OutputFixingParser¶
class langchain.output_parsers.fix.OutputFixingParser[source]¶
Bases: BaseOutputParser[T]
Wraps a parser and tries to fix parsing errors.
param max_retries: int = 1¶
The maximum number of times to retry the parse.
param parser: langchain.schema.output_parser.BaseOutputParser[langchain.output_parsers.fix.T] [Required]¶
The parser to use to parse the output.
param retry_chain: Any = None¶
The LLMChain to use to retry the completion.
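Example (a usage sketch mirroring the typical pattern: wrap an existing parser via from_llm so that, when the wrapped parser raises, the LLM is asked to repair the malformed completion). ChatOpenAI and PydanticOutputParser are assumed importable from their usual paths, an OpenAI API key is assumed to be configured, and the Actor schema and malformed string are made up:
from typing import List
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import OutputFixingParser, PydanticOutputParser
from pydantic import BaseModel, Field
class Actor(BaseModel):
    name: str = Field(description="name of the actor")
    films: List[str] = Field(description="films they starred in")
base_parser = PydanticOutputParser(pydantic_object=Actor)
fixing_parser = OutputFixingParser.from_llm(llm=ChatOpenAI(temperature=0), parser=base_parser)
# Single quotes make this invalid JSON, so the base parser alone would raise;
# the fixing parser feeds the error back to the LLM and retries.
misformatted = "{'name': 'Tom Hanks', 'films': ['Forrest Gump']}"
actor = fixing_parser.parse(misformatted)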
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: str | langchain.schema.messages.BaseMessage, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → T¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
async aparse(completion: str) → T[source]¶
Parse a single string model output into some structure.
Parameters
text – String output of a language model.
Returns
Structured output.
async aparse_result(result: List[Generation], *, partial: bool = False) → T¶
Parse a list of candidate model Generations into a specific format. | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html |
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html |
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed. | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html |
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
classmethod from_llm(llm: BaseLanguageModel, parser: BaseOutputParser[T], prompt: BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'instructions'], template='Instructions:\n--------------\n{instructions}\n--------------\nCompletion:\n--------------\n{completion}\n--------------\n\nAbove, the Completion did not satisfy the constraints given in the Instructions.\nError:\n--------------\n{error}\n--------------\n\nPlease try again. Please only respond with an answer that satisfies the constraints laid out in the Instructions:'), max_retries: int = 1) → OutputFixingParser[T][source]¶
Create an OutputFixingParser from a language model and a parser.
Parameters
llm – llm to use for fixing
parser – parser to use for parsing
prompt – prompt to use for fixing
max_retries – Maximum number of retries to parse. | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html |
Returns
OutputFixingParser
classmethod from_orm(obj: Any) → Model¶
get_format_instructions() → str[source]¶
Instructions on how the LLM output should be formatted.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None) → T¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable. | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html |
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool[source]¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
parse(completion: str) → T[source]¶
Parse a single string model output into some structure.
Parameters
text – String output of a language model.
Returns
Structured output. | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html |
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
parse_result(result: List[Generation], *, partial: bool = False) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of a language model.
prompt – Input PromptValue.
Returns
Structured output
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke. | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html |
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object. | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html |
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Any¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain.schema.output_parser.T]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶ | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html |
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example,{“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
Examples using OutputFixingParser¶
Retry parser | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html |
fd12f0feb939-0 | langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser¶
class langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser[source]¶
Bases: JsonOutputFunctionsParser
Parse an output as the element of the JSON object under a given key.
param args_only: bool = True¶
Whether to only return the arguments to the function call.
param diff: bool = False¶
In streaming mode, whether to yield diffs between the previous and current
parsed output, or just the current parsed output.
param key_name: str [Required]¶
The name of the key to return.
param strict: bool = False¶
Whether to allow non-JSON-compliant strings.
See: https://docs.python.org/3/library/json.html#encoders-and-decoders
Useful when the parsed output may include unicode characters or new lines.
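Example (a self-contained sketch that simulates an OpenAI-functions response by hand so the parser can be exercised without a model call; the function name, arguments and key_name are made up). In a real chain the parser usually sits after a model bound to functions, e.g. prompt | llm.bind(functions=[...]) | JsonKeyOutputFunctionsParser(key_name=...):
from langchain.output_parsers.openai_functions import JsonKeyOutputFunctionsParser
from langchain.schema import AIMessage, ChatGeneration
# Function-calling models return the call arguments as a JSON string in
# additional_kwargs; build one by hand for illustration.
message = AIMessage(
    content="",
    additional_kwargs={
        "function_call": {
            "name": "extract_people",
            "arguments": '{"people": [{"name": "Ada", "age": 36}]}',
        }
    },
)
parser = JsonKeyOutputFunctionsParser(key_name="people")
people = parser.parse_result([ChatGeneration(message=message)])
# people == [{"name": "Ada", "age": 36}]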
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: str | langchain.schema.messages.BaseMessage, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → T¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
async aparse(text: str) → T¶
Parse a single string model output into some structure.
Parameters | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser.html |
text – String output of a language model.
Returns
Structured output.
async aparse_result(result: List[Generation], *, partial: bool = False) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state. | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser.html |
async atransform(input: AsyncIterator[Union[str, BaseMessage]], config: Optional[RunnableConfig] = None, **kwargs: Any) → AsyncIterator[T]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser.html |
fd12f0feb939-3 | classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
classmethod from_orm(obj: Any) → Model¶
get_format_instructions() → str¶
Instructions on how the LLM output should be formatted.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser.html |
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None) → T¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Is this class serializable? | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser.html |
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
parse(text: str) → Any¶
Parse a single string model output into some structure.
Parameters
text – String output of a language model.
Returns
Structured output.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser.html |
fd12f0feb939-6 | parse_result(result: List[Generation], *, partial: bool = False) → Any[source]¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of a language model.
prompt – Input PromptValue.
Returns
Structured output
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Union[str, BaseMessage]], config: Optional[RunnableConfig] = None, **kwargs: Any) → Iterator[T]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated. | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser.html |
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run. | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser.html |
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Any¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain.schema.output_parser.T]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example,{“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶ | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser.html |
The type of output this runnable produces specified as a pydantic model.
Examples using JsonKeyOutputFunctionsParser¶
MultiVector Retriever
prompt_llm_parser.md | lang/api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.openai_functions.JsonKeyOutputFunctionsParser.html |
104988625f00-0 | langchain.output_parsers.rail_parser.GuardrailsOutputParser¶
class langchain.output_parsers.rail_parser.GuardrailsOutputParser[source]¶
Bases: BaseOutputParser
Parse the output of an LLM call using Guardrails.
param api: Optional[Callable] = None¶
The API to use for the Guardrails object.
param args: Any = None¶
The arguments to pass to the API.
param guard: Any = None¶
The Guardrails object.
param kwargs: Any = None¶
The keyword arguments to pass to the API.
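A heavily hedged sketch: it assumes the guardrails-ai package is installed and that the parser is built from a RAIL spec via a from_rail classmethod (not shown in the excerpt above, so treat the classmethod and its arguments as assumptions); the rail file name is a placeholder:
from langchain.output_parsers.rail_parser import GuardrailsOutputParser
output_parser = GuardrailsOutputParser.from_rail("my_schema.rail")  # placeholder RAIL spec
# Format instructions derived from the RAIL schema can then be injected into a prompt,
# and output_parser.parse(llm_output) validates the model's raw string against it.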
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: str | langchain.schema.messages.BaseMessage, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → T¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
async aparse(text: str) → T¶
Parse a single string model output into some structure.
Parameters
text – String output of a language model.
Returns
Structured output.
async aparse_result(result: List[Generation], *, partial: bool = False) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
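A hedged illustration of the batch semantics using a trivial RunnableLambda; the same call pattern applies to any output parser. The max_concurrency config key and the return_exceptions flag are the documented options, everything else is made up.

```python
from langchain.schema.runnable import RunnableLambda

double = RunnableLambda(lambda x: x * 2)
print(double.batch([1, 2, 3], config={"max_concurrency": 2}))
# -> [2, 4, 6]

# return_exceptions=True keeps going when an individual input fails.
risky = RunnableLambda(lambda x: 1 / x)
print(risky.batch([1, 0, 2], return_exceptions=True))
# -> [1.0, ZeroDivisionError('division by zero'), 0.5]
```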
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
classmethod from_orm(obj: Any) → Model¶
classmethod from_pydantic(output_class: Any, num_reasks: int = 1, api: Optional[Callable] = None, *args: Any, **kwargs: Any) → GuardrailsOutputParser[source]¶
classmethod from_rail(rail_file: str, num_reasks: int = 1, api: Optional[Callable] = None, *args: Any, **kwargs: Any) → GuardrailsOutputParser[source]¶
Create a GuardrailsOutputParser from a rail file.
Parameters
rail_file – a rail file.
num_reasks – number of times to re-ask the question.
api – the API to use for the Guardrails object.
*args – The arguments to pass to the API
**kwargs – The keyword arguments to pass to the API.
Returns
GuardrailsOutputParser
classmethod from_rail_string(rail_str: str, num_reasks: int = 1, api: Optional[Callable] = None, *args: Any, **kwargs: Any) → GuardrailsOutputParser[source]¶
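Where no file is convenient, the parser can be built from an inline rail string instead. The spec below is a deliberately small, illustrative example; the exact rail syntax (field types, prompt suffix directives) depends on the installed guardrails-ai version.

```python
from langchain.output_parsers import GuardrailsOutputParser

# Illustrative rail spec only; adapt to your guardrails-ai version.
rail_spec = """
<rail version="0.1">
<output>
    <string name="movie_title" description="A plausible movie title."/>
</output>
<prompt>
Suggest a movie title for the following idea:

{{idea}}

@complete_json_suffix
</prompt>
</rail>
"""

output_parser = GuardrailsOutputParser.from_rail_string(rail_spec, num_reasks=1)
```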
get_format_instructions() → str[source]¶
Instructions on how the LLM output should be formatted.
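In the usual output-parser pattern, these instructions are interpolated into the prompt as a partial variable. A hedged sketch; the prompt wording and the output_parser construction are assumptions.

```python
from langchain.output_parsers import GuardrailsOutputParser
from langchain.prompts import PromptTemplate

output_parser = GuardrailsOutputParser.from_rail("output.rail")  # assumed spec

prompt = PromptTemplate(
    template="{query}\n\n{format_instructions}",
    input_variables=["query"],
    partial_variables={"format_instructions": output_parser.get_format_instructions()},
)
```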
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None) → T¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
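A hedged sketch of invoke with an explicit config; the rail spec path, the tags and metadata, and the completion string are illustrative, and the parsed result depends entirely on the rail spec behind the parser.

```python
from langchain.output_parsers import GuardrailsOutputParser

output_parser = GuardrailsOutputParser.from_rail("output.rail")  # assumed spec

parsed = output_parser.invoke(
    '{"movie_title": "The Long Context"}',  # made-up model output
    config={"tags": ["guardrails-demo"], "metadata": {"source": "sketch"}},
)
```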
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
parse(text: str) → Dict[source]¶
Parse a single string model output into some structure.
Parameters
text – String output of a language model.
Returns
Structured output.
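invoke ultimately delegates to this method, so parse can also be called directly on raw model text. A minimal sketch with a made-up completion and an assumed rail spec:

```python
from langchain.output_parsers import GuardrailsOutputParser

output_parser = GuardrailsOutputParser.from_rail("output.rail")  # assumed spec

raw_completion = '{"movie_title": "The Long Context"}'  # illustrative model output
structured = output_parser.parse(raw_completion)
# structured is a dict validated against the rail spec's <output> schema.
```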
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
parse_result(result: List[Generation], *, partial: bool = False) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of a language model.
prompt – Input PromptValue.
Returns
Structured output
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
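A hedged sketch: try the Guardrails parser first and fall back to plain json.loads if it raises. The rail spec path and the fallback choice are assumptions for illustration.

```python
import json

from langchain.output_parsers import GuardrailsOutputParser
from langchain.schema.runnable import RunnableLambda

output_parser = GuardrailsOutputParser.from_rail("output.rail")  # assumed spec

# Hypothetical fallback that simply parses the completion as JSON.
plain_json = RunnableLambda(lambda text: json.loads(text))

robust_parser = output_parser.with_fallbacks(
    [plain_json],
    exceptions_to_handle=(Exception,),
)
```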
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
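A hedged sketch of attaching lifecycle listeners; the rail spec path and the print-based callbacks are illustrative, and the run attributes used are those listed above.

```python
from langchain.output_parsers import GuardrailsOutputParser

output_parser = GuardrailsOutputParser.from_rail("output.rail")  # assumed spec


def log_start(run):
    print("parser started:", run.id)


def log_end(run):
    print("parser finished:", run.id, run.end_time)


observed_parser = output_parser.with_listeners(on_start=log_start, on_end=log_end)
```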
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Any¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain.schema.output_parser.T]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.