Document Loaders#
class langchain.document_loaders.IFixitLoader(web_path: str)[source]#
Load iFixit repair guides, device wikis and answers.
iFixit is the largest open repair community on the web. The site contains nearly
100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is
licensed under CC-BY.
This loader will allow you to download the text of a repair guide, text of Q&A’s
and wikis from devices on iFixit using their open APIs and web scraping.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
load_device(url_override: Optional[str] = None, include_guides: bool = True) → List[langchain.schema.Document][source]#
load_guide(url_override: Optional[str] = None) → List[langchain.schema.Document][source]#
load_questions_and_answers(url_override: Optional[str] = None) → List[langchain.schema.Document][source]#
static load_suggestions(query: str = '', doc_type: str = 'all') → List[langchain.schema.Document][source]#
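For instance, a single repair guide can be loaded directly from its URL (a minimal sketch; the guide URL is illustrative):
from langchain.document_loaders import IFixitLoader

loader = IFixitLoader("https://www.ifixit.com/Guide/iPhone+6+Battery+Replacement/29363")
docs = loader.load()  # returns the guide text as Document objects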
class langchain.document_loaders.IMSDbLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that loads IMSDb webpages.
load() → List[langchain.schema.Document][source]#
Load webpage.
class langchain.document_loaders.ImageCaptionLoader(path_images: Union[str, List[str]], blip_processor: str = 'Salesforce/blip-image-captioning-base', blip_model: str = 'Salesforce/blip-image-captioning-base')[source]#
Loader that generates captions for images using a BLIP model.
load() → List[langchain.schema.Document][source]#
Load from a list of image files
class langchain.document_loaders.JSONLoader(file_path: Union[str, pathlib.Path], jq_schema: str, content_key: Optional[str] = None, metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None, text_content: bool = True)[source]#
Loads a JSON file, using the provided jq schema to extract the text into documents.
Example
[{“text”: …}, {“text”: …}, {“text”: …}] -> schema = .[].text
{“key”: [{“text”: …}, {“text”: …}, {“text”: …}]} -> schema = .key[].text
[“”, “”, “”] -> schema = .[]
load() → List[langchain.schema.Document][source]#
Load and return documents from the JSON file.
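For instance, given a file matching the first schema above (a minimal sketch; the file name is illustrative, and the jq package must be installed):
from langchain.document_loaders import JSONLoader

# Extract the "text" field from each element of a top-level JSON array
loader = JSONLoader(file_path="data.json", jq_schema=".[].text")
docs = loader.load()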
class langchain.document_loaders.JoplinLoader(access_token: Optional[str] = None, port: int = 41184, host: str = 'localhost')[source]#
Loader that fetches notes from Joplin.
In order to use this loader, you need to have Joplin running with the
Web Clipper enabled (look for “Web Clipper” in the app settings).
To get the access token, you need to go to the Web Clipper options and
under “Advanced Options” you will find the access token.
You can find more information about the Web Clipper service here:
https://joplinapp.org/clipper/
lazy_load() → Iterator[langchain.schema.Document][source]#
A lazy loader for document content.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.MWDumpLoader(file_path: str, encoding: Optional[str] = 'utf8')[source]#
Load MediaWiki dump from XML file
Example
from langchain.document_loaders import MWDumpLoader
loader = MWDumpLoader(
file_path="myWiki.xml",
encoding="utf8"
)
docs = loader.load()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000, chunk_overlap=0
)
texts = text_splitter.split_documents(docs)
Parameters
file_path (str) – XML local file path
encoding (str, optional) – Charset encoding, defaults to “utf8”
load() → List[langchain.schema.Document][source]#
Load from file path.
class langchain.document_loaders.MastodonTootsLoader(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]#
Mastodon toots loader.
load() → List[langchain.schema.Document][source]#
Load toots into documents.
class langchain.document_loaders.MathpixPDFLoader(file_path: str, processed_file_format: str = 'mmd', max_wait_time_seconds: int = 500, should_clean_pdf: bool = False, **kwargs: Any)[source]#
clean_pdf(contents: str) → str[source]#
property data: dict#
get_processed_pdf(pdf_id: str) → str[source]#
property headers: dict#
load() → List[langchain.schema.Document][source]#
Load data into document objects.
send_pdf() → str[source]#
property url: str#
wait_for_processing(pdf_id: str) → None[source]#
class langchain.document_loaders.ModernTreasuryLoader(resource: str, organization_id: Optional[str] = None, api_key: Optional[str] = None)[source]#
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.NotebookLoader(path: str, include_outputs: bool = False, max_output_length: int = 10, remove_newline: bool = False, traceback: bool = False)[source]#
Loader that loads .ipynb notebook files.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.NotionDBLoader(integration_token: str, database_id: str, request_timeout_sec: Optional[int] = 10)[source]#
Notion DB Loader.
Reads content from pages within a Notion Database.
:param integration_token: Notion integration token.
:type integration_token: str
:param database_id: Notion database id.
:type database_id: str
:param request_timeout_sec: Timeout for Notion requests in seconds.
:type request_timeout_sec: int
load() → List[langchain.schema.Document][source]#
Load documents from the Notion database.
:returns: List of documents.
:rtype: List[Document]
load_page(page_id: str) → langchain.schema.Document[source]#
Read a page.
class langchain.document_loaders.NotionDirectoryLoader(path: str)[source]#
Loader that loads Notion directory dump.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.ObsidianLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]#
Loader that loads Obsidian files from disk.
FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL)#
load() → List[langchain.schema.Document][source]#
Load documents.
pydantic model langchain.document_loaders.OneDriveLoader[source]#
field auth_with_token: bool = False#
field drive_id: str [Required]#
field folder_path: Optional[str] = None#
field object_ids: Optional[List[str]] = None#
field settings: langchain.document_loaders.onedrive._OneDriveSettings [Optional]#
load() → List[langchain.schema.Document][source]#
Loads all supported document files from the specified OneDrive drive
and returns a list of Document objects.
Returns
A list of Document objects
representing the loaded documents.
Return type
List[Document]
Raises
ValueError – If the specified drive ID does not correspond to a drive in the OneDrive storage.
class langchain.document_loaders.OnlinePDFLoader(file_path: str)[source]#
Loader that loads online PDFs.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.OutlookMessageLoader(file_path: str)[source]#
Loader that loads Outlook Message files using extract_msg.
TeamMsgExtractor/msg-extractor
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.PDFMinerLoader(file_path: str)[source]#
Loader that uses PDFMiner to load PDF files.
lazy_load() → Iterator[langchain.schema.Document][source]#
Lazily load documents.
load() → List[langchain.schema.Document][source]#
Eagerly load the content.
class langchain.document_loaders.PDFMinerPDFasHTMLLoader(file_path: str)[source]#
Loader that uses PDFMiner to load PDF files as HTML content.
load() → List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.PDFPlumberLoader(file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None)[source]#
Loader that uses pdfplumber to load PDF files.
load() → List[langchain.schema.Document][source]#
Load file.
langchain.document_loaders.PagedPDFSplitter#
alias of langchain.document_loaders.pdf.PyPDFLoader
class langchain.document_loaders.PlaywrightURLLoader(urls: List[str], continue_on_failure: bool = True, headless: bool = True, remove_selectors: Optional[List[str]] = None)[source]#
Loader that uses Playwright to load a page, and unstructured to load the html.
This is useful for loading pages that require javascript to render.
urls#
List of URLs to load.
Type
List[str]
continue_on_failure#
If True, continue loading other URLs on failure.
Type
bool
headless#
If True, the browser will run in headless mode.
Type
bool
load() → List[langchain.schema.Document][source]#
Load the specified URLs using Playwright and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
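A minimal usage sketch (the URL and selectors are illustrative; Playwright and its browsers must be installed):
from langchain.document_loaders import PlaywrightURLLoader

loader = PlaywrightURLLoader(
    urls=["https://example.com"],
    remove_selectors=["header", "footer"],  # strip page chrome before extraction
)
docs = loader.load()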
class langchain.document_loaders.PsychicLoader(api_key: str, connector_id: str, connection_id: str)[source]#
Loader that loads documents from Psychic.dev.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.PyMuPDFLoader(file_path: str)[source]#
Loader that uses PyMuPDF to load PDF files.
load(**kwargs: Optional[Any]) → List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.PyPDFDirectoryLoader(path: str, glob: str = '**/[!.]*.pdf', silent_errors: bool = False, load_hidden: bool = False, recursive: bool = False)[source]#
Loads a directory of PDF files with pypdf and chunks at character level.
Loader also stores page numbers in metadatas.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.PyPDFLoader(file_path: str)[source]#
Loads a PDF with pypdf and chunks at character level.
Loader also stores page numbers in metadatas.
lazy_load() → Iterator[langchain.schema.Document][source]#
Lazy load given path as pages.
load() → List[langchain.schema.Document][source]#
Load given path as pages.
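A minimal usage sketch (the file name is illustrative):
from langchain.document_loaders import PyPDFLoader

loader = PyPDFLoader("example.pdf")
pages = loader.load()  # one Document per page, page number stored in metadata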
class langchain.document_loaders.PyPDFium2Loader(file_path: str)[source]#
Loads a PDF with pypdfium2 and chunks at character level.
lazy_load() → Iterator[langchain.schema.Document][source]#
Lazy load given path as pages.
load() → List[langchain.schema.Document][source]#
Load given path as pages.
class langchain.document_loaders.PythonLoader(file_path: str)[source]#
Load Python files, respecting any non-default encoding if specified.
class langchain.document_loaders.ReadTheDocsLoader(path: Union[str, pathlib.Path], encoding: Optional[str] = None, errors: Optional[str] = None, custom_html_tag: Optional[Tuple[str, dict]] = None, **kwargs: Optional[Any])[source]#
Loader that loads ReadTheDocs documentation directory dump.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.RedditPostsLoader(client_id: str, client_secret: str, user_agent: str, search_queries: Sequence[str], mode: str, categories: Sequence[str] = ['new'], number_posts: Optional[int] = 10)[source]#
Reddit posts loader.
Read posts on a subreddit.
First you need to go to https://www.reddit.com/prefs/apps/ and create your application.
load() → List[langchain.schema.Document][source]#
Load reddits.
class langchain.document_loaders.RoamLoader(path: str)[source]#
Loader that loads Roam files from disk.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.S3DirectoryLoader(bucket: str, prefix: str = '')[source]#
Loading logic for loading documents from s3.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.S3FileLoader(bucket: str, key: str)[source]#
Loading logic for loading documents from s3.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.SRTLoader(file_path: str)[source]#
Loader for .srt (subtitle) files.
load() → List[langchain.schema.Document][source]#
Load using pysrt file.
class langchain.document_loaders.SeleniumURLLoader(urls: List[str], continue_on_failure: bool = True, browser: Literal['chrome', 'firefox'] = 'chrome', binary_location: Optional[str] = None, executable_path: Optional[str] = None, headless: bool = True, arguments: List[str] = [])[source]#
Loader that uses Selenium to load a page, and unstructured to load the html.
This is useful for loading pages that require javascript to render.
urls#
List of URLs to load.
Type
List[str]
continue_on_failure#
If True, continue loading other URLs on failure.
Type
bool
browser#
The browser to use, either ‘chrome’ or ‘firefox’.
Type
str
binary_location#
The location of the browser binary.
Type
Optional[str]
executable_path#
The path to the browser executable.
Type
Optional[str]
headless#
If True, the browser will run in headless mode.
Type
bool
arguments#
List of arguments to pass to the browser.
Type
List[str]
load() → List[langchain.schema.Document][source]#
Load the specified URLs using Selenium and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
class langchain.document_loaders.SitemapLoader(web_path: str, filter_urls: Optional[List[str]] = None, parsing_function: Optional[Callable] = None, blocksize: Optional[int] = None, blocknum: int = 0, meta_function: Optional[Callable] = None, is_local: bool = False)[source]#
Loader that fetches a sitemap and loads those URLs.
load() → List[langchain.schema.Document][source]#
Load sitemap.
parse_sitemap(soup: Any) → List[dict][source]#
Parse sitemap xml and load into a list of dicts.
class langchain.document_loaders.SlackDirectoryLoader(zip_path: str, workspace_url: Optional[str] = None)[source]#
Loader for loading documents from a Slack directory dump.
load() → List[langchain.schema.Document][source]#
Load and return documents from the Slack directory dump.
class langchain.document_loaders.SpreedlyLoader(access_token: str, resource: str)[source]#
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.StripeLoader(resource: str, access_token: Optional[str] = None)[source]#
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.TelegramChatApiLoader(chat_entity: Optional[EntityLike] = None, api_id: Optional[int] = None, api_hash: Optional[str] = None, username: Optional[str] = None, file_path: str = 'telegram_data.json')[source]#
Loader that loads Telegram chat json directory dump.
async fetch_data_from_telegram() → None[source]#
Fetch data from Telegram API and save it as a JSON file.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.TelegramChatFileLoader(path: str)[source]#
Loader that loads Telegram chat json directory dump.
load() → List[langchain.schema.Document][source]#
Load documents.
langchain.document_loaders.TelegramChatLoader#
alias of langchain.document_loaders.telegram.TelegramChatFileLoader
class langchain.document_loaders.TextLoader(file_path: str, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]#
Load text files.
Parameters
file_path – Path to the file to load.
encoding – File encoding to use. If None, the file will be loaded
with the default system encoding.
autodetect_encoding – Whether to try to autodetect the file encoding
if the specified encoding fails.
load() → List[langchain.schema.Document][source]#
Load from file path.
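A minimal usage sketch (the file name is illustrative):
from langchain.document_loaders import TextLoader

loader = TextLoader("example.txt", autodetect_encoding=True)
docs = loader.load()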
class langchain.document_loaders.ToMarkdownLoader(url: str, api_key: str)[source]#
Loader that loads HTML to markdown using 2markdown.
lazy_load() → Iterator[langchain.schema.Document][source]#
Lazily load the file.
load() → List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.TomlLoader(source: Union[str, pathlib.Path])[source]#
A TOML document loader that inherits from the BaseLoader class.
This class can be initialized with either a single source file or a source
directory containing TOML files.
lazy_load() → Iterator[langchain.schema.Document][source]#
Lazily load the TOML documents from the source file or directory.
load() → List[langchain.schema.Document][source]#
Load and return all documents.
class langchain.document_loaders.TwitterTweetLoader(auth_handler: Union[OAuthHandler, OAuth2BearerHandler], twitter_users: Sequence[str], number_tweets: Optional[int] = 100)[source]#
Twitter tweets loader.
Read tweets from the given Twitter handles.
First you need to go to
https://developer.twitter.com/en/docs/twitter-api/getting-started/getting-access-to-the-twitter-api
to get your token, and create a v2 version of the app.
classmethod from_bearer_token(oauth2_bearer_token: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) → langchain.document_loaders.twitter.TwitterTweetLoader[source]#
Create a TwitterTweetLoader from OAuth2 bearer token.
classmethod from_secrets(access_token: str, access_token_secret: str, consumer_key: str, consumer_secret: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) → langchain.document_loaders.twitter.TwitterTweetLoader[source]#
Create a TwitterTweetLoader from access tokens and secrets.
load() → List[langchain.schema.Document][source]#
Load tweets.
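A minimal usage sketch with a bearer token (the token and handle are placeholders):
from langchain.document_loaders import TwitterTweetLoader

loader = TwitterTweetLoader.from_bearer_token(
    oauth2_bearer_token="YOUR_BEARER_TOKEN",  # placeholder
    twitter_users=["hwchase17"],              # illustrative handle
    number_tweets=50,
)
docs = loader.load()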
class langchain.document_loaders.UnstructuredAPIFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]#
Loader that uses the unstructured web API to load file IO objects.
class langchain.document_loaders.UnstructuredAPIFileLoader(file_path: Union[str, List[str]] = '', mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]#
Loader that uses the unstructured web API to load files.
class langchain.document_loaders.UnstructuredEPubLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load epub files.
class langchain.document_loaders.UnstructuredEmailLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load email files.
class langchain.document_loaders.UnstructuredFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load file IO objects.
class langchain.document_loaders.UnstructuredFileLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load files.
class langchain.document_loaders.UnstructuredHTMLLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load HTML files.
class langchain.document_loaders.UnstructuredImageLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load image files, such as PNGs and JPGs.
class langchain.document_loaders.UnstructuredMarkdownLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load markdown files.
class langchain.document_loaders.UnstructuredODTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load open office ODT files.
class langchain.document_loaders.UnstructuredPDFLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load PDF files.
class langchain.document_loaders.UnstructuredPowerPointLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load powerpoint files.
class langchain.document_loaders.UnstructuredRTFLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load rtf files.
class langchain.document_loaders.UnstructuredURLLoader(urls: List[str], continue_on_failure: bool = True, mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load HTML files.
load() → List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.UnstructuredWordDocumentLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load word documents.
class langchain.document_loaders.WeatherDataLoader(client: langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper, places: Sequence[str])[source]#
Weather Reader.
Reads the forecast & current weather of any location using OpenWeatherMap’s free
API. Check out https://openweathermap.org/appid for more on how to generate a free
OpenWeatherMap API key.
classmethod from_params(places: Sequence[str], *, openweathermap_api_key: Optional[str] = None) → langchain.document_loaders.weather.WeatherDataLoader[source]#
lazy_load() → Iterator[langchain.schema.Document][source]#
Lazily load weather data for the given locations.
load() → List[langchain.schema.Document][source]#
Load weather data for the given locations.
class langchain.document_loaders.WebBaseLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that uses urllib and Beautiful Soup to load webpages.
aload() → List[langchain.schema.Document][source]#
Load text from the urls in web_path async into Documents.
default_parser: str = 'html.parser'#
Default parser to use for BeautifulSoup.
async fetch_all(urls: List[str]) → Any[source]#
Fetch all urls concurrently with rate limiting.
load() → List[langchain.schema.Document][source]#
Load text from the url(s) in web_path.
requests_per_second: int = 2#
Max number of concurrent requests to make.
scrape(parser: Optional[str] = None) → Any[source]#
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any][source]#
Fetch all urls, then return soups for all results.
property web_path: str#
web_paths: List[str]#
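A minimal usage sketch (the URL is illustrative):
from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://example.com")
docs = loader.load()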
class langchain.document_loaders.WhatsAppChatLoader(path: str)[source]#
Loader that loads WhatsApp messages text file.
load() → List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.WikipediaLoader(query: str, lang: str = 'en', load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False)[source]#
Loads a query result from www.wikipedia.org into a list of Documents.
The hard limit on the number of downloaded Documents is 300 for now.
Each wiki page represents one Document.
load() → List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.YoutubeLoader(video_id: str, add_video_info: bool = False, language: str = 'en', continue_on_failure: bool = False)[source]#
Loader that loads Youtube transcripts.
static extract_video_id(youtube_url: str) → str[source]#
Extract video ID from common YouTube URLs.
classmethod from_youtube_url(youtube_url: str, **kwargs: Any) → langchain.document_loaders.youtube.YoutubeLoader[source]#
Given a YouTube URL, construct a loader for the video.
load() → List[langchain.schema.Document][source]#
Load documents.
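A minimal usage sketch (the video URL is a placeholder):
from langchain.document_loaders import YoutubeLoader

loader = YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=VIDEO_ID")
docs = loader.load()  # the transcript, as Document objects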
Retrievers#
pydantic model langchain.retrievers.ArxivRetriever[source]#
It is effectively a wrapper for ArxivAPIWrapper.
It wraps load() to get_relevant_documents().
It uses all ArxivAPIWrapper arguments without any change.
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
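A minimal usage sketch (load_max_docs is an ArxivAPIWrapper argument, passed through unchanged; the query is illustrative):
from langchain.retrievers import ArxivRetriever

retriever = ArxivRetriever(load_max_docs=2)
docs = retriever.get_relevant_documents("quantum computing")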
pydantic model langchain.retrievers.AzureCognitiveSearchRetriever[source]#
Wrapper around Azure Cognitive Search.
field aiosession: Optional[aiohttp.client.ClientSession] = None#
ClientSession, in case we want to reuse connection for better performance.
field api_key: str = ''#
API Key. Both Admin and Query keys work, but for reading data it’s
recommended to use a Query key.
field api_version: str = '2020-06-30'#
API version
field content_key: str = 'content'#
Key in a retrieved result to set as the Document page_content.
field index_name: str = ''#
Name of Index inside Azure Cognitive Search service
field service_name: str = ''#
Name of Azure Cognitive Search service
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.ChatGPTPluginRetriever[source]#
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field bearer_token: str [Required]#
field filter: Optional[dict] = None#
field top_k: int = 3#
field url: str [Required]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.ContextualCompressionRetriever[source]#
Retriever that wraps a base retriever and compresses the results.
field base_compressor: langchain.retrievers.document_compressors.base.BaseDocumentCompressor [Required]#
Compressor for compressing retrieved documents.
field base_retriever: langchain.schema.BaseRetriever [Required]#
Base Retriever to use for getting relevant documents.
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
Sequence of relevant documents
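A minimal wiring sketch (assumes an existing vectorstore; LLMChainExtractor comes from the Document Compressors module referenced above):
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor

compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))
retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=vectorstore.as_retriever(),  # assumes `vectorstore` already exists
)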
class langchain.retrievers.DataberryRetriever(datastore_url: str, top_k: Optional[int] = None, api_key: Optional[str] = None)[source]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
api_key: Optional[str]#
datastore_url: str#
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
top_k: Optional[int]#
class langchain.retrievers.ElasticSearchBM25Retriever(client: Any, index_name: str)[source]#
Wrapper around Elasticsearch using BM25 as a retrieval method.
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticVectorSearch constructor as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the “Deployments” page.
To obtain your Elastic Cloud password for the default “elastic” user:
Log in to the Elastic Cloud console at https://cloud.elastic.co
Go to “Security” > “Users”
Locate the “elastic” user and click “Edit”
Click “Reset password”
Follow the prompts to reset the password
The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
add_texts(texts: Iterable[str], refresh_indices: bool = True) → List[str][source]#
Run more texts through the embeddings and add to the retriever.
Parameters
texts – Iterable of strings to add to the retriever.
refresh_indices – bool to refresh ElasticSearch indices
Returns
List of ids from adding the texts into the retriever.
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
classmethod create(elasticsearch_url: str, index_name: str, k1: float = 2.0, b: float = 0.75) → langchain.retrievers.elastic_search_bm25.ElasticSearchBM25Retriever[source]#
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.KNNRetriever[source]#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
field index: Any = None#
field k: int = 4#
field relevancy_threshold: Optional[float] = None#
field texts: List[str] [Required]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
classmethod from_texts(texts: List[str], embeddings: langchain.embeddings.base.Embeddings, **kwargs: Any) → langchain.retrievers.knn.KNNRetriever[source]#
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
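A minimal usage sketch (the texts and embedding model are illustrative):
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import KNNRetriever

retriever = KNNRetriever.from_texts(["foo", "bar", "world hello"], OpenAIEmbeddings())
docs = retriever.get_relevant_documents("foo")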
class langchain.retrievers.MetalRetriever(client: Any, params: Optional[dict] = None)[source]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.PineconeHybridSearchRetriever[source]#
field alpha: float = 0.5#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
field index: Any = None#
field sparse_encoder: Any = None#
field top_k: int = 4#
add_texts(texts: List[str], ids: Optional[List[str]] = None, metadatas: Optional[List[dict]] = None) → None[source]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.RemoteLangChainRetriever[source]#
field headers: Optional[dict] = None#
field input_key: str = 'message'#
field metadata_key: str = 'metadata'#
field page_content_key: str = 'page_content'#
field response_key: str = 'response'#
field url: str [Required]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.SVMRetriever[source]#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
field index: Any = None#
field k: int = 4#
field relevancy_threshold: Optional[float] = None#
field texts: List[str] [Required]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
classmethod from_texts(texts: List[str], embeddings: langchain.embeddings.base.Embeddings, **kwargs: Any) → langchain.retrievers.svm.SVMRetriever[source]#
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
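Usage mirrors KNNRetriever (a minimal sketch; the texts and embedding model are illustrative):
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import SVMRetriever

retriever = SVMRetriever.from_texts(["foo", "bar", "world hello"], OpenAIEmbeddings())
docs = retriever.get_relevant_documents("foo")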
pydantic model langchain.retrievers.SelfQueryRetriever[source]#
Retriever that wraps around a vector store and uses an LLM to generate
the vector store queries.
field llm_chain: langchain.chains.llm.LLMChain [Required]#
The LLMChain for generating the vector store queries.
field search_kwargs: dict [Optional]#
Keyword arguments to pass in to the vector store search.
field search_type: str = 'similarity'#
The search type to perform on the vector store.
field structured_query_translator: langchain.chains.query_constructor.ir.Visitor [Required]#
Translator for turning internal query language into vectorstore search params.
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
The underlying vector store from which documents will be retrieved.
field verbose: bool = False#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, document_contents: str, metadata_field_info: List[langchain.chains.query_constructor.schema.AttributeInfo], structured_query_translator: Optional[langchain.chains.query_constructor.ir.Visitor] = None, chain_kwargs: Optional[Dict] = None, enable_limit: bool = False, **kwargs: Any) → langchain.retrievers.self_query.base.SelfQueryRetriever[source]#
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.TFIDFRetriever[source]#
field docs: List[langchain.schema.Document] [Required]#
field k: int = 4#
field tfidf_array: Any = None#
field vectorizer: Any = None#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
classmethod from_documents(documents: Iterable[langchain.schema.Document], *, tfidf_params: Optional[Dict[str, Any]] = None, **kwargs: Any) → langchain.retrievers.tfidf.TFIDFRetriever[source]#
classmethod from_texts(texts: Iterable[str], metadatas: Optional[Iterable[dict]] = None, tfidf_params: Optional[Dict[str, Any]] = None, **kwargs: Any) → langchain.retrievers.tfidf.TFIDFRetriever[source]#
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
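A minimal usage sketch (no embedding model is needed, since scoring is TF-IDF based; the texts are illustrative):
from langchain.retrievers import TFIDFRetriever

retriever = TFIDFRetriever.from_texts(["foo", "bar", "world hello"])
docs = retriever.get_relevant_documents("foo")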
pydantic model langchain.retrievers.TimeWeightedVectorStoreRetriever[source]#
Retriever combining embedding similarity with recency.
field decay_rate: float = 0.01#
The exponential decay factor used as (1.0-decay_rate)**(hrs_passed).
field default_salience: Optional[float] = None#
The salience to assign memories not retrieved from the vector store.
None assigns no salience to documents not fetched from the vector store.
field k: int = 4#
The maximum number of documents to retrieve in a given call.
field memory_stream: List[langchain.schema.Document] [Optional]#
The memory_stream of documents to search through.
field other_score_keys: List[str] = []#
Other keys in the metadata to factor into the score, e.g. ‘importance’.
field search_kwargs: dict [Optional]#
Keyword arguments to pass to the vectorstore similarity search.
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
The vectorstore to store documents and determine salience.
async aadd_documents(documents: List[langchain.schema.Document], **kwargs: Any) → List[str][source]#
Add documents to vectorstore.
add_documents(documents: List[langchain.schema.Document], **kwargs: Any) → List[str][source]#
Add documents to vectorstore.
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Return documents that are relevant to the query.
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Return documents that are relevant to the query.
get_salient_docs(query: str) → Dict[int, Tuple[langchain.schema.Document, float]][source]#
Return documents that are salient to the query.
class langchain.retrievers.VespaRetriever(app: Vespa, body: Dict, content_field: str, metadata_fields: Optional[Sequence[str]] = None)[source]#
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
classmethod from_params(url: str, content_field: str, *, k: Optional[int] = None, metadata_fields: Union[Sequence[str], Literal['*']] = (), sources: Optional[Union[Sequence[str], Literal['*']]] = None, _filter: Optional[str] = None, yql: Optional[str] = None, **kwargs: Any) → langchain.retrievers.vespa_retriever.VespaRetriever[source]#
Instantiate retriever from params.
Parameters
url (str) – Vespa app URL.
content_field (str) – Field in results to return as Document page_content.
k (Optional[int]) – Number of Documents to return. Defaults to None.
metadata_fields (Sequence[str] or "*") – Fields in results to include in
document metadata. Defaults to empty tuple ().
sources (Sequence[str] or "*" or None) – Sources to retrieve
from. Defaults to None.
_filter (Optional[str]) – Document filter condition expressed in YQL.
Defaults to None.
yql (Optional[str]) – Full YQL query to be used. Should not be specified
if _filter or sources are specified. Defaults to None.
kwargs (Any) – Keyword arguments added to query body.
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents_with_filter(query: str, *, _filter: Optional[str] = None) → List[langchain.schema.Document][source]#
class langchain.retrievers.WeaviateHybridSearchRetriever(client: Any, index_name: str, text_key: str, alpha: float = 0.5, k: int = 4, attributes: Optional[List[str]] = None, create_schema_if_missing: bool = True)[source]#
class Config[source]#
Configuration for this pydantic object.
arbitrary_types_allowed = True#
extra = 'forbid'#
add_documents(docs: List[langchain.schema.Document], **kwargs: Any) → List[str][source]#
Upload documents to Weaviate.
async aget_relevant_documents(query: str, where_filter: Optional[Dict[str, object]] = None) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str, where_filter: Optional[Dict[str, object]] = None) → List[langchain.schema.Document][source]#
Look up similar documents in Weaviate.
pydantic model langchain.retrievers.WikipediaRetriever[source]#
It is effectively a wrapper for WikipediaAPIWrapper.
It wraps load() to get_relevant_documents().
It uses all WikipediaAPIWrapper arguments without any change.
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
class langchain.retrievers.ZepRetriever(session_id: str, url: str, top_k: Optional[int] = None)[source]#
A Retriever implementation for the Zep long-term memory store. Search your
user’s long-term chat history with Zep.
Note: You will need to provide the user’s session_id to use this retriever.
More on Zep:
Zep provides long-term conversation storage for LLM apps. The server stores,
summarizes, embeds, indexes, and enriches conversational AI chat
histories, and exposes them via simple, low-latency APIs.
For server installation instructions, see:
https://getzep.github.io/deployment/quickstart/
async aget_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) → List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query – string to find relevant documents for
Returns
List of relevant documents
Text Splitter#
Functionality for splitting text.
class langchain.text_splitter.CharacterTextSplitter(separator: str = '\n\n', **kwargs: Any)[source]#
Implementation of splitting text that looks at characters.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
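A minimal usage sketch (chunk sizes are illustrative):
from langchain.text_splitter import CharacterTextSplitter

splitter = CharacterTextSplitter(separator="\n\n", chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_text(some_long_text)  # assumes `some_long_text` is a str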
class langchain.text_splitter.LatexTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Latex-formatted layout elements.
class langchain.text_splitter.MarkdownTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Markdown-formatted headings.
class langchain.text_splitter.NLTKTextSplitter(separator: str = '\n\n', **kwargs: Any)[source]#
Implementation of splitting text that looks at sentences using NLTK.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.PythonCodeTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Python syntax.
class langchain.text_splitter.RecursiveCharacterTextSplitter(separators: Optional[List[str]] = None, **kwargs: Any)[source]#
Implementation of splitting text that looks at characters.
Recursively tries to split by different characters to find one
that works.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.SpacyTextSplitter(separator: str = '\n\n', pipeline: str = 'en_core_web_sm', **kwargs: Any)[source]#
Implementation of splitting text that looks at sentences using Spacy.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.TextSplitter(chunk_size: int = 4000, chunk_overlap: int = 200, length_function: typing.Callable[[str], int] = <built-in function len>)[source]#
Interface for splitting text into chunks.
async atransform_documents(documents: Sequence[langchain.schema.Document], **kwargs: Any) → Sequence[langchain.schema.Document][source]#
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[langchain.schema.Document][source]#
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → langchain.text_splitter.TextSplitter[source]#
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → langchain.text_splitter.TS[source]#
Text splitter that uses tiktoken encoder to count length.
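For instance, to size chunks by token count rather than characters (a minimal sketch; requires the tiktoken package):
from langchain.text_splitter import CharacterTextSplitter

splitter = CharacterTextSplitter.from_tiktoken_encoder(encoding_name="gpt2", chunk_size=100)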
split_documents(documents: Iterable[langchain.schema.Document]) → List[langchain.schema.Document][source]#
Split documents.
abstract split_text(text: str) → List[str][source]#
Split text into multiple components.
transform_documents(documents: Sequence[langchain.schema.Document], **kwargs: Any) → Sequence[langchain.schema.Document][source]#
Transform sequence of documents by splitting them.
class langchain.text_splitter.TokenTextSplitter(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any)[source]#
Implementation of splitting text that looks at tokens.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
Output Parsers#
pydantic model langchain.output_parsers.CommaSeparatedListOutputParser[source]#
Parse out comma-separated lists.
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → List[str][source]#
Parse the output of an LLM call.
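A minimal usage sketch:
from langchain.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
parser.parse("red, green, blue")  # -> ["red", "green", "blue"]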
pydantic model langchain.output_parsers.GuardrailsOutputParser[source]#
field guard: Any = None#
classmethod from_rail(rail_file: str, num_reasks: int = 1) → langchain.output_parsers.rail_parser.GuardrailsOutputParser[source]#
classmethod from_rail_string(rail_str: str, num_reasks: int = 1) → langchain.output_parsers.rail_parser.GuardrailsOutputParser[source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → Dict[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
pydantic model langchain.output_parsers.ListOutputParser[source]#
Class to parse the output of an LLM call to a list.
abstract parse(text: str) → List[str][source]#
Parse the output of an LLM call.
pydantic model langchain.output_parsers.OutputFixingParser[source]#
Wraps a parser and tries to fix parsing errors.
field parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T] [Required]#
field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'instructions'], output_parser=None, partial_variables={}, template='Instructions:\n--------------\n{instructions}\n--------------\nCompletion:\n--------------\n{completion}\n--------------\n\nAbove, the Completion did not satisfy the constraints given in the Instructions.\nError:\n--------------\n{error}\n--------------\n\nPlease try again. Please only respond with an answer that satisfies the constraints laid out in the Instructions:', template_format='f-string', validate_template=True)) → langchain.output_parsers.fix.OutputFixingParser[langchain.output_parsers.fix.T][source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(completion: str) → langchain.output_parsers.fix.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
pydantic model langchain.output_parsers.PydanticOutputParser[source]#
field pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → langchain.output_parsers.pydantic.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
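A minimal usage sketch (the Joke model is illustrative):
from pydantic import BaseModel, Field
from langchain.output_parsers import PydanticOutputParser

class Joke(BaseModel):
    setup: str = Field(description="the setup of the joke")
    punchline: str = Field(description="the punchline")

parser = PydanticOutputParser(pydantic_object=Joke)
format_instructions = parser.get_format_instructions()  # inject into the prompt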
pydantic model langchain.output_parsers.RegexDictParser[source]#
Class to parse the output into a dictionary.
field no_update_value: Optional[str] = None#
field output_key_to_format: Dict[str, str] [Required]#
field regex_pattern: str = "{}:\\s?([^.'\\n']*)\\.?"#
parse(text: str) → Dict[str, str][source]#
Parse the output of an LLM call.
pydantic model langchain.output_parsers.RegexParser[source]#
Class to parse the output into a dictionary.
field default_output_key: Optional[str] = None#
field output_keys: List[str] [Required]#
field regex: str [Required]#
parse(text: str) → Dict[str, str][source]#
Parse the output of an LLM call.
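A minimal usage sketch (the pattern and key are illustrative):
from langchain.output_parsers import RegexParser

parser = RegexParser(regex=r"Answer:\s*(.*)", output_keys=["answer"])
parser.parse("Answer: 42")  # -> {"answer": "42"}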
pydantic model langchain.output_parsers.ResponseSchema[source]#
field description: str [Required]#
field name: str [Required]#
pydantic model langchain.output_parsers.RetryOutputParser[source]#
Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt and the completion to another
LLM, and telling it the completion did not satisfy criteria in the prompt.
field parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]#
field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nPlease try again:', template_format='f-string', validate_template=True)) → langchain.output_parsers.retry.RetryOutputParser[langchain.output_parsers.retry.T][source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(completion: str) → langchain.output_parsers.retry.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
parse_with_prompt(completion: str, prompt_value: langchain.schema.PromptValue) → langchain.output_parsers.retry.T[source]#
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – output of language model
prompt – prompt value
Returns
structured output
pydantic model langchain.output_parsers.RetryWithErrorOutputParser[source]#
Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt, the completion, AND the error
that was raised to another language model and telling it that the completion
did not work, and raised the given error. Differs from RetryOutputParser
in that this implementation provides the error that was raised back to the
LLM, which in theory should give it more information on how to fix it.
field parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]#
field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nDetails: {error}\nPlease try again:', template_format='f-string', validate_template=True)) → langchain.output_parsers.retry.RetryWithErrorOutputParser[langchain.output_parsers.retry.T][source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(completion: str) → langchain.output_parsers.retry.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed to be the output of a language model)
and parses it into some structure.
Parameters
completion – output of the language model
Returns
structured output
parse_with_prompt(completion: str, prompt_value: langchain.schema.PromptValue) → langchain.output_parsers.retry.T[source]#
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – output of language model
prompt – prompt value
Returns
structured output
pydantic model langchain.output_parsers.StructuredOutputParser[source]#
field response_schemas: List[langchain.output_parsers.structured.ResponseSchema] [Required]#
classmethod from_response_schemas(response_schemas: List[langchain.output_parsers.structured.ResponseSchema]) → langchain.output_parsers.structured.StructuredOutputParser[source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → Any[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed to be the output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
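A usage sketch; the schema names and the sample completion are illustrative:
from langchain.output_parsers import ResponseSchema, StructuredOutputParser

response_schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(name="source", description="source used to answer the question"),
]
parser = StructuredOutputParser.from_response_schemas(response_schemas)

# Inject into a prompt so the LLM knows the expected JSON layout.
format_instructions = parser.get_format_instructions()

# Given a completion that follows the instructions, parse() returns a dict.
output = parser.parse('```json\n{"answer": "Paris", "source": "wikipedia"}\n```')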
Chat Models#
pydantic model langchain.chat_models.AzureChatOpenAI[source]#
Wrapper around Azure OpenAI Chat Completion API. To use this class you
must have a deployed model on Azure OpenAI. Use deployment_name in the
constructor to refer to the “Model deployment name” in the Azure portal.
In addition, you should have the openai python package installed, and the
following environment variables set, or passed to the constructor in lower case:
- OPENAI_API_TYPE (default: azure)
- OPENAI_API_KEY
- OPENAI_API_BASE
- OPENAI_API_VERSION
- OPENAI_PROXY
For example, if you have gpt-35-turbo deployed, with the deployment name
35-turbo-dev, the constructor should look like:
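AzureChatOpenAI(
    deployment_name="35-turbo-dev",
    openai_api_version="2023-05-15",  # illustrative; use the version your deployment supports
)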
Be aware the API version may change.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
field deployment_name: str = ''#
field openai_api_base: str = ''#
Base URL path for API requests,
leave blank if not using a proxy or service emulator.
field openai_api_key: str = ''#
field openai_api_type: str = 'azure'#
field openai_api_version: str = ''#
field openai_organization: str = ''#
field openai_proxy: str = ''#
pydantic model langchain.chat_models.ChatAnthropic[source]#
Wrapper around Anthropic’s large language model.
To use, you should have the anthropic python package installed, and the
environment variable ANTHROPIC_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
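A minimal sketch, assuming the anthropic python package is installed; the key value is a placeholder:
from langchain.chat_models import ChatAnthropic
chat = ChatAnthropic(anthropic_api_key="my-api-key")  # placeholder key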
get_num_tokens(text: str) → int[source]#
Calculate number of tokens.
pydantic model langchain.chat_models.ChatGooglePalm[source]#
Wrapper around Google’s PaLM Chat API.
To use, you must have the google.generativeai Python package installed and
either:
The GOOGLE_API_KEY environment variable set with your API key, or
Pass your API key using the google_api_key kwarg to the ChatGooglePalm
constructor.
Example
from langchain.chat_models import ChatGooglePalm
chat = ChatGooglePalm()
field google_api_key: Optional[str] = None#
field model_name: str = 'models/chat-bison-001'#
Model name to use.
field n: int = 1#
Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated.
field temperature: Optional[float] = None#
Run inference with this temperature. Must be in the closed
interval [0.0, 1.0].
field top_k: Optional[int] = None#
Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive.
field top_p: Optional[float] = None#
Decode using nucleus sampling: consider the smallest set of tokens whose
probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].
pydantic model langchain.chat_models.ChatOpenAI[source]#
Wrapper around OpenAI Chat large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.chat_models import ChatOpenAI
openai = ChatOpenAI(model_name="gpt-3.5-turbo")
field max_retries: int = 6#
Maximum number of retries to make when generating.
field max_tokens: Optional[int] = None#
Maximum number of tokens to generate.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-3.5-turbo' (alias 'model')#
Model name to use.
field n: int = 1#
Number of chat completions to generate for each prompt.
field openai_api_base: Optional[str] = None#
Base URL path for API requests,
leave blank if not using a proxy or service emulator.
field openai_api_key: Optional[str] = None#
field openai_organization: Optional[str] = None#
field openai_proxy: Optional[str] = None#
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to OpenAI completion API. Default is 600 seconds.
field streaming: bool = False#
Whether to stream the results or not.
field temperature: float = 0.7#
What sampling temperature to use.
completion_with_retry(**kwargs: Any) → Any[source]#
Use tenacity to retry the completion call.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int[source]#
Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.
Official documentation: openai/openai-cookbook
main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
get_token_ids(text: str) → List[int][source]#
Get the tokens present in the text with tiktoken package.
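A sketch of the token helpers; the message content is illustrative:
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(model_name="gpt-3.5-turbo")
# Count tokens for a chat payload the way the OpenAI API would bill it.
num_tokens = chat.get_num_tokens_from_messages([HumanMessage(content="Hello, world!")])
# Or inspect the raw tiktoken ids for a plain string.
token_ids = chat.get_token_ids("Hello, world!")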
pydantic model langchain.chat_models.ChatVertexAI[source]#
Wrapper around Vertex AI large language models.
field model_name: str = 'chat-bison'#
Model name to use.
pydantic model langchain.chat_models.PromptLayerChatOpenAI[source]#
Wrapper around OpenAI Chat large language models and PromptLayer.
To use, you should have the openai and promptlayer python
package installed, and the environment variable OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your openAI API key and
promptlayer key respectively.
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerChatOpenAI adds two optional parameters:
:param pl_tags: List of strings to tag the request with.
:param return_pl_id: If True, the PromptLayer request ID will be
returned in the generation_info field of the
Generation object.
Example
from langchain.chat_models import PromptLayerChatOpenAI
openai = PromptLayerChatOpenAI(model_name="gpt-3.5-turbo")
field pl_tags: Optional[List[str]] = None#
field return_pl_id: Optional[bool] = False#
Chains#
Chains are easily reusable components which can be linked together.
pydantic model langchain.chains.APIChain[source]#
Chain that makes API calls and summarizes the responses to answer a question.
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_api_answer_prompt » all fields
validate_api_request_prompt » all fields
field api_answer_chain: LLMChain [Required]#
field api_docs: str [Required]#
field api_request_chain: LLMChain [Required]#
field requests_wrapper: TextRequestsWrapper [Required]#
classmethod from_llm_and_api_docs(llm: langchain.base_language.BaseLanguageModel, api_docs: str, headers: Optional[dict] = None, api_url_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url:', template_format='f-string', validate_template=True), api_response_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question', 'api_url', 'api_response'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url: {api_url}\n\nHere is the response from the API:\n\n{api_response}\n\nSummarize this response to answer the original question.\n\nSummary:', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.api.base.APIChain[source]#
Load chain from just an LLM and the api docs.
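A usage sketch; the API documentation string and the question are illustrative:
from langchain.chains import APIChain
from langchain.llms import OpenAI

api_docs = """BASE URL: https://api.example.com
GET /weather?city={city} returns the current weather for a city."""

chain = APIChain.from_llm_and_api_docs(llm=OpenAI(temperature=0), api_docs=api_docs)
# The chain first generates an API URL from the docs, calls it, then summarizes.
answer = chain.run("What is the weather in Paris?")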
pydantic model langchain.chains.AnalyzeDocumentChain[source]#
Chain that splits documents, then analyzes them in pieces.
Validators
raise_deprecation » all fields
set_verbose » verbose
field combine_docs_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain [Required]#
field text_splitter: langchain.text_splitter.TextSplitter [Optional]#
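A usage sketch pairing it with a summarization chain; the input text is a placeholder:
from langchain.chains import AnalyzeDocumentChain
from langchain.chains.summarize import load_summarize_chain
from langchain.llms import OpenAI

summary_chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce")
analyze_chain = AnalyzeDocumentChain(combine_docs_chain=summary_chain)
# The text is split with the default text_splitter, then each piece is summarized.
summary = analyze_chain.run("...a long document as a single string...")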
pydantic model langchain.chains.ChatVectorDBChain[source]#
Chain for chatting with a vector database.
Validators
raise_deprecation » all fields
set_verbose » verbose
field search_kwargs: dict [Optional]#
field top_k_docs_for_context: int = 4#
field vectorstore: VectorStore [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, condense_question_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), chain_type: str = 'stuff', combine_docs_chain_kwargs: Optional[Dict] = None, **kwargs: Any) → langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain[source]#
Load chain from LLM.
pydantic model langchain.chains.ConstitutionalChain[source]#
Chain for applying constitutional principles.
Example
from langchain.llms import OpenAI
from langchain.chains import LLMChain, ConstitutionalChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
from langchain.prompts import PromptTemplate
llm = OpenAI()
qa_prompt = PromptTemplate(
template="Q: {question} A:",
input_variables=["question"],
)
qa_chain = LLMChain(llm=llm, prompt=qa_prompt)
constitutional_chain = ConstitutionalChain.from_llm(
llm=llm,
chain=qa_chain,
constitutional_principles=[
ConstitutionalPrinciple(
critique_request="Tell if this answer is good.",
revision_request="Give a better answer.",
)
],
)
constitutional_chain.run(question="What is the meaning of life?")
Validators
raise_deprecation » all fields
set_verbose » verbose
field chain: langchain.chains.llm.LLMChain [Required]#
field constitutional_principles: List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple] [Required]#
field critique_chain: langchain.chains.llm.LLMChain [Required]#
field return_intermediate_steps: bool = False#
field revision_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, chain: langchain.chains.llm.LLMChain, critique_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous. Critique Needed.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn’t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model’s response could be interpreted as saying that it isn’t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique Needed.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600’s. It’s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, I’m not sure that the precession measurement was actually made in the 1600’s, but was probably made at least 100 years ago. I’m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I’m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I’m pretty sure it decays more quickly than Newton’s law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that is smaller and decays more quickly than Newton’s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': "Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or pet store, contact you must.', 'critique_request': "Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.", 'critique': "The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. No critique needed.", 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique:', example_separator='\n === \n', prefix="Below is a conversation between a human and an AI model. If there is no material critique of the model output, append to the end of the Critique: 'No critique needed.' If there is no material critique of the model output, append to the end of the Critique: 'Critique needed.'", template_format='f-string', validate_template=True), revision_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn’t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model’s response could be interpreted as saying that it isn’t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600’s. It’s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, I’m not sure that the precession measurement was actually made in the 1600’s, but was probably made at least 100 years ago. I’m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I’m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I’m pretty sure it decays more quickly than Newton’s law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your training data, and you’re unstable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that is smaller and decays more quickly than Newton’s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': "Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or pet store, contact you must.', 'critique_request': "Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.", 'critique': "The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. No critique needed.", 'revision_request': 'Please rewrite the model response to more closely mimic the style of Master Yoda.', 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nIf the critique does not identify anything worth changing, ignore the Revision Request and do not make any revisions. Instead, return "No revisions needed".\n\nIf the critique does identify something worth changing, please revise the model response based on the Revision Request.\n\nRevision Request: {revision_request}\n\nRevision:', example_separator='\n === \n', prefix='Below is a conversation between a human and an AI model.', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.constitutional_ai.base.ConstitutionalChain[source]#
Create a chain from an LLM.
classmethod get_principles(names: Optional[List[str]] = None) → List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple][source]#
property input_keys: List[str]#
Defines the input keys.
property output_keys: List[str]#
Defines the output keys.
pydantic model langchain.chains.ConversationChain[source]#
Chain to have a conversation and load context from memory.
Example
from langchain import ConversationChain, OpenAI
conversation = ConversationChain(llm=OpenAI())
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_prompt_input_variables » all fields
field memory: langchain.schema.BaseMemory [Optional]#
Default memory store.
field prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n{history}\nHuman: {input}\nAI:', template_format='f-string', validate_template=True)#
Default conversation prompt to use.
property input_keys: List[str]#
Use this because some prompt vars come from history.
pydantic model langchain.chains.ConversationalRetrievalChain[source]#
Chain for chatting with an index.
Validators
raise_deprecation » all fields
set_verbose » verbose
field max_tokens_limit: Optional[int] = None#
If set, restricts the docs to return from store based on tokens, enforced only
for StuffDocumentsChain
field retriever: BaseRetriever [Required]#
Index to connect to.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, retriever: langchain.schema.BaseRetriever, condense_question_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), chain_type: str = 'stuff', verbose: bool = False, combine_docs_chain_kwargs: Optional[Dict] = None, **kwargs: Any) → langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain[source]#
Load chain from LLM.
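A usage sketch; vectorstore stands in for any VectorStore you have already built:
from langchain.chains import ConversationalRetrievalChain
from langchain.llms import OpenAI

# vectorstore is assumed to be an existing VectorStore, e.g. built with FAISS.
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), retriever=vectorstore.as_retriever())
chat_history = []
result = qa({"question": "What did the author say about X?", "chat_history": chat_history})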
pydantic model langchain.chains.FlareChain[source]#
Validators
raise_deprecation » all fields
set_verbose » verbose
field max_iter: int = 10#
field min_prob: float = 0.2#
field min_token_gap: int = 5#
field num_pad_tokens: int = 2#
field output_parser: FinishedOutputParser [Optional]#
field question_generator_chain: QuestionGeneratorChain [Required]#
field response_chain: _ResponseChain [Optional]#
field retriever: BaseRetriever [Required]#
field start_with_retrieval: bool = True#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, max_generation_len: int = 32, **kwargs: Any) → langchain.chains.flare.base.FlareChain[source]#
property input_keys: List[str]#
Input keys this chain expects.
property output_keys: List[str]#
Output keys this chain expects.
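A construction sketch; retriever is assumed to be an existing BaseRetriever, and the parameter values are illustrative:
from langchain.chains import FlareChain
from langchain.chat_models import ChatOpenAI

# retriever is assumed to be an existing BaseRetriever instance.
flare = FlareChain.from_llm(
    ChatOpenAI(temperature=0),
    retriever=retriever,
    max_generation_len=164,
    min_prob=0.3,
)
result = flare.run("Explain in detail the difference between X and Y.")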
pydantic model langchain.chains.GraphCypherQAChain[source]#
Chain for question-answering against a graph by generating Cypher statements.
Validators
raise_deprecation » all fields
set_verbose » verbose
field cypher_generation_chain: LLMChain [Required]#
field graph: Neo4jGraph [Required]#
field qa_chain: LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, *, qa_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="You are an assistant that helps to form nice and human understandable answers.\nThe information part contains the provided information that you can use to construct an answer.\nThe provided information is authorative, you must never doubt it or try to use your internal knowledge to correct it.\nMake it sound like the information are coming from an AI assistant, but don't add any information.\nInformation:\n{context}\n\nQuestion: {question}\nHelpful Answer:", template_format='f-string', validate_template=True), cypher_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['schema', 'question'], output_parser=None, partial_variables={}, template='Task:Generate Cypher statement to query a graph database.\nInstructions:\nUse only the provided relationship types and properties in the schema.\nDo not use any other relationship types or properties that are not provided.\nSchema:\n{schema}\nNote: Do not include any explanations or apologies in your responses.\nDo not respond to any questions that might ask anything else than for you to construct a Cypher statement.\nDo not include any text except the generated Cypher statement.\n\nThe question is:\n{question}', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.graph_qa.cypher.GraphCypherQAChain[source]#
Initialize from LLM.
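A usage sketch; the Neo4j connection details are placeholders:
from langchain.chains import GraphCypherQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import Neo4jGraph

graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")
chain = GraphCypherQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph)
# The chain generates a Cypher query from the schema, runs it, then phrases an answer.
answer = chain.run("Who acted in Top Gun?")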
pydantic model langchain.chains.GraphQAChain[source]#
Chain for question-answering against a graph.
Validators
raise_deprecation » all fields
set_verbose » verbose
field entity_extraction_chain: LLMChain [Required]#
field graph: NetworkxEntityGraph [Required]#
field qa_chain: LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, qa_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="Use the following knowledge triplets to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\nHelpful Answer:", template_format='f-string', validate_template=True), entity_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['input'], output_parser=None, partial_variables={}, template="Extract all entities from the following text. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return.\n\nEXAMPLE\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I'm working with Sam.\nOutput: Langchain, Sam\nEND OF EXAMPLE\n\nBegin!\n\n{input}\nOutput:", template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.graph_qa.base.GraphQAChain[source]#
Initialize from LLM.
pydantic model langchain.chains.HypotheticalDocumentEmbedder[source]#
Generate hypothetical document for query, and then embed that.
Based on https://arxiv.org/abs/2212.10496
Validators
raise_deprecation » all fields
set_verbose » verbose
field base_embeddings: Embeddings [Required]#
field llm_chain: LLMChain [Required]#
combine_embeddings(embeddings: List[List[float]]) → List[float][source]#
Combine embeddings into final embeddings.
embed_documents(texts: List[str]) → List[List[float]][source]#
Call the base embeddings.
embed_query(text: str) → List[float][source]#
Generate a hypothetical document and embed it.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, base_embeddings: langchain.embeddings.base.Embeddings, prompt_key: str, **kwargs: Any) → langchain.chains.hyde.base.HypotheticalDocumentEmbedder[source]#
Load and use LLMChain for a specific prompt key.
property input_keys: List[str]#
Input keys for Hyde’s LLM chain.
property output_keys: List[str]#
Output keys for Hyde’s LLM chain.
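A usage sketch; "web_search" is one of the prompt keys shipped with HyDE, and the query is illustrative:
from langchain.chains import HypotheticalDocumentEmbedder
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI

embedder = HypotheticalDocumentEmbedder.from_llm(
    OpenAI(temperature=0), OpenAIEmbeddings(), prompt_key="web_search"
)
# Generates a hypothetical answer document, then embeds that document.
vector = embedder.embed_query("Where is the Eiffel Tower?")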
pydantic model langchain.chains.LLMBashChain[source]#
Chain that interprets a prompt and executes bash code to perform bash operations.
Example
from langchain import LLMBashChain, OpenAI
llm_bash = LLMBashChain.from_llm(OpenAI())
Validators
raise_deprecation » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_prompt » all fields
field llm: Optional[BaseLanguageModel] = None#
[Deprecated] LLM wrapper to use.
field llm_chain: LLMChain [Required]#
field prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put "#!/bin/bash" in your answer. Make sure to reason step by step, using this format:\n\nQuestion: "copy the files in the directory named \'target\' into a new directory at the same level as target called \'myNewDirectory\'"\n\nI need to take the following actions:\n- List all files in the directory\n- Create a new directory\n- Copy the files from the first directory into the second directory\n```bash\nls\nmkdir myNewDirectory\ncp -r target/* myNewDirectory\n```\n\nThat is the format. Begin!\n\nQuestion: {question}', template_format='f-string', validate_template=True)#
[Deprecated]
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put "#!/bin/bash" in your answer. Make sure to reason step by step, using this format:\n\nQuestion: "copy the files in the directory named \'target\' into a new directory at the same level as target called \'myNewDirectory\'"\n\nI need to take the following actions:\n- List all files in the directory\n- Create a new directory\n- Copy the files from the first directory into the second directory\n```bash\nls\nmkdir myNewDirectory\ncp -r target/* myNewDirectory\n```\n\nThat is the format. Begin!\n\nQuestion: {question}', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.llm_bash.base.LLMBashChain[source]#
pydantic model langchain.chains.LLMChain[source]#
Chain to run queries against LLMs.
Example
from langchain import LLMChain, OpenAI, PromptTemplate
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(
input_variables=["adjective"], template=prompt_template
)
llm = LLMChain(llm=OpenAI(), prompt=prompt)
Validators
raise_deprecation » all fields
set_verbose » verbose
field llm: BaseLanguageModel [Required]#
field prompt: BasePromptTemplate [Required]#
Prompt object to use.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → List[Dict[str, str]][source]#
Utilize the LLM generate method for speed gains.
async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]][source]#
Call apply and then parse the results.
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[langchain.callbacks.manager.AsyncCallbackManagerForChainRun] = None) → langchain.schema.LLMResult[source]#
Generate LLM result from inputs.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → List[Dict[str, str]][source]#
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]][source]#
Call apply and then parse the results.
async apredict(callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str[source]#
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
async apredict_and_parse(callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]][source]#
Call apredict and then parse the results.
async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[langchain.callbacks.manager.AsyncCallbackManagerForChainRun] = None) → Tuple[List[langchain.schema.PromptValue], Optional[List[str]]][source]#
Prepare prompts from inputs.
create_outputs(response: langchain.schema.LLMResult) → List[Dict[str, str]][source]#
Create outputs from response.
classmethod from_string(llm: langchain.base_language.BaseLanguageModel, template: str) → langchain.chains.base.Chain[source]#
Create LLMChain from LLM and template.
generate(input_list: List[Dict[str, Any]], run_manager: Optional[langchain.callbacks.manager.CallbackManagerForChainRun] = None) → langchain.schema.LLMResult[source]#
Generate LLM result from inputs.
predict(callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str[source]#
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
predict_and_parse(callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]][source]#
Call predict and then parse the results.
prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[langchain.callbacks.manager.CallbackManagerForChainRun] = None) → Tuple[List[langchain.schema.PromptValue], Optional[List[str]]][source]#
Prepare prompts from inputs.
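A sketch of batch usage with apply, reusing the llm chain from the example above:
inputs = [{"adjective": "funny"}, {"adjective": "sad"}]
# apply() batches the prompts through a single generate call for speed.
results = llm.apply(inputs)  # e.g. [{"text": "..."}, {"text": "..."}]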
pydantic model langchain.chains.LLMCheckerChain[source]#
Chain for question-answering with self-verification.
Example
from langchain import OpenAI, LLMCheckerChain
llm = OpenAI(temperature=0.7)
checker_chain = LLMCheckerChain.from_llm(llm)
Validators
raise_deprecation » all fields
raise_deprecation » all fields
set_verbose » verbose
field check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\n{assertions}\nFor each assertion, determine whether it is true or false. If it is false, explain why.\n\n', template_format='f-string', validate_template=True)#
[Deprecated]
field create_draft_answer_prompt: PromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\n\n', template_format='f-string', validate_template=True)#
[Deprecated]
field list_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\n{statement}\nMake a bullet point list of the assumptions you made when producing the above statement.\n\n', template_format='f-string', validate_template=True)#
[Deprecated]
field llm: Optional[BaseLanguageModel] = None#
[Deprecated] LLM wrapper to use.
field question_to_checked_assertions_chain: SequentialChain [Required]#
field revised_answer_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template="{checked_assertions}\n\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\n\nAnswer:", template_format='f-string', validate_template=True)#
[Deprecated] Prompt to use when questioning the documents.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, create_draft_answer_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\n\n', template_format='f-string', validate_template=True), list_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\n{statement}\nMake a bullet point list of the assumptions you made when producing the above statement.\n\n', template_format='f-string', validate_template=True), check_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\n{assertions}\nFor each assertion, determine whether it is true or false. If it is false, explain why.\n\n', template_format='f-string', validate_template=True), revised_answer_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template="{checked_assertions}\n\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\n\nAnswer:", template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.llm_checker.base.LLMCheckerChain[source]#
pydantic model langchain.chains.LLMMathChain[source]#
Chain that interprets a prompt and executes python code to do math.
Example
from langchain import LLMMathChain, OpenAI
llm_math = LLMMathChain.from_llm(OpenAI())
Validators
raise_deprecation » all fields
raise_deprecation » all fields
set_verbose » verbose
field llm: Optional[BaseLanguageModel] = None#
[Deprecated] LLM wrapper to use.
field llm_chain: LLMChain [Required]#
field prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expression that can be executed using Python\'s numexpr library. Use the output of running this code to answer the question.\n\nQuestion: ${{Question with math problem.}}\n```text\n${{single line mathematical expression that solves the problem}}\n```\n...numexpr.evaluate(text)...\n```output\n${{Output of running the code}}\n```\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n```text\n37593 * 67\n```\n...numexpr.evaluate("37593 * 67")...\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: 37593^(1/5)\n```text\n37593**(1/5)\n```\n...numexpr.evaluate("37593**(1/5)")...\n```output\n8.222831614237718\n```\nAnswer: 8.222831614237718\n\nQuestion: {question}\n', template_format='f-string', validate_template=True)#
[Deprecated] Prompt to use to translate to python if necessary.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expression that can be executed using Python\'s numexpr library. Use the output of running this code to answer the question.\n\nQuestion: ${{Question with math problem.}}\n```text\n${{single line mathematical expression that solves the problem}}\n```\n...numexpr.evaluate(text)...\n```output\n${{Output of running the code}}\n```\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n```text\n37593 * 67\n```\n...numexpr.evaluate("37593 * 67")...\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: 37593^(1/5)\n```text\n37593**(1/5)\n```\n...numexpr.evaluate("37593**(1/5)")...\n```output\n8.222831614237718\n```\nAnswer: 8.222831614237718\n\nQuestion: {question}\n', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.llm_math.base.LLMMathChain[source]#
pydantic model langchain.chains.LLMRequestsChain[source]#
Chain that hits a URL and then uses an LLM to parse results.
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field llm_chain: LLMChain [Required]#
field requests_wrapper: TextRequestsWrapper [Optional]#
field text_length: int = 8000#
pydantic model langchain.chains.LLMSummarizationCheckerChain[source]#
Chain for question-answering with self-verification.
Example
from langchain import OpenAI, LLMSummarizationCheckerChain
llm = OpenAI(temperature=0.0)
checker_chain = LLMSummarizationCheckerChain.from_llm(llm)
Validators
raise_deprecation » all fields
raise_deprecation » all fields
set_verbose » verbose
field are_all_true_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\n\nIf all of the assertions are true, return "True". If any of the assertions are false, return "False".\n\nHere are some examples:\n===\n\nChecked Assertions: """\n- The sky is red: False\n- Water is made of lava: False\n- The sun is a star: True\n"""\nResult: False\n\n===\n\nChecked Assertions: """\n- The sky is blue: True\n- Water is wet: True\n- The sun is a star: True\n"""\nResult: True\n\n===\n\nChecked Assertions: """\n- The sky is blue - True\n- Water is made of lava- False\n- The sun is a star - True\n"""\nResult: False\n\n===\n\nChecked Assertions:"""\n{checked_assertions}\n"""\nResult:', template_format='f-string', validate_template=True)#
[Deprecated]
field check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\n"""\n{assertions}\n"""\n\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".\nIf the fact is false, explain why.\n\n', template_format='f-string', validate_template=True)#
[Deprecated]
field create_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\n\nFormat your output as a bulleted list.\n\nText:\n"""\n{summary}\n"""\n\nFacts:', template_format='f-string', validate_template=True)#
[Deprecated]
field llm: Optional[BaseLanguageModel] = None#
[Deprecated] LLM wrapper to use.
field max_checks: int = 2#
Maximum number of times to check the assertions. Default to double-checking.
field revised_summary_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.\n\nChecked Assertions:\n"""\n{checked_assertions}\n"""\n\nOriginal Summary:\n"""\n{summary}\n"""\n\nUsing these checked assertions, rewrite the original summary to be completely true.\n\nThe output should have the same structure and formatting as the original summary.\n\nSummary:', template_format='f-string', validate_template=True)#
[Deprecated]
field sequential_chain: SequentialChain [Required]#
9f36cd92b426-36 | classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, create_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\n\nFormat your output as a bulleted list.\n\nText:\n"""\n{summary}\n"""\n\nFacts:', template_format='f-string', validate_template=True), check_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\n"""\n{assertions}\n"""\n\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".\nIf the fact is false, explain why.\n\n', template_format='f-string', validate_template=True), revised_summary_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and | https://python.langchain.com/en/latest/reference/modules/chains.html |
9f36cd92b426-37 | are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.\n\nChecked Assertions:\n"""\n{checked_assertions}\n"""\n\nOriginal Summary:\n"""\n{summary}\n"""\n\nUsing these checked assertions, rewrite the original summary to be completely true.\n\nThe output should have the same structure and formatting as the original summary.\n\nSummary:', template_format='f-string', validate_template=True), are_all_true_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\n\nIf all of the assertions are true, return "True". If any of the assertions are false, return "False".\n\nHere are some examples:\n===\n\nChecked Assertions: """\n- The sky is red: False\n- Water is made of lava: False\n- The sun is a star: True\n"""\nResult: False\n\n===\n\nChecked Assertions: """\n- The sky is blue: True\n- Water is wet: | https://python.langchain.com/en/latest/reference/modules/chains.html |
9f36cd92b426-38 | The sky is blue: True\n- Water is wet: True\n- The sun is a star: True\n"""\nResult: True\n\n===\n\nChecked Assertions: """\n- The sky is blue - True\n- Water is made of lava- False\n- The sun is a star - True\n"""\nResult: False\n\n===\n\nChecked Assertions:"""\n{checked_assertions}\n"""\nResult:', template_format='f-string', validate_template=True), verbose: bool = False, **kwargs: Any) → langchain.chains.llm_summarization_checker.base.LLMSummarizationCheckerChain[source]# | https://python.langchain.com/en/latest/reference/modules/chains.html |
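A minimal usage sketch (untested; assumes OPENAI_API_KEY is set and the summary text is a placeholder):
from langchain.chains import LLMSummarizationCheckerChain
from langchain.llms import OpenAI

# max_checks caps how many times the create/check/revise loop runs
checker_chain = LLMSummarizationCheckerChain.from_llm(
    OpenAI(temperature=0), max_checks=2, verbose=True
)
revised = checker_chain.run("Mammals can lay eggs. The platypus is a mammal.")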
9f36cd92b426-39 | pydantic model langchain.chains.MapReduceChain[source]#
Map-reduce chain.
Validators
raise_deprecation » all fields
set_verbose » verbose
field combine_documents_chain: BaseCombineDocumentsChain [Required]#
Chain to use to combine documents.
field text_splitter: TextSplitter [Required]#
Text splitter to use.
classmethod from_params(llm: langchain.base_language.BaseLanguageModel, prompt: langchain.prompts.base.BasePromptTemplate, text_splitter: langchain.text_splitter.TextSplitter, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.chains.mapreduce.MapReduceChain[source]#
Construct a map-reduce chain that uses the chain for map and reduce.
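A sketch of from_params (untested; the prompt wording and input text are placeholders). The same LLM chain is used for both the map and the reduce steps:
from langchain.chains import MapReduceChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.text_splitter import CharacterTextSplitter

# The prompt is applied to each split of the input, then to the combined result.
prompt = PromptTemplate(
    input_variables=["text"],
    template="Write a concise summary of the following:\n\n{text}",
)
chain = MapReduceChain.from_params(
    llm=OpenAI(temperature=0),
    prompt=prompt,
    text_splitter=CharacterTextSplitter(),
)
summary = chain.run("... long document text ...")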
pydantic model langchain.chains.OpenAIModerationChain[source]#
Pass input through a moderation endpoint.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.chains import OpenAIModerationChain
moderation = OpenAIModerationChain()
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field error: bool = False#
Whether or not to error if bad content was found.
field model_name: Optional[str] = None#
Moderation model name to use.
field openai_api_key: Optional[str] = None#
field openai_organization: Optional[str] = None#
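A short sketch of how the error flag changes behavior (assumes OPENAI_API_KEY is set):
from langchain.chains import OpenAIModerationChain

# With error=False (the default) flagged input is replaced by a policy message;
# with error=True the chain raises instead.
moderation = OpenAIModerationChain(error=False)
print(moderation.run("This is an innocuous sentence."))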
pydantic model langchain.chains.OpenAPIEndpointChain[source]#
9f36cd92b426-40 | Chain that interacts with an OpenAPI endpoint using natural language.
Validators
raise_deprecation » all fields
set_verbose » verbose
field api_operation: APIOperation [Required]#
field api_request_chain: LLMChain [Required]#
field api_response_chain: Optional[LLMChain] = None#
field param_mapping: _ParamMapping [Required]#
field requests: Requests [Optional]#
field return_intermediate_steps: bool = False#
deserialize_json_input(serialized_args: str) → dict[source]#
Parse the serialized TypeScript-style dictionary of arguments.
Resolve the path, the query-params dict, and the optional requestBody dict.
classmethod from_api_operation(operation: langchain.tools.openapi.utils.api_models.APIOperation, llm: langchain.base_language.BaseLanguageModel, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, return_intermediate_steps: bool = False, raw_response: bool = False, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.chains.api.openapi.chain.OpenAPIEndpointChain[source]#
Create an OpenAPIEndpointChain from an operation and a spec.
classmethod from_url_and_method(spec_url: str, path: str, method: str, llm: langchain.base_language.BaseLanguageModel, requests: Optional[langchain.requests.Requests] = None, return_intermediate_steps: bool = False, **kwargs: Any) → langchain.chains.api.openapi.chain.OpenAPIEndpointChain[source]#
Create an OpenAPIEndpointChain from a spec at the specified URL.
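A hedged sketch of from_url_and_method; the spec URL, path, and question are placeholders for a real OpenAPI spec:
from langchain.chains import OpenAPIEndpointChain
from langchain.llms import OpenAI

chain = OpenAPIEndpointChain.from_url_and_method(
    spec_url="https://example.com/openapi.json",  # placeholder spec
    path="/search",
    method="get",
    llm=OpenAI(temperature=0),
    return_intermediate_steps=True,  # also surface the raw request/response
)
result = chain("Find entries related to 'climate'")
print(result["output"], result["intermediate_steps"])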
pydantic model langchain.chains.PALChain[source]#
Implements Program-Aided Language Models.
Validators
raise_deprecation » all fields
9f36cd92b426-41 | set_verbose » verbose
field get_answer_expr: str = 'print(solution())'#
field llm: Optional[BaseLanguageModel] = None#
[Deprecated]
field llm_chain: LLMChain [Required]# | https://python.langchain.com/en/latest/reference/modules/chains.html |
9f36cd92b426-42 | field prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\n\n# solution in Python:\n\n\ndef solution():\n """Olivia has $23. She bought five bagels for $3 each. How much money does she have left?"""\n money_initial = 23\n bagels = 5\n bagel_cost = 3\n money_spent = bagels * bagel_cost\n money_left = money_initial - money_spent\n result = money_left\n return result\n\n\n\n\n\nQ: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\n\n# solution in Python:\n\n\ndef solution():\n """Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?"""\n golf_balls_initial = | https://python.langchain.com/en/latest/reference/modules/chains.html |
9f36cd92b426-43 | end of wednesday?"""\n golf_balls_initial = 58\n golf_balls_lost_tuesday = 23\n golf_balls_lost_wednesday = 2\n golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday\n result = golf_balls_left\n return result\n\n\n\n\n\nQ: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\n\n# solution in Python:\n\n\ndef solution():\n """There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?"""\n computers_initial = 9\n computers_per_day = 5\n num_days = 4 # 4 days between monday and thursday\n computers_added = computers_per_day * num_days\n computers_total = computers_initial + computers_added\n result = computers_total\n return result\n\n\n\n\n\nQ: Shawn has five toys. For Christmas, he got two toys each | https://python.langchain.com/en/latest/reference/modules/chains.html |
9f36cd92b426-44 | toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\n\n# solution in Python:\n\n\ndef solution():\n """Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?"""\n toys_initial = 5\n mom_toys = 2\n dad_toys = 2\n total_received = mom_toys + dad_toys\n total_toys = toys_initial + total_received\n result = total_toys\n return result\n\n\n\n\n\nQ: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\n\n# solution in Python:\n\n\ndef solution():\n """Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?"""\n jason_lollipops_initial = 20\n jason_lollipops_after = 12\n denny_lollipops = | https://python.langchain.com/en/latest/reference/modules/chains.html |
9f36cd92b426-45 | = 12\n denny_lollipops = jason_lollipops_initial - jason_lollipops_after\n result = denny_lollipops\n return result\n\n\n\n\n\nQ: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\n\n# solution in Python:\n\n\ndef solution():\n """Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?"""\n leah_chocolates = 32\n sister_chocolates = 42\n total_chocolates = leah_chocolates + sister_chocolates\n chocolates_eaten = 35\n chocolates_left = total_chocolates - chocolates_eaten\n result = chocolates_left\n return result\n\n\n\n\n\nQ: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\n\n# solution in Python:\n\n\ndef solution():\n """If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are | https://python.langchain.com/en/latest/reference/modules/chains.html |
9f36cd92b426-46 | and 2 more cars arrive, how many cars are in the parking lot?"""\n cars_initial = 3\n cars_arrived = 2\n total_cars = cars_initial + cars_arrived\n result = total_cars\n return result\n\n\n\n\n\nQ: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\n\n# solution in Python:\n\n\ndef solution():\n """There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?"""\n trees_initial = 15\n trees_after = 21\n trees_added = trees_after - trees_initial\n result = trees_added\n return result\n\n\n\n\n\nQ: {question}\n\n# solution in Python:\n\n\n', template_format='f-string', validate_template=True)# | https://python.langchain.com/en/latest/reference/modules/chains.html |
9f36cd92b426-47 | [Deprecated]
field python_globals: Optional[Dict[str, Any]] = None#
field python_locals: Optional[Dict[str, Any]] = None#
field return_intermediate_steps: bool = False#
field stop: str = '\n\n'#
classmethod from_colored_object_prompt(llm: langchain.base_language.BaseLanguageModel, **kwargs: Any) → langchain.chains.pal.base.PALChain[source]#
Load PAL from colored object prompt.
classmethod from_math_prompt(llm: langchain.base_language.BaseLanguageModel, **kwargs: Any) → langchain.chains.pal.base.PALChain[source]#
Load PAL from math prompt.
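A minimal from_math_prompt sketch (untested; the word problem is illustrative):
from langchain.chains import PALChain
from langchain.llms import OpenAI

pal_chain = PALChain.from_math_prompt(OpenAI(temperature=0), verbose=True)
question = (
    "Jan has three times the number of pets as Marcia. Marcia has two "
    "more pets than Cindy. If Cindy has four pets, how many pets total?"
)
# The LLM writes a small Python program; the chain executes it for the answer.
print(pal_chain.run(question))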
pydantic model langchain.chains.QAGenerationChain[source]#
Validators
raise_deprecation » all fields
set_verbose » verbose
field input_key: str = 'text'#
field k: Optional[int] = None#
field llm_chain: LLMChain [Required]#
field output_key: str = 'questions'#
field text_splitter: TextSplitter = <langchain.text_splitter.RecursiveCharacterTextSplitter object>#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, **kwargs: Any) → langchain.chains.qa_generation.base.QAGenerationChain[source]#
property input_keys: List[str]#
Input keys this chain expects.
property output_keys: List[str]#
Output keys this chain expects.
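A brief sketch of generating question/answer pairs (untested; the input text is a placeholder):
from langchain.chains import QAGenerationChain
from langchain.llms import OpenAI

qa_gen = QAGenerationChain.from_llm(OpenAI(temperature=0))
# Returns question/answer pairs parsed from the model's JSON output.
qa_pairs = qa_gen.run("LangChain is a framework for building LLM applications.")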
pydantic model langchain.chains.QAWithSourcesChain[source]#
Question answering with sources over documents.
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_naming » all fields
pydantic model langchain.chains.RetrievalQA[source]#
9f36cd92b426-48 | Chain for question-answering against an index.
Example
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.vectorstores import FAISS
from langchain.vectorstores.base import VectorStoreRetriever
retriever = VectorStoreRetriever(vectorstore=FAISS(...))
retrievalQA = RetrievalQA.from_llm(llm=OpenAI(), retriever=retriever)
Validators
raise_deprecation » all fields
set_verbose » verbose
field retriever: BaseRetriever [Required]#
pydantic model langchain.chains.RetrievalQAWithSourcesChain[source]#
Question-answering with sources over an index.
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_naming » all fields
field max_tokens_limit: int = 3375#
Restrict the docs to return from the store based on tokens;
enforced only for StuffDocumentsChain and only if reduce_k_below_max_tokens is set to true.
field reduce_k_below_max_tokens: bool = False#
Reduce the number of results returned from the store based on the token limit.
field retriever: langchain.schema.BaseRetriever [Required]#
Index to connect to.
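A small end-to-end sketch (untested; the indexed text and source tag are illustrative, and from_chain_type is assumed from the shared QA-with-sources base class):
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

# Tiny in-memory index; the "source" metadata feeds the sources output.
store = FAISS.from_texts(
    ["LangChain provides chains, agents, and memory."],
    OpenAIEmbeddings(),
    metadatas=[{"source": "docs-intro"}],
)
chain = RetrievalQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0), chain_type="stuff", retriever=store.as_retriever()
)
result = chain({"question": "What does LangChain provide?"})
print(result["answer"], result["sources"])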
pydantic model langchain.chains.SQLDatabaseChain[source]#
Chain for interacting with SQL Database.
Example
from langchain import SQLDatabaseChain, OpenAI, SQLDatabase
db = SQLDatabase(...)
db_chain = SQLDatabaseChain.from_llm(OpenAI(), db)
Validators
raise_deprecation » all fields
set_verbose » verbose
field database: SQLDatabase [Required]#
SQL Database to connect to.
field llm: Optional[BaseLanguageModel] = None#
9f36cd92b426-49 | [Deprecated] LLM wrapper to use.
field llm_chain: LLMChain [Required]#
field prompt: Optional[BasePromptTemplate] = None#
[Deprecated] Prompt to use to translate natural language to SQL.
field query_checker_prompt: Optional[BasePromptTemplate] = None#
The prompt template that should be used by the query checker
field return_direct: bool = False#
Whether or not to return the result of querying the SQL table directly.
field return_intermediate_steps: bool = False#
Whether or not to return the intermediate steps along with the final answer.
field top_k: int = 5#
Number of results to return from the query
field use_query_checker: bool = False#
Whether or not the query checker tool should be used to attempt
to fix the initial SQL from the LLM.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, db: langchain.sql_database.SQLDatabase, prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, **kwargs: Any) → langchain.chains.sql_database.base.SQLDatabaseChain[source]#
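A hedged sketch combining the options above (the SQLite URI and question are placeholders):
from langchain import OpenAI, SQLDatabase
from langchain.chains import SQLDatabaseChain

db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder database
db_chain = SQLDatabaseChain.from_llm(
    OpenAI(temperature=0),
    db,
    use_query_checker=True,  # have the LLM double-check its SQL first
    top_k=5,                 # cap the number of rows the query returns
)
db_chain.run("How many rows are in the users table?")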
pydantic model langchain.chains.SQLDatabaseSequentialChain[source]#
Chain for querying a SQL database, implemented as a sequential chain.
The chain is as follows:
1. Based on the query, determine which tables to use.
2. Based on those tables, call the normal SQL database chain.
This is useful in cases where the number of tables in the database is large.
Validators
raise_deprecation » all fields
set_verbose » verbose
field decider_chain: LLMChain [Required]#
field return_intermediate_steps: bool = False#
field sql_chain: SQLDatabaseChain [Required]# | https://python.langchain.com/en/latest/reference/modules/chains.html |
9f36cd92b426-50 | classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, database: langchain.sql_database.SQLDatabase, query_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['input', 'table_info', 'dialect', 'top_k'], output_parser=None, partial_variables={}, template='Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\n\nNever query for all the columns from a specific table, only ask for a the few relevant columns given the question.\n\nPay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n\nUse the following format:\n\nQuestion: Question here\nSQLQuery: SQL Query to run\nSQLResult: Result of the | https://python.langchain.com/en/latest/reference/modules/chains.html |
9f36cd92b426-51 | SQL Query to run\nSQLResult: Result of the SQLQuery\nAnswer: Final answer here\n\nOnly use the following tables:\n{table_info}\n\nQuestion: {input}', template_format='f-string', validate_template=True), decider_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['query', 'table_names'], output_parser=CommaSeparatedListOutputParser(), partial_variables={}, template='Given the below input question and list of potential tables, output a comma separated list of the table names that may be necessary to answer this question.\n\nQuestion: {query}\n\nTable Names: {table_names}\n\nRelevant Table Names:', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.sql_database.base.SQLDatabaseSequentialChain[source]# | https://python.langchain.com/en/latest/reference/modules/chains.html |
9f36cd92b426-52 | Load the necessary chains.
pydantic model langchain.chains.SequentialChain[source]#
Chain where the outputs of one chain feed directly into the next.
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_chains » all fields
field chains: List[langchain.chains.base.Chain] [Required]#
field input_variables: List[str] [Required]#
field return_all: bool = False#
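A wiring sketch (untested; the prompts are illustrative, and output_variables is assumed to select which outputs the chain returns):
from langchain.chains import LLMChain, SequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.7)
synopsis_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["title"],
        template="Write a synopsis for a play titled {title}:",
    ),
    output_key="synopsis",  # becomes the next chain's input
)
review_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["synopsis"],
        template="Write a review of this synopsis:\n{synopsis}",
    ),
    output_key="review",
)
overall = SequentialChain(
    chains=[synopsis_chain, review_chain],
    input_variables=["title"],
    output_variables=["synopsis", "review"],
)
overall({"title": "Tragedy at Sunset on the Beach"})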
pydantic model langchain.chains.SimpleSequentialChain[source]#
Simple chain where the outputs of one step feed directly into the next.
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_chains » all fields
field chains: List[langchain.chains.base.Chain] [Required]#
field strip_outputs: bool = False#
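For the single-input/single-output case, a compact sketch (the prompts are illustrative):
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.7)
name_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["product"],
        template="Name a company that makes {product}.",
    ),
)
slogan_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["company"],
        template="Write a slogan for {company}.",
    ),
)
# Each step's single output string is passed as the next step's input.
chain = SimpleSequentialChain(chains=[name_chain, slogan_chain], strip_outputs=True)
chain.run("colorful socks")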
pydantic model langchain.chains.TransformChain[source]#
Chain that transforms chain output by applying a custom function.
Example
from langchain import TransformChain
transform_chain = TransformChain(input_variables=["text"],
    output_variables=["entities"], transform=func)
Validators
raise_deprecation » all fields
set_verbose » verbose
field input_variables: List[str] [Required]#
field output_variables: List[str] [Required]#
field transform: Callable[[Dict[str, str]], Dict[str, str]] [Required]#
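A self-contained sketch with a toy transform function (the "entity" extraction below is illustrative only):
from langchain.chains import TransformChain

def extract_entities(inputs: dict) -> dict:
    # Toy transform: treat capitalized words as entities.
    words = inputs["text"].split()
    entities = [w.strip(".,") for w in words if w[:1].isupper()]
    return {"entities": ", ".join(entities)}

transform_chain = TransformChain(
    input_variables=["text"],
    output_variables=["entities"],
    transform=extract_entities,
)
print(transform_chain.run("LangChain was created by Harrison Chase."))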
pydantic model langchain.chains.VectorDBQA[source]#
Chain for question-answering against a vector database.
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_search_type » all fields
field k: int = 4#
Number of documents to query for.
field search_kwargs: Dict[str, Any] [Optional]#
Extra search args.
field search_type: str = 'similarity'#
Search type to use over the vectorstore: "similarity" or "mmr".
field vectorstore: VectorStore [Required]# | https://python.langchain.com/en/latest/reference/modules/chains.html |
9f36cd92b426-53 | field vectorstore: VectorStore [Required]#
Vector Database to connect to.
pydantic model langchain.chains.VectorDBQAWithSourcesChain[source]#
Question-answering with sources over a vector database.
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_naming » all fields
field k: int = 4#
Number of results to return from store
field max_tokens_limit: int = 3375#
Restrict the docs to return from the store based on tokens;
enforced only for StuffDocumentsChain and only if reduce_k_below_max_tokens is set to true.
field reduce_k_below_max_tokens: bool = False#
Reduce the number of results returned from the store based on the token limit.
field search_kwargs: Dict[str, Any] [Optional]#
Extra search args.
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
Vector Database to connect to.
langchain.chains.load_chain(path: Union[str, pathlib.Path], **kwargs: Any) → langchain.chains.base.Chain[source]#
Unified method for loading a chain from LangChainHub or the local filesystem.
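A brief sketch of both load paths (untested; the local path is a placeholder):
from langchain.chains import load_chain

# lc:// paths resolve against LangChainHub ...
math_chain = load_chain("lc://chains/llm-math/chain.json")
# ... while plain paths load a serialized chain from disk.
local_chain = load_chain("path/to/chain.json")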
961e6b14eed3-0 | LLMs#
Wrappers around large language model APIs.
pydantic model langchain.llms.AI21[source]#
Wrapper around AI21 large language models.
To use, you should have the environment variable AI21_API_KEY
set with your API key.
Example
from langchain.llms import AI21
ai21 = AI21(model="j2-jumbo-instruct")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field base_url: Optional[str] = None#
Base URL to use; if None, it is chosen based on the model name.
field countPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens according to count.
field frequencyPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens according to frequency.
field logitBias: Optional[Dict[str, float]] = None#
Adjust the probability of specific tokens being generated.
field maxTokens: int = 256#
The maximum number of tokens to generate in the completion.
field minTokens: int = 0#
The minimum number of tokens to generate in the completion.
field model: str = 'j2-jumbo-instruct'#
Model name to use.
961e6b14eed3-1 | field numResults: int = 1#
How many completions to generate for each prompt.
field presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens.
field temperature: float = 0.7#
What sampling temperature to use.
field topP: float = 1.0#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
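A minimal call sketch (assumes AI21_API_KEY is set; the prompt and stop sequence are illustrative):
from langchain.llms import AI21

llm = AI21(model="j2-jumbo-instruct", temperature=0.7)
# __call__ checks the cache, runs the model, and returns the completion text.
text = llm("Write a tagline for an ice cream shop.", stop=["\n"])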
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
961e6b14eed3-2 | Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
961e6b14eed3-3 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
961e6b14eed3-4 | Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.AlephAlpha[source]#
Wrapper around Aleph Alpha large language models.
To use, you should have the aleph_alpha_client python package installed, and the
environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Parameters are explained more in depth here:
Aleph-Alpha/aleph-alpha-client
Example
from langchain.llms import AlephAlpha
aleph_alpha = AlephAlpha(aleph_alpha_api_key="my-api-key")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field aleph_alpha_api_key: Optional[str] = None#
API key for Aleph Alpha API.
field best_of: Optional[int] = None#
Generates best_of completions and returns the one with the
highest log probability per token.
field completion_bias_exclusion_first_token_only: bool = False#
Only consider the first token for the completion_bias_exclusion.
field contextual_control_threshold: Optional[float] = None#
If set to None, attention control parameters only apply to those tokens that have
explicitly been set in the request.
If set to a non-None value, control parameters are also applied to similar tokens.
field control_log_additive: Optional[bool] = True#
True: apply control by adding the log(control_factor) to attention scores. | https://python.langchain.com/en/latest/reference/modules/llms.html |