deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
download_documents(query: str) → List[Document][source]¶
Query the Brave search engine and return the results as a list of Documents.
Parameters
query – The query to search for.
Returns: The results as a list of Documents.
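The shape of that conversion can be sketched as follows. This is an illustrative stand-in only: the Document class below mimics langchain.schema.Document so the sketch is self-contained, and the "title"/"link"/"snippet" keys in the JSON hits are assumptions about the payload, not confirmed field names.

```python
import json
from dataclasses import dataclass, field
from typing import List

@dataclass
class Document:
    # Minimal stand-in for langchain.schema.Document, so the sketch runs on its own.
    page_content: str
    metadata: dict = field(default_factory=dict)

def to_documents(raw_json: str) -> List[Document]:
    # run() returns hits as a JSON string; download_documents wraps each hit
    # in a Document. The key names here are assumptions for illustration.
    hits = json.loads(raw_json)
    return [
        Document(
            page_content=hit.get("snippet", ""),
            metadata={"title": hit.get("title"), "link": hit.get("link")},
        )
        for hit in hits
    ]

docs = to_documents('[{"title": "Brave", "link": "https://brave.com", "snippet": "A private browser"}]')
```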
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(query: str) → str[source]¶
Query the Brave search engine and return the results as a JSON string.
Parameters
query – The query to search for.
Returns: The results as a JSON string.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
langchain.utilities.spark_sql.SparkSQL¶
class langchain.utilities.spark_sql.SparkSQL(spark_session: Optional[SparkSession] = None, catalog: Optional[str] = None, schema: Optional[str] = None, ignore_tables: Optional[List[str]] = None, include_tables: Optional[List[str]] = None, sample_rows_in_table_info: int = 3)[source]¶
SparkSQL is a utility class for interacting with Spark SQL.
Initialize a SparkSQL object.
Parameters
spark_session – A SparkSession object.
If not provided, one will be created.
catalog – The catalog to use.
If not provided, the default catalog will be used.
schema – The schema to use.
If not provided, the default schema will be used.
ignore_tables – A list of tables to ignore.
If not provided, all tables will be used.
include_tables – A list of tables to include.
If not provided, all tables will be used.
sample_rows_in_table_info – The number of rows to include in the table info.
Defaults to 3.
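The interaction of ignore_tables and include_tables can be sketched as a simple filter (assumed semantics, matching the parameter descriptions above: an explicit include list wins, otherwise the ignore list is subtracted):

```python
from typing import Iterable, List, Optional

def usable_table_names(
    all_tables: Iterable[str],
    include_tables: Optional[List[str]] = None,
    ignore_tables: Optional[List[str]] = None,
) -> List[str]:
    # Sketch of the filtering described above; the real SparkSQL class
    # additionally resolves table names against the active catalog/schema.
    if include_tables:
        return [t for t in all_tables if t in include_tables]
    if ignore_tables:
        return [t for t in all_tables if t not in ignore_tables]
    return list(all_tables)

names = usable_table_names(["users", "orders", "tmp"], ignore_tables=["tmp"])
```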
Methods
__init__([spark_session, catalog, schema, ...])
Initialize a SparkSQL object.
from_uri(database_uri[, engine_args])
Create a remote Spark session via Spark Connect.
get_table_info([table_names])
get_table_info_no_throw([table_names])
Get information about specified tables.
get_usable_table_names()
Get names of tables available.
run(command[, fetch])
run_no_throw(command[, fetch])
Execute a SQL command and return a string representing the results.
__init__(spark_session: Optional[SparkSession] = None, catalog: Optional[str] = None, schema: Optional[str] = None, ignore_tables: Optional[List[str]] = None, include_tables: Optional[List[str]] = None, sample_rows_in_table_info: int = 3)[source]¶
Initialize a SparkSQL object.
Parameters
spark_session – A SparkSession object.
If not provided, one will be created.
catalog – The catalog to use.
If not provided, the default catalog will be used.
schema – The schema to use.
If not provided, the default schema will be used.
ignore_tables – A list of tables to ignore.
If not provided, all tables will be used.
include_tables – A list of tables to include.
If not provided, all tables will be used.
sample_rows_in_table_info – The number of rows to include in the table info.
Defaults to 3.
classmethod from_uri(database_uri: str, engine_args: Optional[dict] = None, **kwargs: Any) → SparkSQL[source]¶
Create a remote Spark session via Spark Connect.
For example: SparkSQL.from_uri("sc://localhost:15002")
get_table_info(table_names: Optional[List[str]] = None) → str[source]¶
get_table_info_no_throw(table_names: Optional[List[str]] = None) → str[source]¶
Get information about specified tables.
Follows best practices as specified in: Rajkumar et al, 2022
(https://arxiv.org/abs/2204.00498)
If sample_rows_in_table_info, the specified number of sample rows will be
appended to each table description. This can increase performance as
demonstrated in the paper.
get_usable_table_names() → Iterable[str][source]¶
Get names of tables available.
run(command: str, fetch: str = 'all') → str[source]¶
run_no_throw(command: str, fetch: str = 'all') → str[source]¶
Execute a SQL command and return a string representing the results.
If the statement returns rows, a string of the results is returned.
If the statement returns no rows, an empty string is returned.
If the statement throws an error, the error message is returned.
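That three-way contract — rows stringified, empty string for row-less statements, error text on failure — can be sketched with the standard library's sqlite3 standing in for Spark (an analogy only, not the SparkSQL implementation):

```python
import sqlite3

def run_no_throw(conn: sqlite3.Connection, command: str) -> str:
    # Same contract as described for SparkSQL.run_no_throw, sketched against
    # sqlite3: rows as a string, "" for row-less statements, error text on failure.
    try:
        rows = conn.execute(command).fetchall()
        return str(rows) if rows else ""
    except sqlite3.Error as exc:
        return f"Error: {exc}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1), (2)")
print(run_no_throw(conn, "SELECT x FROM t"))     # rows as a string
print(run_no_throw(conn, "SELECT x FROM nope"))  # error message, no exception raised
```

Agents built on such a wrapper rely on this no-throw behavior: a malformed SQL statement comes back as an observation string the model can react to, rather than crashing the chain.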
Examples using SparkSQL¶
Spark SQL
langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper¶
class langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper[source]¶
Bases: BaseModel
Wrapper for Metaphor Search API.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param k: int = 10¶
param metaphor_api_key: str [Required]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
results(query: str, num_results: int, include_domains: Optional[List[str]] = None, exclude_domains: Optional[List[str]] = None, start_crawl_date: Optional[str] = None, end_crawl_date: Optional[str] = None, start_published_date: Optional[str] = None, end_published_date: Optional[str] = None, use_autoprompt: Optional[bool] = None) → List[Dict][source]¶
Run query through Metaphor Search and return metadata.
Parameters
query – The query to search for.
num_results – The number of results to return.
include_domains – A list of domains to include in the search. Only one of include_domains and exclude_domains should be defined.
exclude_domains – A list of domains to exclude from the search. Only one of include_domains and exclude_domains should be defined.
start_crawl_date – If specified, only pages we crawled after start_crawl_date will be returned.
end_crawl_date – If specified, only pages we crawled before end_crawl_date will be returned.
start_published_date – If specified, only pages published after start_published_date will be returned.
end_published_date – If specified, only pages published before end_published_date will be returned.
use_autoprompt – If true, we turn your query into a more Metaphor-friendly query. Adds latency.
Returns
title - The title of the page
url - The URL of the page
author - Author of the content, if applicable. Otherwise, None.
published_date - Estimated date published, in YYYY-MM-DD format. Otherwise, None.
Return type
A list of dictionaries with the following keys
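Shaping raw API hits into dictionaries with exactly those keys can be sketched as below. The raw-response field names (notably publishedDate) are assumptions about the API payload, not documented here:

```python
from typing import Dict, List

def clean_results(raw_hits: List[Dict]) -> List[Dict]:
    # Shape raw hits into the documented keys; a missing author or
    # published_date falls back to None, as described above.
    return [
        {
            "title": hit.get("title"),
            "url": hit.get("url"),
            "author": hit.get("author"),
            "published_date": hit.get("publishedDate"),
        }
        for hit in raw_hits
    ]

cleaned = clean_results([{"title": "A post", "url": "https://example.com", "publishedDate": "2023-05-01"}])
```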
async results_async(query: str, num_results: int, include_domains: Optional[List[str]] = None, exclude_domains: Optional[List[str]] = None, start_crawl_date: Optional[str] = None, end_crawl_date: Optional[str] = None, start_published_date: Optional[str] = None, end_published_date: Optional[str] = None, use_autoprompt: Optional[bool] = None) → List[Dict][source]¶
Get results from the Metaphor Search API asynchronously.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using MetaphorSearchAPIWrapper¶
Metaphor Search
langchain.utilities.powerbi.json_to_md¶
langchain.utilities.powerbi.json_to_md(json_contents: List[Dict[str, Union[str, float, int]]], table_name: Optional[str] = None) → str[source]¶
Converts a JSON object to a markdown table.
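A minimal sketch of such a conversion, for orientation only — the exact markdown layout produced by the real json_to_md may differ:

```python
from typing import Dict, List, Optional, Union

def json_to_md(
    json_contents: List[Dict[str, Union[str, float, int]]],
    table_name: Optional[str] = None,
) -> str:
    # Render a list of flat JSON rows as a markdown table, with an
    # optional heading line when a table name is given.
    if not json_contents:
        return ""
    headers = list(json_contents[0])
    lines = [f"## {table_name}"] if table_name else []
    lines.append("| " + " | ".join(headers) + " |")
    lines.append("|" + "|".join("---" for _ in headers) + "|")
    for row in json_contents:
        lines.append("| " + " | ".join(str(row[h]) for h in headers) + " |")
    return "\n".join(lines)

md = json_to_md([{"city": "Oslo", "pop": 709000}], table_name="cities")
```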
langchain.utilities.google_serper.GoogleSerperAPIWrapper¶
class langchain.utilities.google_serper.GoogleSerperAPIWrapper[source]¶
Bases: BaseModel
Wrapper around the Serper.dev Google Search API.
You can create a free API key at https://serper.dev.
To use, you should have the environment variable SERPER_API_KEY
set with your API key, or pass serper_api_key as a named parameter
to the constructor.
Example
from langchain.utilities import GoogleSerperAPIWrapper
google_serper = GoogleSerperAPIWrapper()
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param aiosession: Optional[aiohttp.client.ClientSession] = None¶
param gl: str = 'us'¶
param hl: str = 'en'¶
param k: int = 10¶
param serper_api_key: Optional[str] = None¶
param tbs: Optional[str] = None¶
param type: Literal['news', 'search', 'places', 'images'] = 'search'¶
async aresults(query: str, **kwargs: Any) → Dict[source]¶
Run query through GoogleSearch.
async arun(query: str, **kwargs: Any) → str[source]¶
Run query through GoogleSearch and parse result async.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
results(query: str, **kwargs: Any) → Dict[source]¶
Run query through GoogleSearch.
run(query: str, **kwargs: Any) → str[source]¶
Run query through GoogleSearch and parse result.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using GoogleSerperAPIWrapper¶
Google Serper
Retrieve as you generate with FLARE
langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper¶
class langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper[source]¶
Bases: BaseModel
Wrapper for Wolfram Alpha.
Docs for using:
Go to Wolfram Alpha and sign up for a developer account
Create an app and get your APP ID
Save your APP ID in the WOLFRAM_ALPHA_APPID environment variable
pip install wolframalpha
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param wolfram_alpha_appid: Optional[str] = None¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(query: str) → str[source]¶
Run query through WolframAlpha and parse result.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using WolframAlphaAPIWrapper¶
Wolfram Alpha
langchain.utilities.python.PythonREPL¶
class langchain.utilities.python.PythonREPL[source]¶
Bases: BaseModel
Simulates a standalone Python REPL.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param globals: Optional[Dict] [Optional] (alias '_globals')¶
param locals: Optional[Dict] [Optional] (alias '_locals')¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
run(command: str, timeout: Optional[int] = None) → str[source]¶
Run command with own globals/locals and returns anything printed.
Timeout after the specified number of seconds.
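The core of run — execute the command with the instance's globals/locals and capture what it prints — can be sketched as exec plus stdout redirection (the timeout is omitted here; the real class uses a worker process so it can enforce one):

```python
import contextlib
import io
from typing import Dict, Optional

def repl_run(command: str, _globals: Optional[Dict] = None, _locals: Optional[Dict] = None) -> str:
    # Execute the command against the given namespaces and return whatever
    # it printed to stdout.
    env = _globals if _globals is not None else {}
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        if _locals is None:
            exec(command, env)
        else:
            exec(command, env, _locals)
    return buf.getvalue()

out = repl_run("x = 2 + 3\nprint(x)")  # "5\n"
```

Note the security implication spelled out elsewhere in these docs: exec of model-generated code is not sandboxed, so a wrapper like this must never run untrusted input with elevated permissions.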
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
classmethod worker(command: str, globals: Optional[Dict], locals: Optional[Dict], queue: Queue) → None[source]¶
Examples using PythonREPL¶
Dynamodb Chat Message History
Python
Code writing
langchain.utilities.tensorflow_datasets.TensorflowDatasets¶
class langchain.utilities.tensorflow_datasets.TensorflowDatasets[source]¶
Bases: BaseModel
Access to TensorFlow Datasets.
The current implementation works only with datasets that fit in memory.
TensorFlow Datasets is a collection of datasets ready to use, with TensorFlow
or other Python ML frameworks, such as Jax. All datasets are exposed
as tf.data.Datasets.
To get started see the Guide: https://www.tensorflow.org/datasets/overview and
the list of datasets: https://www.tensorflow.org/datasets/catalog/overview#all_datasets
You have to provide the sample_to_document_function: a function that converts a sample from the dataset-specific format to a Document.
dataset_name¶
the name of the dataset to load
split_name¶
the name of the split to load. Defaults to “train”.
load_max_docs¶
a limit to the number of loaded documents. Defaults to 100.
sample_to_document_function¶
a function that converts a dataset sample to a Document
Example
from langchain.utilities import TensorflowDatasets
def mlqaen_example_to_document(example: dict) -> Document:
    return Document(
        page_content=decode_to_str(example["context"]),
        metadata={
            "id": decode_to_str(example["id"]),
            "title": decode_to_str(example["title"]),
            "question": decode_to_str(example["question"]),
            "answer": decode_to_str(example["answers"]["text"][0]),
        },
    )

tsds_client = TensorflowDatasets(
    dataset_name="mlqa/en",
    split_name="train",
    load_max_docs=MAX_DOCS,
    sample_to_document_function=mlqaen_example_to_document,
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param dataset_name: str = ''¶
param load_max_docs: int = 100¶
param sample_to_document_function: Optional[Callable[[Dict], langchain.schema.document.Document]] = None¶
param split_name: str = 'train'¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document][source]¶
Download a selected dataset lazily.
Returns: an iterator of Documents.
load() → List[Document][source]¶
Download a selected dataset.
Returns: a list of Documents.
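The relationship between lazy_load and load — stream samples, cap the stream at load_max_docs, convert each sample with sample_to_document_function — can be sketched as (function names reused for illustration; this is not the class implementation):

```python
from itertools import islice
from typing import Callable, Iterable, Iterator

def lazy_load(samples: Iterable[dict], convert: Callable[[dict], object],
              load_max_docs: int = 100) -> Iterator[object]:
    # Yield converted documents one at a time; islice ensures no more than
    # load_max_docs samples are ever pulled from the stream.
    for sample in islice(samples, load_max_docs):
        yield convert(sample)

def load(samples: Iterable[dict], convert: Callable[[dict], object],
         load_max_docs: int = 100) -> list:
    # load() is just the eager form of lazy_load().
    return list(lazy_load(samples, convert, load_max_docs))

docs = load(({"text": f"doc {i}"} for i in range(1000)), lambda s: s["text"], load_max_docs=3)
```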
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
langchain_experimental.pal_chain.base.PALChain¶
class langchain_experimental.pal_chain.base.PALChain[source]¶
Bases: Chain
Implements Program-Aided Language Models (PAL).
This class implements the Program-Aided Language Models (PAL) for generating code
solutions. PAL is a technique described in the paper “Program-Aided Language Models”
(https://arxiv.org/pdf/2211.10435.pdf).
Security note: This class implements an AI technique that generates and evaluates Python code, which can be dangerous and requires a specially sandboxed
environment to be safely used. While this class implements some basic guardrails
by limiting available locals/globals and by parsing and inspecting
the generated Python AST using PALValidation, those guardrails will not
deter sophisticated attackers and are not a replacement for a proper sandbox.
Do not use this class on untrusted inputs, with elevated permissions,
or without consulting your security team about proper sandboxing!
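The guardrail and answer-extraction flow described above can be illustrated with a minimal, hypothetical sketch (this is not PALChain's actual implementation): execute the generated code with a restricted set of globals, then evaluate the default get_answer_expr ("print(solution())") and capture its stdout as the answer.

```python
import contextlib
import io

def run_generated_code(code: str, get_answer_expr: str = "print(solution())") -> str:
    # Restrict the available builtins; this is NOT a sandbox — a real
    # deployment needs proper isolation, as the security note explains.
    scope: dict = {"__builtins__": {"print": print, "range": range, "len": len}}
    exec(code, scope)  # define solution() in the restricted scope
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(get_answer_expr, scope)  # e.g. print(solution())
    return buf.getvalue().strip()

generated = """
def solution():
    apples = 3 * 4
    return apples
"""
print(run_generated_code(generated))  # -> 12
```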
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param code_validations: PALValidation [Optional]¶
Validations to perform on the generated code.
param get_answer_expr: str = 'print(solution())'¶
Expression to use to get the answer from the generated code.
param llm_chain: LLMChain [Required]¶ | lang/api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html |
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param python_globals: Optional[Dict[str, Any]] = None¶
Python globals to use when executing the generated code.
param python_locals: Optional[Dict[str, Any]] = None¶
Python locals to use when executing the generated code.
param return_intermediate_steps: bool = False¶
Whether to return the generated code as an intermediate step.
param stop: str = '\n\n'¶
Stop token to use when generating code.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param timeout: Optional[int] = 10¶
Timeout in seconds for the generated code to execute.
param verbose: bool [Optional]¶
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the global verbose value, | lang/api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html |
accessible via langchain.globals.get_verbose().
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ | lang/api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html |
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments. | lang/api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html |
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state. | lang/api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html |
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ | lang/api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html |
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_colored_object_prompt(llm: BaseLanguageModel, **kwargs: Any) → PALChain[source]¶
Load PAL from colored object prompt.
Parameters
llm (BaseLanguageModel) – The language model to use for generating code.
Returns
An instance of PALChain.
Return type
PALChain | lang/api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html |
classmethod from_math_prompt(llm: BaseLanguageModel, **kwargs: Any) → PALChain[source]¶
Load PAL from math prompt.
Parameters
llm (BaseLanguageModel) – The language model to use for generating code.
Returns
An instance of PALChain.
Return type
PALChain
classmethod from_orm(obj: Any) → Model¶
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows getting an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows getting an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output. | lang/api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html |
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input. | lang/api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html |
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this | lang/api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html |
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ | lang/api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html |
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
classmethod validate_code(code: str, code_validations: PALValidation) → None[source]¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures. | lang/api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html |
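The fallback behavior can be sketched in plain Python (an illustrative analog, not the library implementation): try the original callable, then each fallback in order, and re-raise the first recorded error if every attempt fails. Whether the real implementation surfaces the first or last error is an implementation detail; this sketch keeps the first.

```python
def invoke_with_fallbacks(primary, fallbacks, value, exceptions_to_handle=(Exception,)):
    # Try the primary callable first, then each fallback in order.
    first_error = None
    for runnable in (primary, *fallbacks):
        try:
            return runnable(value)
        except exceptions_to_handle as err:
            first_error = first_error or err
    raise first_error  # every attempt failed

def flaky(x):
    raise RuntimeError("primary unavailable")

print(invoke_with_fallbacks(flaky, [lambda x: x * 2], 21))  # -> 42
```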
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
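The retry policy can be sketched in plain Python (a conceptual analog of the parameters above, not the library code): retry on the given exception types, waiting exponentially — with optional jitter — between attempts, up to stop_after_attempt tries.

```python
import random
import time

def invoke_with_retry(fn, value, *, retry_if_exception_type=(Exception,),
                      wait_exponential_jitter=True, stop_after_attempt=3):
    for attempt in range(1, stop_after_attempt + 1):
        try:
            return fn(value)
        except retry_if_exception_type:
            if attempt == stop_after_attempt:
                raise  # out of attempts, surface the error
            wait = 2 ** attempt
            if wait_exponential_jitter:
                wait += random.uniform(0, 1)
            time.sleep(min(wait, 0.01))  # capped here so the sketch runs fast

attempts = []
def unstable(x):
    # Fails twice, then succeeds — exercising the retry loop.
    attempts.append(x)
    if len(attempts) < 3:
        raise ValueError("transient failure")
    return x + 1

result = invoke_with_retry(unstable, 41)
print(result)  # -> 42
```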
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain.schema.runnable.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain.schema.runnable.utils.Output]¶
The type of output this runnable produces specified as a type annotation. | lang/api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html |
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model. | lang/api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html |
langchain_experimental.pal_chain.base.PALValidation¶
class langchain_experimental.pal_chain.base.PALValidation(solution_expression_name: Optional[str] = None, solution_expression_type: Optional[type] = None, allow_imports: bool = False, allow_command_exec: bool = False)[source]¶
Initialize a PALValidation instance.
Parameters
solution_expression_name (str) – Name of the expected solution expression.
If passed, solution_expression_type must be passed as well.
solution_expression_type (type) – AST type of the expected solution
expression. If passed, solution_expression_name must be passed as well.
Must be one of PALValidation.SOLUTION_EXPRESSION_TYPE_FUNCTION,
PALValidation.SOLUTION_EXPRESSION_TYPE_VARIABLE.
allow_imports (bool) – Allow import statements.
allow_command_exec (bool) – Allow using known command execution functions.
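The kind of AST inspection these flags control can be sketched with the standard ast module. This is illustrative only: the FORBIDDEN_CALLS set below is an assumption for the example, not PALValidation's actual deny list.

```python
import ast

# Hypothetical deny list for the sketch — not PALValidation's real list.
FORBIDDEN_CALLS = {"system", "exec", "eval", "popen", "subprocess"}

def validate_generated_code(code: str, allow_imports: bool = False) -> None:
    # Walk the AST and reject import statements and known-dangerous calls.
    tree = ast.parse(code)
    for node in ast.walk(tree):
        if not allow_imports and isinstance(node, (ast.Import, ast.ImportFrom)):
            raise ValueError("import statements are not allowed")
        if isinstance(node, ast.Call):
            # Handles both bare names (eval(...)) and attributes (os.system(...)).
            name = getattr(node.func, "id", getattr(node.func, "attr", ""))
            if name in FORBIDDEN_CALLS:
                raise ValueError(f"forbidden call: {name}")

validate_generated_code("def solution():\n    return 1 + 1")  # passes silently
```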
Methods
__init__([solution_expression_name, ...])
Initialize a PALValidation instance.
__init__(solution_expression_name: Optional[str] = None, solution_expression_type: Optional[type] = None, allow_imports: bool = False, allow_command_exec: bool = False)[source]¶
Initialize a PALValidation instance.
Parameters
solution_expression_name (str) – Name of the expected solution expression.
If passed, solution_expression_type must be passed as well.
solution_expression_type (type) – AST type of the expected solution
expression. If passed, solution_expression_name must be passed as well.
Must be one of PALValidation.SOLUTION_EXPRESSION_TYPE_FUNCTION,
PALValidation.SOLUTION_EXPRESSION_TYPE_VARIABLE.
allow_imports (bool) – Allow import statements.
allow_command_exec (bool) – Allow using known command execution functions. | lang/api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALValidation.html |
langchain.document_transformers.embeddings_redundant_filter.get_stateful_documents¶
langchain.document_transformers.embeddings_redundant_filter.get_stateful_documents(documents: Sequence[Document]) → Sequence[_DocumentWithState][source]¶
Convert a list of documents to a list of documents with state.
Parameters
documents – The documents to convert.
Returns
A list of documents with state. | lang/api.python.langchain.com/en/latest/document_transformers/langchain.document_transformers.embeddings_redundant_filter.get_stateful_documents.html |
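_DocumentWithState is a private class; the idea can be sketched as pairing each document's content with a mutable state dict and a stable content hash. The names below are illustrative assumptions, not the actual implementation.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class StatefulDocument:
    # Hypothetical analog of _DocumentWithState: content plus per-document state.
    page_content: str
    state: dict = field(default_factory=dict)

    @property
    def content_hash(self) -> str:
        # Stable hash lets downstream transformers cache work per document.
        return hashlib.sha256(self.page_content.encode()).hexdigest()

def to_stateful(contents):
    return [StatefulDocument(page_content=c) for c in contents]

docs = to_stateful(["alpha", "beta"])
docs[0].state["embedded"] = True  # state mutates without touching content
print(docs[0].content_hash[:8])
```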
langchain.document_transformers.html2text.Html2TextTransformer¶
class langchain.document_transformers.html2text.Html2TextTransformer(ignore_links: bool = True, ignore_images: bool = True)[source]¶
Convert HTML documents to plain, easy-to-read text using the html2text package.
Parameters
ignore_links – Whether links should be ignored; defaults to True.
ignore_images – Whether images should be ignored; defaults to True.
Example
Methods
__init__([ignore_links, ignore_images])
atransform_documents(documents, **kwargs)
Asynchronously transform a list of documents.
transform_documents(documents, **kwargs)
Transform a list of documents.
__init__(ignore_links: bool = True, ignore_images: bool = True) → None[source]¶
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Asynchronously transform a list of documents.
Parameters
documents – A sequence of Documents to be transformed.
Returns
A list of transformed Documents.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Transform a list of documents.
Parameters
documents – A sequence of Documents to be transformed.
Returns
A list of transformed Documents.
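html2text is a third-party package; as a rough standard-library analog of what the transformation does (extract readable text, skipping markup and script/style content), one might write:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    # Illustrative sketch only — not the html2text algorithm.
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = max(0, self._skip - 1)

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)

print(html_to_text("<p>Hello <b>world</b></p><script>var x=1;</script>"))  # -> Hello world
```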
Examples using Html2TextTransformer¶
html2text
Async Chromium
Set env var OPENAI_API_KEY or load from a .env file: | lang/api.python.langchain.com/en/latest/document_transformers/langchain.document_transformers.html2text.Html2TextTransformer.html |
langchain.document_transformers.embeddings_redundant_filter.EmbeddingsRedundantFilter¶
class langchain.document_transformers.embeddings_redundant_filter.EmbeddingsRedundantFilter[source]¶
Bases: BaseDocumentTransformer, BaseModel
Filter that drops redundant documents by comparing their embeddings.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param embeddings: langchain.schema.embeddings.Embeddings [Required]¶
Embeddings to use for embedding document contents.
param similarity_fn: Callable = <function cosine_similarity>¶
Similarity function for comparing documents. Function expected to take as input
two matrices (List[List[float]]) and return a matrix of scores where higher values
indicate greater similarity.
param similarity_threshold: float = 0.95¶
Threshold for determining when two documents are similar enough
to be considered redundant.
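The filtering idea can be sketched in plain Python (a conceptual analog: the real filter uses the configured similarity_fn over embedding matrices): keep a document only if its embedding is not cosine-similar, above the threshold, to any embedding already kept.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def filter_redundant(embeddings, threshold=0.95):
    # Return indices of embeddings to keep; later near-duplicates are dropped.
    kept = []
    for i, emb in enumerate(embeddings):
        if all(cosine_similarity(emb, embeddings[j]) < threshold for j in kept):
            kept.append(i)
    return kept

vectors = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]]
print(filter_redundant(vectors))  # -> [0, 2]
```

The second vector is nearly parallel to the first (similarity above 0.95), so it is treated as redundant and dropped.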
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Asynchronously transform a list of documents.
Parameters
documents – A sequence of Documents to be transformed.
Returns
A list of transformed Documents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ | lang/api.python.langchain.com/en/latest/document_transformers/langchain.document_transformers.embeddings_redundant_filter.EmbeddingsRedundantFilter.html |
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Filter down documents.
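The redundancy filtering that transform_documents performs can be sketched in plain Python (an illustrative sketch only, not the langchain implementation; cosine similarity stands in for the configurable similarity_fn, and 0.95 mirrors the default similarity_threshold, while filter_redundant is a hypothetical helper name):

```python
import math

def cosine_similarity_matrix(x, y):
    # Compare every vector in x against every vector in y.
    def cos(a, b):
        dot = sum(p * q for p, q in zip(a, b))
        na = math.sqrt(sum(p * p for p in a))
        nb = math.sqrt(sum(q * q for q in b))
        return dot / (na * nb)
    return [[cos(a, b) for b in y] for a in x]

def filter_redundant(embeddings, similarity_threshold=0.95):
    """Return indices of embeddings to keep, dropping near-duplicates."""
    scores = cosine_similarity_matrix(embeddings, embeddings)
    kept = []
    for i in range(len(embeddings)):
        # Keep document i only if it is not too similar to an already-kept one.
        if all(scores[i][j] < similarity_threshold for j in kept):
            kept.append(i)
    return kept

embeddings = [
    [1.0, 0.0],    # doc 0
    [0.99, 0.05],  # doc 1: nearly identical to doc 0 -> redundant
    [0.0, 1.0],    # doc 2: clearly different
]
print(filter_redundant(embeddings))  # [0, 2]
```

Documents whose similarity to an already-kept document meets or exceeds the threshold are treated as redundant and dropped.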
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using EmbeddingsRedundantFilter¶
LOTR (Merger Retriever)
langchain.document_transformers.openai_functions.OpenAIMetadataTagger¶
class langchain.document_transformers.openai_functions.OpenAIMetadataTagger[source]¶
Bases: BaseDocumentTransformer, BaseModel
Extract metadata tags from document contents using OpenAI functions.
Example:
from langchain.chains import create_tagging_chain
from langchain.chat_models import ChatOpenAI
from langchain.document_transformers import OpenAIMetadataTagger
from langchain.schema import Document
schema = {
"properties": {
"movie_title": { "type": "string" },
"critic": { "type": "string" },
"tone": {
"type": "string",
"enum": ["positive", "negative"]
},
"rating": {
"type": "integer",
"description": "The number of stars the critic rated the movie"
}
},
"required": ["movie_title", "critic", "tone"]
}
# Must be an OpenAI model that supports functions
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
tagging_chain = create_tagging_chain(schema, llm)
document_transformer = OpenAIMetadataTagger(tagging_chain=tagging_chain)
original_documents = [
    Document(page_content="Review of The Bee Movie\nBy Roger Ebert\nThis is the greatest movie ever made. 4 out of 5 stars."),
    Document(page_content="Review of The Godfather\nBy Anonymous\nThis movie was super boring. 1 out of 5 stars.", metadata={"reliable": False}),
]
enhanced_documents = document_transformer.transform_documents(original_documents)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param tagging_chain: langchain.chains.llm.LLMChain [Required]¶
The chain used to extract metadata from each document.
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Asynchronously transform a list of documents.
Parameters
documents – A sequence of Documents to be transformed.
Returns
A list of transformed Documents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Automatically extract and populate metadata
for each document according to the provided schema.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
langchain.document_transformers.beautiful_soup_transformer.BeautifulSoupTransformer¶
class langchain.document_transformers.beautiful_soup_transformer.BeautifulSoupTransformer[source]¶
Transform HTML content by extracting specific tags and removing unwanted ones.
Example
Initialize the transformer.
This checks if the BeautifulSoup4 package is installed.
If not, it raises an ImportError.
Methods
__init__()
Initialize the transformer.
atransform_documents(documents, **kwargs)
Asynchronously transform a list of documents.
extract_tags(html_content, tags)
Extract specific tags from a given HTML content.
remove_unnecessary_lines(content)
Clean up the content by removing unnecessary lines.
remove_unwanted_tags(html_content, unwanted_tags)
Remove unwanted tags from a given HTML content.
transform_documents(documents[, ...])
Transform a list of Document objects by cleaning their HTML content.
__init__() → None[source]¶
Initialize the transformer.
This checks if the BeautifulSoup4 package is installed.
If not, it raises an ImportError.
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Asynchronously transform a list of documents.
Parameters
documents – A sequence of Documents to be transformed.
Returns
A list of transformed Documents.
static extract_tags(html_content: str, tags: List[str]) → str[source]¶
Extract specific tags from a given HTML content.
Parameters
html_content – The original HTML content string.
tags – A list of tags to be extracted from the HTML.
Returns
A string combining the content of the extracted tags.
static remove_unnecessary_lines(content: str) → str[source]¶
Clean up the content by removing unnecessary lines.
Parameters
content – A string, which may contain unnecessary lines or spaces.
Returns
A cleaned string with unnecessary lines removed.
static remove_unwanted_tags(html_content: str, unwanted_tags: List[str]) → str[source]¶
Remove unwanted tags from a given HTML content.
Parameters
html_content – The original HTML content string.
unwanted_tags – A list of tags to be removed from the HTML.
Returns
A cleaned HTML string with unwanted tags removed.
transform_documents(documents: Sequence[Document], unwanted_tags: List[str] = ['script', 'style'], tags_to_extract: List[str] = ['p', 'li', 'div', 'a'], remove_lines: bool = True, **kwargs: Any) → Sequence[Document][source]¶
Transform a list of Document objects by cleaning their HTML content.
Parameters
documents – A sequence of Document objects containing HTML content.
unwanted_tags – A list of tags to be removed from the HTML.
tags_to_extract – A list of tags whose content will be extracted.
remove_lines – If set to True, unnecessary lines will be removed from the HTML content.
Returns
A sequence of Document objects with transformed content.
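The tag extraction and unwanted-tag removal described above can be approximated with a small stdlib-only sketch (an illustration, not the library code: the real class uses BeautifulSoup rather than html.parser, and TagTextExtractor / extract_text are hypothetical helper names):

```python
from html.parser import HTMLParser

class TagTextExtractor(HTMLParser):
    """Collect the text inside a chosen set of tags, skipping unwanted ones."""
    def __init__(self, tags_to_extract, unwanted_tags):
        super().__init__()
        self.tags_to_extract = set(tags_to_extract)
        self.unwanted_tags = set(unwanted_tags)
        self.depth = 0       # nesting level inside tags we want
        self.skip_depth = 0  # nesting level inside tags we must drop
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.unwanted_tags:
            self.skip_depth += 1
        elif tag in self.tags_to_extract:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.unwanted_tags and self.skip_depth:
            self.skip_depth -= 1
        elif tag in self.tags_to_extract and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        # Keep text only when inside a wanted tag and outside unwanted ones.
        if self.depth and not self.skip_depth and data.strip():
            self.chunks.append(data.strip())

def extract_text(html, tags_to_extract=("p", "li", "div", "a"),
                 unwanted_tags=("script", "style")):
    parser = TagTextExtractor(tags_to_extract, unwanted_tags)
    parser.feed(html)
    return " ".join(parser.chunks)

html = "<div><p>Hello</p><script>var x = 1;</script><p>world</p></div>"
print(extract_text(html))  # Hello world
```

The default tag lists mirror the transform_documents defaults: script/style content is dropped, and only text inside p, li, div, and a tags is kept.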
Examples using BeautifulSoupTransformer¶
Beautiful Soup
Set env var OPENAI_API_KEY or load from a .env file:
langchain.document_transformers.embeddings_redundant_filter.EmbeddingsClusteringFilter¶
class langchain.document_transformers.embeddings_redundant_filter.EmbeddingsClusteringFilter[source]¶
Bases: BaseDocumentTransformer, BaseModel
Perform K-means clustering on document vectors.
Returns an arbitrary number of documents closest to center.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param embeddings: langchain.schema.embeddings.Embeddings [Required]¶
Embeddings to use for embedding document contents.
param num_closest: int = 1¶
The number of closest vectors to return for each cluster center.
param num_clusters: int = 5¶
Number of clusters. Groups of documents with similar meaning.
param random_state: int = 42¶
Controls the random number generator used to initialize the cluster centroids.
If you set the random_state parameter to None, the KMeans algorithm will use a
random number generator that is seeded with the current time. This means
that the results of the KMeans algorithm will be different each time you
run it.
param remove_duplicates: bool = False¶
By default, duplicated results are skipped and replaced by the next closest
vector in the cluster. If remove_duplicates is true, no replacement is done;
this could dramatically reduce results when there is a lot of overlap between
clusters.
param sorted: bool = False¶
By default, results are re-ordered by “grouping” them by cluster; if sorted is
true, results are ordered by their original position from the retriever.
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Asynchronously transform a list of documents.
Parameters
documents – A sequence of Documents to be transformed.
Returns
A list of transformed Documents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Filter down documents.
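The cluster-and-pick-closest behavior can be sketched with a tiny, deterministic k-means in plain Python (an illustrative sketch, not the library code: the real class embeds the documents first and relies on a proper KMeans implementation such as scikit-learn's, seeded via random_state; kmeans_closest is a hypothetical helper):

```python
import math

def kmeans_closest(points, num_clusters=2, num_closest=1, iters=10):
    """Tiny k-means (deterministic init: first k points as centroids),
    then return the indices of the num_closest points nearest each center."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

    centers = [list(p) for p in points[:num_clusters]]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        groups = [[] for _ in range(num_clusters)]
        for i, p in enumerate(points):
            best = min(range(num_clusters), key=lambda c: dist(p, centers[c]))
            groups[best].append(i)
        # Update step: each center moves to the mean of its members.
        for c, members in enumerate(groups):
            if members:
                dim = len(points[0])
                centers[c] = [sum(points[i][d] for i in members) / len(members)
                              for d in range(dim)]
    # Pick the points closest to each final center.
    selected = []
    for c, members in enumerate(groups):
        members.sort(key=lambda i: dist(points[i], centers[c]))
        selected.extend(members[:num_closest])
    return selected

# Two obvious groups: around (0, 0) and around (10, 10).
points = [[0, 0], [10, 10], [0.5, 0.1], [9.8, 10.2], [0.1, 0.4]]
print(kmeans_closest(points, num_clusters=2, num_closest=1))
# one representative index per cluster
```

As with the real filter, raising num_closest returns more representatives per cluster, and num_clusters controls how many groups of similar documents are formed.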
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using EmbeddingsClusteringFilter¶
LOTR (Merger Retriever)
langchain.document_transformers.beautiful_soup_transformer.get_navigable_strings¶
langchain.document_transformers.beautiful_soup_transformer.get_navigable_strings(element: Any) → Iterator[str][source]¶
langchain.document_transformers.doctran_text_extract.DoctranPropertyExtractor¶
class langchain.document_transformers.doctran_text_extract.DoctranPropertyExtractor(properties: List[dict], openai_api_key: Optional[str] = None, openai_api_model: Optional[str] = None)[source]¶
Extract properties from text documents using doctran.
Parameters
properties – A list of the properties to extract.
openai_api_key – OpenAI API key. Can also be specified via environment variable
OPENAI_API_KEY.
Example
from langchain.document_transformers import DoctranPropertyExtractor
properties = [
{
"name": "category",
"description": "What type of email this is.",
"type": "string",
"enum": ["update", "action_item", "customer_feedback", "announcement", "other"],
"required": True,
},
{
"name": "mentions",
"description": "A list of all people mentioned in this email.",
"type": "array",
"items": {
"name": "full_name",
"description": "The full name of the person mentioned.",
"type": "string",
},
"required": True,
},
{
"name": "eli5",
"description": "Explain this email to me like I'm 5 years old.",
"type": "string",
"required": True,
},
]
# Pass in openai_api_key or set env var OPENAI_API_KEY
property_extractor = DoctranPropertyExtractor(properties)
transformed_document = await property_extractor.atransform_documents(documents)
Methods
__init__(properties[, openai_api_key, ...])
atransform_documents(documents, **kwargs)
Extracts properties from text documents using doctran.
transform_documents(documents, **kwargs)
Transform a list of documents.
__init__(properties: List[dict], openai_api_key: Optional[str] = None, openai_api_model: Optional[str] = None) → None[source]¶
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Extracts properties from text documents using doctran.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Transform a list of documents.
Parameters
documents – A sequence of Documents to be transformed.
Returns
A list of transformed Documents.
Examples using DoctranPropertyExtractor¶
Doctran Extract Properties
langchain.document_transformers.nuclia_text_transform.NucliaTextTransformer¶
class langchain.document_transformers.nuclia_text_transform.NucliaTextTransformer(nua: NucliaUnderstandingAPI)[source]¶
The Nuclia Understanding API splits text into paragraphs and sentences,
identifies entities, provides a summary of the text, and generates
embeddings for all sentences.
Methods
__init__(nua)
atransform_documents(documents, **kwargs)
Asynchronously transform a list of documents.
transform_documents(documents, **kwargs)
Transform a list of documents.
__init__(nua: NucliaUnderstandingAPI)[source]¶
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Asynchronously transform a list of documents.
Parameters
documents – A sequence of Documents to be transformed.
Returns
A list of transformed Documents.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Transform a list of documents.
Parameters
documents – A sequence of Documents to be transformed.
Returns
A list of transformed Documents.
Examples using NucliaTextTransformer¶
Nuclia Understanding API document transformer
langchain.document_transformers.long_context_reorder.LongContextReorder¶
class langchain.document_transformers.long_context_reorder.LongContextReorder[source]¶
Bases: BaseDocumentTransformer, BaseModel
Lost in the middle:
Performance degrades when models must access relevant information
in the middle of long contexts.
See: https://arxiv.org/abs/2307.03172
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Asynchronously transform a list of documents.
Parameters
documents – A sequence of Documents to be transformed.
Returns
A list of transformed Documents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Reorders documents.
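The “lost in the middle” reordering can be sketched in a few lines (assumed to mirror LongContextReorder's ordering, not copied from it; litm_reorder is a hypothetical stand-in that takes items sorted most-relevant-first):

```python
def litm_reorder(items):
    """Place the most relevant items at the beginning and end of the list,
    pushing the least relevant into the middle, where long-context models
    pay the least attention. Input is assumed sorted most-relevant-first."""
    items = list(reversed(items))  # least relevant first
    reordered = []
    for i, value in enumerate(items):
        if i % 2 == 1:
            reordered.append(value)       # alternate: one to the back...
        else:
            reordered.insert(0, value)    # ...one to the front
    return reordered

print(litm_reorder([1, 2, 3, 4, 5, 6, 7, 8, 9]))
# [1, 3, 5, 7, 9, 8, 6, 4, 2] -- items 1 and 2 (most relevant) sit at the edges
```

The key property is that the two most relevant items end up at the two ends of the context, with relevance decreasing toward the middle.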
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using LongContextReorder¶
LOTR (Merger Retriever)
Lost in the middle: The problem with long contexts
langchain.document_transformers.doctran_text_qa.DoctranQATransformer¶
class langchain.document_transformers.doctran_text_qa.DoctranQATransformer(openai_api_key: Optional[str] = None, openai_api_model: Optional[str] = None)[source]¶
Extract QA from text documents using doctran.
Parameters
openai_api_key – OpenAI API key. Can also be specified via environment variable
OPENAI_API_KEY.
Example
from langchain.document_transformers import DoctranQATransformer
# Pass in openai_api_key or set env var OPENAI_API_KEY
qa_transformer = DoctranQATransformer()
transformed_document = await qa_transformer.atransform_documents(documents)
Methods
__init__([openai_api_key, openai_api_model])
atransform_documents(documents, **kwargs)
Extracts QA from text documents using doctran.
transform_documents(documents, **kwargs)
Transform a list of documents.
__init__(openai_api_key: Optional[str] = None, openai_api_model: Optional[str] = None) → None[source]¶
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Extracts QA from text documents using doctran.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Transform a list of documents.
Parameters
documents – A sequence of Documents to be transformed.
Returns
A list of transformed Documents.
Examples using DoctranQATransformer¶
Doctran Interrogate Documents
langchain.document_transformers.doctran_text_translate.DoctranTextTranslator¶
class langchain.document_transformers.doctran_text_translate.DoctranTextTranslator(openai_api_key: Optional[str] = None, language: str = 'english', openai_api_model: Optional[str] = None)[source]¶
Translate text documents using doctran.
Parameters
openai_api_key – OpenAI API key. Can also be specified via environment variable
OPENAI_API_KEY.
language – The language to translate to.
Example
from langchain.document_transformers import DoctranTextTranslator
# Pass in openai_api_key or set env var OPENAI_API_KEY
qa_translator = DoctranTextTranslator(language="spanish")
translated_document = await qa_translator.atransform_documents(documents)
Methods
__init__([openai_api_key, language, ...])
atransform_documents(documents, **kwargs)
Translates text documents using doctran.
transform_documents(documents, **kwargs)
Transform a list of documents.
__init__(openai_api_key: Optional[str] = None, language: str = 'english', openai_api_model: Optional[str] = None) → None[source]¶
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Translates text documents using doctran.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Transform a list of documents.
Parameters
documents – A sequence of Documents to be transformed.
Returns
A list of transformed Documents.
Examples using DoctranTextTranslator¶
Doctran Translate Documents
langchain.document_transformers.openai_functions.create_metadata_tagger¶
langchain.document_transformers.openai_functions.create_metadata_tagger(metadata_schema: Union[Dict[str, Any], Type[BaseModel]], llm: BaseLanguageModel, prompt: Optional[ChatPromptTemplate] = None, *, tagging_chain_kwargs: Optional[Dict] = None) → OpenAIMetadataTagger[source]¶
Create a DocumentTransformer that uses an OpenAI function chain to automatically
tag documents with metadata based on their content and an input schema.
Args:
metadata_schema: Either a dictionary or pydantic.BaseModel class. If a dictionary is passed in, it's assumed to already be a valid JsonSchema.
For best results, pydantic.BaseModels should have docstrings describing what
the schema represents and descriptions for the parameters.
llm: Language model to use, assumed to support the OpenAI function-calling API. Defaults to "gpt-3.5-turbo-0613".
prompt: BasePromptTemplate to pass to the model.
Returns: An OpenAIMetadataTagger that will tag documents with metadata according to the given schema.
Example:
from langchain.chat_models import ChatOpenAI
from langchain.document_transformers import create_metadata_tagger
from langchain.schema import Document
schema = {
"properties": {
"movie_title": { "type": "string" },
"critic": { "type": "string" },
"tone": {
"type": "string",
"enum": ["positive", "negative"]
},
"rating": {
"type": "integer",
"description": "The number of stars the critic rated the movie"
}
},
"required": ["movie_title", "critic", "tone"]
}
# Must be an OpenAI model that supports functions
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
document_transformer = create_metadata_tagger(schema, llm)
original_documents = [
    Document(page_content="Review of The Bee Movie\nBy Roger Ebert\nThis is the greatest movie ever made. 4 out of 5 stars."),
    Document(page_content="Review of The Godfather\nBy Anonymous\nThis movie was super boring. 1 out of 5 stars.", metadata={"reliable": False}),
]
enhanced_documents = document_transformer.transform_documents(original_documents)
Examples using create_metadata_tagger¶
OpenAI Functions Metadata Tagger
langchain.document_transformers.google_translate.GoogleTranslateTransformer¶
class langchain.document_transformers.google_translate.GoogleTranslateTransformer(project_id: str, *, location: str = 'global', model_id: Optional[str] = None, glossary_id: Optional[str] = None, api_endpoint: Optional[str] = None)[source]¶
Translate text documents using Google Cloud Translation.
Parameters
project_id – Google Cloud Project ID.
location – (Optional) Translate model location.
model_id – (Optional) Translate model ID to use.
glossary_id – (Optional) Translate glossary ID to use.
api_endpoint – (Optional) Regional endpoint to use.
Methods
__init__(project_id, *[, location, ...])
param project_id
Google Cloud Project ID.
atransform_documents(documents, **kwargs)
Asynchronously transform a list of documents.
transform_documents(documents, **kwargs)
Translate text documents using Google Translate.
__init__(project_id: str, *, location: str = 'global', model_id: Optional[str] = None, glossary_id: Optional[str] = None, api_endpoint: Optional[str] = None) → None[source]¶
Parameters
project_id – Google Cloud Project ID.
location – (Optional) Translate model location.
model_id – (Optional) Translate model ID to use.
glossary_id – (Optional) Translate glossary ID to use.
api_endpoint – (Optional) Regional endpoint to use.
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]¶
Asynchronously transform a list of documents.
Parameters
documents – A sequence of Documents to be transformed.
Returns
A list of transformed Documents.
transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶ | lang/api.python.langchain.com/en/latest/document_transformers/langchain.document_transformers.google_translate.GoogleTranslateTransformer.html |
Translate text documents using Google Translate.
Parameters
source_language_code – ISO 639 language code of the input document.
target_language_code – ISO 639 language code of the output document.
For supported languages, refer to:
https://cloud.google.com/translate/docs/languages
mime_type – (Optional) Media Type of input text.
Options: text/plain, text/html
Examples using GoogleTranslateTransformer¶
Google Cloud Translation | lang/api.python.langchain.com/en/latest/document_transformers/langchain.document_transformers.google_translate.GoogleTranslateTransformer.html |
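The transform_documents contract above (a sequence of Documents in, translated Documents out, with language codes passed as kwargs) can be sketched without calling Google Cloud. Everything below is a stand-in: `FakeTranslateTransformer` and its toy phrase table are illustrative only, not the real `GoogleTranslateTransformer`.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Sequence

@dataclass
class Document:
    page_content: str
    metadata: Dict[str, Any] = field(default_factory=dict)

class FakeTranslateTransformer:
    """Stand-in sketching the transform_documents interface (no API call)."""

    def __init__(self, table: Dict[str, str]) -> None:
        self.table = table  # toy phrase table instead of the Translate API

    def transform_documents(self, documents: Sequence[Document], **kwargs: Any) -> Sequence[Document]:
        target = kwargs.get("target_language_code", "en")
        out: List[Document] = []
        for doc in documents:
            translated = self.table.get(doc.page_content, doc.page_content)
            # Record the requested target language in the metadata.
            meta = dict(doc.metadata, target_language_code=target)
            out.append(Document(page_content=translated, metadata=meta))
        return out

docs = [Document("bonjour le monde")]
t = FakeTranslateTransformer({"bonjour le monde": "hello world"})
result = t.transform_documents(docs, source_language_code="fr", target_language_code="en")
print(result[0].page_content)  # hello world
```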
langchain.memory.readonly.ReadOnlySharedMemory¶
class langchain.memory.readonly.ReadOnlySharedMemory[source]¶
Bases: BaseMemory
A memory wrapper that is read-only and cannot be changed.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param memory: langchain.schema.memory.BaseMemory [Required]¶
clear() → None[source]¶
Nothing to clear, got a memory like a vault.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/memory/langchain.memory.readonly.ReadOnlySharedMemory.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is ["langchain", "llms", "openai"]
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object. | lang/api.python.langchain.com/en/latest/memory/langchain.memory.readonly.ReadOnlySharedMemory.html |
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]¶
Load memory variables from memory.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Nothing should be saved or changed
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
property memory_variables: List[str]¶
Return memory variables.
Examples using ReadOnlySharedMemory¶
Shared memory across agents and tools | lang/api.python.langchain.com/en/latest/memory/langchain.memory.readonly.ReadOnlySharedMemory.html |
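The read-only semantics above (loads delegate to the wrapped memory; save_context and clear are no-ops) can be sketched in a few lines. The class names here are stand-ins, not the real LangChain classes:

```python
from typing import Any, Dict

class SimpleMemory:
    """Minimal mutable memory standing in for any BaseMemory."""

    def __init__(self) -> None:
        self.data: Dict[str, str] = {}

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        return dict(self.data)

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        self.data.update(outputs)

    def clear(self) -> None:
        self.data.clear()

class ReadOnlyWrapper:
    """Sketch of ReadOnlySharedMemory: reads pass through, writes are ignored."""

    def __init__(self, memory: SimpleMemory) -> None:
        self.memory = memory

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        return self.memory.load_memory_variables(inputs)

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        pass  # nothing should be saved or changed

    def clear(self) -> None:
        pass  # nothing to clear

shared = SimpleMemory()
shared.save_context({}, {"history": "Human: hi"})
ro = ReadOnlyWrapper(shared)
ro.save_context({}, {"history": "overwritten?"})  # silently ignored
ro.clear()                                        # also ignored
print(ro.load_memory_variables({}))  # {'history': 'Human: hi'}
```

This is the pattern used when several agents or tools share one memory but only one of them is allowed to write to it.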
langchain.memory.motorhead_memory.MotorheadMemory¶
class langchain.memory.motorhead_memory.MotorheadMemory[source]¶
Bases: BaseChatMemory
Chat message memory backed by Motorhead service.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_key: Optional[str] = None¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param client_id: Optional[str] = None¶
param context: Optional[str] = None¶
param input_key: Optional[str] = None¶
param memory_key: str = 'history'¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
param session_id: str [Required]¶
param timeout: int = 3000¶
param url: str = 'https://api.getmetal.io/v1/motorhead'¶
clear() → None¶
Clear memory contents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include | lang/api.python.langchain.com/en/latest/memory/langchain.memory.motorhead_memory.MotorheadMemory.html |
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
delete_session() → None[source]¶
Delete a session
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is ["langchain", "llms", "openai"]
async init() → None[source]¶
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ | lang/api.python.langchain.com/en/latest/memory/langchain.memory.motorhead_memory.MotorheadMemory.html |
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
load_memory_variables(values: Dict[str, Any]) → Dict[str, Any][source]¶
Return key-value pairs given the text input to the chain.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to buffer.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor. | lang/api.python.langchain.com/en/latest/memory/langchain.memory.motorhead_memory.MotorheadMemory.html |
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
property memory_variables: List[str]¶
The string keys this memory class will add to chain inputs.
Examples using MotorheadMemory¶
Motörhead Memory
Motörhead Memory (Managed) | lang/api.python.langchain.com/en/latest/memory/langchain.memory.motorhead_memory.MotorheadMemory.html |
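The real class above persists each exchange to the Motorhead service keyed by session_id and reads the history back under memory_key. A hedged sketch of that save_context/load_memory_variables cycle, with a dict standing in for the remote service (all names below are illustrative):

```python
from typing import Any, Dict, List

class FakeMotorheadService:
    """Dict-backed stand-in for the remote Motorhead service."""

    def __init__(self) -> None:
        self.sessions: Dict[str, List[str]] = {}

    def append(self, session_id: str, line: str) -> None:
        self.sessions.setdefault(session_id, []).append(line)

    def history(self, session_id: str) -> List[str]:
        return self.sessions.get(session_id, [])

class FakeMotorheadMemory:
    memory_key = "history"  # mirrors param memory_key: str = 'history'

    def __init__(self, service: FakeMotorheadService, session_id: str) -> None:
        self.service = service
        self.session_id = session_id

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        # The real class POSTs both sides of the exchange to the service.
        self.service.append(self.session_id, f"Human: {inputs['input']}")
        self.service.append(self.session_id, f"AI: {outputs['output']}")

    def load_memory_variables(self, values: Dict[str, Any]) -> Dict[str, Any]:
        return {self.memory_key: "\n".join(self.service.history(self.session_id))}

svc = FakeMotorheadService()
mem = FakeMotorheadMemory(svc, session_id="s1")
mem.save_context({"input": "hi"}, {"output": "hello"})
print(mem.load_memory_variables({})["history"])
```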
langchain.memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory¶
class langchain.memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory(cosmos_endpoint: str, cosmos_database: str, cosmos_container: str, session_id: str, user_id: str, credential: Any = None, connection_string: Optional[str] = None, ttl: Optional[int] = None, cosmos_client_kwargs: Optional[dict] = None)[source]¶
Chat message history backed by Azure CosmosDB.
Initializes a new instance of the CosmosDBChatMessageHistory class.
Make sure to call prepare_cosmos or use the context manager to make
sure your database is ready.
Either a credential or a connection string must be provided.
Parameters
cosmos_endpoint – The connection endpoint for the Azure Cosmos DB account.
cosmos_database – The name of the database to use.
cosmos_container – The name of the container to use.
session_id – The session ID to use, can be overwritten while loading.
user_id – The user ID to use, can be overwritten while loading.
credential – The credential to use to authenticate to Azure Cosmos DB.
connection_string – The connection string to use to authenticate.
ttl – The time to live (in seconds) to use for documents in the container.
cosmos_client_kwargs – Additional kwargs to pass to the CosmosClient.
Attributes
messages
A list of Messages stored in-memory.
Methods
__init__(cosmos_endpoint, cosmos_database, ...)
Initializes a new instance of the CosmosDBChatMessageHistory class.
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Add a self-created message to the store
add_user_message(message)
Convenience method for adding a human message string to the store.
clear() | lang/api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory.html |
Clear session memory from this memory and cosmos.
load_messages()
Retrieve the messages from Cosmos
prepare_cosmos()
Prepare the CosmosDB client.
upsert_messages()
Update the cosmosdb item.
__init__(cosmos_endpoint: str, cosmos_database: str, cosmos_container: str, session_id: str, user_id: str, credential: Any = None, connection_string: Optional[str] = None, ttl: Optional[int] = None, cosmos_client_kwargs: Optional[dict] = None)[source]¶
Initializes a new instance of the CosmosDBChatMessageHistory class.
Make sure to call prepare_cosmos or use the context manager to make
sure your database is ready.
Either a credential or a connection string must be provided.
Parameters
cosmos_endpoint – The connection endpoint for the Azure Cosmos DB account.
cosmos_database – The name of the database to use.
cosmos_container – The name of the container to use.
session_id – The session ID to use, can be overwritten while loading.
user_id – The user ID to use, can be overwritten while loading.
credential – The credential to use to authenticate to Azure Cosmos DB.
connection_string – The connection string to use to authenticate.
ttl – The time to live (in seconds) to use for documents in the container.
cosmos_client_kwargs – Additional kwargs to pass to the CosmosClient.
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Add a self-created message to the store
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters | lang/api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory.html |
message – The string contents of a human message.
clear() → None[source]¶
Clear session memory from this memory and cosmos.
load_messages() → None[source]¶
Retrieve the messages from Cosmos
prepare_cosmos() → None[source]¶
Prepare the CosmosDB client.
Use this function or the context manager to make sure your database is ready.
upsert_messages() → None[source]¶
Update the cosmosdb item. | lang/api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.cosmos_db.CosmosDBChatMessageHistory.html |
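The prepare/load/upsert lifecycle documented above can be sketched with a dict standing in for the Cosmos container: the item is identified by (session_id, user_id), load_messages pulls it into memory, and every mutation is written back with upsert_messages. Names below are illustrative only, not the Azure SDK:

```python
from typing import Dict, List, Tuple

class FakeCosmosHistory:
    """Dict-backed sketch of the CosmosDBChatMessageHistory lifecycle."""

    def __init__(self, container: Dict[Tuple[str, str], List[str]],
                 session_id: str, user_id: str) -> None:
        self.container = container        # stands in for the Cosmos container
        self.key = (session_id, user_id)  # item identity: session + user
        self.messages: List[str] = []

    def load_messages(self) -> None:
        """Retrieve the stored item, if any, into the in-memory list."""
        self.messages = list(self.container.get(self.key, []))

    def add_message(self, message: str) -> None:
        self.messages.append(message)
        self.upsert_messages()  # every add is persisted immediately

    def upsert_messages(self) -> None:
        """Write the in-memory list back to the container."""
        self.container[self.key] = list(self.messages)

    def clear(self) -> None:
        """Clear session memory from this memory and the container."""
        self.messages = []
        self.container.pop(self.key, None)

db: Dict[Tuple[str, str], List[str]] = {}
h = FakeCosmosHistory(db, session_id="s1", user_id="u1")
h.load_messages()
h.add_message("Human: hi")
print(db[("s1", "u1")])  # ['Human: hi']
```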
langchain.memory.buffer_window.ConversationBufferWindowMemory¶
class langchain.memory.buffer_window.ConversationBufferWindowMemory[source]¶
Bases: BaseChatMemory
Buffer for storing conversation memory inside a limited size window.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param k: int = 5¶
Number of messages to store in buffer.
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None¶
Clear memory contents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model | lang/api.python.langchain.com/en/latest/memory/langchain.memory.buffer_window.ConversationBufferWindowMemory.html |
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is ["langchain", "llms", "openai"]
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object. | lang/api.python.langchain.com/en/latest/memory/langchain.memory.buffer_window.ConversationBufferWindowMemory.html |
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return history buffer.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None¶
Save context from this conversation to buffer.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property buffer: Union[str, List[langchain.schema.messages.BaseMessage]]¶
String buffer of memory.
property buffer_as_messages: List[langchain.schema.messages.BaseMessage]¶
Exposes the buffer as a list of messages in case return_messages is False.
property buffer_as_str: str¶
Exposes the buffer as a string in case return_messages is True.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs. | lang/api.python.langchain.com/en/latest/memory/langchain.memory.buffer_window.ConversationBufferWindowMemory.html |
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
Examples using ConversationBufferWindowMemory¶
Figma
OpaquePrompts
Set env var OPENAI_API_KEY or load from a .env file:
Meta-Prompt
Create ChatGPT clone | lang/api.python.langchain.com/en/latest/memory/langchain.memory.buffer_window.ConversationBufferWindowMemory.html |
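The k parameter above bounds the buffer by number of exchanges: only the last k (human, AI) pairs survive. A minimal sketch of that windowing rule; `window_buffer` is an illustrative helper, not part of the LangChain API:

```python
from typing import List, Tuple

def window_buffer(messages: List[Tuple[str, str]], k: int) -> List[Tuple[str, str]]:
    """Keep only the last k (human, ai) exchanges, mirroring the k field above."""
    return messages[-k:] if k > 0 else []

history = [("hi", "hello"), ("how are you?", "fine"), ("bye", "goodbye")]
print(window_buffer(history, 2))  # [('how are you?', 'fine'), ('bye', 'goodbye')]
```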
langchain.memory.chat_message_histories.firestore.FirestoreChatMessageHistory¶
class langchain.memory.chat_message_histories.firestore.FirestoreChatMessageHistory(collection_name: str, session_id: str, user_id: str, firestore_client: Optional[Client] = None)[source]¶
Chat message history backed by Google Firestore.
Initialize a new instance of the FirestoreChatMessageHistory class.
Parameters
collection_name – The name of the collection to use.
session_id – The session ID for the chat.
user_id – The user ID for the chat.
Attributes
messages
A list of Messages stored in-memory.
Methods
__init__(collection_name, session_id, user_id)
Initialize a new instance of the FirestoreChatMessageHistory class.
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Add a Message object to the store.
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Clear session memory from this memory and Firestore.
load_messages()
Retrieve the messages from Firestore
prepare_firestore()
Prepare the Firestore client.
upsert_messages([new_message])
Update the Firestore document.
__init__(collection_name: str, session_id: str, user_id: str, firestore_client: Optional[Client] = None)[source]¶
Initialize a new instance of the FirestoreChatMessageHistory class.
Parameters
collection_name – The name of the collection to use.
session_id – The session ID for the chat.
user_id – The user ID for the chat.
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Add a Message object to the store. | lang/api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.firestore.FirestoreChatMessageHistory.html |
Parameters
message – A BaseMessage object to store.
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Clear session memory from this memory and Firestore.
load_messages() → None[source]¶
Retrieve the messages from Firestore
prepare_firestore() → None[source]¶
Prepare the Firestore client.
Use this function to make sure your database is ready.
upsert_messages(new_message: Optional[BaseMessage] = None) → None[source]¶
Update the Firestore document. | lang/api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.firestore.FirestoreChatMessageHistory.html |
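The convenience methods listed above follow a common pattern across chat-history backends: add_user_message and add_ai_message wrap a string into a typed message object and delegate to add_message. A self-contained sketch of that pattern (the classes here are stand-ins, not the Firestore-backed implementation):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BaseMessage:
    type: str     # "human" or "ai"
    content: str

class FakeHistory:
    """In-memory sketch of the chat-message-history interface."""

    def __init__(self) -> None:
        self.messages: List[BaseMessage] = []

    def add_message(self, message: BaseMessage) -> None:
        self.messages.append(message)

    def add_user_message(self, message: str) -> None:
        # Convenience wrapper: wrap the string and delegate to add_message.
        self.add_message(BaseMessage("human", message))

    def add_ai_message(self, message: str) -> None:
        self.add_message(BaseMessage("ai", message))

    def clear(self) -> None:
        self.messages = []

h = FakeHistory()
h.add_user_message("hi")
h.add_ai_message("hello")
print([(m.type, m.content) for m in h.messages])  # [('human', 'hi'), ('ai', 'hello')]
```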
langchain.memory.entity.InMemoryEntityStore¶
class langchain.memory.entity.InMemoryEntityStore[source]¶
Bases: BaseEntityStore
In-memory Entity store.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param store: Dict[str, Optional[str]] = {}¶
clear() → None[source]¶
Delete all entities from store.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
delete(key: str) → None[source]¶
Delete entity value from store. | lang/api.python.langchain.com/en/latest/memory/langchain.memory.entity.InMemoryEntityStore.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
exists(key: str) → bool[source]¶
Check if entity exists in store.
classmethod from_orm(obj: Any) → Model¶
get(key: str, default: Optional[str] = None) → Optional[str][source]¶
Get entity value from store.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶ | lang/api.python.langchain.com/en/latest/memory/langchain.memory.entity.InMemoryEntityStore.html |
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
set(key: str, value: Optional[str]) → None[source]¶
Set entity value in store.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | lang/api.python.langchain.com/en/latest/memory/langchain.memory.entity.InMemoryEntityStore.html |
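The entity-store interface above (get/set/exists/delete/clear over a Dict[str, Optional[str]] field) maps directly onto a plain dict. A minimal sketch of those semantics; `SimpleEntityStore` is illustrative, not the LangChain class:

```python
from typing import Dict, Optional

class SimpleEntityStore:
    """Dict-backed sketch of the InMemoryEntityStore interface."""

    def __init__(self) -> None:
        self.store: Dict[str, Optional[str]] = {}

    def get(self, key: str, default: Optional[str] = None) -> Optional[str]:
        """Get entity value from store."""
        return self.store.get(key, default)

    def set(self, key: str, value: Optional[str]) -> None:
        """Set entity value in store (None is a legal stored value)."""
        self.store[key] = value

    def delete(self, key: str) -> None:
        """Delete entity value from store."""
        del self.store[key]

    def exists(self, key: str) -> bool:
        """Check if entity exists in store."""
        return key in self.store

    def clear(self) -> None:
        """Delete all entities from store."""
        self.store = {}

s = SimpleEntityStore()
s.set("Alice", "an engineer")
assert s.exists("Alice") and not s.exists("Bob")
print(s.get("Alice"))             # an engineer
s.delete("Alice")
print(s.get("Alice", "unknown"))  # unknown
```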
langchain.memory.chat_message_histories.sql.DefaultMessageConverter¶
class langchain.memory.chat_message_histories.sql.DefaultMessageConverter(table_name: str)[source]¶
The default message converter for SQLChatMessageHistory.
Methods
__init__(table_name)
from_sql_model(sql_message)
Convert a SQLAlchemy model to a BaseMessage instance.
get_sql_model_class()
Get the SQLAlchemy model class.
to_sql_model(message, session_id)
Convert a BaseMessage instance to a SQLAlchemy model.
__init__(table_name: str)[source]¶
from_sql_model(sql_message: Any) → BaseMessage[source]¶
Convert a SQLAlchemy model to a BaseMessage instance.
get_sql_model_class() → Any[source]¶
Get the SQLAlchemy model class.
to_sql_model(message: BaseMessage, session_id: str) → Any[source]¶
Convert a BaseMessage instance to a SQLAlchemy model. | lang/api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.sql.DefaultMessageConverter.html |
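The converter above defines a round-trip: to_sql_model serializes a message for storage keyed by session_id, and from_sql_model restores it. A hedged sketch of that round-trip with a dataclass standing in for the SQLAlchemy model (all names below are illustrative):

```python
from dataclasses import dataclass

@dataclass
class BaseMessage:
    type: str
    content: str

@dataclass
class SQLRow:
    """Stand-in for the SQLAlchemy model class the converter would generate."""
    session_id: str
    type: str
    content: str

class SimpleMessageConverter:
    """Sketch of the BaseMessage <-> SQL-model round-trip."""

    def to_sql_model(self, message: BaseMessage, session_id: str) -> SQLRow:
        return SQLRow(session_id=session_id, type=message.type, content=message.content)

    def from_sql_model(self, sql_message: SQLRow) -> BaseMessage:
        return BaseMessage(type=sql_message.type, content=sql_message.content)

conv = SimpleMessageConverter()
row = conv.to_sql_model(BaseMessage("human", "hi"), session_id="s1")
restored = conv.from_sql_model(row)
print(restored)  # BaseMessage(type='human', content='hi')
```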
langchain.memory.token_buffer.ConversationTokenBufferMemory¶
class langchain.memory.token_buffer.ConversationTokenBufferMemory[source]¶
Bases: BaseChatMemory
Conversation chat memory with token limit.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param llm: langchain.schema.language_model.BaseLanguageModel [Required]¶
param max_token_limit: int = 2000¶
param memory_key: str = 'history'¶
param output_key: Optional[str] = None¶
param return_messages: bool = False¶
clear() → None¶
Clear memory contents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating | lang/api.python.langchain.com/en/latest/memory/langchain.memory.token_buffer.ConversationTokenBufferMemory.html |
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes. | lang/api.python.langchain.com/en/latest/memory/langchain.memory.token_buffer.ConversationTokenBufferMemory.html |
The unique identifier is a list of strings that describes the path
to the object.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return history buffer.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save context from this conversation to the buffer, pruning the oldest messages if the token limit is exceeded.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property buffer: Any¶
String buffer of memory.
property buffer_as_messages: List[langchain.schema.messages.BaseMessage]¶
Exposes the buffer as a list of messages in case return_messages is True.
property buffer_as_str: str¶
Exposes the buffer as a string in case return_messages is False.
property lc_attributes: Dict¶ | lang/api.python.langchain.com/en/latest/memory/langchain.memory.token_buffer.ConversationTokenBufferMemory.html |
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example,{“openai_api_key”: “OPENAI_API_KEY”}
Examples using ConversationTokenBufferMemory¶
Conversation Token Buffer | lang/api.python.langchain.com/en/latest/memory/langchain.memory.token_buffer.ConversationTokenBufferMemory.html |
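The pruning behavior that save_context performs can be sketched in plain Python. This is an illustrative stand-in, not the LangChain implementation: the class name, the list-of-pairs message store, and the whitespace token counter are all simplifications (the real class counts tokens via the LLM's tokenizer through llm.get_num_tokens()).

```python
# Illustrative sketch of ConversationTokenBufferMemory's pruning behavior.
# TokenBufferSketch and the whitespace token counter are simplified
# assumptions; the real class delegates counting to the LLM's tokenizer.

class TokenBufferSketch:
    def __init__(self, max_token_limit=2000, human_prefix="Human", ai_prefix="AI"):
        self.max_token_limit = max_token_limit
        self.human_prefix = human_prefix
        self.ai_prefix = ai_prefix
        self.messages = []  # list of (prefix, text) pairs

    def _count_tokens(self, text):
        # Stand-in for llm.get_num_tokens(); real counts come from the model.
        return len(text.split())

    def _total_tokens(self):
        return sum(self._count_tokens(text) for _, text in self.messages)

    def save_context(self, human_input, ai_output):
        self.messages.append((self.human_prefix, human_input))
        self.messages.append((self.ai_prefix, ai_output))
        # Prune oldest messages until the buffer fits the token limit.
        while self._total_tokens() > self.max_token_limit and self.messages:
            self.messages.pop(0)

    def buffer_as_str(self):
        return "\n".join(f"{prefix}: {text}" for prefix, text in self.messages)


memory = TokenBufferSketch(max_token_limit=8)
memory.save_context("Hello there", "Hi, how can I help?")
memory.save_context("What is LangChain?", "A framework for LLM apps")
# The oldest turn is dropped once the token count exceeds the limit.
print(memory.buffer_as_str())
```

With max_token_limit=8 the first exchange is pruned away, and only the most recent human/AI pair survives in the buffer.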
langchain.memory.chat_message_histories.momento.MomentoChatMessageHistory¶
class langchain.memory.chat_message_histories.momento.MomentoChatMessageHistory(session_id: str, cache_client: momento.CacheClient, cache_name: str, *, key_prefix: str = 'message_store:', ttl: Optional[timedelta] = None, ensure_cache_exists: bool = True)[source]¶
Chat message history cache that uses Momento as a backend.
See https://gomomento.com/
Instantiate a chat message history cache that uses Momento as a backend.
Note: to instantiate the cache client passed to MomentoChatMessageHistory,
you must have a Momento account at https://gomomento.com/.
Parameters
session_id (str) – The session ID to use for this chat session.
cache_client (CacheClient) – The Momento cache client.
cache_name (str) – The name of the cache to use to store the messages.
key_prefix (str, optional) – The prefix to apply to the cache key.
Defaults to “message_store:”.
ttl (Optional[timedelta], optional) – The TTL to use for the messages.
Defaults to None, i.e. the default TTL of the cache will be used.
ensure_cache_exists (bool, optional) – Create the cache if it doesn’t exist.
Defaults to True.
Raises
ImportError – Momento python package is not installed.
TypeError – cache_client is not of type momento.CacheClientObject
Attributes
messages
Retrieve the messages from Momento.
Methods
__init__(session_id, cache_client, cache_name, *)
Instantiate a chat message history cache that uses Momento as a backend.
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Store a message in the cache. | lang/api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.momento.MomentoChatMessageHistory.html |
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Remove the session's messages from the cache.
from_client_params(session_id, cache_name, ...)
Construct cache from CacheClient parameters.
__init__(session_id: str, cache_client: momento.CacheClient, cache_name: str, *, key_prefix: str = 'message_store:', ttl: Optional[timedelta] = None, ensure_cache_exists: bool = True)[source]¶
Instantiate a chat message history cache that uses Momento as a backend.
Note: to instantiate the cache client passed to MomentoChatMessageHistory,
you must have a Momento account at https://gomomento.com/.
Parameters
session_id (str) – The session ID to use for this chat session.
cache_client (CacheClient) – The Momento cache client.
cache_name (str) – The name of the cache to use to store the messages.
key_prefix (str, optional) – The prefix to apply to the cache key.
Defaults to “message_store:”.
ttl (Optional[timedelta], optional) – The TTL to use for the messages.
Defaults to None, i.e. the default TTL of the cache will be used.
ensure_cache_exists (bool, optional) – Create the cache if it doesn’t exist.
Defaults to True.
Raises
ImportError – Momento python package is not installed.
TypeError – cache_client is not of type momento.CacheClientObject
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Store a message in the cache.
Parameters | lang/api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.momento.MomentoChatMessageHistory.html |
message (BaseMessage) – The message object to store.
Raises
SdkException – Momento service or network error.
Exception – Unexpected response.
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Remove the session’s messages from the cache.
Raises
SdkException – Momento service or network error.
Exception – Unexpected response.
classmethod from_client_params(session_id: str, cache_name: str, ttl: timedelta, *, configuration: Optional[momento.config.Configuration] = None, api_key: Optional[str] = None, auth_token: Optional[str] = None, **kwargs: Any) → MomentoChatMessageHistory[source]¶
Construct cache from CacheClient parameters.
Examples using MomentoChatMessageHistory¶
Momento Chat Message History | lang/api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.momento.MomentoChatMessageHistory.html |
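The interface above can be mimicked with a local stand-in to show how the pieces fit together. This is a hypothetical in-memory stub, not the real class: the real MomentoChatMessageHistory stores messages in a Momento cache over the network under key_prefix + session_id and requires a momento.CacheClient; here a dict stands in for the cache.

```python
# Hypothetical in-memory stub mirroring the MomentoChatMessageHistory
# interface (messages, add_message, add_user_message, add_ai_message, clear).
# A dict replaces the Momento cache; the cache key shape matches the docs:
# key_prefix + session_id.

class ChatHistoryStub:
    def __init__(self, session_id, key_prefix="message_store:"):
        self.key = key_prefix + session_id  # cache key, as described above
        self._store = {}                    # stands in for the Momento cache

    @property
    def messages(self):
        return list(self._store.get(self.key, []))

    def add_message(self, message):
        self._store.setdefault(self.key, []).append(message)

    def add_user_message(self, text):
        # Convenience wrapper for a human message, like the real method.
        self.add_message({"type": "human", "content": text})

    def add_ai_message(self, text):
        # Convenience wrapper for an AI message.
        self.add_message({"type": "ai", "content": text})

    def clear(self):
        # Remove the session's messages, as clear() does against Momento.
        self._store.pop(self.key, None)


history = ChatHistoryStub("session-1")
history.add_user_message("hi")
history.add_ai_message("hello")
print([m["content"] for m in history.messages])
history.clear()
```

Against the real backend the same calls go over the network and can raise SdkException, which the stub does not model.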
langchain.memory.chat_message_histories.sql.SQLChatMessageHistory¶
class langchain.memory.chat_message_histories.sql.SQLChatMessageHistory(session_id: str, connection_string: str, table_name: str = 'message_store', session_id_field_name: str = 'session_id', custom_message_converter: Optional[BaseMessageConverter] = None)[source]¶
Chat message history stored in an SQL database.
Attributes
messages
Retrieve all messages from db
Methods
__init__(session_id, connection_string[, ...])
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Append the message to the record in db
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Clear session memory from db
__init__(session_id: str, connection_string: str, table_name: str = 'message_store', session_id_field_name: str = 'session_id', custom_message_converter: Optional[BaseMessageConverter] = None)[source]¶
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Append the message to the record in db
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Clear session memory from db
Examples using SQLChatMessageHistory¶
SQL Chat Message History | lang/api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.sql.SQLChatMessageHistory.html |
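The storage pattern behind SQLChatMessageHistory — one row per message, keyed by a session_id column — can be sketched with the standard-library sqlite3 module. The real class uses SQLAlchemy with a pluggable BaseMessageConverter, so the class name, schema, and JSON payload below are simplified assumptions, not the library's actual layout.

```python
import json
import sqlite3

# Illustrative sqlite3 sketch of the SQLChatMessageHistory storage pattern:
# one row per message in a table named after table_name, filtered by
# session_id. The real class uses SQLAlchemy; this schema is an assumption.

class SQLiteChatHistorySketch:
    def __init__(self, session_id, table_name="message_store"):
        self.session_id = session_id
        self.table = table_name
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            f"CREATE TABLE IF NOT EXISTS {self.table} "
            "(id INTEGER PRIMARY KEY, session_id TEXT, message TEXT)"
        )

    def add_message(self, role, content):
        # Append the message to the record in db.
        payload = json.dumps({"type": role, "content": content})
        self.conn.execute(
            f"INSERT INTO {self.table} (session_id, message) VALUES (?, ?)",
            (self.session_id, payload),
        )

    @property
    def messages(self):
        # Retrieve all messages for this session, oldest first.
        rows = self.conn.execute(
            f"SELECT message FROM {self.table} WHERE session_id = ? ORDER BY id",
            (self.session_id,),
        ).fetchall()
        return [json.loads(r[0]) for r in rows]

    def clear(self):
        # Clear session memory from db.
        self.conn.execute(
            f"DELETE FROM {self.table} WHERE session_id = ?", (self.session_id,)
        )


history = SQLiteChatHistorySketch("abc123")
history.add_message("human", "hi")
history.add_message("ai", "hello")
print([m["content"] for m in history.messages])  # ['hi', 'hello']
```

Because rows are filtered by session_id, several conversations can share one table, which is why clear() deletes only the current session's rows.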
langchain.memory.chat_message_histories.zep.ZepChatMessageHistory¶
class langchain.memory.chat_message_histories.zep.ZepChatMessageHistory(session_id: str, url: str = 'http://localhost:8000', api_key: Optional[str] = None)[source]¶
Chat message history that uses Zep as a backend.
Recommended usage:
# Set up Zep Chat History
zep_chat_history = ZepChatMessageHistory(
session_id=session_id,
url=ZEP_API_URL,
api_key=<your_api_key>,
)
# Use a standard ConversationBufferMemory to encapsulate the Zep chat history
memory = ConversationBufferMemory(
memory_key="chat_history", chat_memory=zep_chat_history
)
Zep provides long-term conversation storage for LLM apps. The server stores,
summarizes, embeds, indexes, and enriches conversational AI chat
histories, and exposes them via simple, low-latency APIs.
For server installation instructions and more, see:
https://docs.getzep.com/deployment/quickstart/
This class is a thin wrapper around the zep-python package. Additional
Zep functionality is exposed via the zep_summary and zep_messages
properties.
For more information on the zep-python package, see:
https://github.com/getzep/zep-python
Attributes
messages
Retrieve messages from Zep memory
zep_messages
Retrieve messages from Zep memory
zep_summary
Retrieve summary from Zep memory
Methods
__init__(session_id[, url, api_key])
add_ai_message(message[, metadata])
Convenience method for adding an AI message string to the store.
add_message(message[, metadata])
Append the message to the Zep memory history
add_user_message(message[, metadata]) | lang/api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.zep.ZepChatMessageHistory.html |
Convenience method for adding a human message string to the store.
clear()
Clear session memory from Zep.
search(query[, metadata, limit])
Search Zep memory for messages matching the query
__init__(session_id: str, url: str = 'http://localhost:8000', api_key: Optional[str] = None) → None[source]¶
add_ai_message(message: str, metadata: Optional[Dict[str, Any]] = None) → None[source]¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
metadata – Optional metadata to attach to the message.
add_message(message: BaseMessage, metadata: Optional[Dict[str, Any]] = None) → None[source]¶
Append the message to the Zep memory history
add_user_message(message: str, metadata: Optional[Dict[str, Any]] = None) → None[source]¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
metadata – Optional metadata to attach to the message.
clear() → None[source]¶
Clear session memory from Zep. Note that Zep is long-term storage for memory
and this is not advised unless you have specific data retention requirements.
search(query: str, metadata: Optional[Dict] = None, limit: Optional[int] = None) → List[MemorySearchResult][source]¶
Search Zep memory for messages matching the query | lang/api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.zep.ZepChatMessageHistory.html |
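The shape of search(query, metadata, limit) can be illustrated locally. This is only a sketch: the real method performs a vector search on the Zep server and returns zep-python MemorySearchResult objects; here a naive substring match stands in, and each result is approximated by a dict carrying the matched message and a placeholder relevance score.

```python
# Illustrative sketch of the search(query, metadata, limit) interface.
# The real implementation is a server-side vector search returning
# MemorySearchResult objects; substring matching and the dict shape
# below are simplifying assumptions.

def search_sketch(messages, query, limit=None):
    results = [
        {"message": m, "dist": 0.0}  # placeholder for the relevance score
        for m in messages
        if query.lower() in m["content"].lower()
    ]
    return results[:limit] if limit is not None else results


msgs = [
    {"role": "human", "content": "Tell me about Zep"},
    {"role": "ai", "content": "Zep stores chat history"},
    {"role": "human", "content": "Thanks"},
]
print(len(search_sketch(msgs, "zep")))  # 2
```

As with the real method, passing limit caps the number of results returned.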
langchain.memory.chat_message_histories.postgres.PostgresChatMessageHistory¶
class langchain.memory.chat_message_histories.postgres.PostgresChatMessageHistory(session_id: str, connection_string: str = 'postgresql://postgres:mypassword@localhost/chat_history', table_name: str = 'message_store')[source]¶
Chat message history stored in a Postgres database.
Attributes
messages
Retrieve the messages from PostgreSQL
Methods
__init__(session_id[, connection_string, ...])
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Append the message to the record in PostgreSQL
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Clear session memory from PostgreSQL
__init__(session_id: str, connection_string: str = 'postgresql://postgres:mypassword@localhost/chat_history', table_name: str = 'message_store')[source]¶
add_ai_message(message: str) → None¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
add_message(message: BaseMessage) → None[source]¶
Append the message to the record in PostgreSQL
add_user_message(message: str) → None¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
clear() → None[source]¶
Clear session memory from PostgreSQL
Examples using PostgresChatMessageHistory¶
Postgres Chat Message History | lang/api.python.langchain.com/en/latest/memory/langchain.memory.chat_message_histories.postgres.PostgresChatMessageHistory.html |
langchain.memory.summary_buffer.ConversationSummaryBufferMemory¶
class langchain.memory.summary_buffer.ConversationSummaryBufferMemory[source]¶
Bases: BaseChatMemory, SummarizerMixin
Buffer with summarizer for storing conversation memory.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param ai_prefix: str = 'AI'¶
param chat_memory: BaseChatMessageHistory [Optional]¶
param human_prefix: str = 'Human'¶
param input_key: Optional[str] = None¶
param llm: BaseLanguageModel [Required]¶
param max_token_limit: int = 2000¶
param memory_key: str = 'history'¶
param moving_summary_buffer: str = ''¶
param output_key: Optional[str] = None¶
param prompt: BasePromptTemplate = PromptTemplate(input_variables=['new_lines', 'summary'], template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:')¶
param return_messages: bool = False¶
param summary_message_cls: Type[BaseMessage] = <class 'langchain.schema.messages.SystemMessage'>¶
clear() → None[source]¶
Clear memory contents. | lang/api.python.langchain.com/en/latest/memory/langchain.memory.summary_buffer.ConversationSummaryBufferMemory.html |
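The prompt parameter above shows how the summarizer works: before each LLM call, the template's two input variables, summary and new_lines, are filled with the running summary and the freshly pruned conversation lines. The rendering step can be sketched with plain str.format standing in for PromptTemplate; the template below is an abridged version of the default (the EXAMPLE section is omitted).

```python
# Sketch of how the summarization prompt is rendered. str.format stands in
# for PromptTemplate, and the template is abridged from the default above.

TEMPLATE = (
    "Progressively summarize the lines of conversation provided, "
    "adding onto the previous summary returning a new summary.\n\n"
    "Current summary:\n{summary}\n\n"
    "New lines of conversation:\n{new_lines}\n\n"
    "New summary:"
)

summary = "The human greets the AI."  # moving_summary_buffer so far
new_lines = "Human: What can you do?\nAI: I can answer questions."

# The filled prompt is what the summarizer LLM would receive; its completion
# becomes the new moving_summary_buffer.
prompt = TEMPLATE.format(summary=summary, new_lines=new_lines)
print(prompt.splitlines()[0])
```

The LLM's completion of this prompt replaces moving_summary_buffer, so the summary grows progressively while the raw buffer stays under max_token_limit.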