langchain.prompts.chat.ChatPromptTemplate¶
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
append(message: Union[BaseMessagePromptTemplate, BaseMessage, BaseChatPromptTemplate, Tuple[str, str], Tuple[Type, str], str]) → None[source]¶
Append message to the end of the chat template.
Parameters
message – representation of a message to append.
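Example (a minimal sketch; the template contents are illustrative):
from langchain.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages([("human", "Hello!")])
# Append using the 2-tuple shorthand; a BaseMessage or a bare string also works.
template.append(("ai", "Hi there, how can I help?"))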
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
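Example (an illustrative sketch; the topic values are hypothetical):
from langchain.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages([("human", "Tell me about {topic}.")])
# Each input dict is formatted independently; one PromptValue is returned per input.
values = template.batch([{"topic": "whales"}, {"topic": "volcanoes"}])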
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
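Example (a hedged sketch of configurable_fields on a chat model; the field id "llm_temperature" is arbitrary):
from langchain.chat_models import ChatOpenAI
from langchain.schema.runnable import ConfigurableField

model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="LLM Temperature",
        description="Sampling temperature used by the model.",
    )
)
# The marked field can then be overridden per call via the config dict.
model.invoke("Pick a random number", config={"configurable": {"llm_temperature": 0.9}})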
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
extend(messages: Sequence[Union[BaseMessagePromptTemplate, BaseMessage, BaseChatPromptTemplate, Tuple[str, str], Tuple[Type, str], str]]) → None[source]¶
Extend the chat template with a sequence of messages.
format(**kwargs: Any) → str[source]¶
Format the chat template into a string.
Parameters
**kwargs – keyword arguments to use for filling in template variables
in all the template messages in this chat template.
Returns
formatted string
format_messages(**kwargs: Any) → List[BaseMessage][source]¶
Format the chat template into a list of finalized messages.
Parameters
**kwargs – keyword arguments to use for filling in template variables
in all the template messages in this chat template.
Returns
list of formatted messages
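Example (a minimal sketch; template text is illustrative):
from langchain.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages([
    ("system", "You answer in {language}."),
    ("human", "{question}"),
])
# format_messages() returns BaseMessages with the variables filled in;
# format() flattens the same content into a single string.
messages = template.format_messages(language="French", question="What is the capital?")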
format_prompt(**kwargs: Any) → PromptValue¶
Format prompt. Should return a PromptValue.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
PromptValue.
classmethod from_messages(messages: Sequence[Union[BaseMessagePromptTemplate, BaseMessage, BaseChatPromptTemplate, Tuple[str, str], Tuple[Type, str], str]]) → ChatPromptTemplate[source]¶
Create a chat prompt template from a variety of message formats.
Examples
Instantiation from a list of message templates:
template = ChatPromptTemplate.from_messages([
("human", "Hello, how are you?"),
("ai", "I'm doing well, thanks!"),
("human", "That's good to hear."),
])
Instantiation from mixed message formats:
template = ChatPromptTemplate.from_messages([
SystemMessage(content="hello"),
("human", "Hello, how are you?"),
])
Parameters
messages – sequence of message representations.
A message can be represented using the following formats:
(1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of
(message type, template); e.g., ("human", "{user_input}"),
(4) 2-tuple of (message class, template), or (5) a string, which is
shorthand for ("human", template); e.g., "{user_input}".
Returns
a chat prompt template
classmethod from_orm(obj: Any) → Model¶
classmethod from_role_strings(string_messages: List[Tuple[str, str]]) → ChatPromptTemplate[source]¶
[Deprecated] Create a chat prompt template from a list of (role, template) tuples.
Parameters
string_messages – list of (role, template) tuples.
Returns
a chat prompt template
Notes
Deprecated since version 0.0.260: Use from_messages classmethod instead.
classmethod from_strings(string_messages: List[Tuple[Type[BaseMessagePromptTemplate], str]]) → ChatPromptTemplate[source]¶
[Deprecated] Create a chat prompt template from a list of (role class, template) tuples.
Parameters
string_messages – list of (role class, template) tuples.
Returns
a chat prompt template
Notes
Deprecated since version 0.0.260: Use from_messages classmethod instead.
classmethod from_template(template: str, **kwargs: Any) → ChatPromptTemplate[source]¶
Create a chat prompt template from a template string.
Creates a chat template consisting of a single message assumed to be from
the human.
Parameters
template – template string
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
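Example (a minimal sketch):
from langchain.prompts import ChatPromptTemplate

# The template string becomes a single human message template.
template = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
template.format_messages(topic="bears")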
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output of the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
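Example (an illustrative sketch; the input keys must match the template's variables):
from langchain.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages([("human", "Summarize: {text}")])
# For prompt templates the output is a PromptValue.
prompt_value = template.invoke({"text": "LangChain composes LLM calls into chains."})
prompt_value.to_messages()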
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
partial(**kwargs: Union[str, Callable[[], str]]) → ChatPromptTemplate[source]¶
Get a new ChatPromptTemplate with some input variables already filled in.
Parameters
**kwargs – keyword arguments to use for filling in template variables. Ought
to be a subset of the input variables.
Returns
A new ChatPromptTemplate.
Example
from langchain.prompts import ChatPromptTemplate
template = ChatPromptTemplate.from_messages(
[
("system", "You are an AI assistant named {name}."),
("human", "Hi I'm {user}"),
("ai", "Hi there, {user}, I'm {name}."),
("human", "{input}"),
]
)
template2 = template.partial(user="Lucy", name="R2D2")
template2.format_messages(input="hello")
save(file_path: Union[Path, str]) → None[source]¶
Save prompt to file.
Parameters
file_path – path to file.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
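Example (a hedged sketch; the concrete model classes are illustrative):
from langchain.chat_models import ChatAnthropic, ChatOpenAI

# If the primary model raises, each fallback is tried in order.
model = ChatOpenAI().with_fallbacks([ChatAnthropic()])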
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
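Example (a hedged sketch using the defaults shown in the signature):
from langchain.chat_models import ChatOpenAI

# Retries on any Exception with jittered exponential backoff, up to 3 attempts.
model = ChatOpenAI().with_retry(
    retry_if_exception_type=(Exception,),
    wait_exponential_jitter=True,
    stop_after_attempt=3,
)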
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain.schema.runnable.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Any¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
Examples using ChatPromptTemplate¶
Facebook Messenger
Chat loaders
iMessage
Anthropic
🚅 LiteLLM
Konko
OpenAI
Google Cloud Platform Vertex AI PaLM
JinaChat
Context
OpenAI Functions Metadata Tagger
Figma
Fireworks
Fallbacks
Set env var OPENAI_API_KEY or load from a .env file:
Multiple Retrieval Sources
Structure answers with OpenAI functions
Multi-agent authoritarian speaker selection
MultiVector Retriever
Memory in LLMChain
Retry parser
Pydantic (JSON) parser
Few-shot examples for chat models
Prompt pipelining
Using OpenAI functions
interface.md
First we add a step to load memory
sql_db.md
prompt_llm_parser.md
Adding memory
multiple_chains.md
Code writing
Using tools
Adding moderation
langchain.prompts.chat.MessagesPlaceholder¶
class langchain.prompts.chat.MessagesPlaceholder[source]¶
Bases: BaseMessagePromptTemplate
Prompt template that assumes the variable is already a list of messages.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param variable_name: str [Required]¶
Name of variable to use as messages.
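Example (a minimal sketch; the variable name "history" is illustrative):
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.schema import AIMessage, HumanMessage

template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
])
# The placeholder is replaced by the message list bound to "history".
template.format_messages(
    history=[HumanMessage(content="Hi!"), AIMessage(content="Hello!")],
    input="What did I just say?",
)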
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
format_messages(**kwargs: Any) → List[BaseMessage][source]¶
Format messages from kwargs.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
List of BaseMessage.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Return whether or not the class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property input_variables: List[str]¶
Input variables for this prompt template.
Returns
List of input variable names.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
Examples using MessagesPlaceholder¶
Set env var OPENAI_API_KEY or load from a .env file:
Conversational Retrieval Agent
Agents
Memory in LLMChain
Add Memory to OpenAI Functions Agent
Types of `MessagePromptTemplate`
Adding memory
langchain.prompts.prompt.Prompt¶
langchain.prompts.prompt.Prompt¶
alias of PromptTemplate
langchain.prompts.base.StringPromptValue¶
class langchain.prompts.base.StringPromptValue[source]¶
Bases: PromptValue
String prompt value.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param text: str [Required]¶
Prompt text.
param type: Literal['StringPromptValue'] = 'StringPromptValue'¶
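Example (a minimal sketch):
from langchain.prompts.base import StringPromptValue

value = StringPromptValue(text="Tell me a joke.")
value.to_string()    # the raw prompt text
value.to_messages()  # the same text wrapped as a single human message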
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
to_messages() → List[BaseMessage][source]¶
Return prompt as messages.
to_string() → str[source]¶
Return prompt as string.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
langchain.prompts.few_shot.FewShotChatMessagePromptTemplate¶
class langchain.prompts.few_shot.FewShotChatMessagePromptTemplate[source]¶
Bases: BaseChatPromptTemplate, _FewShotPromptTemplateMixin
Chat prompt template that supports few-shot examples.
The high-level structure produced by this prompt template is a list of messages
consisting of prefix message(s), example message(s), and suffix message(s).
This structure enables creating a conversation with intermediate examples like:
System: You are a helpful AI Assistant
Human: What is 2+2?
AI: 4
Human: What is 2+3?
AI: 5
Human: What is 4+4?
This prompt template can be used to generate a fixed list of examples or else
to dynamically select examples based on the input.
Examples
Prompt template with a fixed list of examples (matching the sample
conversation above):
from langchain.prompts import (
FewShotChatMessagePromptTemplate,
ChatPromptTemplate
)
examples = [
{"input": "2+2", "output": "4"},
{"input": "2+3", "output": "5"},
]
example_prompt = ChatPromptTemplate.from_messages(
[('human', '{input}'), ('ai', '{output}')]
)
few_shot_prompt = FewShotChatMessagePromptTemplate(
examples=examples,
# This is a prompt template used to format each individual example.
example_prompt=example_prompt,
)
final_prompt = ChatPromptTemplate.from_messages(
[
('system', 'You are a helpful AI Assistant'),
few_shot_prompt,
('human', '{input}'),
]
)
final_prompt.format(input="What is 4+4?")
Prompt template with dynamically selected examples:
from langchain.prompts import SemanticSimilarityExampleSelector
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
examples = [
{"input": "2+2", "output": "4"},
{"input": "2+3", "output": "5"},
{"input": "2+4", "output": "6"},
# ...
]
to_vectorize = [
" ".join(example.values())
for example in examples
]
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_texts(
to_vectorize, embeddings, metadatas=examples
)
example_selector = SemanticSimilarityExampleSelector(
vectorstore=vectorstore
)
from langchain.prompts import (
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from langchain.prompts.few_shot import FewShotChatMessagePromptTemplate
few_shot_prompt = FewShotChatMessagePromptTemplate(
# Which variable(s) will be passed to the example selector.
input_variables=["input"],
example_selector=example_selector,
# Define how each example will be formatted.
# In this case, each example will become 2 messages:
# 1 human, and 1 AI
example_prompt=(
HumanMessagePromptTemplate.from_template("{input}")
+ AIMessagePromptTemplate.from_template("{output}")
),
)
# Define the overall prompt.
final_prompt = (
SystemMessagePromptTemplate.from_template(
"You are a helpful AI Assistant"
)
+ few_shot_prompt
+ HumanMessagePromptTemplate.from_template("{input}")
)
# Show the prompt
print(final_prompt.format_messages(input="What's 3+3?"))
# Use within an LLM
from langchain.chat_models import ChatAnthropic
chain = final_prompt | ChatAnthropic()
chain.invoke({"input": "What's 3+3?"})
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param example_prompt: Union[BaseMessagePromptTemplate, BaseChatPromptTemplate] [Required]¶
The prompt template used to format each individual example.
param example_selector: Optional[BaseExampleSelector] = None¶
ExampleSelector to choose the examples to format into the prompt.
Either this or examples should be provided.
param examples: Optional[List[dict]] = None¶
Examples to format into the prompt.
Either this or example_selector should be provided.
param input_types: Dict[str, Any] [Optional]¶
A dictionary of the types of the variables the prompt template expects.
If not provided, all variables are assumed to be strings.
param input_variables: List[str] [Optional]¶
A list of the names of the variables the prompt template will use
to pass to the example_selector, if provided.
param output_parser: Optional[BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
format(**kwargs: Any) → str[source]¶
Format the prompt with inputs generating a string.
Use this method to generate a string representation of a prompt consisting
of chat messages.
Useful for feeding into a string-based completion language model or for debugging.
Parameters
**kwargs – keyword arguments to use for formatting.
Returns
A string representation of the prompt
format_messages(**kwargs: Any) → List[BaseMessage][source]¶
Format kwargs into a list of messages.
Parameters
**kwargs – keyword arguments to use for filling in templates in messages.
Returns
A list of formatted messages with all template variables filled in.
format_prompt(**kwargs: Any) → PromptValue¶
Format prompt. Should return a PromptValue.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
PromptValue.
classmethod from_orm(obj: Any) → Model¶
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output of the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool[source]¶
Return whether or not the class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶
Return a partial of the prompt template.
save(file_path: Union[Path, str]) → None¶
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
prompt.save(file_path="path/prompt.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain.schema.runnable.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Any¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
Examples using FewShotChatMessagePromptTemplate¶
Few-shot examples for chat models
langchain.prompts.chat.ChatMessagePromptTemplate¶
class langchain.prompts.chat.ChatMessagePromptTemplate[source]¶
Bases: BaseStringMessagePromptTemplate
Chat message prompt template.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
Additional keyword arguments to pass to the prompt template.
param prompt: langchain.prompts.base.StringPromptTemplate [Required]¶
String prompt template.
param role: str [Required]¶
Role of the message.
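Example (a minimal sketch; the role string is arbitrary):
from langchain.prompts import ChatMessagePromptTemplate

prompt = ChatMessagePromptTemplate.from_template(
    "May the {subject} be with you", role="Jedi"
)
# Produces a ChatMessage with the custom role filled in.
prompt.format(subject="force")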
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
format(**kwargs: Any) → BaseMessage[source]¶
Format the prompt template.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
Formatted message.
format_messages(**kwargs: Any) → List[BaseMessage]¶
Format messages from kwargs.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
List of BaseMessages.
classmethod from_orm(obj: Any) → Model¶
classmethod from_template(template: str, template_format: str = 'f-string', **kwargs: Any) → MessagePromptTemplateT¶
Create a class from a string template.
Parameters
template – a template.
template_format – format of the template.
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
classmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → MessagePromptTemplateT¶
Create a class from a template file.
Parameters
template_file – path to a template file. String or Path.
input_variables – list of input variables.
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Return whether or not the class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property input_variables: List[str]¶
Input variables for this prompt template.
Returns
List of input variable names.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
Examples using ChatMessagePromptTemplate¶
Types of `MessagePromptTemplate`
langchain.prompts.chat.ChatPromptValue¶
class langchain.prompts.chat.ChatPromptValue[source]¶
Bases: PromptValue
Chat prompt value.
A prompt value built from messages.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param messages: Sequence[langchain.schema.messages.BaseMessage] [Required]¶
List of messages.
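Example (a minimal sketch):
from langchain.prompts.chat import ChatPromptValue
from langchain.schema import HumanMessage, SystemMessage

value = ChatPromptValue(messages=[
    SystemMessage(content="You are terse."),
    HumanMessage(content="Define entropy."),
])
value.to_messages()  # the original message sequence
value.to_string()    # the messages rendered as one string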
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object. | lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptValue.html |
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
to_messages() → List[BaseMessage][source]¶
Return prompt as a list of messages.
to_string() → str[source]¶
Return prompt as string.
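Example
A minimal sketch (the message contents are illustrative; import paths follow this reference):
from langchain.prompts.chat import ChatPromptValue
from langchain.schema.messages import AIMessage, HumanMessage

# Build a prompt value directly from a list of messages.
value = ChatPromptValue(messages=[
    HumanMessage(content="What is the capital of France?"),
    AIMessage(content="Paris."),
])
value.to_messages()  # the underlying list of BaseMessages
value.to_string()    # a single string rendering of the conversation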
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
langchain.prompts.chat.ChatPromptValueConcrete¶
class langchain.prompts.chat.ChatPromptValueConcrete[source]¶
Bases: ChatPromptValue
Chat prompt value which explicitly lists out the message types it accepts.
For use in external schemas.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param messages: Sequence[Union[langchain.schema.messages.AIMessage, langchain.schema.messages.HumanMessage, langchain.schema.messages.ChatMessage, langchain.schema.messages.SystemMessage, langchain.schema.messages.FunctionMessage, langchain.schema.messages.ToolMessage]] [Required]¶
List of messages.
param type: Literal['ChatPromptValueConcrete'] = 'ChatPromptValueConcrete'¶
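Example
A short sketch; because messages is restricted to the concrete message classes listed above, this variant is suited to external schemas, as noted:
from langchain.prompts.chat import ChatPromptValueConcrete
from langchain.schema.messages import HumanMessage, SystemMessage

value = ChatPromptValueConcrete(messages=[
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Hello!"),
])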
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptValueConcrete.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is ["langchain", "llms", "openai"]
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object. | lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptValueConcrete.html |
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
to_messages() → List[BaseMessage]¶
Return prompt as a list of messages.
to_string() → str¶
Return prompt as string.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
langchain.prompts.example_selector.length_based.LengthBasedExampleSelector¶
class langchain.prompts.example_selector.length_based.LengthBasedExampleSelector[source]¶
Bases: BaseExampleSelector, BaseModel
Select examples based on length.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param example_prompt: langchain.prompts.prompt.PromptTemplate [Required]¶
Prompt template used to format the examples.
param examples: List[dict] [Required]¶
A list of the examples that the prompt template expects.
param get_text_length: Callable[[str], int] = <function _get_length_based>¶
Function to measure prompt length. Defaults to word count.
param max_length: int = 2048¶
Max length for the prompt, beyond which examples are cut.
add_example(example: Dict[str, str]) → None[source]¶
Add new example to list.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include | lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.length_based.LengthBasedExampleSelector.html |
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶ | lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.length_based.LengthBasedExampleSelector.html |
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
select_examples(input_variables: Dict[str, str]) → List[dict][source]¶
Select which examples to use based on the input lengths.
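Example
A usage sketch; the examples and max_length value are illustrative:
from langchain.prompts.example_selector.length_based import LengthBasedExampleSelector
from langchain.prompts.prompt import PromptTemplate

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
selector = LengthBasedExampleSelector(
    examples=[
        {"input": "happy", "output": "sad"},
        {"input": "tall", "output": "short"},
    ],
    example_prompt=example_prompt,
    max_length=25,  # measured by get_text_length (word count by default)
)
# Longer inputs leave less room under max_length, so fewer examples are returned.
selector.select_examples({"input": "enthusiastic"})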
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.length_based.LengthBasedExampleSelector.html |
langchain.prompts.chat.AIMessagePromptTemplate¶
class langchain.prompts.chat.AIMessagePromptTemplate[source]¶
Bases: BaseStringMessagePromptTemplate
AI message prompt template. This is a message sent from the AI.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
Additional keyword arguments to pass to the prompt template.
param prompt: langchain.prompts.base.StringPromptTemplate [Required]¶
String prompt template.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.AIMessagePromptTemplate.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
format(**kwargs: Any) → BaseMessage[source]¶
Format the prompt template.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
Formatted message.
format_messages(**kwargs: Any) → List[BaseMessage]¶
Format messages from kwargs.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
List of BaseMessages.
classmethod from_orm(obj: Any) → Model¶
classmethod from_template(template: str, template_format: str = 'f-string', **kwargs: Any) → MessagePromptTemplateT¶
Create a class from a string template.
Parameters
template – a template.
template_format – format of the template.
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
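For instance, a minimal sketch (the template text is illustrative):
from langchain.prompts.chat import AIMessagePromptTemplate

ai_template = AIMessagePromptTemplate.from_template("You said: {user_input}")
ai_template.format(user_input="hello")  # -> an AIMessage with the formatted content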
classmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → MessagePromptTemplateT¶
Create a class from a template file.
Parameters
template_file – path to a template file. String or Path.
input_variables – list of input variables.
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object. | lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.AIMessagePromptTemplate.html |
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is ["langchain", "llms", "openai"]
classmethod is_lc_serializable() → bool¶
Return whether or not the class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ | lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.AIMessagePromptTemplate.html |
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property input_variables: List[str]¶
Input variables for this prompt template.
Returns
List of input variable names.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
Examples using AIMessagePromptTemplate¶
Anthropic
🚅 LiteLLM
Konko
OpenAI
JinaChat
Figma | lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.AIMessagePromptTemplate.html |
langchain_experimental.comprehend_moderation.prompt_safety.ComprehendPromptSafety¶
class langchain_experimental.comprehend_moderation.prompt_safety.ComprehendPromptSafety(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None)[source]¶
Methods
__init__(client[, callback, unique_id, chain_id])
validate(prompt_value[, config])
Check and validate the safety of the given prompt text.
__init__(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None) → None[source]¶
validate(prompt_value: str, config: Any = None) → str[source]¶
Check and validate the safety of the given prompt text.
Parameters
prompt_value (str) – The input text to be checked for unsafe text.
config (Dict[str, Any]) – Configuration settings for prompt safety checks.
Raises
ValueError – If an unsafe prompt is found in the prompt text based
on the specified threshold.
Returns
The input prompt_value.
Return type
str
Note
This function checks the safety of the provided prompt text using
Comprehend’s classify_document API and raises an error if unsafe
text is detected with a score above the specified threshold.
Example
comprehend_client = boto3.client("comprehend")
prompt_text = "Please tell me your credit card information."
config = {"threshold": 0.7}
checked_prompt = check_prompt_safety(comprehend_client, prompt_text, config)
langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPromptSafetyConfig¶
class langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPromptSafetyConfig[source]¶
Bases: BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param threshold: float = 0.5¶
Threshold for Prompt Safety classification
confidence score, defaults to 0.5 i.e. 50%
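Example
For instance, to require 80% confidence before a prompt is flagged as unsafe:
from langchain_experimental.comprehend_moderation.base_moderation_config import (
    ModerationPromptSafetyConfig,
)

prompt_safety_config = ModerationPromptSafetyConfig(threshold=0.8)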
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPromptSafetyConfig.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPromptSafetyConfig.html |
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPromptSafetyConfig.html |
langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain¶
class langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain[source]¶
Bases: Chain
A subclass of Chain, designed to apply moderation to LLMs.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param client: Optional[Any] = None¶
boto3 client object for connection to Amazon Comprehend
param credentials_profile_name: Optional[str] = None¶
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
param input_key: str = 'input'¶
Key used to fetch/store the input in data containers. Defaults to input
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html |
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, for example, identify a specific instance of a chain with its use case.
param moderation_callback: Optional[langchain_experimental.comprehend_moderation.base_moderation_callbacks.BaseModerationCallbackHandler] = None¶
Callback handler for moderation; this is different
from regular callbacks, which can be used in addition to this.
param moderation_config: langchain_experimental.comprehend_moderation.base_moderation_config.BaseModerationConfig = BaseModerationConfig(filters=[ModerationPiiConfig(threshold=0.5, labels=[], redact=False, mask_character='*'), ModerationToxicityConfig(threshold=0.5, labels=[]), ModerationPromptSafetyConfig(threshold=0.5)])¶
Configuration settings for moderation,
defaults to BaseModerationConfig with default values
param output_key: str = 'output'¶
Key used to fetch/store the output in data containers. Defaults to output
param region_name: Optional[str] = None¶
The AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable
or the region specified in ~/.aws/config if it is not provided here.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks. | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html |
You can use these to, for example, identify a specific instance of a chain with its use case.
param unique_id: Optional[str] = None¶
A unique id that can be used to identify or group a user or session
param verbose: bool [Optional]¶
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the global verbose value,
accessible via langchain.globals.get_verbose().
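Example
A hedged construction sketch; the region is a placeholder, and omitting moderation_config applies the default filters described above:
from langchain_experimental.comprehend_moderation.amazon_comprehend_moderation import (
    AmazonComprehendModerationChain,
)

# With no moderation_config, the default BaseModerationConfig applies
# (PII, toxicity, and prompt-safety filters at 0.5 thresholds).
moderation_chain = AmazonComprehendModerationChain(
    region_name="us-east-1",  # placeholder region
    verbose=True,
)
# Running the chain on text raises if a configured filter flags it.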
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html |
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects. | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html |
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html |
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output. | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html |
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode. | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html |
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html |
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_orm(obj: Any) → Model¶
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows getting an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is ["langchain", "llms", "openai"]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html |
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows getting an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes. | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html |
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs. | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html |
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain. | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html |
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html |
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
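Example
A sketch; primary_chain and backup_chain stand in for any Runnables:
# If primary_chain raises an Exception, backup_chain is tried with the same input.
guarded = primary_chain.with_fallbacks([backup_chain])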
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
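Example
A sketch; chain stands in for any Runnable:
# Retry up to 3 attempts with jittered exponential backoff,
# but only when a ValueError is raised.
resilient = chain.with_retry(
    retry_if_exception_type=(ValueError,),
    wait_exponential_jitter=True,
    stop_after_attempt=3,
)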
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html |
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain.schema.runnable.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain.schema.runnable.utils.Output]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_keys: List[str]¶
Returns a list of input keys expected by the prompt.
This method defines the input keys that the prompt expects in order to perform
its processing. It ensures that the specified keys are available for providing
input to the prompt.
Returns
A list of input keys.
Return type
List[str]
Note
This method is considered private and may not be intended for direct
external use.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
property output_keys: List[str]¶
Returns a list of output keys.
This method defines the output keys that will be used to access the output
values produced by the chain or function. It ensures that the specified keys
are available to access the outputs.
Returns
A list of output keys.
Return type
List[str]
Note
This method is considered private and may not be intended for direct
external use.
property output_schema: Type[pydantic.main.BaseModel]¶ | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html |
The type of output this runnable produces specified as a pydantic model. | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain.html |
langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPiiConfig¶
class langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPiiConfig[source]¶
Bases: BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param labels: List[str] = []¶
List of PII Universal Labels.
Defaults to an empty list.
param mask_character: str = '*'¶
Redaction mask character in case redact=True, defaults to asterisk (*)
param redact: bool = False¶
Whether to perform redaction of detected PII entities
param threshold: float = 0.5¶
Threshold for PII confidence score, defaults to 0.5 i.e. 50%
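Example
A short sketch; the label names are illustrative Comprehend PII entity types:
from langchain_experimental.comprehend_moderation.base_moderation_config import (
    ModerationPiiConfig,
)

# Redact SSNs and card numbers with 'X' once confidence exceeds 0.6.
pii_config = ModerationPiiConfig(
    labels=["SSN", "CREDIT_DEBIT_NUMBER"],  # illustrative labels
    threshold=0.6,
    redact=True,
    mask_character="X",
)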
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPiiConfig.html |
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPiiConfig.html |
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPiiConfig.html |
langchain_experimental.comprehend_moderation.pii.ComprehendPII¶
class langchain_experimental.comprehend_moderation.pii.ComprehendPII(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None)[source]¶
Methods
__init__(client[, callback, unique_id, chain_id])
validate(prompt_value[, config])
__init__(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None) → None[source]¶
validate(prompt_value: str, config: Any = None) → str[source]¶ | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.pii.ComprehendPII.html |
langchain_experimental.comprehend_moderation.base_moderation_config.BaseModerationConfig¶
class langchain_experimental.comprehend_moderation.base_moderation_config.BaseModerationConfig[source]¶
Bases: BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param filters: List[Union[langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPiiConfig, langchain_experimental.comprehend_moderation.base_moderation_config.ModerationToxicityConfig, langchain_experimental.comprehend_moderation.base_moderation_config.ModerationPromptSafetyConfig]] = [ModerationPiiConfig(threshold=0.5, labels=[], redact=False, mask_character='*'), ModerationToxicityConfig(threshold=0.5, labels=[]), ModerationPromptSafetyConfig(threshold=0.5)]¶
Filters applied to the moderation chain, defaults to
[ModerationPiiConfig(), ModerationToxicityConfig(),
ModerationPromptSafetyConfig()]
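A minimal configuration sketch, assuming the module path above; the threshold and redaction values are illustrative, not prescriptive:
# Hedged sketch: threshold/redaction values are illustrative assumptions.
from langchain_experimental.comprehend_moderation.base_moderation_config import (
    BaseModerationConfig,
    ModerationPiiConfig,
    ModerationToxicityConfig,
)

moderation_config = BaseModerationConfig(
    filters=[
        # Redact detected PII with 'X' instead of failing the chain
        ModerationPiiConfig(threshold=0.5, redact=True, mask_character="X"),
        # Require 80% confidence before flagging toxicity
        ModerationToxicityConfig(threshold=0.8),
    ]
)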
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.BaseModerationConfig.html |
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.BaseModerationConfig.html |
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.BaseModerationConfig.html |
langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationPiiError¶
class langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationPiiError(message: str = 'The prompt contains PII entities and cannot be processed')[source]¶
Exception raised if PII entities are detected.
message -- explanation of the error | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationPiiError.html |
langchain_experimental.comprehend_moderation.toxicity.ComprehendToxicity¶
class langchain_experimental.comprehend_moderation.toxicity.ComprehendToxicity(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None)[source]¶
Methods
__init__(client[, callback, unique_id, chain_id])
validate(prompt_value[, config])
Check the toxicity of a given text prompt using AWS Comprehend service and apply actions based on configuration.
__init__(client: Any, callback: Optional[Any] = None, unique_id: Optional[str] = None, chain_id: Optional[str] = None) → None[source]¶
validate(prompt_value: str, config: Any = None) → str[source]¶
Check the toxicity of a given text prompt using AWS
Comprehend service and apply actions based on configuration.
:param prompt_value: The text content to be checked for toxicity.
:type prompt_value: str
:param config: Configuration for toxicity checks and actions.
:type config: Dict[str, Any]
Returns
The original prompt_value if allowed or no toxicity found.
Return type
str
Raises
ValueError – If the prompt contains toxic labels and cannot be
processed based on the configuration.
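A hedged usage sketch follows; the boto3 client setup, region, and default-config behavior are assumptions for illustration (ComprehendPII.validate above follows the same pattern):
# Hedged sketch: boto3 client setup and region are illustrative assumptions.
import boto3
from langchain_experimental.comprehend_moderation.toxicity import ComprehendToxicity

comprehend_client = boto3.client("comprehend", region_name="us-east-1")
toxicity_check = ComprehendToxicity(client=comprehend_client)

# Returns the prompt unchanged if no toxicity is found; otherwise applies
# the configured action (e.g. raising an error).
checked = toxicity_check.validate("Hello, how are you today?")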
langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationPromptSafetyError¶
class langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationPromptSafetyError(message: str = 'The prompt is unsafe and cannot be processed')[source]¶
Exception raised if an unsafe prompt is detected.
message -- explanation of the error | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationPromptSafetyError.html |
langchain_experimental.comprehend_moderation.base_moderation_config.ModerationToxicityConfig¶
class langchain_experimental.comprehend_moderation.base_moderation_config.ModerationToxicityConfig[source]¶
Bases: BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param labels: List[str] = []¶
List of toxic labels, defaults to an empty list
param threshold: float = 0.5¶
Threshold for the toxic label confidence score, defaults to 0.5, i.e. 50%
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationToxicityConfig.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationToxicityConfig.html |
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶ | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_config.ModerationToxicityConfig.html |
langchain_experimental.comprehend_moderation.base_moderation.BaseModeration¶
class langchain_experimental.comprehend_moderation.base_moderation.BaseModeration(client: Any, config: Optional[Any] = None, moderation_callback: Optional[Any] = None, unique_id: Optional[str] = None, run_manager: Optional[CallbackManagerForChainRun] = None)[source]¶
Methods
__init__(client[, config, ...])
moderate(prompt)
__init__(client: Any, config: Optional[Any] = None, moderation_callback: Optional[Any] = None, unique_id: Optional[str] = None, run_manager: Optional[CallbackManagerForChainRun] = None)[source]¶
moderate(prompt: Any) → str[source]¶ | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation.BaseModeration.html |
langchain_experimental.comprehend_moderation.base_moderation_callbacks.BaseModerationCallbackHandler¶
class langchain_experimental.comprehend_moderation.base_moderation_callbacks.BaseModerationCallbackHandler[source]¶
Attributes
pii_callback
prompt_safety_callback
toxicity_callback
Methods
__init__()
on_after_pii(moderation_beacon, unique_id, ...)
Run after PII validation is complete.
on_after_prompt_safety(moderation_beacon, ...)
Run after Prompt Safety validation is complete.
on_after_toxicity(moderation_beacon, ...)
Run after Toxicity validation is complete.
__init__() → None[source]¶
async on_after_pii(moderation_beacon: Dict[str, Any], unique_id: str, **kwargs: Any) → None[source]¶
Run after PII validation is complete.
async on_after_prompt_safety(moderation_beacon: Dict[str, Any], unique_id: str, **kwargs: Any) → None[source]¶
Run after Prompt Safety validation is complete.
async on_after_toxicity(moderation_beacon: Dict[str, Any], unique_id: str, **kwargs: Any) → None[source]¶
Run after Toxicity validation is complete. | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_callbacks.BaseModerationCallbackHandler.html |
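A minimal subclass sketch using the documented async hook signature; the handling inside the hook is illustrative:
# Hedged sketch: the audit handling inside the hook is illustrative.
from typing import Any, Dict

from langchain_experimental.comprehend_moderation.base_moderation_callbacks import (
    BaseModerationCallbackHandler,
)

class AuditingCallback(BaseModerationCallbackHandler):
    async def on_after_pii(
        self, moderation_beacon: Dict[str, Any], unique_id: str, **kwargs: Any
    ) -> None:
        # Record that the PII check finished for this run
        print(f"PII validation complete for run {unique_id}")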
langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationToxicityError¶
class langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationToxicityError(message: str = 'The prompt contains toxic content and cannot be processed')[source]¶
Exception raised if Toxic entities are detected.
message -- explanation of the error | lang/api.python.langchain.com/en/latest/comprehend_moderation/langchain_experimental.comprehend_moderation.base_moderation_exceptions.ModerationToxicityError.html |
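A hedged handling sketch for the three exception types documented in this module; moderation_chain is a hypothetical stand-in for whatever chain applies these checks:
# Hedged sketch: `moderation_chain` and `prompt` are hypothetical stand-ins.
from langchain_experimental.comprehend_moderation.base_moderation_exceptions import (
    ModerationPiiError,
    ModerationPromptSafetyError,
    ModerationToxicityError,
)

try:
    response = moderation_chain.run(prompt)
except ModerationPiiError as e:
    print(f"Blocked for PII: {e.message}")
except ModerationToxicityError as e:
    print(f"Blocked for toxicity: {e.message}")
except ModerationPromptSafetyError as e:
    print(f"Blocked as unsafe: {e.message}")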
langchain_experimental.smart_llm.base.SmartLLMChain¶
class langchain_experimental.smart_llm.base.SmartLLMChain[source]¶
Bases: Chain
Generalized implementation of SmartGPT (origin: https://youtu.be/wVzuvf9D9BU).
A SmartLLMChain is an LLMChain that, instead of simply passing the prompt to the LLM,
performs these 3 steps:
1. Ideate: Pass the user prompt to an ideation LLM n_ideas times;
each result is an “idea”.
2. Critique: Pass the ideas to a critique LLM, which looks for flaws in the ideas
and picks the best one.
3. Resolve: Pass the critique to a resolver LLM, which improves upon the best idea
and outputs only the (improved version of) the best output.
In total, a SmartLLMChain pass will use n_ideas + 2 LLM calls.
Note that a SmartLLMChain will only improve results (compared to a basic LLMChain)
when the underlying models have the capability for reflection, which smaller models
often don’t.
Finally, a SmartLLMChain assumes that each underlying LLM outputs exactly 1 result.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
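A minimal construction sketch follows; the ChatOpenAI model, the puzzle prompt, and the temperature are illustrative assumptions, not part of this reference:
# Hedged usage sketch: model choice and prompt are illustrative assumptions.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain_experimental.smart_llm import SmartLLMChain

prompt = PromptTemplate.from_template(
    "I have a 12 liter jug and a 6 liter jug. "
    "I want to measure 6 liters. How do I do it?"
)
chain = SmartLLMChain(
    llm=ChatOpenAI(temperature=0.9),  # ideation benefits from some temperature
    prompt=prompt,
    n_ideas=3,  # 3 ideation calls + 1 critique + 1 resolve = 5 LLM calls
    verbose=True,
)
result = chain.run({})  # prompt has no input variables, so pass an empty dict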
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details. | lang/api.python.langchain.com/en/latest/smart_llm/langchain_experimental.smart_llm.base.SmartLLMChain.html |
param critique_llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
LLM to use in critique step. If None given, ‘llm’ will be used.
param history: langchain_experimental.smart_llm.base.SmartLLMChain.SmartLLMChainHistory = <langchain_experimental.smart_llm.base.SmartLLMChain.SmartLLMChainHistory object>¶
param ideation_llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
LLM to use in ideation step. If None given, ‘llm’ will be used.
param llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
LLM to use for each steps, if no specific llm for that step is given.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param n_ideas: int = 3¶
Number of ideas to generate in idea step
param prompt: langchain.schema.prompt_template.BasePromptTemplate [Required]¶
Prompt object to use. | lang/api.python.langchain.com/en/latest/smart_llm/langchain_experimental.smart_llm.base.SmartLLMChain.html |
param resolver_llm: Optional[langchain.schema.language_model.BaseLanguageModel] = None¶
LLM to use in resolve step. If None given, ‘llm’ will be used.
param return_intermediate_steps: bool = False¶
Whether to return ideas and critique, in addition to resolution.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the global verbose value,
accessible via langchain.globals.get_verbose().
class SmartLLMChainHistory[source]¶
Bases: object
critique_prompt_inputs() → Dict[str, Any][source]¶
ideation_prompt_inputs() → Dict[str, Any][source]¶
resolve_prompt_inputs() → Dict[str, Any][source]¶
critique: str = ''¶
ideas: List[str] = []¶
property n_ideas: int¶
question: str = ''¶
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in | lang/api.python.langchain.com/en/latest/smart_llm/langchain_experimental.smart_llm.base.SmartLLMChain.html |
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode. | lang/api.python.langchain.com/en/latest/smart_llm/langchain_experimental.smart_llm.base.SmartLLMChain.html |
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶
Default implementation of ainvoke, calls invoke from a thread. | lang/api.python.langchain.com/en/latest/smart_llm/langchain_experimental.smart_llm.base.SmartLLMChain.html |
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..." | lang/api.python.langchain.com/en/latest/smart_llm/langchain_experimental.smart_llm.base.SmartLLMChain.html |
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream. | lang/api.python.langchain.com/en/latest/smart_llm/langchain_experimental.smart_llm.base.SmartLLMChain.html |
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed. | lang/api.python.langchain.com/en/latest/smart_llm/langchain_experimental.smart_llm.base.SmartLLMChain.html |
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
critique_prompt() → ChatPromptTemplate[source]¶
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_orm(obj: Any) → Model¶
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows getting an input schema for a specific configuration.
Parameters | lang/api.python.langchain.com/en/latest/smart_llm/langchain_experimental.smart_llm.base.SmartLLMChain.html |
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows getting an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
get_prompt_strings(stage: str) → List[Tuple[Type[BaseMessagePromptTemplate], str]][source]¶
ideation_prompt() → ChatPromptTemplate[source]¶
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Is this class serializable? | lang/api.python.langchain.com/en/latest/smart_llm/langchain_experimental.smart_llm.base.SmartLLMChain.html |
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects | lang/api.python.langchain.com/en/latest/smart_llm/langchain_experimental.smart_llm.base.SmartLLMChain.html |
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
prep_prompts(inputs: Dict[str, Any], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[PromptValue, Optional[List[str]]][source]¶
Prepare prompts from inputs.
resolve_prompt() → ChatPromptTemplate[source]¶
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only | lang/api.python.langchain.com/en/latest/smart_llm/langchain_experimental.smart_llm.base.SmartLLMChain.html |
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output. | lang/api.python.langchain.com/en/latest/smart_llm/langchain_experimental.smart_llm.base.SmartLLMChain.html |
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
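A hedged sketch of attaching fallbacks; primary_chain and backup_chain are assumed to be any two runnables with compatible input and output types:
# Hedged sketch: primary_chain and backup_chain are hypothetical runnables.
resilient_chain = primary_chain.with_fallbacks(
    [backup_chain],
    exceptions_to_handle=(TimeoutError, ConnectionError),
)
result = resilient_chain.invoke({"question": "..."})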
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object. | lang/api.python.langchain.com/en/latest/smart_llm/langchain_experimental.smart_llm.base.SmartLLMChain.html |
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
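A hedged sketch using the parameters above; the exception type to retry on is an illustrative choice:
# Hedged sketch: the exception type to retry on is an illustrative choice.
retrying_chain = chain.with_retry(
    retry_if_exception_type=(ConnectionError,),
    wait_exponential_jitter=True,  # add jitter to the backoff between attempts
    stop_after_attempt=3,          # give up after 3 tries
)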
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain.schema.runnable.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain.schema.runnable.utils.Output]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_keys: List[str]¶
Defines the input keys.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶ | lang/api.python.langchain.com/en/latest/smart_llm/langchain_experimental.smart_llm.base.SmartLLMChain.html |
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
property output_keys: List[str]¶
Defines the output keys.
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model. | lang/api.python.langchain.com/en/latest/smart_llm/langchain_experimental.smart_llm.base.SmartLLMChain.html |
langchain.storage.encoder_backed.EncoderBackedStore¶
class langchain.storage.encoder_backed.EncoderBackedStore(store: BaseStore[str, Any], key_encoder: Callable[[K], str], value_serializer: Callable[[V], bytes], value_deserializer: Callable[[Any], V])[source]¶
Wraps a store with key and value encoders/decoders.
Example that uses JSON for encoding/decoding:
import json

def key_encoder(key: int) -> str:
    return json.dumps(key)

def value_serializer(value: float) -> str:
    return json.dumps(value)

def value_deserializer(serialized_value: str) -> float:
    return json.loads(serialized_value)

# Create an instance of the abstract store
abstract_store = MyCustomStore()

# Create an instance of the encoder-backed store
store = EncoderBackedStore(
    store=abstract_store,
    key_encoder=key_encoder,
    value_serializer=value_serializer,
    value_deserializer=value_deserializer,
)

# Use the encoder-backed store methods
store.mset([(1, 3.14), (2, 2.718)])
values = store.mget([1, 2])  # Retrieves [3.14, 2.718]
store.mdelete([1, 2])  # Deletes the keys 1 and 2
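A runnable variant of the example above, assuming langchain's InMemoryStore as the underlying store (an assumption; any BaseStore[str, Any] implementation should work):
# Hedged sketch: assumes InMemoryStore is importable from langchain.storage.
import json

from langchain.storage import InMemoryStore
from langchain.storage.encoder_backed import EncoderBackedStore

store = EncoderBackedStore(
    store=InMemoryStore(),
    key_encoder=lambda key: json.dumps(key),
    value_serializer=lambda value: json.dumps(value),
    value_deserializer=lambda blob: json.loads(blob),
)
store.mset([(1, 3.14)])
assert store.mget([1]) == [3.14]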
Initialize an EncoderBackedStore.
Methods
__init__(store, key_encoder, ...)
Initialize an EncoderBackedStore.
mdelete(keys)
Delete the given keys and their associated values.
mget(keys)
Get the values associated with the given keys.
mset(key_value_pairs)
Set the values for the given keys.
yield_keys(*[, prefix])
Get an iterator over keys that match the given prefix. | lang/api.python.langchain.com/en/latest/storage/langchain.storage.encoder_backed.EncoderBackedStore.html |