langchain.schema.runnable.passthrough.RunnablePassthrough¶
class langchain.schema.runnable.passthrough.RunnablePassthrough[source]¶
A runnable to passthrough inputs unchanged or with additional keys.

def fake_llm(prompt: str) -> str:
    # Fake LLM for the example; always returns the same completion.
    return 'completion'

runnable = {
    'llm1': fake_llm,
    'llm2': fake_llm,
} | RunnablePassthrough.assign(
    total_chars=lambda inputs: len(inputs['llm1'] + inputs['llm2'])
)
runnable.invoke('hello')
# {'llm1': 'completion', 'llm2': 'completion', 'total_chars': 20}
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param afunc: Optional[Union[Callable[[langchain.schema.runnable.base.Other], Awaitable[None]], Callable[[langchain.schema.runnable.base.Other, langchain.schema.runnable.config.RunnableConfig], Awaitable[None]]]] = None¶
param func: Optional[Union[Callable[[langchain.schema.runnable.base.Other], None], Callable[[langchain.schema.runnable.base.Other, langchain.schema.runnable.config.RunnableConfig], None]]] = None¶
param input_type: Optional[Type[langchain.schema.runnable.base.Other]] = None¶
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
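A short async sketch (the asyncio entry point is assumed; RunnablePassthrough serves only as a trivially invokable runnable):

import asyncio
from langchain.schema.runnable import RunnablePassthrough

async def main() -> None:
    runnable = RunnablePassthrough()
    # All three inputs are awaited concurrently via asyncio.gather.
    results = await runnable.abatch(['a', 'b', 'c'])
    print(results)  # ['a', 'b', 'c']

asyncio.run(main())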
async ainvoke(input: Other, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Other[source]¶
Default implementation of ainvoke, which calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
classmethod assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableAssign[source]¶
Merge the Dict input with the output produced by the mapping argument.
Parameters
mapping – A mapping from keys to runnables or callables.
Returns
A runnable that merges the Dict input with the output produced by the
mapping argument.
async astream(input: Other, config: Optional[RunnableConfig] = None, **kwargs: Any) → AsyncIterator[Other][source]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
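A minimal consumption sketch (an asyncio context is assumed; with diff=True, the default, each yielded chunk is a RunLogPatch):

import asyncio
from langchain.schema.runnable import RunnablePassthrough

async def main() -> None:
    runnable = RunnablePassthrough()
    # Each patch carries jsonpatch ops describing how the run state changed.
    async for patch in runnable.astream_log('hello'):
        print(patch)

asyncio.run(main())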
async atransform(input: AsyncIterator[Other], config: Optional[RunnableConfig] = None, **kwargs: Any) → AsyncIterator[Other][source]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
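For example, a hedged sketch of pre-binding a keyword argument (ChatOpenAI and its stop parameter are purely illustrative; any runnable whose invoke accepts the kwarg works the same way):

from langchain.chat_models import ChatOpenAI

model = ChatOpenAI()
# The bound kwargs are merged into every subsequent invoke/stream/batch call.
bound = model.bind(stop=['Observation:'])
bound.invoke('Count to ten:')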
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
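For instance (a sketch; the exact JSON schema emitted depends on the runnable's declared input type):

from langchain.schema.runnable import RunnablePassthrough

schema = RunnablePassthrough().get_input_schema()
print(schema.schema())  # a JSON-schema dict describing the accepted input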
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output of the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Other, config: Optional[RunnableConfig] = None, **kwargs: Any) → Other[source]¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool[source]¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
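A small sketch (RunnableLambda is used purely for illustration):

from langchain.schema.runnable import RunnableLambda

runnable = RunnableLambda(lambda x: x + 1)
# map() lifts the runnable so invoke() takes a list and returns a list.
runnable.map().invoke([1, 2, 3])  # [2, 3, 4]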
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Other, config: Optional[RunnableConfig] = None, **kwargs: Any) → Iterator[Other][source]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Other], config: Optional[RunnableConfig] = None, **kwargs: Any) → Iterator[Other][source]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
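For illustration, a hedged sketch in which a failing primary falls back to a backup (both lambdas are stand-ins for real runnables):

from langchain.schema.runnable import RunnableLambda

def always_fail(x: str) -> str:
    raise ValueError('primary failed')

primary = RunnableLambda(always_fail)
backup = RunnableLambda(lambda x: f'fallback: {x}')
# backup runs only if primary raises one of the handled exception types.
chain = primary.with_fallbacks([backup])
chain.invoke('hello')  # 'fallback: hello'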
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
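A minimal sketch (the print callbacks stand in for real listeners; each receives the Run object):

from langchain.schema.runnable import RunnableLambda

runnable = RunnableLambda(lambda x: x * 2)
observed = runnable.with_listeners(
    on_start=lambda run: print('started run', run.id),
    on_end=lambda run: print('finished run', run.id),
)
observed.invoke(21)  # prints the run id before and after, then returns 42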
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
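For example, a hedged sketch with a transiently failing function (the attempt counter is purely illustrative):

from langchain.schema.runnable import RunnableLambda

attempts = {'count': 0}

def flaky(x: int) -> int:
    attempts['count'] += 1
    if attempts['count'] < 3:
        raise ValueError('transient failure')
    return x

# Retry on ValueError, up to three attempts, with jittered exponential backoff.
runnable = RunnableLambda(flaky).with_retry(
    retry_if_exception_type=(ValueError,),
    stop_after_attempt=3,
)
runnable.invoke(7)  # succeeds on the third attempt and returns 7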
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Any¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Any¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
Examples using RunnablePassthrough¶
First we add a step to load memory
prompt_llm_parser.md
multiple_chains.md
langchain.schema.callbacks.manager.tracing_v2_enabled¶
langchain.schema.callbacks.manager.tracing_v2_enabled(project_name: Optional[str] = None, *, example_id: Optional[Union[str, UUID]] = None, tags: Optional[List[str]] = None, client: Optional[LangSmithClient] = None) → Generator[LangChainTracer, None, None][source]¶
Instruct LangChain to log all runs in context to LangSmith.
Parameters
project_name (str, optional) – The name of the project.
Defaults to “default”.
example_id (str or UUID, optional) – The ID of the example.
Defaults to None.
tags (List[str], optional) – The tags to add to the run.
Defaults to None.
Returns
None
Example
>>> with tracing_v2_enabled():
... # LangChain code will automatically be traced
You can use this to fetch the LangSmith run URL:
>>> with tracing_v2_enabled() as cb:
... chain.invoke("foo")
... run_url = cb.get_run_url()
Examples using tracing_v2_enabled¶
LangSmith Walkthrough
langchain.schema.callbacks.tracers.stdout.try_json_stringify¶
langchain.schema.callbacks.tracers.stdout.try_json_stringify(obj: Any, fallback: str) → str[source]¶
Try to stringify an object to JSON.
:param obj: Object to stringify.
:param fallback: Fallback string to return if the object cannot be stringified.
Returns
A JSON string if the object can be stringified, otherwise the fallback string.
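For instance (the exact whitespace of the returned JSON is an implementation detail; treat the outputs as a sketch):

from langchain.schema.callbacks.tracers.stdout import try_json_stringify

try_json_stringify({'a': 1}, '<unserializable>')  # a JSON string such as '{"a": 1}'
try_json_stringify(object(), '<unserializable>')  # '<unserializable>'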
langchain.schema.runnable.utils.get_lambda_source¶
langchain.schema.runnable.utils.get_lambda_source(func: Callable) → Optional[str][source]¶
Get the source code of a lambda function.
Parameters
func – a callable that can be a lambda function
Returns
the source code of the lambda function
Return type
str
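For example (it may return None when the source cannot be recovered, e.g. in a REPL):

from langchain.schema.runnable.utils import get_lambda_source

add_one = lambda x: x + 1
get_lambda_source(add_one)  # "lambda x: x + 1"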
langchain.schema.callbacks.tracers.run_collector.RunCollectorCallbackHandler¶
class langchain.schema.callbacks.tracers.run_collector.RunCollectorCallbackHandler(example_id: Optional[Union[UUID, str]] = None, **kwargs: Any)[source]¶
A tracer that collects all nested runs in a list.
This tracer is useful for inspection and evaluation purposes.
Parameters
example_id (Optional[Union[UUID, str]], default=None) – The ID of the example being traced. It can be either a UUID or a string.
Initialize the RunCollectorCallbackHandler.
Parameters
example_id (Optional[Union[UUID, str]], default=None) – The ID of the example being traced. It can be either a UUID or a string.
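A hedged usage sketch (the collected Run objects land on the traced_runs attribute; the lambda chain is a stand-in for any runnable):

from langchain.schema.callbacks.tracers.run_collector import RunCollectorCallbackHandler
from langchain.schema.runnable import RunnableLambda

collector = RunCollectorCallbackHandler()
chain = RunnableLambda(lambda x: x.upper())
chain.invoke('hello', config={'callbacks': [collector]})
print(collector.traced_runs)  # the Run objects gathered during the invocation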
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
name
raise_error
run_inline
Methods
__init__([example_id])
Initialize the RunCollectorCallbackHandler.
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, inputs])
End a trace for a chain run.
on_chain_error(error, *[, inputs])
Handle an error for a chain run.
on_chain_start(serialized, inputs, *, run_id)
Start a trace for a chain run.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, *, run_id, **kwargs)
End a trace for an LLM run.
on_llm_error(error, *, run_id, **kwargs)
Handle an error for an LLM run.
on_llm_new_token(token, *[, chunk, ...])
Run on new LLM token.
on_llm_start(serialized, prompts, *, run_id)
Start a trace for an LLM run.
on_retriever_end(documents, *, run_id, **kwargs)
Run when Retriever ends running.
on_retriever_error(error, *, run_id, **kwargs)
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id, **kwargs)
Run on a retry event.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id, **kwargs)
End a trace for a tool run.
on_tool_error(error, *, run_id, **kwargs)
Handle an error for a tool run.
on_tool_start(serialized, input_str, *, run_id)
Start a trace for a tool run.
__init__(example_id: Optional[Union[UUID, str]] = None, **kwargs: Any) → None[source]¶
Initialize the RunCollectorCallbackHandler.
Parameters
example_id (Optional[Union[UUID, str]], default=None) – The ID of the example being traced. It can be either a UUID or a string.
on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent action.
on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent end.
on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, inputs: Optional[Dict[str, Any]] = None, **kwargs: Any) → Run¶
End a trace for a chain run.
on_chain_error(error: BaseException, *, inputs: Optional[Dict[str, Any]] = None, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for a chain run.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, run_type: Optional[str] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for a chain run.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → Run¶
End a trace for an LLM run.
on_llm_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for an LLM run.
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Run¶
Run on new LLM token. Only available when streaming is enabled.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for an LLM run.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → Run¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, **kwargs: Any) → Run¶
Run on a retry event.
on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on arbitrary text.
on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → Run¶
End a trace for a tool run.
on_tool_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for a tool run.
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for a tool run.
langchain.schema.messages.ChatMessageChunk¶
class langchain.schema.messages.ChatMessageChunk[source]¶
Bases: ChatMessage, BaseMessageChunk
A Chat Message chunk.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
Any additional information.
param content: Union[str, List[Union[str, Dict]]] [Required]¶
The string contents of the message.
param role: str [Required]¶
The speaker / role of the Message.
param type: Literal['ChatMessageChunk'] = 'ChatMessageChunk'¶
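Message chunks support concatenation with +, which merges the content of chunks sharing the same role; a short sketch:

from langchain.schema.messages import ChatMessageChunk

left = ChatMessageChunk(role='assistant', content='Hello ')
right = ChatMessageChunk(role='assistant', content='world')
(left + right).content  # 'Hello world'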
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
langchain.schema.chat_history.BaseChatMessageHistory¶
class langchain.schema.chat_history.BaseChatMessageHistory[source]¶
Abstract base class for storing chat message history.
See ChatMessageHistory for default implementation.
Example
class FileChatMessageHistory(BaseChatMessageHistory):
    storage_path: str
    session_id: str

    @property
    def messages(self):
        with open(os.path.join(self.storage_path, self.session_id), encoding='utf-8') as f:
            messages = json.loads(f.read())
        return messages_from_dict(messages)

    def add_message(self, message: BaseMessage) -> None:
        # Load the stored dicts, append the new message, and write back.
        all_messages = messages_to_dict(self.messages)
        all_messages.append(message_to_dict(message))
        with open(os.path.join(self.storage_path, self.session_id), 'w') as f:
            json.dump(all_messages, f)

    def clear(self) -> None:
        with open(os.path.join(self.storage_path, self.session_id), 'w') as f:
            f.write("[]")
Attributes
messages
A list of Messages stored in-memory.
Methods
__init__()
add_ai_message(message)
Convenience method for adding an AI message string to the store.
add_message(message)
Add a Message object to the store.
add_user_message(message)
Convenience method for adding a human message string to the store.
clear()
Remove all messages from the store
__init__()¶
add_ai_message(message: str) → None[source]¶
Convenience method for adding an AI message string to the store.
Parameters
message – The string contents of an AI message.
abstract add_message(message: BaseMessage) → None[source]¶
Add a Message object to the store.
Parameters
message – A BaseMessage object to store.
add_user_message(message: str) → None[source]¶
Convenience method for adding a human message string to the store.
Parameters
message – The string contents of a human message.
abstract clear() → None[source]¶
Remove all messages from the store
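A short usage sketch with the in-memory default implementation (ChatMessageHistory from langchain.memory):

from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()
history.add_user_message('hi!')
history.add_ai_message('hello, how can I help?')
history.messages  # [HumanMessage(content='hi!'), AIMessage(content='hello, how can I help?')]
history.clear()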
langchain.schema.prompt_template.BasePromptTemplate¶
class langchain.schema.prompt_template.BasePromptTemplate[source]¶
Bases: RunnableSerializable[Dict, PromptValue], ABC
Base class for all prompt templates, returning a prompt.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param input_types: Dict[str, Any] [Optional]¶
A dictionary of the types of the variables the prompt template expects.
If not provided, all variables are assumed to be strings.
param input_variables: List[str] [Required]¶
A list of the names of the variables the prompt template expects.
param output_parser: Optional[langchain.schema.output_parser.BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶
Default implementation of ainvoke, which calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict[source]¶
Return dictionary representation of prompt.
abstract format(**kwargs: Any) → str[source]¶
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
abstract format_prompt(**kwargs: Any) → PromptValue[source]¶
Create Chat Messages.
classmethod from_orm(obj: Any) → Model¶
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel][source]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output of the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue[source]¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool[source]¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate[source]¶
Return a partial of the prompt template.
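For example (PromptTemplate is the usual concrete subclass):

from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template('Tell me a {adjective} joke about {content}.')
partial_prompt = prompt.partial(adjective='funny')
partial_prompt.format(content='chickens')  # 'Tell me a funny joke about chickens.'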
save(file_path: Union[Path, str]) → None[source]¶
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
prompt.save(file_path="path/prompt.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain.schema.runnable.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Any¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
Examples using BasePromptTemplate¶
Custom chain
langchain.schema.runnable.branch.RunnableBranch¶
class langchain.schema.runnable.branch.RunnableBranch[source]¶
Bases: RunnableSerializable[Input, Output]
A Runnable that selects which branch to run based on a condition.
The runnable is initialized with a list of (condition, runnable) pairs and
a default branch.
When operating on an input, the first condition that evaluates to True is
selected, and the corresponding runnable is run on the input.
If no condition evaluates to True, the default branch is run on the input.
Examples
from langchain.schema.runnable import RunnableBranch
branch = RunnableBranch(
    (lambda x: isinstance(x, str), lambda x: x.upper()),
    (lambda x: isinstance(x, int), lambda x: x + 1),
    (lambda x: isinstance(x, float), lambda x: x * 2),
    lambda x: "goodbye",
)
branch.invoke("hello") # "HELLO"
branch.invoke(None) # "goodbye"
A Runnable that runs one of its branches based on a condition.
param branches: Sequence[Tuple[langchain.schema.runnable.base.Runnable[langchain.schema.runnable.utils.Input, bool], langchain.schema.runnable.base.Runnable[langchain.schema.runnable.utils.Input, langchain.schema.runnable.utils.Output]]] [Required]¶
param default: langchain.schema.runnable.base.Runnable[langchain.schema.runnable.utils.Input, langchain.schema.runnable.utils.Output] [Required]¶
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output[source]¶
Async version of invoke.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel][source]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str][source]¶
The namespace of a RunnableBranch is the namespace of its default branch.
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output of the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output[source]¶
First evaluates the conditions in order, then delegates to the first matching branch (or the default branch if none match).
classmethod is_lc_serializable() → bool[source]¶
RunnableBranch is serializable if all its branches are serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
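A minimal sketch, using a contrived failing function to show the mechanism (both runnables are illustrative):
from langchain.schema.runnable import RunnableLambda
def _always_fails(x: str) -> str:
    raise ValueError("primary failed")
primary = RunnableLambda(_always_fails)
backup = RunnableLambda(lambda x: f"fallback handled {x!r}")
chain = primary.with_fallbacks([backup], exceptions_to_handle=(ValueError,))
chain.invoke("hello")  # "fallback handled 'hello'"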
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.branch.RunnableBranch.html |
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
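A minimal sketch, using a deliberately flaky function to show the retry loop (the function and counter are illustrative):
from langchain.schema.runnable import RunnableLambda
attempts = {"n": 0}
def _flaky(x: int) -> int:
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return x * 2
retrying = RunnableLambda(_flaky).with_retry(
    retry_if_exception_type=(ConnectionError,),
    stop_after_attempt=3,
)
retrying.invoke(5)  # 10, after two retried ConnectionErrors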
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain.schema.runnable.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain.schema.runnable.utils.Output]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶ | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.branch.RunnableBranch.html |
The type of output this runnable produces specified as a pydantic model. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.branch.RunnableBranch.html |
langchain.schema.runnable.config.get_async_callback_manager_for_config¶
langchain.schema.runnable.config.get_async_callback_manager_for_config(config: RunnableConfig) → AsyncCallbackManager[source]¶
Get an async callback manager for a config.
Parameters
config (RunnableConfig) – The config.
Returns
The async callback manager.
Return type
AsyncCallbackManager | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.config.get_async_callback_manager_for_config.html |
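A minimal sketch (the tags and metadata are illustrative; RunnableConfig is a TypedDict, so a plain dict works):
from langchain.schema.runnable.config import get_async_callback_manager_for_config
config = {"tags": ["demo"], "metadata": {"run": "example"}}
# manager is an AsyncCallbackManager wired with the config's callbacks, tags, and metadata.
manager = get_async_callback_manager_for_config(config)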
langchain.schema.messages.SystemMessage¶
class langchain.schema.messages.SystemMessage[source]¶
Bases: BaseMessage
A Message for priming AI behavior, usually passed in as the first of a sequence
of input messages.
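For example, a minimal sketch of priming a chat model (the chat model itself is assumed, not shown):
from langchain.schema.messages import HumanMessage, SystemMessage
messages = [
    SystemMessage(content="You are a terse assistant; answer in one sentence."),
    HumanMessage(content="What is LangChain?"),
]
# Pass `messages` to any chat model, e.g. chat_model.invoke(messages)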
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
Any additional information.
param content: Union[str, List[Union[str, Dict]]] [Required]¶
The string contents of the message.
param type: Literal['system'] = 'system'¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/schema/langchain.schema.messages.SystemMessage.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.messages.SystemMessage.html |
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
Examples using SystemMessage¶
Metaphor Search
SQL Chat Message History
Anthropic
🚅 LiteLLM
Konko
OpenAI
Google Cloud Platform Vertex AI PaLM
JinaChat
Anyscale
LLMonitor
Context
Label Studio
MLflow AI Gateway
Set env var OPENAI_API_KEY or load from a .env file:
Conversational Retrieval Agent | lang/api.python.langchain.com/en/latest/schema/langchain.schema.messages.SystemMessage.html |
Structure answers with OpenAI functions
Agents
CAMEL Role-Playing Autonomous Cooperative Agents
Multi-Agent Simulated Environment: Petting Zoo
Multi-agent decentralized speaker selection
Multi-agent authoritarian speaker selection
Two-Player Dungeons & Dragons
Multi-Player Dungeons & Dragons
Simulated Environment: Gymnasium
Agent Debates with Tools
Memory in LLMChain
Use ToolKits with OpenAI Functions
Prompt pipelining
Using OpenAI functions | lang/api.python.langchain.com/en/latest/schema/langchain.schema.messages.SystemMessage.html |
langchain.schema.runnable.router.RouterRunnable¶
class langchain.schema.runnable.router.RouterRunnable[source]¶
Bases: RunnableSerializable[RouterInput, Output]
A runnable that routes to a set of runnables based on Input[‘key’].
Returns the output of the selected runnable.
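A minimal sketch, assuming the input is a mapping with ‘key’ (which runnable to select) and ‘input’ (what to pass to it); the runnables themselves are illustrative:
from langchain.schema.runnable import RouterRunnable, RunnableLambda
router = RouterRunnable(
    runnables={
        "add_one": RunnableLambda(lambda x: x + 1),
        "square": RunnableLambda(lambda x: x * x),
    }
)
router.invoke({"key": "square", "input": 4})   # 16
router.invoke({"key": "add_one", "input": 4})  # 5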
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param runnables: Mapping[str, langchain.schema.runnable.base.Runnable[Any, langchain.schema.runnable.utils.Output]] [Required]¶
async abatch(inputs: List[RouterInput], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output][source]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: RouterInput, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Output[source]¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
async astream(input: RouterInput, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output][source]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.router.RouterRunnable.html |
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[RouterInput], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output][source]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.router.RouterRunnable.html |
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.router.RouterRunnable.html |
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.router.RouterRunnable.html |
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: RouterInput, config: Optional[RunnableConfig] = None) → Output[source]¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool[source]¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶ | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.router.RouterRunnable.html |
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: RouterInput, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output][source]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶ | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.router.RouterRunnable.html |
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.router.RouterRunnable.html |
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain.schema.runnable.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain.schema.runnable.utils.Output]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶ | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.router.RouterRunnable.html |
The type of output this runnable produces specified as a pydantic model. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.router.RouterRunnable.html |
langchain.schema.runnable.utils.accepts_run_manager¶
langchain.schema.runnable.utils.accepts_run_manager(callable: Callable[[...], Any]) → bool[source]¶
Check if a callable accepts a run_manager argument. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.utils.accepts_run_manager.html |
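A minimal sketch (the two callables are illustrative):
from langchain.schema.runnable.utils import accepts_run_manager
def with_manager(input, run_manager):
    return input
def without_manager(input):
    return input
accepts_run_manager(with_manager)     # True
accepts_run_manager(without_manager)  # False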
langchain.schema.output_parser.BaseLLMOutputParser¶
class langchain.schema.output_parser.BaseLLMOutputParser[source]¶
Abstract base class for parsing the outputs of a model.
Methods
__init__()
aparse_result(result, *[, partial])
Parse a list of candidate model Generations into a specific format.
parse_result(result, *[, partial])
Parse a list of candidate model Generations into a specific format.
__init__()¶
async aparse_result(result: List[Generation], *, partial: bool = False) → T[source]¶
Parse a list of candidate model Generations into a specific format.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
abstract parse_result(result: List[Generation], *, partial: bool = False) → T[source]¶
Parse a list of candidate model Generations into a specific format.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.output_parser.BaseLLMOutputParser.html |
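A minimal sketch of a concrete subclass (FirstWordParser is hypothetical; only parse_result must be implemented, since aparse_result has a default implementation):
from typing import List
from langchain.schema import Generation
from langchain.schema.output_parser import BaseLLMOutputParser
class FirstWordParser(BaseLLMOutputParser[str]):
    # Hypothetical parser that keeps the first word of the top candidate.
    def parse_result(self, result: List[Generation], *, partial: bool = False) -> str:
        return result[0].text.split()[0]
FirstWordParser().parse_result([Generation(text="hello world")])  # 'hello'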
langchain.schema.callbacks.manager.AsyncRunManager¶
class langchain.schema.callbacks.manager.AsyncRunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Async Run Manager.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_retry(retry_state, **kwargs)
Run on a retry event.
on_text(text, **kwargs)
Run when text is received. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.AsyncRunManager.html |
__init__(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
async on_retry(retry_state: RetryCallState, **kwargs: Any) → None[source]¶
Run on a retry event.
async on_text(text: str, **kwargs: Any) → Any[source]¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.AsyncRunManager.html |
langchain.schema.runnable.utils.AddableDict¶
class langchain.schema.runnable.utils.AddableDict[source]¶
Dictionary that can be added to another dictionary.
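A small sketch, assuming overlapping values are combined with + (e.g. when accumulating streamed output chunks):
from langchain.schema.runnable.utils import AddableDict
left = AddableDict({"text": "Hello, ", "tokens": 3})
right = AddableDict({"text": "world!", "tokens": 2})
left + right  # {'text': 'Hello, world!', 'tokens': 5}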
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
__init__(*args, **kwargs)¶
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶ | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.utils.AddableDict.html |
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶ | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.utils.AddableDict.html |
langchain.schema.callbacks.manager.CallbackManager¶
class langchain.schema.callbacks.manager.CallbackManager(handlers: List[BaseCallbackHandler], inheritable_handlers: Optional[List[BaseCallbackHandler]] = None, parent_run_id: Optional[UUID] = None, *, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Callback manager that handles callbacks from LangChain.
Initialize callback manager.
Attributes
is_async
Whether the callback manager is async.
Methods
__init__(handlers[, inheritable_handlers, ...])
Initialize callback manager.
add_handler(handler[, inherit])
Add a handler to the callback manager.
add_metadata(metadata[, inherit])
add_tags(tags[, inherit])
configure([inheritable_callbacks, ...])
Configure the callback manager.
copy()
Copy the callback manager.
on_chain_start(serialized, inputs[, run_id])
Run when chain starts running.
on_chat_model_start(serialized, messages, ...)
Run when LLM starts running.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts running.
on_retriever_start(serialized, query[, ...])
Run when retriever starts running.
on_tool_start(serialized, input_str[, ...])
Run when tool starts running.
remove_handler(handler)
Remove a handler from the callback manager.
remove_metadata(keys)
remove_tags(tags)
set_handler(handler[, inherit])
Set handler as the only handler on the callback manager.
set_handlers(handlers[, inherit])
Set handlers as the only handlers on the callback manager. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.CallbackManager.html |
__init__(handlers: List[BaseCallbackHandler], inheritable_handlers: Optional[List[BaseCallbackHandler]] = None, parent_run_id: Optional[UUID] = None, *, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶
Initialize callback manager.
add_handler(handler: BaseCallbackHandler, inherit: bool = True) → None¶
Add a handler to the callback manager.
add_metadata(metadata: Dict[str, Any], inherit: bool = True) → None¶
add_tags(tags: List[str], inherit: bool = True) → None¶
classmethod configure(inheritable_callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, local_callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, verbose: bool = False, inheritable_tags: Optional[List[str]] = None, local_tags: Optional[List[str]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None, local_metadata: Optional[Dict[str, Any]] = None) → CallbackManager[source]¶
Configure the callback manager.
Parameters
inheritable_callbacks (Optional[Callbacks], optional) – The inheritable
callbacks. Defaults to None.
local_callbacks (Optional[Callbacks], optional) – The local callbacks.
Defaults to None.
verbose (bool, optional) – Whether to enable verbose mode. Defaults to False.
inheritable_tags (Optional[List[str]], optional) – The inheritable tags.
Defaults to None.
local_tags (Optional[List[str]], optional) – The local tags.
Defaults to None.
inheritable_metadata (Optional[Dict[str, Any]], optional) – The inheritable | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.CallbackManager.html |
metadata. Defaults to None.
local_metadata (Optional[Dict[str, Any]], optional) – The local metadata.
Defaults to None.
Returns
The configured callback manager.
Return type
CallbackManager
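A minimal sketch of configuring a manager and opening an LLM run (the serialized payload and prompt are illustrative):
from langchain.callbacks import StdOutCallbackHandler
from langchain.schema.callbacks.manager import CallbackManager
manager = CallbackManager.configure(
    inheritable_callbacks=[StdOutCallbackHandler()],
    inheritable_tags=["my-pipeline"],
    verbose=True,
)
# Each on_*_start call returns run manager(s) scoped to that run:
run_managers = manager.on_llm_start({"name": "fake-llm"}, ["Tell me a joke"])
for rm in run_managers:
    rm.on_llm_new_token("Why")  # forward events to the handlers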
copy() → T¶
Copy the callback manager.
on_chain_start(serialized: Dict[str, Any], inputs: Union[Dict[str, Any], Any], run_id: Optional[UUID] = None, **kwargs: Any) → CallbackManagerForChainRun[source]¶
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) – The serialized chain.
inputs (Union[Dict[str, Any], Any]) – The inputs to the chain.
run_id (UUID, optional) – The ID of the run. Defaults to None.
Returns
The callback manager for the chain run.
Return type
CallbackManagerForChainRun
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any) → List[CallbackManagerForLLMRun][source]¶
Run when LLM starts running.
Parameters
serialized (Dict[str, Any]) – The serialized LLM.
messages (List[List[BaseMessage]]) – The list of messages.
run_id (UUID, optional) – The ID of the run. Defaults to None.
Returns
A callback manager for each list of messages as an LLM run.
Return type
List[CallbackManagerForLLMRun]
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → List[CallbackManagerForLLMRun][source]¶
Run when LLM starts running.
Parameters
serialized (Dict[str, Any]) – The serialized LLM. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.CallbackManager.html |
prompts (List[str]) – The list of prompts.
run_id (UUID, optional) – The ID of the run. Defaults to None.
Returns
A callback manager for each prompt as an LLM run.
Return type
List[CallbackManagerForLLMRun]
on_retriever_start(serialized: Dict[str, Any], query: str, run_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, **kwargs: Any) → CallbackManagerForRetrieverRun[source]¶
Run when retriever starts running.
on_tool_start(serialized: Dict[str, Any], input_str: str, run_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, **kwargs: Any) → CallbackManagerForToolRun[source]¶
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) – The serialized tool.
input_str (str) – The input to the tool.
run_id (UUID, optional) – The ID of the run. Defaults to None.
parent_run_id (UUID, optional) – The ID of the parent run. Defaults to None.
Returns
The callback manager for the tool run.
Return type
CallbackManagerForToolRun
remove_handler(handler: BaseCallbackHandler) → None¶
Remove a handler from the callback manager.
remove_metadata(keys: List[str]) → None¶
remove_tags(tags: List[str]) → None¶
set_handler(handler: BaseCallbackHandler, inherit: bool = True) → None¶
Set handler as the only handler on the callback manager.
set_handlers(handlers: List[BaseCallbackHandler], inherit: bool = True) → None¶
Set handlers as the only handlers on the callback manager.
Examples using CallbackManager¶ | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.CallbackManager.html |
Anthropic
🚅 LiteLLM
Ollama
Llama.cpp
Titan Takeoff
Run LLMs locally
Set env var OPENAI_API_KEY or load from a .env file
Use local LLMs
WebResearchRetriever | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.CallbackManager.html |
langchain.schema.runnable.config.get_executor_for_config¶
langchain.schema.runnable.config.get_executor_for_config(config: RunnableConfig) → Generator[Executor, None, None][source]¶
Get an executor for a config.
Parameters
config (RunnableConfig) – The config.
Yields
Generator[Executor, None, None] – The executor. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.config.get_executor_for_config.html |
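A minimal sketch, assuming the generator is used as a context manager, as the signature suggests (the max_concurrency value and mapped function are illustrative):
from langchain.schema.runnable.config import get_executor_for_config
config = {"max_concurrency": 4}
with get_executor_for_config(config) as executor:
    results = list(executor.map(lambda x: x * x, range(8)))
# results == [0, 1, 4, 9, 16, 25, 36, 49]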
langchain.schema.output.LLMResult¶
class langchain.schema.output.LLMResult[source]¶
Bases: BaseModel
Class that contains all results for a batched LLM call.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param generations: List[List[langchain.schema.output.Generation]] [Required]¶
List of generated outputs. This is a List[List[]] because
each input could have multiple candidate generations.
param llm_output: Optional[dict] = None¶
Arbitrary LLM provider-specific output.
param run: Optional[List[langchain.schema.output.RunInfo]] = None¶
List of metadata info for model call for each input.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/schema/langchain.schema.output.LLMResult.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
flatten() → List[LLMResult][source]¶
Flatten generations into a single list.
Unpack List[List[Generation]] -> List[LLMResult] where each returned LLMResult
contains only a single Generation. If token usage information is available,
it is kept only for the LLMResult corresponding to the top-choice
Generation, to avoid over-counting of token usage downstream.
Returns
List of LLMResults where each returned LLMResult contains a single Generation.
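A minimal sketch with two prompts and one candidate generation each:
from langchain.schema import Generation, LLMResult
result = LLMResult(
    generations=[[Generation(text="answer 1")], [Generation(text="answer 2")]]
)
flat = result.flatten()
len(flat)                       # 2
flat[0].generations[0][0].text  # 'answer 1'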
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). | lang/api.python.langchain.com/en/latest/schema/langchain.schema.output.LLMResult.html |
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using LLMResult¶
Ollama
Async callbacks | lang/api.python.langchain.com/en/latest/schema/langchain.schema.output.LLMResult.html |
langchain.schema.runnable.utils.get_unique_config_specs¶
langchain.schema.runnable.utils.get_unique_config_specs(specs: Iterable[ConfigurableFieldSpec]) → List[ConfigurableFieldSpec][source]¶
Get the unique config specs from a sequence of config specs. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.utils.get_unique_config_specs.html |
langchain.schema.messages.ToolMessage¶
class langchain.schema.messages.ToolMessage[source]¶
Bases: BaseMessage
A Message for passing the result of executing a tool back to a model.
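For example, a minimal sketch of reporting a tool's result back (the content and tool_call_id values are illustrative; tool_call_id should echo the id the model issued with its tool call):
from langchain.schema.messages import ToolMessage
msg = ToolMessage(
    content='{"temperature_c": 21}',
    tool_call_id="call_abc123",
)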
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
Any additional information.
param content: Union[str, List[Union[str, Dict]]] [Required]¶
The string contents of the message.
param tool_call_id: str [Required]¶
Tool call that this message is responding to.
param type: Literal['tool'] = 'tool'¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/schema/langchain.schema.messages.ToolMessage.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.messages.ToolMessage.html |
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
langchain.schema.callbacks.manager.AsyncCallbackManagerForRetrieverRun¶
class langchain.schema.callbacks.manager.AsyncCallbackManagerForRetrieverRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Async callback manager for retriever run.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
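Instances are created by the framework and handed to retriever implementations. A hypothetical custom retriever sketch (StaticRetriever and its contents are illustrative):
from typing import List
from langchain.schema import BaseRetriever, Document
from langchain.schema.callbacks.manager import (
    AsyncCallbackManagerForRetrieverRun,
    CallbackManagerForRetrieverRun,
)
class StaticRetriever(BaseRetriever):
    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        return [Document(page_content=f"stub result for {query}")]
    async def _aget_relevant_documents(
        self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun
    ) -> List[Document]:
        # run_manager.get_child() scopes callbacks for any sub-runs made here.
        return [Document(page_content=f"stub result for {query}")]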
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_child([tag])
Get a child callback manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_retriever_end(documents, **kwargs)
Run when retriever ends running.
on_retriever_error(error, **kwargs)
Run when retriever errors.
on_retry(retry_state, **kwargs)
Run on a retry event.
on_text(text, **kwargs)
Run when text is received. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.AsyncCallbackManagerForRetrieverRun.html |
__init__(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
get_child(tag: Optional[str] = None) → AsyncCallbackManager¶
Get a child callback manager.
Parameters
tag (str, optional) – The tag for the child callback manager.
Defaults to None.
Returns
The child callback manager.
Return type
AsyncCallbackManager
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
async on_retriever_end(documents: Sequence[Document], **kwargs: Any) → None[source]¶
Run when retriever ends running.
async on_retriever_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when retriever errors. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.AsyncCallbackManagerForRetrieverRun.html |
async on_retry(retry_state: RetryCallState, **kwargs: Any) → None¶
Run on a retry event.
async on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
Examples using AsyncCallbackManagerForRetrieverRun¶
Retrieve as you generate with FLARE | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.AsyncCallbackManagerForRetrieverRun.html |
langchain.schema.callbacks.tracers.langchain_v1.LangChainTracerV1¶
class langchain.schema.callbacks.tracers.langchain_v1.LangChainTracerV1(**kwargs: Any)[source]¶
An implementation of the SharedTracer that POSTs to the LangChain endpoint.
Initialize the LangChain tracer.
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__(**kwargs)
Initialize the LangChain tracer.
load_default_session()
Load the default tracing session and set it as the Tracer's session.
load_session(session_name)
Load a session with the given name from the tracer.
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, inputs])
End a trace for a chain run.
on_chain_error(error, *[, inputs])
Handle an error for a chain run.
on_chain_start(serialized, inputs, *, run_id)
Start a trace for a chain run.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, *, run_id, **kwargs)
End a trace for an LLM run.
on_llm_error(error, *, run_id, **kwargs)
Handle an error for an LLM run. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.tracers.langchain_v1.LangChainTracerV1.html |
on_llm_new_token(token, *[, chunk, ...])
Run on new LLM token.
on_llm_start(serialized, prompts, *, run_id)
Start a trace for an LLM run.
on_retriever_end(documents, *, run_id, **kwargs)
Run when Retriever ends running.
on_retriever_error(error, *, run_id, **kwargs)
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id, **kwargs)
Run on a retry event.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id, **kwargs)
End a trace for a tool run.
on_tool_error(error, *, run_id, **kwargs)
Handle an error for a tool run.
on_tool_start(serialized, input_str, *, run_id)
Start a trace for a tool run.
__init__(**kwargs: Any) → None[source]¶
Initialize the LangChain tracer.
load_default_session() → Union[TracerSessionV1, TracerSession][source]¶
Load the default tracing session and set it as the Tracer’s session.
load_session(session_name: str) → Union[TracerSessionV1, TracerSession][source]¶
Load a session with the given name from the tracer.
on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent action. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.tracers.langchain_v1.LangChainTracerV1.html |
on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent end.
on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, inputs: Optional[Dict[str, Any]] = None, **kwargs: Any) → Run¶
End a trace for a chain run.
on_chain_error(error: BaseException, *, inputs: Optional[Dict[str, Any]] = None, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for a chain run.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, run_type: Optional[str] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for a chain run.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → Run¶
End a trace for an LLM run.
on_llm_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for an LLM run. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.tracers.langchain_v1.LangChainTracerV1.html |
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Run¶
Run on new LLM token. Only available when streaming is enabled.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for an LLM run.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → Run¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, **kwargs: Any) → Run¶
Run on a retry event.
on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on arbitrary text.
on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → Run¶ | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.tracers.langchain_v1.LangChainTracerV1.html |
End a trace for a tool run.
on_tool_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for a tool run.
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for a tool run. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.tracers.langchain_v1.LangChainTracerV1.html |
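A minimal usage sketch (assumed, not from this reference) of attaching the V1 tracer to a run via callbacks; chain stands in for any runnable or chain you have already built, and the invocation style is illustrative.
from langchain.schema.callbacks.tracers.langchain_v1 import LangChainTracerV1

tracer = LangChainTracerV1()
tracer.load_default_session()  # make the default session the tracer's active session

# Any component that accepts callbacks can take the tracer; chain, LLM,
# and tool runs it observes are then recorded against that session.
result = chain.invoke({"input": "hello"}, config={"callbacks": [tracer]})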
langchain.schema.callbacks.tracers.log_stream.RunState¶
class langchain.schema.callbacks.tracers.log_stream.RunState[source]¶
State of the run.
id: str¶
ID of the run.
streamed_output: List[Any]¶
List of output chunks streamed by Runnable.stream()
final_output: Optional[Any]¶
Final output of the run, usually the result of aggregating (+) streamed_output.
Only available after the run has finished successfully.
logs: Dict[str, langchain.schema.callbacks.tracers.log_stream.LogEntry]¶
Map of run names to sub-runs. If filters were supplied, this map will contain only the runs that matched the filters.
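A hedged sketch of how RunState is typically observed: iterating Runnable.astream_log() with diff=False yields RunLog objects that carry the accumulated state. Assumes an async context; runnable is a placeholder for any runnable you have built.
async for run_log in runnable.astream_log("hello", diff=False):
    state = run_log.state  # a RunState dict
    print(state["id"], len(state["streamed_output"]), state["final_output"])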
langchain.schema.runnable.config.call_func_with_variable_args¶
langchain.schema.runnable.config.call_func_with_variable_args(func: Union[Callable[[Input], Output], Callable[[Input, RunnableConfig], Output], Callable[[Input, CallbackManagerForChainRun], Output], Callable[[Input, CallbackManagerForChainRun, RunnableConfig], Output]], input: Input, config: RunnableConfig, run_manager: Optional[CallbackManagerForChainRun] = None, **kwargs: Any) → Output[source]¶
Call function that may optionally accept a run_manager and/or config.
Parameters
func (Union[Callable[[Input], Output], Callable[[Input, RunnableConfig], Output], Callable[[Input, CallbackManagerForChainRun], Output], Callable[[Input, CallbackManagerForChainRun, RunnableConfig], Output]]) – The function to call.
input (Input) – The input to the function.
run_manager (CallbackManagerForChainRun) – The run manager to pass to the function.
config (RunnableConfig) – The config to pass to the function.
**kwargs (Any) – The keyword arguments to pass to the function.
Returns
The output of the function.
Return type
Output | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.config.call_func_with_variable_args.html |
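A small sketch of the dispatch behaviour under the signatures above: the helper inspects the callable and forwards config (and run_manager, if declared) only to functions whose parameters accept them. The example functions are illustrative.
from langchain.schema.runnable.config import RunnableConfig, call_func_with_variable_args

def plain(x: str) -> str:  # accepts only the input
    return x.upper()

def with_config(x: str, config: RunnableConfig) -> str:  # also accepts config
    return f"{x} tags={config.get('tags')}"

cfg: RunnableConfig = {"tags": ["demo"]}
call_func_with_variable_args(plain, "hi", cfg)        # -> 'HI'
call_func_with_variable_args(with_config, "hi", cfg)  # -> "hi tags=['demo']"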
langchain.schema.callbacks.tracers.log_stream.LogStreamCallbackHandler¶
class langchain.schema.callbacks.tracers.log_stream.LogStreamCallbackHandler(*, auto_close: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None)[source]¶
A tracer that streams run logs to a stream.
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__(*[, auto_close, include_names, ...])
include_run(run)
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, inputs])
End a trace for a chain run.
on_chain_error(error, *[, inputs])
Handle an error for a chain run.
on_chain_start(serialized, inputs, *, run_id)
Start a trace for a chain run.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, *, run_id, **kwargs)
End a trace for an LLM run.
on_llm_error(error, *, run_id, **kwargs) | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.tracers.log_stream.LogStreamCallbackHandler.html |
Handle an error for an LLM run.
on_llm_new_token(token, *[, chunk, ...])
Run on new LLM token.
on_llm_start(serialized, prompts, *, run_id)
Start a trace for an LLM run.
on_retriever_end(documents, *, run_id, **kwargs)
Run when Retriever ends running.
on_retriever_error(error, *, run_id, **kwargs)
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id, **kwargs)
Run on a retry event.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id, **kwargs)
End a trace for a tool run.
on_tool_error(error, *, run_id, **kwargs)
Handle an error for a tool run.
on_tool_start(serialized, input_str, *, run_id)
Start a trace for a tool run.
__init__(*, auto_close: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None) → None[source]¶
include_run(run: Run) → bool[source]¶ | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.tracers.log_stream.LogStreamCallbackHandler.html |
on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent action.
on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent end.
on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, inputs: Optional[Dict[str, Any]] = None, **kwargs: Any) → Run¶
End a trace for a chain run.
on_chain_error(error: BaseException, *, inputs: Optional[Dict[str, Any]] = None, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for a chain run.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, run_type: Optional[str] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for a chain run.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → Run¶
End a trace for an LLM run. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.tracers.log_stream.LogStreamCallbackHandler.html |
on_llm_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for an LLM run.
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Run¶
Run on new LLM token. Only available when streaming is enabled.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for an LLM run.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → Run¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, **kwargs: Any) → Run¶
Run on a retry event. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.tracers.log_stream.LogStreamCallbackHandler.html |
on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on arbitrary text.
on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → Run¶
End a trace for a tool run.
on_tool_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for a tool run.
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for a tool run. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.tracers.log_stream.LogStreamCallbackHandler.html |
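A hedged construction example using the filters shown above; in normal use Runnable.astream_log() creates and drains this handler for you, so building it directly is mainly useful for custom plumbing. The filter values are illustrative.
from langchain.schema.callbacks.tracers.log_stream import LogStreamCallbackHandler

handler = LogStreamCallbackHandler(
    include_names=["my_chain"],  # stream only runs named "my_chain"
    exclude_types=["llm"],       # drop raw LLM runs from the log
)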
langchain.schema.messages.BaseMessageChunk¶
class langchain.schema.messages.BaseMessageChunk[source]¶
Bases: BaseMessage
A Message chunk, which can be concatenated with other Message chunks.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
Any additional information.
param content: Union[str, List[Union[str, Dict]]] [Required]¶
The string contents of the message.
param type: str [Required]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | lang/api.python.langchain.com/en/latest/schema/langchain.schema.messages.BaseMessageChunk.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.messages.BaseMessageChunk.html |
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
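As the class description notes, message chunks can be concatenated with +. A minimal sketch using the concrete AIMessageChunk subclass (BaseMessageChunk itself leaves the type field to subclasses):
from langchain.schema.messages import AIMessageChunk

chunk = AIMessageChunk(content="Hello, ") + AIMessageChunk(content="world")
chunk.content  # 'Hello, world'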
langchain.schema.runnable.router.RouterInput¶
class langchain.schema.runnable.router.RouterInput[source]¶
A Router input.
key¶
The key to route on.
Type
str
input¶
The input to pass to the selected runnable.
Type
Any
Attributes
key
input
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
__init__(*args, **kwargs)¶
clear() → None. Remove all items from D.¶
copy() → a shallow copy of D¶
fromkeys(value=None, /)¶
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)¶
Return the value for key if key is in the dictionary, else default.
items() → a set-like object providing a view on D's items¶ | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.router.RouterInput.html |
keys() → a set-like object providing a view on D's keys¶
pop(k[, d]) → v, remove specified key and return the corresponding value.¶
If the key is not found, return the default if given; otherwise,
raise a KeyError.
popitem()¶
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order.
Raises KeyError if the dict is empty.
setdefault(key, default=None, /)¶
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update([E, ]**F) → None. Update D from dict/iterable E and F.¶
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]
values() → an object providing a view on D's values¶ | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.router.RouterInput.html |
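A hedged sketch: a RouterInput is an ordinary dict with key and input, usually consumed by RouterRunnable to select a branch. The branch runnables below are placeholders.
from langchain.schema.runnable import RouterRunnable, RunnableLambda

router = RouterRunnable(runnables={
    "upper": RunnableLambda(lambda s: s.upper()),
    "lower": RunnableLambda(lambda s: s.lower()),
})
router.invoke({"key": "upper", "input": "Hello"})  # -> 'HELLO'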
langchain.schema.callbacks.manager.CallbackManagerForLLMRun¶
class langchain.schema.callbacks.manager.CallbackManagerForLLMRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Callback manager for LLM run.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Run when LLM errors.
on_llm_new_token(token, *[, chunk])
Run when LLM generates a new token.
on_retry(retry_state, **kwargs)
Run on a retry event.
on_text(text, **kwargs)
Run when text is received. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.CallbackManagerForLLMRun.html |
__init__(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
Parameters
response (LLMResult) – The LLM result.
on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when LLM errors.
Parameters
error (Exception or KeyboardInterrupt) – The error.
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, **kwargs: Any) → None[source]¶ | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.CallbackManagerForLLMRun.html |
Run when LLM generates a new token.
Parameters
token (str) – The new token.
on_retry(retry_state: RetryCallState, **kwargs: Any) → None¶
Run on a retry event.
on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
Examples using CallbackManagerForLLMRun¶
Custom LLM | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.CallbackManagerForLLMRun.html |
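A hedged sketch of the Custom LLM pattern referenced above: _call receives an optional CallbackManagerForLLMRun and can report tokens through it. The echo behaviour and class name are purely illustrative.
from typing import Any, List, Optional

from langchain.llms.base import LLM
from langchain.schema.callbacks.manager import CallbackManagerForLLMRun

class EchoLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "echo"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        if run_manager:  # report each "token" to registered handlers
            for token in prompt.split():
                run_manager.on_llm_new_token(token)
        return prompt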
langchain.schema.runnable.utils.IsFunctionArgDict¶
class langchain.schema.runnable.utils.IsFunctionArgDict[source]¶
Check if the first argument of a function is a dict.
Methods
__init__()
generic_visit(node)
Called if no explicit visitor function exists for a node.
visit(node)
Visit a node.
visit_AsyncFunctionDef(node)
visit_Constant(node)
visit_FunctionDef(node)
visit_Lambda(node)
__init__() → None[source]¶
generic_visit(node)¶
Called if no explicit visitor function exists for a node.
visit(node)¶
Visit a node.
visit_AsyncFunctionDef(node: AsyncFunctionDef) → Any[source]¶
visit_Constant(node)¶
visit_FunctionDef(node: FunctionDef) → Any[source]¶
visit_Lambda(node: Lambda) → Any[source]¶ | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.utils.IsFunctionArgDict.html |
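Not the library's implementation — a simplified illustration of the ast.NodeVisitor pattern this class follows: parse a function's source, visit its definition, and inspect the first argument's annotation. All names below are hypothetical.
import ast
import inspect
from typing import Any, Optional

class FirstArgAnnotation(ast.NodeVisitor):
    """Record the annotation of a function's first positional argument."""

    def __init__(self) -> None:
        self.annotation: Optional[str] = None

    def visit_FunctionDef(self, node: ast.FunctionDef) -> Any:
        if node.args.args and node.args.args[0].annotation is not None:
            self.annotation = ast.unparse(node.args.args[0].annotation)

def takes_dict(payload: dict) -> int:
    return len(payload)

visitor = FirstArgAnnotation()
visitor.visit(ast.parse(inspect.getsource(takes_dict)))
print(visitor.annotation)  # 'dict'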
langchain.schema.callbacks.manager.AsyncCallbackManagerForLLMRun¶
class langchain.schema.callbacks.manager.AsyncCallbackManagerForLLMRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Async callback manager for LLM run.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Run when LLM errors.
on_llm_new_token(token, *[, chunk])
Run when LLM generates a new token.
on_retry(retry_state, **kwargs)
Run on a retry event.
on_text(text, **kwargs)
Run when text is received. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.AsyncCallbackManagerForLLMRun.html |
__init__(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
async on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
Parameters
response (LLMResult) – The LLM result.
async on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when LLM errors.
Parameters
error (Exception or KeyboardInterrupt) – The error. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.AsyncCallbackManagerForLLMRun.html |
async on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, **kwargs: Any) → None[source]¶
Run when LLM generates a new token.
Parameters
token (str) – The new token.
async on_retry(retry_state: RetryCallState, **kwargs: Any) → None¶
Run on a retry event.
async on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.AsyncCallbackManagerForLLMRun.html |
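A hedged async counterpart to the sync sketch above: in a custom LLM's _acall, the async manager's hooks must be awaited. All names besides the manager class are illustrative.
from typing import Any, List, Optional

from langchain.llms.base import LLM
from langchain.schema.callbacks.manager import AsyncCallbackManagerForLLMRun

class AsyncEchoLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "async-echo"

    def _call(self, prompt: str, **kwargs: Any) -> str:
        return prompt  # sync fallback

    async def _acall(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        if run_manager:
            for token in prompt.split():
                await run_manager.on_llm_new_token(token)  # awaited, unlike the sync hook
        return prompt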