langchain.schema.runnable.utils.IsLocalDict¶
class langchain.schema.runnable.utils.IsLocalDict(name: str, keys: Set[str])[source]¶
Check if a name is a local dict.
Methods
__init__(name, keys)
generic_visit(node)
Called if no explicit visitor function exists for a node.
visit(node)
Visit a node.
visit_Call(node)
visit_Constant(node)
visit_Subscript(node)
__init__(name: str, keys: Set[str]) → None[source]¶
generic_visit(node)¶
Called if no explicit visitor function exists for a node.
visit(node)¶
Visit a node.
visit_Call(node: Call) → Any[source]¶
visit_Constant(node)¶
visit_Subscript(node: Subscript) → Any[source]¶
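Example (illustrative)
A minimal sketch of driving this AST visitor; it assumes the visitor records, in the set you pass in, the constant string keys subscripted on the named variable:
import ast
from langchain.schema.runnable.utils import IsLocalDict

keys: set = set()
tree = ast.parse("def f(d):\n    return d['question'] + d['context']")
IsLocalDict("d", keys).visit(tree)  # generic_visit walks into the function body
print(keys)  # expected: {'question', 'context'}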
langchain.schema.callbacks.tracers.langchain.LangChainTracer¶
class langchain.schema.callbacks.tracers.langchain.LangChainTracer(example_id: Optional[Union[str, UUID]] = None, project_name: Optional[str] = None, client: Optional[Client] = None, tags: Optional[List[str]] = None, use_threading: bool = True, **kwargs: Any)[source]¶
An implementation of the SharedTracer that POSTs to the LangChain endpoint.
Initialize the LangChain tracer.
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([example_id, project_name, client, ...])
Initialize the LangChain tracer.
get_run_url()
Get the LangSmith root run URL.
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, inputs])
End a trace for a chain run.
on_chain_error(error, *[, inputs])
Handle an error for a chain run.
on_chain_start(serialized, inputs, *, run_id)
Start a trace for a chain run.
on_chat_model_start(serialized, messages, *, ...)
Start a trace for an LLM run.
on_llm_end(response, *, run_id, **kwargs)
End a trace for an LLM run.
on_llm_error(error, *, run_id, **kwargs)
Handle an error for an LLM run.
on_llm_new_token(token, *[, chunk, ...])
Run on new LLM token.
on_llm_start(serialized, prompts, *, run_id)
Start a trace for an LLM run.
on_retriever_end(documents, *, run_id, **kwargs)
Run when Retriever ends running.
on_retriever_error(error, *, run_id, **kwargs)
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id, **kwargs)
Run on a retry event.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id, **kwargs)
End a trace for a tool run.
on_tool_error(error, *, run_id, **kwargs)
Handle an error for a tool run.
on_tool_start(serialized, input_str, *, run_id)
Start a trace for a tool run.
wait_for_futures()
Wait for the given futures to complete.
__init__(example_id: Optional[Union[str, UUID]] = None, project_name: Optional[str] = None, client: Optional[Client] = None, tags: Optional[List[str]] = None, use_threading: bool = True, **kwargs: Any) → None[source]¶
Initialize the LangChain tracer.
get_run_url() → str[source]¶
Get the LangSmith root run URL.
on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent action.
on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent end.
on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, inputs: Optional[Dict[str, Any]] = None, **kwargs: Any) → Run¶
End a trace for a chain run.
on_chain_error(error: BaseException, *, inputs: Optional[Dict[str, Any]] = None, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for a chain run.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, run_type: Optional[str] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for a chain run.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → None[source]¶
Start a trace for an LLM run.
on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → Run¶
End a trace for an LLM run.
on_llm_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for an LLM run.
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Run¶
Run on new LLM token. Only available when streaming is enabled.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for an LLM run.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → Run¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, **kwargs: Any) → Run¶
Run on a retry event.
on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on arbitrary text.
on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → Run¶
End a trace for a tool run.
on_tool_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for a tool run.
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for a tool run.
wait_for_futures() → None[source]¶
Wait for the given futures to complete.
Examples using LangChainTracer¶
Async API
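Example (illustrative)
A hedged sketch of attaching the tracer as a callback; it assumes LangSmith credentials are configured in the environment (e.g. LANGCHAIN_API_KEY), and the chain object is a stand-in:
from langchain.schema.callbacks.tracers.langchain import LangChainTracer

tracer = LangChainTracer(project_name="my-project")
# result = chain.invoke({"question": "..."}, config={"callbacks": [tracer]})
tracer.wait_for_futures()  # with use_threading=True, block until queued POSTs finish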
langchain.schema.callbacks.tracers.stdout.FunctionCallbackHandler¶
class langchain.schema.callbacks.tracers.stdout.FunctionCallbackHandler(function: Callable[[str], None], **kwargs: Any)[source]¶
Tracer that calls a function with a single str parameter.
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
name
raise_error
run_inline
Methods
__init__(function, **kwargs)
get_breadcrumbs(run)
get_parents(run)
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, inputs])
End a trace for a chain run.
on_chain_error(error, *[, inputs])
Handle an error for a chain run.
on_chain_start(serialized, inputs, *, run_id)
Start a trace for a chain run.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, *, run_id, **kwargs)
End a trace for an LLM run.
on_llm_error(error, *, run_id, **kwargs)
Handle an error for an LLM run.
on_llm_new_token(token, *[, chunk, ...])
Run on new LLM token.
on_llm_start(serialized, prompts, *, run_id)
Start a trace for an LLM run.
on_retriever_end(documents, *, run_id, **kwargs)
Run when Retriever ends running.
on_retriever_error(error, *, run_id, **kwargs)
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id, **kwargs)
Run on a retry event.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id, **kwargs)
End a trace for a tool run.
on_tool_error(error, *, run_id, **kwargs)
Handle an error for a tool run.
on_tool_start(serialized, input_str, *, run_id)
Start a trace for a tool run.
__init__(function: Callable[[str], None], **kwargs: Any) → None[source]¶
get_breadcrumbs(run: Run) → str[source]¶
get_parents(run: Run) → List[Run][source]¶
on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent action.
on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent end.
on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, inputs: Optional[Dict[str, Any]] = None, **kwargs: Any) → Run¶
End a trace for a chain run.
on_chain_error(error: BaseException, *, inputs: Optional[Dict[str, Any]] = None, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for a chain run.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, run_type: Optional[str] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for a chain run.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → Run¶
End a trace for an LLM run.
on_llm_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for an LLM run.
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Run¶
Run on new LLM token. Only available when streaming is enabled.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for an LLM run.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → Run¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, **kwargs: Any) → Run¶
Run on a retry event.
on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on arbitrary text.
on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → Run¶
End a trace for a tool run.
on_tool_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for a tool run.
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for a tool run.
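Example (illustrative)
Any Callable[[str], None] can serve as the sink; a minimal sketch that collects the trace output into a list instead of printing it:
from langchain.schema.callbacks.tracers.stdout import FunctionCallbackHandler

captured: list = []
handler = FunctionCallbackHandler(function=captured.append)
# pass `handler` in a callbacks list; each trace line lands in `captured`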
langchain.schema.runnable.utils.get_function_first_arg_dict_keys¶
langchain.schema.runnable.utils.get_function_first_arg_dict_keys(func: Callable) → Optional[List[str]][source]¶
Get the keys of the first argument of a function if it is a dict.
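Example (illustrative)
A sketch, assuming the function's source is available for AST inspection and its first argument is subscripted with constant keys:
from langchain.schema.runnable.utils import get_function_first_arg_dict_keys

def route(d):
    return d["question"] + d["context"]

print(get_function_first_arg_dict_keys(route))  # e.g. ['question', 'context'] (order may vary)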
langchain.schema.callbacks.manager.AsyncParentRunManager¶
class langchain.schema.callbacks.manager.AsyncParentRunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Async Parent Run Manager.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_child([tag])
Get a child callback manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_retry(retry_state, **kwargs)
Run on a retry event.
on_text(text, **kwargs)
Run when text is received.
__init__(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
get_child(tag: Optional[str] = None) → AsyncCallbackManager[source]¶
Get a child callback manager.
Parameters
tag (str, optional) – The tag for the child callback manager.
Defaults to None.
Returns
The child callback manager.
Return type
AsyncCallbackManager
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
async on_retry(retry_state: RetryCallState, **kwargs: Any) → None¶
Run on a retry event.
async on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
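Example (illustrative)
A sketch of the get_child pattern; run_manager stands in for a manager the framework hands to an async component:
from langchain.schema.callbacks.manager import AsyncParentRunManager

async def run_nested(run_manager: AsyncParentRunManager) -> None:
    child = run_manager.get_child(tag="retrieval")  # inherits handlers, tags, metadata
    # pass `child` as the callback manager for nested LLM or retriever calls
    await run_manager.on_text("spawned child manager")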
langchain.schema.output.ChatGenerationChunk¶
class langchain.schema.output.ChatGenerationChunk[source]¶
Bases: ChatGeneration
A ChatGeneration chunk, which can be concatenated with other ChatGeneration chunks.
message¶
The message chunk output by the chat model.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param generation_info: Optional[Dict[str, Any]] = None¶
Raw response from the provider. May include things like the
reason for finishing or token log probabilities.
param message: langchain.schema.messages.BaseMessageChunk [Required]¶
The message output by the chat model.
param text: str = ''¶
SHOULD NOT BE SET DIRECTLY. The text contents of the output message.
param type: Literal['ChatGenerationChunk'] = 'ChatGenerationChunk'¶
Type is used exclusively for serialization purposes.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
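Example (illustrative)
A small sketch of the concatenation behavior, which streaming chat models rely on to accumulate output:
from langchain.schema.messages import AIMessageChunk
from langchain.schema.output import ChatGenerationChunk

a = ChatGenerationChunk(message=AIMessageChunk(content="Hello, "))
b = ChatGenerationChunk(message=AIMessageChunk(content="world"))
merged = a + b  # message chunks are concatenated
print(merged.text)  # expected: "Hello, world"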
langchain.schema.runnable.utils.gated_coro¶
async langchain.schema.runnable.utils.gated_coro(semaphore: Semaphore, coro: Coroutine) → Any[source]¶
Run a coroutine with a semaphore.
:param semaphore: The semaphore to use.
:param coro: The coroutine to run.
Returns
The result of the coroutine.
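Example (illustrative)
A self-contained sketch bounding concurrency to two coroutines at a time:
import asyncio
from langchain.schema.runnable.utils import gated_coro

async def main() -> None:
    sem = asyncio.Semaphore(2)
    coros = [gated_coro(sem, asyncio.sleep(0.1, result=i)) for i in range(5)]
    print(await asyncio.gather(*coros))  # [0, 1, 2, 3, 4]

asyncio.run(main())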
langchain.schema.runnable.utils.indent_lines_after_first¶
langchain.schema.runnable.utils.indent_lines_after_first(text: str, prefix: str) → str[source]¶
Indent all lines of text after the first line.
Parameters
text – The text to indent
prefix – Used to determine the number of spaces to indent
Returns
The indented text
Return type
str
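Example (illustrative)
Per the Parameters above, the prefix's length sets the indent width (the prefix itself is not inserted):
from langchain.schema.runnable.utils import indent_lines_after_first

print(indent_lines_after_first("first\nsecond\nthird", "| "))
# first
#   second
#   third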
langchain.schema.callbacks.tracers.log_stream.LogEntry¶
class langchain.schema.callbacks.tracers.log_stream.LogEntry[source]¶
A single entry in the run log.
id: str¶
ID of the sub-run.
name: str¶
Name of the object being run.
type: str¶
Type of the object being run, e.g. prompt, chain, llm, etc.
tags: List[str]¶
List of tags for the run.
metadata: Dict[str, Any]¶
Key-value pairs of metadata for the run.
start_time: str¶
ISO-8601 timestamp of when the run started.
streamed_output_str: List[str]¶
List of LLM tokens streamed by this run, if applicable.
final_output: Optional[Any]¶
Final output of this run.
Only available after the run has finished successfully.
end_time: Optional[str]¶
ISO-8601 timestamp of when the run ended.
Only available after the run has finished.
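Example (illustrative)
LogEntry is a TypedDict; a hand-built literal might look like this (values illustrative; real entries come from the astream_log machinery):
from langchain.schema.callbacks.tracers.log_stream import LogEntry

entry = LogEntry(
    id="1a2b3c", name="ChatOpenAI", type="llm",
    tags=["seq:step:2"], metadata={},
    start_time="2023-01-01T00:00:00.000+00:00",
    streamed_output_str=[], final_output=None, end_time=None,
)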
langchain.schema.output_parser.BaseGenerationOutputParser¶
class langchain.schema.output_parser.BaseGenerationOutputParser[source]¶
Bases: BaseLLMOutputParser, RunnableSerializable[Union[str, BaseMessage], T]
Base class to parse the output of an LLM call.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: str | langchain.schema.messages.BaseMessage, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → T[source]¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
async aparse_result(result: List[Generation], *, partial: bool = False) → T¶
Parse a list of candidate model Generations into a specific format.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None) → T[source]¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
abstract parse_result(result: List[Generation], *, partial: bool = False) → T¶
Parse a list of candidate model Generations into a specific format.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Any¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain.schema.output_parser.T]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
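Example (illustrative)
A minimal subclass sketch; only the abstract parse_result above must be implemented, and the class name here is ours:
from typing import List
from langchain.schema.output import Generation
from langchain.schema.output_parser import BaseGenerationOutputParser

class FirstGenerationParser(BaseGenerationOutputParser[str]):
    """Return the text of the first (highest-ranked) candidate."""

    def parse_result(self, result: List[Generation], *, partial: bool = False) -> str:
        return result[0].text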
langchain.schema.document.BaseDocumentTransformer¶
class langchain.schema.document.BaseDocumentTransformer[source]¶
Abstract base class for document transformation systems.
A document transformation system takes a sequence of Documents and returns a
sequence of transformed Documents.
Example
class EmbeddingsRedundantFilter(BaseDocumentTransformer, BaseModel):
    embeddings: Embeddings
    similarity_fn: Callable = cosine_similarity
    similarity_threshold: float = 0.95

    class Config:
        arbitrary_types_allowed = True

    def transform_documents(
        self, documents: Sequence[Document], **kwargs: Any
    ) -> Sequence[Document]:
        stateful_documents = get_stateful_documents(documents)
        embedded_documents = _get_embeddings_from_stateful_docs(
            self.embeddings, stateful_documents
        )
        included_idxs = _filter_similar_embeddings(
            embedded_documents, self.similarity_fn, self.similarity_threshold
        )
        return [stateful_documents[i] for i in sorted(included_idxs)]

    async def atransform_documents(
        self, documents: Sequence[Document], **kwargs: Any
    ) -> Sequence[Document]:
        raise NotImplementedError
Methods
__init__()
atransform_documents(documents, **kwargs)
Asynchronously transform a list of documents.
transform_documents(documents, **kwargs)
Transform a list of documents.
__init__()¶
async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Asynchronously transform a list of documents.
Parameters
documents – A sequence of Documents to be transformed.
Returns
A list of transformed Documents.
abstract transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document][source]¶
Transform a list of documents.
Parameters
documents – A sequence of Documents to be transformed.
Returns
A list of transformed Documents.
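Example (illustrative)
A second, simpler transformer sketch alongside the one above; names are ours:
from typing import Any, Sequence
from langchain.schema import Document
from langchain.schema.document import BaseDocumentTransformer

class LowercaseTransformer(BaseDocumentTransformer):
    """Lowercase each page_content, leaving metadata intact."""

    def transform_documents(
        self, documents: Sequence[Document], **kwargs: Any
    ) -> Sequence[Document]:
        return [
            Document(page_content=d.page_content.lower(), metadata=d.metadata)
            for d in documents
        ]

    async def atransform_documents(
        self, documents: Sequence[Document], **kwargs: Any
    ) -> Sequence[Document]:
        return self.transform_documents(documents, **kwargs)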
langchain.schema.runnable.base.RunnableEachBase¶
class langchain.schema.runnable.base.RunnableEachBase[source]¶
Bases: RunnableSerializable[List[Input], List[Output]]
A runnable that delegates calls to another runnable
with each element of the input sequence.
Use only if creating a new RunnableEach subclass with different __init__ args.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param bound: langchain.schema.runnable.base.Runnable[langchain.schema.runnable.utils.Input, langchain.schema.runnable.utils.Output] [Required]¶
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: List[Input], config: Optional[RunnableConfig] = None, **kwargs: Any) → List[Output][source]¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel][source]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel][source]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: List[Input], config: Optional[RunnableConfig] = None, **kwargs: Any) → List[Output][source]¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool[source]¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Any¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[List[langchain.schema.runnable.utils.Output]]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
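Example (illustrative)
RunnableEachBase is normally reached through the concrete RunnableEach, most easily via map() (documented above); a sketch:
from langchain.schema.runnable import RunnableLambda

double = RunnableLambda(lambda x: x * 2)
each = double.map()  # a RunnableEach wrapping `double`
print(each.invoke([1, 2, 3]))  # [2, 4, 6]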
langchain.schema.output_parser.StrOutputParser¶
class langchain.schema.output_parser.StrOutputParser[source]¶
Bases: BaseTransformOutputParser[str]
OutputParser that parses LLMResult into the most likely string.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: str | langchain.schema.messages.BaseMessage, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → T¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
async aparse(text: str) → T¶
Parse a single string model output into some structure.
Parameters
text – String output of a language model.
Returns
Structured output.
async aparse_result(result: List[Generation], *, partial: bool = False) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Union[str, BaseMessage]], config: Optional[RunnableConfig] = None, **kwargs: Any) → AsyncIterator[T]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
classmethod from_orm(obj: Any) → Model¶
get_format_instructions() → str¶
Instructions on how the LLM output should be formatted.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None) → T¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool[source]¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
parse(text: str) → str[source]¶
Returns the input text with no changes.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
parse_result(result: List[Generation], *, partial: bool = False) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of a language model.
prompt – Input PromptValue.
Returns
Structured output
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Union[str, BaseMessage]], config: Optional[RunnableConfig] = None, **kwargs: Any) → Iterator[T]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
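Example (a minimal sketch using RunnableLambda; the always-failing primary runnable is an illustrative assumption):
from langchain.schema.runnable import RunnableLambda

def primary(x: int) -> int:
    # Illustrative primary runnable that always fails.
    raise RuntimeError("primary failed")

chain = RunnableLambda(primary).with_fallbacks(
    [RunnableLambda(lambda x: x + 1)],
    exceptions_to_handle=(RuntimeError,),
)
chain.invoke(1)  # primary raises, fallback handles the input and returns 2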
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
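Example (a minimal sketch; the print-based listeners are illustrative):
from langchain.schema.runnable import RunnableLambda

chain = RunnableLambda(lambda x: x * 2).with_listeners(
    on_start=lambda run: print("started", run.id),
    on_end=lambda run: print("finished", run.id),
)
chain.invoke(3)  # prints the run id before and after, then returns 6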
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Any¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain.schema.output_parser.T]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
Examples using StrOutputParser¶
Facebook Messenger
Chat loaders
iMessage
OpaquePrompts
Fallbacks
MultiVector Retriever
First we add a step to load memory
sql_db.md
prompt_llm_parser.md
multiple_chains.md
Code writing
Using tools
Configure Runnable traces
langchain.schema.cache.BaseCache¶
class langchain.schema.cache.BaseCache[source]¶
Base interface for cache.
Methods
__init__()
clear(**kwargs)
Clear cache that can take additional keyword arguments.
lookup(prompt, llm_string)
Look up based on prompt and llm_string.
update(prompt, llm_string, return_val)
Update cache based on prompt and llm_string.
__init__()¶
abstract clear(**kwargs: Any) → None[source]¶
Clear cache that can take additional keyword arguments.
abstract lookup(prompt: str, llm_string: str) → Optional[Sequence[Generation]][source]¶
Look up based on prompt and llm_string.
abstract update(prompt: str, llm_string: str, return_val: Sequence[Generation]) → None[source]¶
Update cache based on prompt and llm_string.
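Example (a minimal in-memory sketch of the interface; the DictCache name is an illustrative assumption, not a class shipped by the library):
from typing import Any, Dict, Optional, Sequence, Tuple
from langchain.schema import Generation
from langchain.schema.cache import BaseCache

class DictCache(BaseCache):
    def __init__(self) -> None:
        # Keyed by (prompt, llm_string) so different models don't collide.
        self._store: Dict[Tuple[str, str], Sequence[Generation]] = {}

    def lookup(self, prompt: str, llm_string: str) -> Optional[Sequence[Generation]]:
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: Sequence[Generation]) -> None:
        self._store[(prompt, llm_string)] = return_val

    def clear(self, **kwargs: Any) -> None:
        self._store.clear()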
langchain.schema.messages.AIMessage¶
class langchain.schema.messages.AIMessage[source]¶
Bases: BaseMessage
A Message from an AI.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
Any additional information.
param content: Union[str, List[Union[str, Dict]]] [Required]¶
The string contents of the message.
param example: bool = False¶
Whether this Message is being passed in to the model as part of an example
conversation.
param type: Literal['ai'] = 'ai'¶
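Example (a minimal sketch; the message content is illustrative):
from langchain.schema.messages import AIMessage

msg = AIMessage(content="The answer is 42.")
msg.type  # -> 'ai'
msg.additional_kwargs  # -> {}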
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
Examples using AIMessage¶
Twitter (via Apify)
Zep
Zep Memory
SQL Chat Message History
Anthropic
🚅 LiteLLM
Konko
OpenAI
JinaChat
Set env var OPENAI_API_KEY or load from a .env file:
CAMEL Role-Playing Autonomous Cooperative Agents
Multi-agent decentralized speaker selection
Multi-agent authoritarian speaker selection
Multi-Player Dungeons & Dragons
Simulated Environment: Gymnasium
Agent Debates with Tools
Prompt pipelining
langchain.schema.callbacks.tracers.schemas.TracerSessionV1Base¶
class langchain.schema.callbacks.tracers.schemas.TracerSessionV1Base[source]¶
Bases: BaseModel
Base class for TracerSessionV1.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param extra: Optional[Dict[str, Any]] = None¶
param name: Optional[str] = None¶
param start_time: datetime.datetime [Optional]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
langchain.schema.memory.BaseMemory¶
class langchain.schema.memory.BaseMemory[source]¶
Bases: Serializable, ABC
Abstract base class for memory in Chains.
Memory refers to state in Chains. Memory can be used to store information about past executions of a Chain and inject that information into the inputs of
future executions of the Chain. For example, for conversational Chains Memory
can be used to store conversations and automatically add them to future model
prompts so that the model has the necessary context to respond coherently to
the latest input.
Example
class SimpleMemory(BaseMemory):
    memories: Dict[str, Any] = dict()

    @property
    def memory_variables(self) -> List[str]:
        return list(self.memories.keys())

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        return self.memories

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        pass

    def clear(self) -> None:
        pass
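A usage sketch for the class above (the stored values are illustrative):
memory = SimpleMemory(memories={"user_name": "Alice"})
memory.memory_variables  # -> ["user_name"]
memory.load_memory_variables({})  # -> {"user_name": "Alice"}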
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
abstract clear() → None[source]¶
Clear memory contents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
abstract load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]¶
Return key-value pairs given the text input to the chain.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
abstract save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]¶
Save the context of this chain run to memory.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
abstract property memory_variables: List[str]¶
The string keys this memory class will add to chain inputs.
Examples using BaseMemory¶
Custom Memory
langchain.schema.runnable.utils.ConfigurableFieldMultiOption¶
class langchain.schema.runnable.utils.ConfigurableFieldMultiOption(id: str, options: Mapping[str, Any], default: Sequence[str], name: Optional[str] = None, description: Optional[str] = None)[source]¶
A field that can be configured by the user with multiple default values.
Create new instance of ConfigurableFieldMultiOption(id, options, default, name, description)
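Example (a minimal sketch; the field id, option names, and values are illustrative assumptions):
from langchain.schema.runnable.utils import ConfigurableFieldMultiOption

field = ConfigurableFieldMultiOption(
    id="tools",
    options={"search": "search-tool", "math": "math-tool"},
    default=["search"],
    name="Enabled tools",
    description="Which tools the runnable may use",
)
field.default  # -> ["search"]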
Attributes
default
Alias for field number 2
description
Alias for field number 4
id
Alias for field number 0
name
Alias for field number 3
options
Alias for field number 1
Methods
__init__()
count(value, /)
Return number of occurrences of value.
index(value[, start, stop])
Return first index of value.
__init__()¶
count(value, /)¶
Return number of occurrences of value.
index(value, start=0, stop=9223372036854775807, /)¶
Return first index of value.
Raises ValueError if the value is not present.
langchain.schema.runnable.base.RunnableMap¶
langchain.schema.runnable.base.RunnableMap¶
alias of RunnableParallel
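Example (a minimal sketch; plain callables passed as keyword arguments are assumed to be coerced to runnables, as RunnableParallel does):
from langchain.schema.runnable import RunnableMap

mapper = RunnableMap(original=lambda x: x, doubled=lambda x: x * 2)
mapper.invoke(3)  # -> {"original": 3, "doubled": 6}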
Examples using RunnableMap¶
OpaquePrompts
interface.md
First we add a step to load memory
sql_db.md
prompt_llm_parser.md
Adding memory
multiple_chains.md
langchain.schema.runnable.utils.add¶
langchain.schema.runnable.utils.add(addables: Iterable[Addable]) → Optional[Addable][source]¶
Add a sequence of addable objects together.
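Example (a minimal sketch; here the addables are plain strings, combined with +):
from langchain.schema.runnable.utils import add

add(["a", "b", "c"])  # -> "abc"; an empty iterable yields None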
langchain.schema.runnable.utils.accepts_config¶
langchain.schema.runnable.utils.accepts_config(callable: Callable[[...], Any]) → bool[source]¶
Check if a callable accepts a config argument.
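Example (a minimal sketch; the checked functions are illustrative):
from langchain.schema.runnable.config import RunnableConfig
from langchain.schema.runnable.utils import accepts_config

def with_config_arg(x: int, config: RunnableConfig) -> int:
    return x

accepts_config(with_config_arg)  # -> True
accepts_config(lambda x: x)  # -> False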
langchain.schema.runnable.base.RunnableBinding¶
class langchain.schema.runnable.base.RunnableBinding[source]¶
Bases: RunnableBindingBase[Input, Output]
A runnable that delegates calls to another runnable with a set of kwargs.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param bound: langchain.schema.runnable.base.Runnable[langchain.schema.runnable.utils.Input, langchain.schema.runnable.utils.Output] [Required]¶
param config: langchain.schema.runnable.config.RunnableConfig [Optional]¶
param config_factories: List[Callable[[langchain.schema.runnable.config.RunnableConfig], langchain.schema.runnable.config.RunnableConfig]] [Optional]¶
param custom_input_type: Optional[Any] = None¶
param custom_output_type: Optional[Any] = None¶
param kwargs: Mapping[str, Any] [Optional]¶
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Output¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Any) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.runnable.base.RunnableBinding.html |
687869cf286b-2 | Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output][source]¶
Bind arguments to a Runnable, returning a new Runnable.
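Example (a minimal sketch; it assumes bound kwargs are forwarded to the wrapped function, and the shout function is illustrative):
from langchain.schema.runnable import RunnableLambda

def shout(text: str, punctuation: str = ".") -> str:
    return text.upper() + punctuation

bound = RunnableLambda(shout).bind(punctuation="!")  # a RunnableBinding
bound.invoke("hello")  # -> "HELLO!"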
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Output¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Any) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output][source]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output][source]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(**kwargs: Any) → Runnable[Input, Output][source]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(input_type: Optional[Union[Type[Input], BaseModel]] = None, output_type: Optional[Union[Type[Output], BaseModel]] = None) → Runnable[Input, Output][source]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain.schema.runnable.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain.schema.runnable.utils.Output]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
langchain.schema.callbacks.tracers.schemas.TracerSessionV1¶
class langchain.schema.callbacks.tracers.schemas.TracerSessionV1[source]¶
Bases: TracerSessionV1Base
TracerSessionV1 schema.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param extra: Optional[Dict[str, Any]] = None¶
param id: int [Required]¶
param name: Optional[str] = None¶
param start_time: datetime.datetime [Optional]¶
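Example (a minimal sketch; the id and name values are illustrative assumptions):
from langchain.schema.callbacks.tracers.schemas import TracerSessionV1

session = TracerSessionV1(id=1, name="my-session")
session.start_time  # defaults to the time the session was created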
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
langchain.schema.agent.AgentActionMessageLog¶
class langchain.schema.agent.AgentActionMessageLog[source]¶
Bases: AgentAction
Override init to support instantiation by position for backward compat.
param log: str [Required]¶
Additional information to log about the action.
This log can be used in a few ways. First, it can be used to audit
what exactly the LLM predicted to lead to this (tool, tool_input).
Second, it can be used in future iterations to show the LLMs prior
thoughts. This is useful when (tool, tool_input) does not contain
full information about the LLM prediction (for example, any thought
before the tool/tool_input).
param message_log: Sequence[langchain.schema.messages.BaseMessage] [Required]¶
Similar to log, this can be used to pass along extra
information about what exact messages were predicted by the LLM
before parsing out the (tool, tool_input). This is again useful
if (tool, tool_input) cannot be used to fully recreate the LLM
prediction, and you need that LLM prediction (for future agent iteration).
Compared to log, this is useful when the underlying LLM is a
ChatModel (and therefore returns messages rather than a string).
param tool: str [Required]¶
The name of the Tool to execute.
param tool_input: Union[str, dict] [Required]¶
The input to pass in to the Tool.
param type: Literal['AgentActionMessageLog'] = 'AgentActionMessageLog'¶
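Example (a minimal sketch; the tool name, input, and predicted message are illustrative assumptions):
from langchain.schema.agent import AgentActionMessageLog
from langchain.schema.messages import AIMessage

prediction = AIMessage(content="I should look this up.")
action = AgentActionMessageLog(
    tool="search",
    tool_input={"query": "weather in SF"},
    log="I should look this up.",
    message_log=[prediction],
)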
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Return whether or not the class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
langchain.schema.callbacks.manager.RunManager¶
class langchain.schema.callbacks.manager.RunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Sync Run Manager.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_retry(retry_state, **kwargs)
Run on a retry event.
on_text(text, **kwargs)
Run when text is received. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.RunManager.html |
__init__(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
on_retry(retry_state: RetryCallState, **kwargs: Any) → None[source]¶
Run on a retry event.
on_text(text: str, **kwargs: Any) → Any[source]¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.RunManager.html |
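A minimal sketch of exercising a RunManager directly; the noop manager is a convenient stand-in when no callbacks are configured, since its handler list is empty and events are simply dropped:
from langchain.schema.callbacks.manager import RunManager

# The noop manager carries a fresh run_id and an empty handler list,
# so callback events are accepted but dispatched to no handlers.
manager = RunManager.get_noop_manager()
manager.on_text("intermediate output")  # no handlers registered: a no-op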
langchain.schema.callbacks.manager.AsyncCallbackManagerForChainRun¶
class langchain.schema.callbacks.manager.AsyncCallbackManagerForChainRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Async callback manager for chain run.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_child([tag])
Get a child callback manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_agent_action(action, **kwargs)
Run when agent action is received.
on_agent_finish(finish, **kwargs)
Run when agent finish is received.
on_chain_end(outputs, **kwargs)
Run when chain ends running.
on_chain_error(error, **kwargs)
Run when chain errors.
on_retry(retry_state, **kwargs) | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.AsyncCallbackManagerForChainRun.html |
Run on a retry event.
on_text(text, **kwargs)
Run when text is received.
__init__(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
get_child(tag: Optional[str] = None) → AsyncCallbackManager¶
Get a child callback manager.
Parameters
tag (str, optional) – The tag for the child callback manager.
Defaults to None.
Returns
The child callback manager.
Return type
AsyncCallbackManager
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
async on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Run when agent action is received.
Parameters | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.AsyncCallbackManagerForChainRun.html |
action (AgentAction) – The agent action.
Returns
The result of the callback.
Return type
Any
async on_agent_finish(finish: AgentFinish, **kwargs: Any) → Any[source]¶
Run when agent finish is received.
Parameters
finish (AgentFinish) – The agent finish.
Returns
The result of the callback.
Return type
Any
async on_chain_end(outputs: Union[Dict[str, Any], Any], **kwargs: Any) → None[source]¶
Run when chain ends running.
Parameters
outputs (Union[Dict[str, Any], Any]) – The outputs of the chain.
async on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when chain errors.
Parameters
error (Exception or KeyboardInterrupt) – The error.
async on_retry(retry_state: RetryCallState, **kwargs: Any) → None¶
Run on a retry event.
async on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
Examples using AsyncCallbackManagerForChainRun¶
Custom chain | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.AsyncCallbackManagerForChainRun.html |
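As the "Custom chain" reference above suggests, this manager typically arrives as the run_manager argument of a custom chain's _acall. A minimal sketch follows; the ShoutChain class and its field names are hypothetical, not part of the library:
import asyncio
from typing import Any, Dict, List, Optional

from langchain.chains.base import Chain
from langchain.schema.callbacks.manager import (
    AsyncCallbackManagerForChainRun,
    CallbackManagerForChainRun,
)

class ShoutChain(Chain):
    """Hypothetical chain that upper-cases its input."""

    input_key: str = "text"
    output_key: str = "shout"

    @property
    def input_keys(self) -> List[str]:
        return [self.input_key]

    @property
    def output_keys(self) -> List[str]:
        return [self.output_key]

    def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        result = inputs[self.input_key].upper()
        if run_manager:
            run_manager.on_text(result)  # report intermediate text to handlers
        return {self.output_key: result}

    async def _acall(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        result = inputs[self.input_key].upper()
        if run_manager:
            # The async manager's callbacks are awaited.
            await run_manager.on_text(result)
        return {self.output_key: result}

print(asyncio.run(ShoutChain().acall({"text": "hello"})))
# {'text': 'hello', 'shout': 'HELLO'}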
langchain.schema.callbacks.manager.ParentRunManager¶
class langchain.schema.callbacks.manager.ParentRunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Sync Parent Run Manager.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_child([tag])
Get a child callback manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_retry(retry_state, **kwargs)
Run on a retry event.
on_text(text, **kwargs)
Run when text is received. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.ParentRunManager.html |
__init__(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
get_child(tag: Optional[str] = None) → CallbackManager[source]¶
Get a child callback manager.
Parameters
tag (str, optional) – The tag for the child callback manager.
Defaults to None.
Returns
The child callback manager.
Return type
CallbackManager
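A minimal sketch of the parent/child pattern, assuming a chain-style run started by hand via CallbackManager.configure(); the child manager can be passed as the callbacks of any nested invocation so that it is traced under the parent run (nested_runnable is a hypothetical placeholder):
from langchain.schema.callbacks.manager import CallbackManager

parent = CallbackManager.configure().on_chain_start({"name": "outer"}, {"x": 1})
child = parent.get_child(tag="inner")
# nested_runnable.invoke(value, config={"callbacks": child})
parent.on_chain_end({"y": 2})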
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
on_retry(retry_state: RetryCallState, **kwargs: Any) → None¶
Run on a retry event.
on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.ParentRunManager.html |
Returns
The result of the callback.
Return type
Any | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.manager.ParentRunManager.html |
langchain.schema.output_parser.BaseCumulativeTransformOutputParser¶
class langchain.schema.output_parser.BaseCumulativeTransformOutputParser[source]¶
Bases: BaseTransformOutputParser[T]
Base class for an output parser that can handle streaming input.
param diff: bool = False¶
In streaming mode, whether to yield diffs between the previous and current
parsed output, or just the current parsed output.
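A minimal sketch of a concrete subclass (hypothetical, not part of the library). It relies on the private _diff(prev, next) hook that the streaming implementation calls when diff=True; that hook is an implementation detail of the library rather than part of the documented surface above:
from typing import List, Optional

from langchain.schema.output_parser import BaseCumulativeTransformOutputParser

class CommaListParser(BaseCumulativeTransformOutputParser[List[str]]):
    """Hypothetical parser: a growing comma-separated string -> list of items."""

    def parse(self, text: str) -> List[str]:
        return [part.strip() for part in text.split(",") if part.strip()]

    def _diff(self, prev: Optional[List[str]], next: List[str]) -> List[str]:
        # With diff=True, yield only the items added since the last parse.
        return next[len(prev or []):]

parser = CommaListParser(diff=True)
for update in parser.transform(iter(["red,", " green,", " blue"])):
    print(update)  # ['red'] then ['green'] then ['blue']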
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: str | langchain.schema.messages.BaseMessage, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → T¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
async aparse(text: str) → T¶
Parse a single string model output into some structure.
Parameters
text – String output of a language model.
Returns
Structured output.
async aparse_result(result: List[Generation], *, partial: bool = False) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed | lang/api.python.langchain.com/en/latest/schema/langchain.schema.output_parser.BaseCumulativeTransformOutputParser.html |
to be different candidate outputs for a single model input.
Returns
Structured output.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
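A sketch of consuming the log stream, reusing the hypothetical CommaListParser from the sketch above:
import asyncio

async def watch() -> None:
    # diff=True (the default) yields RunLogPatch objects; each carries
    # jsonpatch ops that can be applied in order to rebuild the run state.
    async for patch in CommaListParser().astream_log("red, green, blue"):
        print(patch)

asyncio.run(watch())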
async atransform(input: AsyncIterator[Union[str, BaseMessage]], config: Optional[RunnableConfig] = None, **kwargs: Any) → AsyncIterator[T]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.output_parser.BaseCumulativeTransformOutputParser.html |
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values | lang/api.python.langchain.com/en/latest/schema/langchain.schema.output_parser.BaseCumulativeTransformOutputParser.html |
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
classmethod from_orm(obj: Any) → Model¶
get_format_instructions() → str¶
Instructions on how the LLM output should be formatted.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method makes it possible to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”] | lang/api.python.langchain.com/en/latest/schema/langchain.schema.output_parser.BaseCumulativeTransformOutputParser.html |
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method makes it possible to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
invoke(input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None) → T¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
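For example, again using the hypothetical CommaListParser sketched above, config entries such as tags and metadata travel with the run:
result = CommaListParser().invoke(
    "red, green",
    config={"tags": ["demo"], "metadata": {"source": "docs"}},
)
print(result)  # ['red', 'green']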
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict(). | lang/api.python.langchain.com/en/latest/schema/langchain.schema.output_parser.BaseCumulativeTransformOutputParser.html |
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
abstract parse(text: str) → T¶
Parse a single string model output into some structure.
Parameters
text – String output of a language model.
Returns
Structured output.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
parse_result(result: List[Generation], *, partial: bool = False) → T¶
Parse a list of candidate model Generations into a specific format.
The return value is parsed from only the first Generation in the result, which is assumed to be the highest-likelihood Generation.
Parameters
result – A list of Generations to be parsed. The Generations are assumed
to be different candidate outputs for a single model input.
Returns
Structured output.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Parse the output of an LLM call with the input prompt for context.
The prompt is largely provided in the event the OutputParser wants | lang/api.python.langchain.com/en/latest/schema/langchain.schema.output_parser.BaseCumulativeTransformOutputParser.html |
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – String output of a language model.
prompt – Input PromptValue.
Returns
Structured output
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Union[str, BaseMessage]], config: Optional[RunnableConfig] = None, **kwargs: Any) → Iterator[T]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ | lang/api.python.langchain.com/en/latest/schema/langchain.schema.output_parser.BaseCumulativeTransformOutputParser.html |
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
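A minimal sketch using RunnableLambda stand-ins (both functions are hypothetical) to show the failover behavior:
from langchain.schema.runnable import RunnableLambda

def flaky(x: str) -> str:
    raise ValueError("boom")  # always fails, to trigger the fallback

chain = RunnableLambda(flaky).with_fallbacks(
    [RunnableLambda(lambda x: x.upper())]
)
chain.invoke("hi")  # 'HI' -- the fallback handled the ValueError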
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
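A minimal sketch (the failing function is hypothetical) showing a transient error absorbed by retries:
from langchain.schema.runnable import RunnableLambda

attempts = 0

def sometimes_fails(x: str) -> str:
    global attempts
    attempts += 1
    if attempts < 3:
        raise ValueError("transient error")
    return x.upper()

retrying = RunnableLambda(sometimes_fails).with_retry(
    retry_if_exception_type=(ValueError,),
    stop_after_attempt=3,
)
print(retrying.invoke("hi"))  # 'HI' on the third attempt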
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ | lang/api.python.langchain.com/en/latest/schema/langchain.schema.output_parser.BaseCumulativeTransformOutputParser.html |
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Any¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain.schema.output_parser.T]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.output_parser.BaseCumulativeTransformOutputParser.html |
langchain.schema.callbacks.tracers.langchain.log_error_once¶
langchain.schema.callbacks.tracers.langchain.log_error_once(method: str, exception: Exception) → None[source]¶
Log an error once. | lang/api.python.langchain.com/en/latest/schema/langchain.schema.callbacks.tracers.langchain.log_error_once.html |
langchain.schema.runnable.passthrough.RunnablePassthrough¶
class langchain.schema.runnable.passthrough.RunnablePassthrough[source]¶
Bases: RunnableSerializable[Other, Other]
A runnable that passes inputs through unchanged, or with additional keys.
This runnable behaves almost like the identity function, except that it
can be configured to add additional keys to the output, if the input is a
dict.
The examples below demonstrate how this runnable works, using a few simple
chains. The chains rely on simple lambdas to make the examples easy to execute
and experiment with.
Examples
from langchain.schema.runnable import (
    RunnableLambda,
    RunnableParallel,
    RunnablePassthrough,
)

runnable = RunnableParallel(
    origin=RunnablePassthrough(),
    modified=lambda x: x + 1
)

runnable.invoke(1)  # {'origin': 1, 'modified': 2}

def fake_llm(prompt: str) -> str:  # Fake LLM for the example
    return "completion"

chain = RunnableLambda(fake_llm) | {
    'original': RunnablePassthrough(),  # Original LLM output
    'parsed': lambda text: text[::-1]   # Parsing logic
}

chain.invoke('hello')  # {'original': 'completion', 'parsed': 'noitelpmoc'}
In some cases, it may be useful to pass the input through while adding some
keys to the output. In this case, you can use the assign method:
from langchain.schema.runnable import RunnablePassthrough

def fake_llm(prompt: str) -> str:  # Fake LLM for the example
    return "completion"

runnable = {
    'llm1': fake_llm,
    'llm2': fake_llm,
} | RunnablePassthrough.assign(
    total_chars=lambda inputs: len(inputs['llm1'] + inputs['llm2'])
)

runnable.invoke('hello')
# {'llm1': 'completion', 'llm2': 'completion', 'total_chars': 20}