Start a trace for an LLM run.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → Run¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, **kwargs: Any) → Run¶
Run on a retry event.
on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on arbitrary text.
on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → Run¶
End a trace for a tool run.
on_tool_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for a tool run.
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for a tool run.
langchain.callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler¶
class langchain.callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler(*, answer_prefix_tokens: Optional[List[str]] = None, strip_tokens: bool = True, stream_prefix: bool = False)[source]¶
Callback handler for streaming in agents.
Only works with agents using LLMs that support streaming.
Only the final output of the agent will be streamed.
Instantiate FinalStreamingStdOutCallbackHandler.
Parameters
answer_prefix_tokens – Token sequence that prefixes the answer.
Default is ["Final", "Answer", ":"].
strip_tokens – Whether to ignore whitespace and newlines when comparing
answer_prefix_tokens to the most recent tokens (to determine whether the
answer has been reached).
stream_prefix – Whether the answer prefix itself should also be streamed.
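Example (an illustrative sketch; the OpenAI model and its settings are assumptions, not part of this reference):
from langchain.callbacks.streaming_stdout_final_only import (
    FinalStreamingStdOutCallbackHandler,
)
from langchain.llms import OpenAI

# Stream only the agent's final answer to stdout; tokens that arrive
# before the answer prefix is matched are suppressed.
llm = OpenAI(
    streaming=True,
    callbacks=[FinalStreamingStdOutCallbackHandler(strip_tokens=True)],
    temperature=0,
)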
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__(*[, answer_prefix_tokens, ...])
Instantiate FinalStreamingStdOutCallbackHandler.
append_to_last_tokens(token)
check_if_answer_reached()
on_agent_action(action, **kwargs)
Run on agent action.
on_agent_finish(finish, **kwargs)
Run on agent end.
on_chain_end(outputs, **kwargs)
Run when chain ends running.
on_chain_error(error, **kwargs)
Run when chain errors.
on_chain_start(serialized, inputs, **kwargs)
Run when chain starts running.
on_chat_model_start(serialized, messages, ...)
Run when LLM starts running.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Run when LLM errors.
on_llm_new_token(token, **kwargs)
Run on new LLM token.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, **kwargs)
Run on arbitrary text.
on_tool_end(output, **kwargs)
Run when tool ends running.
on_tool_error(error, **kwargs)
Run when tool errors.
on_tool_start(serialized, input_str, **kwargs)
Run when tool starts running.
__init__(*, answer_prefix_tokens: Optional[List[str]] = None, strip_tokens: bool = True, stream_prefix: bool = False) → None[source]¶
Instantiate FinalStreamingStdOutCallbackHandler.
Parameters
answer_prefix_tokens – Token sequence that prefixes the answer.
Default is ["Final", "Answer", ":"].
strip_tokens – Whether to ignore whitespace and newlines when comparing
answer_prefix_tokens to the most recent tokens (to determine whether the
answer has been reached).
stream_prefix – Whether the answer prefix itself should also be streamed.
append_to_last_tokens(token: str) → None[source]¶
check_if_answer_reached() → bool[source]¶
on_agent_action(action: AgentAction, **kwargs: Any) → Any¶
Run on agent action.
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None¶
Run on agent end.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None¶
Run when chain ends running.
on_chain_error(error: BaseException, **kwargs: Any) → None¶
Run when chain errors.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None¶
Run when chain starts running.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any) → None¶
Run when LLM starts running.
on_llm_end(response: LLMResult, **kwargs: Any) → None¶
Run when LLM ends running.
on_llm_error(error: BaseException, **kwargs: Any) → None¶
Run when LLM errors.
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Run on new LLM token. Only available when streaming is enabled.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Run when LLM starts running.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
on_text(text: str, **kwargs: Any) → None¶
Run on arbitrary text.
on_tool_end(output: str, **kwargs: Any) → None¶
Run when tool ends running.
on_tool_error(error: BaseException, **kwargs: Any) → None¶
Run when tool errors.
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None¶
Run when tool starts running.
Examples using FinalStreamingStdOutCallbackHandler¶
Streaming final agent output
langchain_community.callbacks.llmonitor_callback.UserContextManager¶
class langchain_community.callbacks.llmonitor_callback.UserContextManager(user_id: str, user_props: Any = None)[source]¶
Context manager for LLMonitor user context.
Methods
__init__(user_id[, user_props])
__init__(user_id: str, user_props: Any = None) → None[source]¶
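Example (an illustrative sketch; the user id, props, and the work done inside the block are assumptions):
from langchain_community.callbacks.llmonitor_callback import UserContextManager

# Any LLMonitor-tracked calls made inside the block are attributed to this user.
with UserContextManager(user_id="user-123", user_props={"name": "Ada"}):
    pass  # run chains or LLM calls here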
langchain_core.callbacks.base.ToolManagerMixin¶
class langchain_core.callbacks.base.ToolManagerMixin[source]¶
Mixin for tool callbacks.
Methods
__init__()
on_tool_end(output, *, run_id[, parent_run_id])
Run when tool ends running.
on_tool_error(error, *, run_id[, parent_run_id])
Run when tool errors.
__init__()¶
on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when tool ends running.
on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when tool errors.
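Example (a hypothetical handler built on this mixin; the print-based bodies are assumptions for the sketch):
from typing import Any, Optional
from uuid import UUID

from langchain_core.callbacks.base import ToolManagerMixin

class PrintingToolCallbacks(ToolManagerMixin):
    # Called with the tool's string output when it finishes.
    def on_tool_end(self, output: str, *, run_id: UUID,
                    parent_run_id: Optional[UUID] = None, **kwargs: Any) -> Any:
        print(f"tool {run_id} finished: {output!r}")

    # Called with the raised exception when the tool fails.
    def on_tool_error(self, error: BaseException, *, run_id: UUID,
                      parent_run_id: Optional[UUID] = None, **kwargs: Any) -> Any:
        print(f"tool {run_id} failed: {error}")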
langchain_core.callbacks.manager.AsyncCallbackManagerForChainGroup¶
class langchain_core.callbacks.manager.AsyncCallbackManagerForChainGroup(handlers: List[BaseCallbackHandler], inheritable_handlers: Optional[List[BaseCallbackHandler]] = None, parent_run_id: Optional[UUID] = None, *, parent_run_manager: AsyncCallbackManagerForChainRun, **kwargs: Any)[source]¶
Async callback manager for the chain group.
Initialize callback manager.
Attributes
is_async
Return whether the handler is async.
Methods
__init__(handlers[, inheritable_handlers, ...])
Initialize callback manager.
add_handler(handler[, inherit])
Add a handler to the callback manager.
add_metadata(metadata[, inherit])
add_tags(tags[, inherit])
configure([inheritable_callbacks, ...])
Configure the async callback manager.
copy()
Copy the callback manager.
on_chain_end(outputs, **kwargs)
Run when traced chain group ends.
on_chain_error(error, **kwargs)
Run when chain errors.
on_chain_start(serialized, inputs[, run_id])
Run when chain starts running.
on_chat_model_start(serialized, messages, ...)
Run when LLM starts running.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts running.
on_retriever_start(serialized, query[, ...])
Run when retriever starts running.
on_tool_start(serialized, input_str[, ...])
Run when tool starts running.
remove_handler(handler)
Remove a handler from the callback manager.
remove_metadata(keys)
remove_tags(tags)
set_handler(handler[, inherit])
Set handler as the only handler on the callback manager.
set_handlers(handlers[, inherit])
Set handlers as the only handlers on the callback manager.
__init__(handlers: List[BaseCallbackHandler], inheritable_handlers: Optional[List[BaseCallbackHandler]] = None, parent_run_id: Optional[UUID] = None, *, parent_run_manager: AsyncCallbackManagerForChainRun, **kwargs: Any) → None[source]¶
Initialize callback manager.
add_handler(handler: BaseCallbackHandler, inherit: bool = True) → None¶
Add a handler to the callback manager.
add_metadata(metadata: Dict[str, Any], inherit: bool = True) → None¶
add_tags(tags: List[str], inherit: bool = True) → None¶
classmethod configure(inheritable_callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, local_callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, verbose: bool = False, inheritable_tags: Optional[List[str]] = None, local_tags: Optional[List[str]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None, local_metadata: Optional[Dict[str, Any]] = None) → AsyncCallbackManager¶
Configure the async callback manager.
Parameters
inheritable_callbacks (Optional[Callbacks], optional) – The inheritable
callbacks. Defaults to None.
local_callbacks (Optional[Callbacks], optional) – The local callbacks.
Defaults to None.
verbose (bool, optional) – Whether to enable verbose mode. Defaults to False.
inheritable_tags (Optional[List[str]], optional) – The inheritable tags.
Defaults to None.
local_tags (Optional[List[str]], optional) – The local tags.
Defaults to None.
inheritable_metadata (Optional[Dict[str, Any]], optional) – The inheritable
metadata. Defaults to None.
local_metadata (Optional[Dict[str, Any]], optional) – The local metadata.
Defaults to None.
Returns
The configured async callback manager.
Return type
AsyncCallbackManager
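Example (a minimal sketch of calling configure; the tag and metadata values are arbitrary assumptions):
from langchain_core.callbacks.manager import AsyncCallbackManager

manager = AsyncCallbackManager.configure(
    verbose=True,
    inheritable_tags=["experiment-1"],
    local_metadata={"stage": "dev"},
)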
copy() → AsyncCallbackManagerForChainGroup[source]¶
Copy the callback manager.
async on_chain_end(outputs: Union[Dict[str, Any], Any], **kwargs: Any) → None[source]¶
Run when traced chain group ends.
Parameters
outputs (Union[Dict[str, Any], Any]) – The outputs of the chain.
async on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when chain errors.
Parameters
error (Exception or KeyboardInterrupt) – The error.
async on_chain_start(serialized: Dict[str, Any], inputs: Union[Dict[str, Any], Any], run_id: Optional[UUID] = None, **kwargs: Any) → AsyncCallbackManagerForChainRun¶
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) – The serialized chain.
inputs (Union[Dict[str, Any], Any]) – The inputs to the chain.
run_id (UUID, optional) – The ID of the run. Defaults to None.
Returns
The async callback manager for the chain run.
Return type
AsyncCallbackManagerForChainRun
async on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any) → List[AsyncCallbackManagerForLLMRun]¶
Run when LLM starts running.
Parameters
serialized (Dict[str, Any]) – The serialized LLM.
messages (List[List[BaseMessage]]) – The list of messages.
run_id (UUID, optional) – The ID of the run. Defaults to None.
Returns
The list of async callback managers, one for each LLM Run
corresponding to each inner message list.
Return type
List[AsyncCallbackManagerForLLMRun]
async on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → List[AsyncCallbackManagerForLLMRun]¶
Run when LLM starts running.
Parameters
serialized (Dict[str, Any]) – The serialized LLM.
prompts (List[str]) – The list of prompts.
run_id (UUID, optional) – The ID of the run. Defaults to None.
Returns
The list of async callback managers, one for each LLM Run corresponding
to each prompt.
Return type
List[AsyncCallbackManagerForLLMRun]
async on_retriever_start(serialized: Dict[str, Any], query: str, run_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, **kwargs: Any) → AsyncCallbackManagerForRetrieverRun¶
Run when retriever starts running.
async on_tool_start(serialized: Dict[str, Any], input_str: str, run_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, **kwargs: Any) → AsyncCallbackManagerForToolRun¶
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) – The serialized tool.
input_str (str) – The input to the tool.
run_id (UUID, optional) – The ID of the run. Defaults to None.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
Returns
The async callback manager for the tool run.
Return type
AsyncCallbackManagerForToolRun
remove_handler(handler: BaseCallbackHandler) → None¶
Remove a handler from the callback manager.
remove_metadata(keys: List[str]) → None¶
remove_tags(tags: List[str]) → None¶
set_handler(handler: BaseCallbackHandler, inherit: bool = True) → None¶
Set handler as the only handler on the callback manager.
set_handlers(handlers: List[BaseCallbackHandler], inherit: bool = True) → None¶
Set handlers as the only handlers on the callback manager.
langchain_core.callbacks.manager.RunManager¶
class langchain_core.callbacks.manager.RunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Sync Run Manager.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_retry(retry_state, **kwargs)
Run on a retry event.
on_text(text, **kwargs)
Run when text is received.
__init__(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
on_retry(retry_state: RetryCallState, **kwargs: Any) → None[source]¶
Run on a retry event.
on_text(text: str, **kwargs: Any) → Any[source]¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
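Example (a small sketch: the no-op manager can stand in wherever a run manager is required but no callbacks are wanted):
from langchain_core.callbacks.manager import RunManager

noop = RunManager.get_noop_manager()
noop.on_text("this text is received but triggers no handlers")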
langchain_community.callbacks.tracers.wandb.RunProcessor¶
class langchain_community.callbacks.tracers.wandb.RunProcessor(wandb_module: Any, trace_module: Any)[source]¶
Handles the conversion of LangChain Runs into a WBTraceTree.
Methods
__init__(wandb_module, trace_module)
build_tree(runs)
Builds a nested dictionary from a list of runs.
flatten_run(run)
Utility to flatten a nested run object into a list of runs.
modify_serialized_iterative(runs[, ...])
Utility to modify the serialized field of a list of runs dictionaries.
process_model(run)
Utility to process a run for wandb model_dict serialization.
process_span(run)
Converts a LangChain Run into a W&B Trace Span.
truncate_run_iterative(runs[, keep_keys])
Utility to truncate a list of run dictionaries to only keep the specified keys in each run.
__init__(wandb_module: Any, trace_module: Any)[source]¶
build_tree(runs: List[Dict[str, Any]]) → Dict[str, Any][source]¶
Builds a nested dictionary from a list of runs.
:param runs: The list of runs to build the tree from.
:return: The nested dictionary representing the langchain Run in a tree
structure compatible with WBTraceTree.
flatten_run(run: Dict[str, Any]) → List[Dict[str, Any]][source]¶
Utility to flatten a nested run object into a list of runs.
:param run: The base run to flatten.
:return: The flattened list of runs.
modify_serialized_iterative(runs: List[Dict[str, Any]], exact_keys: Tuple[str, ...] = (), partial_keys: Tuple[str, ...] = ()) → List[Dict[str, Any]][source]¶
Utility to modify the serialized field of a list of run dictionaries. It:
removes any keys that match the exact_keys and any keys that contain any of the partial_keys;
recursively moves the dictionaries under the kwargs key to the top level;
changes the "id" field to a string "_kind" field that tells WBTraceTree how to visualize the run; and
promotes the "serialized" field to the top level.
Parameters
runs – The list of runs to modify.
exact_keys – A tuple of keys to remove from the serialized field.
partial_keys – A tuple of partial keys to remove from the serialized
field.
Returns
The modified list of runs.
process_model(run: Run) → Optional[Dict[str, Any]][source]¶
Utility to process a run for wandb model_dict serialization.
:param run: The run to process.
:return: The converted model_dict to pass to WBTraceTree.
process_span(run: Run) → Optional['Span'][source]¶
Converts a LangChain Run into a W&B Trace Span.
:param run: The LangChain Run to convert.
:return: The converted W&B Trace Span.
truncate_run_iterative(runs: List[Dict[str, Any]], keep_keys: Tuple[str, ...] = ()) → List[Dict[str, Any]][source]¶
Utility to truncate a list of run dictionaries to only keep the specified keys in each run.
Parameters
runs – The list of runs to truncate.
keep_keys – The keys to keep in each run.
Returns
The truncated list of runs.
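Example (a sketch of constructing a RunProcessor directly, assuming wandb is installed and exposes its trace-tree module at the path below; in practice WandbTracer builds this object and supplies the module handles itself):
import wandb
from wandb.sdk.data_types import trace_tree  # assumed module path

from langchain_community.callbacks.tracers.wandb import RunProcessor

# Converts finished LangChain runs into a WBTraceTree for the W&B UI.
processor = RunProcessor(wandb_module=wandb, trace_module=trace_tree)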
langchain_community.callbacks.clearml_callback.import_clearml¶
langchain_community.callbacks.clearml_callback.import_clearml() → Any[source]¶
Import the clearml python package and raise an error if it is not installed.
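Example (usage is a guarded import; the Task.init call is an illustrative assumption about what you might do with the returned module):
from langchain_community.callbacks.clearml_callback import import_clearml

clearml = import_clearml()  # raises an error if clearml is not installed
task = clearml.Task.init(project_name="langchain_callback_demo")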
langchain.callbacks.tracers.logging.LoggingCallbackHandler¶
class langchain.callbacks.tracers.logging.LoggingCallbackHandler(logger: Logger, log_level: int = 20, extra: Optional[dict] = None, **kwargs: Any)[source]¶
Tracer that logs via the input Logger.
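Example (a minimal sketch of routing trace events to Python's logging module; the logger name and level are assumptions):
import logging

from langchain.callbacks.tracers.logging import LoggingCallbackHandler

logging.basicConfig(level=logging.INFO)
handler = LoggingCallbackHandler(logging.getLogger("langchain"), log_level=logging.INFO)
# Pass via callbacks=[handler] to a chain, LLM, or agent invocation.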
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
name
raise_error
run_inline
Methods
__init__(logger[, log_level, extra])
get_breadcrumbs(run)
get_parents(run)
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, inputs])
End a trace for a chain run.
on_chain_error(error, *[, inputs])
Handle an error for a chain run.
on_chain_start(serialized, inputs, *, run_id)
Start a trace for a chain run.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, *, run_id, **kwargs)
End a trace for an LLM run.
on_llm_error(error, *, run_id, **kwargs)
Handle an error for an LLM run.
on_llm_new_token(token, *[, chunk, ...])
Run on new LLM token.
on_llm_start(serialized, prompts, *, run_id)
Start a trace for an LLM run.
on_retriever_end(documents, *, run_id, **kwargs)
Run when Retriever ends running.
on_retriever_error(error, *, run_id, **kwargs)
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id, **kwargs)
Run on a retry event.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id, **kwargs)
End a trace for a tool run.
on_tool_error(error, *, run_id, **kwargs)
Handle an error for a tool run.
on_tool_start(serialized, input_str, *, run_id)
Start a trace for a tool run.
__init__(logger: Logger, log_level: int = 20, extra: Optional[dict] = None, **kwargs: Any) → None[source]¶
get_breadcrumbs(run: Run) → str¶
get_parents(run: Run) → List[Run]¶
on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent action.
on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent end.
on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, inputs: Optional[Dict[str, Any]] = None, **kwargs: Any) → Run¶
End a trace for a chain run.
on_chain_error(error: BaseException, *, inputs: Optional[Dict[str, Any]] = None, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for a chain run.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, run_type: Optional[str] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for a chain run.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, *, run_id: UUID, **kwargs: Any) → Run¶
End a trace for an LLM run.
on_llm_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for an LLM run.
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Run¶
Run on new LLM token. Only available when streaming is enabled.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for an LLM run.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, **kwargs: Any) → Run¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, **kwargs: Any) → Run¶
Run on a retry event.
on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None[source]¶
Run on arbitrary text.
on_tool_end(output: str, *, run_id: UUID, **kwargs: Any) → Run¶
End a trace for a tool run.
on_tool_error(error: BaseException, *, run_id: UUID, **kwargs: Any) → Run¶
Handle an error for a tool run.
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, tags: Optional[List[str]] = None, parent_run_id: Optional[UUID] = None, metadata: Optional[Dict[str, Any]] = None, name: Optional[str] = None, **kwargs: Any) → Run¶
Start a trace for a tool run.
langchain_community.callbacks.streamlit.streamlit_callback_handler.LLMThought¶
class langchain_community.callbacks.streamlit.streamlit_callback_handler.LLMThought(parent_container: DeltaGenerator, labeler: LLMThoughtLabeler, expanded: bool, collapse_on_complete: bool)[source]¶
A thought in the LLM’s thought stream.
Initialize the LLMThought.
Parameters
parent_container – The container we’re writing into.
labeler – The labeler to use for this thought.
expanded – Whether the thought should be expanded by default.
collapse_on_complete – Whether the thought should be collapsed once it completes.
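Example (a sketch of constructing a thought inside a Streamlit script, assuming streamlit is installed and that LLMThoughtLabeler is importable from the same module; normally StreamlitCallbackHandler creates and manages these objects itself):
import streamlit as st

from langchain_community.callbacks.streamlit.streamlit_callback_handler import (
    LLMThought,
    LLMThoughtLabeler,
)

thought = LLMThought(
    parent_container=st.container(),
    labeler=LLMThoughtLabeler(),
    expanded=True,
    collapse_on_complete=True,
)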
Attributes
container
The container we're writing into.
last_tool
The last tool executed by this thought
Methods
__init__(parent_container, labeler, ...)
Initialize the LLMThought.
clear()
Remove the thought from the screen.
complete([final_label])
Finish the thought.
on_agent_action(action[, color])
on_llm_end(response, **kwargs)
on_llm_error(error, **kwargs)
on_llm_new_token(token, **kwargs)
on_llm_start(serialized, prompts)
on_tool_end(output[, color, ...])
on_tool_error(error, **kwargs)
on_tool_start(serialized, input_str, **kwargs)
__init__(parent_container: DeltaGenerator, labeler: LLMThoughtLabeler, expanded: bool, collapse_on_complete: bool)[source]¶
Initialize the LLMThought.
Parameters
parent_container – The container we’re writing into.
labeler – The labeler to use for this thought.
expanded – Whether the thought should be expanded by default.
collapse_on_complete – Whether the thought should be collapsed once it completes.
clear() → None[source]¶
Remove the thought from the screen. A cleared thought can’t be reused.
complete(final_label: Optional[str] = None) → None[source]¶
Finish the thought.
on_agent_action(action: AgentAction, color: Optional[str] = None, **kwargs: Any) → Any[source]¶
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
on_llm_start(serialized: Dict[str, Any], prompts: List[str]) → None[source]¶
on_tool_end(output: str, color: Optional[str] = None, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶
on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
langchain_core.callbacks.manager.ParentRunManager¶
class langchain_core.callbacks.manager.ParentRunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Sync Parent Run Manager.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_child([tag])
Get a child callback manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_retry(retry_state, **kwargs)
Run on a retry event.
on_text(text, **kwargs)
Run when text is received.
__init__(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
get_child(tag: Optional[str] = None) → CallbackManager[source]¶
Get a child callback manager.
Parameters
tag (str, optional) – The tag for the child callback manager.
Defaults to None.
Returns
The child callback manager.
Return type
CallbackManager
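Example (a small sketch with no handlers, purely to show the parent-child wiring; real managers are created for you when a chain run starts):
from uuid import uuid4

from langchain_core.callbacks.manager import ParentRunManager

parent = ParentRunManager(run_id=uuid4(), handlers=[], inheritable_handlers=[])
child = parent.get_child(tag="step-1")  # CallbackManager parented to this run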
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
on_retry(retry_state: RetryCallState, **kwargs: Any) → None¶
Run on a retry event.
on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
langchain_community.callbacks.whylabs_callback.import_langkit¶
langchain_community.callbacks.whylabs_callback.import_langkit(sentiment: bool = False, toxicity: bool = False, themes: bool = False) → Any[source]¶
Import the langkit python package and raise an error if it is not installed.
Parameters
sentiment – Whether to import the langkit.sentiment module. Defaults to False.
toxicity – Whether to import the langkit.toxicity module. Defaults to False.
themes – Whether to import the langkit.themes module. Defaults to False.
Returns
The imported langkit module.
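Example (a short usage sketch; which submodules you enable depends on the metrics you want):
from langchain_community.callbacks.whylabs_callback import import_langkit

# Raises an error if langkit is not installed.
langkit = import_langkit(sentiment=True, toxicity=True)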
langchain_community.callbacks.mlflow_callback.analyze_text¶
langchain_community.callbacks.mlflow_callback.analyze_text(text: str, nlp: Any = None) → dict[source]¶
Analyze text using textstat and spacy.
Parameters
text (str) – The text to analyze.
nlp (spacy.lang) – The spacy language model to use for visualization.
Returns
A dictionary containing the complexity metrics and visualization files serialized to an HTML string.
Return type
(dict)
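Example (a small sketch; passing no nlp model skips the spacy visualizations, and the exact metric keys in the returned dict come from textstat, so inspect them rather than assuming names):
from langchain_community.callbacks.mlflow_callback import analyze_text

metrics = analyze_text("LangChain callbacks make tracing runs straightforward.")
print(sorted(metrics))  # list the computed complexity metrics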
langchain_core.callbacks.base.AsyncCallbackHandler¶
class langchain_core.callbacks.base.AsyncCallbackHandler[source]¶
Async callback handler that handles callbacks from LangChain.
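Example (a minimal custom subclass as a sketch: override only the hooks you need; the token-printing body is hypothetical):
from typing import Any

from langchain_core.callbacks.base import AsyncCallbackHandler

class TokenPrinter(AsyncCallbackHandler):
    # Print each streamed LLM token as it arrives.
    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        print(token, end="", flush=True)

# Pass TokenPrinter() via callbacks=[...] to an async, streaming LLM call.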
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__()
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, ...])
Run when chain ends running.
on_chain_error(error, *, run_id[, ...])
Run when chain errors.
on_chain_start(serialized, inputs, *, run_id)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, *, run_id[, ...])
Run when LLM ends running.
on_llm_error(error, *, run_id[, ...])
Run when LLM errors.
on_llm_new_token(token, *[, chunk, ...])
Run on new LLM token.
on_llm_start(serialized, prompts, *, run_id)
Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...])
Run on retriever end.
on_retriever_error(error, *, run_id[, ...])
Run on retriever error.
on_retriever_start(serialized, query, *, run_id)
Run on retriever start.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, *, run_id[, parent_run_id, tags])
Run on arbitrary text.
on_tool_end(output, *, run_id[, ...])
Run when tool ends running.
on_tool_error(error, *, run_id[, ...])
Run when tool errors.
on_tool_start(serialized, input_str, *, run_id)
Run when tool starts running.
__init__()¶
async on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run on agent action.
async on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run on agent end.
async on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run when chain ends running.
async on_chain_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run when chain errors.
async on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶
Run when chain starts running.
async on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any[source]¶
Run when a chat model starts running.
async on_llm_end(response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run when LLM ends running.
async on_llm_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run when LLM errors.
Parameters
error (BaseException) – The error that occurred.
kwargs (Any) – Additional keyword arguments, including:
response (LLMResult) – The response which was generated before the error occurred.
async on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run on new LLM token. Only available when streaming is enabled.
async on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶
Run when LLM starts running.
async on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run on retriever end.
async on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run on retriever error.
async on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶
Run on retriever start.
async on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run on a retry event.
async on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run on arbitrary text.
async on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run when tool ends running.
async on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None[source]¶
Run when tool errors.
async on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None[source]¶
Run when tool starts running.
Examples using AsyncCallbackHandler¶
Async callbacks
langchain_community.callbacks.trubrics_callback.TrubricsCallbackHandler¶
class langchain_community.callbacks.trubrics_callback.TrubricsCallbackHandler(project: str = 'default', email: Optional[str] = None, password: Optional[str] = None, **kwargs: Any)[source]¶
Callback handler for Trubrics.
Parameters
project – a Trubrics project; the default project is "default"
email – a Trubrics account email; can also be set as an environment variable
password – a Trubrics account password; can also be set as an environment variable
**kwargs – all other kwargs are parsed and set as Trubrics prompt variables, or added to the metadata dict
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([project, email, password])
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, parent_run_id])
Run when chain ends running.
on_chain_error(error, *, run_id[, parent_run_id])
Run when chain errors.
on_chain_start(serialized, inputs, *, run_id)
Run when chain starts running.
on_chat_model_start(serialized, messages, ...)
Run when a chat model starts running.
on_llm_end(response, run_id, **kwargs)
Run when LLM ends running.
on_llm_error(error, *, run_id[, parent_run_id])
Run when LLM errors.
on_llm_new_token(token, *[, chunk, ...])
Run on new LLM token.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id[, parent_run_id])
Run when tool ends running.
on_tool_error(error, *, run_id[, parent_run_id])
Run when tool errors.
on_tool_start(serialized, input_str, *, run_id)
Run when tool starts running.
__init__(project: str = 'default', email: Optional[str] = None, password: Optional[str] = None, **kwargs: Any) → None[source]¶
on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent action.
on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent end.
on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when chain ends running.
on_chain_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when chain errors.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when chain starts running.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any) → None[source]¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, run_id: UUID, **kwargs: Any) → None[source]¶
Run when LLM ends running.
on_llm_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when LLM errors.
Parameters
error (BaseException) – The error that occurred.
kwargs (Any) – Additional keyword arguments, including:
response (LLMResult) – The response which was generated before the error occurred.
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on new LLM token. Only available when streaming is enabled.
Parameters
token (str) – The new token.
chunk (GenerationChunk | ChatGenerationChunk) – The newly generated chunk, containing content and other information.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Run when LLM starts running.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on arbitrary text.
on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when tool ends running.
on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when tool errors.
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when tool starts running.
langchain_community.callbacks.streamlit.mutable_expander.MutableExpander¶
class langchain_community.callbacks.streamlit.mutable_expander.MutableExpander(parent_container: DeltaGenerator, label: str, expanded: bool)[source]¶
A Streamlit expander that can be renamed and dynamically expanded/collapsed.
Create a new MutableExpander.
Parameters
parent_container – The st.container that the expander will be created inside.
The expander transparently deletes and recreates its underlying
st.expander instance when its label changes, and it uses
parent_container to ensure it recreates this underlying expander in the
same location onscreen.
label – The expander’s initial label.
expanded – The expander’s initial expanded value.
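Example (a sketch of driving the expander by hand inside a Streamlit script; the labels and body text are illustrative):
import streamlit as st

from langchain_community.callbacks.streamlit.mutable_expander import MutableExpander

expander = MutableExpander(parent_container=st.container(), label="Thinking...", expanded=True)
idx = expander.markdown("Step 1: looking up documents")
expander.update(new_label="Done", new_expanded=False)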
Attributes
expanded
True if the expander was created with expanded=True.
label
The expander's label string.
Methods
__init__(parent_container, label, expanded)
Create a new MutableExpander.
append_copy(other)
Append a copy of another MutableExpander's children to this MutableExpander.
clear()
Remove the container and its contents entirely.
exception(exception, *[, index])
Add an Exception element to the container and return its index.
markdown(body[, unsafe_allow_html, help, index])
Add a Markdown element to the container and return its index.
update(*[, new_label, new_expanded])
Change the expander's label and expanded state
__init__(parent_container: DeltaGenerator, label: str, expanded: bool)[source]¶
Create a new MutableExpander.
Parameters
parent_container – The st.container that the expander will be created inside.
The expander transparently deletes and recreates its underlying
st.expander instance when its label changes, and it uses
parent_container to ensure it recreates this underlying expander in the
same location onscreen.
label – The expander’s initial label.
expanded – The expander’s initial expanded value.
append_copy(other: MutableExpander) → None[source]¶
Append a copy of another MutableExpander’s children to this
MutableExpander.
clear() → None[source]¶
Remove the container and its contents entirely. A cleared container can’t
be reused.
exception(exception: BaseException, *, index: Optional[int] = None) → int[source]¶
Add an Exception element to the container and return its index.
markdown(body: SupportsStr, unsafe_allow_html: bool = False, *, help: Optional[str] = None, index: Optional[int] = None) → int[source]¶
Add a Markdown element to the container and return its index.
update(*, new_label: Optional[str] = None, new_expanded: Optional[bool] = None) → None[source]¶
Change the expander’s label and expanded state
langchain_community.callbacks.streamlit.mutable_expander.ChildRecord¶
class langchain_community.callbacks.streamlit.mutable_expander.ChildRecord(type: ChildType, kwargs: Dict[str, Any], dg: DeltaGenerator)[source]¶
The child record as a NamedTuple.
Create new instance of ChildRecord(type, kwargs, dg)
Attributes
dg
Alias for field number 2
kwargs
Alias for field number 1
type
Alias for field number 0
Methods
__init__()
count(value, /)
Return number of occurrences of value.
index(value[, start, stop])
Return first index of value.
__init__()¶
count(value, /)¶
Return number of occurrences of value.
index(value, start=0, stop=9223372036854775807, /)¶
Return first index of value.
Raises ValueError if the value is not present.
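Example (since ChildRecord is a plain NamedTuple, construction is direct; ChildType.MARKDOWN and the field values below are illustrative assumptions about how MutableExpander records its children):
from langchain_community.callbacks.streamlit.mutable_expander import ChildRecord, ChildType

# dg would normally be the DeltaGenerator returned by the Streamlit call.
record = ChildRecord(type=ChildType.MARKDOWN, kwargs={"body": "hello"}, dg=None)
print(record.type, record.kwargs)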
langchain_community.callbacks.clearml_callback.ClearMLCallbackHandler¶
class langchain_community.callbacks.clearml_callback.ClearMLCallbackHandler(task_type: Optional[str] = 'inference', project_name: Optional[str] = 'langchain_callback_demo', tags: Optional[Sequence] = None, task_name: Optional[str] = None, visualize: bool = False, complexity_metrics: bool = False, stream_logs: bool = False)[source]¶
Callback Handler that logs to ClearML.
Parameters
task_type (str) – The type of ClearML task, such as "inference", "testing" or "qc"
project_name (str) – The clearml project name
tags (list) – Tags to add to the task
task_name (str) – Name of the clearml task
visualize (bool) – Whether to visualize the run.
complexity_metrics (bool) – Whether to log complexity metrics
stream_logs (bool) – Whether to stream callback actions to ClearML
This handler utilizes the associated callback method, formats the input of
each callback function with metadata regarding the state of the LLM run,
and adds the response to the list of records for both the {method}_records and
action. It then logs the response to the ClearML console.
Initialize callback handler.
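Example (a minimal setup sketch; the project name mirrors the default above, and complexity metrics additionally require textstat, plus spacy if visualize=True):
from langchain_community.callbacks.clearml_callback import ClearMLCallbackHandler

handler = ClearMLCallbackHandler(
    task_type="inference",
    project_name="langchain_callback_demo",
    tags=["demo"],
    complexity_metrics=True,
    stream_logs=True,
)
# Pass via callbacks=[handler]; call handler.flush_tracker() when finished.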
Attributes
always_verbose
Whether to call verbose callbacks even if verbose is False.
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([task_type, project_name, tags, ...])
Initialize callback handler.
analyze_text(text)
Analyze text using textstat and spacy.
flush_tracker([name, langchain_asset, finish])
Flush the tracker and setup the session.
get_custom_callback_meta()
on_agent_action(action, **kwargs)
Run on agent action.
on_agent_finish(finish, **kwargs)
Run when agent ends running.
on_chain_end(outputs, **kwargs)
Run when chain ends running.
on_chain_error(error, **kwargs)
Run when chain errors.
on_chain_start(serialized, inputs, **kwargs)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Run when LLM errors.
on_llm_new_token(token, **kwargs)
Run when LLM generates a new token.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, **kwargs)
Run when agent is ending.
on_tool_end(output, **kwargs)
Run when tool ends running.
on_tool_error(error, **kwargs)
Run when tool errors.
on_tool_start(serialized, input_str, **kwargs)
Run when tool starts running.
reset_callback_meta()
Reset the callback metadata.
__init__(task_type: Optional[str] = 'inference', project_name: Optional[str] = 'langchain_callback_demo', tags: Optional[Sequence] = None, task_name: Optional[str] = None, visualize: bool = False, complexity_metrics: bool = False, stream_logs: bool = False) → None[source]¶
Initialize callback handler.
analyze_text(text: str) → dict[source]¶
Analyze text using textstat and spacy.
Parameters
text (str) – The text to analyze.
Returns
A dictionary containing the complexity metrics.
Return type
(dict)
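A hedged sketch, reusing the handler from the example above (the exact metric keys depend on the installed textstat version; “flesch_reading_ease” is an assumed example key):
metrics = clearml_callback.analyze_text(
    "The quick brown fox jumps over the lazy dog."
)
# A dict of complexity metrics, e.g. an assumed textstat key:
print(metrics.get("flesch_reading_ease"))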
flush_tracker(name: Optional[str] = None, langchain_asset: Any = None, finish: bool = False) → None[source]¶
Flush the tracker and setup the session.
Everything after this will be a new table.
Parameters
name – Name of the session performed so far, so that it is identifiable
langchain_asset – The langchain asset to save.
finish – Whether to finish the run.
Returns – None
get_custom_callback_meta() → Dict[str, Any]¶
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Run on agent action.
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶
Run when agent ends running.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain ends running.
on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when chain errors.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain starts running.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when LLM errors.
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Run when LLM generates a new token.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Run when LLM starts.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
on_text(text: str, **kwargs: Any) → None[source]¶
Run when agent is ending.
on_tool_end(output: str, **kwargs: Any) → None[source]¶
Run when tool ends running.
on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when tool errors.
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Run when tool starts running.
reset_callback_meta() → None¶
Reset the callback metadata.
Examples using ClearMLCallbackHandler¶
ClearML
langchain_experimental.pal_chain.base.PALChain¶
class langchain_experimental.pal_chain.base.PALChain[source]¶
Bases: Chain
Implements Program-Aided Language Models (PAL).
This class implements the Program-Aided Language Models (PAL) for generating code
solutions. PAL is a technique described in the paper “Program-Aided Language Models”
(https://arxiv.org/pdf/2211.10435.pdf).
Security note: This class implements an AI technique that generates and evaluates Python code, which can be dangerous and requires a specially sandboxed
environment to be safely used. While this class implements some basic guardrails
by limiting available locals/globals and by parsing and inspecting
the generated Python AST using PALValidation, those guardrails will not
deter sophisticated attackers and are not a replacement for a proper sandbox.
Do not use this class on untrusted inputs, with elevated permissions,
or without consulting your security team about proper sandboxing!
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param code_validations: PALValidation [Optional]¶
Validations to perform on the generated code.
param get_answer_expr: str = 'print(solution())'¶
Expression to use to get the answer from the generated code.
param llm_chain: LLMChain [Required]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param metadata: Optional[Dict[str, Any]] = None¶
Optional metadata associated with the chain. Defaults to None.
This metadata will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param python_globals: Optional[Dict[str, Any]] = None¶
Python globals to use when executing the generated code.
param python_locals: Optional[Dict[str, Any]] = None¶
Python locals to use when executing the generated code.
param return_intermediate_steps: bool = False¶
Whether to return intermediate steps in the generated code.
param stop: str = '\n\n'¶
Stop token to use when generating code.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param timeout: Optional[int] = 10¶
Timeout in seconds for the generated code to execute.
param verbose: bool [Optional]¶
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the global verbose value,
accessible via langchain.globals.get_verbose().
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶
Execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, run_name: Optional[str] = None, include_run_info: bool = False) → Dict[str, Any]¶
Asynchronously execute the chain.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
return_only_outputs – Whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
metadata – Optional metadata associated with the chain. Defaults to None
include_run_info – Whether to include run info in the response. Defaults
to False.
Returns
A dict of named outputs. Should contain all outputs specified in Chain.output_keys.
async ainvoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
await chain.arun("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
await chain.arun(question=question, context=context)
# -> "The temperature in Boise is..."
assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶
Assigns new fields to the dict output of this runnable.
Returns a new runnable.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input – The input to the runnable.
config – The config to use for the runnable.
diff – Whether to yield diffs between each step, or the current state.
with_streamed_output_list – Whether to yield the streamed_output list.
include_names – Only include logs with these names.
include_types – Only include logs with these types.
include_tags – Only include logs with these tags.
exclude_names – Exclude logs with these names.
exclude_types – Exclude logs with these types.
exclude_tags – Exclude logs with these tags.
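For example, a hedged sketch of consuming the patch stream with the default diff=True (chain stands for any runnable, such as a PALChain instance):
import asyncio

async def main() -> None:
    # Each item is a RunLogPatch whose .ops list holds jsonpatch
    # operations describing how the run state changed.
    async for patch in chain.astream_log("some input"):
        for op in patch.ops:
            print(op["op"], op["path"])

asyncio.run(main())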
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Dictionary representation of chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
**kwargs – Keyword arguments passed to default pydantic.BaseModel.dict
method.
Returns
A dictionary representation of the chain.
Example
chain.dict(exclude_unset=True)
# -> {"_type": "foo", "verbose": False, ...}
classmethod from_colored_object_prompt(llm: BaseLanguageModel, **kwargs: Any) → PALChain[source]¶
Load PAL from colored object prompt.
Parameters
llm (BaseLanguageModel) – The language model to use for generating code.
Returns
An instance of PALChain.
Return type
PALChain
classmethod from_math_prompt(llm: BaseLanguageModel, **kwargs: Any) → PALChain[source]¶
Load PAL from math prompt.
Parameters
llm (BaseLanguageModel) – The language model to use for generating code.
Returns
An instance of PALChain.
Return type
PALChain
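Example
A minimal, hedged sketch (any BaseLanguageModel works; the OpenAI import is illustrative, and the security note above applies to executing the generated code):
from langchain.llms import OpenAI
from langchain_experimental.pal_chain import PALChain

llm = OpenAI(temperature=0)
pal_chain = PALChain.from_math_prompt(llm, verbose=True)

question = (
    "The cafeteria had 23 apples. They used 20 for lunch "
    "and bought 6 more. How many apples do they have?"
)
# The LLM writes a Python solution() which is executed for the answer.
pal_chain.run(question)
# -> '9'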
classmethod from_orm(obj: Any) → Model¶
get_graph(config: Optional[RunnableConfig] = None) → Graph¶
Return a graph representation of this runnable.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶
Get the name of the runnable.
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶
invoke(input: Dict[str, Any], config: Optional[RunnableConfig] = None, **kwargs: Any) → Dict[str, Any]¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶
Pick keys from the dict output of this runnable.
Returns a new runnable.
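A hedged sketch (RunnablePassthrough serves as a convenient dict-shaped runnable; picking a list of keys is assumed to return the matching sub-dict):
from langchain_core.runnables import RunnablePassthrough

# Keep only the "x" key of the dict output.
chain = RunnablePassthrough().pick(["x"])
chain.invoke({"x": 1, "y": 2})
# -> {"x": 1}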
pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶
Compose this runnable with another object to create a RunnableSequence.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs, including adding inputs from memory.
Parameters
inputs – Dictionary of raw inputs, or single input if chain expects
only one param. Should contain all inputs specified in
Chain.input_keys except for inputs that will be set by the chain’s
memory.
Returns
A dictionary of all inputs, including those added by the chain’s memory.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs, and save info about this run to memory.
Parameters
inputs – Dictionary of chain inputs, including any inputs added by chain
memory.
outputs – Dictionary of initial chain outputs.
return_only_outputs – Whether to only return the chain outputs. If False,
inputs are also added to the final outputs.
Returns
A dict of the final chain outputs.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Convenience method for executing chain.
The main difference between this method and Chain.__call__ is that this
method expects inputs to be passed directly in as positional arguments or
keyword arguments, whereas Chain.__call__ expects a single input dictionary
with all the inputs.
Parameters
*args – If the chain expects a single input, it can be passed in as the
sole positional argument.
callbacks – Callbacks to use for this chain run. These will be called in
addition to callbacks passed to the chain during construction, but only
these runtime callbacks will propagate to calls to other objects.
tags – List of string tags to pass to all callbacks. These will be passed in
addition to tags passed to the chain during construction, but only
these runtime tags will propagate to calls to other objects.
**kwargs – If the chain expects multiple inputs, they can be passed in
directly as keyword arguments.
Returns
The chain output.
Example
# Suppose we have a single-input chain that takes a 'question' string:
chain.run("What's the temperature in Boise, Idaho?")
# -> "The temperature in Boise is..."
# Suppose we have a multi-input chain that takes a 'question' string
# and 'context' string:
question = "What's the temperature in Boise, Idaho?"
context = "Weather report for Boise, Idaho on 07/03/23..."
chain.run(question=question, context=context)
# -> "The temperature in Boise is..."
save(file_path: Union[Path, str]) → None¶
Save the chain.
Expects Chain._chain_type property to be implemented and for memory to be null.
Parameters
file_path – Path to file to save the chain to.
Example
chain.save(file_path="path/chain.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
classmethod validate_code(code: str, code_validations: PALValidation) → None[source]¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
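For instance, a hedged sketch wrapping a flaky function in a RunnableLambda:
from langchain_core.runnables import RunnableLambda

attempts = {"n": 0}

def flaky(x: str) -> str:
    # Fail twice, then succeed, to exercise the retry policy.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ValueError("transient failure")
    return f"ok: {x}"

retrying = RunnableLambda(flaky).with_retry(
    retry_if_exception_type=(ValueError,),
    stop_after_attempt=3,
)
retrying.invoke("hello")
# -> 'ok: hello'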
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain_core.runnables.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain_core.runnables.utils.Output]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example,{“openai_api_key”: “OPENAI_API_KEY”}
name: Optional[str] = None¶
The name of the runnable. Used for debugging and tracing.
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
langchain_experimental.pal_chain.base.PALValidation¶
class langchain_experimental.pal_chain.base.PALValidation(solution_expression_name: Optional[str] = None, solution_expression_type: Optional[type] = None, allow_imports: bool = False, allow_command_exec: bool = False)[source]¶
Initialize a PALValidation instance.
Parameters
solution_expression_name (str) – Name of the expected solution expression.
If passed, solution_expression_type must be passed as well.
solution_expression_type (type) – AST type of the expected solution
expression. If passed, solution_expression_name must be passed as well.
Must be one of PALValidation.SOLUTION_EXPRESSION_TYPE_FUNCTION,
PALValidation.SOLUTION_EXPRESSION_TYPE_VARIABLE.
allow_imports (bool) – Allow import statements.
allow_command_exec (bool) – Allow using known command execution functions.
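A hedged sketch of pairing PALValidation with PALChain.validate_code (the SOLUTION_EXPRESSION_TYPE_FUNCTION constant is taken from the parameter description above):
from langchain_experimental.pal_chain import PALChain
from langchain_experimental.pal_chain.base import PALValidation

validation = PALValidation(
    solution_expression_name="solution",
    solution_expression_type=PALValidation.SOLUTION_EXPRESSION_TYPE_FUNCTION,
    allow_imports=False,
    allow_command_exec=False,
)
# Parse and inspect generated code against these rules before executing it.
PALChain.validate_code("def solution():\n    return 1 + 1", validation)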
Methods
__init__([solution_expression_name, ...])
Initialize a PALValidation instance.
__init__(solution_expression_name: Optional[str] = None, solution_expression_type: Optional[type] = None, allow_imports: bool = False, allow_command_exec: bool = False)[source]¶
Initialize a PALValidation instance.
Parameters
solution_expression_name (str) – Name of the expected solution expression.
If passed, solution_expression_type must be passed as well.
solution_expression_type (type) – AST type of the expected solution
expression. If passed, solution_expression_name must be passed as well.
Must be one of PALValidation.SOLUTION_EXPRESSION_TYPE_FUNCTION,
PALValidation.SOLUTION_EXPRESSION_TYPE_VARIABLE.
allow_imports (bool) – Allow import statements.
allow_command_exec (bool) – Allow using known command execution functions.
langchain_core.beta.runnables.context.config_with_context¶
langchain_core.beta.runnables.context.config_with_context(config: RunnableConfig, steps: List[Runnable]) → RunnableConfig[source]¶
Patch a runnable config with context getters and setters.
Parameters
config – The runnable config.
steps – The runnable steps.
Returns
The patched runnable config.
langchain_core.beta.runnables.context.aconfig_with_context¶
langchain_core.beta.runnables.context.aconfig_with_context(config: RunnableConfig, steps: List[Runnable]) → RunnableConfig[source]¶
Asynchronously patch a runnable config with context getters and setters.
Parameters
config – The runnable config.
steps – The runnable steps.
Returns
The patched runnable config.
langchain_core.beta.runnables.context.PrefixContext¶
class langchain_core.beta.runnables.context.PrefixContext(prefix: str = '')[source]¶
Context for a runnable with a prefix.
Attributes
prefix
Methods
__init__([prefix])
getter(key, /)
setter([_key, _value])
__init__(prefix: str = '')[source]¶
getter(key: Union[str, List[str]], /) → ContextGet[source]¶
setter(_key: Optional[str] = None, _value: Optional[Union[Runnable[Input, Output], Callable[[Input], Output], Callable[[Input], Awaitable[Output]], Any]] = None, /, **kwargs: Union[Runnable[Input, Output], Callable[[Input], Output], Callable[[Input], Awaitable[Output]], Any]) → ContextSet[source]¶
langchain_core.beta.runnables.context.ContextSet¶
class langchain_core.beta.runnables.context.ContextSet[source]¶
Bases: RunnableSerializable
Set a context value.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param keys: Mapping[str, Optional[langchain_core.runnables.base.Runnable]] [Required]¶
param prefix: str = ''¶
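A hedged end-to-end sketch (the Context helper with getter/setter factories is assumed from this beta module; a plain RunnableSequence patches its config for context automatically):
from langchain_core.beta.runnables.context import Context
from langchain_core.runnables import RunnableLambda

chain = (
    Context.setter("input")                # ContextSet: remember the raw input
    | RunnableLambda(lambda s: s.upper())  # intermediate transformation
    | Context.getter("input")              # ContextGet: read the stored value
)
chain.invoke("hello")
# -> 'hello' (the stored value, not 'HELLO')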
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Any, config: Optional[RunnableConfig] = None, **kwargs: Any) → Any[source]¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶
Assigns new fields to the dict output of this runnable.
Returns a new runnable.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input – The input to the runnable.
config – The config to use for the runnable.
diff – Whether to yield diffs between each step, or the current state.
with_streamed_output_list – Whether to yield the streamed_output list.
include_names – Only include logs with these names.
include_types – Only include logs with these types.
include_tags – Only include logs with these tags.
exclude_names – Exclude logs with these names.
exclude_types – Exclude logs with these types.
exclude_tags – Exclude logs with these tags.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_graph(config: Optional[RunnableConfig] = None) → Graph¶
Return a graph representation of this runnable.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶
Get the name of the runnable.
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶
invoke(input: Any, config: Optional[RunnableConfig] = None) → Any[source]¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶
Pick keys from the dict output of this runnable.
Returns a new runnable.
pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶
Compose this runnable with another object to create a RunnableSequence.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
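As a brief illustration, a hedged sketch with RunnableLambda stand-ins:
from langchain_core.runnables import RunnableLambda

def primary(x: str) -> str:
    raise RuntimeError("primary is down")  # simulate a failing runnable

def backup(x: str) -> str:
    return f"backup handled: {x}"

safe = RunnableLambda(primary).with_fallbacks(
    [RunnableLambda(backup)],
    exceptions_to_handle=(RuntimeError,),
)
safe.invoke("ping")
# -> 'backup handled: ping'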
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain_core.runnables.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain_core.runnables.utils.Output]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property ids: List[str]¶
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example,{“openai_api_key”: “OPENAI_API_KEY”}
name: Optional[str] = None¶
The name of the runnable. Used for debugging and tracing.
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
langchain_core.beta.runnables.context.ContextGet¶
class langchain_core.beta.runnables.context.ContextGet[source]¶
Bases: RunnableSerializable
Get a context value.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param key: Union[str, List[str]] [Required]¶
param prefix: str = ''¶
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Any, config: Optional[RunnableConfig] = None, **kwargs: Any) → Any[source]¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶
Assigns new fields to the dict output of this runnable.
Returns a new runnable.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input – The input to the runnable.
config – The config to use for the runnable.
diff – Whether to yield diffs between each step, or the current state.
with_streamed_output_list – Whether to yield the streamed_output list.
include_names – Only include logs with these names.
include_types – Only include logs with these types.
include_tags – Only include logs with these tags.
exclude_names – Exclude logs with these names.
exclude_types – Exclude logs with these types.
exclude_tags – Exclude logs with these tags.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
get_graph(config: Optional[RunnableConfig] = None) → Graph¶
Return a graph representation of this runnable.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶
Get the name of the runnable.
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶
invoke(input: Any, config: Optional[RunnableConfig] = None) → Any[source]¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
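A minimal sketch showing config keys attached to a single run (illustrative names):

    from langchain_core.runnables import RunnableLambda

    shout = RunnableLambda(lambda s: s.upper())
    # tags and metadata are recorded on the trace for this run
    shout.invoke("hello", config={"tags": ["demo"], "metadata": {"user": "alice"}})
    # -> 'HELLO'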
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶
Pick keys from the dict output of this runnable.
Returns a new runnable.
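For example, a sketch with a runnable that emits a dict; a single key returns the bare value, a list of keys returns a sub-dict:

    from langchain_core.runnables import RunnableLambda

    stats = RunnableLambda(lambda x: {"square": x ** 2, "cube": x ** 3})

    stats.pick("square").invoke(3)            # -> 9
    stats.pick(["square", "cube"]).invoke(3)  # -> {'square': 9, 'cube': 27}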
pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶
Compose this runnable with another object to create a RunnableSequence.
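A minimal sketch; pipe is the method form of the | operator:

    from langchain_core.runnables import RunnableLambda

    add_one = RunnableLambda(lambda x: x + 1)
    double = RunnableLambda(lambda x: x * 2)

    seq = add_one.pipe(double)  # equivalent to add_one | double
    seq.invoke(3)               # -> 8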
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
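A sketch assuming a streaming-capable chat model from the optional langchain-openai package; runnables without native streaming yield a single chunk containing the full invoke() result:

    from langchain_openai import ChatOpenAI  # assumption: optional package installed

    llm = ChatOpenAI()  # any chat model with token-level streaming
    for chunk in llm.stream("Tell me a one-line joke"):
        print(chunk.content, end="", flush=True)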
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
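A minimal sketch (the functions are illustrative, not part of this API):

    from langchain_core.runnables import RunnableLambda

    def primary(x: str) -> str:
        raise ValueError("primary runnable is down")

    backup = RunnableLambda(lambda x: f"fallback handled: {x}")

    safe = RunnableLambda(primary).with_fallbacks([backup])
    safe.invoke("ping")  # -> 'fallback handled: ping'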
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
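A minimal sketch; the listeners receive the Run object described above:

    from langchain_core.runnables import RunnableLambda

    def log_start(run):
        print(f"run {run.id} started with inputs {run.inputs}")

    def log_end(run):
        print(f"run {run.id} finished with outputs {run.outputs}")

    chain = RunnableLambda(lambda x: x * 2).with_listeners(
        on_start=log_start, on_end=log_end
    )
    chain.invoke(5)  # prints a start line and an end line around the run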
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
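A minimal sketch (the mutable counter exists only to demonstrate the retries):

    from langchain_core.runnables import RunnableLambda

    calls = {"n": 0}

    def flaky(x):
        calls["n"] += 1
        if calls["n"] < 3:
            raise ValueError("transient failure")
        return x

    resilient = RunnableLambda(flaky).with_retry(
        retry_if_exception_type=(ValueError,),
        stop_after_attempt=3,
    )
    resilient.invoke("ok")  # fails twice, then succeeds on the third attempt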
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain_core.runnables.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Type[langchain_core.runnables.utils.Output]¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property ids: List[str]¶
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
name: Optional[str] = None¶
The name of the runnable. Used for debugging and tracing.
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
langchain_core.beta.runnables.context.Context¶
class langchain_core.beta.runnables.context.Context[source]¶
Context for a runnable.
Methods
__init__()
create_scope(scope, /)
Create a context scope.
getter(key, /)
setter([_key, _value])
__init__()¶
static create_scope(scope: str, /) → PrefixContext[source]¶
Create a context scope.
Parameters
scope – The scope.
Returns
The context scope.
static getter(key: Union[str, List[str]], /) → ContextGet[source]¶
static setter(_key: Optional[str] = None, _value: Optional[Union[Runnable[Input, Output], Callable[[Input], Output], Callable[[Input], Awaitable[Output]], Any]] = None, /, **kwargs: Union[Runnable[Input, Output], Callable[[Input], Output], Callable[[Input], Awaitable[Output]], Any]) → ContextSet[source]¶
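A minimal end-to-end sketch, assuming FakeListLLM from langchain_core.language_models as a dependency-free stand-in for a real model. Setters pass their input through while recording it, and a getter reads the recorded values back within the same run:

    from langchain_core.beta.runnables.context import Context
    from langchain_core.language_models import FakeListLLM  # assumption: demo model
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import PromptTemplate

    llm = FakeListLLM(responses=["hello"])
    prompt = PromptTemplate.from_template("{question}")

    chain = (
        Context.setter("input")                # record the raw chain input
        | prompt
        | llm
        | StrOutputParser()
        | Context.setter("result")             # record the parsed output
        | Context.getter(["input", "result"])  # return both recorded values
    )
    chain.invoke({"question": "What is 2 + 2?"})
    # -> {'input': {'question': 'What is 2 + 2?'}, 'result': 'hello'}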
langchain_experimental.generative_agents.memory.GenerativeAgentMemory¶
class langchain_experimental.generative_agents.memory.GenerativeAgentMemory[source]¶
Bases: BaseMemory
Memory for the generative agent.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param add_memory_key: str = 'add_memory'¶
param aggregate_importance: float = 0.0¶
Track the sum of the ‘importance’ of recent memories.
Triggers reflection when it reaches reflection_threshold.
param current_plan: List[str] = []¶
The current plan of the agent.
param importance_weight: float = 0.15¶
How much weight to assign the memory importance.
param llm: langchain_core.language_models.base.BaseLanguageModel [Required]¶
The core language model.
param max_tokens_limit: int = 1200¶
param memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever [Required]¶
The retriever to fetch related memories.
param most_recent_memories_key: str = 'most_recent_memories'¶
param most_recent_memories_token_key: str = 'recent_memories_token'¶
param now_key: str = 'now'¶
param queries_key: str = 'queries'¶
param reflecting: bool = False¶
param reflection_threshold: Optional[float] = None¶
When aggregate_importance exceeds reflection_threshold, stop to reflect.
param relevant_memories_key: str = 'relevant_memories'¶
param relevant_memories_simple_key: str = 'relevant_memories_simple'¶
param verbose: bool = False¶
add_memories(memory_content: str, now: Optional[datetime] = None) → List[str][source]¶
Add observations or memories to the agent’s memory.
add_memory(memory_content: str, now: Optional[datetime] = None) → List[str][source]¶
Add an observation or memory to the agent’s memory.
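A construction sketch in the style of the generative-agents tutorial; it assumes faiss-cpu, langchain, langchain-community, and langchain-openai are installed and an OpenAI key is configured:

    import faiss
    from langchain.retrievers import TimeWeightedVectorStoreRetriever
    from langchain_community.docstore import InMemoryDocstore
    from langchain_community.vectorstores import FAISS
    from langchain_experimental.generative_agents import GenerativeAgentMemory
    from langchain_openai import ChatOpenAI, OpenAIEmbeddings

    embeddings = OpenAIEmbeddings()
    index = faiss.IndexFlatL2(1536)  # dimensionality of the embedding model
    vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})
    retriever = TimeWeightedVectorStoreRetriever(
        vectorstore=vectorstore, other_score_keys=["importance"], k=15
    )

    memory = GenerativeAgentMemory(
        llm=ChatOpenAI(),
        memory_retriever=retriever,
        reflection_threshold=8.0,  # reflect once aggregate importance crosses this
    )
    memory.add_memory("Tommie remembers his dog, Bruno, from when he was a kid")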
chain(prompt: PromptTemplate) → LLMChain[source]¶
clear() → None[source]¶
Clear memory contents.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set, since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
fetch_memories(observation: str, now: Optional[datetime] = None) → List[Document][source]¶
Fetch related memories.
format_memories_detail(relevant_memories: List[Document]) → str[source]¶
format_memories_simple(relevant_memories: List[Document]) → str[source]¶
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]¶
Return key-value pairs given the text input to the chain.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
pause_to_reflect(now: Optional[datetime] = None) → List[str][source]¶
Reflect on recent observations and generate ‘insights’.
save_context(inputs: Dict[str, Any], outputs: Dict[str, Any]) → None[source]¶
Save the context of this model run to memory.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
property memory_variables: List[str]¶
Input keys this memory class will load dynamically.
langchain_experimental.generative_agents.generative_agent.GenerativeAgent¶
class langchain_experimental.generative_agents.generative_agent.GenerativeAgent[source]¶
Bases: BaseModel
An Agent as a character with memory and innate characteristics.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param age: Optional[int] = None¶
The optional age of the character.
param daily_summaries: List[str] [Optional]¶
Summaries of the events in the plans that the agent carried out.
param last_refreshed: datetime.datetime [Optional]¶
The last time the character’s summary was regenerated.
param llm: langchain_core.language_models.base.BaseLanguageModel [Required]¶
The underlying language model.
param memory: langchain_experimental.generative_agents.memory.GenerativeAgentMemory [Required]¶
The memory object that combines relevance, recency, and ‘importance’.
param name: str [Required]¶
The character’s name.
param status: str [Required]¶
The current status of the character (e.g., what they are currently doing).
param summary: str = ''¶
Stateful self-summary generated via reflection on the character’s memory.
param summary_refresh_seconds: int = 3600¶
How frequently to re-generate the summary.
param traits: str = 'N/A'¶
Permanent traits to ascribe to the character.
param verbose: bool = False¶
chain(prompt: PromptTemplate) → LLMChain[source]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set, since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
generate_dialogue_response(observation: str, now: Optional[datetime] = None) → Tuple[bool, str][source]¶
Generate a dialogue response to a given observation.
generate_reaction(observation: str, now: Optional[datetime] = None) → Tuple[bool, str][source]¶
React to a given observation.
get_full_header(force_refresh: bool = False, now: Optional[datetime] = None) → str[source]¶
Return a full header of the agent’s status, summary, and current time.
get_summary(force_refresh: bool = False, now: Optional[datetime] = None) → str[source]¶
Return a descriptive summary of the agent.
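A usage sketch, assuming llm and memory were built as in the GenerativeAgentMemory sketch above:

    from langchain_experimental.generative_agents import GenerativeAgent

    tommie = GenerativeAgent(
        name="Tommie",
        age=25,
        traits="anxious, likes design, talkative",
        status="looking for a job",
        llm=llm,        # assumption: the chat model from the memory sketch
        memory=memory,  # assumption: the GenerativeAgentMemory built earlier
    )

    print(tommie.get_summary(force_refresh=True))
    did_react, reaction = tommie.generate_reaction("Tommie sees a help-wanted sign.")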
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
summarize_related_memories(observation: str) → str[source]¶
Summarize memories that are most relevant to an observation.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
langchain_core.outputs.run_info.RunInfo¶
class langchain_core.outputs.run_info.RunInfo[source]¶
Bases: BaseModel
Class that contains metadata for a single execution of a Chain or model.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param run_id: uuid.UUID [Required]¶
A unique identifier for the model or chain run.
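A minimal construction sketch:

    from uuid import uuid4

    from langchain_core.outputs import RunInfo

    info = RunInfo(run_id=uuid4())
    print(info.run_id)  # the UUID identifying this particular run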
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set, since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
langchain_core.outputs.llm_result.LLMResult¶
class langchain_core.outputs.llm_result.LLMResult[source]¶
Bases: BaseModel
Class that contains all results for a batched LLM call.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param generations: List[List[langchain_core.outputs.generation.Generation]] [Required]¶
List of generated outputs. This is a List[List[]] because
each input could have multiple candidate generations.
param llm_output: Optional[dict] = None¶
Arbitrary LLM provider-specific output.
param run: Optional[List[langchain_core.outputs.run_info.RunInfo]] = None¶
List of metadata info for model call for each input.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set, since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
flatten() → List[LLMResult][source]¶
Flatten generations into a single list.
Unpack List[List[Generation]] -> List[LLMResult], where each returned
LLMResult contains only a single Generation. If token usage information
is available, it is kept only for the LLMResult corresponding to the
top-choice Generation, to avoid over-counting token usage downstream.
Returns
List of LLMResults where each returned LLMResult contains a single Generation.
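A minimal sketch of flattening a result with two candidate generations for one input:

    from langchain_core.outputs import Generation, LLMResult

    result = LLMResult(
        generations=[[Generation(text="candidate A"), Generation(text="candidate B")]],
        llm_output={"token_usage": {"total_tokens": 7}},
    )

    flat = result.flatten()
    len(flat)                       # -> 2, one LLMResult per candidate
    flat[0].generations[0][0].text  # -> 'candidate A'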
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().