Run when LLM ends running.
async on_llm_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run when LLM errors.
Parameters
error (BaseException) – The error that occurred.
kwargs (Any) – Additional keyword arguments, which may include:
response (LLMResult) – The response which was generated before the error occurred.
async on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run on new LLM token. Only available when streaming is enabled.
async on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶
Run when LLM starts running.
async on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run on retriever end.
async on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run on retriever error.
async on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → None¶
Run on retriever start.
async on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
async on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run on arbitrary text.
async on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run when tool ends running.
async on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run when tool errors.
async on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when tool starts running.
langchain_community.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler¶
class langchain_community.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler(parent_container: DeltaGenerator, *, max_thought_containers: int = 4, expand_new_thoughts: bool = True, collapse_completed_thoughts: bool = True, thought_labeler: Optional[LLMThoughtLabeler] = None)[source]¶
A callback handler that writes to a Streamlit app.
Create a StreamlitCallbackHandler instance.
Parameters
parent_container – The st.container that will contain all the Streamlit elements that the
Handler creates.
max_thought_containers – The max number of completed LLM thought containers to show at once. When
this threshold is reached, a new thought will cause the oldest thoughts to
be collapsed into a “History” expander. Defaults to 4.
expand_new_thoughts – Each LLM “thought” gets its own st.expander. This param controls whether
that expander is expanded by default. Defaults to True.
collapse_completed_thoughts – If True, LLM thought expanders will be collapsed when completed.
Defaults to True.
thought_labeler – An optional custom LLMThoughtLabeler instance. If unspecified, the handler
will use the default thought labeling logic. Defaults to None.
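A minimal usage sketch under the common Streamlit pattern; `agent` is a placeholder for any agent executor or Runnable you have already built:

```python
import streamlit as st

from langchain_community.callbacks.streamlit.streamlit_callback_handler import (
    StreamlitCallbackHandler,
)

# Stream the agent's intermediate "thoughts" into the current container.
st_callback = StreamlitCallbackHandler(st.container(), max_thought_containers=4)

prompt = st.text_input("Ask a question")
if prompt:
    # `agent` is hypothetical; pass the handler through the run config.
    response = agent.invoke({"input": prompt}, config={"callbacks": [st_callback]})
    st.write(response["output"])
```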
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__(parent_container, *[, ...])
Create a StreamlitCallbackHandler instance.
on_agent_action(action[, color])
Run on agent action.
on_agent_finish(finish[, color])
Run on agent end.
on_chain_end(outputs, **kwargs)
Run when chain ends running.
on_chain_error(error, **kwargs)
Run when chain errors.
on_chain_start(serialized, inputs, **kwargs)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Run when LLM errors. Parameters: error (BaseException) – the error that occurred; kwargs (Any) – additional keyword arguments, which may include response (LLMResult), the response generated before the error occurred.
on_llm_new_token(token, **kwargs)
Run on new LLM token.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text[, color, end])
Run on arbitrary text.
on_tool_end(output[, color, ...])
Run when tool ends running.
on_tool_error(error, **kwargs)
Run when tool errors.
on_tool_start(serialized, input_str, **kwargs)
Run when tool starts running.
__init__(parent_container: DeltaGenerator, *, max_thought_containers: int = 4, expand_new_thoughts: bool = True, collapse_completed_thoughts: bool = True, thought_labeler: Optional[LLMThoughtLabeler] = None)[source]¶
Create a StreamlitCallbackHandler instance.
Parameters
parent_container – The st.container that will contain all the Streamlit elements that the
Handler creates.
max_thought_containers – The max number of completed LLM thought containers to show at once. When
this threshold is reached, a new thought will cause the oldest thoughts to
be collapsed into a “History” expander. Defaults to 4.
expand_new_thoughts – Each LLM “thought” gets its own st.expander. This param controls whether
that expander is expanded by default. Defaults to True.
collapse_completed_thoughts – If True, LLM thought expanders will be collapsed when completed.
Defaults to True.
thought_labeler – An optional custom LLMThoughtLabeler instance. If unspecified, the handler
will use the default thought labeling logic. Defaults to None.
on_agent_action(action: AgentAction, color: Optional[str] = None, **kwargs: Any) → Any[source]¶
Run on agent action.
on_agent_finish(finish: AgentFinish, color: Optional[str] = None, **kwargs: Any) → None[source]¶
Run on agent end.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain ends running.
on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when chain errors.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain starts running.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when LLM errors.
Parameters
error (BaseException) – The error that occurred.
kwargs (Any) – Additional keyword arguments, which may include:
response (LLMResult) – The response which was generated before the error occurred.
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Run on new LLM token. Only available when streaming is enabled.
Parameters
token (str) – The new token.
chunk (GenerationChunk | ChatGenerationChunk) – The new generated chunk, containing content and other information.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Run when LLM starts running.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
on_text(text: str, color: Optional[str] = None, end: str = '', **kwargs: Any) → None[source]¶
Run on arbitrary text.
on_tool_end(output: str, color: Optional[str] = None, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶
Run when tool ends running.
on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when tool errors.
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Run when tool starts running.
Examples using StreamlitCallbackHandler¶
Streamlit
GPT4All
langchain_core.callbacks.manager.AsyncCallbackManagerForRetrieverRun¶
class langchain_core.callbacks.manager.AsyncCallbackManagerForRetrieverRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Async callback manager for retriever run.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
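This manager is rarely constructed by hand; LangChain injects it into the async path of a retriever. A sketch of a custom retriever that receives it (ToyRetriever and its canned documents are hypothetical):

```python
from typing import List

from langchain_core.callbacks.manager import (
    AsyncCallbackManagerForRetrieverRun,
    CallbackManagerForRetrieverRun,
)
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever


class ToyRetriever(BaseRetriever):
    """Hypothetical retriever returning a single canned document."""

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        return [Document(page_content=f"sync result for {query}")]

    async def _aget_relevant_documents(
        self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun
    ) -> List[Document]:
        # get_child() returns a callback manager to hand to any nested
        # runnables, so their sub-runs are traced under this retriever run.
        child = run_manager.get_child(tag="toy-retriever")
        return [Document(page_content=f"async result for {query}")]
```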
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_child([tag])
Get a child callback manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
get_sync()
Get the equivalent sync RunManager.
on_retriever_end(documents, **kwargs)
Run when retriever ends running.
on_retriever_error(error, **kwargs)
Run when retriever errors.
on_retry(retry_state, **kwargs)
Run on a retry event.
on_text(text, **kwargs)
Run when text is received.
__init__(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
get_child(tag: Optional[str] = None) → AsyncCallbackManager¶
Get a child callback manager.
Parameters
tag (str, optional) – The tag for the child callback manager.
Defaults to None.
Returns
The child callback manager.
Return type
AsyncCallbackManager
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
get_sync() → CallbackManagerForRetrieverRun[source]¶
Get the equivalent sync RunManager.
Returns
The sync RunManager.
Return type
CallbackManagerForRetrieverRun
async on_retriever_end(documents: Sequence[Document], **kwargs: Any) → None[source]¶
Run when retriever ends running.
async on_retriever_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when retriever errors.
async on_retry(retry_state: RetryCallState, **kwargs: Any) → None¶
Run on a retry event.
async on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
Examples using AsyncCallbackManagerForRetrieverRun¶
Retrieve as you generate with FLARE
langchain_community.callbacks.labelstudio_callback.get_default_label_configs¶
langchain_community.callbacks.labelstudio_callback.get_default_label_configs(mode: Union[str, LabelStudioMode]) → Tuple[str, LabelStudioMode][source]¶
Get default Label Studio configs for the given mode.
Parameters
mode – Label Studio mode (“prompt” or “chat”)
Returns: Tuple of Label Studio config and mode
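A small sketch of calling it; the printed values are indicative, not exact:

```python
from langchain_community.callbacks.labelstudio_callback import (
    get_default_label_configs,
)

# Accepts either the string "prompt"/"chat" or a LabelStudioMode value.
config, mode = get_default_label_configs("prompt")
print(mode)         # e.g. LabelStudioMode.PROMPT
print(config[:60])  # start of the Label Studio XML labeling config
```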
langchain_community.callbacks.infino_callback.InfinoCallbackHandler¶
class langchain_community.callbacks.infino_callback.InfinoCallbackHandler(model_id: Optional[str] = None, model_version: Optional[str] = None, verbose: bool = False)[source]¶
Callback Handler that logs to Infino.
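A hedged sketch of attaching the handler to a model; it assumes a reachable Infino instance with the `infinopy` client installed, and `ChatOpenAI` stands in for any chat model:

```python
from langchain_community.callbacks.infino_callback import InfinoCallbackHandler
from langchain_openai import ChatOpenAI  # any LLM/chat model would do

handler = InfinoCallbackHandler(model_id="demo-model", model_version="0.1")

# Latency, errors, token usage, prompts, and responses are logged to Infino.
llm = ChatOpenAI(callbacks=[handler])
llm.invoke("Tell me a joke about observability.")
```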
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([model_id, model_version, verbose])
on_agent_action(action, **kwargs)
Do nothing when agent takes a specific action.
on_agent_finish(finish, **kwargs)
Do nothing.
on_chain_end(outputs, **kwargs)
Do nothing when LLM chain ends.
on_chain_error(error, **kwargs)
Need to log the error.
on_chain_start(serialized, inputs, **kwargs)
Do nothing when LLM chain starts.
on_chat_model_start(serialized, messages, ...)
Run when LLM starts running.
on_llm_end(response, **kwargs)
Log the latency, error, token usage, and response to Infino.
on_llm_error(error, **kwargs)
Set the error flag.
on_llm_new_token(token, **kwargs)
Do nothing when a new token is generated.
on_llm_start(serialized, prompts, **kwargs)
Log the prompts to Infino, and set start time and error flag.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, **kwargs)
Do nothing.
on_tool_end(output[, observation_prefix, ...])
Do nothing when tool ends.
on_tool_error(error, **kwargs)
Do nothing when tool outputs an error.
on_tool_start(serialized, input_str, **kwargs)
Do nothing when tool starts.
__init__(model_id: Optional[str] = None, model_version: Optional[str] = None, verbose: bool = False) → None[source]¶
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Do nothing when agent takes a specific action.
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶
Do nothing.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Do nothing when LLM chain ends.
on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Need to log the error.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Do nothing when LLM chain starts.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any) → None[source]¶
Run when LLM starts running.
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Log the latency, error, token usage, and response to Infino.
on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
Set the error flag.
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Do nothing when a new token is generated.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Log the prompts to Infino, and set start time and error flag.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
on_text(text: str, **kwargs: Any) → None[source]¶
Do nothing.
on_tool_end(output: str, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶
Do nothing when tool ends.
on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶
Do nothing when tool outputs an error.
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Do nothing when tool starts.
Examples using InfinoCallbackHandler¶
Infino
langchain_community.callbacks.wandb_callback.construct_html_from_prompt_and_generation¶
langchain_community.callbacks.wandb_callback.construct_html_from_prompt_and_generation(prompt: str, generation: str) → Any[source]¶
Construct an html element from a prompt and a generation.
Parameters
prompt (str) – The prompt.
generation (str) – The generation.
Returns
The html element.
Return type
(wandb.Html)
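A quick sketch; it requires the `wandb` package, and `run` is a hypothetical active W&B run:

```python
from langchain_community.callbacks.wandb_callback import (
    construct_html_from_prompt_and_generation,
)

html = construct_html_from_prompt_and_generation(
    prompt="What is 2 + 2?",
    generation="4",
)
# `html` is a wandb.Html element, loggable to an active run:
# run.log({"trace": html})
```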
langchain_community.callbacks.tracers.wandb.WandbRunArgs¶
class langchain_community.callbacks.tracers.wandb.WandbRunArgs[source]¶
Arguments for the WandbTracer.
job_type: Optional[str]¶
dir: Optional[StrPath]¶
config: Union[Dict, str, None]¶
project: Optional[str]¶
entity: Optional[str]¶
reinit: Optional[bool]¶
tags: Optional[Sequence]¶
group: Optional[str]¶
name: Optional[str]¶
notes: Optional[str]¶
magic: Optional[Union[dict, str, bool]]¶
config_exclude_keys: Optional[List[str]]¶
config_include_keys: Optional[List[str]]¶
anonymous: Optional[str]¶
mode: Optional[str]¶
allow_val_change: Optional[bool]¶
resume: Optional[Union[bool, str]]¶
force: Optional[bool]¶
tensorboard: Optional[bool]¶
sync_tensorboard: Optional[bool]¶
monitor_gym: Optional[bool]¶
save_code: Optional[bool]¶
id: Optional[str]¶
settings: Union[WBSettings, Dict[str, Any], None]¶
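These keys mirror the keyword arguments of `wandb.init()`. A sketch of passing a subset to the tracer; the project and tag names are placeholders, and the exact WandbTracer wiring may differ by version:

```python
from langchain_community.callbacks.tracers.wandb import WandbRunArgs, WandbTracer

run_args = WandbRunArgs(
    project="langchain-traces",  # placeholder project name
    name="demo-run",
    tags=["demo"],
)
tracer = WandbTracer(run_args=run_args)
# Pass `tracer` via callbacks=[tracer] to any chain, LLM, or agent call.
```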
langchain_community.callbacks.sagemaker_callback.SageMakerCallbackHandler¶
class langchain_community.callbacks.sagemaker_callback.SageMakerCallbackHandler(run: Any)[source]¶
Callback Handler that logs prompt artifacts and metrics to SageMaker Experiments.
Parameters
run (sagemaker.experiments.run.Run) – Run object where the experiment is logged.
Initialize callback handler.
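A hedged sketch of wiring the handler into a SageMaker Experiments run; the experiment and run names are placeholders, and it requires the `sagemaker` SDK plus AWS credentials:

```python
from langchain_community.callbacks.sagemaker_callback import SageMakerCallbackHandler
from sagemaker.experiments.run import Run

with Run(experiment_name="langchain-demo", run_name="trial-1") as run:
    handler = SageMakerCallbackHandler(run)
    # Attach via callbacks=[handler] to an LLM or chain; prompts and metrics
    # are logged as artifacts to this experiment run.
    ...
    handler.flush_tracker()  # reset steps and delete the temporary directory
```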
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__(run)
Initialize callback handler.
flush_tracker()
Reset the steps and delete the temporary local directory.
jsonf(data, data_dir, filename[, is_output])
Log the input data as a JSON file artifact.
on_agent_action(action, **kwargs)
Run on agent action.
on_agent_finish(finish, **kwargs)
Run when agent ends running.
on_chain_end(outputs, **kwargs)
Run when chain ends running.
on_chain_error(error, **kwargs)
Run when chain errors.
on_chain_start(serialized, inputs, **kwargs)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Run when LLM errors.
on_llm_new_token(token, **kwargs)
Run when LLM generates a new token.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, **kwargs)
Run when agent is ending.
on_tool_end(output, **kwargs)
Run when tool ends running.
on_tool_error(error, **kwargs)
Run when tool errors.
on_tool_start(serialized, input_str, **kwargs)
Run when tool starts running.
__init__(run: Any) → None[source]¶
Initialize callback handler.
flush_tracker() → None[source]¶
Reset the steps and delete the temporary local directory.
jsonf(data: Dict[str, Any], data_dir: str, filename: str, is_output: Optional[bool] = True) → None[source]¶
Log the input data as a JSON file artifact.
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Run on agent action.
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶
Run when agent ends running.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain ends running.
on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when chain errors.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain starts running.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when LLM errors.
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Run when LLM generates a new token.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Run when LLM starts.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
on_text(text: str, **kwargs: Any) → None[source]¶
Run when agent is ending.
on_tool_end(output: str, **kwargs: Any) → None[source]¶
Run when tool ends running.
on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when tool errors.
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Run when tool starts running.
Examples using SageMakerCallbackHandler¶
SageMaker Tracking
langchain_community.callbacks.human.HumanRejectedException¶
class langchain_community.callbacks.human.HumanRejectedException[source]¶
Exception to raise when a person manually reviews and rejects a value.
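This exception is raised by the human-approval callback handlers in the same module when a reviewer declines a tool input. A sketch of catching it; `ShellTool` stands in for any tool you gate behind approval:

```python
from langchain_community.callbacks.human import (
    HumanApprovalCallbackHandler,
    HumanRejectedException,
)
from langchain_community.tools import ShellTool  # any tool works here

tool = ShellTool(callbacks=[HumanApprovalCallbackHandler()])
try:
    tool.run("ls /tmp")  # prompts for approval before executing
except HumanRejectedException:
    print("Input was rejected by the human reviewer.")
```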
langchain_community.callbacks.wandb_callback.import_wandb¶
langchain_community.callbacks.wandb_callback.import_wandb() → Any[source]¶
Import the wandb python package and raise an error if it is not installed.
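It follows the lazy-import pattern used throughout these callback modules:

```python
from langchain_community.callbacks.wandb_callback import import_wandb

wandb = import_wandb()  # raises ImportError with an install hint if wandb is absent
wandb.login()           # then use the module as usual
```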
langchain_community.callbacks.wandb_callback.WandbCallbackHandler¶
class langchain_community.callbacks.wandb_callback.WandbCallbackHandler(job_type: Optional[str] = None, project: Optional[str] = 'langchain_callback_demo', entity: Optional[str] = None, tags: Optional[Sequence] = None, group: Optional[str] = None, name: Optional[str] = None, notes: Optional[str] = None, visualize: bool = False, complexity_metrics: bool = False, stream_logs: bool = False)[source]¶
Callback Handler that logs to Weights and Biases.
Parameters
job_type (str) – The type of job.
project (str) – The project to log to.
entity (str) – The entity to log to.
tags (list) – The tags to log.
group (str) – The group to log to.
name (str) – The name of the run.
notes (str) – The notes to log.
visualize (bool) – Whether to visualize the run.
complexity_metrics (bool) – Whether to log complexity metrics.
stream_logs (bool) – Whether to stream callback actions to W&B
This handler utilizes the associated callback method, formats the input of each
callback function with metadata regarding the state of the LLM run, and adds the
response to the list of records for both the {method}_records and action. It then
logs the response to Weights and Biases using the run.log() method.
Initialize callback handler.
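A hedged end-to-end sketch; the project and run names are placeholders, and `OpenAI` stands in for any LLM:

```python
from langchain_community.callbacks.wandb_callback import WandbCallbackHandler
from langchain_openai import OpenAI  # any LLM would do

wandb_callback = WandbCallbackHandler(
    job_type="inference",
    project="langchain_callback_demo",
    name="llm-run",
    tags=["demo"],
)
llm = OpenAI(callbacks=[wandb_callback])
llm.invoke("Tell me a joke.")

# Log the session table, save the LLM as a W&B artifact, and finish the run.
wandb_callback.flush_tracker(llm, finish=True)
```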
Attributes
always_verbose
Whether to call verbose callbacks even if verbose is False.
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([job_type, project, entity, tags, ...])
Initialize callback handler.
flush_tracker([langchain_asset, reset, ...])
Flush the tracker and reset the session.
get_custom_callback_meta()
on_agent_action(action, **kwargs)
Run on agent action.
on_agent_finish(finish, **kwargs)
Run when agent ends running.
on_chain_end(outputs, **kwargs)
Run when chain ends running.
on_chain_error(error, **kwargs)
Run when chain errors.
on_chain_start(serialized, inputs, **kwargs)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Run when LLM errors.
on_llm_new_token(token, **kwargs)
Run when LLM generates a new token.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, **kwargs)
Run when agent is ending.
on_tool_end(output, **kwargs)
Run when tool ends running.
on_tool_error(error, **kwargs)
Run when tool errors.
on_tool_start(serialized, input_str, **kwargs)
Run when tool starts running.
reset_callback_meta()
Reset the callback metadata.
__init__(job_type: Optional[str] = None, project: Optional[str] = 'langchain_callback_demo', entity: Optional[str] = None, tags: Optional[Sequence] = None, group: Optional[str] = None, name: Optional[str] = None, notes: Optional[str] = None, visualize: bool = False, complexity_metrics: bool = False, stream_logs: bool = False) → None[source]¶
Initialize callback handler.
flush_tracker(langchain_asset: Any = None, reset: bool = True, finish: bool = False, job_type: Optional[str] = None, project: Optional[str] = None, entity: Optional[str] = None, tags: Optional[Sequence] = None, group: Optional[str] = None, name: Optional[str] = None, notes: Optional[str] = None, visualize: Optional[bool] = None, complexity_metrics: Optional[bool] = None) → None[source]¶
Flush the tracker and reset the session.
Parameters
langchain_asset – The langchain asset to save.
reset – Whether to reset the session.
finish – Whether to finish the run.
job_type – The job type.
project – The project.
entity – The entity.
tags – The tags.
group – The group.
name – The name.
notes – The notes.
visualize – Whether to visualize.
complexity_metrics – Whether to compute complexity metrics.
Returns – None
get_custom_callback_meta() → Dict[str, Any]¶
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Run on agent action.
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶
Run when agent ends running.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain ends running.
on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when chain errors.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain starts running.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when LLM errors.
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Run when LLM generates a new token.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Run when LLM starts.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
on_text(text: str, **kwargs: Any) → None[source]¶
Run when agent is ending.
on_tool_end(output: str, **kwargs: Any) → None[source]¶
Run when tool ends running.
on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when tool errors.
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Run when tool starts running.
reset_callback_meta() → None¶
Reset the callback metadata.
Examples using WandbCallbackHandler¶
Weights & Biases
langchain_core.callbacks.base.BaseCallbackManager¶
class langchain_core.callbacks.base.BaseCallbackManager(handlers: List[BaseCallbackHandler], inheritable_handlers: Optional[List[BaseCallbackHandler]] = None, parent_run_id: Optional[UUID] = None, *, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Base callback manager that handles callbacks from LangChain.
Initialize callback manager.
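A small sketch of managing handlers, tags, and metadata; the metadata values are placeholders:

```python
from langchain_core.callbacks.base import BaseCallbackManager
from langchain_core.callbacks.stdout import StdOutCallbackHandler

manager = BaseCallbackManager(handlers=[StdOutCallbackHandler()])
manager.add_tags(["experiment-1"])               # inherited by child managers
manager.add_metadata({"user": "alice"})          # placeholder metadata
manager.set_handlers([StdOutCallbackHandler()])  # replace all handlers at once
```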
Attributes
is_async
Whether the callback manager is async.
Methods
__init__(handlers[, inheritable_handlers, ...])
Initialize callback manager.
add_handler(handler[, inherit])
Add a handler to the callback manager.
add_metadata(metadata[, inherit])
add_tags(tags[, inherit])
copy()
Copy the callback manager.
on_chain_start(serialized, inputs, *, run_id)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_start(serialized, prompts, *, run_id)
Run when LLM starts running.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_tool_start(serialized, input_str, *, run_id)
Run when tool starts running.
remove_handler(handler)
Remove a handler from the callback manager.
remove_metadata(keys)
remove_tags(tags)
set_handler(handler[, inherit])
Set handler as the only handler on the callback manager.
set_handlers(handlers[, inherit])
Set handlers as the only handlers on the callback manager.
__init__(handlers: List[BaseCallbackHandler], inheritable_handlers: Optional[List[BaseCallbackHandler]] = None, parent_run_id: Optional[UUID] = None, *, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None[source]¶
Initialize callback manager.
add_handler(handler: BaseCallbackHandler, inherit: bool = True) → None[source]¶
Add a handler to the callback manager.
add_metadata(metadata: Dict[str, Any], inherit: bool = True) → None[source]¶
add_tags(tags: List[str], inherit: bool = True) → None[source]¶
copy() → T[source]¶
Copy the callback manager.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when chain starts running.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when LLM starts running.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when tool starts running.
remove_handler(handler: BaseCallbackHandler) → None[source]¶
Remove a handler from the callback manager.
remove_metadata(keys: List[str]) → None[source]¶
remove_tags(tags: List[str]) → None[source]¶
set_handler(handler: BaseCallbackHandler, inherit: bool = True) → None[source]¶
Set handler as the only handler on the callback manager.
set_handlers(handlers: List[BaseCallbackHandler], inherit: bool = True) → None[source]¶
Set handlers as the only handlers on the callback manager.
langchain_community.callbacks.streamlit.streamlit_callback_handler.LLMThoughtState¶
class langchain_community.callbacks.streamlit.streamlit_callback_handler.LLMThoughtState(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Enumerator of the LLMThought state.
THINKING = 'THINKING'¶
RUNNING_TOOL = 'RUNNING_TOOL'¶
COMPLETE = 'COMPLETE'¶
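A trivial sketch of inspecting the state:

```python
from langchain_community.callbacks.streamlit.streamlit_callback_handler import (
    LLMThoughtState,
)

state = LLMThoughtState.THINKING
if state is not LLMThoughtState.COMPLETE:
    print(f"Thought still in progress: {state.value}")  # -> THINKING
```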
langchain_community.callbacks.comet_ml_callback.CometCallbackHandler¶
class langchain_community.callbacks.comet_ml_callback.CometCallbackHandler(task_type: Optional[str] = 'inference', workspace: Optional[str] = None, project_name: Optional[str] = None, tags: Optional[Sequence] = None, name: Optional[str] = None, visualizations: Optional[List[str]] = None, complexity_metrics: bool = False, custom_metrics: Optional[Callable] = None, stream_logs: bool = True)[source]¶
Callback Handler that logs to Comet.
Parameters
task_type (str) – The type of comet_ml task such as “inference”,
“testing” or “qc”
workspace (str) – The comet_ml workspace
project_name (str) – The comet_ml project name
tags (list) – Tags to add to the task
name (str) – Name of the comet_ml task
visualizations (list) – The visualizations to log, if any
complexity_metrics (bool) – Whether to log complexity metrics
custom_metrics (Callable) – An optional function for computing custom metrics
stream_logs (bool) – Whether to stream callback actions to Comet
This handler utilizes the associated callback method, formats the input of each
callback function with metadata regarding the state of the LLM run, and adds the
response to the list of records for both the {method}_records and action. It then
logs the response to Comet.
Initialize callback handler.
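A hedged sketch mirroring the W&B handler above; the project name is a placeholder, and it requires `comet_ml` plus an API key:

```python
from langchain_community.callbacks.comet_ml_callback import CometCallbackHandler
from langchain_openai import OpenAI  # any LLM would do

comet_callback = CometCallbackHandler(
    project_name="comet-langchain-demo",
    tags=["llm"],
    complexity_metrics=True,
)
llm = OpenAI(callbacks=[comet_callback], temperature=0.9)
llm.invoke("Tell me a joke.")

# Log the records gathered so far and end the Comet experiment.
comet_callback.flush_tracker(llm, finish=True)
```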
Attributes
always_verbose
Whether to call verbose callbacks even if verbose is False.
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([task_type, workspace, ...])
Initialize callback handler.
flush_tracker([langchain_asset, task_type, ...])
Flush the tracker and setup the session.
get_custom_callback_meta()
on_agent_action(action, **kwargs)
Run on agent action.
on_agent_finish(finish, **kwargs)
Run when agent ends running.
on_chain_end(outputs, **kwargs)
Run when chain ends running.
on_chain_error(error, **kwargs)
Run when chain errors.
on_chain_start(serialized, inputs, **kwargs)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Run when LLM errors.
on_llm_new_token(token, **kwargs)
Run when LLM generates a new token.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, **kwargs)
Run when agent is ending.
on_tool_end(output, **kwargs)
Run when tool ends running.
on_tool_error(error, **kwargs)
Run when tool errors.
on_tool_start(serialized, input_str, **kwargs)
Run when tool starts running.
reset_callback_meta()
Reset the callback metadata.
__init__(task_type: Optional[str] = 'inference', workspace: Optional[str] = None, project_name: Optional[str] = None, tags: Optional[Sequence] = None, name: Optional[str] = None, visualizations: Optional[List[str]] = None, complexity_metrics: bool = False, custom_metrics: Optional[Callable] = None, stream_logs: bool = True) → None[source]¶
Initialize callback handler.
flush_tracker(langchain_asset: Any = None, task_type: Optional[str] = 'inference', workspace: Optional[str] = None, project_name: Optional[str] = 'comet-langchain-demo', tags: Optional[Sequence] = None, name: Optional[str] = None, visualizations: Optional[List[str]] = None, complexity_metrics: bool = False, custom_metrics: Optional[Callable] = None, finish: bool = False, reset: bool = False) → None[source]¶
Flush the tracker and setup the session.
Everything after this will be a new table.
Parameters
name – Name of the performed session so far, so that it is identifiable
langchain_asset – The langchain asset to save.
finish – Whether to finish the run.
Returns – None
get_custom_callback_meta() → Dict[str, Any]¶
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Run on agent action.
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶
Run when agent ends running.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain ends running.
on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when chain errors.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain starts running.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when LLM errors.
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Run when LLM generates a new token.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Run when LLM starts.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
on_text(text: str, **kwargs: Any) → None[source]¶
Run when agent is ending.
on_tool_end(output: str, **kwargs: Any) → None[source]¶
Run when tool ends running.
on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when tool errors.
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Run when tool starts running.
reset_callback_meta() → None¶
Reset the callback metadata.
Examples using CometCallbackHandler¶
Comet
langchain_community.callbacks.infino_callback.get_num_tokens¶
langchain_community.callbacks.infino_callback.get_num_tokens(string: str, openai_model_name: str) → int[source]¶
Calculate the number of tokens for an OpenAI model using the tiktoken package.
Official documentation: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
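A quick sketch; it requires the `tiktoken` package, and the count shown is indicative:

```python
from langchain_community.callbacks.infino_callback import get_num_tokens

n = get_num_tokens("How many tokens is this?", openai_model_name="gpt-3.5-turbo")
print(n)  # a small integer, e.g. 6
```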
langchain.callbacks.file.FileCallbackHandler¶
class langchain.callbacks.file.FileCallbackHandler(filename: str, mode: str = 'a', color: Optional[str] = None)[source]¶
Callback Handler that writes to a file.
Initialize callback handler.
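A minimal sketch that logs a chain run to a file; the prompt and LLM choice are placeholders:

```python
from langchain.callbacks.file import FileCallbackHandler
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI  # any LLM would do

handler = FileCallbackHandler("output.log")  # opened in append mode by default

prompt = PromptTemplate.from_template("1 + {number} = ")
chain = prompt | OpenAI()
chain.invoke({"number": 2}, config={"callbacks": [handler]})
# output.log now contains the chain start/end markers and intermediate text.
```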
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__(filename[, mode, color])
Initialize callback handler.
on_agent_action(action[, color])
Run on agent action.
on_agent_finish(finish[, color])
Run on agent end.
on_chain_end(outputs, **kwargs)
Print out that we finished a chain.
on_chain_error(error, *, run_id[, parent_run_id])
Run when chain errors.
on_chain_start(serialized, inputs, **kwargs)
Print out that we are entering a chain.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, *, run_id[, parent_run_id])
Run when LLM ends running.
on_llm_error(error, *, run_id[, parent_run_id])
Run when LLM errors. Parameters: error (BaseException) – the error that occurred; kwargs (Any) – additional keyword arguments, which may include response (LLMResult), the response generated before the error occurred.
on_llm_new_token(token, *[, chunk, ...])
Run on new LLM token.
on_llm_start(serialized, prompts, *, run_id)
Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text[, color, end])
Run when agent ends.
on_tool_end(output[, color, ...])
If not the final action, print out observation.
on_tool_error(error, *, run_id[, parent_run_id])
Run when tool errors.
on_tool_start(serialized, input_str, *, run_id)
Run when tool starts running.
__init__(filename: str, mode: str = 'a', color: Optional[str] = None) → None[source]¶
Initialize callback handler.
on_agent_action(action: AgentAction, color: Optional[str] = None, **kwargs: Any) → Any[source]¶
Run on agent action.
on_agent_finish(finish: AgentFinish, color: Optional[str] = None, **kwargs: Any) → None[source]¶
Run on agent end.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Print out that we finished a chain.
on_chain_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when chain errors.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Print out that we are entering a chain.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when LLM ends running.
on_llm_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when LLM errors.
Parameters
error (BaseException) – The error that occurred.
kwargs (Any) – Additional keyword arguments, which may include:
response (LLMResult) – The response which was generated before the error occurred.
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on new LLM token. Only available when streaming is enabled.
Parameters
token (str) – The new token.
chunk (GenerationChunk | ChatGenerationChunk) – The new generated chunk, containing content and other information.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when LLM starts running.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
on_text(text: str, color: Optional[str] = None, end: str = '', **kwargs: Any) → None[source]¶
Run when agent ends.
on_tool_end(output: str, color: Optional[str] = None, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶
If not the final action, print out observation.
on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when tool errors.
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when tool starts running.
Examples using FileCallbackHandler¶
Logging to file
langchain_core.callbacks.manager.AsyncCallbackManagerForToolRun¶
class langchain_core.callbacks.manager.AsyncCallbackManagerForToolRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Async callback manager for tool run.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
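As with the retriever variant, this manager is injected by LangChain into a custom tool's async path; see the "Defining Custom Tools" example linked below. A sketch (EchoTool is hypothetical):

```python
from typing import Optional

from langchain_core.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)
from langchain_core.tools import BaseTool


class EchoTool(BaseTool):
    """Hypothetical tool that echoes its input."""

    name: str = "echo"
    description: str = "Echo the input string back to the caller."

    def _run(
        self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None
    ) -> str:
        return query

    async def _arun(
        self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None
    ) -> str:
        if run_manager is not None:
            await run_manager.on_text("echoing asynchronously")
        return query
```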
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_child([tag])
Get a child callback manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
get_sync()
Get the equivalent sync RunManager.
on_retry(retry_state, **kwargs)
Run on a retry event.
on_text(text, **kwargs)
Run when text is received.
on_tool_end(output, **kwargs)
Run when tool ends running.
on_tool_error(error, **kwargs)
Run when tool errors.
__init__(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
get_child(tag: Optional[str] = None) → AsyncCallbackManager¶
Get a child callback manager.
Parameters
tag (str, optional) – The tag for the child callback manager.
Defaults to None.
Returns
The child callback manager.
Return type
AsyncCallbackManager
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
get_sync() → CallbackManagerForToolRun[source]¶
Get the equivalent sync RunManager.
Returns
The sync RunManager.
Return type
CallbackManagerForToolRun
async on_retry(retry_state: RetryCallState, **kwargs: Any) → None¶
Run on a retry event.
async on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
async on_tool_end(output: str, **kwargs: Any) → None[source]¶
Run when tool ends running.
Parameters
output (str) – The output of the tool.
async on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when tool errors.
Parameters
error (Exception or KeyboardInterrupt) – The error.
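Example (a hedged sketch of where this manager usually appears: the optional run_manager argument of a custom tool's _arun; the tool name and return values are invented):
from typing import Optional

from langchain.tools import BaseTool
from langchain_core.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)

class CustomSearchTool(BaseTool):
    name = "custom_search"
    description = "useful for answering questions about current events"

    def _run(self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:
        return "sync result"

    async def _arun(self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:
        if run_manager:
            # Forwards text to every registered handler's on_text hook.
            await run_manager.on_text("searching...")
        return "async result"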
Examples using AsyncCallbackManagerForToolRun¶
Defining Custom Tools
langchain_core.callbacks.base.LLMManagerMixin¶
class langchain_core.callbacks.base.LLMManagerMixin[source]¶
Mixin for LLM callbacks.
Methods
__init__()
on_llm_end(response, *, run_id[, parent_run_id])
Run when LLM ends running.
on_llm_error(error, *, run_id[, parent_run_id])
Run when LLM errors.
on_llm_new_token(token, *[, chunk, ...])
Run on new LLM token.
__init__()¶
on_llm_end(response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when LLM ends running.
on_llm_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run when LLM errors.
Parameters
error (BaseException) – The error that occurred.
kwargs (Any) – Additional keyword arguments, e.g. response (LLMResult), the response which was generated before the error occurred.
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any[source]¶
Run on new LLM token. Only available when streaming is enabled.
Parameters
token (str) – The new token.
chunk (GenerationChunk | ChatGenerationChunk) – The new generated chunk, containing content and other information.
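Example (an illustrative handler built on these hooks; BaseCallbackHandler mixes in LLMManagerMixin, so overriding the methods is all that is required — the class name and print behavior here are invented):
from langchain_core.callbacks.base import BaseCallbackHandler
from langchain_core.outputs import LLMResult

class TokenStreamHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Fires only when the wrapped model is streaming.
        print(token, end="", flush=True)

    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        print()  # newline once the completion finishes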
langchain_community.callbacks.infino_callback.import_infino¶
langchain_community.callbacks.infino_callback.import_infino() → Any[source]¶
Import the infino client.
langchain_community.callbacks.comet_ml_callback.import_comet_ml¶
langchain_community.callbacks.comet_ml_callback.import_comet_ml() → Any[source]¶
Import comet_ml and raise an error if it is not installed.
langchain_community.callbacks.context_callback.ContextCallbackHandler¶
class langchain_community.callbacks.context_callback.ContextCallbackHandler(token: str = '', verbose: bool = False, **kwargs: Any)[source]¶
Callback Handler that records transcripts to the Context service (https://context.ai).
Keyword Arguments
token (optional) – The token with which to authenticate requests to Context.
Visit https://with.context.ai/settings to generate a token.
If not provided, the value of the CONTEXT_TOKEN environment
variable will be used.
Raises
ImportError – if the context-python package is not installed.
Chat Example:
>>> from langchain_core.messages import HumanMessage, SystemMessage
>>> from langchain_community.chat_models import ChatOpenAI
>>> from langchain_community.callbacks import ContextCallbackHandler
>>> context_callback = ContextCallbackHandler(
... token="<CONTEXT_TOKEN_HERE>",
... )
>>> chat = ChatOpenAI(
... temperature=0,
... headers={"user_id": "123"},
... callbacks=[context_callback],
... openai_api_key="API_KEY_HERE",
... )
>>> messages = [
... SystemMessage(content="You translate English to French."),
... HumanMessage(content="I love programming with LangChain."),
... ]
>>> chat(messages)
Chain Example:
>>> from langchain.chains import LLMChain
>>> from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, PromptTemplate
>>> from langchain_community.chat_models import ChatOpenAI
>>> from langchain_community.callbacks import ContextCallbackHandler
>>> context_callback = ContextCallbackHandler(
... token="<CONTEXT_TOKEN_HERE>",
... )
>>> human_message_prompt = HumanMessagePromptTemplate(
... prompt=PromptTemplate(
... template="What is a good name for a company that makes {product}?",
... input_variables=["product"],
... ),
... )
>>> chat_prompt_template = ChatPromptTemplate.from_messages(
... [human_message_prompt]
... )
>>> callback = ContextCallbackHandler(token)
>>> # Note: the same callback object must be shared between the
>>> # LLM and the chain.
>>> chat = ChatOpenAI(temperature=0.9, callbacks=[callback])
>>> chain = LLMChain(
... llm=chat,
... prompt=chat_prompt_template,
... callbacks=[callback]
... )
>>> chain.run("colorful socks")
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([token, verbose])
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, **kwargs)
Run when chain ends.
on_chain_error(error, *, run_id[, parent_run_id])
Run when chain errors.
on_chain_start(serialized, inputs, **kwargs)
Run when chain starts.
on_chat_model_start(serialized, messages, *, ...)
Run when the chat model is started.
on_llm_end(response, **kwargs)
Run when LLM ends.
on_llm_error(error, *, run_id[, parent_run_id])
Run when LLM errors.
on_llm_new_token(token, *[, chunk, ...])
Run on new LLM token.
on_llm_start(serialized, prompts, *, run_id)
Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id[, parent_run_id])
Run when tool ends running.
on_tool_error(error, *, run_id[, parent_run_id])
Run when tool errors.
on_tool_start(serialized, input_str, *, run_id)
Run when tool starts running.
__init__(token: str = '', verbose: bool = False, **kwargs: Any) → None[source]¶
on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent action.
on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent end.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain ends.
on_chain_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when chain errors.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain starts.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, **kwargs: Any) → Any[source]¶
Run when the chat model is started.
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends.
on_llm_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when LLM errors.
Parameters
error (BaseException) – The error that occurred.
kwargs (Any) – Additional keyword arguments, e.g. response (LLMResult), the response which was generated before the error occurred.
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on new LLM token. Only available when streaming is enabled.
Parameters
token (str) – The new token.
chunk (GenerationChunk | ChatGenerationChunk) – The new generated chunk, containing content and other information.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when LLM starts running.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on arbitrary text.
on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when tool ends running.
on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when tool errors.
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when tool starts running.
Examples using ContextCallbackHandler¶
Context
langchain_community.callbacks.aim_callback.import_aim¶
langchain_community.callbacks.aim_callback.import_aim() → Any[source]¶
Import the aim python package and raise an error if it is not installed.
langchain_community.callbacks.streamlit.streamlit_callback_handler.ToolRecord¶
class langchain_community.callbacks.streamlit.streamlit_callback_handler.ToolRecord(name: str, input_str: str)[source]¶
The tool record as a NamedTuple.
Create new instance of ToolRecord(name, input_str)
Attributes
input_str
Alias for field number 1
name
Alias for field number 0
Methods
__init__()
count(value, /)
Return number of occurrences of value.
index(value[, start, stop])
Return first index of value.
__init__()¶
count(value, /)¶
Return number of occurrences of value.
index(value, start=0, stop=9223372036854775807, /)¶
Return first index of value.
Raises ValueError if the value is not present.
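Example (illustrative values):
from langchain_community.callbacks.streamlit.streamlit_callback_handler import ToolRecord

record = ToolRecord(name="search", input_str="weather in SF")
print(record.name, record.input_str)  # NamedTuple fields accessed by name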
langchain_community.callbacks.arize_callback.ArizeCallbackHandler¶
class langchain_community.callbacks.arize_callback.ArizeCallbackHandler(model_id: Optional[str] = None, model_version: Optional[str] = None, SPACE_KEY: Optional[str] = None, API_KEY: Optional[str] = None)[source]¶
Callback Handler that logs to Arize.
Initialize callback handler.
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([model_id, model_version, ...])
Initialize callback handler.
on_agent_action(action, **kwargs)
Do nothing.
on_agent_finish(finish, **kwargs)
Run on agent end.
on_chain_end(outputs, **kwargs)
Do nothing.
on_chain_error(error, **kwargs)
Do nothing.
on_chain_start(serialized, inputs, **kwargs)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Do nothing.
on_llm_new_token(token, **kwargs)
Do nothing.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, **kwargs)
Run on arbitrary text.
on_tool_end(output[, observation_prefix, ...])
Run when tool ends running.
on_tool_error(error, **kwargs)
Run when tool errors.
on_tool_start(serialized, input_str, **kwargs)
Run when tool starts running.
__init__(model_id: Optional[str] = None, model_version: Optional[str] = None, SPACE_KEY: Optional[str] = None, API_KEY: Optional[str] = None) → None[source]¶
Initialize callback handler.
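Example (a minimal sketch; the model id and keys are placeholders taken from the Arize dashboard, and an OPENAI_API_KEY is assumed):
from langchain_community.callbacks.arize_callback import ArizeCallbackHandler
from langchain_community.llms import OpenAI

arize_callback = ArizeCallbackHandler(
    model_id="llm-demo",
    model_version="1.0",
    SPACE_KEY="YOUR_SPACE_KEY",
    API_KEY="YOUR_API_KEY",
)
llm = OpenAI(temperature=0, callbacks=[arize_callback])
llm.invoke("What is LangChain?")  # prompt/response pairs are logged to Arize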
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Do nothing.
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶
Run on agent end.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Do nothing.
on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Do nothing.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain starts running.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
Do nothing.
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Do nothing.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Run when LLM starts running.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
on_text(text: str, **kwargs: Any) → None[source]¶
Run on arbitrary text.
on_tool_end(output: str, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶
Run when tool ends running.
on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when tool errors.
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Run when tool starts running.
langchain_community.callbacks.confident_callback.DeepEvalCallbackHandler¶
class langchain_community.callbacks.confident_callback.DeepEvalCallbackHandler(metrics: List[Any], implementation_name: Optional[str] = None)[source]¶
Callback Handler that logs into deepeval.
Parameters
implementation_name – name of the implementation in deepeval
metrics – A list of metrics
Raises
ImportError – if the deepeval package is not installed.
Examples
>>> from langchain_community.llms import OpenAI
>>> from langchain_community.callbacks import DeepEvalCallbackHandler
>>> from deepeval.metrics import AnswerRelevancy
>>> metric = AnswerRelevancy(minimum_score=0.3)
>>> deepeval_callback = DeepEvalCallbackHandler(
... implementation_name="exampleImplementation",
... metrics=[metric],
... )
>>> llm = OpenAI(
... temperature=0,
... callbacks=[deepeval_callback],
... verbose=True,
... openai_api_key="API_KEY_HERE",
... )
>>> llm.generate([
... "What is the best evaluation tool out there? (no bias at all)",
... ])
"Deepeval, no doubt about it."
Initializes the DeepEvalCallbackHandler.
Parameters
implementation_name – Name of the implementation you want.
metrics – What metrics do you want to track?
Raises
ImportError – if the deepeval package is not installed.
ConnectionError – if the connection to deepeval fails.
Attributes
BLOG_URL
ISSUES_URL
REPO_URL
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__(metrics[, implementation_name])
Initializes the DeepEvalCallbackHandler.
on_agent_action(action, **kwargs)
Do nothing when agent takes a specific action.
on_agent_finish(finish, **kwargs)
Do nothing.
on_chain_end(outputs, **kwargs)
Do nothing when chain ends.
on_chain_error(error, **kwargs)
Do nothing when LLM chain outputs an error.
on_chain_start(serialized, inputs, **kwargs)
Do nothing when chain starts.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, **kwargs)
Log records to deepeval when an LLM ends.
on_llm_error(error, **kwargs)
Do nothing when LLM outputs an error.
on_llm_new_token(token, **kwargs)
Do nothing when a new token is generated.
on_llm_start(serialized, prompts, **kwargs)
Store the prompts.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, **kwargs)
Do nothing.
on_tool_end(output[, observation_prefix, ...])
Do nothing when tool ends.
on_tool_error(error, **kwargs)
Do nothing when tool outputs an error.
on_tool_start(serialized, input_str, **kwargs)
Do nothing when tool starts.
__init__(metrics: List[Any], implementation_name: Optional[str] = None) → None[source]¶
Initializes the DeepEvalCallbackHandler.
Parameters
implementation_name – Name of the implementation you want.
metrics – What metrics do you want to track?
Raises
ImportError – if the deepeval package is not installed.
ConnectionError – if the connection to deepeval fails.
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Do nothing when agent takes a specific action.
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶
Do nothing.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Do nothing when chain ends.
on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Do nothing when LLM chain outputs an error.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Do nothing when chain starts.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Log records to deepeval when an LLM ends.
on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
Do nothing when LLM outputs an error.
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Do nothing when a new token is generated.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Store the prompts.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
on_text(text: str, **kwargs: Any) → None[source]¶
Do nothing.
on_tool_end(output: str, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶
Do nothing when tool ends.
on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶
Do nothing when tool outputs an error.
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Do nothing when tool starts.
Examples using DeepEvalCallbackHandler¶
Confident
langchain_community.callbacks.utils.flatten_dict¶
langchain_community.callbacks.utils.flatten_dict(nested_dict: Dict[str, Any], parent_key: str = '', sep: str = '_') → Dict[str, Any][source]¶
Flattens a nested dictionary into a flat dictionary.
Parameters
nested_dict (dict) – The nested dictionary to flatten.
parent_key (str) – The prefix to prepend to the keys of the flattened dict.
sep (str) – The separator to use between the parent key and the key of the
flattened dictionary.
Returns
A flat dictionary.
Return type
(dict)
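Example (a hedged sketch; the nested dictionary is invented, and the default '_' separator is assumed):
from langchain_community.callbacks.utils import flatten_dict

nested = {"model": {"name": "gpt-4", "params": {"temperature": 0.7}}}
print(flatten_dict(nested))
# {'model_name': 'gpt-4', 'model_params_temperature': 0.7}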
langchain_community.callbacks.wandb_callback.load_json_to_dict¶
langchain_community.callbacks.wandb_callback.load_json_to_dict(json_path: Union[str, Path]) → dict[source]¶
Load json file to a dictionary.
Parameters
json_path (str) – The path to the json file.
Returns
The dictionary representation of the json file.
Return type
(dict)
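Example (a short sketch; run_config.json is a hypothetical path that must exist on disk):
from langchain_community.callbacks.wandb_callback import load_json_to_dict

config = load_json_to_dict("run_config.json")
print(type(config))  # <class 'dict'>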
langchain_core.callbacks.manager.CallbackManagerForChainRun¶
class langchain_core.callbacks.manager.CallbackManagerForChainRun(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Callback manager for chain run.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_child([tag])
Get a child callback manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
on_agent_action(action, **kwargs)
Run when agent action is received.
on_agent_finish(finish, **kwargs)
Run when agent finish is received.
on_chain_end(outputs, **kwargs)
Run when chain ends running.
on_chain_error(error, **kwargs)
Run when chain errors.
on_retry(retry_state, **kwargs)
Run on a retry event.
on_text(text, **kwargs)
Run when text is received.
__init__(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
get_child(tag: Optional[str] = None) → CallbackManager¶
Get a child callback manager.
Parameters
tag (str, optional) – The tag for the child callback manager.
Defaults to None.
Returns
The child callback manager.
Return type
CallbackManager
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Run when agent action is received.
Parameters
action (AgentAction) – The agent action.
Returns
The result of the callback.
Return type
Any
on_agent_finish(finish: AgentFinish, **kwargs: Any) → Any[source]¶
Run when agent finish is received.
Parameters
finish (AgentFinish) – The agent finish.
Returns
The result of the callback.
Return type
Any
on_chain_end(outputs: Union[Dict[str, Any], Any], **kwargs: Any) → None[source]¶
Run when chain ends running.
Parameters
outputs (Union[Dict[str, Any], Any]) – The outputs of the chain.
on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when chain errors.
Parameters
error (Exception or KeyboardInterrupt) – The error.
on_retry(retry_state: RetryCallState, **kwargs: Any) → None¶
Run on a retry event.
on_text(text: str, **kwargs: Any) → Any¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
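Example (a minimal sketch of where this manager appears in practice: the optional run_manager argument of a custom chain's _call; the chain and its keys are invented):
from typing import Any, Dict, List, Optional

from langchain.chains.base import Chain
from langchain_core.callbacks.manager import CallbackManagerForChainRun

class EchoChain(Chain):
    @property
    def input_keys(self) -> List[str]:
        return ["text"]

    @property
    def output_keys(self) -> List[str]:
        return ["echo"]

    def _call(self, inputs: Dict[str, Any], run_manager: Optional[CallbackManagerForChainRun] = None) -> Dict[str, str]:
        if run_manager:
            # Reaches every registered handler's on_text hook.
            run_manager.on_text("echoing input")
        return {"echo": inputs["text"]}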
Examples using CallbackManagerForChainRun¶
Custom chain
langchain_core.callbacks.manager.atrace_as_chain_group¶
langchain_core.callbacks.manager.atrace_as_chain_group(group_name: str, callback_manager: Optional[AsyncCallbackManager] = None, *, inputs: Optional[Dict[str, Any]] = None, project_name: Optional[str] = None, example_id: Optional[Union[str, UUID]] = None, run_id: Optional[UUID] = None, tags: Optional[List[str]] = None) → AsyncGenerator[AsyncCallbackManagerForChainGroup, None][source]¶
Get an async callback manager for a chain group in a context manager.
Useful for grouping different async calls together as a single run even if
they aren’t composed in a single chain.
Parameters
group_name (str) – The name of the chain group.
callback_manager (AsyncCallbackManager, optional) – The async callback manager to use,
which manages tracing and other callback behavior.
project_name (str, optional) – The name of the project.
Defaults to None.
example_id (str or UUID, optional) – The ID of the example.
Defaults to None.
run_id (UUID, optional) – The ID of the run.
tags (List[str], optional) – The inheritable tags to apply to all runs.
Defaults to None.
Returns
The async callback manager for the chain group.
Return type
AsyncCallbackManager
Note: must have LANGCHAIN_TRACING_V2 env var set to true to see the trace in LangSmith.
Example
from langchain_core.callbacks.manager import atrace_as_chain_group

llm_input = "Foo"
async with atrace_as_chain_group("group_name", inputs={"input": llm_input}) as manager:
    # Use the async callback manager for the chain group
    res = await llm.apredict(llm_input, callbacks=manager)
    await manager.on_chain_end({"output": res})
langchain_community.callbacks.labelstudio_callback.LabelStudioMode¶
class langchain_community.callbacks.labelstudio_callback.LabelStudioMode(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Label Studio mode enumerator.
PROMPT = 'prompt'¶
CHAT = 'chat'¶
langchain_community.callbacks.tracers.comet.import_comet_llm_api¶
langchain_community.callbacks.tracers.comet.import_comet_llm_api() → SimpleNamespace[source]¶
Import comet_llm api and raise an error if it is not installed.
langchain_core.callbacks.manager.AsyncCallbackManager¶
class langchain_core.callbacks.manager.AsyncCallbackManager(handlers: List[BaseCallbackHandler], inheritable_handlers: Optional[List[BaseCallbackHandler]] = None, parent_run_id: Optional[UUID] = None, *, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Async callback manager that handles callbacks from LangChain.
Initialize callback manager.
Attributes
is_async
Return whether the handler is async.
Methods
__init__(handlers[, inheritable_handlers, ...])
Initialize callback manager.
add_handler(handler[, inherit])
Add a handler to the callback manager.
add_metadata(metadata[, inherit])
add_tags(tags[, inherit])
configure([inheritable_callbacks, ...])
Configure the async callback manager.
copy()
Copy the callback manager.
on_chain_start(serialized, inputs[, run_id])
Run when chain starts running.
on_chat_model_start(serialized, messages, ...)
Run when LLM starts running.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts running.
on_retriever_start(serialized, query[, ...])
Run when retriever starts running.
on_tool_start(serialized, input_str[, ...])
Run when tool starts running.
remove_handler(handler)
Remove a handler from the callback manager.
remove_metadata(keys)
remove_tags(tags)
set_handler(handler[, inherit])
Set handler as the only handler on the callback manager.
set_handlers(handlers[, inherit])
Set handlers as the only handlers on the callback manager.
__init__(handlers: List[BaseCallbackHandler], inheritable_handlers: Optional[List[BaseCallbackHandler]] = None, parent_run_id: Optional[UUID] = None, *, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶
Initialize callback manager.
add_handler(handler: BaseCallbackHandler, inherit: bool = True) → None¶
Add a handler to the callback manager.
add_metadata(metadata: Dict[str, Any], inherit: bool = True) → None¶
add_tags(tags: List[str], inherit: bool = True) → None¶
classmethod configure(inheritable_callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, local_callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, verbose: bool = False, inheritable_tags: Optional[List[str]] = None, local_tags: Optional[List[str]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None, local_metadata: Optional[Dict[str, Any]] = None) → AsyncCallbackManager[source]¶
Configure the async callback manager.
Parameters
inheritable_callbacks (Optional[Callbacks], optional) – The inheritable
callbacks. Defaults to None.
local_callbacks (Optional[Callbacks], optional) – The local callbacks.
Defaults to None.
verbose (bool, optional) – Whether to enable verbose mode. Defaults to False.
inheritable_tags (Optional[List[str]], optional) – The inheritable tags.
Defaults to None.
local_tags (Optional[List[str]], optional) – The local tags.
Defaults to None.
inheritable_metadata (Optional[Dict[str, Any]], optional) – The inheritable
metadata. Defaults to None.
local_metadata (Optional[Dict[str, Any]], optional) – The local metadata.
Defaults to None.
Returns
The configured async callback manager.
Return type
AsyncCallbackManager
copy() → T¶
Copy the callback manager.
async on_chain_start(serialized: Dict[str, Any], inputs: Union[Dict[str, Any], Any], run_id: Optional[UUID] = None, **kwargs: Any) → AsyncCallbackManagerForChainRun[source]¶
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) – The serialized chain.
inputs (Union[Dict[str, Any], Any]) – The inputs to the chain.
run_id (UUID, optional) – The ID of the run. Defaults to None.
Returns
The async callback manager for the chain run.
Return type
AsyncCallbackManagerForChainRun
async on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any) → List[AsyncCallbackManagerForLLMRun][source]¶
Run when LLM starts running.
Parameters
serialized (Dict[str, Any]) – The serialized LLM.
messages (List[List[BaseMessage]]) – The list of messages.
run_id (UUID, optional) – The ID of the run. Defaults to None.
Returns
The list of async callback managers, one for each LLM Run
corresponding to each inner message list.
Return type
List[AsyncCallbackManagerForLLMRun]
async on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → List[AsyncCallbackManagerForLLMRun][source]¶
Run when LLM starts running.
Parameters
serialized (Dict[str, Any]) – The serialized LLM.
prompts (List[str]) – The list of prompts.
run_id (UUID, optional) – The ID of the run. Defaults to None.
Returns
The list of async callback managers, one for each LLM Run corresponding
to each prompt.
Return type
List[AsyncCallbackManagerForLLMRun]
async on_retriever_start(serialized: Dict[str, Any], query: str, run_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, **kwargs: Any) → AsyncCallbackManagerForRetrieverRun[source]¶
Run when retriever starts running.
async on_tool_start(serialized: Dict[str, Any], input_str: str, run_id: Optional[UUID] = None, parent_run_id: Optional[UUID] = None, **kwargs: Any) → AsyncCallbackManagerForToolRun[source]¶
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) – The serialized tool.
input_str (str) – The input to the tool.
run_id (UUID, optional) – The ID of the run. Defaults to None.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
Returns
The async callback manager for the tool run.
Return type
AsyncCallbackManagerForToolRun
remove_handler(handler: BaseCallbackHandler) → None¶
Remove a handler from the callback manager.
remove_metadata(keys: List[str]) → None¶
remove_tags(tags: List[str]) → None¶
set_handler(handler: BaseCallbackHandler, inherit: bool = True) → None¶
Set handler as the only handler on the callback manager.
set_handlers(handlers: List[BaseCallbackHandler], inherit: bool = True) → None¶
Set handlers as the only handlers on the callback manager.
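Example (a hedged sketch of driving the manager directly; the serialized payload and outputs are invented, and in normal use the framework makes these calls for you):
import asyncio

from langchain_core.callbacks import StdOutCallbackHandler
from langchain_core.callbacks.manager import AsyncCallbackManager

async def main() -> None:
    manager = AsyncCallbackManager(handlers=[StdOutCallbackHandler()])
    # on_chain_start returns a run manager scoped to a single run_id.
    run = await manager.on_chain_start({"name": "demo"}, {"input": "hi"})
    await run.on_chain_end({"output": "done"})

asyncio.run(main())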
langchain_community.callbacks.wandb_callback.analyze_text¶
langchain_community.callbacks.wandb_callback.analyze_text(text: str, complexity_metrics: bool = True, visualize: bool = True, nlp: Any = None, output_dir: Optional[Union[str, Path]] = None) → dict[source]¶
Analyze text using textstat and spacy.
Parameters
text (str) – The text to analyze.
complexity_metrics (bool) – Whether to compute complexity metrics.
visualize (bool) – Whether to visualize the text.
nlp (spacy.lang) – The spacy language model to use for visualization.
output_dir (str) – The directory to save the visualization files to.
Returns
A dictionary containing the complexity metrics and visualization files serialized in a wandb.Html element.
Return type
(dict)
langchain_community.callbacks.mlflow_callback.MlflowCallbackHandler¶
class langchain_community.callbacks.mlflow_callback.MlflowCallbackHandler(name: Optional[str] = 'langchainrun-%', experiment: Optional[str] = 'langchain', tags: Optional[Dict] = None, tracking_uri: Optional[str] = None)[source]¶
Callback Handler that logs metrics and artifacts to mlflow server.
Parameters
name (str) – Name of the run.
experiment (str) – Name of the experiment.
tags (dict) – Tags to be attached for the run.
tracking_uri (str) – MLflow tracking server uri.
This handler utilizes the associated callback method, formats
the input of each callback function with metadata regarding the state of the LLM run,
and adds the response to the list of records for both the {method}_records and
action. It then logs the response to the MLflow server.
Initialize callback handler.
Attributes
always_verbose
Whether to call verbose callbacks even if verbose is False.
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([name, experiment, tags, tracking_uri])
Initialize callback handler.
flush_tracker([langchain_asset, finish])
get_custom_callback_meta()
on_agent_action(action, **kwargs)
Run on agent action.
on_agent_finish(finish, **kwargs)
Run when agent ends running.
on_chain_end(outputs, **kwargs)
Run when chain ends running.
on_chain_error(error, **kwargs)
Run when chain errors.
on_chain_start(serialized, inputs, **kwargs)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Run when LLM errors.
on_llm_new_token(token, **kwargs)
Run when LLM generates a new token.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, **kwargs)
Run when agent is ending.
on_tool_end(output, **kwargs)
Run when tool ends running.
on_tool_error(error, **kwargs)
Run when tool errors.
on_tool_start(serialized, input_str, **kwargs)
Run when tool starts running.
reset_callback_meta()
Reset the callback metadata.
__init__(name: Optional[str] = 'langchainrun-%', experiment: Optional[str] = 'langchain', tags: Optional[Dict] = None, tracking_uri: Optional[str] = None) → None[source]¶
Initialize callback handler.
flush_tracker(langchain_asset: Any = None, finish: bool = False) → None[source]¶
get_custom_callback_meta() → Dict[str, Any]¶
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Run on agent action.
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶
Run when agent ends running.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain ends running.
on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when chain errors.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain starts running.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when LLM errors.
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Run when LLM generates a new token.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Run when LLM starts.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
on_text(text: str, **kwargs: Any) → None[source]¶
Run when agent is ending.
on_tool_end(output: str, **kwargs: Any) → None[source]¶
Run when tool ends running.
on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶
Run when tool errors.
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Run when tool starts running.
reset_callback_meta() → None¶
Reset the callback metadata.
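Example (a minimal sketch; the tracking URI is illustrative and assumes a reachable MLflow tracking server plus an OPENAI_API_KEY in the environment):
from langchain_community.callbacks import MlflowCallbackHandler
from langchain_community.llms import OpenAI

mlflow_callback = MlflowCallbackHandler(
    name="langchain-run",
    experiment="langchain",
    tracking_uri="http://localhost:5000",
)
llm = OpenAI(temperature=0, callbacks=[mlflow_callback])
llm.invoke("Tell me a joke")
mlflow_callback.flush_tracker(llm, finish=True)  # upload collected records and end the run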
Examples using MlflowCallbackHandler¶
MLflow
langchain_community.callbacks.streamlit.mutable_expander.ChildType¶
class langchain_community.callbacks.streamlit.mutable_expander.ChildType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
The enumerator of the child type.
MARKDOWN = 'MARKDOWN'¶
EXCEPTION = 'EXCEPTION'¶
langchain_core.callbacks.manager.AsyncRunManager¶
class langchain_core.callbacks.manager.AsyncRunManager(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None)[source]¶
Async Run Manager.
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
Methods
__init__(*, run_id, handlers, ...[, ...])
Initialize the run manager.
get_noop_manager()
Return a manager that doesn't perform any operations.
get_sync()
Get the equivalent sync RunManager.
on_retry(retry_state, **kwargs)
Run on a retry event.
on_text(text, **kwargs)
Run when text is received.
__init__(*, run_id: UUID, handlers: List[BaseCallbackHandler], inheritable_handlers: List[BaseCallbackHandler], parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, inheritable_tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, inheritable_metadata: Optional[Dict[str, Any]] = None) → None¶
Initialize the run manager.
Parameters
run_id (UUID) – The ID of the run.
handlers (List[BaseCallbackHandler]) – The list of handlers.
inheritable_handlers (List[BaseCallbackHandler]) – The list of inheritable handlers.
parent_run_id (UUID, optional) – The ID of the parent run.
Defaults to None.
tags (Optional[List[str]]) – The list of tags.
inheritable_tags (Optional[List[str]]) – The list of inheritable tags.
metadata (Optional[Dict[str, Any]]) – The metadata.
inheritable_metadata (Optional[Dict[str, Any]]) – The inheritable metadata.
classmethod get_noop_manager() → BRM¶
Return a manager that doesn’t perform any operations.
Returns
The noop manager.
Return type
BaseRunManager
abstract get_sync() → RunManager[source]¶
Get the equivalent sync RunManager.
Returns
The sync RunManager.
Return type
RunManager
async on_retry(retry_state: RetryCallState, **kwargs: Any) → None[source]¶
Run on a retry event.
async on_text(text: str, **kwargs: Any) → Any[source]¶
Run when text is received.
Parameters
text (str) – The received text.
Returns
The result of the callback.
Return type
Any
langchain_community.callbacks.whylabs_callback.WhyLabsCallbackHandler¶
class langchain_community.callbacks.whylabs_callback.WhyLabsCallbackHandler(logger: Logger, handler: Any)[source]¶
Callback Handler for logging to WhyLabs. This callback handler utilizes
langkit to extract features from the prompts & responses when interacting with
an LLM. These features can be used to guardrail, evaluate, and observe interactions
over time to detect issues relating to hallucinations, prompt engineering,
or output validation. LangKit is an LLM monitoring toolkit developed by WhyLabs.
Here are some examples of what can be monitored with LangKit:
* Text Quality
- readability score
- complexity and grade scores
* Text Relevance
- Similarity scores between prompt/responses
- Similarity scores against user-defined themes
- Topic classification
* Security and Privacy
- patterns - count of strings matching a user-defined regex pattern group
- jailbreaks - similarity scores with respect to known jailbreak attempts
- prompt injection - similarity scores with respect to known prompt attacks
- refusals - similarity scores with respect to known LLM refusal responses
* Sentiment and Toxicity
- sentiment analysis
- toxicity analysis
For more information, see https://docs.whylabs.ai/docs/language-model-monitoring
or check out the LangKit repo here: https://github.com/whylabs/langkit
Parameters
api_key (Optional[str]) – WhyLabs API key. Optional because the preferred
way to specify the API key is with environment variable
WHYLABS_API_KEY.
org_id (Optional[str]) – WhyLabs organization id to write profiles to.
Optional because the preferred way to specify the organization id is
with environment variable WHYLABS_DEFAULT_ORG_ID.
dataset_id (Optional[str]) – WhyLabs dataset id to write profiles to.
Optional because the preferred way to specify the dataset id is
with environment variable WHYLABS_DEFAULT_DATASET_ID.
sentiment (bool) – Whether to enable sentiment analysis. Defaults to False.
toxicity (bool) – Whether to enable toxicity analysis. Defaults to False.
themes (bool) – Whether to enable theme analysis. Defaults to False.
Initiate the rolling logger.
Attributes
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__(logger, handler)
Initiate the rolling logger.
close()
Close any loggers to allow writing out of any profiles before exiting.
flush()
Explicitly write current profile if using a rolling logger.
from_params(*[, api_key, org_id, ...])
Instantiate whylogs Logger from params.
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, parent_run_id])
Run when chain ends running.
on_chain_error(error, *, run_id[, parent_run_id])
Run when chain errors.
on_chain_start(serialized, inputs, *, run_id)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, *, run_id[, parent_run_id])
Run when LLM ends running.
on_llm_error(error, *, run_id[, parent_run_id])
Run when LLM errors.
on_llm_new_token(token, *[, chunk, ...])
Run on new LLM token.
on_llm_start(serialized, prompts, *, run_id)
Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id[, parent_run_id])
Run when tool ends running.
on_tool_error(error, *, run_id[, parent_run_id])
Run when tool errors.
on_tool_start(serialized, input_str, *, run_id)
Run when tool starts running.
__init__(logger: Logger, handler: Any)[source]¶
Initiate the rolling logger.
close() → None[source]¶
Close any loggers to allow writing out of any profiles before exiting.
flush() → None[source]¶
Explicitly write current profile if using a rolling logger.
classmethod from_params(*, api_key: Optional[str] = None, org_id: Optional[str] = None, dataset_id: Optional[str] = None, sentiment: bool = False, toxicity: bool = False, themes: bool = False, logger: Optional[Logger] = None) → WhyLabsCallbackHandler[source]¶
Instantiate whylogs Logger from params.
Parameters
api_key (Optional[str]) – WhyLabs API key. Optional because the preferred way to specify the API key is with the environment variable WHYLABS_API_KEY.
org_id (Optional[str]) – WhyLabs organization id to write profiles to. If not set, must be specified in the environment variable WHYLABS_DEFAULT_ORG_ID.
dataset_id (Optional[str]) – The model or dataset this callback is gathering telemetry for. If not set, must be specified in the environment variable WHYLABS_DEFAULT_DATASET_ID.
sentiment (bool) – If True, will initialize a model to perform sentiment analysis (compound score). Defaults to False and will not gather this metric.
toxicity (bool) – If True, will initialize a model to score toxicity. Defaults to False and will not gather this metric.
themes (bool) – If True, will initialize a model to calculate distance to configured themes. Defaults to False and will not gather this metric.
logger (Optional[Logger]) – If specified, will bind the configured logger as the telemetry gathering agent. Defaults to the LangKit schema with a periodic WhyLabs writer.
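Example (an illustrative sketch, not from the docstring; assumes the langkit package is installed, the WHYLABS_API_KEY, WHYLABS_DEFAULT_ORG_ID, and WHYLABS_DEFAULT_DATASET_ID environment variables are set, and an arbitrary model and prompt):
from langchain_community.callbacks.whylabs_callback import WhyLabsCallbackHandler
from langchain_community.llms import OpenAI

# Build a handler from env-var configuration; optional metrics enabled here.
whylabs = WhyLabsCallbackHandler.from_params(sentiment=True, toxicity=True)
llm = OpenAI(temperature=0, callbacks=[whylabs])
result = llm.generate(["Hello, how are you today?"])
whylabs.flush()  # explicitly write the current profile
whylabs.close()  # close loggers so pending profiles are written out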
on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent action.
on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on agent end.
on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when chain ends running.
on_chain_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when chain errors.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when chain starts running.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when LLM ends running.
on_llm_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when LLM errors.
Parameters
error (BaseException) – The error that occurred.
kwargs (Any) – Additional keyword arguments. May include response (LLMResult), the response which was generated before the error occurred.
on_llm_new_token(token: str, *, chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on new LLM token. Only available when streaming is enabled.
Parameters
token (str) – The new token.
chunk (GenerationChunk | ChatGenerationChunk) – The new generated chunk, containing content and other information.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when LLM starts running.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on arbitrary text.
on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when tool ends running.
on_tool_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when tool errors.
on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when tool starts running.
Examples using WhyLabsCallbackHandler¶
WhyLabs
langchain_core.callbacks.manager.trace_as_chain_group¶
langchain_core.callbacks.manager.trace_as_chain_group(group_name: str, callback_manager: Optional[CallbackManager] = None, *, inputs: Optional[Dict[str, Any]] = None, project_name: Optional[str] = None, example_id: Optional[Union[str, UUID]] = None, run_id: Optional[UUID] = None, tags: Optional[List[str]] = None) → Generator[CallbackManagerForChainGroup, None, None][source]¶
Get a callback manager for a chain group in a context manager.
Useful for grouping different calls together as a single run even if
they aren’t composed in a single chain.
Parameters
group_name (str) – The name of the chain group.
callback_manager (CallbackManager, optional) – The callback manager to use.
inputs (Dict[str, Any], optional) – The inputs to the chain group.
project_name (str, optional) – The name of the project.
Defaults to None.
example_id (str or UUID, optional) – The ID of the example.
Defaults to None.
run_id (UUID, optional) – The ID of the run.
tags (List[str], optional) – The inheritable tags to apply to all runs.
Defaults to None.
Note: must have LANGCHAIN_TRACING_V2 env var set to true to see the trace in LangSmith.
Returns
The callback manager for the chain group.
Return type
CallbackManagerForChainGroup
Example
llm_input = "Foo"
with trace_as_chain_group("group_name", inputs={"input": llm_input}) as manager:
    # Use the callback manager for the chain group
    res = llm.predict(llm_input, callbacks=manager)
    manager.on_chain_end({"output": res})
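A further sketch (illustrative, not from the docstring) showing the function's main purpose: grouping two separate LLM calls into a single traced run. It assumes an llm object as above and LANGCHAIN_TRACING_V2 set to true; the group name, tag, and prompts are arbitrary.
question = "What is the capital of France?"
with trace_as_chain_group(
    "qa_with_check", inputs={"question": question}, tags=["demo"]
) as manager:
    # Both calls are recorded as children of the same chain group
    draft = llm.predict(question, callbacks=manager)
    review = llm.predict(f"Briefly verify this answer: {draft}", callbacks=manager)
    manager.on_chain_end({"answer": draft, "review": review})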
langchain_community.callbacks.labelstudio_callback.LabelStudioCallbackHandler¶
class langchain_community.callbacks.labelstudio_callback.LabelStudioCallbackHandler(api_key: Optional[str] = None, url: Optional[str] = None, project_id: Optional[int] = None, project_name: str = 'LangChain-%Y-%m-%d', project_config: Optional[str] = None, mode: Union[str, LabelStudioMode] = LabelStudioMode.PROMPT)[source]¶
Label Studio callback handler.
Provides the ability to send predictions to Label Studio
for human evaluation, feedback and annotation.
Parameters
api_key – Label Studio API key
url – Label Studio URL
project_id – Label Studio project ID
project_name – Label Studio project name
project_config – Label Studio project config (XML)
mode – Label Studio mode (“prompt” or “chat”)
Examples
>>> from langchain_community.llms import OpenAI
>>> from langchain_community.callbacks import LabelStudioCallbackHandler
>>> handler = LabelStudioCallbackHandler(
... api_key='<your_key_here>',
... url='http://localhost:8080',
... project_name='LangChain-%Y-%m-%d',
... mode='prompt'
... )
>>> llm = OpenAI(callbacks=[handler])
>>> llm.predict('Tell me a story about a dog.')
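A chat-mode sketch (illustrative, not from the docstring; assumes the same running Label Studio instance, and the messages are arbitrary):
>>> from langchain_community.chat_models import ChatOpenAI
>>> from langchain_core.messages import HumanMessage, SystemMessage
>>> chat_handler = LabelStudioCallbackHandler(
...     api_key='<your_key_here>',
...     url='http://localhost:8080',
...     mode='chat'
... )
>>> chat = ChatOpenAI(callbacks=[chat_handler])
>>> chat([SystemMessage(content='You are a helpful assistant.'),
...       HumanMessage(content='Tell me a joke about dogs.')])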
Attributes
DEFAULT_PROJECT_NAME
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
ignore_retry
Whether to ignore retry callbacks.
raise_error
run_inline
Methods
__init__([api_key, url, project_id, ...])
add_prompts_generations(run_id, generations)
on_agent_action(action, **kwargs)
Do nothing when agent takes a specific action.
on_agent_finish(finish, **kwargs)
Do nothing when agent finishes.
on_chain_end(outputs, **kwargs)
Run when chain ends running.
on_chain_error(error, **kwargs)
Do nothing when LLM chain outputs an error.
on_chain_start(serialized, inputs, **kwargs)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Save the prompts in memory when an LLM starts.
on_llm_end(response, **kwargs)
Create a new Label Studio task for each prompt and generation.
on_llm_error(error, **kwargs)
Do nothing when LLM outputs an error.
on_llm_new_token(token, **kwargs)
Do nothing when a new token is generated.
on_llm_start(serialized, prompts, **kwargs)
Save the prompts in memory when an LLM starts.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(serialized, query, *, run_id)
Run when Retriever starts running.
on_retry(retry_state, *, run_id[, parent_run_id])
Run on a retry event.
on_text(text, **kwargs)
Do nothing on arbitrary text.
on_tool_end(output[, observation_prefix, ...])
Do nothing when tool ends.
on_tool_error(error, **kwargs)
Do nothing when tool outputs an error.
on_tool_start(serialized, input_str, **kwargs)
Do nothing when tool starts.
__init__(api_key: Optional[str] = None, url: Optional[str] = None, project_id: Optional[int] = None, project_name: str = 'LangChain-%Y-%m-%d', project_config: Optional[str] = None, mode: Union[str, LabelStudioMode] = LabelStudioMode.PROMPT)[source]¶
add_prompts_generations(run_id: str, generations: List[List[Generation]]) → None[source]¶
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Do nothing when agent takes a specific action.
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶
Do nothing when agent finishes.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain ends running.
on_chain_error(error: BaseException, **kwargs: Any) → None[source]¶
Do nothing when LLM chain outputs an error.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain starts running.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any[source]¶
Save the prompts in memory when an LLM starts.
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Create a new Label Studio task for each prompt and generation.
on_llm_error(error: BaseException, **kwargs: Any) → None[source]¶
Do nothing when LLM outputs an error.
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Do nothing when a new token is generated.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Save the prompts in memory when an LLM starts.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: BaseException, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(serialized: Dict[str, Any], query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_retry(retry_state: RetryCallState, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run on a retry event.
on_text(text: str, **kwargs: Any) → None[source]¶
Do nothing on arbitrary text.
on_tool_end(output: str, observation_prefix: Optional[str] = None, llm_prefix: Optional[str] = None, **kwargs: Any) → None[source]¶
Do nothing when tool ends.
on_tool_error(error: BaseException, **kwargs: Any) → None[source]¶
Do nothing when tool outputs an error.