langchain_core.outputs.llm_result.LLMResult¶
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using LLMResult¶
Ollama
Async callbacks
langchain_core.outputs.chat_generation.ChatGeneration¶
class langchain_core.outputs.chat_generation.ChatGeneration[source]¶
Bases: Generation
A single chat generation output.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param generation_info: Optional[Dict[str, Any]] = None¶
Raw response from the provider. May include things like the
reason for finishing or token log probabilities.
param message: BaseMessage [Required]¶
The message output by the chat model.
param text: str = ''¶
SHOULD NOT BE SET DIRECTLY. The text contents of the output message.
param type: Literal['ChatGeneration'] = 'ChatGeneration'¶
Type is used exclusively for serialization purposes.
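A minimal illustrative sketch (not from the original page) of constructing a ChatGeneration; text is derived from the message, so it is not set directly:
from langchain_core.messages import AIMessage
from langchain_core.outputs import ChatGeneration

generation = ChatGeneration(
    message=AIMessage(content="Hello, world!"),
    generation_info={"finish_reason": "stop"},  # illustrative provider metadata
)
print(generation.text)  # "Hello, world!" (populated from the message content)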
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
langchain_core.outputs.generation.Generation¶
class langchain_core.outputs.generation.Generation[source]¶
Bases: Serializable
A single text generation output.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param generation_info: Optional[Dict[str, Any]] = None¶
Raw response from the provider. May include things like the
reason for finishing or token log probabilities.
param text: str [Required]¶
Generated text output.
param type: Literal['Generation'] = 'Generation'¶
Type is used exclusively for serialization purposes.
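An illustrative sketch of a plain text generation (the field values are examples, not defaults):
from langchain_core.outputs import Generation

gen = Generation(
    text="Paris is the capital of France.",
    generation_info={"finish_reason": "stop"},  # illustrative provider metadata
)
print(gen.type)  # "Generation"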
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
classmethod is_lc_serializable() → bool[source]¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
langchain_core.outputs.chat_result.ChatResult¶
class langchain_core.outputs.chat_result.ChatResult[source]¶
Bases: BaseModel
Class that contains all results for a single chat model call.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param generations: List[langchain_core.outputs.chat_generation.ChatGeneration] [Required]¶
List of the chat generations. This is a List because an input can have multiple
candidate generations.
param llm_output: Optional[dict] = None¶
For arbitrary LLM provider specific output.
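An illustrative sketch of a ChatResult carrying two candidate generations for a single input (all values are examples):
from langchain_core.messages import AIMessage
from langchain_core.outputs import ChatGeneration, ChatResult

result = ChatResult(
    generations=[
        ChatGeneration(message=AIMessage(content="Sure, here you go.")),
        ChatGeneration(message=AIMessage(content="Certainly!")),
    ],
    llm_output={"model_name": "example-model"},  # provider specific, illustrative
)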
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
langchain_core.outputs.generation.GenerationChunk¶
class langchain_core.outputs.generation.GenerationChunk[source]¶
Bases: Generation
A Generation chunk, which can be concatenated with other Generation chunks.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param generation_info: Optional[Dict[str, Any]] = None¶
Raw response from the provider. May include things like the
reason for finishing or token log probabilities.
param text: str [Required]¶
Generated text output.
param type: Literal['Generation'] = 'Generation'¶
Type is used exclusively for serialization purposes.
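An illustrative sketch of accumulating streamed output; GenerationChunk supports + to concatenate text and merge generation_info:
from langchain_core.outputs import GenerationChunk

chunk = GenerationChunk(text="Hel") + GenerationChunk(text="lo") + GenerationChunk(text="!")
print(chunk.text)  # "Hello!"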
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
langchain_core.outputs.chat_generation.ChatGenerationChunk¶
class langchain_core.outputs.chat_generation.ChatGenerationChunk[source]¶
Bases: ChatGeneration
A ChatGeneration chunk, which can be concatenated with other ChatGeneration chunks.
message¶
The message chunk output by the chat model.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param generation_info: Optional[Dict[str, Any]] = None¶
Raw response from the provider. May include things like the
reason for finishing or token log probabilities.
param message: BaseMessageChunk [Required]¶
The message output by the chat model.
param text: str = ''¶
SHOULD NOT BE SET DIRECTLY. The text contents of the output message.
param type: Literal['ChatGenerationChunk'] = 'ChatGenerationChunk'¶
Type is used exclusively for serialization purposes.
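An illustrative sketch of merging streamed chat chunks; the underlying message chunks concatenate as well:
from langchain_core.messages import AIMessageChunk
from langchain_core.outputs import ChatGenerationChunk

merged = ChatGenerationChunk(message=AIMessageChunk(content="Hel")) + ChatGenerationChunk(
    message=AIMessageChunk(content="lo")
)
print(merged.text)  # "Hello"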
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
langchain_core.load.dump.default¶
langchain_core.load.dump.default(obj: Any) → Any[source]¶
Return a default value for a Serializable object or
a SerializedNotImplemented object.
langchain_core.load.load.Reviver¶
class langchain_core.load.load.Reviver(secrets_map: Optional[Dict[str, str]] = None, valid_namespaces: Optional[List[str]] = None)[source]¶
Reviver for JSON objects.
Methods
__init__([secrets_map, valid_namespaces])
__init__(secrets_map: Optional[Dict[str, str]] = None, valid_namespaces: Optional[List[str]] = None) → None[source]¶
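A hedged sketch of using Reviver directly; this assumes Reviver is a callable suitable as a json.loads object_hook, which is how loads() (documented below) appears to use it:
import json

from langchain_core.load.dump import dumps
from langchain_core.load.load import Reviver
from langchain_core.prompts import ChatPromptTemplate

text = dumps(ChatPromptTemplate.from_messages([("human", "{question}")]))
revived = json.loads(text, object_hook=Reviver())  # roughly what loads(text) does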
langchain_core.load.serializable.try_neq_default¶
langchain_core.load.serializable.try_neq_default(value: Any, key: str, model: BaseModel) → bool[source]¶
Try to determine if a value is different from the default.
Parameters
value – The value.
key – The key.
model – The model.
Returns
Whether the value is different from the default.
langchain_core.load.load.load¶
langchain_core.load.load.load(obj: Any, *, secrets_map: Optional[Dict[str, str]] = None, valid_namespaces: Optional[List[str]] = None) → Any[source]¶
Revive a LangChain class from a JSON object. Use this if you already
have a parsed JSON object, e.g. from json.load or orjson.loads.
Parameters
obj – The object to load.
secrets_map – A map of secrets to load.
valid_namespaces – A list of additional namespaces (modules)
to allow to be deserialized.
Returns
Revived LangChain objects.
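A minimal round-trip sketch (the prompt object is an illustrative choice):
from langchain_core.load import dumpd, load
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([("human", "Tell me about {topic}.")])
serialized = dumpd(prompt)  # plain JSON-compatible dict
revived = load(serialized)  # expected to equal the original for simple prompts
assert revived == prompt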
langchain_core.load.dump.dumps¶
langchain_core.load.dump.dumps(obj: Any, *, pretty: bool = False, **kwargs: Any) → str[source]¶
Return a json string representation of an object.
langchain_core.load.serializable.Serializable¶
class langchain_core.load.serializable.Serializable[source]¶
Bases: BaseModel, ABC
Serializable base class.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
For example, if the class is langchain.llms.openai.OpenAI, then the
namespace is [“langchain”, “llms”, “openai”]
classmethod is_lc_serializable() → bool[source]¶
Is this class serializable?
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str][source]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented][source]¶
to_json_not_implemented() → SerializedNotImplemented[source]¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
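A hedged sketch of opting a custom class into this serialization machinery; the class, field, and secret names here are hypothetical, not part of langchain_core:
from typing import Dict

from langchain_core.load.serializable import Serializable

class MyClient(Serializable):
    endpoint: str
    api_key: str

    @classmethod
    def is_lc_serializable(cls) -> bool:
        return True  # the base class defaults to False

    @property
    def lc_secrets(self) -> Dict[str, str]:
        # On load(), the secret is resolved via the secrets_map passed to load()
        return {"api_key": "MY_API_KEY"}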
langchain_core.load.serializable.to_json_not_implemented¶
langchain_core.load.serializable.to_json_not_implemented(obj: object) → SerializedNotImplemented[source]¶
Serialize a “not implemented” object.
Parameters
obj – object to serialize
Returns
SerializedNotImplemented
langchain_core.load.serializable.SerializedConstructor¶
class langchain_core.load.serializable.SerializedConstructor[source]¶
Serialized constructor.
lc: int¶
id: List[str]¶
type: Literal['constructor']¶
kwargs: Dict[str, Any]¶
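An illustrative sketch of the shape such a dict takes when produced by dumpd(); the id path and kwargs shown are examples, not a guaranteed output:
serialized = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain", "prompts", "chat", "ChatPromptTemplate"],
    "kwargs": {"input_variables": ["topic"]},  # constructor arguments, truncated for illustration
}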
langchain_core.load.serializable.SerializedSecret¶
class langchain_core.load.serializable.SerializedSecret[source]¶
Serialized secret.
lc: int¶
id: List[str]¶
type: Literal['secret']¶
langchain_core.load.serializable.SerializedNotImplemented¶
class langchain_core.load.serializable.SerializedNotImplemented[source]¶
Serialized not implemented.
lc: int¶
id: List[str]¶
type: Literal['not_implemented']¶
repr: Optional[str]¶
langchain_core.load.dump.dumpd¶
langchain_core.load.dump.dumpd(obj: Any) → Dict[str, Any][source]¶
Return a json dict representation of an object.
langchain_core.load.serializable.BaseSerialized¶
class langchain_core.load.serializable.BaseSerialized[source]¶
Base class for serialized objects.
lc: int¶
id: List[str]¶
langchain_core.load.load.loads¶
langchain_core.load.load.loads(text: str, *, secrets_map: Optional[Dict[str, str]] = None, valid_namespaces: Optional[List[str]] = None) → Any[source]¶
Revive a LangChain class from a JSON string.
Equivalent to load(json.loads(text)).
Parameters
text – The string to load.
secrets_map – A map of secrets to load.
valid_namespaces – A list of additional namespaces (modules)
to allow to be deserialized.
Returns
Revived LangChain objects.
langchain_core.prompts.chat.BaseStringMessagePromptTemplate¶
class langchain_core.prompts.chat.BaseStringMessagePromptTemplate[source]¶
Bases: BaseMessagePromptTemplate, ABC
Base class for message prompt templates that use a string prompt template.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
Additional keyword arguments to pass to the prompt template.
param prompt: langchain_core.prompts.string.StringPromptTemplate [Required]¶
String prompt template.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
abstract format(**kwargs: Any) → BaseMessage[source]¶
Format the prompt template.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
Formatted message.
format_messages(**kwargs: Any) → List[BaseMessage][source]¶
Format messages from kwargs.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
List of BaseMessages.
classmethod from_orm(obj: Any) → Model¶
classmethod from_template(template: str, template_format: str = 'f-string', partial_variables: Optional[Dict[str, Any]] = None, **kwargs: Any) → MessagePromptTemplateT[source]¶
Create a class from a string template.
Parameters
template – a template.
template_format – format of the template.
partial_variables –
A dictionary of variables that can be used to partially fill in the template. For example, if the template is
”{variable1} {variable2}”, and partial_variables is
{“variable1”: “foo”}, then the final prompt will be
“foo {variable2}”.
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
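A short illustrative sketch using the concrete HumanMessagePromptTemplate subclass (the template text is an example):
from langchain_core.prompts.chat import HumanMessagePromptTemplate

tmpl = HumanMessagePromptTemplate.from_template(
    "{greeting}, {name}!",
    partial_variables={"greeting": "Hello"},
)
msg = tmpl.format(name="Ada")  # a HumanMessage with content "Hello, Ada!"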
classmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → MessagePromptTemplateT[source]¶
Create a class from a template file.
Parameters
template_file – path to a template file. String or Path.
input_variables – list of input variables.
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
classmethod is_lc_serializable() → bool¶
Return whether or not the class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property input_variables: List[str]¶
Input variables for this prompt template.
Returns
List of input variable names.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
langchain_core.prompts.chat.BaseMessagePromptTemplate¶
class langchain_core.prompts.chat.BaseMessagePromptTemplate[source]¶
Bases: Serializable, ABC
Base class for message prompt templates.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
abstract format_messages(**kwargs: Any) → List[BaseMessage][source]¶
Format messages from kwargs. Should return a list of BaseMessages.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
List of BaseMessages.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
classmethod is_lc_serializable() → bool[source]¶
Return whether or not the class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
abstract property input_variables: List[str]¶
Input variables for this prompt template.
Returns
List of input variables.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
langchain_core.prompts.chat.MessagesPlaceholder¶
class langchain_core.prompts.chat.MessagesPlaceholder[source]¶
Bases: BaseMessagePromptTemplate
Prompt template that assumes the variable is already a list of messages.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param optional: bool = False¶
param variable_name: str [Required]¶
Name of variable to use as messages.
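An illustrative sketch of splicing an existing message history into a chat prompt (the variable names are examples):
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{question}"),
])
messages = template.format_messages(
    history=[HumanMessage(content="Hi"), AIMessage(content="Hello!")],
    question="What did I just say?",
)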
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
format_messages(**kwargs: Any) → List[BaseMessage][source]¶
Format messages from kwargs.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
List of BaseMessage.
classmethod from_orm(obj: Any) → Model¶
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
classmethod is_lc_serializable() → bool¶
Return whether or not the class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property input_variables: List[str]¶
Input variables for this prompt template.
Returns
List of input variable names.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
Examples using MessagesPlaceholder¶
Set env var OPENAI_API_KEY or load from a .env file:
Conversational Retrieval Agent
Agents
Memory in LLMChain
Add Memory to OpenAI Functions Agent
Types of `MessagePromptTemplate`
Adding memory
langchain_core.prompts.chat.ChatPromptTemplate¶
class langchain_core.prompts.chat.ChatPromptTemplate[source]¶
Bases: BaseChatPromptTemplate
A prompt template for chat models.
Use to create flexible templated prompts for chat models.
Examples
from langchain_core.prompts import ChatPromptTemplate
template = ChatPromptTemplate.from_messages([
("system", "You are a helpful AI bot. Your name is {name}."),
("human", "Hello, how are you doing?"),
("ai", "I'm doing well, thanks!"),
("human", "{user_input}"),
])
messages = template.format_messages(
name="Bob",
user_input="What is your name?"
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param input_types: Dict[str, Any] [Optional]¶
A dictionary of the types of the variables the prompt template expects.
If not provided, all variables are assumed to be strings.
param input_variables: List[str] [Required]¶
List of input variables in template messages. Used for validation.
param messages: List[MessageLike] [Required]¶
List of messages consisting of either message prompt templates or messages.
param output_parser: Optional[BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
param validate_template: bool = False¶
Whether or not to try validating the template.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
append(message: Union[BaseMessagePromptTemplate, BaseMessage, BaseChatPromptTemplate, Tuple[str, str], Tuple[Type, str], str]) → None[source]¶
Append message to the end of the chat template.
Parameters
message – representation of a message to append.
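A short illustrative sketch of growing a template in place (the message contents are examples):
from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages([("system", "You are terse.")])
template.append(("human", "{question}"))  # accepts the same representations as from_messages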
assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶
Assigns new fields to the dict output of this runnable.
Returns a new runnable.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input – The input to the runnable.
config – The config to use for the runnable.
diff – Whether to yield diffs between each step, or the current state.
with_streamed_output_list – Whether to yield the streamed_output list.
include_names – Only include logs with these names.
include_types – Only include logs with these types.
include_tags – Only include logs with these tags.
exclude_names – Exclude logs with these names.
exclude_types – Exclude logs with these types.
exclude_tags – Exclude logs with these tags.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
extend(messages: Sequence[Union[BaseMessagePromptTemplate, BaseMessage, BaseChatPromptTemplate, Tuple[str, str], Tuple[Type, str], str]]) → None[source]¶
Extend the chat template with a sequence of messages.
format(**kwargs: Any) → str[source]¶
Format the chat template into a string.
Parameters
**kwargs – keyword arguments to use for filling in template variables
in all the template messages in this chat template.
Returns
formatted string
format_messages(**kwargs: Any) → List[BaseMessage][source]¶
Format the chat template into a list of finalized messages.
Parameters
**kwargs – keyword arguments to use for filling in template variables
in all the template messages in this chat template.
Returns
list of formatted messages
format_prompt(**kwargs: Any) → PromptValue¶
Format prompt. Should return a PromptValue.
:param **kwargs: Keyword arguments to use for formatting.
Returns
PromptValue.
classmethod from_messages(messages: Sequence[Union[BaseMessagePromptTemplate, BaseMessage, BaseChatPromptTemplate, Tuple[str, str], Tuple[Type, str], str]]) → ChatPromptTemplate[source]¶
Create a chat prompt template from a variety of message formats.
Examples
Instantiation from a list of message templates:
template = ChatPromptTemplate.from_messages([
("human", "Hello, how are you?"),
("ai", "I'm doing well, thanks!"),
("human", "That's good to hear."),
])
Instantiation from mixed message formats:
template = ChatPromptTemplate.from_messages([
SystemMessage(content="hello"),
("human", "Hello, how are you?"),
])
Parameters
messages – sequence of message representations.
A message can be represented using the following formats:
(1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of
(message type, template); e.g., (“human”, “{user_input}”),
(4) 2-tuple of (message class, template), or (5) a string, which is
shorthand for (“human”, template); e.g., “{user_input}”
Returns
a chat prompt template
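As an illustrative sketch, formatting a template built with from_messages (the variable name user_input is an assumption):
template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{user_input}"),
])
messages = template.format_messages(user_input="What is 2+2?")
# messages is a list of BaseMessage objects: [SystemMessage(...), HumanMessage(...)]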
classmethod from_orm(obj: Any) → Model¶
classmethod from_role_strings(string_messages: List[Tuple[str, str]]) → ChatPromptTemplate[source]¶
[Deprecated] Create a chat prompt template from a list of (role, template) tuples.
Parameters
string_messages – list of (role, template) tuples.
Returns
a chat prompt template
Notes
Deprecated since version 0.0.260: Use from_messages classmethod instead.
classmethod from_strings(string_messages: List[Tuple[Type[BaseMessagePromptTemplate], str]]) → ChatPromptTemplate[source]¶
[Deprecated] Create a chat prompt template from a list of (role class, template) tuples.
Parameters
string_messages – list of (role class, template) tuples.
Returns
a chat prompt template
Notes
Deprecated since version 0.0.260: Use from_messages classmethod instead.
classmethod from_template(template: str, **kwargs: Any) → ChatPromptTemplate[source]¶
Create a chat prompt template from a template string.
Creates a chat template consisting of a single message assumed to be from
the human.
Parameters
template – template string
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
get_graph(config: Optional[RunnableConfig] = None) → Graph¶
Return a graph representation of this runnable.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶
Get the name of the runnable.
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶
invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
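A sketch of invoking a chat prompt template directly, since it is a Runnable (the variable name topic is an assumption):
prompt = ChatPromptTemplate.from_messages([("human", "Tell me about {topic}.")])
value = prompt.invoke({"topic": "prompt templates"})  # a ChatPromptValue
value.to_messages()  # [HumanMessage(content="Tell me about prompt templates.")]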
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
partial(**kwargs: Union[str, Callable[[], str]]) → ChatPromptTemplate[source]¶
Get a new ChatPromptTemplate with some input variables already filled in.
Parameters
**kwargs – keyword arguments to use for filling in template variables. Ought
to be a subset of the input variables.
Returns
A new ChatPromptTemplate.
Example
from langchain_core.prompts import ChatPromptTemplate
template = ChatPromptTemplate.from_messages(
[
("system", "You are an AI assistant named {name}."),
("human", "Hi I'm {user}"),
("ai", "Hi there, {user}, I'm {name}."),
("human", "{input}"),
]
)
template2 = template.partial(user="Lucy", name="R2D2")
template2.format_messages(input="hello")
pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶
Pick keys from the dict output of this runnable.
Returns a new runnable.
pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶
Compose this runnable with another object to create a RunnableSequence.
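A minimal composition sketch; pipe is equivalent to the | operator, and RunnableLambda here is only a stand-in for a downstream step:
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda
prompt = ChatPromptTemplate.from_messages([("human", "{q}")])
chain = prompt.pipe(RunnableLambda(lambda v: len(v.to_messages())))
chain.invoke({"q": "hi"})  # -> 1, the number of formatted messages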
save(file_path: Union[Path, str]) → None[source]¶
Save prompt to file.
Parameters
file_path – path to file.
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
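A sketch assuming a deliberately failing step; both lambdas are stand-ins, not part of this API:
from langchain_core.runnables import RunnableLambda
def always_fails(_input: str) -> str:
    raise ValueError("boom")
safe = RunnableLambda(always_fails).with_fallbacks(
    [RunnableLambda(lambda x: "fallback result")]
)
safe.invoke("hi")  # -> "fallback result"; the fallback ran after the failure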
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
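A sketch with a hypothetical flaky function that succeeds on its third call:
from langchain_core.runnables import RunnableLambda
attempts = 0
def flaky(x: str) -> str:
    global attempts
    attempts += 1
    if attempts < 3:
        raise RuntimeError("transient failure")
    return x
resilient = RunnableLambda(flaky).with_retry(
    retry_if_exception_type=(RuntimeError,), stop_after_attempt=3
)
resilient.invoke("ok")  # succeeds on the third attempt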
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain_core.runnables.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Any¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example,{“openai_api_key”: “OPENAI_API_KEY”}
name: Optional[str] = None¶
The name of the runnable. Used for debugging and tracing.
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
Examples using ChatPromptTemplate¶
Facebook Messenger
Chat loaders
iMessage
Anthropic
🚅 LiteLLM
Konko
OpenAI
Google Cloud Platform Vertex AI PaLM
JinaChat
Context
OpenAI Functions Metadata Tagger
Figma
Fireworks
Fallbacks
Set env var OPENAI_API_KEY or load from a .env file:
Multiple Retrieval Sources
Structure answers with OpenAI functions
Multi-agent authoritarian speaker selection
MultiVector Retriever
Memory in LLMChain
Retry parser
Pydantic (JSON) parser
Few-shot examples for chat models
Prompt pipelining
Using OpenAI functions
interface.md
First we add a step to load memory
sql_db.md
prompt_llm_parser.md
Adding memory
multiple_chains.md
Code writing
Using tools
Adding moderation
langchain.prompts.example_selector.ngram_overlap.ngram_overlap_score¶
langchain.prompts.example_selector.ngram_overlap.ngram_overlap_score(source: List[str], example: List[str]) → float[source]¶
Compute ngram overlap score of source and example as sentence_bleu score.
Use sentence_bleu with method1 smoothing function and auto reweighting.
Return float value between 0.0 and 1.0 inclusive.
https://www.nltk.org/_modules/nltk/translate/bleu_score.html
https://aclanthology.org/P02-1040.pdf
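A usage sketch, assuming nltk is installed; the sentences are illustrative:
from langchain.prompts.example_selector.ngram_overlap import ngram_overlap_score
score = ngram_overlap_score(
    source=["Spot can run fast."],
    example=["Spot can run."],
)
# score is a float in [0.0, 1.0]; higher means more n-gram overlap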
langchain_experimental.prompts.load.load_prompt¶
langchain_experimental.prompts.load.load_prompt(path: Union[str, Path]) → BasePromptTemplate[source]¶
Unified method for loading a prompt from LangChainHub or local fs.
Examples using load_prompt¶
Amazon Comprehend Moderation Chain
Serialization
langchain_core.prompts.loading.load_prompt¶
langchain_core.prompts.loading.load_prompt(path: Union[str, Path]) → BasePromptTemplate[source]¶
Unified method for loading a prompt from LangChainHub or local fs.
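A sketch of loading from the local filesystem; the path is a placeholder:
from langchain_core.prompts.loading import load_prompt
prompt = load_prompt("prompts/my_prompt.yaml")  # a previously saved prompt file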
Examples using load_prompt¶
Amazon Comprehend Moderation Chain
Serialization
langchain_core.prompts.string.validate_jinja2¶
langchain_core.prompts.string.validate_jinja2(template: str, input_variables: List[str]) → None[source]¶
Validate that the input variables are valid for the template.
Issues a warning if missing or extra variables are found.
Parameters
template – The template string.
input_variables – The input variables.
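A sketch, assuming jinja2 is installed; note that a mismatch only issues a warning, it does not raise:
from langchain_core.prompts.string import validate_jinja2
validate_jinja2("Hello {{ name }}!", input_variables=["name"])  # silent
validate_jinja2("Hello {{ name }}!", input_variables=["city"])  # warns about missing/extra variables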
langchain_core.prompts.prompt.PromptTemplate¶
class langchain_core.prompts.prompt.PromptTemplate[source]¶
Bases: StringPromptTemplate
A prompt template for a language model.
A prompt template consists of a string template. It accepts a set of parameters
from the user that can be used to generate a prompt for a language model.
The template can be formatted using either f-strings (default) or jinja2 syntax.
Security warning: Prefer using template_format=”f-string” instead of
template_format=”jinja2”, or make sure to NEVER accept jinja2 templates
from untrusted sources as they may lead to arbitrary Python code execution.
As of LangChain 0.0.329, Jinja2 templates will be rendered using
Jinja2’s SandboxedEnvironment by default. This sand-boxing should
be treated as a best-effort approach rather than a guarantee of security,
as it is an opt-out rather than opt-in approach.
Despite the sand-boxing, we recommend never using jinja2 templates
from untrusted sources.
Example
from langchain_core.prompts import PromptTemplate
# Instantiation using from_template (recommended)
prompt = PromptTemplate.from_template("Say {foo}")
prompt.format(foo="bar")
# Instantiation using initializer
prompt = PromptTemplate(input_variables=["foo"], template="Say {foo}")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param input_types: Dict[str, Any] [Optional]¶
A dictionary of the types of the variables the prompt template expects.
If not provided, all variables are assumed to be strings.
param input_variables: List[str] [Required]¶
A list of the names of the variables the prompt template expects.
param output_parser: Optional[BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
param template: str [Required]¶
The prompt template.
param template_format: Union[Literal['f-string'], Literal['jinja2']] = 'f-string'¶
The format of the prompt template. Options are: ‘f-string’, ‘jinja2’.
param validate_template: bool = False¶
Whether or not to try validating the template.
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶
Assigns new fields to the dict output of this runnable.
Returns a new runnable.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input – The input to the runnable.
config – The config to use for the runnable.
diff – Whether to yield diffs between each step, or the current state.
with_streamed_output_list – Whether to yield the streamed_output list.
include_names – Only include logs with these names.
include_types – Only include logs with these types.
include_tags – Only include logs with these tags.
exclude_names – Exclude logs with these names.
exclude_types – Exclude logs with these types.
exclude_tags – Exclude logs with these tags.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
format(**kwargs: Any) → str[source]¶
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example
prompt.format(variable1="foo")
format_prompt(**kwargs: Any) → PromptValue¶
Create Chat Messages.
classmethod from_examples(examples: List[str], suffix: str, input_variables: List[str], example_separator: str = '\n\n', prefix: str = '', **kwargs: Any) → PromptTemplate[source]¶
Take examples in list format with prefix and suffix to create a prompt.
Intended to be used as a way to dynamically create a prompt from examples.
Parameters
examples – List of examples to use in the prompt.
suffix – String to go after the list of examples. Should generally
set up the user’s input.
input_variables – A list of variable names the final prompt template
will expect.
example_separator – The separator to use in between examples. Defaults
to two newline characters.
prefix – String that should go before any examples. Generally includes
instructions. Defaults to an empty string.
Returns
The final prompt generated.
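A sketch mirroring the parameters above; the example strings are illustrative:
prompt = PromptTemplate.from_examples(
    examples=["Q: 2+2\nA: 4", "Q: 2+3\nA: 5"],
    suffix="Q: {question}\nA:",
    input_variables=["question"],
    prefix="Answer the arithmetic questions.",
)
prompt.format(question="3+3")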
classmethod from_file(template_file: Union[str, Path], input_variables: Optional[List[str]] = None, **kwargs: Any) → PromptTemplate[source]¶
Load a prompt from a file.
Parameters
template_file – The path to the file containing the prompt template.
input_variables – [DEPRECATED] A list of variable names the final prompt
template will expect.
input_variables is ignored as from_file now delegates to from_template().
Returns
The prompt loaded from the file.
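A sketch; the file path is a placeholder and is assumed to contain a template such as "Summarize: {text}":
prompt = PromptTemplate.from_file("prompts/summarize.txt")
prompt.format(text="LangChain prompt templates.")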
classmethod from_orm(obj: Any) → Model¶
classmethod from_template(template: str, *, template_format: str = 'f-string', partial_variables: Optional[Dict[str, Any]] = None, **kwargs: Any) → PromptTemplate[source]¶
Load a prompt template from a template.
Security warning: Prefer using template_format=”f-string” instead of
template_format=”jinja2”, or make sure to NEVER accept jinja2 templates
from untrusted sources as they may lead to arbitrary Python code execution.
As of LangChain 0.0.329, Jinja2 templates will be rendered using
Jinja2’s SandboxedEnvironment by default. This sand-boxing should
be treated as a best-effort approach rather than a guarantee of security,
as it is an opt-out rather than opt-in approach.
Despite the sand-boxing, we recommend never using jinja2 templates
from untrusted sources.
Parameters
template – The template to load.
template_format – The format of the template. Use jinja2 for jinja2,
and f-string or None for f-strings.
partial_variables –
A dictionary of variables that can be used to partially fill in the template. For example, if the template is
”{variable1} {variable2}”, and partial_variables is
{“variable1”: “foo”}, then the final prompt will be
“foo {variable2}”.
Returns
The prompt template loaded from the template.
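A sketch based on the partial_variables description above:
prompt = PromptTemplate.from_template(
    "{variable1} {variable2}",
    partial_variables={"variable1": "foo"},
)
prompt.format(variable2="baz")  # -> "foo baz"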
get_graph(config: Optional[RunnableConfig] = None) → Graph¶
Return a graph representation of this runnable.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶
Get the name of the runnable.
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶
invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶
Return a partial of the prompt template.
pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶
Pick keys from the dict output of this runnable.
Returns a new runnable.
pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶
Compose this runnable with another object to create a RunnableSequence.
save(file_path: Union[Path, str]) → None¶
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
.. code-block:: python
prompt.save(file_path="path/prompt.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up
Returns
A new Runnable that retries the original runnable on exceptions.
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain_core.runnables.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Any¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict[str, Any]¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example,{“openai_api_key”: “OPENAI_API_KEY”}
name: Optional[str] = None¶
The name of the runnable. Used for debugging and tracing.
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
Examples using PromptTemplate¶
RePhraseQueryRetriever
Zapier Natural Language Actions
Dall-E Image Generator
Streamlit Chat Message History
Context
Argilla
Comet
Aim
Weights & Biases
SageMaker Tracking
Rebuff
MLflow
Flyte
Vectara Text Generation
Natural Language APIs
your local model path
Google Drive
Google Vertex AI PaLM
Predibase
Hugging Face Local Pipelines
Eden AI
Fallbacks
Reversible data anonymization with Microsoft Presidio
Data anonymization with Microsoft Presidio
Removing logical fallacies from model output
Amazon Comprehend Moderation Chain
Pairwise String Comparison
Criteria Evaluation
Set env var OPENAI_API_KEY or load from a .env file
Set env var OPENAI_API_KEY or load from a .env file:
Question Answering
Retrieve from vector stores directly
Improve document indexing with HyDE
Structure answers with OpenAI functions
Neo4j DB QA chain
Multi-agent authoritarian speaker selection
Agent Debates with Tools
Bash chain
How to use a SmartLLMChain
Elasticsearch
SQL
MultiQueryRetriever
WebResearchRetriever
Lost in the middle: The problem with long contexts
Custom Memory
Memory in the Multi-Input Chain
Memory in LLMChain
Multiple Memory classes
Customizing Conversational Memory
Conversation Knowledge Graph
Logging to file
Retry parser
Datetime parser
Pydantic (JSON) parser
Select by maximal marginal relevance (MMR)
Select by n-gram overlap
Prompt pipelining
Template formats
Connecting to a Feature Store
Router
Transformation
Custom chain
Async API
First we add a step to load memory
Configure Runnable traces
langchain_core.prompts.string.jinja2_formatter¶
langchain_core.prompts.string.jinja2_formatter(template: str, **kwargs: Any) → str[source]¶
Format a template using jinja2.
Security warning: As of LangChain 0.0.329, this method uses Jinja2’s SandboxedEnvironment by default. However, this sand-boxing should
be treated as a best-effort approach rather than a guarantee of security.
Do not accept jinja2 templates from untrusted sources as they may lead
to arbitrary Python code execution.
https://jinja.palletsprojects.com/en/3.1.x/sandbox/
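A sketch, assuming jinja2 is installed; per the warning above, only use trusted templates:
from langchain_core.prompts.string import jinja2_formatter
jinja2_formatter("Hello {{ name }}!", name="world")  # -> 'Hello world!'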
langchain_core.prompts.few_shot.FewShotChatMessagePromptTemplate¶
class langchain_core.prompts.few_shot.FewShotChatMessagePromptTemplate[source]¶
Bases: BaseChatPromptTemplate, _FewShotPromptTemplateMixin
Chat prompt template that supports few-shot examples.
The high-level structure produced by this prompt template is a list of messages
consisting of prefix message(s), example message(s), and suffix message(s).
This structure enables creating a conversation with intermediate examples like:
System: You are a helpful AI Assistant
Human: What is 2+2?
AI: 4
Human: What is 2+3?
AI: 5
Human: What is 4+4?
This prompt template can be used to generate a fixed list of examples or else
to dynamically select examples based on the input.
Examples
Prompt template with a fixed list of examples (matching the sample
conversation above):
from langchain_core.prompts import (
FewShotChatMessagePromptTemplate,
ChatPromptTemplate
)
examples = [
{"input": "2+2", "output": "4"},
{"input": "2+3", "output": "5"},
]
example_prompt = ChatPromptTemplate.from_messages(
[('human', '{input}'), ('ai', '{output}')]
)
few_shot_prompt = FewShotChatMessagePromptTemplate(
examples=examples,
# This is a prompt template used to format each individual example.
example_prompt=example_prompt,
)
final_prompt = ChatPromptTemplate.from_messages(
[
('system', 'You are a helpful AI Assistant'),
few_shot_prompt,
('human', '{input}'),
]
)
final_prompt.format(input="What is 4+4?")
Prompt template with dynamically selected examples:
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
examples = [
{"input": "2+2", "output": "4"},
{"input": "2+3", "output": "5"},
{"input": "2+4", "output": "6"},
# ...
]
to_vectorize = [
" ".join(example.values())
for example in examples
]
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_texts(
to_vectorize, embeddings, metadatas=examples
)
example_selector = SemanticSimilarityExampleSelector(
vectorstore=vectorstore
)
from langchain_core.prompts import (
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from langchain_core.prompts.few_shot import FewShotChatMessagePromptTemplate
few_shot_prompt = FewShotChatMessagePromptTemplate(
# Which variable(s) will be passed to the example selector.
input_variables=["input"],
example_selector=example_selector,
# Define how each example will be formatted.
# In this case, each example will become 2 messages:
# 1 human, and 1 AI
example_prompt=(
HumanMessagePromptTemplate.from_template("{input}")
+ AIMessagePromptTemplate.from_template("{output}")
),
)
# Define the overall prompt.
final_prompt = (
SystemMessagePromptTemplate.from_template(
"You are a helpful AI Assistant"
)
+ few_shot_prompt
+ HumanMessagePromptTemplate.from_template("{input}")
) | https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.few_shot.FewShotChatMessagePromptTemplate.html |
9b0c3229b5f9-2 | + few_shot_prompt
+ HumanMessagePromptTemplate.from_template("{input}")
)
# Show the prompt
print(final_prompt.format_messages(input="What's 3+3?"))
# Use within an LLM
from langchain_community.chat_models import ChatAnthropic
chain = final_prompt | ChatAnthropic()
chain.invoke({"input": "What's 3+3?"})
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param example_prompt: Union[BaseMessagePromptTemplate, BaseChatPromptTemplate] [Required]¶
The class to format each example.
param example_selector: Any = None¶
ExampleSelector to choose the examples to format into the prompt.
Either this or examples should be provided.
param examples: Optional[List[dict]] = None¶
Examples to format into the prompt.
Either this or example_selector should be provided.
param input_types: Dict[str, Any] [Optional]¶
A dictionary of the types of the variables the prompt template expects.
If not provided, all variables are assumed to be strings.
param input_variables: List[str] [Optional]¶
A list of the names of the variables the prompt template will use
to pass to the example_selector, if provided.
param output_parser: Optional[BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶
Assigns new fields to the dict output of this runnable.
Returns a new runnable.
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input – The input to the runnable.
config – The config to use for the runnable.
diff – Whether to yield diffs between each step, or the current state.
with_streamed_output_list – Whether to yield the streamed_output list.
include_names – Only include logs with these names.
include_types – Only include logs with these types.
include_tags – Only include logs with these tags.
exclude_names – Exclude logs with these names.
exclude_types – Exclude logs with these types.
exclude_tags – Exclude logs with these tags.
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config.
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
format(**kwargs: Any) → str[source]¶
Format the prompt with the given inputs, generating a string.
Use this method to generate a string representation of a prompt consisting
of chat messages.
Useful for feeding into a string-based completion language model or for debugging.
Parameters
**kwargs – keyword arguments to use for formatting.
Returns
A string representation of the prompt
format_messages(**kwargs: Any) → List[BaseMessage][source]¶
Format kwargs into a list of messages.
Parameters
**kwargs – keyword arguments to use for filling in templates in messages.
Returns
A list of formatted messages with all template variables filled in.
format_prompt(**kwargs: Any) → PromptValue¶
Format prompt. Should return a PromptValue.
:param **kwargs: Keyword arguments to use for formatting.
Returns
PromptValue.
classmethod from_orm(obj: Any) → Model¶
get_graph(config: Optional[RunnableConfig] = None) → Graph¶
Return a graph representation of this runnable.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
classmethod get_lc_namespace() → List[str]¶
Get the namespace of the langchain object.
get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶
Get the name of the runnable.
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶
invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool[source]¶
Return whether or not the class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶
Return a partial of the prompt template.
pick(keys: Union[str, List[str]]) → RunnableSerializable[Any, Any]¶
Pick keys from the dict output of this runnable.
Returns a new runnable.
pipe(*others: Union[Runnable[Any, Other], Callable[[Any], Other]], name: Optional[str] = None) → RunnableSerializable[Input, Other]¶
Compose this runnable with another object to create a RunnableSequence.
save(file_path: Union[Path, str]) → None¶
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
.. code-block:: python
prompt.save(file_path="path/prompt.yaml")
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶
Default implementation of transform, which buffers input and then calls stream.
Subclasses should override this method if they can start producing output while
input is still being generated.
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ | https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.few_shot.FewShotChatMessagePromptTemplate.html |
Bind config to a Runnable, returning a new Runnable.
with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
Add fallbacks to a runnable, returning a new Runnable.
Parameters
fallbacks – A sequence of runnables to try if the original runnable fails.
exceptions_to_handle – A tuple of exception types to handle.
Returns
A new Runnable that will try the original runnable, and then each
fallback in order, upon failures.
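Example (an illustrative sketch with trivial stand-in runnables, not from this reference):
.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    def _always_fail(_: dict) -> str:
        raise ValueError("primary failed")

    primary = RunnableLambda(_always_fail)
    fallback = RunnableLambda(lambda _: "answer from fallback")
    # Only ValueError triggers the fallback here.
    safe = primary.with_fallbacks([fallback], exceptions_to_handle=(ValueError,))
    safe.invoke({})  # -> "answer from fallback"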
with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶
Bind lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
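Example (a hedged sketch; the listener bodies are assumptions, and listeners are assumed to accept the Run object as their single argument):
.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    def log_start(run) -> None:
        print("run started:", run.id)

    def log_end(run) -> None:
        print("run finished:", run.outputs)

    chain = RunnableLambda(lambda x: x + 1).with_listeners(
        on_start=log_start, on_end=log_end
    )
    chain.invoke(1)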
with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶
Create a new Runnable that retries the original runnable on exceptions.
Parameters
retry_if_exception_type – A tuple of exception types to retry on
wait_exponential_jitter – Whether to add jitter to the wait time
between retries
stop_after_attempt – The maximum number of attempts to make before giving up | https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.few_shot.FewShotChatMessagePromptTemplate.html |
Returns
A new Runnable that retries the original runnable on exceptions.
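Example (an illustrative sketch; the flaky function is an assumption used to show the parameters):
.. code-block:: python

    import random
    from langchain_core.runnables import RunnableLambda

    def _flaky(_: dict) -> str:
        if random.random() < 0.5:
            raise TimeoutError("transient failure")
        return "ok"

    # Retry up to 3 attempts on TimeoutError, with jittered exponential backoff.
    resilient = RunnableLambda(_flaky).with_retry(
        retry_if_exception_type=(TimeoutError,),
        stop_after_attempt=3,
    )
    resilient.invoke({})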
with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
Bind input and output types to a Runnable, returning a new Runnable.
property InputType: Type[langchain_core.runnables.utils.Input]¶
The type of input this runnable accepts specified as a type annotation.
property OutputType: Any¶
The type of output this runnable produces specified as a type annotation.
property config_specs: List[langchain_core.runnables.utils.ConfigurableFieldSpec]¶
List configurable fields for this runnable.
property input_schema: Type[pydantic.main.BaseModel]¶
The type of input this runnable accepts specified as a pydantic model.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
name: Optional[str] = None¶
The name of the runnable. Used for debugging and tracing.
property output_schema: Type[pydantic.main.BaseModel]¶
The type of output this runnable produces specified as a pydantic model.
Examples using FewShotChatMessagePromptTemplate¶
Few-shot examples for chat models | https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.few_shot.FewShotChatMessagePromptTemplate.html |
langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector¶
class langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector[source]¶
Bases: BaseExampleSelector, BaseModel
Select and order examples based on ngram overlap score (sentence_bleu score).
https://www.nltk.org/_modules/nltk/translate/bleu_score.html
https://aclanthology.org/P02-1040.pdf
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param example_prompt: langchain_core.prompts.prompt.PromptTemplate [Required]¶
Prompt template used to format the examples.
param examples: List[dict] [Required]¶
A list of the examples that the prompt template expects.
param threshold: float = -1.0¶
Threshold at which the algorithm stops. Defaults to -1.0.
For negative threshold:
select_examples sorts examples by ngram_overlap_score, but excludes none.
For threshold greater than 1.0:
select_examples excludes all examples, and returns an empty list.
For threshold equal to 0.0:
select_examples sorts examples by ngram_overlap_score,
and excludes examples with no ngram overlap with input.
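Example (an illustrative sketch; the translation examples below are assumptions, and the selector requires the nltk and numpy packages):
.. code-block:: python

    from langchain.prompts.example_selector.ngram_overlap import (
        NGramOverlapExampleSelector,
    )
    from langchain_core.prompts import PromptTemplate

    example_prompt = PromptTemplate(
        input_variables=["input", "output"],
        template="Input: {input}\nOutput: {output}",
    )
    examples = [
        {"input": "See Spot run.", "output": "Ver correr a Spot."},
        {"input": "My dog barks.", "output": "Mi perro ladra."},
    ]
    selector = NGramOverlapExampleSelector(
        examples=examples,
        example_prompt=example_prompt,
        threshold=0.0,  # sort by overlap and drop zero-overlap examples
    )
    selector.select_examples({"input": "Spot can run."})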
add_example(example: Dict[str, str]) → None[source]¶
Add new example to list.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values | https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector.html |
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶ | https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector.html |
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
select_examples(input_variables: Dict[str, str]) → List[dict][source]¶
Return the list of examples sorted by ngram_overlap_score with the input,
in descending order, excluding any examples whose ngram_overlap_score is
less than or equal to the threshold.
classmethod update_forward_refs(**localns: Any) → None¶ | https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector.html |
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using NGramOverlapExampleSelector¶
Select by n-gram overlap | https://api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector.html |
langchain_core.prompts.chat.HumanMessagePromptTemplate¶
class langchain_core.prompts.chat.HumanMessagePromptTemplate[source]¶
Bases: BaseStringMessagePromptTemplate
Human message prompt template. This is a message sent from the user.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param additional_kwargs: dict [Optional]¶
Additional keyword arguments to pass to the prompt template.
param prompt: langchain_core.prompts.string.StringPromptTemplate [Required]¶
String prompt template.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance | https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.HumanMessagePromptTemplate.html |
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
format(**kwargs: Any) → BaseMessage[source]¶
Format the prompt template.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
Formatted message.
format_messages(**kwargs: Any) → List[BaseMessage]¶
Format messages from kwargs.
Parameters
**kwargs – Keyword arguments to use for formatting.
Returns
List of BaseMessages.
classmethod from_orm(obj: Any) → Model¶
classmethod from_template(template: str, template_format: str = 'f-string', partial_variables: Optional[Dict[str, Any]] = None, **kwargs: Any) → MessagePromptTemplateT¶
Create a class from a string template.
Parameters
template – a template.
template_format – format of the template.
partial_variables –
A dictionary of variables that can be used to partially fill in the template. For example, if the template is
"{variable1} {variable2}" and partial_variables is
{"variable1": "foo"}, then the final prompt will be
"foo {variable2}".
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class. | https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.HumanMessagePromptTemplate.html |
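Example (a minimal sketch; the template text and variables below are assumptions):
.. code-block:: python

    from langchain_core.prompts.chat import HumanMessagePromptTemplate

    template = HumanMessagePromptTemplate.from_template(
        "Translate {text} into {language}.",
        partial_variables={"language": "French"},
    )
    # format() returns a HumanMessage with the rendered template as content.
    template.format(text="good morning")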
classmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → MessagePromptTemplateT¶
Create a class from a template file.
Parameters
template_file – path to a template file. String or Path.
input_variables – list of input variables.
**kwargs – keyword arguments to pass to the constructor.
Returns
A new instance of this class.
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
classmethod is_lc_serializable() → bool¶
Return whether or not the class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶ | https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.HumanMessagePromptTemplate.html |
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
property input_variables: List[str]¶
Input variables for this prompt template.
Returns
List of input variable names.
property lc_attributes: Dict¶
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
property lc_secrets: Dict[str, str]¶
A map of constructor argument names to secret ids.
For example, {"openai_api_key": "OPENAI_API_KEY"}
Examples using HumanMessagePromptTemplate¶
Anthropic
🚅 LiteLLM
Konko
OpenAI
Google Cloud Platform Vertex AI PaLM
JinaChat
Context
Figma
Fireworks
Set env var OPENAI_API_KEY or load from a .env file:
Structure answers with OpenAI functions
CAMEL Role-Playing Autonomous Cooperative Agents
Multi-agent authoritarian speaker selection
Memory in LLMChain
Retry parser
Pydantic (JSON) parser
Prompt pipelining
Using OpenAI functions
Code writing | https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.HumanMessagePromptTemplate.html |
langchain_core.prompts.string.get_template_variables¶
langchain_core.prompts.string.get_template_variables(template: str, template_format: str) → List[str][source]¶
Get the variables from the template.
Parameters
template – The template string.
template_format – The template format. Should be one of “f-string” or “jinja2”.
Returns
The variables from the template.
Raises
ValueError – If the template format is not supported. | https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.string.get_template_variables.html |
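Example (an illustrative sketch; the template strings are assumptions, and the returned variable names are assumed to come back sorted):
.. code-block:: python

    from langchain_core.prompts.string import get_template_variables

    get_template_variables("Tell me a {adjective} joke about {content}.", "f-string")
    # -> ['adjective', 'content']
    get_template_variables("Hello {{ name }}!", "jinja2")  # requires jinja2 installed
    # -> ['name']
    # Any other template_format raises ValueError.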
langchain_core.prompts.string.StringPromptTemplate¶
class langchain_core.prompts.string.StringPromptTemplate[source]¶
Bases: BasePromptTemplate, ABC
String prompt that exposes the format method, returning a prompt.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param input_types: Dict[str, Any] [Optional]¶
A dictionary of the types of the variables the prompt template expects.
If not provided, all variables are assumed to be strings.
param input_variables: List[str] [Required]¶
A list of the names of the variables the prompt template expects.
param output_parser: Optional[BaseOutputParser] = None¶
How to parse the output of calling an LLM on this formatted prompt.
param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶
async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if
the runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously. | https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.string.StringPromptTemplate.html |
assign(**kwargs: Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any], Mapping[str, Union[Runnable[Dict[str, Any], Any], Callable[[Dict[str, Any]], Any]]]]) → RunnableSerializable[Any, Any]¶
Assigns new fields to the dict output of this runnable.
Returns a new runnable.
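Example (a hedged sketch with a stand-in runnable whose output is a dict; the lambdas are assumptions):
.. code-block:: python

    from langchain_core.runnables import RunnableLambda

    base = RunnableLambda(lambda x: {"num": x["num"]})
    # Each assigned callable receives the dict output and adds a new key.
    chain = base.assign(doubled=lambda d: d["num"] * 2)
    chain.invoke({"num": 3})
    # -> {'num': 3, 'doubled': 6}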
async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, with_streamed_output_list: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
Stream all output from a runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of
jsonpatch ops that describe how the state of the run has changed in each
step, and the final state of the run.
The jsonpatch ops can be applied in order to construct state.
Parameters
input – The input to the runnable.
config – The config to use for the runnable.
diff – Whether to yield diffs between each step, or the current state. | https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.string.StringPromptTemplate.html |
with_streamed_output_list – Whether to yield the streamed_output list.
include_names – Only include logs with these names.
include_types – Only include logs with these types.
include_tags – Only include logs with these tags.
exclude_names – Exclude logs with these names.
exclude_types – Exclude logs with these types.
exclude_tags – Exclude logs with these tags.
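Example (an illustrative sketch; the prompt and topic are assumptions, and an event loop is required):
.. code-block:: python

    import asyncio
    from langchain_core.prompts import PromptTemplate

    async def main() -> None:
        prompt = PromptTemplate.from_template("Tell me about {topic}.")
        async for patch in prompt.astream_log({"topic": "otters"}):
            print(patch)  # RunLogPatch objects (diff=True is the default)

    asyncio.run(main())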
async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶
Default implementation of atransform, which buffers input and calls astream.
Subclasses should override this method if they can start producing output while
input is still being generated.
batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently;
e.g., if the underlying runnable uses an API which supports a batch mode.
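Example (a minimal sketch; the template and inputs are assumptions):
.. code-block:: python

    from langchain_core.prompts import PromptTemplate

    prompt = PromptTemplate.from_template("Tell me a joke about {topic}.")
    # Formats each input in parallel and returns one PromptValue per input.
    prompt.batch([{"topic": "cats"}, {"topic": "dogs"}])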
bind(**kwargs: Any) → Runnable[Input, Output]¶
Bind arguments to a Runnable, returning a new Runnable.
config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶
The type of config this runnable accepts specified as a pydantic model.
To mark a field as configurable, see the configurable_fields
and configurable_alternatives methods.
Parameters
include – A list of fields to include in the config schema.
Returns
A pydantic model that can be used to validate config. | https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.string.StringPromptTemplate.html |
configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶
configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict¶
Return dictionary representation of prompt.
abstract format(**kwargs: Any) → str¶
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string. | https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.string.StringPromptTemplate.html |
Example:
prompt.format(variable1="foo")
format_prompt(**kwargs: Any) → PromptValue[source]¶
Format the prompt and return a PromptValue.
classmethod from_orm(obj: Any) → Model¶
get_graph(config: Optional[RunnableConfig] = None) → Graph¶
Return a graph representation of this runnable.
get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate input to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic input schema that depends on which
configuration the runnable is invoked with.
This method allows you to get the input schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate input.
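Example (a hedged sketch; the template is an assumption, and the returned model is assumed to follow the pydantic v1 API used elsewhere on this page):
.. code-block:: python

    from langchain_core.prompts import PromptTemplate

    prompt = PromptTemplate.from_template("Tell me a joke about {topic}.")
    input_model = prompt.get_input_schema()
    input_model.schema()  # JSON schema with a string field named 'topic'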
classmethod get_lc_namespace() → List[str][source]¶
Get the namespace of the langchain object.
get_name(suffix: Optional[str] = None, *, name: Optional[str] = None) → str¶
Get the name of the runnable.
get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
Get a pydantic model that can be used to validate output to the runnable.
Runnables that leverage the configurable_fields and configurable_alternatives
methods will have a dynamic output schema that depends on which
configuration the runnable is invoked with.
This method allows you to get the output schema for a specific configuration.
Parameters
config – A config to use when generating the schema.
Returns
A pydantic model that can be used to validate output.
get_prompts(config: Optional[RunnableConfig] = None) → List[BasePromptTemplate]¶ | https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.string.StringPromptTemplate.html |
invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue¶
Transform a single input into an output. Override to implement.
Parameters
input – The input to the runnable.
config – A config to use when invoking the runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing
purposes, ‘max_concurrency’ for controlling how much work to do
in parallel, and other keys. Please refer to the RunnableConfig
for more details.
Returns
The output of the runnable.
classmethod is_lc_serializable() → bool¶
Return whether this class is serializable.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
classmethod lc_id() → List[str]¶
A unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path
to the object.
map() → Runnable[List[Input], List[Output]]¶
Return a new Runnable that maps a list of inputs to a list of outputs,
by calling invoke() with each input. | https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.string.StringPromptTemplate.html |