Columns: id (string, 14-16 chars) · text (string, 13-2.7k chars) · source (string, 57-178 chars)
250271cdfbb1-1
exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ get_kwargs() → Dict[str, Any][source]¶ Get the keyword arguments for the load_evaluator call. Returns The keyword arguments for the load_evaluator call. Return type Dict[str, Any] json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
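The copy()/dict() semantics above are inherited from pydantic v1. A minimal sketch using a hypothetical stand-in model (not the real SingleKeyEvalConfig) to show that the exclude flags prune the output and that copy(update=...) skips validation:

    from typing import Optional
    from pydantic import BaseModel  # pydantic v1 API, which these classes are built on

    class EvalConfigSketch(BaseModel):
        # Hypothetical stand-in for illustration only.
        reference_key: Optional[str] = None
        prediction_key: Optional[str] = None

    cfg = EvalConfigSketch(reference_key="answer")
    print(cfg.dict(exclude_none=True))                     # {'reference_key': 'answer'}
    clone = cfg.copy(update={"prediction_key": "output"})  # update is NOT re-validated
    print(clone.prediction_key)                            # 'output'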
lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.config.SingleKeyEvalConfig.html
250271cdfbb1-2
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.config.SingleKeyEvalConfig.html
8c63148cc78e-0
langchain.smith.evaluation.string_run_evaluator.StringExampleMapper¶ class langchain.smith.evaluation.string_run_evaluator.StringExampleMapper[source]¶ Bases: Serializable Map an example, or row in the dataset, to the inputs of an evaluation. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param reference_key: Optional[str] = None¶ __call__(example: Example) → Dict[str, str][source]¶ Maps the Run and Example to a dictionary. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.string_run_evaluator.StringExampleMapper.html
8c63148cc78e-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object.
lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.string_run_evaluator.StringExampleMapper.html
8c63148cc78e-2
The unique identifier is a list of strings that describes the path to the object. map(example: Example) → Dict[str, str][source]¶ Maps the Example, or dataset row, to a dictionary. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ serialize_chat_messages(messages: List[Dict]) → str[source]¶ Serialize a list of chat message dicts into a single string. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”} property output_keys: List[str]¶ The keys to extract from the run.
lang/api.python.langchain.com/en/latest/smith/langchain.smith.evaluation.string_run_evaluator.StringExampleMapper.html
f1e568bf69f9-0
langchain.prompts.base.get_template_variables¶ langchain.prompts.base.get_template_variables(template: str, template_format: str) → List[str][source]¶ Get the variables from the template. Parameters template – The template string. template_format – The template format. Should be one of “f-string” or “jinja2”. Returns The variables from the template. Raises ValueError – If the template format is not supported.
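A short usage sketch of get_template_variables; the template string here is arbitrary:

    from langchain.prompts.base import get_template_variables

    variables = get_template_variables(
        "Tell me a {adjective} joke about {content}.", "f-string"
    )
    print(variables)  # expected: ['adjective', 'content']

    # An unsupported format raises ValueError, per the docs above:
    # get_template_variables("{x}", "mustache")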
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.base.get_template_variables.html
15a4d7c690db-0
langchain.prompts.chat.BaseChatPromptTemplate¶ class langchain.prompts.chat.BaseChatPromptTemplate[source]¶ Bases: BasePromptTemplate, ABC Base class for chat prompt templates. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param input_types: Dict[str, Any] [Optional]¶ A dictionary of the types of the variables the prompt template expects. If not provided, all variables are assumed to be strings. param input_variables: List[str] [Required]¶ A list of the names of the variables the prompt template expects. param output_parser: Optional[BaseOutputParser] = None¶ How to parse the output of calling an LLM on this formatted prompt. param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶ async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseChatPromptTemplate.html
15a4d7c690db-1
Subclasses should override this method if they can run asynchronously. async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs invoke in parallel using a thread pool executor.
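The astream_log entry above streams the run state as jsonpatch ops. A rough sketch, assuming a langchain version where Runnable.astream_log is available (as documented here) and using RunnableLambda only as a convenient stand-in runnable:

    import asyncio
    from langchain.schema.runnable import RunnableLambda

    async def main() -> None:
        runnable = RunnableLambda(lambda x: x + 1)
        # With diff=True (the default), each item is a RunLogPatch of jsonpatch ops.
        async for patch in runnable.astream_log(1):
            print(patch)

    asyncio.run(main())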
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseChatPromptTemplate.html
15a4d7c690db-2
Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config. configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseChatPromptTemplate.html
15a4d7c690db-3
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return dictionary representation of prompt. format(**kwargs: Any) → str[source]¶ Format the chat template into a string. Parameters **kwargs – keyword arguments to use for filling in template variables in all the template messages in this chat template. Returns formatted string abstract format_messages(**kwargs: Any) → List[BaseMessage][source]¶ Format kwargs into a list of messages. format_prompt(**kwargs: Any) → PromptValue[source]¶ Format prompt. Should return a PromptValue. Parameters **kwargs – Keyword arguments to use for formatting. Returns PromptValue. classmethod from_orm(obj: Any) → Model¶ get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows you to get an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object.
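Because format_messages is abstract, subclasses provide it. A minimal sketch of a hypothetical subclass (class and variable names are illustrative):

    from typing import Any, List
    from langchain.prompts.chat import BaseChatPromptTemplate
    from langchain.schema import BaseMessage, HumanMessage, SystemMessage

    class SimpleChatTemplate(BaseChatPromptTemplate):
        # Hypothetical: wraps one user question with a fixed system preamble.
        def format_messages(self, **kwargs: Any) -> List[BaseMessage]:
            return [
                SystemMessage(content="You are a helpful assistant."),
                HumanMessage(content=kwargs["question"]),
            ]

    prompt = SimpleChatTemplate(input_variables=["question"])
    messages = prompt.format_messages(question="What is LangChain?")
    text = prompt.format(question="What is LangChain?")  # string rendering via format_prompt()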
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseChatPromptTemplate.html
15a4d7c690db-4
Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate output to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows you to get an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue¶ Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool¶ Return whether this class is serializable.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseChatPromptTemplate.html
15a4d7c690db-5
classmethod is_lc_serializable() → bool¶ Return whether this class is serializable. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶ Return a partial of the prompt template. save(file_path: Union[Path, str]) → None¶ Save the prompt. Parameters
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseChatPromptTemplate.html
15a4d7c690db-6
Save the prompt. Parameters file_path – Path of the file to save the prompt to. Example: prompt.save(file_path="path/prompt.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Parameters
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseChatPromptTemplate.html
15a4d7c690db-7
Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
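A sketch of with_retry, matching the signature documented above; RunnableLambda and the flaky function are illustrative:

    from langchain.schema.runnable import RunnableLambda

    attempts = {"count": 0}

    def flaky(x: int) -> int:
        attempts["count"] += 1
        if attempts["count"] < 3:
            raise ValueError("transient failure")
        return x * 2

    runnable = RunnableLambda(flaky).with_retry(
        retry_if_exception_type=(ValueError,),  # retry only on these exceptions
        wait_exponential_jitter=True,
        stop_after_attempt=3,
    )
    print(runnable.invoke(5))  # 10, after two failed attempts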
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseChatPromptTemplate.html
15a4d7c690db-8
Bind input and output types to a Runnable, returning a new Runnable. property InputType: Type[langchain.schema.runnable.utils.Input]¶ The type of input this runnable accepts specified as a type annotation. property OutputType: Any¶ The type of output this runnable produces specified as a type annotation. property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”} property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseChatPromptTemplate.html
88b7e5376a32-0
langchain.prompts.loading.load_prompt¶ langchain.prompts.loading.load_prompt(path: Union[str, Path]) → BasePromptTemplate[source]¶ Unified method for loading a prompt from LangChainHub or the local filesystem. Examples using load_prompt¶ Amazon Comprehend Moderation Chain Serialization
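A usage sketch; the file path and the template variables are placeholders that depend on the saved prompt:

    from langchain.prompts import load_prompt

    prompt = load_prompt("path/prompt.yaml")  # hypothetical local file
    print(prompt.format(adjective="funny", content="chickens"))  # variables come from the file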
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.loading.load_prompt.html
d6367679850f-0
langchain.prompts.example_selector.base.BaseExampleSelector¶ class langchain.prompts.example_selector.base.BaseExampleSelector[source]¶ Interface for selecting examples to include in prompts. Methods __init__() add_example(example) Add new example to store for a key. select_examples(input_variables) Select which examples to use based on the inputs. __init__()¶ abstract add_example(example: Dict[str, str]) → Any[source]¶ Add new example to store for a key. abstract select_examples(input_variables: Dict[str, str]) → List[dict][source]¶ Select which examples to use based on the inputs. Examples using BaseExampleSelector¶ Custom example selector
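A minimal custom selector implementing both abstract methods; the selection policy (return the first n stored examples) is purely illustrative:

    from typing import Any, Dict, List
    from langchain.prompts.example_selector.base import BaseExampleSelector

    class FirstNExampleSelector(BaseExampleSelector):
        # Hypothetical: keep examples in insertion order, select the first n.
        def __init__(self, n: int = 2) -> None:
            self.n = n
            self.examples: List[Dict[str, str]] = []

        def add_example(self, example: Dict[str, str]) -> Any:
            self.examples.append(example)

        def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
            return self.examples[: self.n]

    selector = FirstNExampleSelector()
    selector.add_example({"input": "happy", "output": "sad"})
    selector.add_example({"input": "tall", "output": "short"})
    print(selector.select_examples({"input": "big"}))  # the first two stored examples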
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.base.BaseExampleSelector.html
2b6bd3282da2-0
langchain.prompts.example_selector.ngram_overlap.ngram_overlap_score¶ langchain.prompts.example_selector.ngram_overlap.ngram_overlap_score(source: List[str], example: List[str]) → float[source]¶ Compute ngram overlap score of source and example as sentence_bleu score. Use sentence_bleu with method1 smoothing function and auto reweighting. Return float value between 0.0 and 1.0 inclusive. https://www.nltk.org/_modules/nltk/translate/bleu_score.html https://aclanthology.org/P02-1040.pdf
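A usage sketch (nltk must be installed; each argument is a list of sentence strings):

    from langchain.prompts.example_selector.ngram_overlap import ngram_overlap_score

    score = ngram_overlap_score(["See Spot run."], ["See Spot run fast."])
    print(score)  # a float in [0.0, 1.0]; higher means more n-gram overlap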
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.ngram_overlap.ngram_overlap_score.html
1748b302ba94-0
langchain.prompts.base.validate_jinja2¶ langchain.prompts.base.validate_jinja2(template: str, input_variables: List[str]) → None[source]¶ Validate that the input variables are valid for the template. Issues a warning if missing or extra variables are found. Parameters template – The template string. input_variables – The input variables.
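A usage sketch (requires the jinja2 package); with the inputs below the call warns that name is missing and foo is extra, and returns None:

    from langchain.prompts.base import validate_jinja2

    # Emits a warning: the template uses "name", while "foo" is declared but unused.
    validate_jinja2("Hello {{ name }}!", input_variables=["foo"])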
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.base.validate_jinja2.html
25b90a939f5c-0
langchain.prompts.pipeline.PipelinePromptTemplate¶ class langchain.prompts.pipeline.PipelinePromptTemplate[source]¶ Bases: BasePromptTemplate A prompt template for composing multiple prompt templates together. This can be useful when you want to reuse parts of prompts. A PipelinePrompt consists of two main parts: final_prompt: This is the final prompt that is returned. pipeline_prompts: This is a list of tuples, consisting of a string (name) and a PromptTemplate. Each PromptTemplate will be formatted and then passed to future prompt templates as a variable with the same name as name. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param final_prompt: langchain.schema.prompt_template.BasePromptTemplate [Required]¶ The final prompt that is returned. param input_types: Dict[str, Any] [Optional]¶ A dictionary of the types of the variables the prompt template expects. If not provided, all variables are assumed to be strings. param input_variables: List[str] [Required]¶ A list of the names of the variables the prompt template expects. param output_parser: Optional[BaseOutputParser] = None¶ How to parse the output of calling an LLM on this formatted prompt. param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶ param pipeline_prompts: List[Tuple[str, langchain.schema.prompt_template.BasePromptTemplate]] [Required]¶ A list of tuples, consisting of a string (name) and a Prompt Template. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
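A minimal composition sketch in the spirit of the description above; input_variables is inferred from the pipeline parts by a validator, so it is not passed explicitly here:

    from langchain.prompts.pipeline import PipelinePromptTemplate
    from langchain.prompts.prompt import PromptTemplate

    full = PromptTemplate.from_template("{intro}\n\n{question}")
    intro = PromptTemplate.from_template("You are impersonating {person}.")
    question = PromptTemplate.from_template("Q: {input}\nA:")

    pipeline = PipelinePromptTemplate(
        final_prompt=full,
        pipeline_prompts=[("intro", intro), ("question", question)],
    )
    # Each named part is formatted first, then injected into final_prompt
    # as the {intro} and {question} variables.
    print(pipeline.format(person="Ada Lovelace", input="What is a loop?"))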
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.pipeline.PipelinePromptTemplate.html
25b90a939f5c-1
Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.pipeline.PipelinePromptTemplate.html
25b90a939f5c-2
The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config. configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.pipeline.PipelinePromptTemplate.html
25b90a939f5c-3
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return dictionary representation of prompt. format(**kwargs: Any) → str[source]¶ Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") format_prompt(**kwargs: Any) → PromptValue[source]¶ Create Chat Messages. classmethod from_orm(obj: Any) → Model¶ get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.pipeline.PipelinePromptTemplate.html
25b90a939f5c-4
methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows you to get an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate output to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows you to get an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue¶ Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool¶ Return whether this class is serializable.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.pipeline.PipelinePromptTemplate.html
25b90a939f5c-5
classmethod is_lc_serializable() → bool¶ Return whether this class is serializable. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶ Return a partial of the prompt template. save(file_path: Union[Path, str]) → None¶ Save the prompt. Parameters
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.pipeline.PipelinePromptTemplate.html
25b90a939f5c-6
Save the prompt. Parameters file_path – Path of the file to save the prompt to. Example: prompt.save(file_path="path/prompt.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Parameters
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.pipeline.PipelinePromptTemplate.html
25b90a939f5c-7
Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.pipeline.PipelinePromptTemplate.html
25b90a939f5c-8
Bind input and output types to a Runnable, returning a new Runnable. property InputType: Type[langchain.schema.runnable.utils.Input]¶ The type of input this runnable accepts specified as a type annotation. property OutputType: Any¶ The type of output this runnable produces specified as a type annotation. property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”} property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.pipeline.PipelinePromptTemplate.html
d313b1c83a41-0
langchain.prompts.base.jinja2_formatter¶ langchain.prompts.base.jinja2_formatter(template: str, **kwargs: Any) → str[source]¶ Format a template using jinja2. Security warning: As of LangChain 0.0.329, this method uses Jinja2’s SandboxedEnvironment by default. However, this sandboxing should be treated as a best-effort approach rather than a guarantee of security. Do not accept jinja2 templates from untrusted sources, as they may lead to arbitrary Python code execution. https://jinja.palletsprojects.com/en/3.1.x/sandbox/
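A usage sketch (requires the jinja2 package); rendering happens inside the sandboxed environment noted above:

    from langchain.prompts.base import jinja2_formatter

    text = jinja2_formatter("Hello {{ name }}!", name="world")
    print(text)  # Hello world!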
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.base.jinja2_formatter.html
fac8f870698f-0
langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates¶ class langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates[source]¶ Bases: StringPromptTemplate Prompt template that contains few shot examples. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param example_prompt: langchain.prompts.prompt.PromptTemplate [Required]¶ PromptTemplate used to format an individual example. param example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None¶ ExampleSelector to choose the examples to format into the prompt. Either this or examples should be provided. param example_separator: str = '\n\n'¶ String separator used to join the prefix, the examples, and suffix. param examples: Optional[List[dict]] = None¶ Examples to format into the prompt. Either this or example_selector should be provided. param input_types: Dict[str, Any] [Optional]¶ A dictionary of the types of the variables the prompt template expects. If not provided, all variables are assumed to be strings. param input_variables: List[str] [Required]¶ A list of the names of the variables the prompt template expects. param output_parser: Optional[BaseOutputParser] = None¶ How to parse the output of calling an LLM on this formatted prompt. param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶ param prefix: Optional[langchain.prompts.base.StringPromptTemplate] = None¶ A PromptTemplate to put before the examples. param suffix: langchain.prompts.base.StringPromptTemplate [Required]¶ A PromptTemplate to put after the examples. param template_format: str = 'f-string'¶
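A minimal sketch wiring the required pieces together; the antonym example data is illustrative:

    from langchain.prompts.few_shot_with_templates import FewShotPromptWithTemplates
    from langchain.prompts.prompt import PromptTemplate

    example_prompt = PromptTemplate.from_template("Input: {input}\nOutput: {output}")
    prefix = PromptTemplate.from_template("Give the antonym of every input.")
    suffix = PromptTemplate.from_template("Input: {adjective}\nOutput:")

    prompt = FewShotPromptWithTemplates(
        examples=[{"input": "happy", "output": "sad"}],  # or pass example_selector=...
        example_prompt=example_prompt,
        prefix=prefix,
        suffix=suffix,
        input_variables=["adjective"],
    )
    # Prefix, formatted examples, and suffix are joined with example_separator.
    print(prompt.format(adjective="tall"))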
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates.html
fac8f870698f-1
param template_format: str = 'f-string'¶ The format of the prompt template. Options are: ‘f-string’, ‘jinja2’. param validate_template: bool = False¶ Whether or not to try validating the template. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates.html
fac8f870698f-2
Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates.html
fac8f870698f-3
e.g., if the underlying runnable uses an API which supports a batch mode. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config. configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates.html
fac8f870698f-4
exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return dictionary representation of prompt. format(**kwargs: Any) → str[source]¶ Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") format_prompt(**kwargs: Any) → PromptValue¶ Create Chat Messages. classmethod from_orm(obj: Any) → Model¶ get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows you to get an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate output to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates.html
fac8f870698f-5
Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows you to get an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue¶ Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool¶ Return whether this class is serializable. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates.html
fac8f870698f-6
The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶ Return a partial of the prompt template. save(file_path: Union[Path, str]) → None[source]¶ Save the prompt. Parameters file_path – Path of the file to save the prompt to. Example: prompt.save(file_path="path/prompt.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates.html
fac8f870698f-7
to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id,
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates.html
fac8f870698f-8
The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. property InputType: Type[langchain.schema.runnable.utils.Input]¶ The type of input this runnable accepts specified as a type annotation. property OutputType: Any¶ The type of output this runnable produces specified as a type annotation. property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates.html
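A short, hedged sketch of the with_retry method described above, again using RunnableLambda; the flaky function is a hypothetical stand-in that fails twice before succeeding:

from langchain.schema.runnable import RunnableLambda

attempts = {"count": 0}

def flaky(text: str) -> str:
    # Hypothetical function: raises on the first two calls, succeeds on the third.
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ValueError("transient failure")
    return text.upper()

runnable = RunnableLambda(flaky).with_retry(stop_after_attempt=3)
print(runnable.invoke("hello"))  # first two attempts raise, the third returns 'HELLO'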
fac8f870698f-9
A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”} property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot_with_templates.FewShotPromptWithTemplates.html
fe4e284ec39b-0
langchain.prompts.chat.BaseMessagePromptTemplate¶ class langchain.prompts.chat.BaseMessagePromptTemplate[source]¶ Bases: Serializable, ABC Base class for message prompt templates. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseMessagePromptTemplate.html
fe4e284ec39b-1
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. abstract format_messages(**kwargs: Any) → List[BaseMessage][source]¶ Format messages from kwargs. Should return a list of BaseMessages. Parameters **kwargs – Keyword arguments to use for formatting. Returns List of BaseMessages. classmethod from_orm(obj: Any) → Model¶ classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] classmethod is_lc_serializable() → bool[source]¶ Return whether or not the class is serializable. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseMessagePromptTemplate.html
fe4e284ec39b-2
classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ abstract property input_variables: List[str]¶ Input variables for this prompt template. Returns List of input variables. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”}
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseMessagePromptTemplate.html
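As a rough sketch of how the abstract methods above fit together, here is a hypothetical concrete subclass; the class name StaticSystemPrompt and its content field are invented for illustration, and the import paths assume the release this reference documents:

from typing import Any, List

from langchain.prompts.chat import BaseMessagePromptTemplate
from langchain.schema.messages import BaseMessage, SystemMessage

class StaticSystemPrompt(BaseMessagePromptTemplate):
    """Hypothetical template that always emits one fixed system message."""

    content: str = "You are a helpful assistant."

    def format_messages(self, **kwargs: Any) -> List[BaseMessage]:
        # Satisfies the abstract method: return a list of BaseMessages.
        return [SystemMessage(content=self.content)]

    @property
    def input_variables(self) -> List[str]:
        return []  # this template consumes no input variables

print(StaticSystemPrompt().format_messages())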
b724c5932392-0
langchain.prompts.chat.BaseStringMessagePromptTemplate¶ class langchain.prompts.chat.BaseStringMessagePromptTemplate[source]¶ Bases: BaseMessagePromptTemplate, ABC Base class for message prompt templates that use a string prompt template. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param additional_kwargs: dict [Optional]¶ Additional keyword arguments to pass to the prompt template. param prompt: langchain.prompts.base.StringPromptTemplate [Required]¶ String prompt template. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseStringMessagePromptTemplate.html
b724c5932392-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. abstract format(**kwargs: Any) → BaseMessage[source]¶ Format the prompt template. Parameters **kwargs – Keyword arguments to use for formatting. Returns Formatted message. format_messages(**kwargs: Any) → List[BaseMessage][source]¶ Format messages from kwargs. Parameters **kwargs – Keyword arguments to use for formatting. Returns List of BaseMessages. classmethod from_orm(obj: Any) → Model¶ classmethod from_template(template: str, template_format: str = 'f-string', **kwargs: Any) → MessagePromptTemplateT[source]¶ Create a class from a string template. Parameters template – a template. template_format – format of the template. **kwargs – keyword arguments to pass to the constructor. Returns A new instance of this class. classmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → MessagePromptTemplateT[source]¶ Create a class from a template file. Parameters template_file – path to a template file. String or Path. input_variables – list of input variables. **kwargs – keyword arguments to pass to the constructor. Returns A new instance of this class. classmethod get_lc_namespace() → List[str]¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseStringMessagePromptTemplate.html
b724c5932392-2
A new instance of this class. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] classmethod is_lc_serializable() → bool¶ Return whether or not the class is serializable. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseStringMessagePromptTemplate.html
b724c5932392-3
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ property input_variables: List[str]¶ Input variables for this prompt template. Returns List of input variable names. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”}
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.BaseStringMessagePromptTemplate.html
7fcb9c30e975-0
langchain.prompts.loading.load_prompt_from_config¶ langchain.prompts.loading.load_prompt_from_config(config: dict) → BasePromptTemplate[source]¶ Load a prompt from a config dict.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.loading.load_prompt_from_config.html
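For illustration, a minimal sketch of load_prompt_from_config. The “_type” key selects which loader is used and defaults to a plain prompt, so the config below builds an ordinary PromptTemplate:

from langchain.prompts.loading import load_prompt_from_config

config = {
    "_type": "prompt",  # selects the PromptTemplate loader
    "input_variables": ["adjective"],
    "template": "Tell me a {adjective} joke.",
}
prompt = load_prompt_from_config(config)
print(prompt.format(adjective="terrible"))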
418924db01b9-0
langchain.prompts.chat.HumanMessagePromptTemplate¶ class langchain.prompts.chat.HumanMessagePromptTemplate[source]¶ Bases: BaseStringMessagePromptTemplate Human message prompt template. This is a message sent from the user. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param additional_kwargs: dict [Optional]¶ Additional keyword arguments to pass to the prompt template. param prompt: langchain.prompts.base.StringPromptTemplate [Required]¶ String prompt template. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.HumanMessagePromptTemplate.html
418924db01b9-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. format(**kwargs: Any) → BaseMessage[source]¶ Format the prompt template. Parameters **kwargs – Keyword arguments to use for formatting. Returns Formatted message. format_messages(**kwargs: Any) → List[BaseMessage]¶ Format messages from kwargs. Parameters **kwargs – Keyword arguments to use for formatting. Returns List of BaseMessages. classmethod from_orm(obj: Any) → Model¶ classmethod from_template(template: str, template_format: str = 'f-string', **kwargs: Any) → MessagePromptTemplateT¶ Create a class from a string template. Parameters template – a template. template_format – format of the template. **kwargs – keyword arguments to pass to the constructor. Returns A new instance of this class. classmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → MessagePromptTemplateT¶ Create a class from a template file. Parameters template_file – path to a template file. String or Path. input_variables – list of input variables. **kwargs – keyword arguments to pass to the constructor. Returns A new instance of this class. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.HumanMessagePromptTemplate.html
418924db01b9-2
Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] classmethod is_lc_serializable() → bool¶ Return whether or not the class is serializable. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.HumanMessagePromptTemplate.html
418924db01b9-3
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ property input_variables: List[str]¶ Input variables for this prompt template. Returns List of input variable names. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”} Examples using HumanMessagePromptTemplate¶ Anthropic 🚅 LiteLLM Konko OpenAI Google Cloud Platform Vertex AI PaLM JinaChat Context Figma Fireworks Set env var OPENAI_API_KEY or load from a .env file: Structure answers with OpenAI functions CAMEL Role-Playing Autonomous Cooperative Agents Multi-agent authoritarian speaker selection Memory in LLMChain Retry parser Pydantic (JSON) parser Prompt pipelining Using OpenAI functions Code writing
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.HumanMessagePromptTemplate.html
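For illustration, a minimal sketch of the usual workflow: build the template with from_template, then either format it directly or compose it into a chat prompt:

from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate

human = HumanMessagePromptTemplate.from_template("Translate to French: {text}")
print(human.format(text="Good morning"))  # a single HumanMessage

# More commonly, composed into a full chat prompt:
chat_prompt = ChatPromptTemplate.from_messages([human])
print(chat_prompt.format_prompt(text="Good morning").to_messages())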
62ca70213344-0
langchain.prompts.base.StringPromptTemplate¶ class langchain.prompts.base.StringPromptTemplate[source]¶ Bases: BasePromptTemplate, ABC String prompt that exposes the format method, returning a prompt. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param input_types: Dict[str, Any] [Optional]¶ A dictionary of the types of the variables the prompt template expects. If not provided, all variables are assumed to be strings. param input_variables: List[str] [Required]¶ A list of the names of the variables the prompt template expects. param output_parser: Optional[BaseOutputParser] = None¶ How to parse the output of calling an LLM on this formatted prompt. param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶ async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.base.StringPromptTemplate.html
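A hedged sketch of the async entry points described above, using PromptTemplate as a concrete StringPromptTemplate subclass:

import asyncio

from langchain.prompts import PromptTemplate

async def main() -> None:
    prompt = PromptTemplate.from_template("Say {foo}")
    print(await prompt.ainvoke({"foo": "hi"}))                   # one input
    print(await prompt.abatch([{"foo": "hi"}, {"foo": "bye"}]))  # several inputs in parallel

asyncio.run(main())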
62ca70213344-1
Subclasses should override this method if they can run asynchronously. async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs invoke in parallel using a thread pool executor.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.base.StringPromptTemplate.html
62ca70213344-2
Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config. configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.base.StringPromptTemplate.html
62ca70213344-3
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return dictionary representation of prompt. abstract format(**kwargs: Any) → str¶ Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") format_prompt(**kwargs: Any) → PromptValue[source]¶ Create Chat Messages. classmethod from_orm(obj: Any) → Model¶ get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows you to get an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.base.StringPromptTemplate.html
62ca70213344-4
Get a pydantic model that can be used to validate output to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows you to get an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue¶ Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool¶ Return whether this class is serializable. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.base.StringPromptTemplate.html
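For illustration, a short sketch of invoke and batch on a concrete subclass; a prompt template takes a dict of variables and returns a PromptValue:

from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Tell me about {topic}.")
value = prompt.invoke({"topic": "prompt templates"})  # returns a PromptValue
print(value.to_string())
print(prompt.batch([{"topic": "retries"}, {"topic": "fallbacks"}]))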
62ca70213344-5
A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶ Return a partial of the prompt template. save(file_path: Union[Path, str]) → None¶ Save the prompt. Parameters file_path – Path of the file to save the prompt to. Example: prompt.save(file_path=”path/prompt.yaml”) classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.base.StringPromptTemplate.html
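A minimal sketch of partial, which pins some variables now and leaves the rest for format time:

from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("{greeting}, {name}!")
partial_prompt = prompt.partial(greeting="Hello")  # pin one variable up front
print(partial_prompt.format(name="Ada"))           # -> 'Hello, Ada!'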
62ca70213344-6
to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id,
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.base.StringPromptTemplate.html
62ca70213344-7
The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. property InputType: Type[langchain.schema.runnable.utils.Input]¶ The type of input this runnable accepts specified as a type annotation. property OutputType: Any¶ The type of output this runnable produces specified as a type annotation. property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.base.StringPromptTemplate.html
62ca70213344-8
A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”} property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model. Examples using StringPromptTemplate¶ Plug-and-Plai Wikibase Agent SalesGPT - Your Context-Aware AI Sales Assistant With Knowledge Base Custom Agent with PlugIn Retrieval Custom agent with tool retrieval Custom prompt template Connecting to a Feature Store
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.base.StringPromptTemplate.html
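As a rough sketch of subclassing in the spirit of the “Custom prompt template” example listed above; the class name ShoutingPromptTemplate is hypothetical:

from typing import Any

from langchain.prompts.base import StringPromptTemplate

class ShoutingPromptTemplate(StringPromptTemplate):
    """Hypothetical template that upper-cases the question before asking it."""

    def format(self, **kwargs: Any) -> str:
        # Satisfies the abstract format method.
        return f"Please answer loudly: {kwargs['question'].upper()}"

prompt = ShoutingPromptTemplate(input_variables=["question"])
print(prompt.format(question="what is a prompt template?"))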
1d3005096236-0
langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector¶ class langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector[source]¶ Bases: BaseExampleSelector, BaseModel Select and order examples based on ngram overlap score (sentence_bleu score). https://www.nltk.org/_modules/nltk/translate/bleu_score.html https://aclanthology.org/P02-1040.pdf Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param example_prompt: langchain.prompts.prompt.PromptTemplate [Required]¶ Prompt template used to format the examples. param examples: List[dict] [Required]¶ A list of the examples that the prompt template expects. param threshold: float = -1.0¶ Threshold at which algorithm stops. Set to -1.0 by default. For negative threshold: select_examples sorts examples by ngram_overlap_score, but excludes none. For threshold greater than 1.0: select_examples excludes all examples, and returns an empty list. For threshold equal to 0.0: select_examples sorts examples by ngram_overlap_score, and excludes examples with no ngram overlap with input. add_example(example: Dict[str, str]) → None[source]¶ Add new example to list. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector.html
1d3005096236-1
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector.html
1d3005096236-2
classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ select_examples(input_variables: Dict[str, str]) → List[dict][source]¶ Return list of examples sorted by ngram_overlap_score with input. Descending order. Excludes any examples with ngram_overlap_score less than or equal to threshold. classmethod update_forward_refs(**localns: Any) → None¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector.html
1d3005096236-3
classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ Examples using NGramOverlapExampleSelector¶ Select by n-gram overlap
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.ngram_overlap.NGramOverlapExampleSelector.html
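For illustration, a minimal sketch mirroring the “Select by n-gram overlap” guide listed above; note the selector depends on the nltk package being installed:

from langchain.prompts.example_selector.ngram_overlap import NGramOverlapExampleSelector
from langchain.prompts.prompt import PromptTemplate

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
examples = [
    {"input": "See Spot run.", "output": "Ver correr a Spot."},
    {"input": "My dog barks.", "output": "Mi perro ladra."},
    {"input": "Spot can run.", "output": "Spot puede correr."},
]
selector = NGramOverlapExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    threshold=0.0,  # sort by overlap and drop examples with no overlap at all
)
print(selector.select_examples({"input": "Spot can run fast."}))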
a821a27cfc82-0
langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector¶ class langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector[source]¶ Bases: BaseExampleSelector, BaseModel Example selector that selects examples based on semantic similarity. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param example_keys: Optional[List[str]] = None¶ Optional keys to filter examples to. param input_keys: Optional[List[str]] = None¶ Optional keys to filter input to. If provided, the search is based on the input variables instead of all variables. param k: int = 4¶ Number of examples to select. param vectorstore: langchain.schema.vectorstore.VectorStore [Required]¶ VectorStore that contains information about examples. add_example(example: Dict[str, str]) → str[source]¶ Add new example to vectorstore. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector.html
a821a27cfc82-1
exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_examples(examples: List[dict], embeddings: Embeddings, vectorstore_cls: Type[VectorStore], k: int = 4, input_keys: Optional[List[str]] = None, **vectorstore_cls_kwargs: Any) → SemanticSimilarityExampleSelector[source]¶ Create k-shot example selector using example list and embeddings. Reshuffles examples dynamically based on query similarity. Parameters examples – List of examples to use in the prompt. embeddings – An initialized embedding API interface, e.g. OpenAIEmbeddings(). vectorstore_cls – A vector store DB interface class, e.g. FAISS. k – Number of examples to select input_keys – If provided, the search is based on the input variables instead of all variables. vectorstore_cls_kwargs – optional kwargs containing url for vector store Returns The ExampleSelector instantiated, backed by a vector store. classmethod from_orm(obj: Any) → Model¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector.html
a821a27cfc82-2
classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ select_examples(input_variables: Dict[str, str]) → List[dict][source]¶ Select which examples to use based on semantic similarity. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ Examples using SemanticSimilarityExampleSelector¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector.html
a821a27cfc82-3
classmethod validate(value: Any) → Model¶ Examples using SemanticSimilarityExampleSelector¶ Select by maximal marginal relevance (MMR) Few-shot examples for chat models
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector.html
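For illustration, a minimal sketch of from_examples. It assumes an OPENAI_API_KEY in the environment and the chromadb package, but any Embeddings implementation and VectorStore class can be substituted:

from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts.example_selector.semantic_similarity import (
    SemanticSimilarityExampleSelector,
)
from langchain.vectorstores import Chroma

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
]
selector = SemanticSimilarityExampleSelector.from_examples(
    examples,
    OpenAIEmbeddings(),  # embeds the examples for the similarity search
    Chroma,              # vector store class used to index them
    k=1,
)
print(selector.select_examples({"input": "worried"}))  # most similar example(s)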
b638ce3dc8d5-0
langchain.prompts.few_shot.FewShotPromptTemplate¶ class langchain.prompts.few_shot.FewShotPromptTemplate[source]¶ Bases: _FewShotPromptTemplateMixin, StringPromptTemplate Prompt template that contains few shot examples. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param example_prompt: PromptTemplate [Required]¶ PromptTemplate used to format an individual example. param example_selector: Optional[BaseExampleSelector] = None¶ ExampleSelector to choose the examples to format into the prompt. Either this or examples should be provided. param example_separator: str = '\n\n'¶ String separator used to join the prefix, the examples, and suffix. param examples: Optional[List[dict]] = None¶ Examples to format into the prompt. Either this or example_selector should be provided. param input_types: Dict[str, Any] [Optional]¶ A dictionary of the types of the variables the prompt template expects. If not provided, all variables are assumed to be strings. param input_variables: List[str] [Required]¶ A list of the names of the variables the prompt template expects. param output_parser: Optional[BaseOutputParser] = None¶ How to parse the output of calling an LLM on this formatted prompt. param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶ param prefix: str = ''¶ A prompt template string to put before the examples. param suffix: str [Required]¶ A prompt template string to put after the examples. param template_format: Union[Literal['f-string'], Literal['jinja2']] = 'f-string'¶ The format of the prompt template. Options are: ‘f-string’, ‘jinja2’.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot.FewShotPromptTemplate.html
b638ce3dc8d5-1
The format of the prompt template. Options are: ‘f-string’, ‘jinja2’. param validate_template: bool = False¶ Whether or not to try validating the template. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot.FewShotPromptTemplate.html
b638ce3dc8d5-2
Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot.FewShotPromptTemplate.html
b638ce3dc8d5-3
Returns A pydantic model that can be used to validate config. configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return dictionary representation of prompt. format(**kwargs: Any) → str[source]¶ Format the prompt with the inputs. Parameters **kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example:
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot.FewShotPromptTemplate.html
b638ce3dc8d5-4
Returns A formatted string. Example: prompt.format(variable1="foo") format_prompt(**kwargs: Any) → PromptValue¶ Create Chat Messages. classmethod from_orm(obj: Any) → Model¶ get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows you to get an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate output to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows you to get an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue¶ Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot.FewShotPromptTemplate.html
b638ce3dc8d5-5
config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool[source]¶ Return whether or not the class is serializable. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot.FewShotPromptTemplate.html
b638ce3dc8d5-6
classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶ Return a partial of the prompt template. save(file_path: Union[Path, str]) → None[source]¶ Save the prompt. Parameters file_path – Path of the file to save the prompt to. Example: prompt.save(file_path=”path/prompt.yaml”) classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot.FewShotPromptTemplate.html
b638ce3dc8d5-7
classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot.FewShotPromptTemplate.html
b638ce3dc8d5-8
retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. property InputType: Type[langchain.schema.runnable.utils.Input]¶ The type of input this runnable accepts specified as a type annotation. property OutputType: Any¶ The type of output this runnable produces specified as a type annotation. property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {"openai_api_key": "OPENAI_API_KEY"} property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model. Examples using FewShotPromptTemplate¶ Select by maximal marginal relevance (MMR) Select by n-gram overlap
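As a sketch of the retry and fallback helpers listed above, the following uses RunnableLambda with invented toy functions; it is illustrative, not a canonical recipe:

from langchain.schema.runnable import RunnableLambda

attempts = {"count": 0}

def flaky(x: int) -> int:
    # Fails twice, then succeeds, to show retries in action.
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ValueError("transient failure")
    return x * 2

retrying = RunnableLambda(flaky).with_retry(
    retry_if_exception_type=(ValueError,),
    stop_after_attempt=3,
)
print(retrying.invoke(5))  # succeeds on the third attempt -> 10

# with_fallbacks() runs an alternative when the primary runnable raises.
chain = RunnableLambda(lambda x: 1 / 0).with_fallbacks(
    [RunnableLambda(lambda x: "fallback result")],
    exceptions_to_handle=(ZeroDivisionError,),
)
print(chain.invoke("anything"))  # -> "fallback result"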
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.few_shot.FewShotPromptTemplate.html
a41c962d8d89-0
langchain.prompts.prompt.PromptTemplate¶ class langchain.prompts.prompt.PromptTemplate[source]¶ Bases: StringPromptTemplate A prompt template for a language model. A prompt template consists of a string template. It accepts a set of parameters from the user that can be used to generate a prompt for a language model. The template can be formatted using either f-strings (default) or jinja2 syntax. Security warning: Prefer using template_format="f-string" instead of template_format="jinja2", or make sure to NEVER accept jinja2 templates from untrusted sources, as they may lead to arbitrary Python code execution. As of LangChain 0.0.329, Jinja2 templates will be rendered using Jinja2’s SandboxedEnvironment by default. This sand-boxing should be treated as a best-effort approach rather than a guarantee of security, as it is an opt-out rather than an opt-in approach. Despite the sand-boxing, we recommend never using jinja2 templates from untrusted sources. Example
from langchain.prompts import PromptTemplate

# Instantiation using from_template (recommended)
prompt = PromptTemplate.from_template("Say {foo}")
prompt.format(foo="bar")

# Instantiation using initializer
prompt = PromptTemplate(input_variables=["foo"], template="Say {foo}")
Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param input_types: Dict[str, Any] [Optional]¶ A dictionary of the types of the variables the prompt template expects. If not provided, all variables are assumed to be strings. param input_variables: List[str] [Required]¶ A list of the names of the variables the prompt template expects. param output_parser: Optional[BaseOutputParser] = None¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
a41c962d8d89-1
How to parse the output of calling an LLM on this formatted prompt. param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶ param template: str [Required]¶ The prompt template. param template_format: Union[Literal['f-string'], Literal['jinja2']] = 'f-string'¶ The format of the prompt template. Options are: ‘f-string’, ‘jinja2’. param validate_template: bool = False¶ Whether or not to try validating the template. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. async ainvoke(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any) → Output¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. async astream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output.
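A brief sketch of ainvoke() described above; as noted, the default implementation calls invoke from a thread, so no extra setup is needed:

import asyncio
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Say {foo}")

async def main() -> None:
    value = await prompt.ainvoke({"foo": "bar"})
    print(value.to_string())  # -> "Say bar"

asyncio.run(main())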
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
a41c962d8d89-2
async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode.
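A minimal sketch of batch() described above, formatting several inputs in one call:

from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Say {foo}")
values = prompt.batch([{"foo": "bar"}, {"foo": "baz"}])
print([value.to_string() for value in values])  # -> ['Say bar', 'Say baz']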
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
a41c962d8d89-3
bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config. configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
a41c962d8d89-4
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return dictionary representation of prompt. format(**kwargs: Any) → str[source]¶ Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example prompt.format(variable1="foo") format_prompt(**kwargs: Any) → PromptValue¶ Create Chat Messages. classmethod from_examples(examples: List[str], suffix: str, input_variables: List[str], example_separator: str = '\n\n', prefix: str = '', **kwargs: Any) → PromptTemplate[source]¶ Take examples in list format with prefix and suffix to create a prompt. Intended to be used as a way to dynamically create a prompt from examples. Parameters examples – List of examples to use in the prompt. suffix – String to go after the list of examples. Should generally set up the user’s input. input_variables – A list of variable names the final prompt template will expect. example_separator – The separator to use in between examples. Defaults to two new line characters. prefix – String that should go before any examples. Generally includes instructions. Defaults to an empty string. Returns The final prompt generated. classmethod from_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → PromptTemplate[source]¶ Load a prompt from a file. Parameters template_file – The path to the file containing the prompt template.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
a41c962d8d89-5
input_variables – A list of variable names the final prompt template will expect. Returns The prompt loaded from the file. classmethod from_orm(obj: Any) → Model¶ classmethod from_template(template: str, *, template_format: str = 'f-string', partial_variables: Optional[Dict[str, Any]] = None, **kwargs: Any) → PromptTemplate[source]¶ Load a prompt template from a template. Security warning: Prefer using template_format="f-string" instead of template_format="jinja2", or make sure to NEVER accept jinja2 templates from untrusted sources, as they may lead to arbitrary Python code execution. As of LangChain 0.0.329, Jinja2 templates will be rendered using Jinja2’s SandboxedEnvironment by default. This sand-boxing should be treated as a best-effort approach rather than a guarantee of security, as it is an opt-out rather than an opt-in approach. Despite the sand-boxing, we recommend never using jinja2 templates from untrusted sources. Parameters template – The template to load. template_format – The format of the template. Use jinja2 for jinja2, and f-string or None for f-strings. partial_variables – A dictionary of variables that can be used to partially fill in the template. For example, if the template is "{variable1} {variable2}", and partial_variables is {"variable1": "foo"}, then the final prompt will be "foo {variable2}". Returns The prompt template loaded from the template. get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
a41c962d8d89-6
Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows you to get an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is ["langchain", "llms", "openai"] get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate the output of the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows you to get an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. invoke(input: Dict, config: Optional[RunnableConfig] = None) → PromptValue¶ Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
a41c962d8d89-7
Return whether this class is serializable. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶ Return a partial of the prompt template. save(file_path: Union[Path, str]) → None¶ Save the prompt.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
a41c962d8d89-8
Parameters file_path – Path of the file to save the prompt to. Example: prompt.save(file_path="path/prompt.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
a41c962d8d89-9
Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
a41c962d8d89-10
retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. property InputType: Type[langchain.schema.runnable.utils.Input]¶ The type of input this runnable accepts specified as a type annotation. property OutputType: Any¶ The type of output this runnable produces specified as a type annotation. property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict[str, Any]¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {"openai_api_key": "OPENAI_API_KEY"} property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model. Examples using PromptTemplate¶ RePhraseQueryRetriever Zapier Natural Language Actions Dall-E Image Generator Streamlit Chat Message History Context Argilla Comet Aim Weights & Biases SageMaker Tracking Rebuff MLflow Flyte Vectara Text Generation Natural Language APIs your local model path Google Drive Google Vertex AI PaLM Predibase Hugging Face Local Pipelines Eden AI Fallbacks Reversible data anonymization with Microsoft Presidio Data anonymization with Microsoft Presidio Removing logical fallacies from model output Amazon Comprehend Moderation Chain Pairwise String Comparison Criteria Evaluation Set env var OPENAI_API_KEY or load from a .env file
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
a41c962d8d89-11
Set env var OPENAI_API_KEY or load from a .env file: Question Answering Retrieve from vector stores directly Improve document indexing with HyDE Structure answers with OpenAI functions Neo4j DB QA chain Multi-agent authoritarian speaker selection Agent Debates with Tools Bash chain How to use a SmartLLMChain Elasticsearch SQL MultiQueryRetriever WebResearchRetriever Lost in the middle: The problem with long contexts Custom Memory Memory in the Multi-Input Chain Memory in LLMChain Multiple Memory classes Customizing Conversational Memory Conversation Knowledge Graph Logging to file Retry parser Datetime parser Pydantic (JSON) parser Select by maximal marginal relevance (MMR) Select by n-gram overlap Prompt pipelining Template formats Connecting to a Feature Store Router Transformation Custom chain Async API First we add a step to load memory Configure Runnable traces
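To tie together the from_template() and partial_variables behavior documented above, a short sketch mirroring the docstring's own example:

from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "{variable1} {variable2}",
    partial_variables={"variable1": "foo"},
)
print(prompt.format(variable2="bar"))  # -> "foo bar"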
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
d0f7c007259e-0
langchain.prompts.example_selector.semantic_similarity.sorted_values¶ langchain.prompts.example_selector.semantic_similarity.sorted_values(values: Dict[str, str]) → List[Any][source]¶ Return a list of the dict’s values, sorted by key.
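A behavior sketch of sorted_values():

from langchain.prompts.example_selector.semantic_similarity import sorted_values

print(sorted_values({"b": "second", "a": "first"}))  # -> ['first', 'second']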
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.sorted_values.html
228d0547a61d-0
langchain.prompts.chat.SystemMessagePromptTemplate¶ class langchain.prompts.chat.SystemMessagePromptTemplate[source]¶ Bases: BaseStringMessagePromptTemplate System message prompt template. This is a message that is not sent to the user. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param additional_kwargs: dict [Optional]¶ Additional keyword arguments to pass to the prompt template. param prompt: langchain.prompts.base.StringPromptTemplate [Required]¶ String prompt template. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.SystemMessagePromptTemplate.html
228d0547a61d-1
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. format(**kwargs: Any) → BaseMessage[source]¶ Format the prompt template. Parameters **kwargs – Keyword arguments to use for formatting. Returns Formatted message. format_messages(**kwargs: Any) → List[BaseMessage]¶ Format messages from kwargs. Parameters **kwargs – Keyword arguments to use for formatting. Returns List of BaseMessages. classmethod from_orm(obj: Any) → Model¶ classmethod from_template(template: str, template_format: str = 'f-string', **kwargs: Any) → MessagePromptTemplateT¶ Create a class from a string template. Parameters template – a template. template_format – format of the template. **kwargs – keyword arguments to pass to the constructor. Returns A new instance of this class. classmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → MessagePromptTemplateT¶ Create a class from a template file. Parameters template_file – path to a template file. String or Path. input_variables – list of input variables. **kwargs – keyword arguments to pass to the constructor. Returns A new instance of this class. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.SystemMessagePromptTemplate.html
228d0547a61d-2
For example, if the class is langchain.llms.openai.OpenAI, then the namespace is ["langchain", "llms", "openai"] classmethod is_lc_serializable() → bool¶ Return whether or not the class is serializable. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.SystemMessagePromptTemplate.html
228d0547a61d-3
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ property input_variables: List[str]¶ Input variables for this prompt template. Returns List of input variable names. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {"openai_api_key": "OPENAI_API_KEY"} Examples using SystemMessagePromptTemplate¶ Anthropic 🚅 LiteLLM Konko OpenAI Google Cloud Platform Vertex AI PaLM JinaChat Figma Set env var OPENAI_API_KEY or load from a .env file: CAMEL Role-Playing Autonomous Cooperative Agents Code writing
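A minimal sketch of from_template() and format() on this class; the template text is invented for illustration:

from langchain.prompts.chat import SystemMessagePromptTemplate

system_prompt = SystemMessagePromptTemplate.from_template(
    "You are a helpful assistant that translates {input_language} to {output_language}."
)
message = system_prompt.format(input_language="English", output_language="French")
print(message.content)  # the formatted system message text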
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.SystemMessagePromptTemplate.html
2328dbb19939-0
langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector¶ class langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector[source]¶ Bases: SemanticSimilarityExampleSelector ExampleSelector that selects examples based on Max Marginal Relevance. This was shown to improve performance in this paper: https://arxiv.org/pdf/2211.13892.pdf Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param example_keys: Optional[List[str]] = None¶ Optional keys to filter examples to. param fetch_k: int = 20¶ Number of examples to fetch to rerank. param input_keys: Optional[List[str]] = None¶ Optional keys to filter input to. If provided, the search is based on the input variables instead of all variables. param k: int = 4¶ Number of examples to select. param vectorstore: langchain.schema.vectorstore.VectorStore [Required]¶ VectorStore that contains information about examples. add_example(example: Dict[str, str]) → str¶ Add new example to vectorstore. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector.html
2328dbb19939-1
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_examples(examples: List[dict], embeddings: Embeddings, vectorstore_cls: Type[VectorStore], k: int = 4, input_keys: Optional[List[str]] = None, fetch_k: int = 20, **vectorstore_cls_kwargs: Any) → MaxMarginalRelevanceExampleSelector[source]¶ Create k-shot example selector using example list and embeddings. Reshuffles examples dynamically based on query similarity. Parameters examples – List of examples to use in the prompt. embeddings – An initialized embedding API interface, e.g. OpenAIEmbeddings(). vectorstore_cls – A vector store DB interface class, e.g. FAISS. k – Number of examples to select. input_keys – If provided, the search is based on the input variables instead of all variables. vectorstore_cls_kwargs – Optional kwargs containing a url for the vector store. Returns The ExampleSelector instantiated, backed by a vector store.
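A hedged sketch of from_examples() and select_examples() described above; it assumes the faiss package is installed, an OPENAI_API_KEY is set for OpenAIEmbeddings, and the example pairs are invented:

from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts.example_selector import MaxMarginalRelevanceExampleSelector
from langchain.vectorstores import FAISS

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "sunny", "output": "gloomy"},
]
selector = MaxMarginalRelevanceExampleSelector.from_examples(
    examples,
    OpenAIEmbeddings(),
    FAISS,
    k=2,
    fetch_k=3,
)
# Returns the 2 examples that balance similarity to the query with diversity.
print(selector.select_examples({"input": "cheerful"}))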
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector.html
2328dbb19939-2
classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ select_examples(input_variables: Dict[str, str]) → List[dict][source]¶ Select which examples to use based on semantic similarity. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns.
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector.html
2328dbb19939-3
classmethod validate(value: Any) → Model¶ Examples using MaxMarginalRelevanceExampleSelector¶ Select by maximal marginal relevance (MMR)
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector.html
307c03107c3a-0
langchain.prompts.base.check_valid_template¶ langchain.prompts.base.check_valid_template(template: str, template_format: str, input_variables: List[str]) → None[source]¶ Check that the template string is valid. Parameters template – The template string. template_format – The template format. Should be one of "f-string" or "jinja2". input_variables – The input variables. Raises ValueError – If the template format is not supported.
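A short sketch of the behavior described above:

from langchain.prompts.base import check_valid_template

check_valid_template("Say {foo}", "f-string", ["foo"])  # returns None on success
try:
    check_valid_template("Say {foo}", "mustache", ["foo"])
except ValueError as err:
    print(err)  # unsupported template format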
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.base.check_valid_template.html
c8286ec5c23e-0
langchain.prompts.chat.ChatPromptTemplate¶ class langchain.prompts.chat.ChatPromptTemplate[source]¶ Bases: BaseChatPromptTemplate A prompt template for chat models. Use to create flexible templated prompts for chat models. Examples from langchain.prompts import ChatPromptTemplate template = ChatPromptTemplate.from_messages([ ("system", "You are a helpful AI bot. Your name is {name}."), ("human", "Hello, how are you doing?"), ("ai", "I'm doing well, thanks!"), ("human", "{user_input}"), ]) messages = template.format_messages( name="Bob", user_input="What is your name?" ) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param input_types: Dict[str, Any] [Optional]¶ A dictionary of the types of the variables the prompt template expects. If not provided, all variables are assumed to be strings. param input_variables: List[str] [Required]¶ List of input variables in template messages. Used for validation. param messages: List[MessageLike] [Required]¶ List of messages consisting of either message prompt templates or messages. param output_parser: Optional[BaseOutputParser] = None¶ How to parse the output of calling an LLM on this formatted prompt. param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶ param validate_template: bool = False¶ Whether or not to try validating the template. async abatch(inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) → List[Output]¶
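To illustrate abatch() declared above, a minimal async sketch reusing the template from the example:

import asyncio
from langchain.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI bot. Your name is {name}."),
    ("human", "{user_input}"),
])

async def main() -> None:
    values = await template.abatch([
        {"name": "Bob", "user_input": "What is your name?"},
        {"name": "Eve", "user_input": "What can you do?"},
    ])
    for value in values:
        print(value.to_string())

asyncio.run(main())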
lang/api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html