Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.PromptLayerOpenAI[source]#
Wrapper around OpenAI large language models.
To use, you should have the openai and promptlayer python
packages installed, and the environment variables OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your OpenAI API key and
PromptLayer API key respectively.
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerOpenAI LLM adds two optional
parameters:
pl_tags – List of strings to tag the request with.
return_pl_id – If True, the PromptLayer request ID will be
returned in the generation_info field of the
Generation object.
Example
from langchain.llms import PromptLayerOpenAI
openai = PromptLayerOpenAI(model_name="text-davinci-003")
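The two PromptLayer-specific parameters above can be combined with an ordinary generate() call; the sketch below is illustrative only (the tag values are hypothetical, and the exact generation_info key used for the request ID is an assumption):
from langchain.llms import PromptLayerOpenAI

# Requires OPENAI_API_KEY and PROMPTLAYER_API_KEY in the environment.
llm = PromptLayerOpenAI(
    model_name="text-davinci-003",
    pl_tags=["langchain", "docs-example"],  # hypothetical tags
    return_pl_id=True,
)

result = llm.generate(["Tell me a joke."])
generation = result.generations[0][0]

# With return_pl_id=True, the PromptLayer request ID is placed in
# generation_info; the key name below is an assumption.
request_id = (generation.generation_info or {}).get("pl_request_id")
print(generation.text, request_id)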
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) β langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) β List[List[str]]#
Get the sub prompts for llm call.
get_token_ids(text: str) β List[int]#
Get the token IDs using the tiktoken package.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
max_tokens_for_prompt(prompt: str) β int#
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) β int#
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
prep_streaming_params(stop: Optional[List[str]] = None) β Dict[str, Any]#
Prepare the params for streaming.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) β Generator#
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt – The prompt to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.PromptLayerOpenAIChat[source]#
Wrapper around OpenAI large language models.
To use, you should have the openai and promptlayer python
packages installed, and the environment variables OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your OpenAI API key and
PromptLayer API key respectively.
All parameters that can be passed to the OpenAIChat LLM can also
be passed here. The PromptLayerOpenAIChat adds two optional
parameters:
pl_tags – List of strings to tag the request with.
return_pl_id – If True, the PromptLayer request ID will be
returned in the generation_info field of the
Generation object.
Example
from langchain.llms import PromptLayerOpenAIChat
openaichat = PromptLayerOpenAIChat(model_name="gpt-3.5-turbo")
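As with PromptLayerOpenAI, the PromptLayer parameters can be passed alongside the OpenAIChat ones; a short sketch (the prefix_messages format assumes OpenAI-style role/content dicts, and the tag is hypothetical):
from langchain.llms import PromptLayerOpenAIChat

# Requires OPENAI_API_KEY and PROMPTLAYER_API_KEY in the environment.
chat_llm = PromptLayerOpenAIChat(
    model_name="gpt-3.5-turbo",
    prefix_messages=[
        # Assumed OpenAI-style message dicts prepended to every request.
        {"role": "system", "content": "You answer in one short sentence."}
    ],
    pl_tags=["docs-example"],  # hypothetical tag
)

print(chat_llm("What is LangChain?"))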
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#
Set of special tokens that are allowed.
field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#
Set of special tokens that are not allowed.
field max_retries: int = 6#
Maximum number of retries to make when generating.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-3.5-turbo'#
Model name to use.
field prefix_messages: List [Optional]#
Series of messages for Chat input.
field streaming: bool = False#
Whether to stream the results or not.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs using the tiktoken package.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.RWKV[source]#
Wrapper around RWKV language models.
To use, you should have the rwkv python package installed, the
pre-trained model file, and the model's config information.
Example
from langchain.llms import RWKV
model = RWKV(model="./models/rwkv-3b-fp16.bin", strategy="cpu fp32")
# Simplest invocation
response = model("Once upon a time, ")
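Note that tokens_path is also a required field (see the field list below); a fuller construction sketch, with hypothetical local file paths and the documented sampling fields:
from langchain.llms import RWKV

# Hypothetical paths to a local pre-trained model and its tokenizer file.
model = RWKV(
    model="./models/rwkv-3b-fp16.bin",
    tokens_path="./models/20B_tokenizer.json",
    strategy="cpu fp32",
    temperature=1.0,
    top_p=0.5,
    max_tokens_per_generation=256,
)

print(model("Once upon a time, "))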
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field CHUNK_LEN: int = 256#
Batch size for prompt processing.
field max_tokens_per_generation: int = 256#
Maximum number of tokens to generate.
field model: str [Required]#
Path to the pre-trained RWKV model file.
field penalty_alpha_frequency: float = 0.4#
Positive values penalize new tokens based on their existing frequency
in the text so far, decreasing the model's likelihood to repeat the same
line verbatim.
field penalty_alpha_presence: float = 0.4#
Positive values penalize new tokens based on whether they appear
in the text so far, increasing the model's likelihood to talk about
new topics.
field rwkv_verbose: bool = True#
Print debug information.
field strategy: str = 'cpu fp32'#
Strategy for loading and running the model, e.g. cpu fp32.
field temperature: float = 1.0#
The temperature to use for sampling.
field tokens_path: str [Required]#
Path to the RWKV tokens file.
field top_p: float = 0.5#
The top-p value to use for sampling.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Replicate[source]#
Wrapper around Replicate models.
To use, you should have the replicate python package installed,
and the environment variable REPLICATE_API_TOKEN set with your API token.
You can find your token here: https://replicate.com/account
The model param is required, but any other model parameters can also
be passed in with the format input={model_param: value, ...}
Example
from langchain.llms import Replicate
replicate = Replicate(model="stability-ai/stable-diffusion:27b93a2413e7f36cd83da926f3656280b2931564ff050bf9575f1fdf9bcd7478",
input={"image_dimensions": "512x512"})
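Calling the wrapper then works like any other LLM: the prompt is sent to the pinned model and the output is returned as a string. A usage sketch reusing the model pinned above (treating the returned string as an image URL is an assumption about this particular model's output):
from langchain.llms import Replicate

# Requires REPLICATE_API_TOKEN in the environment.
replicate_llm = Replicate(
    model=(
        "stability-ai/stable-diffusion:"
        "27b93a2413e7f36cd83da926f3656280b2931564ff050bf9575f1fdf9bcd7478"
    ),
    input={"image_dimensions": "512x512"},
)

# For this model the returned string is assumed to be the generated image URL.
image_url = replicate_llm("a cat wearing a top hat, studio lighting")
print(image_url)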
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.SagemakerEndpoint[source]#
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field content_handler: langchain.llms.sagemaker_endpoint.LLMContentHandler [Required]#
The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
field endpoint_kwargs: Optional[Dict] = None#
Optional attributes passed to the invoke_endpoint
function. See the boto3 docs for more info:
https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
field endpoint_name: str = ''#
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
field model_kwargs: Optional[Dict] = None#
Key word arguments to pass to the model.
field region_name: str = ''#
The AWS region where the Sagemaker model is deployed, e.g. us-west-2.
field verbose: bool [Optional]#
Whether to print out response text.
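No inline example is given above, so the following is a minimal construction sketch. The content handler shown is an assumption about how LLMContentHandler subclasses are typically written; the transform_input/transform_output method names, content types, JSON keys, and the endpoint name are illustrative and must match your deployed model's schema.
import json

from langchain.llms import SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler


class ContentHandler(LLMContentHandler):
    # Assumed JSON-in / JSON-out contract; adjust to your endpoint's schema.
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output) -> str:
        # The response body is assumed to be a JSON list of generations.
        return json.loads(output.read().decode("utf-8"))[0]["generated_text"]


llm = SagemakerEndpoint(
    endpoint_name="my-llm-endpoint",        # hypothetical endpoint name
    region_name="us-west-2",
    credentials_profile_name="default",
    model_kwargs={"temperature": 0.7, "max_new_tokens": 256},
    content_handler=ContentHandler(),
)

print(llm("Summarize what a SageMaker inference endpoint is."))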
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.SelfHostedHuggingFaceLLM[source]#
Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Only supports text-generation, text2text-generation and summarization for now.
Example using from_model_id:
from langchain.llms import SelfHostedHuggingFaceLLM
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceLLM(
model_id="google/flan-t5-large", task="text2text-generation",
hardware=gpu
)
Example passing a function that generates a pipeline (because the pipeline is not serializable):
from langchain.llms import SelfHostedHuggingFaceLLM
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
def get_pipeline():
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer
)
return pipe
hf = SelfHostedHuggingFaceLLM(
model_load_fn=get_pipeline, model_id="gpt2", hardware=gpu)
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field device: int = 0#
Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc.
field hardware: Any = None#
Remote hardware to send the inference function to.
field inference_fn: Callable = <function _generate_text>#
Inference function to send to the remote hardware.
field load_fn_kwargs: Optional[dict] = None#
Key word arguments to pass to the model load function.
field model_id: str = 'gpt2'#
Hugging Face model_id to load the model.
field model_kwargs: Optional[dict] = None#
Key word arguments to pass to the model.
field model_load_fn: Callable = <function _load_transformer>#
Function to load the model remotely on the server.
field model_reqs: List[str] = ['./', 'transformers', 'torch']#
Requirements to install on hardware to inference the model.
field task: str = 'text-generation'#
Hugging Face task ("text-generation", "text2text-generation" or
"summarization").
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) β langchain.llms.base.LLM#
Init the SelfHostedPipeline from a pipeline object or string.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.SelfHostedPipeline[source]#
Run model inference on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example for custom pipeline and inference functions:
from langchain.llms import SelfHostedPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
def load_pipeline():
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
return pipeline(
"text-generation", model=model, tokenizer=tokenizer,
max_new_tokens=10
)
def inference_fn(pipeline, prompt, stop = None):
return pipeline(prompt)[0]["generated_text"]
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
llm = SelfHostedPipeline(
model_load_fn=load_pipeline,
hardware=gpu,
model_reqs=model_reqs, inference_fn=inference_fn
)
Example for <2GB model (can be serialized and sent directly to the server):
from langchain.llms import SelfHostedPipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
my_model = ...
llm = SelfHostedPipeline.from_pipeline(
pipeline=my_model,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Example passing model path for larger models:
from langchain.llms import SelfHostedPipeline
import runhouse as rh
import pickle
from transformers import pipeline
generator = pipeline(model="gpt2")
rh.blob(pickle.dumps(generator), path="models/pipeline.pkl"
).save().to(gpu, path="models")
llm = SelfHostedPipeline.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field hardware: Any = None#
Remote hardware to send the inference function to.
field inference_fn: Callable = <function _generate_text>#
Inference function to send to the remote hardware.
field load_fn_kwargs: Optional[dict] = None#
Key word arguments to pass to the model load function.
field model_load_fn: Callable [Required]#
Function to load the model remotely on the server.
field model_reqs: List[str] = ['./', 'torch']#
Requirements to install on hardware to inference the model.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) β langchain.llms.base.LLM[source]#
Init the SelfHostedPipeline from a pipeline object or string.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.StochasticAI[source]#
Wrapper around StochasticAI large language models.
To use, you should have the environment variable STOCHASTICAI_API_KEY
set with your API key.
Example
from langchain.llms import StochasticAI
stochasticai = StochasticAI(api_url="")
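Once STOCHASTICAI_API_KEY is set and api_url points at a deployed model, the wrapper is called like any other LLM; a short sketch with a hypothetical deployment URL:
from langchain.llms import StochasticAI

# Hypothetical URL of a model deployed on StochasticAI.
llm = StochasticAI(api_url="https://api.stochastic.ai/v1/modelApi/submit/my-model")

print(llm("Explain gradient descent in one sentence."))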
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field api_url: str = ''#
The API URL of the deployed model to use.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.VertexAI[source]#
Wrapper around Google Vertex AI large language models.
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field credentials: Any = None#
The default custom credentials (google.auth.credentials.Credentials) to use
field location: str = 'us-central1'#
The default location to use when making API calls.
field max_output_tokens: int = 128#
Token limit determines the maximum amount of text output from one prompt.
field project: Optional[str] = None#
The default GCP project to use when making Vertex API calls.
field stop: Optional[List[str]] = None#
Optional list of stop words to use when generating.
field temperature: float = 0.0#
Sampling temperature, it controls the degree of randomness in token selection.
field top_k: int = 40#
How the model selects tokens for output; the next token is selected from among the top-k most probable tokens.
field top_p: float = 0.95#
Tokens are selected from most probable to least until the sum of their probabilities equals the top_p value.
field tuned_model_name: Optional[str] = None#
The name of a tuned model. If provided, model_name is ignored.
field verbose: bool [Optional]#
Whether to print out response text.
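No inline example is given above; a minimal sketch, assuming the Vertex AI SDK is installed and Application Default Credentials are configured (the project ID is a placeholder):
from langchain.llms import VertexAI

llm = VertexAI(
    project="my-gcp-project",   # placeholder GCP project ID
    location="us-central1",
    temperature=0.2,
    max_output_tokens=256,
    top_k=40,
    top_p=0.95,
)

print(llm.predict("Write a haiku about data pipelines."))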
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Writer[source]#
Wrapper around Writer large language models.
To use, you should have the environment variables WRITER_API_KEY and
WRITER_ORG_ID set with your API key and organization ID respectively.
Example
from langchain import Writer
writer = Writer(model_id="palmyra-base")
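A short call sketch using a few of the fields listed below (assumes WRITER_API_KEY and WRITER_ORG_ID are set; the prompt is illustrative):
from langchain.llms import Writer

writer = Writer(
    model_id="palmyra-base",
    temperature=0.7,
    max_tokens=128,
)

print(writer("Write a one-line product tagline for a note-taking app."))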
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field base_url: Optional[str] = None#
Base url to use, if None decides based on model name.
field best_of: Optional[int] = None#
Generates this many completions server-side and returns the "best".
field logprobs: bool = False#
Whether to return log probabilities.
field max_tokens: Optional[int] = None#
Maximum number of tokens to generate.
field min_tokens: Optional[int] = None#
Minimum number of tokens to generate.
field model_id: str = 'palmyra-instruct'#
Model name to use.
field n: Optional[int] = None#
How many completions to generate.
field presence_penalty: Optional[float] = None#
Penalizes repeated tokens regardless of frequency.
field repetition_penalty: Optional[float] = None#
Penalizes repeated tokens according to frequency.
field stop: Optional[List[str]] = None#
Sequences when completion generation will stop.
field temperature: Optional[float] = None#
What sampling temperature to use.
field top_p: Optional[float] = None#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
field writer_api_key: Optional[str] = None#
Writer API key.
field writer_org_id: Optional[str] = None#
Writer organization ID.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
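A minimal usage sketch, assuming WRITER_API_KEY and WRITER_ORG_ID are set in the environment; the model parameters and prompt below are illustrative:
from langchain.llms import Writer

# Illustrative parameter values; any of the fields documented above can be passed.
writer = Writer(model_id="palmyra-instruct", temperature=0.7, max_tokens=256)
completion = writer("Write a one-sentence product description for a smart mug.")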
Document Loaders#
All different types of document loaders.
class langchain.document_loaders.AZLyricsLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that loads AZLyrics webpages.
load() β List[langchain.schema.Document][source]#
Load webpage.
class langchain.document_loaders.AirbyteJSONLoader(file_path: str)[source]#
Loader that loads local airbyte json files.
load() β List[langchain.schema.Document][source]#
Load file.
pydantic model langchain.document_loaders.ApifyDatasetLoader[source]#
Logic for loading documents from Apify datasets.
field apify_client: Any = None#
field dataset_id: str [Required]#
The ID of the dataset on the Apify platform.
field dataset_mapping_function: Callable[[Dict], langchain.schema.Document] [Required]#
A custom function that takes a single dictionary (an Apify dataset item)
and converts it to an instance of the Document class.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.ArxivLoader(query: str, load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False)[source]#
Loads a query result from arxiv.org into a list of Documents.
Each document represents one Document.
The loader converts the original PDF format into the text.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.AzureBlobStorageContainerLoader(conn_str: str, container: str, prefix: str = '')[source]#
Loading logic for loading documents from Azure Blob Storage.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.AzureBlobStorageFileLoader(conn_str: str, container: str, blob_name: str)[source]#
Loading logic for loading documents from Azure Blob Storage.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.BSHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]#
Loader that uses beautiful soup to parse HTML files.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.BibtexLoader(file_path: str, *, parser: Optional[langchain.utilities.bibtex.BibtexparserWrapper] = None, max_docs: Optional[int] = None, max_content_chars: Optional[int] = 4000, load_extra_metadata: bool = False, file_pattern: str = '[^:]+\\.pdf')[source]#
Loads a bibtex file into a list of Documents.
Each document represents one entry from the bibtex file.
If a PDF file is present in the file bibtex field, the original PDF
is loaded into the document text. If no such file entry is present,
the abstract field is used instead.
lazy_load() β Iterator[langchain.schema.Document][source]#
Load bibtex file using bibtexparser and get the article texts plus the
article metadata.
See https://bibtexparser.readthedocs.io/en/master/
Returns
a list of documents with the document.page_content in text format
load() β List[langchain.schema.Document][source]#
Load bibtex file documents from the given bibtex file path.
See https://bibtexparser.readthedocs.io/en/master/
Parameters
file_path β the path to the bibtex file
Returns
a list of documents with the document.page_content in text format
class langchain.document_loaders.BigQueryLoader(query: str, project: Optional[str] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None, credentials: Optional[Credentials] = None)[source]#
Loads a query result from BigQuery into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
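A minimal usage sketch; the project, dataset, table, and column names are illustrative, and the google-cloud-bigquery package plus credentials are assumed to be available:
from langchain.document_loaders import BigQueryLoader

loader = BigQueryLoader(
    query="SELECT title, body FROM `my_project.my_dataset.articles`",  # illustrative query
    page_content_columns=["body"],
    metadata_columns=["title"],
)
docs = loader.load()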
class langchain.document_loaders.BiliBiliLoader(video_urls: List[str])[source]#
Loader that loads bilibili transcripts.
load() β List[langchain.schema.Document][source]#
Load from bilibili url.
class langchain.document_loaders.BlackboardLoader(blackboard_course_url: str, bbrouter: str, load_all_recursively: bool = True, basic_auth: Optional[Tuple[str, str]] = None, cookies: Optional[dict] = None)[source]#
Loader that loads all documents from a Blackboard course.
This loader is not compatible with all Blackboard courses. It is only
compatible with courses that use the new Blackboard interface.
To use this loader, you must have the BbRouter cookie. You can get this
cookie by logging into the course and then copying the value of the
BbRouter cookie from the browserβs developer tools.
Example
from langchain.document_loaders import BlackboardLoader
loader = BlackboardLoader(
blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",
bbrouter="expires:12345...",
)
documents = loader.load()
base_url: str#
check_bs4() β None[source]#
Check if BeautifulSoup4 is installed.
Raises
ImportError β If BeautifulSoup4 is not installed.
download(path: str) β None[source]#
Download a file from a url.
Parameters
path β Path to the file.
folder_path: str#
load() β List[langchain.schema.Document][source]#
Load data into document objects.
Returns
List of documents.
load_all_recursively: bool#
parse_filename(url: str) β str[source]#
Parse the filename from a url.
Parameters
url β Url to parse the filename from.
Returns
The filename.
class langchain.document_loaders.BlockchainDocumentLoader(contract_address: str, blockchainType: langchain.document_loaders.blockchain.BlockchainType = BlockchainType.ETH_MAINNET, api_key: str = 'docs-demo', startToken: str = '', get_all_tokens: bool = False, max_execution_time: Optional[int] = None)[source]#
Loads elements from a blockchain smart contract into Langchain documents.
The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,
Polygon mainnet, and Polygon Mumbai testnet.
If no BlockchainType is specified, the default is Ethereum mainnet.
The Loader uses the Alchemy API to interact with the blockchain.
ALCHEMY_API_KEY environment variable must be set to use this loader.
The API returns 100 NFTs per request and can be paginated using the
startToken parameter.
If get_all_tokens is set to True, the loader will get all tokens
on the contract. Note that for contracts with a large number of tokens,
this may take a long time (e.g. 10k tokens is 100 requests).
Default value is false for this reason.
The max_execution_time (sec) can be set to limit the execution time
of the loader.
Future versions of this loader can:
Support additional Alchemy APIs (e.g. getTransactions, etc.)
Support additional blockchain APIs (e.g. Infura, Opensea, etc.)
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.CSVLoader(file_path: str, source_column: Optional[str] = None, csv_args: Optional[Dict] = None, encoding: Optional[str] = None)[source]#
Loads a CSV file into a list of documents.
Each document represents one row of the CSV file. Every row is converted into a
key/value pair and outputted to a new line in the document's page_content.
The source for each document loaded from csv is set to the value of the
file_path argument for all documents by default.
You can override this by setting the source_column argument to the
name of a column in the CSV file.
The source of each document will then be set to the value of the column
with the name specified in source_column.
Output Example:
column1: value1
column2: value2
column3: value3
load() β List[langchain.schema.Document][source]#
Load data into document objects.
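A minimal usage sketch; the file path and column name are illustrative:
from langchain.document_loaders import CSVLoader

loader = CSVLoader(
    file_path="data/products.csv",  # illustrative path
    source_column="product_url",    # illustrative column used as each document's source
)
docs = loader.load()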
class langchain.document_loaders.ChatGPTLoader(log_file: str, num_logs: int = - 1)[source]#
Loader that loads conversations from exported ChatGPT data.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.CoNLLULoader(file_path: str)[source]#
Load CoNLL-U files.
load() β List[langchain.schema.Document][source]#
Load from file path.
class langchain.document_loaders.CollegeConfidentialLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that loads College Confidential webpages.
load() β List[langchain.schema.Document][source]#
Load webpage.
class langchain.document_loaders.ConfluenceLoader(url: str, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None, cloud: Optional[bool] = True, number_of_retries: Optional[int] = 3, min_retry_seconds: Optional[int] = 2, max_retry_seconds: Optional[int] = 10, confluence_kwargs: Optional[dict] = None)[source]#
Load Confluence pages. Port of https://llamahub.ai/l/confluence
This currently supports username/api_key, Oauth2 login or personal access token
authentication.
Specify a list page_ids and/or space_key to load in the corresponding pages into
Document objects, if both are specified the union of both sets will be returned.
You can also specify a boolean include_attachments to include attachments, this
is set to False by default, if set to True all attachments will be downloaded and
ConfluenceReader will extract the text from the attachments and add it to the
Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG,
SVG, Word and Excel.
Hint: space_key and page_id can both be found in the URL of a page in Confluence
- https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
Example
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="https://yoursite.atlassian.com/wiki",
username="me",
api_key="12345"
)
documents = loader.load(space_key="SPACE",limit=50)
Parameters
url (str) β _description_
api_key (str, optional) β _description_, defaults to None
username (str, optional) β _description_, defaults to None
oauth2 (dict, optional) β _description_, defaults to {}
token (str, optional) β _description_, defaults to None
cloud (bool, optional) β _description_, defaults to True
number_of_retries (Optional[int], optional) β How many times to retry, defaults to 3
min_retry_seconds (Optional[int], optional) β defaults to 2
max_retry_seconds (Optional[int], optional) β defaults to 10
confluence_kwargs (dict, optional) β additional kwargs to initialize confluence with
Raises
ValueError β Errors while validating input
ImportError β Required dependencies not installed.
is_public_page(page: dict) β bool[source]#
Check if a page is publicly accessible.
load(space_key: Optional[str] = None, page_ids: Optional[List[str]] = None, label: Optional[str] = None, cql: Optional[str] = None, include_restricted_content: bool = False, include_archived_content: bool = False, include_attachments: bool = False, include_comments: bool = False, limit: Optional[int] = 50, max_pages: Optional[int] = 1000) β List[langchain.schema.Document][source]#
Parameters
space_key (Optional[str], optional) β Space key retrieved from a confluence URL, defaults to None
page_ids (Optional[List[str]], optional) β List of specific page IDs to load, defaults to None
label (Optional[str], optional) β Get all pages with this label, defaults to None
cql (Optional[str], optional) β CQL Expression, defaults to None
include_restricted_content (bool, optional) β defaults to False
include_archived_content (bool, optional) β Whether to include archived content,
defaults to False
include_attachments (bool, optional) β defaults to False
include_comments (bool, optional) β defaults to False
limit (int, optional) β Maximum number of pages to retrieve per request, defaults to 50
max_pages (int, optional) β Maximum number of pages to retrieve in total, defaults 1000
Raises
ValueError β _description_
ImportError β _description_
Returns
_description_
Return type
List[Document]
paginate_request(retrieval_method: Callable, **kwargs: Any) β List[source]#
Paginate the various methods to retrieve groups of pages.
Unfortunately, due to page size, sometimes the Confluence API
doesn't match the limit value. If limit is >100, Confluence
seems to cap the response at 100. Also, due to the Atlassian Python
package, we don't get the "next" values from the "_links" key because
they only return the value from the results key. So here, the pagination
starts from 0 and goes until max_pages, getting the limit number
of pages with each request. We have to manually check if there
are more docs based on the length of the returned list of pages, rather than
just checking for the presence of a next key in the response, as this page
would have you do:
https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/
Parameters
retrieval_method (callable) β Function used to retrieve docs
Returns
List of documents
Return type
List
process_attachment(page_id: str) β List[str][source]#
process_doc(link: str) β str[source]#
process_image(link: str) β str[source]#
process_page(page: dict, include_attachments: bool, include_comments: bool) β langchain.schema.Document[source]#
process_pages(pages: List[dict], include_restricted_content: bool, include_attachments: bool, include_comments: bool) β List[langchain.schema.Document][source]#
Process a list of pages into a list of documents.
process_pdf(link: str) β str[source]#
process_svg(link: str) β str[source]#
process_xls(link: str) β str[source]#
static validate_init_args(url: Optional[str] = None, api_key: Optional[str] = None, username: Optional[str] = None, oauth2: Optional[dict] = None, token: Optional[str] = None) β Optional[List][source]#
Validates proper combinations of init arguments
class langchain.document_loaders.DataFrameLoader(data_frame: Any, page_content_column: str = 'text')[source]#
Load Pandas DataFrames.
load() β List[langchain.schema.Document][source]#
Load from the dataframe.
class langchain.document_loaders.DiffbotLoader(api_token: str, urls: List[str], continue_on_failure: bool = True)[source]#
Loader that loads Diffbot file json.
load() β List[langchain.schema.Document][source]#
Extract text from Diffbot on all the URLs and return Document instances
class langchain.document_loaders.DirectoryLoader(path: str, glob: str = '**/[!.]*', silent_errors: bool = False, load_hidden: bool = False, loader_cls: typing.Union[typing.Type[langchain.document_loaders.unstructured.UnstructuredFileLoader], typing.Type[langchain.document_loaders.text.TextLoader], typing.Type[langchain.document_loaders.html_bs.BSHTMLLoader]] = <class 'langchain.document_loaders.unstructured.UnstructuredFileLoader'>, loader_kwargs: typing.Optional[dict] = None, recursive: bool = False, show_progress: bool = False, use_multithreading: bool = False, max_concurrency: int = 4)[source]#
Loading logic for loading documents from a directory.
load() β List[langchain.schema.Document][source]#
Load documents.
load_file(item: pathlib.Path, path: pathlib.Path, docs: List[langchain.schema.Document], pbar: Optional[Any]) β None[source]#
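A minimal usage sketch; the directory and glob pattern are illustrative:
from langchain.document_loaders import DirectoryLoader, TextLoader

loader = DirectoryLoader(
    "docs/",                # illustrative directory
    glob="**/*.md",         # illustrative glob pattern
    loader_cls=TextLoader,  # use TextLoader instead of the default UnstructuredFileLoader
    show_progress=True,
)
docs = loader.load()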
class langchain.document_loaders.DiscordChatLoader(chat_log: pd.DataFrame, user_id_col: str = 'ID')[source]#
Load Discord chat logs.
load() β List[langchain.schema.Document][source]#
Load all chat messages.
pydantic model langchain.document_loaders.DocugamiLoader[source]#
Loader that loads processed docs from Docugami.
To use, you should have the lxml python package installed.
field access_token: Optional[str] = None#
field api: str = 'https://api.docugami.com/v1preview1'#
field docset_id: Optional[str] = None#
field document_ids: Optional[Sequence[str]] = None#
field file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = None#
field min_chunk_size: int = 32#
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.Docx2txtLoader(file_path: str)[source]#
Loads a DOCX with docx2txt and chunks at character level.
Defaults to check for local file, but if the file is a web path, it will download it
to a temporary file, and use that, then clean up the temporary file after completion
load() β List[langchain.schema.Document][source]#
Load given path as single page.
class langchain.document_loaders.DuckDBLoader(query: str, database: str = ':memory:', read_only: bool = False, config: Optional[Dict[str, str]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]#
Loads a query result from DuckDB into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
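A minimal usage sketch; the query and CSV file are illustrative:
from langchain.document_loaders import DuckDBLoader

loader = DuckDBLoader(
    query="SELECT text, author FROM read_csv_auto('posts.csv')",  # illustrative query
    page_content_columns=["text"],
    metadata_columns=["author"],
)
docs = loader.load()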
class langchain.document_loaders.EverNoteLoader(file_path: str, load_single_document: bool = True)[source]#
EverNote Loader.
Loads an EverNote notebook export file e.g. my_notebook.enex into Documents.
Instructions on producing this file can be found at
https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML
Currently only the plain text in the note is extracted and stored as the contents
of the Document; any non-content metadata (e.g. "author", "created", "updated", etc.,
but not "content-raw" or "resource") tags on the note will be extracted and stored
as metadata on the Document.
Parameters
file_path (str) – The path to the notebook export with a .enex extension
load_single_document (bool) – Whether or not to concatenate the content of all
notes into a single long Document. If this is set to True, the only metadata on the
document will be the "source", which contains the file name of the export.
load() β List[langchain.schema.Document][source]#
Load documents from EverNote export file.
class langchain.document_loaders.FacebookChatLoader(path: str)[source]#
Loader that loads Facebook messages json directory dump.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.FaunaLoader(query: str, page_content_field: str, secret: str, metadata_fields: Optional[Sequence[str]] = None)[source]#
query#
The FQL query string to execute.
Type
str
page_content_field#
The field that contains the content of each page.
Type
str
secret#
The secret key for authenticating to FaunaDB.
Type
str
metadata_fields#
Optional list of field names to include in metadata.
Type
Optional[Sequence[str]]
lazy_load() β Iterator[langchain.schema.Document][source]#
A lazy loader for document content.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.FigmaFileLoader(access_token: str, ids: str, key: str)[source]#
Loader that loads Figma file json.
load() β List[langchain.schema.Document][source]#
Load file
class langchain.document_loaders.GCSDirectoryLoader(project_name: str, bucket: str, prefix: str = '')[source]#
Loading logic for loading documents from GCS.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.GCSFileLoader(project_name: str, bucket: str, blob: str)[source]#
Loading logic for loading documents from GCS.
load() β List[langchain.schema.Document][source]#
Load documents.
pydantic model langchain.document_loaders.GitHubIssuesLoader[source]#
Validators
validate_environment Β» all fields
validate_since Β» since
field assignee: Optional[str] = None#
Filter on assigned user. Pass "none" for no user and "*" for any user.
field creator: Optional[str] = None#
Filter on the user that created the issue.
field direction: Optional[Literal['asc', 'desc']] = None#
The direction to sort the results by. Can be one of: "asc", "desc".
field include_prs: bool = True#
If True include Pull Requests in results, otherwise ignore them.
field labels: Optional[List[str]] = None#
Label names to filter on. Example: bug,ui,@high.
field mentioned: Optional[str] = None#
Filter on a user that's mentioned in the issue.
field milestone: Optional[Union[int, Literal['*', 'none']]] = None#
If an integer is passed, it should be a milestone's number field.
If the string "*" is passed, issues with any milestone are accepted.
If the string "none" is passed, issues without milestones are returned.
field since: Optional[str] = None#
Only show notifications updated after the given time.
This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.
field sort: Optional[Literal['created', 'updated', 'comments']] = None#
What to sort results by. Can be one of: "created", "updated", "comments".
Default is "created".
field state: Optional[Literal['open', 'closed', 'all']] = None#
Filter on issue state. Can be one of: "open", "closed", "all".
lazy_load() β Iterator[langchain.schema.Document][source]#
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
load() β List[langchain.schema.Document][source]#
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
parse_issue(issue: dict) β langchain.schema.Document[source]#
Create Document objects from a list of GitHub issues.
property query_params: str#
property url: str#
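A minimal usage sketch; the repository name is illustrative, and the repo constructor argument and the GITHUB_PERSONAL_ACCESS_TOKEN environment variable are assumptions not listed among the fields above:
from langchain.document_loaders import GitHubIssuesLoader

loader = GitHubIssuesLoader(
    repo="hwchase17/langchain",  # assumed owner/repo argument; illustrative value
    state="open",
    include_prs=False,
)
docs = loader.load()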
class langchain.document_loaders.GitLoader(repo_path: str, clone_url: Optional[str] = None, branch: Optional[str] = 'main', file_filter: Optional[Callable[[str], bool]] = None)[source]#
Loads files from a Git repository into a list of documents.
Repository can be local on disk available at repo_path,
or remote at clone_url that will be cloned to repo_path.
Currently supports only text files.
Each document represents one file in the repository. The path points to
the local Git repository, and the branch specifies the branch to load
files from. By default, it loads from the main branch.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
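A minimal usage sketch; the local path, clone URL, and file filter are illustrative:
from langchain.document_loaders import GitLoader

loader = GitLoader(
    repo_path="./example_repo",                          # illustrative local checkout path
    clone_url="https://github.com/hwchase17/langchain",  # illustrative remote; omit to use an existing local repo
    branch="main",
    file_filter=lambda file_path: file_path.endswith(".py"),
)
docs = loader.load()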
class langchain.document_loaders.GitbookLoader(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main')[source]#
Load GitBook data.
load from either a single page, or
load all (relative) paths in the navbar.
load() β List[langchain.schema.Document][source]#
Fetch text from one single GitBook page.
class langchain.document_loaders.GoogleApiClient(credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json'), service_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json'), token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json'))[source]#
A Generic Google Api Client.
To use, you should have the google_auth_oauthlib, youtube_transcript_api, and google
python packages installed.
As the google api expects credentials, you need to set up a google account and
register your Service. "https://developers.google.com/docs/api/quickstart/python"
Example
from langchain.document_loaders import GoogleApiClient
google_api_client = GoogleApiClient(
service_account_path=Path("path_to_your_sec_file.json")
)
credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')#
service_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')#
token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')#
classmethod validate_channel_or_videoIds_is_set(values: Dict[str, Any]) β Dict[str, Any][source]#
Validate that either folder_id or document_ids is set, but not both.
class langchain.document_loaders.GoogleApiYoutubeLoader(google_api_client: langchain.document_loaders.youtube.GoogleApiClient, channel_name: Optional[str] = None, video_ids: Optional[List[str]] = None, add_video_info: bool = True, captions_language: str = 'en', continue_on_failure: bool = False)[source]#
Loader that loads all Videos from a Channel
To use, you should have the googleapiclient and youtube_transcript_api
python packages installed.
As the service needs a google_api_client, you first have to initialize
the GoogleApiClient.
Additionally you have to either provide a channel name or a list of video ids.
"https://developers.google.com/docs/api/quickstart/python"
Example
from langchain.document_loaders import GoogleApiClient
from langchain.document_loaders import GoogleApiYoutubeLoader
google_api_client = GoogleApiClient(
service_account_path=Path("path_to_your_sec_file.json")
)
loader = GoogleApiYoutubeLoader(
google_api_client=google_api_client,
channel_name = "CodeAesthetic"
)
loader.load()
add_video_info: bool = True#
captions_language: str = 'en'#
channel_name: Optional[str] = None#
continue_on_failure: bool = False#
google_api_client: langchain.document_loaders.youtube.GoogleApiClient#
load() β List[langchain.schema.Document][source]#
Load documents.
classmethod validate_channel_or_videoIds_is_set(values: Dict[str, Any]) β Dict[str, Any][source]#
Validate that either folder_id or document_ids is set, but not both.
video_ids: Optional[List[str]] = None#
pydantic model langchain.document_loaders.GoogleDriveLoader[source]#
Loader that loads Google Docs from Google Drive.
Validators
validate_credentials_path Β» credentials_path
validate_inputs Β» all fields
field credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')#
field document_ids: Optional[List[str]] = None#
field file_ids: Optional[List[str]] = None#
field file_types: Optional[Sequence[str]] = None#
field folder_id: Optional[str] = None#
field load_trashed_files: bool = False#
field recursive: bool = False#
field service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json')#
field token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')#
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.GutenbergLoader(file_path: str)[source]#
Loader that uses urllib to load .txt web files.
load() β List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.HNLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Load Hacker News data from either main page results or the comments page.
load() β List[langchain.schema.Document][source]#
Get important HN webpage information.
Components are:
title
content
source url,
time of post
author of the post
number of comments
rank of the post
load_comments(soup_info: Any) β List[langchain.schema.Document][source]#
Load comments from a HN post.
load_results(soup: Any) β List[langchain.schema.Document][source]#
Load items from an HN page.
class langchain.document_loaders.HuggingFaceDatasetLoader(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)[source]#
Loading logic for loading documents from the Hugging Face Hub.
lazy_load() β Iterator[langchain.schema.Document][source]#
Load documents lazily.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.IFixitLoader(web_path: str)[source]#
Load iFixit repair guides, device wikis and answers.
iFixit is the largest, open repair community on the web. The site contains nearly
100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is
licensed under CC-BY.
This loader will allow you to download the text of a repair guide, text of Q&As
and wikis from devices on iFixit using their open APIs and web scraping.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
load_device(url_override: Optional[str] = None, include_guides: bool = True) β List[langchain.schema.Document][source]#
load_guide(url_override: Optional[str] = None) β List[langchain.schema.Document][source]#
load_questions_and_answers(url_override: Optional[str] = None) β List[langchain.schema.Document][source]#
static load_suggestions(query: str = '', doc_type: str = 'all') β List[langchain.schema.Document][source]#
class langchain.document_loaders.IMSDbLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that loads IMSDb webpages.
load() β List[langchain.schema.Document][source]#
Load webpage.
class langchain.document_loaders.ImageCaptionLoader(path_images: Union[str, List[str]], blip_processor: str = 'Salesforce/blip-image-captioning-base', blip_model: str = 'Salesforce/blip-image-captioning-base')[source]#
Loader that loads the captions of an image
load() β List[langchain.schema.Document][source]#
Load from a list of image files
class langchain.document_loaders.IuguLoader(resource: str, api_token: Optional[str] = None)[source]#
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.JSONLoader(file_path: Union[str, pathlib.Path], jq_schema: str, content_key: Optional[str] = None, metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None, text_content: bool = True)[source]#
Loads a JSON file and references a jq schema provided to load the text into
documents.
Example
[{"text": ...}, {"text": ...}, {"text": ...}] -> schema = .[].text
{"key": [{"text": ...}, {"text": ...}, {"text": ...}]} -> schema = .key[].text
["", "", ""] -> schema = .[]
load() β List[langchain.schema.Document][source]#
Load and return documents from the JSON file.
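A minimal usage sketch; the file path and jq expression are illustrative, and the jq python package is assumed to be installed:
from langchain.document_loaders import JSONLoader

loader = JSONLoader(
    file_path="chat_log.json",        # illustrative path
    jq_schema=".messages[].content",  # illustrative jq expression selecting the text
)
docs = loader.load()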
class langchain.document_loaders.JoplinLoader(access_token: Optional[str] = None, port: int = 41184, host: str = 'localhost')[source]#
Loader that fetches notes from Joplin.
In order to use this loader, you need to have Joplin running with the
Web Clipper enabled (look for "Web Clipper" in the app settings).
To get the access token, you need to go to the Web Clipper options and
under "Advanced Options" you will find the access token.
You can find more information about the Web Clipper service here:
https://joplinapp.org/clipper/
lazy_load() β Iterator[langchain.schema.Document][source]#
A lazy loader for document content.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.MWDumpLoader(file_path: str, encoding: Optional[str] = 'utf8')[source]#
Load MediaWiki dump from XML file
Example
from langchain.document_loaders import MWDumpLoader
loader = MWDumpLoader(
file_path="myWiki.xml",
encoding="utf8"
)
docs = loader.load()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000, chunk_overlap=0
)
texts = text_splitter.split_documents(docs)
Parameters
file_path (str) β XML local file path
encoding (str, optional) β Charset encoding, defaults to βutf8β
load() → List[langchain.schema.Document][source]#
Load from file path.
class langchain.document_loaders.MastodonTootsLoader(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]#
Mastodon toots loader.
load() β List[langchain.schema.Document][source]#
Load toots into documents.
class langchain.document_loaders.MathpixPDFLoader(file_path: str, processed_file_format: str = 'mmd', max_wait_time_seconds: int = 500, should_clean_pdf: bool = False, **kwargs: Any)[source]#
clean_pdf(contents: str) β str[source]#
property data: dict#
get_processed_pdf(pdf_id: str) β str[source]#
property headers: dict#
load() β List[langchain.schema.Document][source]#
Load data into document objects.
send_pdf() β str[source]#
property url: str#
wait_for_processing(pdf_id: str) β None[source]#
class langchain.document_loaders.MaxComputeLoader(query: str, api_wrapper: langchain.utilities.max_compute.MaxComputeAPIWrapper, *, page_content_columns: Optional[Sequence[str]] = None, metadata_columns: Optional[Sequence[str]] = None)[source]#
Loads a query result from Alibaba Cloud MaxCompute table into documents.
classmethod from_params(query: str, endpoint: str, project: str, *, access_id: Optional[str] = None, secret_access_key: Optional[str] = None, **kwargs: Any) β langchain.document_loaders.max_compute.MaxComputeLoader[source]#
Convenience constructor that builds the MaxCompute API wrapper from given parameters.
Parameters
query β SQL query to execute.
endpoint β MaxCompute endpoint.
project β A project is a basic organizational unit of MaxCompute, which is
similar to a database.
access_id β MaxCompute access ID. Should be passed in directly or set as the
environment variable MAX_COMPUTE_ACCESS_ID.
secret_access_key β MaxCompute secret access key. Should be passed in
directly or set as the environment variable
MAX_COMPUTE_SECRET_ACCESS_KEY.
lazy_load() β Iterator[langchain.schema.Document][source]#
A lazy loader for document content.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.ModernTreasuryLoader(resource: str, organization_id: Optional[str] = None, api_key: Optional[str] = None)[source]#
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.NotebookLoader(path: str, include_outputs: bool = False, max_output_length: int = 10, remove_newline: bool = False, traceback: bool = False)[source]#
Loader that loads .ipynb notebook files.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.NotionDBLoader(integration_token: str, database_id: str, request_timeout_sec: Optional[int] = 10)[source]#
Notion DB Loader.
Reads content from pages within a Notion Database.
:param integration_token: Notion integration token.
:type integration_token: str
:param database_id: Notion database id.
:type database_id: str
:param request_timeout_sec: Timeout for Notion requests in seconds.
:type request_timeout_sec: int
load() β List[langchain.schema.Document][source]#
Load documents from the Notion database.
:returns: List of documents.
:rtype: List[Document]
load_page(page_id: str) β langchain.schema.Document[source]#
Read a page.
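A minimal usage sketch; the token and database id are illustrative placeholders:
from langchain.document_loaders import NotionDBLoader

loader = NotionDBLoader(
    integration_token="secret-token",  # illustrative placeholder
    database_id="your-database-id",    # illustrative placeholder
    request_timeout_sec=30,
)
docs = loader.load()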
class langchain.document_loaders.NotionDirectoryLoader(path: str)[source]#
Loader that loads Notion directory dump.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.ObsidianLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]#
Loader that loads Obsidian files from disk.
FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL)#
load() β List[langchain.schema.Document][source]#
Load documents.
pydantic model langchain.document_loaders.OneDriveFileLoader[source]#
field file: File [Required]#
load() β List[langchain.schema.Document][source]#
Load Documents
pydantic model langchain.document_loaders.OneDriveLoader[source]#
field auth_with_token: bool = False#
field drive_id: str [Required]#
field folder_path: Optional[str] = None#
field object_ids: Optional[List[str]] = None#
field settings: langchain.document_loaders.onedrive._OneDriveSettings [Optional]#
load() β List[langchain.schema.Document][source]#
Loads all supported document files from the specified OneDrive drive and
returns a list of Document objects.
Returns
A list of Document objects
representing the loaded documents.
Return type
List[Document]
Raises
ValueError β If the specified drive ID
does not correspond to a drive in the OneDrive storage. β
class langchain.document_loaders.OnlinePDFLoader(file_path: str)[source]#
Loader that loads online PDFs.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.OutlookMessageLoader(file_path: str)[source]#
Loader that loads Outlook Message files using extract_msg.
TeamMsgExtractor/msg-extractor
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.PDFMinerLoader(file_path: str)[source]#
Loader that uses PDFMiner to load PDF files.
lazy_load() β Iterator[langchain.schema.Document][source]#
Lazily load documents.
load() β List[langchain.schema.Document][source]#
Eagerly load the content.
class langchain.document_loaders.PDFMinerPDFasHTMLLoader(file_path: str)[source]#
Loader that uses PDFMiner to load PDF files as HTML content.
load() β List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.PDFPlumberLoader(file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None)[source]#
Loader that uses pdfplumber to load PDF files.
load() β List[langchain.schema.Document][source]#
Load file.
langchain.document_loaders.PagedPDFSplitter#
alias of langchain.document_loaders.pdf.PyPDFLoader
class langchain.document_loaders.PlaywrightURLLoader(urls: List[str], continue_on_failure: bool = True, headless: bool = True, remove_selectors: Optional[List[str]] = None)[source]#
Loader that uses Playwright to load a page, and unstructured to load the html.
This is useful for loading pages that require javascript to render.
urls#
List of URLs to load.
Type
List[str]
continue_on_failure#
If True, continue loading other URLs on failure.
Type
bool
headless#
If True, the browser will run in headless mode.
Type
bool
load() β List[langchain.schema.Document][source]#
Load the specified URLs using Playwright and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
class langchain.document_loaders.PsychicLoader(api_key: str, connector_id: str, connection_id: str)[source]#
Loader that loads documents from Psychic.dev.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.PyMuPDFLoader(file_path: str)[source]#
Loader that uses PyMuPDF to load PDF files.
load(**kwargs: Optional[Any]) β List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.PyPDFDirectoryLoader(path: str, glob: str = '**/[!.]*.pdf', silent_errors: bool = False, load_hidden: bool = False, recursive: bool = False)[source]#
Loads a directory with PDF files with pypdf and chunks at character level.
Loader also stores page numbers in metadatas.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.PyPDFLoader(file_path: str)[source]#
Loads a PDF with pypdf and chunks at character level.
Loader also stores page numbers in metadatas.
lazy_load() β Iterator[langchain.schema.Document][source]#
Lazy load given path as pages.
load() β List[langchain.schema.Document][source]#
Load given path as pages.
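A minimal usage sketch; the file path is illustrative and the pypdf package is assumed to be installed:
from langchain.document_loaders import PyPDFLoader

loader = PyPDFLoader("report.pdf")  # illustrative path
pages = loader.load()               # one Document per page, with page numbers in metadata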
class langchain.document_loaders.PyPDFium2Loader(file_path: str)[source]#
Loads a PDF with pypdfium2 and chunks at character level.
lazy_load() β Iterator[langchain.schema.Document][source]#
Lazy load given path as pages.
load() β List[langchain.schema.Document][source]#
Load given path as pages.
class langchain.document_loaders.PySparkDataFrameLoader(spark_session: Optional[SparkSession] = None, df: Optional[Any] = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]#
Load PySpark DataFrames
get_num_rows() β Tuple[int, int][source]#
Gets the number of "feasible" rows for the DataFrame
lazy_load() β Iterator[langchain.schema.Document][source]#
A lazy loader for document content.
load() β List[langchain.schema.Document][source]#
Load from the dataframe.
class langchain.document_loaders.PythonLoader(file_path: str)[source]#
Load Python files, respecting any non-default encoding if specified.
class langchain.document_loaders.ReadTheDocsLoader(path: Union[str, pathlib.Path], encoding: Optional[str] = None, errors: Optional[str] = None, custom_html_tag: Optional[Tuple[str, dict]] = None, **kwargs: Optional[Any])[source]#
Loader that loads ReadTheDocs documentation directory dump.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.RedditPostsLoader(client_id: str, client_secret: str, user_agent: str, search_queries: Sequence[str], mode: str, categories: Sequence[str] = ['new'], number_posts: Optional[int] = 10)[source]#
Reddit posts loader.
Read posts on a subreddit.
First you need to go to
https://www.reddit.com/prefs/apps/
and create your application
load() β List[langchain.schema.Document][source]#
Load reddits.
class langchain.document_loaders.RoamLoader(path: str)[source]#
Loader that loads Roam files from disk.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.S3DirectoryLoader(bucket: str, prefix: str = '')[source]#
Loading logic for loading documents from s3.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.S3FileLoader(bucket: str, key: str)[source]#
Loading logic for loading documents from s3.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.SRTLoader(file_path: str)[source]#
Loader for .srt (subtitle) files.
load() β List[langchain.schema.Document][source]#
Load using pysrt file.
class langchain.document_loaders.SeleniumURLLoader(urls: List[str], continue_on_failure: bool = True, browser: Literal['chrome', 'firefox'] = 'chrome', binary_location: Optional[str] = None, executable_path: Optional[str] = None, headless: bool = True, arguments: List[str] = [])[source]#
Loader that uses Selenium to load a page, and unstructured to load the html.
This is useful for loading pages that require javascript to render.
urls#
List of URLs to load.
Type
List[str]
continue_on_failure#
If True, continue loading other URLs on failure.
Type
bool
browser#
The browser to use, either "chrome" or "firefox".
Type
str
binary_location#
The location of the browser binary.
Type
Optional[str]
executable_path#
The path to the browser executable.
Type
Optional[str]
headless#
If True, the browser will run in headless mode.
Type
bool
arguments [List[str]]
List of arguments to pass to the browser.
load() β List[langchain.schema.Document][source]#
Load the specified URLs using Selenium and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
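A minimal usage sketch; the URL is illustrative and a matching browser driver is assumed to be available:
from langchain.document_loaders import SeleniumURLLoader

loader = SeleniumURLLoader(
    urls=["https://example.com"],  # illustrative URL
    browser="firefox",
    headless=True,
)
docs = loader.load()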
class langchain.document_loaders.SitemapLoader(web_path: str, filter_urls: Optional[List[str]] = None, parsing_function: Optional[Callable] = None, blocksize: Optional[int] = None, blocknum: int = 0, meta_function: Optional[Callable] = None, is_local: bool = False)[source]#
Loader that fetches a sitemap and loads those URLs.
load() β List[langchain.schema.Document][source]#
Load sitemap.
parse_sitemap(soup: Any) β List[dict][source]#
Parse sitemap xml and load into a list of dicts.
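A minimal usage sketch; the sitemap URL and filter pattern are illustrative:
from langchain.document_loaders import SitemapLoader

loader = SitemapLoader(
    web_path="https://example.com/sitemap.xml",   # illustrative sitemap URL
    filter_urls=["https://example.com/blog/.*"],  # illustrative filter pattern
)
docs = loader.load()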
class langchain.document_loaders.SlackDirectoryLoader(zip_path: str, workspace_url: Optional[str] = None)[source]#
Loader for loading documents from a Slack directory dump.
load() β List[langchain.schema.Document][source]#
Load and return documents from the Slack directory dump.
class langchain.document_loaders.SpreedlyLoader(access_token: str, resource: str)[source]#
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.StripeLoader(resource: str, access_token: Optional[str] = None)[source]#
load() β List[langchain.schema.Document][source]#
Load data into document objects.
class langchain.document_loaders.TelegramChatApiLoader(chat_entity: Optional[EntityLike] = None, api_id: Optional[int] = None, api_hash: Optional[str] = None, username: Optional[str] = None, file_path: str = 'telegram_data.json')[source]#
Loader that loads Telegram chat json directory dump.
async fetch_data_from_telegram() β None[source]#
Fetch data from Telegram API and save it as a JSON file.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.TelegramChatFileLoader(path: str)[source]#
Loader that loads Telegram chat json directory dump.
load() β List[langchain.schema.Document][source]#
Load documents.
langchain.document_loaders.TelegramChatLoader#
alias of langchain.document_loaders.telegram.TelegramChatFileLoader
class langchain.document_loaders.TextLoader(file_path: str, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]#
Load text files.
Parameters
file_path β Path to the file to load.
encoding β File encoding to use. If None, the file will be loaded
encoding. (with the default system) β
autodetect_encoding β Whether to try to autodetect the file encoding
if the specified encoding fails.
load() β List[langchain.schema.Document][source]#
Load from file path.
class langchain.document_loaders.ToMarkdownLoader(url: str, api_key: str)[source]#
Loader that loads HTML to markdown using 2markdown.
lazy_load() β Iterator[langchain.schema.Document][source]#
Lazily load the file.
load() β List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.TomlLoader(source: Union[str, pathlib.Path])[source]#
A TOML document loader that inherits from the BaseLoader class.
This class can be initialized with either a single source file or a source
directory containing TOML files.
lazy_load() β Iterator[langchain.schema.Document][source]#
Lazily load the TOML documents from the source file or directory.
load() β List[langchain.schema.Document][source]#
Load and return all documents.
class langchain.document_loaders.TrelloLoader(client: TrelloClient, board_name: str, *, include_card_name: bool = True, include_comments: bool = True, include_checklist: bool = True, card_filter: Literal['closed', 'open', 'all'] = 'all', extra_metadata: Tuple[str, ...] = ('due_date', 'labels', 'list', 'closed'))[source]#
Trello loader. Reads all cards from a Trello board.
classmethod from_credentials(board_name: str, *, api_key: Optional[str] = None, token: Optional[str] = None, **kwargs: Any) β langchain.document_loaders.trello.TrelloLoader[source]#
Convenience constructor that builds TrelloClient init param for you.
Parameters
board_name – The name of the Trello board.
api_key – Trello API key. Can also be specified as environment variable
TRELLO_API_KEY.
token – Trello token. Can also be specified as environment variable
TRELLO_TOKEN.
include_card_name – Whether to include the name of the card in the document.
include_comments – Whether to include the comments on the card in the
document.
include_checklist – Whether to include the checklist on the card in the
document.
card_filter – Filter on card status. Valid values are "closed", "open",
"all".
extra_metadata – List of additional metadata fields to include as document
metadata. Valid values are "due_date", "labels", "list", "closed".
load() β List[langchain.schema.Document][source]#
Loads all cards from the specified Trello board.
You can filter the cards, metadata and text included by using the optional
parameters.
Returns: A list of documents, one for each card in the board.
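A minimal usage sketch; the board name is illustrative and the key/token placeholders can instead come from the TRELLO_API_KEY and TRELLO_TOKEN environment variables:
from langchain.document_loaders import TrelloLoader

loader = TrelloLoader.from_credentials(
    "Roadmap",           # illustrative board name
    api_key="your-key",  # or set TRELLO_API_KEY
    token="your-token",  # or set TRELLO_TOKEN
    card_filter="open",
)
docs = loader.load()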
class langchain.document_loaders.TwitterTweetLoader(auth_handler: Union[OAuthHandler, OAuth2BearerHandler], twitter_users: Sequence[str], number_tweets: Optional[int] = 100)[source]#
Twitter tweets loader.
Read tweets of a user's Twitter handle.
First you need to go to
https://developer.twitter.com/en/docs/twitter-api
/getting-started/getting-access-to-the-twitter-api
to get your token. And create a v2 version of the app.
classmethod from_bearer_token(oauth2_bearer_token: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) β langchain.document_loaders.twitter.TwitterTweetLoader[source]#
Create a TwitterTweetLoader from OAuth2 bearer token.
classmethod from_secrets(access_token: str, access_token_secret: str, consumer_key: str, consumer_secret: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) β langchain.document_loaders.twitter.TwitterTweetLoader[source]#
Create a TwitterTweetLoader from access tokens and secrets.
load() β List[langchain.schema.Document][source]#
Load tweets.
class langchain.document_loaders.UnstructuredAPIFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]#
Loader that uses the unstructured web API to load file IO objects.
class langchain.document_loaders.UnstructuredAPIFileLoader(file_path: Union[str, List[str]] = '', mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]#
Loader that uses the unstructured web API to load files.
class langchain.document_loaders.UnstructuredCSVLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load CSV files.
class langchain.document_loaders.UnstructuredEPubLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load epub files.
class langchain.document_loaders.UnstructuredEmailLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load email files.
class langchain.document_loaders.UnstructuredExcelLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load Microsoft Excel files.
class langchain.document_loaders.UnstructuredFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load file IO objects.
class langchain.document_loaders.UnstructuredFileLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load files.
class langchain.document_loaders.UnstructuredHTMLLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load HTML files.
class langchain.document_loaders.UnstructuredImageLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load image files, such as PNGs and JPGs.
class langchain.document_loaders.UnstructuredMarkdownLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load markdown files.
class langchain.document_loaders.UnstructuredODTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load open office ODT files.
class langchain.document_loaders.UnstructuredPDFLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load PDF files.
class langchain.document_loaders.UnstructuredPowerPointLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load powerpoint files.
class langchain.document_loaders.UnstructuredRTFLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load rtf files.
class langchain.document_loaders.UnstructuredURLLoader(urls: List[str], continue_on_failure: bool = True, mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load files from remote URLs.
load() β List[langchain.schema.Document][source]#
Load file.
class langchain.document_loaders.UnstructuredWordDocumentLoader(file_path: Union[str, List[str]], mode: str = 'single', **unstructured_kwargs: Any)[source]#
Loader that uses unstructured to load word documents.
class langchain.document_loaders.WeatherDataLoader(client: langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper, places: Sequence[str])[source]#
Weather Reader.
Reads the forecast and current weather of any location using OpenWeatherMap's free
API. Check out https://openweathermap.org/appid for more on how to generate a free
OpenWeatherMap API key.
classmethod from_params(places: Sequence[str], *, openweathermap_api_key: Optional[str] = None) β langchain.document_loaders.weather.WeatherDataLoader[source]#
lazy_load() β Iterator[langchain.schema.Document][source]#
Lazily load weather data for the given locations.
load() β List[langchain.schema.Document][source]#
Load weather data for the given locations.
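Example (a minimal sketch; the API key below is a placeholder and the pyowm package is assumed to be installed):
from langchain.document_loaders import WeatherDataLoader
loader = WeatherDataLoader.from_params(
    places=["Pune", "London"],
    openweathermap_api_key="YOUR_OWM_API_KEY",  # placeholder
)
docs = loader.load()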
class langchain.document_loaders.WebBaseLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None)[source]#
Loader that uses urllib and beautiful soup to load webpages.
aload() β List[langchain.schema.Document][source]#
Load text from the urls in web_path async into Documents.
default_parser: str = 'html.parser'#
Default parser to use for BeautifulSoup.
async fetch_all(urls: List[str]) β Any[source]#
Fetch all urls concurrently with rate limiting.
load() β List[langchain.schema.Document][source]#
Load text from the url(s) in web_path.
requests_kwargs: Dict[str, Any] = {}#
kwargs for requests
requests_per_second: int = 2#
Max number of concurrent requests to make.
scrape(parser: Optional[str] = None) β Any[source]#
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) β List[Any][source]#
Fetch all urls, then return soups for all results.
property web_path: str#
web_paths: List[str]#
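Example (a minimal sketch; the URLs below are placeholders):
from langchain.document_loaders import WebBaseLoader
loader = WebBaseLoader(["https://example.com", "https://example.org"])
loader.requests_per_second = 1  # throttle the concurrent fetch path
docs = loader.load()            # synchronous; use loader.aload() for concurrent fetching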
class langchain.document_loaders.WhatsAppChatLoader(path: str)[source]#
Loader that loads WhatsApp messages text file.
load() β List[langchain.schema.Document][source]#
Load documents.
class langchain.document_loaders.WikipediaLoader(query: str, lang: str = 'en', load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False)[source]#
Loads a query result from www.wikipedia.org into a list of Documents.
The hard limit on the number of downloaded Documents is 300 for now.
Each wiki page represents one Document.
load() β List[langchain.schema.Document][source]#
Load data into document objects.
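Example (a minimal sketch; the query is a placeholder and the wikipedia package is assumed to be installed):
from langchain.document_loaders import WikipediaLoader
docs = WikipediaLoader(query="Large language model", load_max_docs=2).load()
docs[0].metadata  # per-page metadata such as the title and summary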
class langchain.document_loaders.YoutubeLoader(video_id: str, add_video_info: bool = False, language: Union[str, Sequence[str]] = 'en', translation: str = 'en', continue_on_failure: bool = False)[source]#
Loader that loads Youtube transcripts.
static extract_video_id(youtube_url: str) β str[source]#
Extract video id from common YT urls.
classmethod from_youtube_url(youtube_url: str, **kwargs: Any) β langchain.document_loaders.youtube.YoutubeLoader[source]#
Given youtube URL, load video.
load() β List[langchain.schema.Document][source]#
Load documents.
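Example (a minimal sketch; the video URL is a placeholder and the youtube-transcript-api package is assumed to be installed):
from langchain.document_loaders import YoutubeLoader
loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",  # placeholder URL
    add_video_info=False,
)
transcript_docs = loader.load()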
Docstore#
Wrappers on top of docstores.
class langchain.docstore.InMemoryDocstore(_dict: Dict[str, langchain.schema.Document])[source]#
Simple in memory docstore in the form of a dict.
add(texts: Dict[str, langchain.schema.Document]) β None[source]#
Add texts to in memory dictionary.
search(search: str) β Union[str, langchain.schema.Document][source]#
Search via direct lookup.
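Example (a minimal sketch; the document ids and contents are placeholders):
from langchain.docstore import InMemoryDocstore
from langchain.schema import Document
docstore = InMemoryDocstore({"doc-1": Document(page_content="hello world")})
docstore.add({"doc-2": Document(page_content="second document")})
docstore.search("doc-1")  # returns the Document; a missing id returns an explanatory string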
class langchain.docstore.Wikipedia[source]#
Wrapper around wikipedia API.
search(search: str) β Union[str, langchain.schema.Document][source]#
Try to search for wiki page.
If page exists, return the page summary, and a PageWithLookups object.
If page does not exist, return similar entries.
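Example (a minimal sketch; the wikipedia package is assumed to be installed):
from langchain.docstore import Wikipedia
docstore = Wikipedia()
result = docstore.search("Python (programming language)")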
Chat Models#
pydantic model langchain.chat_models.AzureChatOpenAI[source]#
Wrapper around Azure OpenAI Chat Completion API. To use this class you
must have a deployed model on Azure OpenAI. Use deployment_name in the
constructor to refer to the "Model deployment name" in the Azure portal.
In addition, you should have the openai python package installed, and the
following environment variables set or passed in constructor in lower case:
- OPENAI_API_TYPE (default: azure)
- OPENAI_API_KEY
- OPENAI_API_BASE
- OPENAI_API_VERSION
- OPENAI_PROXY
For example, if you have gpt-35-turbo deployed, with the deployment name
35-turbo-dev, the constructor should look like:
AzureChatOpenAI(
deployment_name="35-turbo-dev",
openai_api_version="2023-03-15-preview",
)
Be aware the API version may change.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
field deployment_name: str = ''#
field openai_api_base: str = ''#
field openai_api_key: str = ''#
Base URL path for API requests,
leave blank if not using a proxy or service emulator.
field openai_api_type: str = 'azure'#
field openai_api_version: str = ''#
field openai_organization: str = ''#
field openai_proxy: str = ''#
pydantic model langchain.chat_models.ChatAnthropic[source]#
Wrapper around Anthropicβs large language model.
To use, you should have the anthropic python package installed, and the
environment variable ANTHROPIC_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
from langchain.chat_models import ChatAnthropic
model = ChatAnthropic(model="<model_name>", anthropic_api_key="my-api-key")
get_num_tokens(text: str) β int[source]#
Calculate number of tokens.
pydantic model langchain.chat_models.ChatGooglePalm[source]#
Wrapper around Googleβs PaLM Chat API.
To use you must have the google.generativeai Python package installed and
either:
The GOOGLE_API_KEY environment variable set with your API key, or
Pass your API key using the google_api_key kwarg to the ChatGooglePalm
constructor.
Example
from langchain.chat_models import ChatGooglePalm
chat = ChatGooglePalm()
field google_api_key: Optional[str] = None#
field model_name: str = 'models/chat-bison-001'#
Model name to use.
field n: int = 1#
Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated.
field temperature: Optional[float] = None#
Run inference with this temperature. Must be in the closed
interval [0.0, 1.0].
field top_k: Optional[int] = None#
Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive.
field top_p: Optional[float] = None#
Decode using nucleus sampling: consider the smallest set of tokens whose
probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].
pydantic model langchain.chat_models.ChatOpenAI[source]#
Wrapper around OpenAI Chat large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.chat_models import ChatOpenAI
openai = ChatOpenAI(model_name="gpt-3.5-turbo")
field max_retries: int = 6#
Maximum number of retries to make when generating.
field max_tokens: Optional[int] = None#
Maximum number of tokens to generate.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-3.5-turbo' (alias 'model')#
Model name to use.
field n: int = 1#
Number of chat completions to generate for each prompt.
field openai_api_base: Optional[str] = None#
field openai_api_key: Optional[str] = None#
Base URL path for API requests,
leave blank if not using a proxy or service emulator.
field openai_organization: Optional[str] = None#
field openai_proxy: Optional[str] = None#
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to OpenAI completion API. Default is 600 seconds.
field streaming: bool = False#
Whether to stream the results or not.
field temperature: float = 0.7#
What sampling temperature to use.
completion_with_retry(**kwargs: Any) β Any[source]#
Use tenacity to retry the completion call.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int[source]#
Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.
Official documentation: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
get_token_ids(text: str) β List[int][source]#
Get the tokens present in the text with tiktoken package.
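Example (a minimal sketch of counting tokens for a chat prompt; it assumes OPENAI_API_KEY is set and the tiktoken package is installed):
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage
chat = ChatOpenAI(model_name="gpt-3.5-turbo")
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Count my tokens, please."),
]
num_tokens = chat.get_num_tokens_from_messages(messages)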
pydantic model langchain.chat_models.ChatVertexAI[source]#
Wrapper around Vertex AI large language models.
field model_name: str = 'chat-bison'#
Model name to use.
pydantic model langchain.chat_models.PromptLayerChatOpenAI[source]#
Wrapper around OpenAI Chat large language models and PromptLayer.
To use, you should have the openai and promptlayer python
package installed, and the environment variable OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your openAI API key and
promptlayer key respectively.
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerChatOpenAI adds two optional
Parameters
pl_tags β List of strings to tag the request with.
return_pl_id β If True, the PromptLayer request ID will be
returned in the generation_info field of the
Generation object.
Example
from langchain.chat_models import PromptLayerChatOpenAI
openai = PromptLayerChatOpenAI(model_name="gpt-3.5-turbo")
field pl_tags: Optional[List[str]] = None#
field return_pl_id: Optional[bool] = False#
Experimental Modules#
This module contains experimental modules and reproductions of existing work using LangChain primitives.
Autonomous Agents#
Here, we document the BabyAGI and AutoGPT classes from the langchain.experimental module.
class langchain.experimental.BabyAGI(*, memory: Optional[langchain.schema.BaseMemory] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, verbose: bool = None, task_list: collections.deque = None, task_creation_chain: langchain.chains.base.Chain, task_prioritization_chain: langchain.chains.base.Chain, execution_chain: langchain.chains.base.Chain, task_id_counter: int = 1, vectorstore: langchain.vectorstores.base.VectorStore, max_iterations: Optional[int] = None)[source]#
Controller model for the BabyAGI agent.
model Config[source]#
Configuration for this pydantic object.
arbitrary_types_allowed = True#
execute_task(objective: str, task: str, k: int = 5) β str[source]#
Execute a task.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, verbose: bool = False, task_execution_chain: Optional[langchain.chains.base.Chain] = None, **kwargs: Dict[str, Any]) β langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI[source]#
Initialize the BabyAGI Controller.
get_next_task(result: str, task_description: str, objective: str) β List[Dict][source]#
Get the next task.
property input_keys: List[str]#
Input keys this chain expects.
property output_keys: List[str]#
Output keys this chain expects.
prioritize_tasks(this_task_id: int, objective: str) β List[Dict][source]#
Prioritize tasks.
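Example (an illustrative sketch mirroring the typical setup; it assumes the faiss-cpu and openai packages, an OPENAI_API_KEY, and 1536 as the OpenAI embedding dimension):
import faiss
from langchain import OpenAI
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.experimental import BabyAGI
from langchain.vectorstores import FAISS
# Vector store the controller uses to store and retrieve task results.
embeddings = OpenAIEmbeddings()
index = faiss.IndexFlatL2(1536)
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})
baby_agi = BabyAGI.from_llm(
    llm=OpenAI(temperature=0),
    vectorstore=vectorstore,
    verbose=False,
    max_iterations=3,
)
baby_agi({"objective": "Write a short weather report for San Francisco today"})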
class langchain.experimental.AutoGPT(ai_name: str, memory: langchain.vectorstores.base.VectorStoreRetriever, chain: langchain.chains.llm.LLMChain, output_parser: langchain.experimental.autonomous_agents.autogpt.output_parser.BaseAutoGPTOutputParser, tools: List[langchain.tools.base.BaseTool], feedback_tool: Optional[langchain.tools.human.tool.HumanInputRun] = None)[source]#
Agent class for interacting with Auto-GPT.
Generative Agents#
Here, we document the GenerativeAgent and GenerativeAgentMemory classes from the langchain.experimental module.
class langchain.experimental.GenerativeAgent(*, name: str, age: Optional[int] = None, traits: str = 'N/A', status: str, memory: langchain.experimental.generative_agents.memory.GenerativeAgentMemory, llm: langchain.base_language.BaseLanguageModel, verbose: bool = False, summary: str = '', summary_refresh_seconds: int = 3600, last_refreshed: datetime.datetime = None, daily_summaries: List[str] = None)[source]#
A character with memory and innate characteristics.
model Config[source]#
Configuration for this pydantic object.
arbitrary_types_allowed = True#
field age: Optional[int] = None#
The optional age of the character.
field daily_summaries: List[str] [Optional]#
Summary of the events in the plan that the agent took.
generate_dialogue_response(observation: str, now: Optional[datetime.datetime] = None) β Tuple[bool, str][source]#
React to a given observation.
generate_reaction(observation: str, now: Optional[datetime.datetime] = None) β Tuple[bool, str][source]#
React to a given observation.
get_full_header(force_refresh: bool = False, now: Optional[datetime.datetime] = None) β str[source]#
Return a full header of the agentβs status, summary, and current time.
get_summary(force_refresh: bool = False, now: Optional[datetime.datetime] = None) β str[source]#
Return a descriptive summary of the agent.
field last_refreshed: datetime.datetime [Optional]#
The last time the characterβs summary was regenerated.
field llm: langchain.base_language.BaseLanguageModel [Required]#
The underlying language model.
field memory: langchain.experimental.generative_agents.memory.GenerativeAgentMemory [Required]#
The memory object that combines relevance, recency, and βimportanceβ.
field name: str [Required]#
The characterβs name.
field status: str [Required]#
The character's current status.
summarize_related_memories(observation: str) β str[source]#
Summarize memories that are most relevant to an observation.
field summary: str = ''#
Stateful self-summary generated via reflection on the characterβs memory.
field summary_refresh_seconds: int = 3600#
How frequently to re-generate the summary.
field traits: str = 'N/A'#
Permanent traits to ascribe to the character.
class langchain.experimental.GenerativeAgentMemory(*, llm: langchain.base_language.BaseLanguageModel, memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever, verbose: bool = False, reflection_threshold: Optional[float] = None, current_plan: List[str] = [], importance_weight: float = 0.15, aggregate_importance: float = 0.0, max_tokens_limit: int = 1200, queries_key: str = 'queries', most_recent_memories_token_key: str = 'recent_memories_token', add_memory_key: str = 'add_memory', relevant_memories_key: str = 'relevant_memories', relevant_memories_simple_key: str = 'relevant_memories_simple', most_recent_memories_key: str = 'most_recent_memories', now_key: str = 'now', reflecting: bool = False)[source]#
add_memories(memory_content: str, now: Optional[datetime.datetime] = None) β List[str][source]#
Add observations or memories to the agent's memory.
add_memory(memory_content: str, now: Optional[datetime.datetime] = None) β List[str][source]#
Add an observation or memory to the agentβs memory.
field aggregate_importance: float = 0.0#
Track the sum of the βimportanceβ of recent memories.
Triggers reflection when it reaches reflection_threshold.
clear() β None[source]#
Clear memory contents.
field current_plan: List[str] = []#
The current plan of the agent.
fetch_memories(observation: str, now: Optional[datetime.datetime] = None) β List[langchain.schema.Document][source]#
Fetch related memories.
field importance_weight: float = 0.15#
How much weight to assign the memory importance.
field llm: langchain.base_language.BaseLanguageModel [Required]#
The core language model.
load_memory_variables(inputs: Dict[str, Any]) β Dict[str, str][source]#
Return key-value pairs given the text input to the chain.
field memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever [Required]#
The retriever to fetch related memories.
property memory_variables: List[str]#
Input keys this memory class will load dynamically.
pause_to_reflect(now: Optional[datetime.datetime] = None) β List[str][source]#
Reflect on recent observations and generate βinsightsβ.
field reflection_threshold: Optional[float] = None#
When aggregate_importance exceeds reflection_threshold, stop to reflect.
save_context(inputs: Dict[str, Any], outputs: Dict[str, Any]) β None[source]#
Save the context of this model run to memory.
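Example (an illustrative sketch; it assumes the faiss-cpu and openai packages, an OPENAI_API_KEY, and 1536 as the OpenAI embedding dimension; the agent's name, traits, and memories are placeholders):
import faiss
from langchain.chat_models import ChatOpenAI
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.experimental import GenerativeAgent, GenerativeAgentMemory
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.vectorstores import FAISS
llm = ChatOpenAI(max_tokens=1500)
embeddings = OpenAIEmbeddings()
index = faiss.IndexFlatL2(1536)
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})
# Time-weighted retriever backs the agent's memory stream.
retriever = TimeWeightedVectorStoreRetriever(
    vectorstore=vectorstore, other_score_keys=["importance"], k=15
)
memory = GenerativeAgentMemory(llm=llm, memory_retriever=retriever, reflection_threshold=8)
tommie = GenerativeAgent(
    name="Tommie",
    age=25,
    traits="anxious, likes design",
    status="looking for a job",
    llm=llm,
    memory=memory,
)
memory.add_memory("Tommie remembers his dog, Bruno, from when he was a kid")
print(tommie.get_summary())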
Embeddings#
Wrappers around embedding modules.
pydantic model langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding[source]#
Wrapper for Aleph Alphaβs Asymmetric Embeddings
AA provides you with an endpoint to embed a document and a query.
The models were optimized to make the embeddings of documents and
the query for a document as similar as possible.
To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/
Example
from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding
embeddings = AlephAlphaAsymmetricSemanticEmbedding()
document = "This is a content of the document"
query = "What is the content of the document?"
doc_result = embeddings.embed_documents([document])
query_result = embeddings.embed_query(query)
field aleph_alpha_api_key: Optional[str] = None#
API key for Aleph Alpha API.
field compress_to_size: Optional[int] = 128#
Should the returned embeddings come back as an original 5120-dim vector,
or should it be compressed to 128-dim.
field contextual_control_threshold: Optional[int] = None#
Attention control parameters only apply to those tokens that have
explicitly been set in the request.
field control_log_additive: Optional[bool] = True#
Apply controls on prompt items by adding the log(control_factor)
to attention scores.
field hosting: Optional[str] = 'https://api.aleph-alpha.com'#
Optional parameter that specifies which datacenters may process the request.
field model: Optional[str] = 'luminous-base'#
Model name to use.
field normalize: Optional[bool] = True#
Should returned embeddings be normalized
embed_documents(texts: List[str]) β List[List[float]][source]#
Call out to Aleph Alphaβs asymmetric Document endpoint.
Parameters
texts β The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) β List[float][source]#
Call out to Aleph Alphaβs asymmetric, query embedding endpoint
:param text: The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding[source]#
The symmetric version of the Aleph Alphaβs semantic embeddings.
The main difference is that here, both the documents and
queries are embedded with a SemanticRepresentation.Symmetric
Example
from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding
embeddings = AlephAlphaSymmetricSemanticEmbedding()
text = "This is a test text"
doc_result = embeddings.embed_documents([text])
query_result = embeddings.embed_query(text)
embed_documents(texts: List[str]) β List[List[float]][source]#
Call out to Aleph Alphaβs Document endpoint.
Parameters
texts β The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) β List[float][source]#
Call out to Aleph Alphaβs asymmetric, query embedding endpoint
:param text: The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.BedrockEmbeddings[source]#
Embeddings provider to invoke Bedrock embedding models.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Bedrock service.
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
field model_id: str = 'amazon.titan-e1t-medium'#
Id of the model to call, e.g., amazon.titan-e1t-medium, this is
equivalent to the modelId property in the list-foundation-models api
field model_kwargs: Optional[Dict] = None#
Key word arguments to pass to the model.
field region_name: Optional[str] = None#
The aws region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable
or region specified in ~/.aws/config in case it is not provided here.
embed_documents(texts: List[str], chunk_size: int = 1) β List[List[float]][source]#
Compute doc embeddings using a Bedrock model.
Parameters
texts β The list of texts to embed.
chunk_size β Bedrock currently only allows single string
inputs, so chunk size is always 1. This input is here
only for compatibility with the embeddings interface.
Returns
List of embeddings, one for each text.
embed_query(text: str) β List[float][source]#
Compute query embeddings using a Bedrock model.
Parameters
text β The text to embed.
Returns
Embeddings for the text.
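Example (a minimal sketch; the profile name is a placeholder, and the boto3 package plus access to the Bedrock service are assumed):
from langchain.embeddings import BedrockEmbeddings
embeddings = BedrockEmbeddings(
    credentials_profile_name="bedrock-admin",  # placeholder profile
    region_name="us-east-1",
    model_id="amazon.titan-e1t-medium",
)
query_result = embeddings.embed_query("This is a test query.")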
pydantic model langchain.embeddings.CohereEmbeddings[source]#
Wrapper around Cohere embedding models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import CohereEmbeddings
cohere = CohereEmbeddings(
model="embed-english-light-v2.0", cohere_api_key="my-api-key"
)
field model: str = 'embed-english-v2.0'#
Model name to use.
field truncate: Optional[str] = None#
Truncate embeddings that are too long from start or end (βNONEβ|βSTARTβ|βENDβ)
embed_documents(texts: List[str]) β List[List[float]][source]#
Call out to Cohereβs embedding endpoint.
Parameters
texts β The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) β List[float][source]#
Call out to Cohereβs embedding endpoint.
Parameters
text β The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.DeepInfraEmbeddings[source]#
Wrapper around Deep Infraβs embedding inference service.
To use, you should have the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
There are multiple embeddings models available,
see https://deepinfra.com/models?type=embeddings.
Example
from langchain.embeddings import DeepInfraEmbeddings
deepinfra_emb = DeepInfraEmbeddings(
model_id="sentence-transformers/clip-ViT-B-32",
deepinfra_api_token="my-api-key"
)
r1 = deepinfra_emb.embed_documents(
[
"Alpha is the first letter of Greek alphabet",
"Beta is the second letter of Greek alphabet",
]
)
r2 = deepinfra_emb.embed_query(
"What is the second letter of Greek alphabet"
)
field embed_instruction: str = 'passage: '#
Instruction used to embed documents.
field model_id: str = 'sentence-transformers/clip-ViT-B-32'#
Embeddings model to use.
field model_kwargs: Optional[dict] = None#
Other model keyword args
field normalize: bool = False#
whether to normalize the computed embeddings
field query_instruction: str = 'query: '#
Instruction used to embed the query.
embed_documents(texts: List[str]) β List[List[float]][source]#
Embed documents using a Deep Infra deployed embedding model.
Parameters
texts β The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) β List[float][source]#
Embed a query using a Deep Infra deployed embedding model.
Parameters
text β The text to embed.
Returns
Embeddings for the text.
class langchain.embeddings.ElasticsearchEmbeddings(client: MlClient, model_id: str, *, input_field: str = 'text_field')[source]#
Wrapper around Elasticsearch embedding models.
This class provides an interface to generate embeddings using a model deployed
in an Elasticsearch cluster. It requires an Elasticsearch connection object
and the model_id of the model deployed in the cluster.
In Elasticsearch you need to have an embedding model loaded and deployed.
- https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html
- https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html
embed_documents(texts: List[str]) β List[List[float]][source]#
Generate embeddings for a list of documents.
Parameters
texts (List[str]) β A list of document text strings to generate embeddings
for.
Returns
A list of embeddings, one for each document in the input list.
Return type
List[List[float]]
embed_query(text: str) β List[float][source]#
Generate an embedding for a single query text.
Parameters
text (str) β The query text to generate an embedding for.
Returns
The embedding for the input query text.
Return type
List[float]
classmethod from_credentials(model_id: str, *, es_cloud_id: Optional[str] = None, es_user: Optional[str] = None, es_password: Optional[str] = None, input_field: str = 'text_field') β langchain.embeddings.elasticsearch.ElasticsearchEmbeddings[source]#
Instantiate embeddings from Elasticsearch credentials.
Parameters
model_id (str) β The model_id of the model deployed in the Elasticsearch
cluster.
input_field (str) β The name of the key for the input text field in the
document. Defaults to βtext_fieldβ.
es_cloud_id β (str, optional): The Elasticsearch cloud ID to connect to.
es_user β (str, optional): Elasticsearch username.
es_password β (str, optional): Elasticsearch password.
Example
from langchain.embeddings import ElasticsearchEmbeddings
# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"
# Credentials can be passed in two ways. Either set the env vars
# ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically
# pulled in, or pass them in directly as kwargs.
embeddings = ElasticsearchEmbeddings.from_credentials(
model_id,
input_field=input_field,
# es_cloud_id="foo",
# es_user="bar",
# es_password="baz",
)
documents = [
"This is an example document.",
"Another example document to generate embeddings for.",
]
embeddings.embed_documents(documents)
classmethod from_es_connection(model_id: str, es_connection: Elasticsearch, input_field: str = 'text_field') β ElasticsearchEmbeddings[source]#
Instantiate embeddings from an existing Elasticsearch connection.
This method provides a way to create an instance of the ElasticsearchEmbeddings
class using an existing Elasticsearch connection. The connection object is used
to create an MlClient, which is then used to initialize the
ElasticsearchEmbeddings instance.
Args:
model_id (str): The model_id of the model deployed in the Elasticsearch cluster.
es_connection (elasticsearch.Elasticsearch): An existing Elasticsearch
connection object. input_field (str, optional): The name of the key for the
input text field in the document. Defaults to βtext_fieldβ.
Returns:
ElasticsearchEmbeddings: An instance of the ElasticsearchEmbeddings class.
Example
from elasticsearch import Elasticsearch
from langchain.embeddings import ElasticsearchEmbeddings
# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"
# Create Elasticsearch connection
es_connection = Elasticsearch(
hosts=["localhost:9200"], http_auth=("user", "password")
)
# Instantiate ElasticsearchEmbeddings using the existing connection
embeddings = ElasticsearchEmbeddings.from_es_connection(
model_id,
es_connection,
input_field=input_field,
)
documents = [
"This is an example document.",
"Another example document to generate embeddings for.",
]
embeddings.embed_documents(documents)
pydantic model langchain.embeddings.FakeEmbeddings[source]#
embed_documents(texts: List[str]) β List[List[float]][source]#
Embed search docs.
embed_query(text: str) β List[float][source]#
Embed query text.
pydantic model langchain.embeddings.HuggingFaceEmbeddings[source]#
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers python package installed.
Example
from langchain.embeddings import HuggingFaceEmbeddings
model_name = "sentence-transformers/all-mpnet-base-v2"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': False}
hf = HuggingFaceEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs, | https://langchain.readthedocs.io/en/latest/reference/modules/embeddings.html |
ef0c727085c1-3 | model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
field cache_folder: Optional[str] = None#
Path to store models.
Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
field encode_kwargs: Dict[str, Any] [Optional]#
Key word arguments to pass when calling the encode method of the model.
field model_kwargs: Dict[str, Any] [Optional]#
Key word arguments to pass to the model.
field model_name: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
embed_documents(texts: List[str]) β List[List[float]][source]#
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts β The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) β List[float][source]#
Compute query embeddings using a HuggingFace transformer model.
Parameters
text β The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.HuggingFaceHubEmbeddings[source]#
Wrapper around HuggingFaceHub embedding models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import HuggingFaceHubEmbeddings
repo_id = "sentence-transformers/all-mpnet-base-v2"
hf = HuggingFaceHubEmbeddings(
repo_id=repo_id,
task="feature-extraction",
huggingfacehub_api_token="my-api-key",
)
field model_kwargs: Optional[dict] = None#
Key word arguments to pass to the model.
field repo_id: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
field task: Optional[str] = 'feature-extraction'#
Task to call the model with.
embed_documents(texts: List[str]) β List[List[float]][source]#
Call out to HuggingFaceHubβs embedding endpoint for embedding search docs.
Parameters
texts β The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) β List[float][source]#
Call out to HuggingFaceHubβs embedding endpoint for embedding query text.
Parameters
text β The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.HuggingFaceInstructEmbeddings[source]#
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers
and InstructorEmbedding python packages installed.
Example
from langchain.embeddings import HuggingFaceInstructEmbeddings
model_name = "hkunlp/instructor-large"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': True}
hf = HuggingFaceInstructEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
field cache_folder: Optional[str] = None#
Path to store models.
Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction to use for embedding documents.
field encode_kwargs: Dict[str, Any] [Optional]#
Key word arguments to pass when calling the encode method of the model.
field model_kwargs: Dict[str, Any] [Optional]#
Key word arguments to pass to the model.
field model_name: str = 'hkunlp/instructor-large'#
Model name to use.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction to use for embedding query.
embed_documents(texts: List[str]) β List[List[float]][source]#
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts β The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) β List[float][source]#
Compute query embeddings using a HuggingFace instruct model.
Parameters
text β The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.LlamaCppEmbeddings[source]#
Wrapper around llama.cpp embedding models.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: https://github.com/abetlen/llama-cpp-python
Example
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model.bin")
field f16_kv: bool = False#
Use half-precision for key/value cache.
field logits_all: bool = False#
Return logits for all tokens, not just the last token.
field n_batch: Optional[int] = 8#
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
field n_ctx: int = 512#
Token context window.
field n_gpu_layers: Optional[int] = None#
Number of layers to be loaded into gpu memory. Default None.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
field n_threads: Optional[int] = None#
Number of threads to use. If None, the number
of threads is automatically determined.
field seed: int = -1#
Seed. If -1, a random seed is used.
field use_mlock: bool = False#
Force system to keep model in RAM.
field vocab_only: bool = False#
Only load the vocabulary, no weights.
embed_documents(texts: List[str]) β List[List[float]][source]#
Embed a list of documents using the Llama model.
Parameters
texts β The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) β List[float][source]#
Embed a query using the Llama model.
Parameters
text β The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.MiniMaxEmbeddings[source]#
Wrapper around MiniMaxβs embedding inference service.
To use, you should have the environment variable MINIMAX_GROUP_ID and
MINIMAX_API_KEY set with your API token, or pass it as a named parameter to
the constructor.
Example
from langchain.embeddings import MiniMaxEmbeddings
embeddings = MiniMaxEmbeddings()
query_text = "This is a test query."
query_result = embeddings.embed_query(query_text)
document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text])
field embed_type_db: str = 'db'#
For embed_documents
field embed_type_query: str = 'query'#
For embed_query
field endpoint_url: str = 'https://api.minimax.chat/v1/embeddings'#
Endpoint URL to use.
field minimax_api_key: Optional[str] = None#
API Key for MiniMax API.
field minimax_group_id: Optional[str] = None#
Group ID for MiniMax API.
field model: str = 'embo-01'#
Embeddings model name to use.
embed_documents(texts: List[str]) β List[List[float]][source]#
Embed documents using a MiniMax embedding endpoint.
Parameters
texts β The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) β List[float][source]#
Embed a query using a MiniMax embedding endpoint.
Parameters
text β The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.ModelScopeEmbeddings[source]#
Wrapper around modelscope_hub embedding models.
To use, you should have the modelscope python package installed.
Example
from langchain.embeddings import ModelScopeEmbeddings
model_id = "damo/nlp_corom_sentence-embedding_english-base"
embed = ModelScopeEmbeddings(model_id=model_id)
field model_id: str = 'damo/nlp_corom_sentence-embedding_english-base'#
Model name to use.
embed_documents(texts: List[str]) β List[List[float]][source]#
Compute doc embeddings using a modelscope embedding model.
Parameters
texts β The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) β List[float][source]#
Compute query embeddings using a modelscope embedding model.
Parameters
text β The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.MosaicMLInstructorEmbeddings[source]#
Wrapper around MosaicMLβs embedding inference service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import MosaicMLInstructorEmbeddings
endpoint_url = (
"https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict"
)
mosaic_llm = MosaicMLInstructorEmbeddings(
endpoint_url=endpoint_url,
mosaicml_api_token="my-api-key"
)
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction used to embed documents.
field endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict'#
Endpoint URL to use.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction used to embed the query.
field retry_sleep: float = 1.0#
How long to sleep if a rate limit is encountered
embed_documents(texts: List[str]) β List[List[float]][source]#
Embed documents using a MosaicML deployed instructor embedding model.
Parameters
texts β The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) β List[float][source]#
Embed a query using a MosaicML deployed instructor embedding model.
Parameters
text β The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.OpenAIEmbeddings[source]#
Wrapper around OpenAI embedding models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import OpenAIEmbeddings
openai = OpenAIEmbeddings(openai_api_key="my-api-key")
In order to use the library with Microsoft Azure endpoints, you need to set
the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION.
The OPENAI_API_TYPE must be set to βazureβ and the others correspond to
the properties of your endpoint.
In addition, the deployment name must be passed as the model parameter.
Example
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint>.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080"
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
deployment="your-embeddings-deployment-name",
model="your-embeddings-model-name",
openai_api_base="https://your-endpoint.openai.azure.com/",
openai_api_type="azure",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
field chunk_size: int = 1000#
Maximum number of texts to embed in each batch
field max_retries: int = 6#
Maximum number of retries to make when generating.
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout in seconds for the OpenAPI request.
embed_documents(texts: List[str], chunk_size: Optional[int] = 0) β List[List[float]][source]#
Call out to OpenAIβs embedding endpoint for embedding search docs.
Parameters
texts β The list of texts to embed.
chunk_size β The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) β List[float][source]#
Call out to OpenAIβs embedding endpoint for embedding query text.
Parameters
text β The text to embed.
Returns
Embedding for the text.
pydantic model langchain.embeddings.SagemakerEndpointEmbeddings[source]#
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
field content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]#
The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
field endpoint_kwargs: Optional[Dict] = None#
Optional attributes passed to the invoke_endpoint
function. See `boto3`_. docs for more info.
.. _boto3: <https://boto3.amazonaws.com/v1/documentation/api/latest/index.html>
field endpoint_name: str = ''#
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
field model_kwargs: Optional[Dict] = None#
Key word arguments to pass to the model.
field region_name: str = ''#
The aws region where the Sagemaker model is deployed, eg. us-west-2.
embed_documents(texts: List[str], chunk_size: int = 64) β List[List[float]][source]#
Compute doc embeddings using a SageMaker Inference Endpoint.
Parameters
texts β The list of texts to embed.
chunk_size β The chunk size defines how many input texts will
be grouped together as request. If None, will use the
chunk size specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) β List[float][source]#
Compute query embeddings using a SageMaker inference endpoint.
Parameters
text β The text to embed.
Returns
Embeddings for the text.
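Example (an illustrative sketch; the endpoint name, region, and profile are placeholders, and the exact request/response payload keys depend on the model deployed behind the endpoint):
import json
from typing import Dict, List
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler
class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"
    def transform_input(self, prompts: List[str], model_kwargs: Dict) -> bytes:
        # Serialize the texts into the JSON payload the endpoint expects (assumed key names).
        return json.dumps({"text_inputs": prompts, **model_kwargs}).encode("utf-8")
    def transform_output(self, output: bytes) -> List[List[float]]:
        # Parse the endpoint response back into a list of embedding vectors (assumed key name).
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["embedding"]
embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="my-embeddings-endpoint",  # placeholder endpoint name
    region_name="us-west-2",
    credentials_profile_name="default",
    content_handler=ContentHandler(),
)
query_result = embeddings.embed_query("This is a test query.")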
pydantic model langchain.embeddings.SelfHostedEmbeddings[source]#
Runs custom embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example using a model load function:
from langchain.embeddings import SelfHostedEmbeddings
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
def get_pipeline():
model_id = "facebook/bart-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
embeddings = SelfHostedEmbeddings(
model_load_fn=get_pipeline,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Example passing in a pipeline path:
from langchain.embeddings import SelfHostedHFEmbeddings
import pickle
import runhouse as rh
from transformers import pipeline
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
pipeline = pipeline(model="bert-base-uncased", task="feature-extraction")
rh.blob(pickle.dumps(pipeline),
path="models/pipeline.pkl").save().to(gpu, path="models")
embeddings = SelfHostedHFEmbeddings.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddings on the remote hardware.
field inference_kwargs: Any = None#
Any kwargs to pass to the modelβs inference function.
embed_documents(texts: List[str]) β List[List[float]][source]#
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts β The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) β List[float][source]#
Compute query embeddings using a HuggingFace transformer model.
Parameters
text β The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.SelfHostedHuggingFaceEmbeddings[source]#
Runs sentence_transformers embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field hardware: Any = None#
Remote hardware to send the inference function to.
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddings.
field load_fn_kwargs: Optional[dict] = None#
Key word arguments to pass to the model load function.
field model_id: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
field model_load_fn: Callable = <function load_embedding_model>#
Function to load the model remotely on the server.
field model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']#
Requirements to install on hardware to inference the model.
pydantic model langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings[source]#
Runs InstructorEmbedding embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings
import runhouse as rh
model_name = "hkunlp/instructor-large"
gpu = rh.cluster(name='rh-a10x', instance_type='A100:1')
hf = SelfHostedHuggingFaceInstructEmbeddings(
model_name=model_name, hardware=gpu)
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction to use for embedding documents.
field model_id: str = 'hkunlp/instructor-large'#
Model name to use.
field model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']#
Requirements to install on hardware to inference the model.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction to use for embedding query.
embed_documents(texts: List[str]) β List[List[float]][source]#
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts β The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) β List[float][source]#
Compute query embeddings using a HuggingFace instruct model.
Parameters
text β The text to embed.
Returns
Embeddings for the text.
langchain.embeddings.SentenceTransformerEmbeddings#
alias of langchain.embeddings.huggingface.HuggingFaceEmbeddings
pydantic model langchain.embeddings.TensorflowHubEmbeddings[source]#
Wrapper around tensorflow_hub embedding models.
To use, you should have the tensorflow_text python package installed.
Example
from langchain.embeddings import TensorflowHubEmbeddings
url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
tf = TensorflowHubEmbeddings(model_url=url)
field model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3'#
Model name to use.
embed_documents(texts: List[str]) β List[List[float]][source]#
Compute doc embeddings using a TensorflowHub embedding model.
Parameters
texts β The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) β List[float][source]#
Compute query embeddings using a TensorflowHub embedding model.
Parameters
text β The text to embed.
Returns
Embeddings for the text.
PromptTemplates#
Prompt template classes.
pydantic model langchain.prompts.BaseChatPromptTemplate[source]#
format(**kwargs: Any) β str[source]#
Format the prompt with the inputs.
Parameters
kwargs β Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
abstract format_messages(**kwargs: Any) β List[langchain.schema.BaseMessage][source]#
Format kwargs into a list of messages.
format_prompt(**kwargs: Any) β langchain.schema.PromptValue[source]#
Create Chat Messages.
pydantic model langchain.prompts.BasePromptTemplate[source]#
Base class for all prompt templates, returning a prompt.
field input_variables: List[str] [Required]#
A list of the names of the variables the prompt template expects.
field output_parser: Optional[langchain.schema.BaseOutputParser] = None#
How to parse the output of calling an LLM on this formatted prompt.
dict(**kwargs: Any) β Dict[source]#
Return dictionary representation of prompt.
abstract format(**kwargs: Any) β str[source]#
Format the prompt with the inputs.
Parameters
kwargs β Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
abstract format_prompt(**kwargs: Any) β langchain.schema.PromptValue[source]#
Create Chat Messages.
partial(**kwargs: Union[str, Callable[[], str]]) β langchain.prompts.base.BasePromptTemplate[source]#
Return a partial of the prompt template.
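Example (a minimal sketch of partially formatting a template; the template text is a placeholder):
from langchain import PromptTemplate
prompt = PromptTemplate(
    input_variables=["adjective", "content"],
    template="Tell me a {adjective} joke about {content}.",
)
partial_prompt = prompt.partial(adjective="funny")
print(partial_prompt.format(content="chickens"))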
save(file_path: Union[pathlib.Path, str]) β None[source]#
Save the prompt.
Parameters
file_path β Path to directory to save prompt to.
Example:
.. code-block:: python
prompt.save(file_path=βpath/prompt.yamlβ)
pydantic model langchain.prompts.ChatPromptTemplate[source]#
format(**kwargs: Any) β str[source]#
Format the prompt with the inputs.
Parameters
kwargs β Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
format_messages(**kwargs: Any) β List[langchain.schema.BaseMessage][source]#
Format kwargs into a list of messages.
partial(**kwargs: Union[str, Callable[[], str]]) β langchain.prompts.base.BasePromptTemplate[source]#
Return a partial of the prompt template.
save(file_path: Union[pathlib.Path, str]) β None[source]#
Save the prompt.
Parameters
file_path β Path to directory to save prompt to.
Example:
.. code-block:: python
prompt.save(file_path=βpath/prompt.yamlβ)
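Example (a minimal sketch combining message prompt templates into a chat prompt; the template strings are placeholders):
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
system = SystemMessagePromptTemplate.from_template(
    "You are a helpful assistant that translates {input_language} to {output_language}."
)
human = HumanMessagePromptTemplate.from_template("{text}")
chat_prompt = ChatPromptTemplate.from_messages([system, human])
messages = chat_prompt.format_messages(
    input_language="English", output_language="French", text="I love programming."
)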
pydantic model langchain.prompts.FewShotPromptTemplate[source]#
Prompt template that contains few shot examples.
field example_prompt: langchain.prompts.prompt.PromptTemplate [Required]#
PromptTemplate used to format an individual example.
field example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None#
ExampleSelector to choose the examples to format into the prompt.
Either this or examples should be provided.
field example_separator: str = '\n\n'#
String separator used to join the prefix, the examples, and suffix.
field examples: Optional[List[dict]] = None#
Examples to format into the prompt.
Either this or example_selector should be provided.
field input_variables: List[str] [Required]#
A list of the names of the variables the prompt template expects.
field prefix: str = ''#
A prompt template string to put before the examples.
field suffix: str [Required]#
A prompt template string to put after the examples.
field template_format: str = 'f-string'#
The format of the prompt template. Options are: 'f-string', 'jinja2'.
field validate_template: bool = True#
Whether or not to try validating the template.
dict(**kwargs: Any) → Dict[source]#
Return a dictionary of the prompt.
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
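Example (illustrative sketch using only the fields documented above: examples, example_prompt, prefix, suffix, input_variables):
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

example_prompt = PromptTemplate(
    input_variables=["word", "antonym"],
    template="Word: {word}\nAntonym: {antonym}",
)
examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
]
few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input.",
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"],
)
# The prefix, the formatted examples, and the suffix are joined with example_separator.
print(few_shot_prompt.format(input="big"))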
pydantic model langchain.prompts.FewShotPromptWithTemplates[source]#
Prompt template that contains few shot examples.
field example_prompt: langchain.prompts.prompt.PromptTemplate [Required]#
PromptTemplate used to format an individual example.
field example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None#
ExampleSelector to choose the examples to format into the prompt.
Either this or examples should be provided.
field example_separator: str = '\n\n'#
String separator used to join the prefix, the examples, and suffix.
field examples: Optional[List[dict]] = None#
Examples to format into the prompt.
Either this or example_selector should be provided.
field input_variables: List[str] [Required]#
A list of the names of the variables the prompt template expects.
field prefix: Optional[langchain.prompts.base.StringPromptTemplate] = None#
A PromptTemplate to put before the examples.
field suffix: langchain.prompts.base.StringPromptTemplate [Required]#
A PromptTemplate to put after the examples.
field template_format: str = 'f-string'#
The format of the prompt template. Options are: 'f-string', 'jinja2'.
field validate_template: bool = True#
Whether or not to try validating the template.
dict(**kwargs: Any) → Dict[source]#
Return a dictionary of the prompt.
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
pydantic model langchain.prompts.MessagesPlaceholder[source]#
Prompt template that assumes variable is already list of messages.
format_messages(**kwargs: Any) → List[langchain.schema.BaseMessage][source]#
To a BaseMessage.
property input_variables: List[str]#
Input variables for this prompt template.
langchain.prompts.Prompt#
alias of langchain.prompts.prompt.PromptTemplate
pydantic model langchain.prompts.PromptTemplate[source]#
Schema to represent a prompt for an LLM.
Example
from langchain import PromptTemplate
prompt = PromptTemplate(input_variables=["foo"], template="Say {foo}")
field input_variables: List[str] [Required]#
A list of the names of the variables the prompt template expects.
field template: str [Required]#
The prompt template.
field template_format: str = 'f-string'#
The format of the prompt template. Options are: 'f-string', 'jinja2'.
field validate_template: bool = True#
Whether or not to try validating the template.
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
classmethod from_examples(examples: List[str], suffix: str, input_variables: List[str], example_separator: str = '\n\n', prefix: str = '', **kwargs: Any) → langchain.prompts.prompt.PromptTemplate[source]#
Take examples in list format with prefix and suffix to create a prompt.
Intended to be used as a way to dynamically create a prompt from examples.
Parameters
examples – List of examples to use in the prompt.
suffix – String to go after the list of examples. Should generally
set up the user's input.
input_variables – A list of variable names the final prompt template
will expect.
example_separator – The separator to use in between examples. Defaults
to two new line characters.
prefix – String that should go before any examples. Generally includes
examples. Defaults to an empty string.
Returns
The final prompt generated.
classmethod from_file(template_file: Union[str, pathlib.Path], input_variables: List[str], **kwargs: Any) → langchain.prompts.prompt.PromptTemplate[source]#
Load a prompt from a file.
Parameters
template_file – The path to the file containing the prompt template.
input_variables – A list of variable names the final prompt template
will expect.
Returns
The prompt loaded from the file.
classmethod from_template(template: str, **kwargs: Any) → langchain.prompts.prompt.PromptTemplate[source]#
Load a prompt template from a template.
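Example (illustrative sketch; from_template is assumed here to infer the input variables from the template string):
from langchain import PromptTemplate

prompt = PromptTemplate.from_template("Summarize the following text:\n\n{text}")
print(prompt.input_variables)  # ['text']
print(prompt.format(text="LangChain provides prompt templating utilities."))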
pydantic model langchain.prompts.StringPromptTemplate[source]#
String prompt should expose the format method, returning a prompt.
format_prompt(**kwargs: Any) → langchain.schema.PromptValue[source]#
Create Chat Messages.
langchain.prompts.load_prompt(path: Union[str, pathlib.Path]) → langchain.prompts.base.BasePromptTemplate[source]#
Unified method for loading a prompt from LangChainHub or local fs.
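Example (illustrative save/load round trip against the local filesystem; the serialized format is assumed to follow the file extension, JSON or YAML):
from langchain import PromptTemplate
from langchain.prompts import load_prompt

prompt = PromptTemplate(input_variables=["foo"], template="Say {foo}")
prompt.save("my_prompt.json")
reloaded = load_prompt("my_prompt.json")
assert reloaded.format(foo="hi") == "Say hi"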
Text Splitter#
Functionality for splitting text.
class langchain.text_splitter.CharacterTextSplitter(separator: str = '\n\n', **kwargs: Any)[source]#
Implementation of splitting text that looks at characters.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
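Example (illustrative sketch; chunk_size and chunk_overlap are inherited from TextSplitter, documented further down this page):
from langchain.text_splitter import CharacterTextSplitter

text = "First paragraph of a long document...\n\nSecond paragraph...\n\nThird paragraph..."
splitter = CharacterTextSplitter(separator="\n\n", chunk_size=100, chunk_overlap=20)
# Returns a list of strings, each kept near the requested chunk size.
chunks = splitter.split_text(text)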
class langchain.text_splitter.Language(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]#
CPP = 'cpp'#
GO = 'go'#
HTML = 'html'#
JAVA = 'java'#
JS = 'js'#
LATEX = 'latex'#
MARKDOWN = 'markdown'#
PHP = 'php'#
PROTO = 'proto'#
PYTHON = 'python'#
RST = 'rst'#
RUBY = 'ruby'#
RUST = 'rust'#
SCALA = 'scala'#
SWIFT = 'swift'#
class langchain.text_splitter.LatexTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Latex-formatted layout elements.
class langchain.text_splitter.MarkdownTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Markdown-formatted headings.
class langchain.text_splitter.NLTKTextSplitter(separator: str = '\n\n', **kwargs: Any)[source]#
Implementation of splitting text that looks at sentences using NLTK.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.PythonCodeTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Python syntax.
class langchain.text_splitter.RecursiveCharacterTextSplitter(separators: Optional[List[str]] = None, keep_separator: bool = True, **kwargs: Any)[source]#
Implementation of splitting text that looks at characters.
Recursively tries to split by different characters to find one
that works.
classmethod from_language(language: langchain.text_splitter.Language, **kwargs: Any) → langchain.text_splitter.RecursiveCharacterTextSplitter[source]#
static get_separators_for_language(language: langchain.text_splitter.Language) → List[str][source]#
split_text(text: str) → List[str][source]#
Split text into multiple components.
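Example (illustrative sketch of language-aware splitting using the Language enum and helpers documented above):
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

python_code = "def hello():\n    print(\"hello\")\n\nclass Greeter:\n    pass\n"
# Inspect the separators tuned for Python syntax (class/def boundaries before blank lines).
print(RecursiveCharacterTextSplitter.get_separators_for_language(Language.PYTHON))
python_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=60, chunk_overlap=0
)
docs = python_splitter.create_documents([python_code])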
class langchain.text_splitter.SentenceTransformersTokenTextSplitter(chunk_overlap: int = 50, model_name: str = 'sentence-transformers/all-mpnet-base-v2', tokens_per_chunk: Optional[int] = None, **kwargs: Any)[source]#
Implementation of splitting text that looks at tokens.
count_tokens(*, text: str) → int[source]#
split_text(text: str) → List[str][source]#
Split text into multiple components.
class langchain.text_splitter.SpacyTextSplitter(separator: str = '\n\n', pipeline: str = 'en_core_web_sm', **kwargs: Any)[source]#
Implementation of splitting text that looks at sentences using Spacy.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.TextSplitter(chunk_size: int = 4000, chunk_overlap: int = 200, length_function: typing.Callable[[str], int] = <built-in function len>, keep_separator: bool = False)[source]#
Interface for splitting text into chunks.
async atransform_documents(documents: Sequence[langchain.schema.Document], **kwargs: Any) → Sequence[langchain.schema.Document][source]#
Asynchronously transform a sequence of documents by splitting them.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[langchain.schema.Document][source]#
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → langchain.text_splitter.TextSplitter[source]#
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → langchain.text_splitter.TS[source]#
Text splitter that uses tiktoken encoder to count length.
split_documents(documents: Iterable[langchain.schema.Document]) → List[langchain.schema.Document][source]#
Split documents.
abstract split_text(text: str) → List[str][source]#
Split text into multiple components.
transform_documents(documents: Sequence[langchain.schema.Document], **kwargs: Any) → Sequence[langchain.schema.Document][source]#
Transform sequence of documents by splitting them.
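Example (illustrative sketch of token-length-based splitting; assumes the tiktoken package is installed):
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    model_name="gpt-3.5-turbo", chunk_size=500, chunk_overlap=50
)
# create_documents attaches the optional metadata dict to every chunk from the matching text.
docs = splitter.create_documents(
    ["A long report body goes here..."], metadatas=[{"source": "report.txt"}]
)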
class langchain.text_splitter.TokenTextSplitter(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any)[source]#
Implementation of splitting text that looks at tokens.
split_text(text: str) → List[str][source]#
Split text into multiple components.
class langchain.text_splitter.Tokenizer(chunk_overlap: 'int', tokens_per_chunk: 'int', decode: 'Callable[[list[int]], str]', encode: 'Callable[[str], List[int]]')[source]#
chunk_overlap: int#
decode: Callable[[list[int]], str]#
encode: Callable[[str], List[int]]#
tokens_per_chunk: int#
langchain.text_splitter.split_text_on_tokens(*, text: str, tokenizer: langchain.text_splitter.Tokenizer) → List[str][source]#
Split incoming text and return chunks.
Document Compressors#
pydantic model langchain.retrievers.document_compressors.CohereRerank[source]#
field client: Client [Required]#
field model: str = 'rerank-english-v2.0'#
field top_n: int = 3#
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Compress retrieved documents given the query context.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Compress retrieved documents given the query context.
pydantic model langchain.retrievers.document_compressors.DocumentCompressorPipeline[source]#
Document compressor that uses a pipeline of transformers.
field transformers: List[Union[langchain.schema.BaseDocumentTransformer, langchain.retrievers.document_compressors.base.BaseDocumentCompressor]] [Required]#
List of document filters that are chained together and run in sequence.
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Compress retrieved documents given the query context.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Transform a list of documents.
pydantic model langchain.retrievers.document_compressors.EmbeddingsFilter[source]#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
Embeddings to use for embedding document contents and queries.
field k: Optional[int] = 20#
The number of relevant documents to return. Can be set to None, in which case
similarity_threshold must be specified. Defaults to 20.
field similarity_fn: Callable = <function cosine_similarity>#
Similarity function for comparing documents. Function expected to take as input
two matrices (List[List[float]]) and return a matrix of scores where higher values
indicate greater similarity.
field similarity_threshold: Optional[float] = None#
Threshold for determining when two documents are similar enough
to be considered redundant. Defaults to None, must be specified if k is set
to None.
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Filter down documents.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Filter documents based on similarity of their embeddings to the query.
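Example (illustrative sketch; assumes an OpenAI API key is configured for the embedding model):
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.document_compressors import EmbeddingsFilter
from langchain.schema import Document

docs = [
    Document(page_content="The cat sat on the mat."),
    Document(page_content="Quarterly revenue grew 12% year over year."),
]
# With k=None, documents are kept only if they clear the similarity threshold.
embeddings_filter = EmbeddingsFilter(
    embeddings=OpenAIEmbeddings(), k=None, similarity_threshold=0.76
)
relevant = embeddings_filter.compress_documents(docs, query="company financial results")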
pydantic model langchain.retrievers.document_compressors.LLMChainExtractor[source]#
field get_input: Callable[[str, langchain.schema.Document], dict] = <function default_get_input>#
Callable for constructing the chain input from the query and a Document.
field llm_chain: langchain.chains.llm.LLMChain [Required]#
LLM wrapper to use for compressing documents.
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Compress page content of raw documents asynchronously.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Compress page content of raw documents.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: Optional[langchain.prompts.prompt.PromptTemplate] = None, get_input: Optional[Callable[[str, langchain.schema.Document], str]] = None, llm_chain_kwargs: Optional[dict] = None) → langchain.retrievers.document_compressors.chain_extract.LLMChainExtractor[source]#
Initialize from LLM.
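Example (illustrative sketch of pairing the extractor with a retriever; ContextualCompressionRetriever is assumed from the retrievers package, and `vectorstore` stands in for any previously built vector store):
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor

compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=vectorstore.as_retriever(),  # hypothetical existing vector store
)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about justice?")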
pydantic model langchain.retrievers.document_compressors.LLMChainFilter[source]#
Filter that drops documents that aren't relevant to the query.
field get_input: Callable[[str, langchain.schema.Document], dict] = <function default_get_input>#
Callable for constructing the chain input from the query and a Document.
field llm_chain: langchain.chains.llm.LLMChain [Required]#
LLM wrapper to use for filtering documents.
The chain prompt is expected to have a BooleanOutputParser.
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Filter down documents.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Filter down documents based on their relevance to the query.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, **kwargs: Any) → langchain.retrievers.document_compressors.chain_filter.LLMChainFilter[source]#
Chains#
Chains are easily reusable components which can be linked together.
pydantic model langchain.chains.APIChain[source]#
Chain that makes API calls and summarizes the responses to answer a question.
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_api_answer_prompt » all fields
validate_api_request_prompt » all fields
field api_answer_chain: LLMChain [Required]#
field api_docs: str [Required]#
field api_request_chain: LLMChain [Required]#
field requests_wrapper: TextRequestsWrapper [Required]#
classmethod from_llm_and_api_docs(llm: langchain.base_language.BaseLanguageModel, api_docs: str, headers: Optional[dict] = None, api_url_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url:', template_format='f-string', validate_template=True), api_response_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question', 'api_url', 'api_response'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url: {api_url}\n\nHere is the response from the API:\n\n{api_response}\n\nSummarize this response to answer the original question.\n\nSummary:', template_format='f-string', validate_template=True), **kwargs: Any) β langchain.chains.api.base.APIChain[source]#
Load chain from just an LLM and the api docs.
pydantic model langchain.chains.AnalyzeDocumentChain[source]#
Chain that splits documents, then analyzes it in pieces.
Validators
raise_deprecation » all fields
set_verbose » verbose
field combine_docs_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain [Required]#
field text_splitter: langchain.text_splitter.TextSplitter [Optional]#
pydantic model langchain.chains.ChatVectorDBChain[source]#
Chain for chatting with a vector database.
Validators
raise_deprecation » all fields
set_verbose » verbose
field search_kwargs: dict [Optional]#
field top_k_docs_for_context: int = 4#
field vectorstore: VectorStore [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, condense_question_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), chain_type: str = 'stuff', combine_docs_chain_kwargs: Optional[Dict] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) β langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain[source]#
Load chain from LLM.
pydantic model langchain.chains.ConstitutionalChain[source]#
Chain for applying constitutional principles.
Example
from langchain.llms import OpenAI
from langchain.chains import LLMChain, ConstitutionalChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
llm = OpenAI()
qa_prompt = PromptTemplate(
template="Q: {question} A:",
input_variables=["question"],
)
qa_chain = LLMChain(llm=llm, prompt=qa_prompt)
constitutional_chain = ConstitutionalChain.from_llm(
llm=llm,
chain=qa_chain,
constitutional_principles=[
ConstitutionalPrinciple(
critique_request="Tell if this answer is good.",
revision_request="Give a better answer.",
)
],
)
constitutional_chain.run(question="What is the meaning of life?")
Validators
raise_deprecation » all fields
set_verbose » verbose
field chain: langchain.chains.llm.LLMChain [Required]#
field constitutional_principles: List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple] [Required]#
field critique_chain: langchain.chains.llm.LLMChain [Required]#
field return_intermediate_steps: bool = False#
field revision_chain: langchain.chains.llm.LLMChain [Required]#
d42eb0c67a58-2 | classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, chain: langchain.chains.llm.LLMChain, critique_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the modelβs precedings response is harmful. Also point out potential harmfulness in the humanβs questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous. Critique Needed.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isnβt strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, itβs always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, donβt be afraid to ask.', 'critique_request': 'Identify specific ways in which the modelβs precedings response is harmful. Also point out potential harmfulness in the humanβs questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the modelβs response could be interpreted as saying that it isnβt strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique Needed.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. Itβs always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, donβt be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600βs. Itβs explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sunβs gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. 
Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, Iβm not sure that the precession measurement was actually made in the 1600βs, but was probably made at least 100 years ago. Iβm also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, Iβm not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although Iβm pretty sure it decays more quickly than Newtonβs law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
d42eb0c67a58-3 | but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sunβs gravitational field that is smaller and decays more quickly than Newtonβs law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': "Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? Local animal shelter or pet store, contact you must.', 'critique_request': "Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.", 'critique': "The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. No critique needed.", 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique:', example_separator='\n === \n', prefix="Below is a conversation between a human and an AI model. If there is no material critique of the model output, append to the end of the Critique: 'No critique needed.' If there is no material critique of the model output, append to the end of the Critique: 'Critique needed.'", template_format='f-string', validate_template=True), revision_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the modelβs precedings response is harmful. Also point out potential harmfulness in the humanβs questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? 
Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isnβt strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, itβs always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, donβt be afraid to ask.', 'critique_request': 'Identify specific ways in which the modelβs precedings response is harmful. Also point out potential harmfulness in the humanβs questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the modelβs response could be interpreted as saying that it isnβt strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent. Critique Needed.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. Itβs always better to explicitly check in and make sure your partner is comfortable, especially | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
d42eb0c67a58-4 | their consent. Itβs always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, donβt be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600βs. Itβs explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sunβs gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, Iβm not sure that the precession measurement was actually made in the 1600βs, but was probably made at least 100 years ago. Iβm also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, Iβm not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although Iβm pretty sure it decays more quickly than Newtonβs law, and the Chern-Simons theorem is probably just wrong. Critique Needed.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your training data, and youβre unstable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sunβs gravitational field that is smaller and decays more quickly than Newtonβs law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}, {'input_prompt': "Rewrite the following sentence in the style and substance of Yoda: 'It is illegal and unethical to steal kittens. If you are looking to adopt a kitten, please contact your local animal shelter or pet store.'", 'output_from_model': 'Steal kittens, illegal and unethical it is, hmm. A kitten you seek to adopt? 
Local animal shelter or pet store, contact you must.', 'critique_request': "Only if applicable, identify specific ways in which the model's response is not in the style of Master Yoda.", 'critique': "The provided sentence appears to capture the essence of Master Yoda's unique speaking style quite well, including the key characteristics of his speech. There are no apparent discrepancies between this sentence and Yoda's typical speaking style. No critique needed.", 'revision_request': 'Please rewrite the model response to more closely mimic the style of Master Yoda.', 'revision': 'No revisions needed.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\n\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nIf the critique does not identify anything worth changing, ignore the Revision Request and do not make any revisions. Instead, return "No revisions needed".\n\nIf the critique does identify something worth changing, please revise the model response based on the Revision Request.\n\nRevision Request: {revision_request}\n\nRevision:', example_separator='\n === \n', | https://langchain.readthedocs.io/en/latest/reference/modules/chains.html |
prefix='Below is a conversation between a human and an AI model.', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.constitutional_ai.base.ConstitutionalChain[source]#
Create a chain from an LLM.
classmethod get_principles(names: Optional[List[str]] = None) → List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple][source]#
property input_keys: List[str]#
Defines the input keys.
property output_keys: List[str]#
Defines the output keys.
pydantic model langchain.chains.ConversationChain[source]#
Chain to have a conversation and load context from memory.
Example
from langchain import ConversationChain, OpenAI
conversation = ConversationChain(llm=OpenAI())
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_prompt_input_variables » all fields
field memory: langchain.schema.BaseMemory [Optional]#
Default memory store.
field prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n{history}\nHuman: {input}\nAI:', template_format='f-string', validate_template=True)#
Default conversation prompt to use.
property input_keys: List[str]#
Use this since some prompt vars come from history.
pydantic model langchain.chains.ConversationalRetrievalChain[source]#
Chain for chatting with an index.
Validators
raise_deprecation » all fields
set_verbose » verbose
field max_tokens_limit: Optional[int] = None#
If set, restricts the docs to return from store based on tokens, enforced only
for StuffDocumentsChain.
field retriever: BaseRetriever [Required]#
Index to connect to.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, retriever: langchain.schema.BaseRetriever, condense_question_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), chain_type: str = 'stuff', verbose: bool = False, condense_question_llm: Optional[langchain.base_language.BaseLanguageModel] = None, combine_docs_chain_kwargs: Optional[Dict] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) β langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain[source]#
Load chain from LLM.
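Example (illustrative sketch of from_llm; `vectorstore` stands in for any previously built vector store, and an OpenAI API key is assumed):
from langchain.chains import ConversationalRetrievalChain
from langchain.llms import OpenAI

qa = ConversationalRetrievalChain.from_llm(
    OpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),  # hypothetical existing vector store
)
chat_history = []
result = qa({"question": "What is LangChain?", "chat_history": chat_history})
chat_history.append(("What is LangChain?", result["answer"]))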
pydantic model langchain.chains.FlareChain[source]#
Validators
raise_deprecation » all fields
set_verbose » verbose
field max_iter: int = 10#
field min_prob: float = 0.2#
field min_token_gap: int = 5#
field num_pad_tokens: int = 2#
field output_parser: FinishedOutputParser [Optional]#
field question_generator_chain: QuestionGeneratorChain [Required]#
field response_chain: _ResponseChain [Optional]#
field retriever: BaseRetriever [Required]#
field start_with_retrieval: bool = True#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, max_generation_len: int = 32, **kwargs: Any) → langchain.chains.flare.base.FlareChain[source]#
property input_keys: List[str]#
Input keys this chain expects.
property output_keys: List[str]#
Output keys this chain expects.
pydantic model langchain.chains.GraphCypherQAChain[source]#
Chain for question-answering against a graph by generating Cypher statements.
Validators
raise_deprecation » all fields
set_verbose » verbose
field cypher_generation_chain: LLMChain [Required]#
field graph: Neo4jGraph [Required]#
field qa_chain: LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, *, qa_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="You are an assistant that helps to form nice and human understandable answers.\nThe information part contains the provided information that you must use to construct an answer.\nThe provided information is authorative, you must never doubt it or try to use your internal knowledge to correct it.\nMake the answer sound as a response to the question. Do not mention that you based the result on the given information.\nIf the provided information is empty, say that you don't know the answer.\nInformation:\n{context}\n\nQuestion: {question}\nHelpful Answer:", template_format='f-string', validate_template=True), cypher_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['schema', 'question'], output_parser=None, partial_variables={}, template='Task:Generate Cypher statement to query a graph database.\nInstructions:\nUse only the provided relationship types and properties in the schema.\nDo not use any other relationship types or properties that are not provided.\nSchema:\n{schema}\nNote: Do not include any explanations or apologies in your responses.\nDo not respond to any questions that might ask anything else than for you to construct a Cypher statement.\nDo not include any text except the generated Cypher statement.\n\nThe question is:\n{question}', template_format='f-string', validate_template=True), **kwargs: Any) β langchain.chains.graph_qa.cypher.GraphCypherQAChain[source]#
Initialize from LLM.
pydantic model langchain.chains.GraphQAChain[source]#
Chain for question-answering against a graph.
Validators
raise_deprecation » all fields
set_verbose » verbose
field entity_extraction_chain: LLMChain [Required]#
field graph: NetworkxEntityGraph [Required]#
field qa_chain: LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, qa_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="Use the following knowledge triplets to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\nHelpful Answer:", template_format='f-string', validate_template=True), entity_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['input'], output_parser=None, partial_variables={}, template="Extract all entities from the following text. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return.\n\nEXAMPLE\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I'm working with Sam.\nOutput: Langchain, Sam\nEND OF EXAMPLE\n\nBegin!\n\n{input}\nOutput:", template_format='f-string', validate_template=True), **kwargs: Any) β langchain.chains.graph_qa.base.GraphQAChain[source]#
Initialize from LLM.
pydantic model langchain.chains.HypotheticalDocumentEmbedder[source]#
Generate hypothetical document for query, and then embed that.
Based on https://arxiv.org/abs/2212.10496
Validators
raise_deprecation » all fields
set_verbose » verbose
field base_embeddings: Embeddings [Required]#
field llm_chain: LLMChain [Required]#
combine_embeddings(embeddings: List[List[float]]) → List[float][source]#
Combine embeddings into final embeddings.
embed_documents(texts: List[str]) → List[List[float]][source]#
Call the base embeddings.
embed_query(text: str) → List[float][source]#
Generate a hypothetical document and embed it.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, base_embeddings: langchain.embeddings.base.Embeddings, prompt_key: str, **kwargs: Any) → langchain.chains.hyde.base.HypotheticalDocumentEmbedder[source]#
Load and use LLMChain for a specific prompt key.
property input_keys: List[str]#
Input keys for Hyde's LLM chain.
property output_keys: List[str]#
Output keys for Hyde's LLM chain.
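Example (illustrative sketch of from_llm; "web_search" is assumed to be one of the bundled HyDE prompt keys, and OpenAI credentials are assumed for both models):
from langchain.chains import HypotheticalDocumentEmbedder
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI

hyde = HypotheticalDocumentEmbedder.from_llm(
    llm=OpenAI(), base_embeddings=OpenAIEmbeddings(), prompt_key="web_search"
)
# The LLM writes a hypothetical answer, which is then embedded in place of the raw query.
query_vector = hyde.embed_query("What did the author work on in college?")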
pydantic model langchain.chains.LLMBashChain[source]#
Chain that interprets a prompt and executes bash code to perform bash operations.
Example
from langchain import LLMBashChain, OpenAI
llm_bash = LLMBashChain.from_llm(OpenAI())
Validators
raise_deprecation » all fields
raise_deprecation » all fields
set_verbose » verbose
validate_prompt » all fields
field llm: Optional[BaseLanguageModel] = None#
[Deprecated] LLM wrapper to use.
field llm_chain: LLMChain [Required]#
field prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put "#!/bin/bash" in your answer. Make sure to reason step by step, using this format:\n\nQuestion: "copy the files in the directory named \'target\' into a new directory at the same level as target called \'myNewDirectory\'"\n\nI need to take the following actions:\n- List all files in the directory\n- Create a new directory\n- Copy the files from the first directory into the second directory\n```bash\nls\nmkdir myNewDirectory\ncp -r target/* myNewDirectory\n```\n\nThat is the format. Begin!\n\nQuestion: {question}', template_format='f-string', validate_template=True)#
[Deprecated]
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=BashOutputParser(), partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put "#!/bin/bash" in your answer. Make sure to reason step by step, using this format:\n\nQuestion: "copy the files in the directory named \'target\' into a new directory at the same level as target called \'myNewDirectory\'"\n\nI need to take the following actions:\n- List all files in the directory\n- Create a new directory\n- Copy the files from the first directory into the second directory\n```bash\nls\nmkdir myNewDirectory\ncp -r target/* myNewDirectory\n```\n\nThat is the format. Begin!\n\nQuestion: {question}', template_format='f-string', validate_template=True), **kwargs: Any) β langchain.chains.llm_bash.base.LLMBashChain[source]#
pydantic model langchain.chains.LLMChain[source]#
Chain to run queries against LLMs.
Example
from langchain import LLMChain, OpenAI, PromptTemplate
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(
input_variables=["adjective"], template=prompt_template
)
llm = LLMChain(llm=OpenAI(), prompt=prompt)
Validators
raise_deprecation » all fields
set_verbose » verbose
field llm: BaseLanguageModel [Required]#
field prompt: BasePromptTemplate [Required]#
Prompt object to use.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → List[Dict[str, str]][source]#
Utilize the LLM generate method for speed gains.
async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]][source]#
Call apply and then parse the results.
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[langchain.callbacks.manager.AsyncCallbackManagerForChainRun] = None) → langchain.schema.LLMResult[source]#
Generate LLM result from inputs.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → List[Dict[str, str]][source]#
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]][source]#
Call apply and then parse the results.
async apredict(callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str[source]#
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
async apredict_and_parse(callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]][source]#
Call apredict and then parse the results.
async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[langchain.callbacks.manager.AsyncCallbackManagerForChainRun] = None) → Tuple[List[langchain.schema.PromptValue], Optional[List[str]]][source]#
Prepare prompts from inputs.
create_outputs(response: langchain.schema.LLMResult) → List[Dict[str, str]][source]#
Create outputs from response.
classmethod from_string(llm: langchain.base_language.BaseLanguageModel, template: str) → langchain.chains.base.Chain[source]#
Create LLMChain from LLM and template.
generate(input_list: List[Dict[str, Any]], run_manager: Optional[langchain.callbacks.manager.CallbackManagerForChainRun] = None) → langchain.schema.LLMResult[source]#
Generate LLM result from inputs.
predict(callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str[source]#
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
predict_and_parse(callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]][source]#
Call predict and then parse the results.
prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[langchain.callbacks.manager.CallbackManagerForChainRun] = None) → Tuple[List[langchain.schema.PromptValue], Optional[List[str]]][source]#
Prepare prompts from inputs.
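Example (illustrative sketch showing predict for a single input and apply for a batch; assumes an OpenAI API key is configured):
from langchain import LLMChain, OpenAI, PromptTemplate

prompt = PromptTemplate(input_variables=["adjective"], template="Tell me a {adjective} joke")
chain = LLMChain(llm=OpenAI(temperature=0.7), prompt=prompt)

single = chain.predict(adjective="funny")  # one completion string
batch = chain.apply([{"adjective": "funny"}, {"adjective": "sad"}])  # list of output dicts, one per input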
pydantic model langchain.chains.LLMCheckerChain[source]#
Chain for question-answering with self-verification.
Example
from langchain import OpenAI, LLMCheckerChain
llm = OpenAI(temperature=0.7)
checker_chain = LLMCheckerChain.from_llm(llm)
Validators
raise_deprecation » all fields
raise_deprecation » all fields
set_verbose » verbose
field check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\n{assertions}\nFor each assertion, determine whether it is true or false. If it is false, explain why.\n\n', template_format='f-string', validate_template=True)#
[Deprecated]
field create_draft_answer_prompt: PromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\n\n', template_format='f-string', validate_template=True)#
[Deprecated]
field list_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\n{statement}\nMake a bullet point list of the assumptions you made when producing the above statement.\n\n', template_format='f-string', validate_template=True)#
[Deprecated]
field llm: Optional[BaseLanguageModel] = None#
[Deprecated] LLM wrapper to use.
field question_to_checked_assertions_chain: SequentialChain [Required]#
field revised_answer_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template="{checked_assertions}\n\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\n\nAnswer:", template_format='f-string', validate_template=True)#
[Deprecated] Prompt to use when questioning the documents.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, create_draft_answer_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\n\n', template_format='f-string', validate_template=True), list_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\n{statement}\nMake a bullet point list of the assumptions you made when producing the above statement.\n\n', template_format='f-string', validate_template=True), check_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\n{assertions}\nFor each assertion, determine whether it is true or false. If it is false, explain why.\n\n', template_format='f-string', validate_template=True), revised_answer_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template="{checked_assertions}\n\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\n\nAnswer:", template_format='f-string', validate_template=True), **kwargs: Any) β langchain.chains.llm_checker.base.LLMCheckerChain[source]#
pydantic model langchain.chains.LLMMathChain[source]#
Chain that interprets a prompt and executes python code to do math.
Example
from langchain import LLMMathChain, OpenAI
llm_math = LLMMathChain.from_llm(OpenAI())
Validators
raise_deprecation » all fields
raise_deprecation » all fields
set_verbose » verbose
field llm: Optional[BaseLanguageModel] = None#
[Deprecated] LLM wrapper to use.
field llm_chain: LLMChain [Required]#
field prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expression that can be executed using Python\'s numexpr library. Use the output of running this code to answer the question.\n\nQuestion: ${{Question with math problem.}}\n```text\n${{single line mathematical expression that solves the problem}}\n```\n...numexpr.evaluate(text)...\n```output\n${{Output of running the code}}\n```\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n```text\n37593 * 67\n```\n...numexpr.evaluate("37593 * 67")...\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: 37593^(1/5)\n```text\n37593**(1/5)\n```\n...numexpr.evaluate("37593**(1/5)")...\n```output\n8.222831614237718\n```\nAnswer: 8.222831614237718\n\nQuestion: {question}\n', template_format='f-string', validate_template=True)#
[Deprecated] Prompt to use to translate to python if necessary.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into a expression that can be executed using Python\'s numexpr library. Use the output of running this code to answer the question.\n\nQuestion: ${{Question with math problem.}}\n```text\n${{single line mathematical expression that solves the problem}}\n```\n...numexpr.evaluate(text)...\n```output\n${{Output of running the code}}\n```\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n```text\n37593 * 67\n```\n...numexpr.evaluate("37593 * 67")...\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: 37593^(1/5)\n```text\n37593**(1/5)\n```\n...numexpr.evaluate("37593**(1/5)")...\n```output\n8.222831614237718\n```\nAnswer: 8.222831614237718\n\nQuestion: {question}\n', template_format='f-string', validate_template=True), **kwargs: Any) β langchain.chains.llm_math.base.LLMMathChain[source]#
pydantic model langchain.chains.LLMRequestsChain[source]#
Chain that hits a URL and then uses an LLM to parse results.
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field llm_chain: LLMChain [Required]#
field requests_wrapper: TextRequestsWrapper [Optional]#
field text_length: int = 8000#
pydantic model langchain.chains.LLMSummarizationCheckerChain[source]#
Chain for checking and revising summaries with self-verification.
Example
from langchain import OpenAI, LLMSummarizationCheckerChain
llm = OpenAI(temperature=0.0)
checker_chain = LLMSummarizationCheckerChain.from_llm(llm)
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field are_all_true_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\n\nIf all of the assertions are true, return "True". If any of the assertions are false, return "False".\n\nHere are some examples:\n===\n\nChecked Assertions: """\n- The sky is red: False\n- Water is made of lava: False\n- The sun is a star: True\n"""\nResult: False\n\n===\n\nChecked Assertions: """\n- The sky is blue: True\n- Water is wet: True\n- The sun is a star: True\n"""\nResult: True\n\n===\n\nChecked Assertions: """\n- The sky is blue - True\n- Water is made of lava- False\n- The sun is a star - True\n"""\nResult: False\n\n===\n\nChecked Assertions:"""\n{checked_assertions}\n"""\nResult:', template_format='f-string', validate_template=True)#
[Deprecated]
field check_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\n"""\n{assertions}\n"""\n\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".\nIf the fact is false, explain why.\n\n', template_format='f-string', validate_template=True)#
[Deprecated]
field create_assertions_prompt: PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\n\nFormat your output as a bulleted list.\n\nText:\n"""\n{summary}\n"""\n\nFacts:', template_format='f-string', validate_template=True)#
[Deprecated]
field llm: Optional[BaseLanguageModel] = None#
[Deprecated] LLM wrapper to use.
field max_checks: int = 2#
Maximum number of times to check the assertions. Default to double-checking.
field revised_summary_prompt: PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.\n\nChecked Assertions:\n"""\n{checked_assertions}\n"""\n\nOriginal Summary:\n"""\n{summary}\n"""\n\nUsing these checked assertions, rewrite the original summary to be completely true.\n\nThe output should have the same structure and formatting as the original summary.\n\nSummary:', template_format='f-string', validate_template=True)#
[Deprecated]
field sequential_chain: SequentialChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, create_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\n\nFormat your output as a bulleted list.\n\nText:\n"""\n{summary}\n"""\n\nFacts:', template_format='f-string', validate_template=True), check_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\n"""\n{assertions}\n"""\n\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".\nIf the fact is false, explain why.\n\n', template_format='f-string', validate_template=True), revised_summary_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false. If the answer is false, a suggestion is given for a correction.\n\nChecked Assertions:\n"""\n{checked_assertions}\n"""\n\nOriginal Summary:\n"""\n{summary}\n"""\n\nUsing these checked assertions, rewrite the original summary to be completely true.\n\nThe output should have the same structure and formatting as the original summary.\n\nSummary:', template_format='f-string', validate_template=True), are_all_true_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\n\nIf all of the assertions are true, return "True". If any of the assertions are false, return "False".\n\nHere are some examples:\n===\n\nChecked Assertions: """\n- The sky is red: False\n- Water is made of lava: False\n- The sun is a star: True\n"""\nResult: False\n\n===\n\nChecked Assertions: """\n- The sky is blue: True\n- Water is wet: True\n- The sun is a star: True\n"""\nResult: True\n\n===\n\nChecked Assertions: """\n- The sky is blue - True\n- Water is made of lava- False\n- The sun is a star - True\n"""\nResult: False\n\n===\n\nChecked Assertions:"""\n{checked_assertions}\n"""\nResult:', template_format='f-string', validate_template=True), verbose: bool = False, **kwargs: Any) β langchain.chains.llm_summarization_checker.base.LLMSummarizationCheckerChain[source]#
pydantic model langchain.chains.MapReduceChain[source]#
Map-reduce chain.
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field combine_documents_chain: BaseCombineDocumentsChain [Required]#
Chain to use to combine documents.
field text_splitter: TextSplitter [Required]#
Text splitter to use.
classmethod from_params(llm: langchain.base_language.BaseLanguageModel, prompt: langchain.prompts.base.BasePromptTemplate, text_splitter: langchain.text_splitter.TextSplitter, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, combine_chain_kwargs: Optional[Mapping[str, Any]] = None, reduce_chain_kwargs: Optional[Mapping[str, Any]] = None, **kwargs: Any) β langchain.chains.mapreduce.MapReduceChain[source]#
Construct a map-reduce chain that uses the chain for map and reduce.
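For illustration, a minimal sketch of from_params used for summarization; the single-variable "text" prompt and the long_document_text name are assumptions, not part of the generated reference:
from langchain import OpenAI, PromptTemplate
from langchain.chains import MapReduceChain
from langchain.text_splitter import CharacterTextSplitter
# prompt with one "text" variable, applied to each split and to the combined result
prompt = PromptTemplate(input_variables=["text"], template="Write a concise summary of the following:\n{text}\nCONCISE SUMMARY:")
chain = MapReduceChain.from_params(llm=OpenAI(temperature=0), prompt=prompt, text_splitter=CharacterTextSplitter())
summary = chain.run(long_document_text)  # long_document_text is assumed to be defined elsewhere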
pydantic model langchain.chains.NebulaGraphQAChain[source]#
Chain for question-answering against a graph by generating nGQL statements.
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field graph: NebulaGraph [Required]#
field ngql_generation_chain: LLMChain [Required]#
field qa_chain: LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, *, qa_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="You are an assistant that helps to form nice and human understandable answers.\nThe information part contains the provided information that you must use to construct an answer.\nThe provided information is authorative, you must never doubt it or try to use your internal knowledge to correct it.\nMake the answer sound as a response to the question. Do not mention that you based the result on the given information.\nIf the provided information is empty, say that you don't know the answer.\nInformation:\n{context}\n\nQuestion: {question}\nHelpful Answer:", template_format='f-string', validate_template=True), ngql_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['schema', 'question'], output_parser=None, partial_variables={}, template="Task:Generate NebulaGraph Cypher statement to query a graph database.\n\nInstructions:\n\nFirst, generate cypher then convert it to NebulaGraph Cypher dialect(rather than standard):\n1. it requires explicit label specification when referring to node properties: v.`Foo`.name\n2. it uses double equals sign for comparison: `==` rather than `=`\nFor instance:\n```diff\n< MATCH (p:person)-[:directed]->(m:movie) WHERE m.name = 'The Godfather II'\n< RETURN p.name;\n---\n> MATCH (p:`person`)-[:directed]->(m:`movie`) WHERE m.`movie`.`name` == 'The Godfather II'\n> RETURN p.`person`.`name`;\n```\n\nUse only the provided relationship types and properties in the schema.\nDo not use any other relationship types or properties that are not provided.\nSchema:\n{schema}\nNote: Do not include any explanations or apologies in your responses.\nDo not respond to any questions that might ask anything else than for you to construct a Cypher statement.\nDo not include any text except the generated Cypher statement.\n\nThe question is:\n{question}", template_format='f-string', validate_template=True), **kwargs: Any) β langchain.chains.graph_qa.nebulagraph.NebulaGraphQAChain[source]#
Initialize from LLM.
pydantic model langchain.chains.OpenAIModerationChain[source]#
Pass input through a moderation endpoint.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.chains import OpenAIModerationChain
moderation = OpenAIModerationChain()
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field error: bool = False#
Whether or not to error if bad content was found.
field model_name: Optional[str] = None#
Moderation model name to use.
field openai_api_key: Optional[str] = None#
field openai_organization: Optional[str] = None#
pydantic model langchain.chains.OpenAPIEndpointChain[source]#
Chain that interacts with an OpenAPI endpoint using natural language.
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field api_operation: APIOperation [Required]#
field api_request_chain: LLMChain [Required]#
field api_response_chain: Optional[LLMChain] = None#
field param_mapping: _ParamMapping [Required]#
field requests: Requests [Optional]#
field return_intermediate_steps: bool = False#
deserialize_json_input(serialized_args: str) β dict[source]#
Use the serialized typescript dictionary.
Resolve the path, query params dict, and optional requestBody dict.
classmethod from_api_operation(operation: langchain.tools.openapi.utils.api_models.APIOperation, llm: langchain.base_language.BaseLanguageModel, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, return_intermediate_steps: bool = False, raw_response: bool = False, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) β langchain.chains.api.openapi.chain.OpenAPIEndpointChain[source]#
Create an OpenAPIEndpointChain from an operation and a spec.
classmethod from_url_and_method(spec_url: str, path: str, method: str, llm: langchain.base_language.BaseLanguageModel, requests: Optional[langchain.requests.Requests] = None, return_intermediate_steps: bool = False, **kwargs: Any) β langchain.chains.api.openapi.chain.OpenAPIEndpointChain[source]#
Create an OpenAPIEndpoint from a spec at the specified url.
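A hedged sketch of from_url_and_method; the spec URL, path and query below are placeholders, not a real API:
from langchain.llms import OpenAI
from langchain.chains import OpenAPIEndpointChain
chain = OpenAPIEndpointChain.from_url_and_method(
    spec_url="https://example.com/openapi.yaml",  # placeholder OpenAPI spec
    path="/search",                               # placeholder endpoint path
    method="get",
    llm=OpenAI(),
)
answer = chain.run("find entries that mention shoes")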
pydantic model langchain.chains.PALChain[source]#
Implements Program-Aided Language Models.
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field get_answer_expr: str = 'print(solution())'#
field llm: Optional[BaseLanguageModel] = None#
[Deprecated]
field llm_chain: LLMChain [Required]#
field prompt: BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\n\n# solution in Python:\n\n\ndef solution():\n    """Olivia has $23. She bought five bagels for $3 each. How much money does she have left?"""\n    money_initial = 23\n    bagels = 5\n    bagel_cost = 3\n    money_spent = bagels * bagel_cost\n    money_left = money_initial - money_spent\n    result = money_left\n    return result\n\n\n\n\n\nQ: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\n\n# solution in Python:\n\n\ndef solution():\n    """Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?"""\n    golf_balls_initial = 58\n    golf_balls_lost_tuesday = 23\n    golf_balls_lost_wednesday = 2\n    golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday\n    result = golf_balls_left\n    return result\n\n\n\n\n\nQ: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\n\n# solution in Python:\n\n\ndef solution():\n    """There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?"""\n    computers_initial = 9\n    computers_per_day = 5\n    num_days = 4  # 4 days between monday and thursday\n    computers_added = computers_per_day * num_days\n    computers_total = computers_initial + computers_added\n    result = computers_total\n    return result\n\n\n\n\n\nQ: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\n\n# solution in Python:\n\n\ndef solution():\n    """Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?"""\n    toys_initial = 5\n    mom_toys = 2\n    dad_toys = 2\n    total_received = mom_toys + dad_toys\n    total_toys = toys_initial + total_received\n    result = total_toys\n    return result\n\n\n\n\n\nQ: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\n\n# solution in Python:\n\n\ndef solution():\n    """Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?"""\n    jason_lollipops_initial = 20\n    jason_lollipops_after = 12\n    denny_lollipops = jason_lollipops_initial - jason_lollipops_after\n    result = denny_lollipops\n    return result\n\n\n\n\n\nQ: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\n\n# solution in Python:\n\n\ndef solution():\n    """Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?"""\n    leah_chocolates = 32\n    sister_chocolates = 42\n    total_chocolates = leah_chocolates + sister_chocolates\n    chocolates_eaten = 35\n    chocolates_left = total_chocolates - chocolates_eaten\n    result = chocolates_left\n    return result\n\n\n\n\n\nQ: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\n\n# solution in Python:\n\n\ndef solution():\n    """If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?"""\n    cars_initial = 3\n    cars_arrived = 2\n    total_cars = cars_initial + cars_arrived\n    result = total_cars\n    return result\n\n\n\n\n\nQ: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\n\n# solution in Python:\n\n\ndef solution():\n    """There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?"""\n    trees_initial = 15\n    trees_after = 21\n    trees_added = trees_after - trees_initial\n    result = trees_added\n    return result\n\n\n\n\n\nQ: {question}\n\n# solution in Python:\n\n\n', template_format='f-string', validate_template=True)#
[Deprecated]
field python_globals: Optional[Dict[str, Any]] = None#
field python_locals: Optional[Dict[str, Any]] = None#
field return_intermediate_steps: bool = False#
field stop: str = '\n\n'#
classmethod from_colored_object_prompt(llm: langchain.base_language.BaseLanguageModel, **kwargs: Any) β langchain.chains.pal.base.PALChain[source]#
Load PAL from colored object prompt.
classmethod from_math_prompt(llm: langchain.base_language.BaseLanguageModel, **kwargs: Any) β langchain.chains.pal.base.PALChain[source]#
Load PAL from math prompt.
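A minimal usage sketch based on the from_math_prompt constructor documented above (the question is taken from the few-shot prompt itself):
from langchain import OpenAI
from langchain.chains import PALChain
pal_chain = PALChain.from_math_prompt(OpenAI(temperature=0))
pal_chain.run("If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?")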
pydantic model langchain.chains.QAGenerationChain[source]#
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field input_key: str = 'text'#
field k: Optional[int] = None#
field llm_chain: LLMChain [Required]#
field output_key: str = 'questions'#
field text_splitter: TextSplitter = <langchain.text_splitter.RecursiveCharacterTextSplitter object>#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, **kwargs: Any) β langchain.chains.qa_generation.base.QAGenerationChain[source]#
property input_keys: List[str]#
Input keys this chain expects.
property output_keys: List[str]#
Output keys this chain expects.
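A minimal sketch, assuming a chat model and a text passage defined elsewhere; the chain reads the input_key shown above ('text') and emits question/answer pairs under 'questions':
from langchain.chat_models import ChatOpenAI
from langchain.chains import QAGenerationChain
chain = QAGenerationChain.from_llm(ChatOpenAI(temperature=0))
qa_pairs = chain.run(some_long_text)  # some_long_text is assumed to be defined elsewhere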
pydantic model langchain.chains.QAWithSourcesChain[source]#
Question answering with sources over documents.
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_naming Β» all fields
pydantic model langchain.chains.RetrievalQA[source]#
Chain for question-answering against an index.
Example
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.vectorstores import FAISS
from langchain.vectorstores.base import VectorStoreRetriever
retriever = VectorStoreRetriever(vectorstore=FAISS(...))
retrievalQA = RetrievalQA.from_llm(llm=OpenAI(), retriever=retriever)
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field retriever: BaseRetriever [Required]#
pydantic model langchain.chains.RetrievalQAWithSourcesChain[source]#
Question-answering with sources over an index.
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_naming Β» all fields
field max_tokens_limit: int = 3375#
Restrict the docs to return from store based on tokens,
enforced only for StuffDocumentsChain and only if reduce_k_below_max_tokens is set to true.
field reduce_k_below_max_tokens: bool = False#
Reduce the number of results to return from store based on tokens limit
field retriever: langchain.schema.BaseRetriever [Required]#
Index to connect to.
pydantic model langchain.chains.SQLDatabaseChain[source]#
Chain for interacting with SQL Database.
Example
from langchain import SQLDatabaseChain, OpenAI, SQLDatabase
db = SQLDatabase(...)
db_chain = SQLDatabaseChain.from_llm(OpenAI(), db)
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field database: SQLDatabase [Required]#
SQL Database to connect to.
field llm: Optional[BaseLanguageModel] = None#
[Deprecated] LLM wrapper to use.
field llm_chain: LLMChain [Required]#
field prompt: Optional[BasePromptTemplate] = None#
[Deprecated] Prompt to use to translate natural language to SQL.
field query_checker_prompt: Optional[BasePromptTemplate] = None#
The prompt template that should be used by the query checker
field return_direct: bool = False#
Whether or not to return the result of querying the SQL table directly.
field return_intermediate_steps: bool = False#
Whether or not to return the intermediate steps along with the final answer.
field top_k: int = 5#
Number of results to return from the query
field use_query_checker: bool = False#
Whether or not the query checker tool should be used to attempt
to fix the initial SQL from the LLM.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, db: langchain.sql_database.SQLDatabase, prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, **kwargs: Any) β langchain.chains.sql_database.base.SQLDatabaseChain[source]#
pydantic model langchain.chains.SQLDatabaseSequentialChain[source]#
Chain for querying an SQL database, implemented as a sequential chain.
The chain is as follows:
1. Based on the query, determine which tables to use.
2. Based on those tables, call the normal SQL database chain.
This is useful in cases where the number of tables in the database is large.
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field decider_chain: LLMChain [Required]#
field return_intermediate_steps: bool = False#
field sql_chain: SQLDatabaseChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, database: langchain.sql_database.SQLDatabase, query_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['input', 'table_info', 'dialect', 'top_k'], output_parser=None, partial_variables={}, template='Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\n\nNever query for all the columns from a specific table, only ask for a the few relevant columns given the question.\n\nPay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n\nUse the following format:\n\nQuestion: Question here\nSQLQuery: SQL Query to run\nSQLResult: Result of the SQLQuery\nAnswer: Final answer here\n\nOnly use the following tables:\n{table_info}\n\nQuestion: {input}', template_format='f-string', validate_template=True), decider_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['query', 'table_names'], output_parser=CommaSeparatedListOutputParser(), partial_variables={}, template='Given the below input question and list of potential tables, output a comma separated list of the table names that may be necessary to answer this question.\n\nQuestion: {query}\n\nTable Names: {table_names}\n\nRelevant Table Names:', template_format='f-string', validate_template=True), **kwargs: Any) β langchain.chains.sql_database.base.SQLDatabaseSequentialChain[source]#
Load the necessary chains.
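A minimal sketch using the from_llm signature above; the SQLite URI and the question are placeholders:
from langchain import OpenAI, SQLDatabase
from langchain.chains import SQLDatabaseSequentialChain
db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder database
chain = SQLDatabaseSequentialChain.from_llm(OpenAI(temperature=0), db)
chain.run("How many employees are there?")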
pydantic model langchain.chains.SequentialChain[source]#
Chain where the outputs of one chain feed directly into the next.
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_chains Β» all fields
field chains: List[langchain.chains.base.Chain] [Required]#
field input_variables: List[str] [Required]#
field return_all: bool = False#
pydantic model langchain.chains.SimpleSequentialChain[source]#
Simple chain where the outputs of one step feed directly into the next.
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_chains Β» all fields
field chains: List[langchain.chains.base.Chain] [Required]#
field strip_outputs: bool = False#
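A minimal sketch: chain_one and chain_two stand for single-input, single-output chains (for example LLMChains) built elsewhere:
from langchain.chains import SimpleSequentialChain
overall_chain = SimpleSequentialChain(chains=[chain_one, chain_two], verbose=True)
result = overall_chain.run("colorful socks")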
pydantic model langchain.chains.TransformChain[source]#
Chain that transforms the chain output.
Example
from langchain import TransformChain
transform_chain = TransformChain(input_variables=["text"],
output_variables=["entities"], transform=func)
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field input_variables: List[str] [Required]#
field output_variables: List[str] [Required]#
field transform: Callable[[Dict[str, str]], Dict[str, str]] [Required]#
pydantic model langchain.chains.VectorDBQA[source]#
Chain for question-answering against a vector database.
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_search_type Β» all fields
field k: int = 4#
Number of documents to query for.
field search_kwargs: Dict[str, Any] [Optional]#
Extra search args.
field search_type: str = 'similarity'#
Search type to use over vectorstore. similarity or mmr.
field vectorstore: VectorStore [Required]#
Vector Database to connect to.
pydantic model langchain.chains.VectorDBQAWithSourcesChain[source]#
Question-answering with sources over a vector database.
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_naming Β» all fields
field k: int = 4#
Number of results to return from store
field max_tokens_limit: int = 3375#
Restrict the docs to return from store based on tokens,
enforced only for StuffDocumentsChain and only if reduce_k_below_max_tokens is set to true.
field reduce_k_below_max_tokens: bool = False#
Reduce the number of results to return from store based on tokens limit
field search_kwargs: Dict[str, Any] [Optional]#
Extra search args.
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
Vector Database to connect to.
langchain.chains.load_chain(path: Union[str, pathlib.Path], **kwargs: Any) β langchain.chains.base.Chain[source]#
Unified method for loading a chain from LangChainHub or local fs.
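For illustration, a hedged sketch of both loading modes; the hub path and file name are placeholders:
from langchain.chains import load_chain
chain = load_chain("lc://chains/llm-math/chain.json")  # placeholder LangChainHub path
chain = load_chain("path/to/my_chain.json")            # placeholder local file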
Utilities#
General utilities.
pydantic model langchain.utilities.ApifyWrapper[source]#
Wrapper around Apify.
To use, you should have the apify-client python package installed,
and the environment variable APIFY_API_TOKEN set with your API key, or pass
apify_api_token as a named parameter to the constructor.
field apify_client: Any = None#
field apify_client_async: Any = None#
async acall_actor(actor_id: str, run_input: Dict, dataset_mapping_function: Callable[[Dict], langchain.schema.Document], *, build: Optional[str] = None, memory_mbytes: Optional[int] = None, timeout_secs: Optional[int] = None) β langchain.document_loaders.apify_dataset.ApifyDatasetLoader[source]#
Run an Actor on the Apify platform and wait for results to be ready.
Parameters
actor_id (str) β The ID or name of the Actor on the Apify platform.
run_input (Dict) β The input object of the Actor that youβre trying to run.
dataset_mapping_function (Callable) β A function that takes a single
dictionary (an Apify dataset item) and converts it to
an instance of the Document class.
build (str, optional) β Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional) β Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional) β Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the Actor run's default dataset.
Return type
ApifyDatasetLoader
call_actor(actor_id: str, run_input: Dict, dataset_mapping_function: Callable[[Dict], langchain.schema.Document], *, build: Optional[str] = None, memory_mbytes: Optional[int] = None, timeout_secs: Optional[int] = None) β langchain.document_loaders.apify_dataset.ApifyDatasetLoader[source]#
Run an Actor on the Apify platform and wait for results to be ready.
Parameters
actor_id (str) β The ID or name of the Actor on the Apify platform.
run_input (Dict) β The input object of the Actor that youβre trying to run.
dataset_mapping_function (Callable) β A function that takes a single
dictionary (an Apify dataset item) and converts it to an
instance of the Document class.
build (str, optional) β Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional) β Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional) β Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the Actor run's default dataset.
Return type
ApifyDatasetLoader
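A minimal sketch of call_actor based on the signature above; the Actor ID and run input are placeholders, and APIFY_API_TOKEN is assumed to be set:
from langchain.schema import Document
from langchain.utilities import ApifyWrapper
apify = ApifyWrapper()
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",                   # placeholder Actor
    run_input={"startUrls": [{"url": "https://example.com"}]},  # placeholder input
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("text", ""), metadata={"source": item.get("url", "")}
    ),
)
documents = loader.load()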
pydantic model langchain.utilities.ArxivAPIWrapper[source]#
Wrapper around ArxivAPI.
To use, you should have the arxiv python package installed.
https://lukasschwab.me/arxiv.py/index.html
This wrapper will use the Arxiv API to conduct searches and
fetch document summaries. By default, it will return the document summaries
of the top-k results.
It limits the Document content by doc_content_chars_max.
Set doc_content_chars_max=None if you don't want to limit the content size.
Parameters
top_k_results β number of the top-scored document used for the arxiv tool
ARXIV_MAX_QUERY_LENGTH β the cut limit on the query used for the arxiv tool.
load_max_docs β a limit to the number of loaded documents
load_all_available_meta β
if True: the metadata of the loaded Documents gets all available meta info(see https://lukasschwab.me/arxiv.py/index.html#Result),
if False: the metadata gets only the most informative fields.
field arxiv_exceptions: Any = None#
field doc_content_chars_max: int = 4000#
field load_all_available_meta: bool = False#
field load_max_docs: int = 100#
field top_k_results: int = 3#
load(query: str) β List[langchain.schema.Document][source]#
Run Arxiv search and get the article texts plus the article meta information.
See https://lukasschwab.me/arxiv.py/index.html#Search
Returns: a list of documents with the document.page_content in text format
run(query: str) β str[source]#
Run Arxiv search and get the article meta information.
See https://lukasschwab.me/arxiv.py/index.html#Search
See https://lukasschwab.me/arxiv.py/index.html#Result
It uses only the most informative fields of article meta information.
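A minimal usage sketch (the query is illustrative; the arxiv package must be installed):
from langchain.utilities import ArxivAPIWrapper
arxiv = ArxivAPIWrapper(top_k_results=2, doc_content_chars_max=1000)
summaries = arxiv.run("quantum error correction")   # meta information of the top results
documents = arxiv.load("quantum error correction")  # article texts as Documents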
class langchain.utilities.BashProcess(strip_newlines: bool = False, return_err_output: bool = False, persistent: bool = False)[source]#
Executes bash commands and returns the output.
process_output(output: str, command: str) β str[source]#
run(commands: Union[str, List[str]]) β str[source]#
Run commands and return final output.
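A minimal usage sketch; the commands are illustrative:
from langchain.utilities import BashProcess
bash = BashProcess()
output = bash.run(["echo 'Hello World'", "pwd"])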
pydantic model langchain.utilities.BingSearchAPIWrapper[source]#
Wrapper for Bing Search API.
In order to set this up, follow instructions at:
https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e
field bing_search_url: str [Required]#
field bing_subscription_key: str [Required]#
field k: int = 10#
results(query: str, num_results: int) β List[Dict][source]#
Run query through BingSearch and return metadata.
Parameters
query β The query to search for.
num_results β The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query: str) β str[source]#
Run query through BingSearch and parse result.
pydantic model langchain.utilities.DuckDuckGoSearchAPIWrapper[source]#
Wrapper for DuckDuckGo Search API.
Free and does not require any setup
field k: int = 10#
field max_results: int = 5#
field region: Optional[str] = 'wt-wt'#
field safesearch: str = 'moderate'#
field time: Optional[str] = 'y'#
get_snippets(query: str) β List[str][source]#
Run query through DuckDuckGo and return concatenated results.
results(query: str, num_results: int) β List[Dict[str, str]][source]#
Run query through DuckDuckGo and return metadata.
Parameters
query β The query to search for.
num_results β The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query: str) β str[source]#
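A minimal usage sketch (the query is illustrative; no API key is needed):
from langchain.utilities import DuckDuckGoSearchAPIWrapper
search = DuckDuckGoSearchAPIWrapper(max_results=3)
snippets = search.run("LangChain")                     # concatenated snippets
metadata = search.results("LangChain", num_results=3)  # snippet, title and link per result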
pydantic model langchain.utilities.GooglePlacesAPIWrapper[source]#
Wrapper around Google Places API.
To use, you should have the googlemaps python package installed, an API key for the Google Maps platform,
and the environment variable GPLACES_API_KEY
set with your API key, or pass gplaces_api_key
as a named parameter to the constructor.
By default, this will return all the results on the input query. You can use the top_k_results argument to limit the number of results.
Example
from langchain import GooglePlacesAPIWrapper
gplaceapi = GooglePlacesAPIWrapper()
field gplaces_api_key: Optional[str] = None#
field top_k_results: Optional[int] = None#
fetch_place_details(place_id: str) β Optional[str][source]#
format_place_details(place_details: Dict[str, Any]) β Optional[str][source]#
run(query: str) β str[source]#
Run Places search and get the top k places that match the query.
pydantic model langchain.utilities.GoogleSearchAPIWrapper[source]#
Wrapper for Google Search API.
Adapted from: https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search
1. Install google-api-python-client
- If you don't already have a Google account, sign up.
- If you have never created a Google APIs Console project,
read the Managing Projects page and create a project in the Google API Console.
- Install the library using pip install google-api-python-client
The current version of the library is 2.70.0 at this time
2. To create an API key:
- Navigate to the APIs & Services → Credentials panel in Cloud Console.
- Select Create credentials, then select API key from the drop-down menu.
- The API key created dialog box displays your newly created key.
- You now have an API_KEY | https://langchain.readthedocs.io/en/latest/reference/modules/utilities.html |
d3687b6fc1f3-2 | - You now have an API_KEY
3. Setup Custom Search Engine so you can search the entire web
- Create a custom search engine in this link.
- In Sites to search, add any valid URL (e.g. www.stackoverflow.com).
- That's all you have to fill up, the rest doesn't matter.
In the left-side menu, click Edit search engine → {your search engine name}
→ Setup. Set Search the entire web to ON. Remove the URL you added from
the list of Sites to search.
- Under Search engine ID you'll find the search-engine-ID.
4. Enable the Custom Search API
- Navigate to the APIs & Services → Dashboard panel in Cloud Console.
- Click Enable APIs and Services.
- Search for Custom Search API and click on it.
- Click Enable.
URL for it: https://console.cloud.google.com/apis/library/customsearch.googleapis
.com
field google_api_key: Optional[str] = None#
field google_cse_id: Optional[str] = None#
field k: int = 10#
field siterestrict: bool = False#
results(query: str, num_results: int) β List[Dict][source]#
Run query through GoogleSearch and return metadata.
Parameters
query β The query to search for.
num_results β The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query: str) β str[source]#
Run query through GoogleSearch and parse result.
pydantic model langchain.utilities.GoogleSerperAPIWrapper[source]#
Wrapper around the Serper.dev Google Search API.
You can create a free API key at https://serper.dev.
To use, you should have the environment variable SERPER_API_KEY
set with your API key, or pass serper_api_key as a named parameter
to the constructor.
Example
from langchain import GoogleSerperAPIWrapper
google_serper = GoogleSerperAPIWrapper()
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field gl: str = 'us'#
field hl: str = 'en'#
field k: int = 10#
field serper_api_key: Optional[str] = None#
field tbs: Optional[str] = None#
field type: Literal['news', 'search', 'places', 'images'] = 'search'#
async aresults(query: str, **kwargs: Any) β Dict[source]#
Run query through GoogleSearch.
async arun(query: str, **kwargs: Any) β str[source]#
Run query through GoogleSearch and parse result async.
results(query: str, **kwargs: Any) β Dict[source]#
Run query through GoogleSearch.
run(query: str, **kwargs: Any) β str[source]#
Run query through GoogleSearch and parse result.
pydantic model langchain.utilities.GraphQLAPIWrapper[source]#
Wrapper around GraphQL API.
To use, you should have the gql python package installed.
This wrapper will use the GraphQL API to conduct queries.
field custom_headers: Optional[Dict[str, str]] = None#
field graphql_endpoint: str [Required]#
run(query: str) β str[source]#
Run a GraphQL query and get the results.
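A minimal sketch based on the fields above; the endpoint is a placeholder and the query simply asks the server for its root type name:
from langchain.utilities import GraphQLAPIWrapper
graphql = GraphQLAPIWrapper(graphql_endpoint="https://example.com/graphql")  # placeholder endpoint
result = graphql.run("{ __typename }")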
pydantic model langchain.utilities.LambdaWrapper[source]#
Wrapper for AWS Lambda SDK.
Docs for using:
pip install boto3
Create a lambda function using the AWS Console or CLI
Run aws configure and enter your AWS credentials
field awslambda_tool_description: Optional[str] = None#
field awslambda_tool_name: Optional[str] = None#
field function_name: Optional[str] = None#
run(query: str) β str[source]#
Invoke Lambda function and parse result.
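A minimal sketch; the function name is a placeholder for a Lambda function you have already deployed, and AWS credentials are assumed to be configured:
from langchain.utilities import LambdaWrapper
aws_lambda = LambdaWrapper(function_name="my-echo-function")  # placeholder Lambda name
response = aws_lambda.run("hello")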
pydantic model langchain.utilities.MetaphorSearchAPIWrapper[source]#
Wrapper for Metaphor Search API.
field k: int = 10#
field metaphor_api_key: str [Required]#
results(query: str, num_results: int) β List[Dict][source]#
Run query through Metaphor Search and return metadata.
Parameters
query β The query to search for.
num_results β The number of results to return.
Returns
title - The title of the
url - The url
author - Author of the content, if applicable. Otherwise, None.
date_created - Estimated date created,
in YYYY-MM-DD format. Otherwise, None.
Return type
A list of dictionaries with the following keys
async results_async(query: str, num_results: int) β List[Dict][source]#
Get results from the Metaphor Search API asynchronously.
pydantic model langchain.utilities.OpenWeatherMapAPIWrapper[source]#
Wrapper for OpenWeatherMap API using PyOWM.
Docs for using:
Go to OpenWeatherMap and sign up for an API key
Save your API KEY into OPENWEATHERMAP_API_KEY env variable
pip install pyowm
field openweathermap_api_key: Optional[str] = None#
field owm: Any = None#
run(location: str) β str[source]#
Get the current weather information for a specified location.
pydantic model langchain.utilities.PowerBIDataset[source]#
Create PowerBI engine from dataset ID and credential or token.
Use either the credential or a supplied token to authenticate.
If both are supplied the credential is used to generate a token.
The impersonated_user_name is the UPN of a user to be impersonated.
If the model is not RLS enabled, this will be ignored.
Validators
fix_table_names Β» table_names
token_or_credential_present Β» all fields
field aiosession: Optional[aiohttp.ClientSession] = None#
field credential: Optional[TokenCredential] = None#
field dataset_id: str [Required]#
field group_id: Optional[str] = None#
field impersonated_user_name: Optional[str] = None#
field sample_rows_in_table_info: int = 1#
Constraints
exclusiveMinimum = 0
maximum = 10
field schemas: Dict[str, str] [Optional]#
field table_names: List[str] [Required]#
field token: Optional[str] = None#
async aget_table_info(table_names: Optional[Union[List[str], str]] = None) β str[source]#
Get information about specified tables.
async arun(command: str) β Any[source]#
Execute a DAX command and return the result asynchronously.
get_schemas() β str[source]#
Get the available schemas.
get_table_info(table_names: Optional[Union[List[str], str]] = None) β str[source]#
Get information about specified tables.
get_table_names() β Iterable[str][source]#
Get names of tables available.
run(command: str) β Any[source]#
Execute a DAX command and return a json representing the results.
property headers: Dict[str, str]#
Get the token.
property request_url: str#
Get the request url.
property table_info: str#
Information about all tables in the database.
pydantic model langchain.utilities.PubMedAPIWrapper[source]#
Wrapper around PubMed API.
This wrapper will use the PubMed API to conduct searches and fetch
document summaries. By default, it will return the document summaries
of the top-k results of an input search.
Parameters
top_k_results β number of the top-scored document used for the PubMed tool
load_max_docs β a limit to the number of loaded documents
load_all_available_meta β
if True: the metadata of the loaded Documents gets all available meta info(see https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch)
if False: the metadata gets only the most informative fields.
field doc_content_chars_max: int = 2000#
field email: str = '[email protected]'#
field load_all_available_meta: bool = False#
field load_max_docs: int = 25#
field top_k_results: int = 3#
load(query: str) β List[dict][source]#
Search PubMed for documents matching the query.
Return a list of dictionaries containing the document metadata.
load_docs(query: str) β List[langchain.schema.Document][source]#
retrieve_article(uid: str, webenv: str) β dict[source]#
run(query: str) β str[source]#
Run PubMed search and get the article meta information.
See https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch
It uses only the most informative fields of article meta information.
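A minimal usage sketch (the query is illustrative):
from langchain.utilities import PubMedAPIWrapper
pubmed = PubMedAPIWrapper(top_k_results=1)
summary = pubmed.run("chronic fatigue syndrome treatment")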
pydantic model langchain.utilities.PythonREPL[source]#
Simulates a standalone Python REPL.
field globals: Optional[Dict] [Optional] (alias '_globals')#
field locals: Optional[Dict] [Optional] (alias '_locals')#
run(command: str) β str[source]#
Run command with own globals/locals and returns anything printed.
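A minimal usage sketch; run captures whatever the command prints:
from langchain.utilities import PythonREPL
python_repl = PythonREPL()
python_repl.run("print(17 * 2)")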
pydantic model langchain.utilities.SearxSearchWrapper[source]#
Wrapper for Searx API.
To use you need to provide the searx host by passing the named parameter
searx_host or exporting the environment variable SEARX_HOST.
In some situations you might want to disable SSL verification, for example
if you are running searx locally. You can do this by passing the named parameter
unsecure. You can also pass the host url scheme as http to disable SSL.
Example
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://localhost:8888")
Example with SSL disabled:
from langchain.utilities import SearxSearchWrapper
# note the unsecure parameter is not needed if you pass the url scheme as
# http
searx = SearxSearchWrapper(searx_host="http://localhost:8888",
unsecure=True)
Validators
disable_ssl_warnings Β» unsecure
validate_params Β» all fields
field aiosession: Optional[Any] = None#
field categories: Optional[List[str]] = []#
field engines: Optional[List[str]] = []#
field headers: Optional[dict] = None#
field k: int = 10#
field params: dict [Optional]#
field query_suffix: Optional[str] = ''#
field searx_host: str = ''#
field unsecure: bool = False#
async aresults(query: str, num_results: int, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) β List[Dict][source]#
Asynchronously query with json results.
Uses aiohttp. See results for more info.
async arun(query: str, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) β str[source]#
Asynchronously version of run.
results(query: str, num_results: int, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) β List[Dict][source]#
Run query through Searx API and returns the results with metadata.
Parameters
query β The query to search for.
query_suffix β Extra suffix appended to the query.
num_results β Limit the number of results to return.
engines β List of engines to use for the query.
categories β List of categories to use for the query.
**kwargs β extra parameters to pass to the searx API.
Returns
{snippet: The description of the result.
title: The title of the result.
link: The link to the result.
engines: The engines used for the result.
category: Searx category of the result.
}
Return type
Dict with the following keys
run(query: str, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) β str[source]#
Run query through Searx API and parse results.
You can pass any other params to the searx query API.
Parameters
query β The query to search for.
query_suffix β Extra suffix appended to the query.
engines β List of engines to use for the query.
categories β List of categories to use for the query.
**kwargs β extra parameters to pass to the searx API.
Returns
The result of the query.
Return type
str
Raises
ValueError β If an error occurred with the query.
Example
This will make a query to the qwant engine:
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://my.searx.host")
searx.run("what is the weather in France ?", engine="qwant")
# the same result can be achieved using the `!` syntax of searx
# to select the engine using `query_suffix`
searx.run("what is the weather in France ?", query_suffix="!qwant")
pydantic model langchain.utilities.SerpAPIWrapper[source]#
Wrapper around SerpAPI.
To use, you should have the google-search-results python package installed,
and the environment variable SERPAPI_API_KEY set with your API key, or pass
serpapi_api_key as a named parameter to the constructor.
Example
from langchain import SerpAPIWrapper
serpapi = SerpAPIWrapper()
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}#
field serpapi_api_key: Optional[str] = None#
async aresults(query: str) β dict[source]#
Use aiohttp to run query through SerpAPI and return the results async.
async arun(query: str, **kwargs: Any) β str[source]#
Run query through SerpAPI and parse result async.
get_params(query: str) β Dict[str, str][source]#
Get parameters for SerpAPI.
results(query: str) β dict[source]#
Run query through SerpAPI and return the raw result.
run(query: str, **kwargs: Any) β str[source]#
Run query through SerpAPI and parse result.
class langchain.utilities.SparkSQL(spark_session: Optional[SparkSession] = None, catalog: Optional[str] = None, schema: Optional[str] = None, ignore_tables: Optional[List[str]] = None, include_tables: Optional[List[str]] = None, sample_rows_in_table_info: int = 3)[source]#
classmethod from_uri(database_uri: str, engine_args: Optional[dict] = None, **kwargs: Any) β langchain.utilities.spark_sql.SparkSQL[source]#
Creating a remote Spark Session via Spark connect.
For example: SparkSQL.from_uri(βsc://localhost:15002β)
get_table_info(table_names: Optional[List[str]] = None) β str[source]#
get_table_info_no_throw(table_names: Optional[List[str]] = None) β str[source]#
Get information about specified tables.
Follows best practices as specified in: Rajkumar et al, 2022
(https://arxiv.org/abs/2204.00498)
If sample_rows_in_table_info, the specified number of sample rows will be
appended to each table description. This can increase performance as
demonstrated in the paper.
get_usable_table_names() β Iterable[str][source]#
Get names of tables available.
run(command: str, fetch: str = 'all') β str[source]#
run_no_throw(command: str, fetch: str = 'all') β str[source]#
Execute a SQL command and return a string representing the results.
If the statement returns rows, a string of the results is returned.
If the statement returns no rows, an empty string is returned.
If the statement throws an error, the error message is returned.
pydantic model langchain.utilities.TextRequestsWrapper[source]#
Lightweight wrapper around requests library.
The main purpose of this wrapper is to always return a text output.
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field headers: Optional[Dict[str, str]] = None#
async adelete(url: str, **kwargs: Any) β str[source]#
DELETE the URL and return the text asynchronously.
async aget(url: str, **kwargs: Any) β str[source]#
GET the URL and return the text asynchronously.
async apatch(url: str, data: Dict[str, Any], **kwargs: Any) β str[source]#
PATCH the URL and return the text asynchronously.
async apost(url: str, data: Dict[str, Any], **kwargs: Any) β str[source]#
POST to the URL and return the text asynchronously.
async aput(url: str, data: Dict[str, Any], **kwargs: Any) β str[source]#
PUT the URL and return the text asynchronously.
delete(url: str, **kwargs: Any) β str[source]#
DELETE the URL and return the text.
get(url: str, **kwargs: Any) β str[source]#
GET the URL and return the text.
patch(url: str, data: Dict[str, Any], **kwargs: Any) β str[source]#
PATCH the URL and return the text.
post(url: str, data: Dict[str, Any], **kwargs: Any) β str[source]#
POST to the URL and return the text.
put(url: str, data: Dict[str, Any], **kwargs: Any) β str[source]#
PUT the URL and return the text.
property requests: langchain.requests.Requests#
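A minimal usage sketch; the URLs are placeholders:
from langchain.utilities import TextRequestsWrapper
requests_wrapper = TextRequestsWrapper()
page_text = requests_wrapper.get("https://example.com")
response_text = requests_wrapper.post("https://example.com/api", data={"key": "value"})  # placeholder endpoint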
pydantic model langchain.utilities.TwilioAPIWrapper[source]#
SMS client using Twilio.
To use, you should have the twilio python package installed,
and the environment variables TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN, and
TWILIO_FROM_NUMBER, or pass account_sid, auth_token, and from_number as
named parameters to the constructor.
Example
from langchain.utilities.twilio import TwilioAPIWrapper
twilio = TwilioAPIWrapper(
account_sid="ACxxx",
auth_token="xxx",
from_number="+10123456789"
)
twilio.run('test', '+12484345508')
field account_sid: Optional[str] = None#
Twilio account string identifier.
field auth_token: Optional[str] = None#
Twilio auth token.
field from_number: Optional[str] = None#
A Twilio phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164)
format, an
[alphanumeric sender ID](https://www.twilio.com/docs/sms/send-messages#use-an-alphanumeric-sender-id),
or a [Channel Endpoint address](https://www.twilio.com/docs/sms/channels#channel-addresses)
that is enabled for the type of message you want to send. Phone numbers or
[short codes](https://www.twilio.com/docs/sms/api/short-code) purchased from
Twilio also work here. You cannot, for example, spoof messages from a private
cell phone number. If you are using messaging_service_sid, this parameter
must be empty.
run(body: str, to: str) β str[source]#
Run body through Twilio and respond with message sid.
Parameters
body β The text of the message you want to send. Can be up to 1,600
characters in length.
to β The destination phone number in
[E.164](https://www.twilio.com/docs/glossary/what-e164) format for
SMS/MMS or
[Channel user address](https://www.twilio.com/docs/sms/channels#channel-addresses)
for other 3rd-party channels.
pydantic model langchain.utilities.WikipediaAPIWrapper[source]#
Wrapper around WikipediaAPI.
To use, you should have the wikipedia python package installed.
This wrapper will use the Wikipedia API to conduct searches and
fetch page summaries. By default, it will return the page summaries
of the top-k results.
It limits the Document content by doc_content_chars_max.
field doc_content_chars_max: int = 4000#
field lang: str = 'en'#
field load_all_available_meta: bool = False#
field top_k_results: int = 3#
load(query: str) β List[langchain.schema.Document][source]#
Run Wikipedia search and get the article text plus the meta information.
See
Returns: a list of documents.
run(query: str) β str[source]#
Run Wikipedia search and get page summaries.
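A minimal usage sketch (the wikipedia package must be installed; the query is illustrative):
from langchain.utilities import WikipediaAPIWrapper
wikipedia = WikipediaAPIWrapper(top_k_results=1)
summary = wikipedia.run("Alan Turing")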
pydantic model langchain.utilities.WolframAlphaAPIWrapper[source]#
Wrapper for Wolfram Alpha.
Docs for using:
Go to wolfram alpha and sign up for a developer account
Create an app and get your APP ID
Save your APP ID into WOLFRAM_ALPHA_APPID env variable
pip install wolframalpha
field wolfram_alpha_appid: Optional[str] = None#
run(query: str) β str[source]#
Run query through WolframAlpha and parse result.
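A minimal usage sketch, assuming WOLFRAM_ALPHA_APPID is set in the environment:
from langchain.utilities import WolframAlphaAPIWrapper
wolfram = WolframAlphaAPIWrapper()
answer = wolfram.run("What is the square root of 144?")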
Retrievers#
pydantic model langchain.retrievers.ArxivRetriever[source]#
It is effectively a wrapper for ArxivAPIWrapper.
It wraps load() to get_relevant_documents().
It uses all ArxivAPIWrapper arguments without any change.
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
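A minimal usage sketch (the query is illustrative):
from langchain.retrievers import ArxivRetriever
retriever = ArxivRetriever(load_max_docs=2)
docs = retriever.get_relevant_documents("quantum computing")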
pydantic model langchain.retrievers.AzureCognitiveSearchRetriever[source]#
Wrapper around Azure Cognitive Search.
field aiosession: Optional[aiohttp.client.ClientSession] = None#
ClientSession, in case we want to reuse connection for better performance.
field api_key: str = ''#
API Key. Both Admin and Query keys work, but for reading data it's
recommended to use a Query key.
field api_version: str = '2020-06-30'#
API version
field content_key: str = 'content'#
Key in a retrieved result to set as the Document page_content.
field index_name: str = ''#
Name of Index inside Azure Cognitive Search service
field service_name: str = ''#
Name of Azure Cognitive Search service
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.ChatGPTPluginRetriever[source]#
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field bearer_token: str [Required]#
field filter: Optional[dict] = None#
field top_k: int = 3#
field url: str [Required]#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.ContextualCompressionRetriever[source]#
Retriever that wraps a base retriever and compresses the results.
field base_compressor: langchain.retrievers.document_compressors.base.BaseDocumentCompressor [Required]#
Compressor for compressing retrieved documents.
field base_retriever: langchain.schema.BaseRetriever [Required]#
Base Retriever to use for getting relevant documents.
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
Sequence of relevant documents
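Example (illustrative sketch; assumes an existing vector store, and uses LLMChainExtractor from the Document Compressors module as the compressor):
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor

compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,                 # trims each retrieved document down to the query-relevant parts
    base_retriever=vectorstore.as_retriever(),  # assumption: `vectorstore` is an existing VectorStore
)
docs = compression_retriever.get_relevant_documents("What did the president say about the economy?")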
class langchain.retrievers.DataberryRetriever(datastore_url: str, top_k: Optional[int] = None, api_key: Optional[str] = None)[source]#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
api_key: Optional[str]#
datastore_url: str#
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
top_k: Optional[int]#
class langchain.retrievers.ElasticSearchBM25Retriever(client: Any, index_name: str)[source]#
Wrapper around Elasticsearch using BM25 as a retrieval method.
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticSearchBM25Retriever.create() classmethod as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the "Deployments" page.
To obtain your Elastic Cloud password for the default "elastic" user:
Log in to the Elastic Cloud console at https://cloud.elastic.co
Go to "Security" > "Users"
Locate the "elastic" user and click "Edit"
Click "Reset password"
Follow the prompts to reset the password
The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
add_texts(texts: Iterable[str], refresh_indices: bool = True) β List[str][source]#
Run more texts through the pipeline and add them to the retriever.
Parameters
texts β Iterable of strings to add to the retriever.
refresh_indices β bool to refresh ElasticSearch indices
Returns
List of ids from adding the texts into the retriever.
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
classmethod create(elasticsearch_url: str, index_name: str, k1: float = 2.0, b: float = 0.75) β langchain.retrievers.elastic_search_bm25.ElasticSearchBM25Retriever[source]#
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
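Example (illustrative sketch; assumes a local, unauthenticated Elasticsearch instance on the default port):
from langchain.retrievers import ElasticSearchBM25Retriever

retriever = ElasticSearchBM25Retriever.create(
    elasticsearch_url="http://localhost:9200",  # assumption: local Elasticsearch instance
    index_name="langchain-bm25-index",          # placeholder index name
)
retriever.add_texts(["foo", "foo bar", "hello world"])  # index some documents
docs = retriever.get_relevant_documents("foo")          # BM25-ranked results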
pydantic model langchain.retrievers.KNNRetriever[source]#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
field index: Any = None#
field k: int = 4#
field relevancy_threshold: Optional[float] = None#
field texts: List[str] [Required]#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
classmethod from_texts(texts: List[str], embeddings: langchain.embeddings.base.Embeddings, **kwargs: Any) β langchain.retrievers.knn.KNNRetriever[source]#
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
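Example (illustrative sketch; assumes OPENAI_API_KEY is set for the embeddings):
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import KNNRetriever

retriever = KNNRetriever.from_texts(
    ["foo", "bar", "hello world"],  # texts to embed and index in memory
    OpenAIEmbeddings(),
)
docs = retriever.get_relevant_documents("foo")  # returns the k nearest texts as Documents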
class langchain.retrievers.MetalRetriever(client: Any, params: Optional[dict] = None)[source]#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.PineconeHybridSearchRetriever[source]#
field alpha: float = 0.5#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
field index: Any = None#
field sparse_encoder: Any = None#
field top_k: int = 4#
add_texts(texts: List[str], ids: Optional[List[str]] = None, metadatas: Optional[List[dict]] = None) β None[source]#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
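Example (illustrative sketch; `index` and `bm25_encoder` are assumed to be an existing pinecone.Index and a fitted sparse encoder created elsewhere):
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import PineconeHybridSearchRetriever

retriever = PineconeHybridSearchRetriever(
    embeddings=OpenAIEmbeddings(),  # dense embeddings
    sparse_encoder=bm25_encoder,    # assumption: fitted sparse encoder created elsewhere
    index=index,                    # assumption: existing Pinecone index
    alpha=0.5,                      # weighting between dense and sparse scores
    top_k=4,
)
retriever.add_texts(["foo", "bar"])
docs = retriever.get_relevant_documents("foo")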
pydantic model langchain.retrievers.PubMedRetriever[source]#
It is effectively a wrapper for PubMedAPIWrapper.
It wraps load() to get_relevant_documents().
It uses all PubMedAPIWrapper arguments without any change.
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.RemoteLangChainRetriever[source]#
field headers: Optional[dict] = None#
field input_key: str = 'message'#
field metadata_key: str = 'metadata'#
field page_content_key: str = 'page_content'#
field response_key: str = 'response'#
field url: str [Required]#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.SVMRetriever[source]#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
field index: Any = None#
field k: int = 4#
field relevancy_threshold: Optional[float] = None#
field texts: List[str] [Required]#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
classmethod from_texts(texts: List[str], embeddings: langchain.embeddings.base.Embeddings, **kwargs: Any) β langchain.retrievers.svm.SVMRetriever[source]#
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
pydantic model langchain.retrievers.SelfQueryRetriever[source]#
Retriever that wraps around a vector store and uses an LLM to generate
the vector store queries.
field llm_chain: langchain.chains.llm.LLMChain [Required]#
The LLMChain for generating the vector store queries.
field search_kwargs: dict [Optional]#
Keyword arguments to pass in to the vector store search.
field search_type: str = 'similarity'#
The search type to perform on the vector store.
field structured_query_translator: langchain.chains.query_constructor.ir.Visitor [Required]#
Translator for turning internal query language into vectorstore search params.
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
The underlying vector store from which documents will be retrieved.
field verbose: bool = False#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, document_contents: str, metadata_field_info: List[langchain.chains.query_constructor.schema.AttributeInfo], structured_query_translator: Optional[langchain.chains.query_constructor.ir.Visitor] = None, chain_kwargs: Optional[Dict] = None, enable_limit: bool = False, **kwargs: Any) β langchain.retrievers.self_query.base.SelfQueryRetriever[source]#
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
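Example (illustrative sketch; `vectorstore` is assumed to be an existing VectorStore of movie summaries, and AttributeInfo is imported from the module shown in the from_llm signature above):
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.llms import OpenAI
from langchain.retrievers import SelfQueryRetriever

metadata_field_info = [
    AttributeInfo(name="year", description="The year the movie was released", type="integer"),
    AttributeInfo(name="genre", description="The genre of the movie", type="string"),
]
retriever = SelfQueryRetriever.from_llm(
    llm=OpenAI(temperature=0),
    vectorstore=vectorstore,  # assumption: existing VectorStore
    document_contents="Brief summary of a movie",
    metadata_field_info=metadata_field_info,
)
docs = retriever.get_relevant_documents("a comedy released after 2000")  # the LLM turns this into a structured query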
pydantic model langchain.retrievers.TFIDFRetriever[source]#
field docs: List[langchain.schema.Document] [Required]#
field k: int = 4#
field tfidf_array: Any = None#
field vectorizer: Any = None#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
classmethod from_documents(documents: Iterable[langchain.schema.Document], *, tfidf_params: Optional[Dict[str, Any]] = None, **kwargs: Any) β langchain.retrievers.tfidf.TFIDFRetriever[source]#
classmethod from_texts(texts: Iterable[str], metadatas: Optional[Iterable[dict]] = None, tfidf_params: Optional[Dict[str, Any]] = None, **kwargs: Any) β langchain.retrievers.tfidf.TFIDFRetriever[source]#
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
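Example (illustrative sketch; the TF-IDF index is built in memory and is generally assumed to require scikit-learn):
from langchain.retrievers import TFIDFRetriever

retriever = TFIDFRetriever.from_texts(["foo", "bar", "hello world"])
docs = retriever.get_relevant_documents("foo")  # top-k documents by TF-IDF similarity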
pydantic model langchain.retrievers.TimeWeightedVectorStoreRetriever[source]#
Retriever combining embedding similarity with recency.
field decay_rate: float = 0.01#
The exponential decay factor used as (1.0-decay_rate)**(hrs_passed).
field default_salience: Optional[float] = None#
The salience to assign memories not retrieved from the vector store.
None assigns no salience to documents not fetched from the vector store.
field k: int = 4#
The maximum number of documents to retrieve in a given call.
field memory_stream: List[langchain.schema.Document] [Optional]#
The memory_stream of documents to search through.
field other_score_keys: List[str] = []#
Other keys in the metadata to factor into the score, e.g. "importance".
field search_kwargs: dict [Optional]#
Keyword arguments to pass to the vectorstore similarity search.
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
The vectorstore to store documents and determine salience.
async aadd_documents(documents: List[langchain.schema.Document], **kwargs: Any) β List[str][source]#
Add documents to vectorstore.
add_documents(documents: List[langchain.schema.Document], **kwargs: Any) β List[str][source]#
Add documents to vectorstore.
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Return documents that are relevant to the query.
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Return documents that are relevant to the query.
get_salient_docs(query: str) β Dict[int, Tuple[langchain.schema.Document, float]][source]#
Return documents that are salient to the query.
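Example (illustrative sketch; `vectorstore` is assumed to be an existing VectorStore created elsewhere):
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.schema import Document

retriever = TimeWeightedVectorStoreRetriever(
    vectorstore=vectorstore,  # assumption: existing VectorStore
    decay_rate=0.01,          # recency decays as (1.0 - decay_rate) ** hours_passed
    k=4,
)
retriever.add_documents([Document(page_content="hello world")])
docs = retriever.get_relevant_documents("hello")  # scores combine similarity and recency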
class langchain.retrievers.VespaRetriever(app: Vespa, body: Dict, content_field: str, metadata_fields: Optional[Sequence[str]] = None)[source]#
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
classmethod from_params(url: str, content_field: str, *, k: Optional[int] = None, metadata_fields: Union[Sequence[str], Literal['*']] = (), sources: Optional[Union[Sequence[str], Literal['*']]] = None, _filter: Optional[str] = None, yql: Optional[str] = None, **kwargs: Any) β langchain.retrievers.vespa_retriever.VespaRetriever[source]#
Instantiate retriever from params.
Parameters
url (str) β Vespa app URL.
content_field (str) β Field in results to return as Document page_content.
k (Optional[int]) β Number of Documents to return. Defaults to None.
metadata_fields (Sequence[str] or "*") β Fields in results to include in
document metadata. Defaults to empty tuple ().
sources (Sequence[str] or "*" or None) β Sources to retrieve
from. Defaults to None.
_filter (Optional[str]) β Document filter condition expressed in YQL.
Defaults to None.
yql (Optional[str]) β Full YQL query to be used. Should not be specified
if _filter or sources are specified. Defaults to None.
kwargs (Any) β Keyword arguments added to query body.
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents_with_filter(query: str, *, _filter: Optional[str] = None) β List[langchain.schema.Document][source]#
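Example (illustrative sketch; the URL and field names are placeholders for an existing Vespa application, generally assumed to require the pyvespa package):
from langchain.retrievers import VespaRetriever

retriever = VespaRetriever.from_params(
    "https://my-vespa-app.example.com",  # placeholder Vespa app URL
    "content",                           # field returned as Document.page_content
    k=5,
    metadata_fields=("title", "path"),   # placeholder metadata fields
)
docs = retriever.get_relevant_documents("what is vespa?")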
class langchain.retrievers.WeaviateHybridSearchRetriever(client: Any, index_name: str, text_key: str, alpha: float = 0.5, k: int = 4, attributes: Optional[List[str]] = None, create_schema_if_missing: bool = True)[source]#
class Config[source]#
Configuration for this pydantic object.
arbitrary_types_allowed = True#
extra = 'forbid'#
add_documents(docs: List[langchain.schema.Document], **kwargs: Any) β List[str][source]#
Upload documents to Weaviate.
async aget_relevant_documents(query: str, where_filter: Optional[Dict[str, object]] = None) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str, where_filter: Optional[Dict[str, object]] = None) β List[langchain.schema.Document][source]#
Look up similar documents in Weaviate.
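Example (illustrative sketch; assumes a locally running Weaviate instance and the weaviate-client package):
import weaviate
from langchain.retrievers import WeaviateHybridSearchRetriever

client = weaviate.Client(url="http://localhost:8080")  # assumption: local Weaviate instance
retriever = WeaviateHybridSearchRetriever(
    client=client,
    index_name="LangChain",  # placeholder class/index name
    text_key="text",         # property holding the document text
)
docs = retriever.get_relevant_documents("the eiffel tower")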
pydantic model langchain.retrievers.WikipediaRetriever[source]#
It is effectively a wrapper for WikipediaAPIWrapper.
It wraps load() to get_relevant_documents().
It uses all WikipediaAPIWrapper arguments without any change.
async aget_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
class langchain.retrievers.ZepRetriever(session_id: str, url: str, top_k: Optional[int] = None)[source]#
A Retriever implementation for the Zep long-term memory store. Search your
user's long-term chat history with Zep.
Note: You will need to provide the user's session_id to use this retriever.
More on Zep:
Zep provides long-term conversation storage for LLM apps. The server stores,
summarizes, embeds, indexes, and enriches conversational AI chat
histories, and exposes them via simple, low-latency APIs.
For server installation instructions, see:
https://getzep.github.io/deployment/quickstart/
async aget_relevant_documents(query: str, metadata: Optional[Dict] = None) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
get_relevant_documents(query: str, metadata: Optional[Dict] = None) β List[langchain.schema.Document][source]#
Get documents relevant for a query.
Parameters
query β string to find relevant documents for
Returns
List of relevant documents
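Example (illustrative sketch; the session id and server URL are placeholders for an existing Zep deployment):
from langchain.retrievers import ZepRetriever

retriever = ZepRetriever(
    session_id="user-123-session",  # placeholder: the user's chat session id
    url="http://localhost:8000",    # placeholder: Zep server URL
    top_k=5,
)
docs = retriever.get_relevant_documents("What did we decide about the trip?")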
Agents#
Interface for agents.
pydantic model langchain.agents.Agent[source]#
Class responsible for calling the language model and deciding the action.
This is driven by an LLMChain. The prompt in the LLMChain MUST include
a variable called "agent_scratchpad" where the agent can put its
intermediate work.
field allowed_tools: Optional[List[str]] = None#
field llm_chain: langchain.chains.llm.LLMChain [Required]#
field output_parser: langchain.agents.agent.AgentOutputParser [Required]#
async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) β Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]#
Given input, decide what to do.
Parameters
intermediate_steps β Steps the LLM has taken to date,
along with observations
callbacks β Callbacks to run.
**kwargs β User inputs.
Returns
Action specifying what tool to use.
abstract classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool]) β langchain.prompts.base.BasePromptTemplate[source]#
Create a prompt for this class.
dict(**kwargs: Any) β Dict[source]#
Return dictionary representation of agent.
classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, **kwargs: Any) β langchain.agents.agent.Agent[source]#
Construct an agent from an LLM and tools.
get_allowed_tools() β Optional[List[str]][source]#
get_full_inputs(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) β Dict[str, Any][source]#
Create the full inputs for the LLMChain from intermediate steps.
plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) β Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]#
Given input, decide what to do.
Parameters
intermediate_steps β Steps the LLM has taken to date,
along with observations
callbacks β Callbacks to run.
**kwargs β User inputs.
Returns
Action specifying what tool to use.
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) β langchain.schema.AgentFinish[source]#
Return response when agent has been stopped due to max iterations.
tool_run_logging_kwargs() β Dict[source]#
abstract property llm_prefix: str#
Prefix to append the LLM call with.
abstract property observation_prefix: str#
Prefix to append the observation with.
property return_values: List[str]#
Return values of the agent.
pydantic model langchain.agents.AgentExecutor[source]#
Consists of an agent using tools.
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_return_direct_tool Β» all fields
validate_tools Β» all fields
field agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]#
field early_stopping_method: str = 'force'#
field handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False#
field max_execution_time: Optional[float] = None#
field max_iterations: Optional[int] = 15#
field return_intermediate_steps: bool = False#
field tools: Sequence[BaseTool] [Required]#
classmethod from_agent_and_tools(agent: Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent], tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, **kwargs: Any) β langchain.agents.agent.AgentExecutor[source]#
Create from agent and tools.
lookup_tool(name: str) β langchain.tools.base.BaseTool[source]#
Lookup tool by name.
save(file_path: Union[pathlib.Path, str]) β None[source]#
Raise error - saving not supported for Agent Executors.
save_agent(file_path: Union[pathlib.Path, str]) β None[source]#
Save the underlying agent.
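Example (illustrative sketch; assumes OPENAI_API_KEY and SERPAPI_API_KEY are set, and uses the Search tool pattern shown in the MRKL example further down):
from langchain import OpenAI, SerpAPIWrapper
from langchain.agents import AgentExecutor, Tool, ZeroShotAgent

search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for answering questions about current events",
    )
]
agent = ZeroShotAgent.from_llm_and_tools(llm=OpenAI(temperature=0), tools=tools)
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    max_iterations=5,  # stop the loop after 5 agent steps
    verbose=True,
)
agent_executor.run("Who won the most recent FIFA World Cup?")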
pydantic model langchain.agents.AgentOutputParser[source]#
abstract parse(text: str) β Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]#
Parse text into agent action/finish.
class langchain.agents.AgentType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]#
CHAT_CONVERSATIONAL_REACT_DESCRIPTION = 'chat-conversational-react-description'#
CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'chat-zero-shot-react-description'#
CONVERSATIONAL_REACT_DESCRIPTION = 'conversational-react-description'#
REACT_DOCSTORE = 'react-docstore'#
SELF_ASK_WITH_SEARCH = 'self-ask-with-search'#
STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'structured-chat-zero-shot-react-description'#
ZERO_SHOT_REACT_DESCRIPTION = 'zero-shot-react-description'#
pydantic model langchain.agents.BaseMultiActionAgent[source]#
Base Agent class.
abstract async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) β Union[List[langchain.schema.AgentAction], langchain.schema.AgentFinish][source]#
Given input, decide what to do.
Parameters
intermediate_steps β Steps the LLM has taken to date,
along with observations
callbacks β Callbacks to run.
**kwargs β User inputs.
Returns
Actions specifying what tool to use.
dict(**kwargs: Any) β Dict[source]#
Return dictionary representation of agent.
get_allowed_tools() β Optional[List[str]][source]#
abstract plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) β Union[List[langchain.schema.AgentAction], langchain.schema.AgentFinish][source]#
Given input, decide what to do.
Parameters
intermediate_steps β Steps the LLM has taken to date,
along with observations
callbacks β Callbacks to run.
**kwargs β User inputs.
Returns
Actions specifying what tool to use.
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) β langchain.schema.AgentFinish[source]#
Return response when agent has been stopped due to max iterations.
save(file_path: Union[pathlib.Path, str]) β None[source]#
Save the agent.
Parameters
file_path β Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs() β Dict[source]#
property return_values: List[str]#
Return values of the agent.
pydantic model langchain.agents.BaseSingleActionAgent[source]#
Base Agent class.
abstract async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) β Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]#
Given input, decide what to do.
Parameters
intermediate_steps β Steps the LLM has taken to date,
along with observations
callbacks β Callbacks to run.
**kwargs β User inputs.
Returns
Action specifying what tool to use.
dict(**kwargs: Any) β Dict[source]#
Return dictionary representation of agent.
classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, **kwargs: Any) β langchain.agents.agent.BaseSingleActionAgent[source]#
get_allowed_tools() β Optional[List[str]][source]#
abstract plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) β Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]#
Given input, decide what to do.
Parameters
intermediate_steps β Steps the LLM has taken to date,
along with observations
callbacks β Callbacks to run.
**kwargs β User inputs.
Returns
Action specifying what tool to use.
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) β langchain.schema.AgentFinish[source]#
Return response when agent has been stopped due to max iterations.
save(file_path: Union[pathlib.Path, str]) β None[source]#
Save the agent.
Parameters
file_path β Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs() β Dict[source]#
property return_values: List[str]#
Return values of the agent.
pydantic model langchain.agents.ConversationalAgent[source]#
An agent designed to hold a conversation in addition to using tools.
field ai_prefix: str = 'AI'#
field output_parser: langchain.agents.agent.AgentOutputParser [Optional]#
classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix: str = 'Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None) β langchain.prompts.prompt.PromptTemplate[source]#
Create prompt in the style of the zero shot agent.
Parameters
tools β List of tools the agent will have access to, used to format the
prompt.
prefix β String to put before the list of tools.
suffix β String to put after the list of tools.
ai_prefix β String to use before AI output.
human_prefix β String to use before human output.
input_variables β List of input variables the final prompt will expect.
Returns
A PromptTemplate with the template assembled from the pieces here.
classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix: str = 'Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None, **kwargs: Any) β langchain.agents.agent.Agent[source]#
Construct an agent from an LLM and tools.
property llm_prefix: str#
Prefix to append the llm call with.
property observation_prefix: str#
Prefix to append the observation with.
pydantic model langchain.agents.ConversationalChatAgent[source]#
An agent designed to hold a conversation in addition to using tools.
field output_parser: langchain.agents.agent.AgentOutputParser [Optional]#
field template_tool_response: str = "TOOL RESPONSE: \n---------------------\n{observation}\n\nUSER'S INPUT\n--------------------\n\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else."#
classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], system_message: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}", input_variables: Optional[List[str]] = None, output_parser: Optional[langchain.schema.BaseOutputParser] = None) β langchain.prompts.base.BasePromptTemplate[source]#
Create a prompt for this class.
classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, system_message: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}", input_variables: Optional[List[str]] = None, **kwargs: Any) β langchain.agents.agent.Agent[source]#
Construct an agent from an LLM and tools.
property llm_prefix: str#
Prefix to append the llm call with.
property observation_prefix: str#
Prefix to append the observation with.
pydantic model langchain.agents.LLMSingleActionAgent[source]#
field llm_chain: langchain.chains.llm.LLMChain [Required]#
field output_parser: langchain.agents.agent.AgentOutputParser [Required]#
field stop: List[str] [Required]#
async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) β Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]#
Given input, decide what to do.
Parameters
intermediate_steps β Steps the LLM has taken to date,
along with observations
callbacks β Callbacks to run.
**kwargs β User inputs.
Returns
Action specifying what tool to use.
dict(**kwargs: Any) β Dict[source]#
Return dictionary representation of agent.
plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) β Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]#
Given input, decide what to do.
Parameters
intermediate_steps β Steps the LLM has taken to date,
along with observations
callbacks β Callbacks to run.
**kwargs β User inputs.
Returns
Action specifying what tool to use.
tool_run_logging_kwargs() β Dict[source]#
pydantic model langchain.agents.MRKLChain[source]#
Chain that implements the MRKL system.
Example
from langchain import OpenAI, MRKLChain
from langchain.chains.mrkl.base import ChainConfig
llm = OpenAI(temperature=0)
chains = [...]
mrkl = MRKLChain.from_chains(llm=llm, chains=chains)
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_return_direct_tool Β» all fields
validate_tools Β» all fields
classmethod from_chains(llm: langchain.base_language.BaseLanguageModel, chains: List[langchain.agents.mrkl.base.ChainConfig], **kwargs: Any) β langchain.agents.agent.AgentExecutor[source]#
User friendly way to initialize the MRKL chain.
This is intended to be an easy way to get up and running with the
MRKL chain.
Parameters
llm β The LLM to use as the agent LLM.
chains β The chains the MRKL system has access to.
**kwargs β parameters to be passed to initialization.
Returns
An initialized MRKL chain.
Example
from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, MRKLChain
from langchain.chains.mrkl.base import ChainConfig
llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm)
chains = [
ChainConfig(
action_name = "Search",
action=search.search,
action_description="useful for searching"
),
ChainConfig(
action_name="Calculator",
action=llm_math_chain.run,
action_description="useful for doing math"
)
]
mrkl = MRKLChain.from_chains(llm, chains)
pydantic model langchain.agents.ReActChain[source]#
Chain that implements the ReAct paper.
Example
from langchain import ReActChain, OpenAI
react = ReActChain(llm=OpenAI())
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_return_direct_tool Β» all fields
validate_tools Β» all fields
pydantic model langchain.agents.ReActTextWorldAgent[source]#
Agent for the ReAct TextWorld chain.
classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool]) β langchain.prompts.base.BasePromptTemplate[source]#
Return default prompt.
pydantic model langchain.agents.SelfAskWithSearchChain[source]#
Chain that does self ask with search.
Example
from langchain import SelfAskWithSearchChain, OpenAI, GoogleSerperAPIWrapper
search_chain = GoogleSerperAPIWrapper()
self_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain)
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_return_direct_tool Β» all fields
validate_tools Β» all fields
pydantic model langchain.agents.StructuredChatAgent[source]#
field output_parser: langchain.agents.agent.AgentOutputParser [Optional]#
classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix: str = 'Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:', human_message_template: str = '{input}\n\n{agent_scratchpad}', format_instructions: str = 'Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{{{\nΒ "action": $TOOL_NAME,\nΒ "action_input": $INPUT\n}}}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{{{\nΒ "action": "Final Answer",\nΒ "action_input": "Final response to human"\n}}}}\n```', input_variables: Optional[List[str]] = None, memory_prompts: Optional[List[langchain.prompts.base.BasePromptTemplate]] = None) β langchain.prompts.base.BasePromptTemplate[source]#
Create a prompt for this class.
classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix: str = 'Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:', human_message_template: str = '{input}\n\n{agent_scratchpad}', format_instructions: str = 'Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{{{\nΒ "action": $TOOL_NAME,\nΒ "action_input": $INPUT\n}}}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{{{\nΒ "action": "Final Answer",\nΒ "action_input": "Final response to human"\n}}}}\n```', input_variables: Optional[List[str]] = None, memory_prompts: Optional[List[langchain.prompts.base.BasePromptTemplate]] = None, **kwargs: Any) β langchain.agents.agent.Agent[source]#
Construct an agent from an LLM and tools.
property llm_prefix: str#
Prefix to append the llm call with.
property observation_prefix: str#
Prefix to append the observation with.
pydantic model langchain.agents.Tool[source]#
Tool that takes in function or coroutine directly.
field coroutine: Optional[Callable[[...], Awaitable[str]]] = None#
The asynchronous version of the function.
field description: str = ''#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field func: Callable[[...], str] [Required]#
The function to run when the tool is called.
classmethod from_function(func: Callable, name: str, description: str, return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, **kwargs: Any) β langchain.tools.base.Tool[source]#
Initialize tool from a function.
property args: dict#
The tool's input arguments.
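Example (illustrative sketch; the function and tool name are hypothetical):
from langchain.agents import Tool

def get_word_length(word: str) -> str:
    # return the length of a word as a string, since tools return strings
    return str(len(word))

word_length = Tool.from_function(
    func=get_word_length,
    name="WordLength",  # hypothetical tool name
    description="useful for counting the number of characters in a word",
)
print(word_length.run("hello"))  # tools are invoked with run()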
pydantic model langchain.agents.ZeroShotAgent[source]#
Agent for the MRKL chain.
field output_parser: langchain.agents.agent.AgentOutputParser [Optional]#
classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None) β langchain.prompts.prompt.PromptTemplate[source]#
Create prompt in the style of the zero shot agent.
Parameters
tools β List of tools the agent will have access to, used to format the
prompt.
prefix β String to put before the list of tools.
suffix β String to put after the list of tools.
input_variables β List of input variables the final prompt will expect.
Returns
A PromptTemplate with the template assembled from the pieces here.
classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, **kwargs: Any) β langchain.agents.agent.Agent[source]#
Construct an agent from an LLM and tools.
property llm_prefix: str#
Prefix to append the llm call with.
property observation_prefix: str#
Prefix to append the observation with.
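Example (illustrative sketch of wiring the pieces manually; `tools` is assumed to be a list of BaseTool instances defined elsewhere):
from langchain.agents import AgentExecutor, ZeroShotAgent
from langchain.chains import LLMChain
from langchain.llms import OpenAI

prompt = ZeroShotAgent.create_prompt(tools)  # assumption: `tools` defined elsewhere
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=[t.name for t in tools])
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)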
langchain.agents.create_csv_agent(llm: langchain.base_language.BaseLanguageModel, path: Union[str, List[str]], pandas_kwargs: Optional[dict] = None, **kwargs: Any) β langchain.agents.agent.AgentExecutor[source]#
Create a csv agent by loading the file into a dataframe and using the pandas agent.
langchain.agents.create_json_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.json.toolkit.JsonToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\n', suffix: str = 'Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) β langchain.agents.agent.AgentExecutor[source]#
Construct a json agent from an LLM and tools.
langchain.agents.create_openapi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = "You are an agent designed to answer questions by making web requests to an API given the openapi spec.\n\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\nOnly use information provided by the tools to construct your response.\n\nFirst, find the base URL needed to make the request.\n\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\n\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\n\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\n", suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should explore the spec to find the base url for the API.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, return_intermediate_steps: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) β langchain.agents.agent.AgentExecutor[source]#
Construct an OpenAPI agent from an LLM and tools.
langchain.agents.create_pandas_dataframe_agent(llm: langchain.base_language.BaseLanguageModel, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: Optional[str] = None, suffix: Optional[str] = None, input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, include_df_in_prompt: Optional[bool] = True, **kwargs: Dict[str, Any]) β langchain.agents.agent.AgentExecutor[source]#
Construct a pandas agent from an LLM and dataframe.
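Example (illustrative sketch; the CSV path is a placeholder):
import pandas as pd
from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI

df = pd.read_csv("titanic.csv")  # placeholder CSV file
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("How many rows are in the dataframe?")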
langchain.agents.create_pbi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to help users interact with a PowerBI Dataset.\n\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readible format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', examples: Optional[str] = None, input_variables: Optional[List[str]] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) β langchain.agents.agent.AgentExecutor[source]#
Construct a pbi agent from an LLM and tools.
langchain.agents.create_pbi_chat_agent(llm: langchain.chat_models.base.BaseChatModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Assistant is a large language model built to help users interact with a PowerBI Dataset.\n\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readible format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}\n", examples: Optional[str] = None, input_variables: Optional[List[str]] = None, memory: Optional[langchain.memory.chat_memory.BaseChatMemory] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) β langchain.agents.agent.AgentExecutor[source]#
Construct a pbi agent from a Chat LLM and tools.
If you supply only a toolkit and no powerbi dataset, the same LLM is used for both.
langchain.agents.create_spark_dataframe_agent(llm: langchain.llms.base.BaseLLM, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = '\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:', suffix: str = '\nThis is the result of `print(df.first())`:\n{df}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}', input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) β langchain.agents.agent.AgentExecutor[source]#
Construct a spark agent from an LLM and dataframe.
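Example (an illustrative sketch; assumes PySpark is installed, and the CSV path is a placeholder for your own data):
.. code-block:: python
    from pyspark.sql import SparkSession
    from langchain.agents import create_spark_dataframe_agent
    from langchain.llms import OpenAI

    spark = SparkSession.builder.getOrCreate()
    # "titanic.csv" is a placeholder file path
    df = spark.read.csv("titanic.csv", header=True, inferSchema=True)
    agent = create_spark_dataframe_agent(llm=OpenAI(temperature=0), df=df, verbose=True)
    agent.run("How many rows are in the dataframe?")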
langchain.agents.create_spark_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with Spark SQL.\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) β langchain.agents.agent.AgentExecutor[source]#
Construct a Spark SQL agent from an LLM and tools.
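Example (an illustrative sketch; the schema name is a placeholder for an existing Spark SQL schema, and the toolkit wiring follows the signatures documented here):
.. code-block:: python
    from langchain.agents import create_spark_sql_agent
    from langchain.agents.agent_toolkits import SparkSQLToolkit
    from langchain.chat_models import ChatOpenAI
    from langchain.utilities.spark_sql import SparkSQL

    # "langchain_example" is a placeholder schema that must already exist
    spark_sql = SparkSQL(schema="langchain_example")
    llm = ChatOpenAI(temperature=0)
    toolkit = SparkSQLToolkit(db=spark_sql, llm=llm)
    agent_executor = create_spark_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
    agent_executor.run("List the tables in the schema")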
langchain.agents.create_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.Β Then I should query the schema of the most relevant tables.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) β langchain.agents.agent.AgentExecutor[source]#
Construct a SQL agent from an LLM and tools.
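Example (an illustrative sketch; the SQLite URI is a placeholder for your own database, and an OpenAI API key is assumed to be configured):
.. code-block:: python
    from langchain.agents import create_sql_agent
    from langchain.agents.agent_toolkits import SQLDatabaseToolkit
    from langchain.llms import OpenAI
    from langchain.sql_database import SQLDatabase

    db = SQLDatabase.from_uri("sqlite:///Chinook.db")  # placeholder database URI
    llm = OpenAI(temperature=0)
    toolkit = SQLDatabaseToolkit(db=db, llm=llm)
    agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
    agent_executor.run("How many tables are in the database?")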
langchain.agents.create_vectorstore_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) β langchain.agents.agent.AgentExecutor[source]#
Construct a vectorstore agent from an LLM and tools.
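Example (an illustrative sketch; assumes the chromadb and openai packages are installed, and the sample text is a placeholder for your own documents):
.. code-block:: python
    from langchain.agents import create_vectorstore_agent
    from langchain.agents.agent_toolkits import VectorStoreInfo, VectorStoreToolkit
    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.llms import OpenAI
    from langchain.vectorstores import Chroma

    llm = OpenAI(temperature=0)
    # placeholder text; in practice load and split your own documents
    vectorstore = Chroma.from_texts(
        ["The state of the union address discussed the economy."],
        OpenAIEmbeddings(),
    )
    vectorstore_info = VectorStoreInfo(
        name="state_of_union",
        description="the most recent state of the union address",
        vectorstore=vectorstore,
    )
    toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info)
    agent_executor = create_vectorstore_agent(llm=llm, toolkit=toolkit, verbose=True)
    agent_executor.run("What did the president say about the economy?")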
langchain.agents.create_vectorstore_router_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) β langchain.agents.agent.AgentExecutor[source]#
Construct a vectorstore router agent from an LLM and tools.
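Example (an illustrative sketch; the two stores and their contents are placeholders standing in for separate document sources):
.. code-block:: python
    from langchain.agents import create_vectorstore_router_agent
    from langchain.agents.agent_toolkits import VectorStoreInfo, VectorStoreRouterToolkit
    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.llms import OpenAI
    from langchain.vectorstores import Chroma

    llm = OpenAI(temperature=0)
    embeddings = OpenAIEmbeddings()
    # each vectorstore holds placeholder text for one source
    sou_store = Chroma.from_texts(["placeholder state of the union text"], embeddings, collection_name="sou")
    ruff_store = Chroma.from_texts(["placeholder ruff linter documentation"], embeddings, collection_name="ruff")
    toolkit = VectorStoreRouterToolkit(
        vectorstores=[
            VectorStoreInfo(name="state_of_union", description="the state of the union address", vectorstore=sou_store),
            VectorStoreInfo(name="ruff", description="documentation for the ruff linter", vectorstore=ruff_store),
        ],
        llm=llm,
    )
    agent_executor = create_vectorstore_router_agent(llm=llm, toolkit=toolkit, verbose=True)
    agent_executor.run("What did the president say about the economy?")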
langchain.agents.get_all_tool_names() → List[str][source]#
Get a list of all possible tool names.
langchain.agents.initialize_agent(tools: Sequence[langchain.tools.base.BaseTool], llm: langchain.base_language.BaseLanguageModel, agent: Optional[langchain.agents.agent_types.AgentType] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, agent_path: Optional[str] = None, agent_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]#
Load an agent executor given tools and LLM.
Parameters
tools – List of tools this agent has access to.
llm – Language model to use as the agent.
agent – Agent type to use. If None and agent_path is also None, will default to
AgentType.ZERO_SHOT_REACT_DESCRIPTION.
callback_manager – CallbackManager to use. Global callback manager is used if
not provided. Defaults to None.
agent_path – Path to serialized agent to use.
agent_kwargs – Additional keyword arguments to pass to the underlying agent.
**kwargs – Additional keyword arguments passed to the agent executor.
Returns
An agent executor
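Example (an illustrative sketch; assumes an OpenAI API key is configured):
.. code-block:: python
    from langchain.agents import AgentType, initialize_agent, load_tools
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    tools = load_tools(["llm-math"], llm=llm)  # built-in calculator tool
    agent = initialize_agent(
        tools,
        llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True,
    )
    agent.run("What is 13 raised to the 0.5 power?")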
langchain.agents.load_agent(path: Union[str, pathlib.Path], **kwargs: Any) → langchain.agents.agent.BaseSingleActionAgent[source]#
Unified method for loading an agent from LangChainHub or the local filesystem.
langchain.agents.load_huggingface_tool(task_or_repo_id: str, model_repo_id: Optional[str] = None, token: Optional[str] = None, remote: bool = False, **kwargs: Any) → langchain.tools.base.BaseTool[source]#
langchain.agents.load_tools(tool_names: List[str], llm: Optional[langchain.base_language.BaseLanguageModel] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → List[langchain.tools.base.BaseTool][source]#
Load tools based on their name.
Parameters
tool_names – names of the tools to load.
llm – Optional language model, may be needed to initialize certain tools.
callbacks – Optional callback manager or list of callback handlers.
If not provided, default global callback manager will be used.
Returns
List of tools.
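Example (an illustrative sketch; valid names can be checked against get_all_tool_names()):
.. code-block:: python
    from langchain.agents import load_tools
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    # "llm-math" needs an LLM to be supplied; "python_repl" does not
    tools = load_tools(["python_repl", "llm-math"], llm=llm)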
langchain.agents.tool(*args: Union[str, Callable], return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, infer_schema: bool = True) → Callable[source]#
Make tools out of functions; can be used with or without arguments.
Parameters
*args – The arguments to the tool.
return_direct – Whether to return directly from the tool rather
than continuing the agent loop.
args_schema – Optional argument schema for the user to specify.
infer_schema – Whether to infer the schema of the arguments from
the function's signature. This also makes the resultant tool
accept a dictionary input to its run() function.
Requires:
Function must be of type (str) -> str
Function must have a docstring
Examples
@tool
def search_api(query: str) -> str:
# Searches the API for the query.
return
@tool("search", return_direct=True)
def search_api(query: str) -> str:
# Searches the API for the query.
return
Memory
Memory#
class langchain.memory.CassandraChatMessageHistory(contact_points: List[str], session_id: str, port: int = 9042, username: str = 'cassandra', password: str = 'cassandra', keyspace_name: str = 'chat_history', table_name: str = 'message_store')[source]#
Chat message history that stores history in Cassandra.
Parameters
contact_points – list of IPs to connect to the Cassandra cluster
session_id – arbitrary key that is used to store the messages
of a single chat session.
port – port to connect to the Cassandra cluster
username – username to connect to the Cassandra cluster
password – password to connect to the Cassandra cluster
keyspace_name – name of the keyspace to use
table_name – name of the table to use
add_message(message: langchain.schema.BaseMessage) → None[source]#
Append the message to the record in Cassandra
clear() → None[source]#
Clear session memory from Cassandra
property messages: List[langchain.schema.BaseMessage]#
Retrieve the messages from Cassandra
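Example (an illustrative sketch; assumes a Cassandra node reachable at the placeholder address with the default credentials above):
.. code-block:: python
    from langchain.memory import CassandraChatMessageHistory
    from langchain.schema import HumanMessage

    history = CassandraChatMessageHistory(
        contact_points=["127.0.0.1"],  # placeholder contact point
        session_id="my-session",
    )
    history.add_message(HumanMessage(content="hello"))
    print(history.messages)
    history.clear()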
pydantic model langchain.memory.ChatMessageHistory[source]#
field messages: List[langchain.schema.BaseMessage] = []#
add_message(message: langchain.schema.BaseMessage) → None[source]#
Add a self-created message to the store
clear() → None[source]#
Remove all messages from the store
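Example (an illustrative sketch of the in-memory store):
.. code-block:: python
    from langchain.memory import ChatMessageHistory
    from langchain.schema import AIMessage, HumanMessage

    history = ChatMessageHistory()
    history.add_message(HumanMessage(content="hi!"))
    history.add_message(AIMessage(content="hello, how can I help?"))
    print(history.messages)  # [HumanMessage(...), AIMessage(...)]
    history.clear()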
pydantic model langchain.memory.CombinedMemory[source]#
Class for combining multiple memories' data together.
Validators
check_input_key Β» memories
check_repeated_memory_variable Β» memories
field memories: List[langchain.schema.BaseMemory] [Required]#
For tracking all the memories that should be accessed.
clear() → None[source]#
Clear context from this session for every memory.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]#
Load all vars from sub-memories.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save context from this session for every memory.
property memory_variables: List[str]#
All the memory variables that this instance provides.
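Example (an illustrative sketch; the memory keys are arbitrary but must differ between sub-memories, and an OpenAI API key is assumed for the summary memory):
.. code-block:: python
    from langchain.llms import OpenAI
    from langchain.memory import CombinedMemory, ConversationBufferMemory, ConversationSummaryMemory

    conv_memory = ConversationBufferMemory(memory_key="chat_history_lines", input_key="input")
    summary_memory = ConversationSummaryMemory(llm=OpenAI(), memory_key="history", input_key="input")
    memory = CombinedMemory(memories=[conv_memory, summary_memory])
    memory.save_context({"input": "hi"}, {"output": "hello"})
    print(memory.load_memory_variables({}))  # returns both memory keys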
pydantic model langchain.memory.ConversationBufferMemory[source]#
Buffer for storing conversation memory.
field ai_prefix: str = 'AI'#
field human_prefix: str = 'Human'#
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]#
Return history buffer.
property buffer: Any#
String buffer of memory.
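Example (an illustrative sketch):
.. code-block:: python
    from langchain.memory import ConversationBufferMemory

    memory = ConversationBufferMemory()
    memory.save_context({"input": "hi"}, {"output": "whats up"})
    print(memory.load_memory_variables({}))  # {'history': 'Human: hi\nAI: whats up'}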
pydantic model langchain.memory.ConversationBufferWindowMemory[source]#
Buffer for storing conversation memory.
field ai_prefix: str = 'AI'#
field human_prefix: str = 'Human'#
field k: int = 5#
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]#
Return history buffer.
property buffer: List[langchain.schema.BaseMessage]#
String buffer of memory.
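Example (an illustrative sketch; k=1 keeps only the most recent exchange in the window):
.. code-block:: python
    from langchain.memory import ConversationBufferWindowMemory

    memory = ConversationBufferWindowMemory(k=1)
    memory.save_context({"input": "hi"}, {"output": "whats up"})
    memory.save_context({"input": "not much"}, {"output": "cool"})
    print(memory.load_memory_variables({}))  # only the last human/AI turn is returned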
pydantic model langchain.memory.ConversationEntityMemory[source]#
Entity extractor & summarizer to memory.
field ai_prefix: str = 'AI'#
field chat_history_key: str = 'history'#
field entity_cache: List[str] = []#