Dataset columns: id (string, 14-16 chars), text (string, 45-2.05k chars), source (string, 53-111 chars).
425acceea52e-16
cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\n', suffix: str = 'Begin!"\n\nQuestion: {input}\nThought: I
https://langchain.readthedocs.io/en/latest/reference/modules/agents.html
425acceea52e-17
= 'Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, verbose: bool = False, **kwargs: Any) β†’ langchain.agents.agent.AgentExecutor[source]#
https://langchain.readthedocs.io/en/latest/reference/modules/agents.html
425acceea52e-18
Construct a json agent from an LLM and tools.
https://langchain.readthedocs.io/en/latest/reference/modules/agents.html
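As a usage illustration (not from the page above): a minimal sketch of wiring create_json_agent to a toolkit, assuming the OpenAI wrapper is available and that JsonSpec and JsonToolkit are importable from langchain.tools.json.tool and langchain.agents.agent_toolkits, as in contemporaneous LangChain releases; the spec file path is hypothetical:

    import json

    from langchain.agents import create_json_agent
    from langchain.agents.agent_toolkits import JsonToolkit
    from langchain.llms import OpenAI
    from langchain.tools.json.tool import JsonSpec

    # Load a (hypothetical) JSON/OpenAPI spec from disk and wrap it for the agent.
    with open("api_spec.json") as f:
        data = json.load(f)
    spec = JsonSpec(dict_=data, max_value_length=4000)
    toolkit = JsonToolkit(spec=spec)

    # Per the default prefix, the agent begins by listing the keys at the root ("data").
    agent = create_json_agent(llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True)
    agent.run("What keys exist at the top level of the JSON?")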
425acceea52e-19
langchain.agents.create_openapi_agent(llm: langchain.llms.base.BaseLLM, toolkit: langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = "You are an agent designed to answer questions by making web requests to an API given the openapi spec.\n\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\nOnly use information provided by the tools to construct your response.\n\nFirst, find the base URL needed to make the request.\n\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\n\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\n\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set
https://langchain.readthedocs.io/en/latest/reference/modules/agents.html
425acceea52e-20
which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\n", suffix: str = 'Begin!"\n\nQuestion: {input}\nThought: I should explore the spec to find the base url for the API.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, verbose: bool = False, **kwargs: Any) β†’
https://langchain.readthedocs.io/en/latest/reference/modules/agents.html
425acceea52e-21
verbose: bool = False, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]#
https://langchain.readthedocs.io/en/latest/reference/modules/agents.html
425acceea52e-22
Construct an openapi agent from an LLM and tools. langchain.agents.create_pandas_dataframe_agent(llm: langchain.llms.base.BaseLLM, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = '\nYou are working with a pandas dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:', suffix: str = '\nThis is the result of `print(df.head())`:\n{df}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}', input_variables: Optional[List[str]] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]# Construct a pandas agent from an LLM and dataframe.
https://langchain.readthedocs.io/en/latest/reference/modules/agents.html
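A minimal sketch of create_pandas_dataframe_agent on a toy dataframe (assumes the OpenAI LLM wrapper; the dataframe contents are invented for illustration):

    import pandas as pd

    from langchain.agents import create_pandas_dataframe_agent
    from langchain.llms import OpenAI

    # Toy data; in practice this would be your own dataframe, exposed to the agent as `df`.
    df = pd.DataFrame({"city": ["Paris", "Lima", "Oslo"], "population_m": [2.1, 9.7, 0.7]})

    agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
    agent.run("Which city has the largest population?")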
425acceea52e-23
langchain.agents.create_sql_agent(llm: langchain.llms.base.BaseLLM, toolkit: langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for a the few relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO
https://langchain.readthedocs.io/en/latest/reference/modules/agents.html
425acceea52e-24
a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, verbose: bool = False, **kwargs: Any) β†’ langchain.agents.agent.AgentExecutor[source]#
https://langchain.readthedocs.io/en/latest/reference/modules/agents.html
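A minimal sketch of create_sql_agent against a local SQLite database (assumes SQLDatabase from langchain.sql_database and a hypothetical example.db; depending on the LangChain version, SQLDatabaseToolkit may also accept an llm argument):

    from langchain.agents import create_sql_agent
    from langchain.agents.agent_toolkits import SQLDatabaseToolkit
    from langchain.llms import OpenAI
    from langchain.sql_database import SQLDatabase

    # Hypothetical SQLite file; any SQLAlchemy-compatible URI works here.
    db = SQLDatabase.from_uri("sqlite:///example.db")
    toolkit = SQLDatabaseToolkit(db=db)

    # top_k caps how many rows the generated queries ask for (default 10).
    agent = create_sql_agent(llm=OpenAI(temperature=0), toolkit=toolkit, top_k=5, verbose=True)
    agent.run("How many tables are there, and what are their names?")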
425acceea52e-25
Construct a sql agent from an LLM and tools. langchain.agents.create_vectorstore_agent(llm: langchain.llms.base.BaseLLM, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose: bool = False, **kwargs: Any) β†’ langchain.agents.agent.AgentExecutor[source]# Construct a vectorstore agent from an LLM and tools. langchain.agents.create_vectorstore_router_agent(llm: langchain.llms.base.BaseLLM, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose: bool = False, **kwargs: Any) β†’ langchain.agents.agent.AgentExecutor[source]# Construct a vectorstore router agent from an LLM and tools. langchain.agents.get_all_tool_names() β†’ List[str][source]# Get a list of all possible tool names.
https://langchain.readthedocs.io/en/latest/reference/modules/agents.html
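A minimal sketch of create_vectorstore_agent, assuming FAISS and OpenAI embeddings are installed and that VectorStoreToolkit and VectorStoreInfo are importable from langchain.agents.agent_toolkits (as the signature above suggests); the indexed text is invented:

    from langchain.agents import create_vectorstore_agent
    from langchain.agents.agent_toolkits import VectorStoreInfo, VectorStoreToolkit
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.llms import OpenAI
    from langchain.vectorstores import FAISS

    # Build a tiny in-memory index purely for illustration.
    store = FAISS.from_texts(
        ["LangChain provides agents, chains, and prompt utilities."],
        OpenAIEmbeddings(),
    )
    info = VectorStoreInfo(name="docs", description="project documentation", vectorstore=store)
    toolkit = VectorStoreToolkit(vectorstore_info=info)

    agent = create_vectorstore_agent(llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True)
    agent.run("What does LangChain provide?")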
425acceea52e-26
Get a list of all possible tool names. langchain.agents.initialize_agent(tools: Sequence[langchain.tools.base.BaseTool], llm: langchain.llms.base.BaseLLM, agent: Optional[str] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, agent_path: Optional[str] = None, agent_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]# Load an agent executor given tools and LLM. Parameters tools – List of tools this agent has access to. llm – Language model to use as the agent. agent – A string that specifies the agent type to use. Valid options are: zero-shot-react-description, react-docstore, self-ask-with-search, conversational-react-description, chat-zero-shot-react-description, chat-conversational-react-description. If None and agent_path is also None, will default to zero-shot-react-description. callback_manager – CallbackManager to use. Global callback manager is used if not provided. Defaults to None. agent_path – Path to serialized agent to use. agent_kwargs – Additional keyword arguments to pass to the underlying agent. **kwargs – Additional keyword arguments passed to the agent executor. Returns An agent executor. langchain.agents.load_agent(path: Union[str, pathlib.Path], **kwargs: Any) → langchain.agents.agent.Agent[source]# Unified method for loading an agent from LangChainHub or the local filesystem. langchain.agents.load_tools(tool_names: List[str], llm: Optional[langchain.llms.base.BaseLLM] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, **kwargs: Any) → List[langchain.tools.base.BaseTool][source]# Load tools based on their name. Parameters
https://langchain.readthedocs.io/en/latest/reference/modules/agents.html
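A minimal sketch of initialize_agent together with load_tools, using one of the agent-type strings listed above ("llm-math" is a commonly available tool name, but check get_all_tool_names() on your version):

    from langchain.agents import initialize_agent, load_tools
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)

    # Some tools (like the math tool) need the LLM to initialize, hence llm= here.
    tools = load_tools(["llm-math"], llm=llm)

    agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
    agent.run("What is 13 raised to the 0.5 power?")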
425acceea52e-27
Load tools based on their name. Parameters tool_names – names of tools to load. llm – Optional language model, may be needed to initialize certain tools. callback_manager – Optional callback manager. If not provided, the default global callback manager will be used. Returns List of tools. langchain.agents.tool(*args: Union[str, Callable], return_direct: bool = False) → Callable[source]# Make tools out of functions; can be used with or without arguments. Requires: the function must be of type (str) -> str and must have a docstring. Examples: @tool def search_api(query: str) -> str: # Searches the API for the query. return @tool("search", return_direct=True) def search_api(query: str) -> str: # Searches the API for the query. return
https://langchain.readthedocs.io/en/latest/reference/modules/agents.html
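A sketch of the @tool decorator usage described above; the decorated function needs a docstring (it becomes the tool description), and the return values here are stubs:

    from langchain.agents import tool

    @tool
    def search_api(query: str) -> str:
        """Searches the API for the query."""
        return f"stub results for: {query}"

    @tool("search", return_direct=True)
    def search(query: str) -> str:
        """Searches the API for the query and returns the answer directly."""
        return f"stub results for: {query}"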
8106e59279c5-0
PromptTemplates# Prompt template classes. pydantic model langchain.prompts.BasePromptTemplate[source]# Base class for all prompt templates, returning a prompt. field input_variables: List[str] [Required]# A list of the names of the variables the prompt template expects. field output_parser: Optional[langchain.schema.BaseOutputParser] = None# How to parse the output of calling an LLM on this formatted prompt. dict(**kwargs: Any) → Dict[source]# Return dictionary representation of prompt. abstract format(**kwargs: Any) → str[source]# Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") abstract format_prompt(**kwargs: Any) → langchain.schema.PromptValue[source]# Create Chat Messages. partial(**kwargs: Union[str, Callable[[], str]]) → langchain.prompts.base.BasePromptTemplate[source]# Return a partial of the prompt template. save(file_path: Union[pathlib.Path, str]) → None[source]# Save the prompt. Parameters file_path – Path to directory to save prompt to. Example: .. code-block:: python prompt.save(file_path="path/prompt.yaml") pydantic model langchain.prompts.ChatPromptTemplate[source]# format(**kwargs: Any) → str[source]# Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") format_prompt(**kwargs: Any) → langchain.schema.PromptValue[source]# Create Chat Messages.
https://langchain.readthedocs.io/en/latest/reference/modules/prompt.html
8106e59279c5-1
Create Chat Messages. partial(**kwargs: Union[str, Callable[[], str]]) β†’ langchain.prompts.base.BasePromptTemplate[source]# Return a partial of the prompt template. save(file_path: Union[pathlib.Path, str]) β†’ None[source]# Save the prompt. Parameters file_path – Path to directory to save prompt to. Example: .. code-block:: python prompt.save(file_path=”path/prompt.yaml”) pydantic model langchain.prompts.FewShotPromptTemplate[source]# Prompt template that contains few shot examples. field example_prompt: langchain.prompts.prompt.PromptTemplate [Required]# PromptTemplate used to format an individual example. field example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None# ExampleSelector to choose the examples to format into the prompt. Either this or examples should be provided. field example_separator: str = '\n\n'# String separator used to join the prefix, the examples, and suffix. field examples: Optional[List[dict]] = None# Examples to format into the prompt. Either this or example_selector should be provided. field input_variables: List[str] [Required]# A list of the names of the variables the prompt template expects. field prefix: str = ''# A prompt template string to put before the examples. field suffix: str [Required]# A prompt template string to put after the examples. field template_format: str = 'f-string'# The format of the prompt template. Options are: β€˜f-string’, β€˜jinja2’. field validate_template: bool = True# Whether or not to try validating the template. dict(**kwargs: Any) β†’ Dict[source]# Return a dictionary of the prompt. format(**kwargs: Any) β†’ str[source]#
https://langchain.readthedocs.io/en/latest/reference/modules/prompt.html
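A minimal sketch of FewShotPromptTemplate assembled from the fields listed above (examples, example_prompt, prefix, suffix, input_variables); the antonym examples are invented:

    from langchain.prompts import FewShotPromptTemplate, PromptTemplate

    example_prompt = PromptTemplate(
        input_variables=["word", "antonym"],
        template="Word: {word}\nAntonym: {antonym}",
    )
    examples = [
        {"word": "happy", "antonym": "sad"},
        {"word": "tall", "antonym": "short"},
    ]

    few_shot = FewShotPromptTemplate(
        examples=examples,
        example_prompt=example_prompt,
        prefix="Give the antonym of every input.",
        suffix="Word: {input}\nAntonym:",
        input_variables=["input"],
        example_separator="\n\n",
    )
    print(few_shot.format(input="big"))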
8106e59279c5-2
Return a dictionary of the prompt. format(**kwargs: Any) β†’ str[source]# Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") pydantic model langchain.prompts.FewShotPromptWithTemplates[source]# Prompt template that contains few shot examples. field example_prompt: langchain.prompts.prompt.PromptTemplate [Required]# PromptTemplate used to format an individual example. field example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None# ExampleSelector to choose the examples to format into the prompt. Either this or examples should be provided. field example_separator: str = '\n\n'# String separator used to join the prefix, the examples, and suffix. field examples: Optional[List[dict]] = None# Examples to format into the prompt. Either this or example_selector should be provided. field input_variables: List[str] [Required]# A list of the names of the variables the prompt template expects. field prefix: Optional[langchain.prompts.base.StringPromptTemplate] = None# A PromptTemplate to put before the examples. field suffix: langchain.prompts.base.StringPromptTemplate [Required]# A PromptTemplate to put after the examples. field template_format: str = 'f-string'# The format of the prompt template. Options are: β€˜f-string’, β€˜jinja2’. field validate_template: bool = True# Whether or not to try validating the template. dict(**kwargs: Any) β†’ Dict[source]# Return a dictionary of the prompt. format(**kwargs: Any) β†’ str[source]# Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns
https://langchain.readthedocs.io/en/latest/reference/modules/prompt.html
8106e59279c5-3
Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") pydantic model langchain.prompts.MessagesPlaceholder[source]# Prompt template that assumes variable is already list of messages. format_messages(**kwargs: Any) β†’ List[langchain.schema.BaseMessage][source]# To a BaseMessage. property input_variables: List[str]# Input variables for this prompt template. langchain.prompts.Prompt# alias of langchain.prompts.prompt.PromptTemplate pydantic model langchain.prompts.PromptTemplate[source]# Schema to represent a prompt for an LLM. Example from langchain import PromptTemplate prompt = PromptTemplate(input_variables=["foo"], template="Say {foo}") field input_variables: List[str] [Required]# A list of the names of the variables the prompt template expects. field template: str [Required]# The prompt template. field template_format: str = 'f-string'# The format of the prompt template. Options are: β€˜f-string’, β€˜jinja2’. field validate_template: bool = True# Whether or not to try validating the template. format(**kwargs: Any) β†’ str[source]# Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") classmethod from_examples(examples: List[str], suffix: str, input_variables: List[str], example_separator: str = '\n\n', prefix: str = '') β†’ langchain.prompts.prompt.PromptTemplate[source]# Take examples in list format with prefix and suffix to create a prompt. Intended be used as a way to dynamically create a prompt from examples. Parameters
https://langchain.readthedocs.io/en/latest/reference/modules/prompt.html
8106e59279c5-4
Intended to be used as a way to dynamically create a prompt from examples. Parameters examples – List of examples to use in the prompt. suffix – String to go after the list of examples. Should generally set up the user's input. input_variables – A list of variable names the final prompt template will expect. example_separator – The separator to use in between examples. Defaults to two new line characters. prefix – String that should go before any examples. Generally includes examples. Defaults to an empty string. Returns The final prompt generated. classmethod from_file(template_file: Union[str, pathlib.Path], input_variables: List[str]) → langchain.prompts.prompt.PromptTemplate[source]# Load a prompt from a file. Parameters template_file – The path to the file containing the prompt template. input_variables – A list of variable names the final prompt template will expect. Returns The prompt loaded from the file. classmethod from_template(template: str) → langchain.prompts.prompt.PromptTemplate[source]# Load a prompt template from a template string. pydantic model langchain.prompts.StringPromptTemplate[source]# String prompt should expose the format method, returning a prompt. format_prompt(**kwargs: Any) → langchain.schema.PromptValue[source]# Create Chat Messages. langchain.prompts.load_prompt(path: Union[str, pathlib.Path]) → langchain.prompts.base.BasePromptTemplate[source]# Unified method for loading a prompt from LangChainHub or the local filesystem.
https://langchain.readthedocs.io/en/latest/reference/modules/prompt.html
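A short sketch tying together the PromptTemplate constructors and methods above (from_template, format, partial, save); the save path is hypothetical:

    from langchain.prompts import PromptTemplate

    # Two equivalent ways to build the same template.
    prompt = PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    )
    same_prompt = PromptTemplate.from_template(
        "What is a good name for a company that makes {product}?"
    )

    print(prompt.format(product="colorful socks"))

    # partial() pre-fills some variables and returns a new template.
    partial_prompt = prompt.partial(product="colorful socks")
    print(partial_prompt.format())

    # save() serializes the template to disk (hypothetical file name).
    prompt.save("company_name.yaml")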
1f7e94e337b7-0
SerpAPI# For backwards compatibility. pydantic model langchain.serpapi.SerpAPIWrapper[source]# Wrapper around SerpAPI. To use, you should have the google-search-results python package installed, and the environment variable SERPAPI_API_KEY set with your API key, or pass serpapi_api_key as a named parameter to the constructor. Example from langchain import SerpAPIWrapper serpapi = SerpAPIWrapper() field aiosession: Optional[aiohttp.client.ClientSession] = None# field params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}# field serpapi_api_key: Optional[str] = None# async arun(query: str) → str[source]# Use aiohttp to run query through SerpAPI and parse result. get_params(query: str) → Dict[str, str][source]# Get parameters for SerpAPI. results(query: str) → dict[source]# Run query through SerpAPI and return the raw result. run(query: str) → str[source]# Run query through SerpAPI and parse result.
https://langchain.readthedocs.io/en/latest/reference/modules/serpapi.html
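A minimal sketch of SerpAPIWrapper usage following the example above (assumes SERPAPI_API_KEY is set in the environment and google-search-results is installed):

    from langchain import SerpAPIWrapper

    search = SerpAPIWrapper()

    # run() parses the result into a short answer string; results() returns the raw dict.
    print(search.run("What is the capital of France?"))
    raw = search.results("What is the capital of France?")
    print(list(raw.keys()))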
7a3977c6b700-0
LLMs# Wrappers on top of large language model APIs. pydantic model langchain.llms.AI21[source]# Wrapper around AI21 large language models. To use, you should have the environment variable AI21_API_KEY set with your API key. Example from langchain.llms import AI21 ai21 = AI21(model="j1-jumbo") Validators set_callback_manager » callback_manager set_verbose » verbose validate_environment » all fields field base_url: Optional[str] = None# Base url to use, if None decides based on model name. field countPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)# Penalizes repeated tokens according to count. field frequencyPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)# Penalizes repeated tokens according to frequency. field logitBias: Optional[Dict[str, float]] = None# Adjust the probability of specific tokens being generated. field maxTokens: int = 256# The maximum number of tokens to generate in the completion. field minTokens: int = 0# The minimum number of tokens to generate in the completion. field model: str = 'j1-jumbo'# Model name to use. field numResults: int = 1# How many completions to generate for each prompt.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-1
field numResults: int = 1# How many completions to generate for each prompt. field presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)# Penalizes repeated tokens. field temperature: float = 0.7# What sampling temperature to use. field topP: float = 1.0# Total probability mass of tokens to consider at each step. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model#
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
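A minimal sketch of calling the AI21 wrapper directly and via generate(), following the field names above (assumes AI21_API_KEY is set; note the camelCase parameter names such as maxTokens):

    from langchain.llms import AI21

    llm = AI21(model="j1-jumbo", maxTokens=64, temperature=0.7)

    # __call__ runs a single prompt and returns a string (cache-aware).
    print(llm("Write a one-line slogan for a coffee shop."))

    # generate() accepts a batch of prompts and returns an LLMResult.
    result = llm.generate(["Tell me a joke.", "Tell me a fun fact about otters."])
    print(result.generations[0][0].text)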
7a3977c6b700-2
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-3
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.AlephAlpha[source]# Wrapper around Aleph Alpha large language models. To use, you should have the aleph_alpha_client python package installed, and the environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass it as a named parameter to the constructor. Parameters are explained more in depth here: Aleph-Alpha/aleph-alpha-client Example from langchain.llms import AlephAlpha alpeh_alpha = AlephAlpha(aleph_alpha_api_key="my-api-key") Validators set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields field aleph_alpha_api_key: Optional[str] = None# API key for Aleph Alpha API. field best_of: Optional[int] = None# returns the one with the β€œbest of” results (highest log probability per token) field completion_bias_exclusion_first_token_only: bool = False# Only consider the first token for the completion_bias_exclusion. field contextual_control_threshold: Optional[float] = None# If set to None, attention control parameters only apply to those tokens that have explicitly been set in the request. If set to a non-None value, control parameters are also applied to similar tokens.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-4
If set to a non-None value, control parameters are also applied to similar tokens. field control_log_additive: Optional[bool] = True# True: apply control by adding the log(control_factor) to attention scores. False: (attention_scores - - attention_scores.min(-1)) * control_factor field echo: bool = False# Echo the prompt in the completion. field frequency_penalty: float = 0.0# Penalizes repeated tokens according to frequency. field log_probs: Optional[int] = None# Number of top log probabilities to be returned for each generated token. field logit_bias: Optional[Dict[int, float]] = None# The logit bias allows to influence the likelihood of generating tokens. field maximum_tokens: int = 64# The maximum number of tokens to be generated. field minimum_tokens: Optional[int] = 0# Generate at least this number of tokens. field model: Optional[str] = 'luminous-base'# Model name to use. field n: int = 1# How many completions to generate for each prompt. field penalty_bias: Optional[str] = None# Penalty bias for the completion. field penalty_exceptions: Optional[List[str]] = None# List of strings that may be generated without penalty, regardless of other penalty settings field penalty_exceptions_include_stop_sequences: Optional[bool] = None# Should stop_sequences be included in penalty_exceptions. field presence_penalty: float = 0.0# Penalizes repeated tokens. field raw_completion: bool = False# Force the raw completion of the model to be returned. field repetition_penalties_include_completion: bool = True# Flag deciding whether presence penalty or frequency penalty are updated from the completion. field repetition_penalties_include_prompt: Optional[bool] = False#
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-5
field repetition_penalties_include_prompt: Optional[bool] = False# Flag deciding whether presence penalty or frequency penalty are updated from the prompt. field stop_sequences: Optional[List[str]] = None# Stop sequences to use. field temperature: float = 0.0# A non-negative float that tunes the degree of randomness in generation. field tokens: Optional[bool] = False# return tokens of completion. field top_k: int = 0# Number of most likely tokens to consider at each step. field top_p: float = 0.0# Total probability mass of tokens to consider at each step. field use_multiplicative_presence_penalty: Optional[bool] = False# Flag deciding whether presence penalty is applied multiplicatively (True) or additively (False). __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-6
Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-7
Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.Anthropic[source]# Wrapper around Anthropic large language models. To use, you should have the anthropic python package installed, and the environment variable ANTHROPIC_API_KEY set with your API key, or pass it as a named parameter to the constructor. Example Validators set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields field max_tokens_to_sample: int = 256# Denotes the number of tokens to predict per generation. field model: str = 'claude-v1'# Model name to use. field temperature: float = 1.0#
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-8
Model name to use. field temperature: float = 1.0# A non-negative float that tunes the degree of randomness in generation. field top_k: int = 0# Number of most likely tokens to consider at each step. field top_p: float = 1# Total probability mass of tokens to consider at each step. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
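A minimal sketch of the Anthropic wrapper using the fields above (assumes ANTHROPIC_API_KEY is set; Claude models of this era expect the Human/Assistant prompt framing shown in the stream() example later on this page):

    from langchain.llms import Anthropic

    llm = Anthropic(model="claude-v1", temperature=0.7, max_tokens_to_sample=256)

    prompt = "\n\nHuman: Name three uses for a paperclip.\n\nAssistant:"
    print(llm(prompt))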
7a3977c6b700-9
exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-10
Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) stream(prompt: str, stop: Optional[List[str]] = None) β†’ Generator[source]# Call Anthropic completion_stream and return the resulting generator. BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change. Parameters prompt – The prompt to pass into the model. stop – Optional list of stop words to use when generating. Returns A generator representing the stream of tokens from Anthropic. Example prompt = "Write a poem about a stream." prompt = f"\n\nHuman: {prompt}\n\nAssistant:" generator = anthropic.stream(prompt) for token in generator: yield token classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.AzureOpenAI[source]# Azure specific OpenAI class that uses deployment name. Validators build_extra Β» all fields set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields field batch_size: int = 20# Batch size to use when passing multiple documents to generate. field best_of: int = 1# Generates best_of completions server-side and returns the β€œbest”. field deployment_name: str = ''# Deployment name to use. field frequency_penalty: float = 0# Penalizes repeated tokens according to frequency. field logit_bias: Optional[Dict[str, float]] [Optional]# Adjust the probability of specific tokens being generated. field max_retries: int = 6#
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-11
Adjust the probability of specific tokens being generated. field max_retries: int = 6# Maximum number of retries to make when generating. field max_tokens: int = 256# The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the models maximal context size. field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. field model_name: str = 'text-davinci-003'# Model name to use. field n: int = 1# How many completions to generate for each prompt. field presence_penalty: float = 0# Penalizes repeated tokens. field request_timeout: Optional[Union[float, Tuple[float, float]]] = None# Timeout for requests to OpenAI completion API. Default is 600 seconds. field streaming: bool = False# Whether to stream the results or not. field temperature: float = 0.7# What sampling temperature to use. field top_p: float = 1# Total probability mass of tokens to consider at each step. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
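A minimal sketch of the AzureOpenAI wrapper, which keys generation off a deployment name; the deployment name below is hypothetical, and the usual Azure OpenAI environment variables (for example OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_VERSION, OPENAI_API_KEY) are assumed to be configured:

    from langchain.llms import AzureOpenAI

    # deployment_name refers to your own Azure deployment of, e.g., text-davinci-003.
    llm = AzureOpenAI(deployment_name="my-davinci-deployment", model_name="text-davinci-003")

    print(llm("Tell me a joke."))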
7a3977c6b700-12
Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) β†’ langchain.schema.LLMResult# Create the LLMResult from the choices and prompts. dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-13
Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Calculate num tokens with tiktoken package. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) β†’ List[List[str]]# Get the sub prompts for llm call. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). max_tokens_for_prompt(prompt: str) β†’ int# Calculate the maximum number of tokens possible to generate for a prompt. Parameters prompt – The prompt to pass into the model. Returns The maximum number of tokens to generate for a prompt. Example max_tokens = openai.max_token_for_prompt("Tell me a joke.") modelname_to_contextsize(modelname: str) β†’ int# Calculate the maximum number of tokens possible to generate for a model. text-davinci-003: 4,097 tokens text-curie-001: 2,048 tokens text-babbage-001: 2,048 tokens
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-14
text-babbage-001: 2,048 tokens text-ada-001: 2,048 tokens code-davinci-002: 8,000 tokens code-cushman-001: 2,048 tokens Parameters modelname – The modelname we want to know the context size for. Returns The maximum context size Example max_tokens = openai.modelname_to_contextsize("text-davinci-003") prep_streaming_params(stop: Optional[List[str]] = None) β†’ Dict[str, Any]# Prepare the params for streaming. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) stream(prompt: str, stop: Optional[List[str]] = None) β†’ Generator# Call OpenAI with streaming flag and return the resulting generator. BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change. Parameters prompt – The prompts to pass into the model. stop – Optional list of stop words to use when generating. Returns A generator representing the stream of tokens from OpenAI. Example generator = openai.stream("Tell me a joke.") for token in generator: yield token classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.Banana[source]# Wrapper around Banana large language models. To use, you should have the banana-dev python package installed, and the environment variable BANANA_API_KEY set with your API key.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-15
and the environment variable BANANA_API_KEY set with your API key. Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class. Example Validators build_extra Β» all fields set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields field model_key: str = ''# model endpoint to use field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-16
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-17
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.CerebriumAI[source]# Wrapper around CerebriumAI large language models. To use, you should have the cerebrium python package installed, and the environment variable CEREBRIUMAI_API_KEY set with your API key. Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class. Example Validators build_extra Β» all fields set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields field endpoint_url: str = ''# model endpoint to use field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-18
Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int#
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-19
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.Cohere[source]# Wrapper around Cohere large language models. To use, you should have the cohere python package installed, and the environment variable COHERE_API_KEY set with your API key, or pass it as a named parameter to the constructor. Example from langchain.llms import Cohere cohere = Cohere(model="gptd-instruct-tft", cohere_api_key="my-api-key") Validators set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-20
Validators set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields field frequency_penalty: float = 0.0# Penalizes repeated tokens according to frequency. Between 0 and 1. field k: int = 0# Number of most likely tokens to consider at each step. field max_tokens: int = 256# Denotes the number of tokens to predict per generation. field model: Optional[str] = None# Model name to use. field p: int = 1# Total probability mass of tokens to consider at each step. field presence_penalty: float = 0.0# Penalizes repeated tokens. Between 0 and 1. field temperature: float = 0.75# A non-negative float that tunes the degree of randomness in generation. field truncate: Optional[str] = None# Specify how the client handles inputs longer than the maximum token length: Truncate from START, END or NONE __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
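A minimal sketch of the Cohere wrapper following the constructor example above (assumes COHERE_API_KEY is set, or pass cohere_api_key=...; the model name is taken from the example on this page):

    from langchain.llms import Cohere

    llm = Cohere(model="gptd-instruct-tft", max_tokens=128, temperature=0.75)
    print(llm("Write a tagline for an ice cream shop."))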
7a3977c6b700-21
Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
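Because generate() returns a langchain.schema.LLMResult rather than a plain string, a short hedged sketch of batching several prompts may help; the prompt strings are illustrative, and the cohere instance is assumed to be configured as in the earlier example.

.. code-block:: python

    # Batch several prompts in one call; the LLMResult's .generations list is
    # expected to hold one list of Generation objects per input prompt.
    result = cohere.generate(
        ["Write a tagline for a coffee shop.", "Write a tagline for a bookstore."],
        stop=["\n"],
    )
    for prompt_generations in result.generations:
        print(prompt_generations[0].text)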
7a3977c6b700-22
Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.DeepInfra[source]# Wrapper around DeepInfra deployed models. To use, you should have the requests python package installed, and the environment variable DEEPINFRA_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Only supports text-generation and text2text-generation for now. Example from langchain.llms import DeepInfra di = DeepInfra(model_id="google/flan-t5-xl", deepinfra_api_token="my-api-key") Validators set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
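As a hedged illustration of the DeepInfra wrapper described above (the prompt is made up, and a real deepinfra_api_token or DEEPINFRA_API_TOKEN value is assumed):

.. code-block:: python

    from langchain.llms import DeepInfra

    # Only text-generation and text2text-generation models are supported for now.
    di = DeepInfra(
        model_id="google/flan-t5-xl",
        deepinfra_api_token="my-api-key",
    )
    print(di("Translate to German: Where is the train station?"))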
7a3977c6b700-23
Validators set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-24
dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-25
Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.ForefrontAI[source]# Wrapper around ForefrontAI large language models. To use, you should have the environment variable FOREFRONTAI_API_KEY set with your API key. Example from langchain.llms import ForefrontAI forefrontai = ForefrontAI(endpoint_url="") Validators set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields field base_url: Optional[str] = None# Base url to use, if None decides based on model name. field endpoint_url: str = ''# Model name to use. field length: int = 256# The maximum number of tokens to generate in the completion. field repetition_penalty: int = 1# Penalizes repeated tokens according to frequency. field temperature: float = 0.7# What sampling temperature to use. field top_k: int = 40# The number of highest probability vocabulary tokens to keep for top-k-filtering. field top_p: float = 1.0# Total probability mass of tokens to consider at each step. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
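A hedged sketch showing how the ForefrontAI sampling fields listed above might be combined; the endpoint URL is a placeholder for your own deployed model and is not taken from the reference.

.. code-block:: python

    from langchain.llms import ForefrontAI

    # endpoint_url points at your own ForefrontAI deployment (placeholder here);
    # the sampling fields mirror the defaults documented above.
    forefrontai = ForefrontAI(
        endpoint_url="https://your-forefrontai-endpoint",  # placeholder URL
        temperature=0.7,
        length=256,
        top_k=40,
        top_p=1.0,
    )
    print(forefrontai("Write a haiku about the sea."))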
7a3977c6b700-26
Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int#
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-27
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.GooseAI[source]# Wrapper around OpenAI large language models. To use, you should have the openai python package installed, and the environment variable GOOSEAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example Validators build_extra Β» all fields set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields field frequency_penalty: float = 0#
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-28
set_verbose Β» verbose validate_environment Β» all fields field frequency_penalty: float = 0# Penalizes repeated tokens according to frequency. field logit_bias: Optional[Dict[str, float]] [Optional]# Adjust the probability of specific tokens being generated. field max_tokens: int = 256# The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the models maximal context size. field min_tokens: int = 1# The minimum number of tokens to generate in the completion. field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. field model_name: str = 'gpt-neo-20b'# Model name to use field n: int = 1# How many completions to generate for each prompt. field presence_penalty: float = 0# Penalizes repeated tokens. field temperature: float = 0.7# What sampling temperature to use field top_p: float = 1# Total probability mass of tokens to consider at each step. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model#
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
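A hedged sketch of the GooseAI wrapper using the fields documented above; GOOSEAI_API_KEY is assumed to be set, and the prompt is illustrative.

.. code-block:: python

    from langchain.llms import GooseAI

    # model_name defaults to "gpt-neo-20b"; extra create() parameters can be
    # passed through model_kwargs if needed.
    gooseai = GooseAI(
        model_name="gpt-neo-20b",
        temperature=0.7,
        max_tokens=256,
    )
    print(gooseai("Explain what a language model is in one sentence."))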
7a3977c6b700-29
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-30
Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.HuggingFaceEndpoint[source]# Wrapper around HuggingFaceHub Inference Endpoints. To use, you should have the huggingface_hub python package installed, and the environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Only supports text-generation and text2text-generation for now. Example from langchain.llms import HuggingFaceEndpoint endpoint_url = ( "https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud" ) hf = HuggingFaceEndpoint( endpoint_url=endpoint_url,
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-31
) hf = HuggingFaceEndpoint( endpoint_url=endpoint_url, huggingfacehub_api_token="my-api-key" ) Validators set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields field endpoint_url: str = ''# Endpoint URL to use. field model_kwargs: Optional[dict] = None# Key word arguments to pass to the model. field task: Optional[str] = None# Task to call the model with. Should be a task that returns generated_text. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
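Building on the endpoint example above, here is a hedged sketch that also sets the task and model_kwargs fields; the endpoint URL is the placeholder from the reference, and the max_new_tokens entry in model_kwargs is an assumption about what the deployed model accepts.

.. code-block:: python

    from langchain.llms import HuggingFaceEndpoint

    endpoint_url = (
        "https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud"
    )
    hf = HuggingFaceEndpoint(
        endpoint_url=endpoint_url,
        huggingfacehub_api_token="my-api-key",
        task="text-generation",  # must be a task that returns generated_text
        model_kwargs={"max_new_tokens": 64},  # assumed to be accepted by the endpoint
    )
    print(hf("Tell me a joke about databases."))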
7a3977c6b700-32
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-33
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.HuggingFaceHub[source]# Wrapper around HuggingFaceHub models. To use, you should have the huggingface_hub python package installed, and the environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Only supports text-generation and text2text-generation for now. Example from langchain.llms import HuggingFaceHub hf = HuggingFaceHub(repo_id="gpt2", huggingfacehub_api_token="my-api-key") Validators set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields field model_kwargs: Optional[dict] = None# Key word arguments to pass to the model. field repo_id: str = 'gpt2'# Model name to use. field task: Optional[str] = None# Task to call the model with. Should be a task that returns generated_text. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
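A hedged sketch of the HuggingFaceHub wrapper combining the repo_id, task, and model_kwargs fields listed above; the keyword arguments inside model_kwargs are assumptions about what the hosted model accepts.

.. code-block:: python

    from langchain.llms import HuggingFaceHub

    hf = HuggingFaceHub(
        repo_id="gpt2",
        task="text-generation",  # must be a task that returns generated_text
        model_kwargs={"temperature": 0.8, "max_length": 64},  # assumed model args
        huggingfacehub_api_token="my-api-key",
    )
    print(hf("Once upon a time"))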
7a3977c6b700-34
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-35
Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.HuggingFacePipeline[source]# Wrapper around HuggingFace Pipeline API. To use, you should have the transformers python package installed. Only supports text-generation and text2text-generation for now. Example using from_model_id:from langchain.llms import HuggingFacePipeline hf = HuggingFacePipeline.from_model_id( model_id="gpt2", task="text-generation"
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-36
model_id="gpt2", task="text-generation" ) Example passing pipeline in directly:from langchain.llms import HuggingFacePipeline from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_id = "gpt2" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10 ) hf = HuggingFacePipeline(pipeline=pipe) Validators set_callback_manager Β» callback_manager set_verbose Β» verbose field model_id: str = 'gpt2'# Model name to use. field model_kwargs: Optional[dict] = None# Key word arguments to pass to the model. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-37
Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. classmethod from_model_id(model_id: str, task: str, device: int = - 1, model_kwargs: Optional[dict] = None, **kwargs: Any) β†’ langchain.llms.base.LLM[source]# Construct the pipeline object from model_id and task. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
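To round out the from_model_id classmethod documented above, here is a hedged sketch that also sets the device argument; the prompt is illustrative, and the transformers and (optionally) torch packages are assumed to be installed.

.. code-block:: python

    from langchain.llms import HuggingFacePipeline

    # device=-1 is the default and runs on CPU; pass a GPU index such as 0
    # to run the pipeline on CUDA.
    hf = HuggingFacePipeline.from_model_id(
        model_id="gpt2",
        task="text-generation",
        device=-1,
    )
    print(hf("In a shocking finding, scientists discovered"))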
7a3977c6b700-38
Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.Modal[source]# Wrapper around Modal large language models. To use, you should have the modal-client python package installed. Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class. Example Validators build_extra Β» all fields set_callback_manager Β» callback_manager set_verbose Β» verbose field endpoint_url: str = ''# model endpoint to use field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str#
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
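A hedged sketch of the Modal wrapper; the endpoint URL is a placeholder for your own deployed Modal web endpoint, and the model_kwargs values are assumptions about what that endpoint understands.

.. code-block:: python

    from langchain.llms import Modal

    modal_llm = Modal(
        endpoint_url="https://your-workspace--your-app.modal.run",  # placeholder URL
        model_kwargs={"temperature": 0.7},  # assumed to be understood by the endpoint
    )
    print(modal_llm("Summarize the plot of Hamlet in two sentences."))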
7a3977c6b700-39
__call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-40
dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-41
Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.NLPCloud[source]# Wrapper around NLPCloud large language models. To use, you should have the nlpcloud python package installed, and the environment variable NLPCLOUD_API_KEY set with your API key. Example from langchain.llms import NLPCloud nlpcloud = NLPCloud(model="gpt-neox-20b") Validators set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields field bad_words: List[str] = []# List of tokens not allowed to be generated. field do_sample: bool = True# Whether to use sampling (True) or greedy decoding. field early_stopping: bool = False# Whether to stop beam search at num_beams sentences. field length_no_input: bool = True# Whether min_length and max_length should include the length of the input. field length_penalty: float = 1.0# Exponential penalty to the length. field max_length: int = 256# The maximum number of tokens to generate in the completion. field min_length: int = 1# The minimum number of tokens to generate in the completion. field model_name: str = 'finetuned-gpt-neox-20b'# Model name to use. field num_beams: int = 1# Number of beams for beam search. field num_return_sequences: int = 1# How many completions to generate for each prompt. field remove_end_sequence: bool = True# Whether or not to remove the end sequence token. field remove_input: bool = True# Remove input text from API response field repetition_penalty: float = 1.0#
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-42
Remove input text from API response field repetition_penalty: float = 1.0# Penalizes repeated tokens. 1.0 means no penalty. field temperature: float = 0.7# What sampling temperature to use. field top_k: int = 50# The number of highest probability tokens to keep for top-k filtering. field top_p: int = 1# Total probability mass of tokens to consider at each step. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
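A hedged sketch combining a few of the NLPCloud generation fields documented above; NLPCLOUD_API_KEY is assumed to be set, and the prompt is illustrative.

.. code-block:: python

    from langchain.llms import NLPCloud

    nlpcloud = NLPCloud(
        model_name="finetuned-gpt-neox-20b",
        temperature=0.7,
        max_length=256,
        top_k=50,
    )
    print(nlpcloud("Write a short product description for a mechanical keyboard."))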
7a3977c6b700-43
Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None#
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-44
save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.OpenAI[source]# Generic OpenAI class that uses model name. Validators build_extra Β» all fields set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
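Since agenerate() is a coroutine, a hedged sketch of awaiting it from an asyncio program may help; the model name and prompts are illustrative, and OPENAI_API_KEY is assumed to be set.

.. code-block:: python

    import asyncio

    from langchain.llms import OpenAI

    async def main() -> None:
        llm = OpenAI(model_name="text-davinci-003")  # model name is illustrative
        # agenerate runs the LLM on a batch of prompts and returns an LLMResult.
        result = await llm.agenerate(["Tell me a joke.", "Tell me a riddle."])
        for generations in result.generations:
            print(generations[0].text)

    asyncio.run(main())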
7a3977c6b700-45
Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) β†’ langchain.schema.LLMResult# Create the LLMResult from the choices and prompts. dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Calculate num tokens with tiktoken package. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-46
Get the number of tokens in the message. get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]# Get the sub prompts for llm call. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). max_tokens_for_prompt(prompt: str) → int# Calculate the maximum number of tokens possible to generate for a prompt. Parameters prompt – The prompt to pass into the model. Returns The maximum number of tokens to generate for a prompt. Example max_tokens = openai.max_tokens_for_prompt("Tell me a joke.") modelname_to_contextsize(modelname: str) → int# Calculate the maximum number of tokens possible to generate for a model. text-davinci-003: 4,097 tokens text-curie-001: 2,048 tokens text-babbage-001: 2,048 tokens text-ada-001: 2,048 tokens code-davinci-002: 8,000 tokens code-cushman-001: 2,048 tokens Parameters modelname – The modelname we want to know the context size for. Returns The maximum context size Example
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-47
Returns The maximum context size Example max_tokens = openai.modelname_to_contextsize("text-davinci-003") prep_streaming_params(stop: Optional[List[str]] = None) β†’ Dict[str, Any]# Prepare the params for streaming. save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) stream(prompt: str, stop: Optional[List[str]] = None) β†’ Generator# Call OpenAI with streaming flag and return the resulting generator. BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change. Parameters prompt – The prompts to pass into the model. stop – Optional list of stop words to use when generating. Returns A generator representing the stream of tokens from OpenAI. Example generator = openai.stream("Tell me a joke.") for token in generator: yield token classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.OpenAIChat[source]# Wrapper around OpenAI Chat large language models. To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example from langchain.llms import OpenAIChat openaichat = OpenAIChat(model_name="gpt-3.5-turbo") Validators build_extra Β» all fields
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
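The stream() method documented above returns a generator rather than a finished string; here is a hedged sketch of consuming it chunk by chunk (the model name and prompt are illustrative, and, as noted above, the streaming interface is marked beta and may change).

.. code-block:: python

    from langchain.llms import OpenAI

    openai_llm = OpenAI(model_name="text-davinci-003")  # model name is illustrative

    # stream() calls OpenAI with the streaming flag and yields chunks as they arrive.
    for token in openai_llm.stream("Tell me a joke."):
        print(token, end="", flush=True)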
7a3977c6b700-48
Validators build_extra Β» all fields set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields field max_retries: int = 6# Maximum number of retries to make when generating. field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. field model_name: str = 'gpt-3.5-turbo'# Model name to use. field prefix_messages: List [Optional]# Series of messages for Chat input. field streaming: bool = False# Whether to stream the results or not. field verbose: bool [Optional]# Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-49
Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int[source]# Calculate num tokens with tiktoken package. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
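A hedged sketch of the prefix_messages field documented above, which prepends chat messages to every prompt; the role/content dict format is assumed to mirror the OpenAI chat API, and the content strings are illustrative.

.. code-block:: python

    from langchain.llms import OpenAIChat

    openaichat = OpenAIChat(
        model_name="gpt-3.5-turbo",
        prefix_messages=[
            # Assumed message format: one dict per message with role and content.
            {"role": "system", "content": "You are a terse assistant that answers in one sentence."}
        ],
    )
    print(openaichat("What is the capital of France?"))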
7a3977c6b700-50
Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.Petals[source]# Wrapper around Petals Bloom models. To use, you should have the petals python package installed, and the environment variable HUGGINGFACE_API_KEY set with your API key. Any parameters that are valid to be passed to the call can be passed in, even if not explicitly saved on this class. Example Validators build_extra Β» all fields set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields field client: Any = None# The client to use for the API calls. field do_sample: bool = True# Whether or not to use sampling; use greedy decoding otherwise.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-51
Whether or not to use sampling; use greedy decoding otherwise. field max_length: Optional[int] = None# The maximum length of the sequence to be generated. field max_new_tokens: int = 256# The maximum number of new tokens to generate in the completion. field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. field model_name: str = 'bigscience/bloom-petals'# The model to use. field temperature: float = 0.7# What sampling temperature to use field tokenizer: Any = None# The tokenizer to use for the API calls. field top_k: Optional[int] = None# The number of highest probability vocabulary tokens to keep for top-k-filtering. field top_p: float = 0.9# The cumulative probability for top-p sampling. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-52
Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
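A hedged sketch of the Petals wrapper built from the fields documented above; HUGGINGFACE_API_KEY is assumed to be set, and the prompt is illustrative.

.. code-block:: python

    from langchain.llms import Petals

    petals = Petals(
        model_name="bigscience/bloom-petals",
        max_new_tokens=64,
        temperature=0.7,
        do_sample=True,
    )
    print(petals("Write one sentence about distributed inference."))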
7a3977c6b700-53
Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.PromptLayerOpenAI[source]# Wrapper around OpenAI large language models. To use, you should have the openai and promptlayer python packages installed, and the environment variables OPENAI_API_KEY and PROMPTLAYER_API_KEY set with your OpenAI API key and PromptLayer key respectively. All parameters that can be passed to the OpenAI LLM can also be passed here. The PromptLayerOpenAI LLM adds two optional parameters: :param pl_tags: List of strings to tag the request with. :param return_pl_id: If True, the PromptLayer request ID will be returned in the generation_info field of the Generation object. Example
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-54
returned in the generation_info field of the Generation object. Example from langchain.llms import PromptLayerOpenAI openai = PromptLayerOpenAI(model_name="text-davinci-003") Validators build_extra Β» all fields set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
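A hedged sketch showing the extra pl_tags parameter described above; the tag names and prompt are illustrative, and both OPENAI_API_KEY and PROMPTLAYER_API_KEY are assumed to be set.

.. code-block:: python

    from langchain.llms import PromptLayerOpenAI

    openai_llm = PromptLayerOpenAI(
        model_name="text-davinci-003",
        pl_tags=["langchain-docs", "example"],  # illustrative tag names
    )
    print(openai_llm("Tell me a joke."))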
7a3977c6b700-55
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) β†’ langchain.schema.LLMResult# Create the LLMResult from the choices and prompts. dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Calculate num tokens with tiktoken package. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) β†’ List[List[str]]# Get the sub prompts for llm call. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode#
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-56
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). max_tokens_for_prompt(prompt: str) → int# Calculate the maximum number of tokens possible to generate for a prompt. Parameters prompt – The prompt to pass into the model. Returns The maximum number of tokens to generate for a prompt. Example max_tokens = openai.max_tokens_for_prompt("Tell me a joke.") modelname_to_contextsize(modelname: str) → int# Calculate the maximum number of tokens possible to generate for a model. text-davinci-003: 4,097 tokens text-curie-001: 2,048 tokens text-babbage-001: 2,048 tokens text-ada-001: 2,048 tokens code-davinci-002: 8,000 tokens code-cushman-001: 2,048 tokens Parameters modelname – The modelname we want to know the context size for. Returns The maximum context size Example max_tokens = openai.modelname_to_contextsize("text-davinci-003") prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]# Prepare the params for streaming. save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") stream(prompt: str, stop: Optional[List[str]] = None) → Generator# Call OpenAI with streaming flag and return the resulting generator. BETA: this is a beta feature while we figure out the right abstraction.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-57
BETA: this is a beta feature while we figure out the right abstraction. Once that happens, this interface could change. Parameters prompt – The prompt to pass into the model. stop – Optional list of stop words to use when generating. Returns A generator representing the stream of tokens from OpenAI. Example generator = openai.stream("Tell me a joke.") for token in generator: yield token classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.PromptLayerOpenAIChat[source]# Wrapper around OpenAI large language models. To use, you should have the openai and promptlayer python packages installed, and the environment variables OPENAI_API_KEY and PROMPTLAYER_API_KEY set with your OpenAI API key and PromptLayer key respectively. All parameters that can be passed to the OpenAIChat LLM can also be passed here. The PromptLayerOpenAIChat adds two optional parameters: :param pl_tags: List of strings to tag the request with. :param return_pl_id: If True, the PromptLayer request ID will be returned in the generation_info field of the Generation object. Example from langchain.llms import PromptLayerOpenAIChat openaichat = PromptLayerOpenAIChat(model_name="gpt-3.5-turbo") Validators build_extra » all fields set_callback_manager » callback_manager set_verbose » verbose validate_environment » all fields field max_retries: int = 6# Maximum number of retries to make when generating. field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
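A short sketch of the two PromptLayer-specific parameters described above, pl_tags and return_pl_id (it assumes the openai and promptlayer packages are installed and both API keys are set; the tag name and prompt are placeholders):

    from langchain.llms import PromptLayerOpenAIChat

    llm = PromptLayerOpenAIChat(
        model_name="gpt-3.5-turbo",
        pl_tags=["docs-example"],  # tags shown on the PromptLayer dashboard
        return_pl_id=True,         # surface the PromptLayer request id
    )

    result = llm.generate(["What is the capital of France?"])
    generation = result.generations[0][0]

    # with return_pl_id=True, the request id is attached to generation_info
    print(generation.text, generation.generation_info)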
7a3977c6b700-58
Holds any model parameters valid for create call not explicitly specified. field model_name: str = 'gpt-3.5-turbo'# Model name to use. field prefix_messages: List [Optional]# Series of messages for Chat input. field streaming: bool = False# Whether to stream the results or not. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-59
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Calculate num tokens with tiktoken package. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-60
Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.SagemakerEndpoint[source]# Wrapper around custom Sagemaker Inference Endpoints. To use, you must supply the endpoint name from your deployed Sagemaker model & the region where it is deployed. To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Sagemaker endpoint. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html Validators set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields field content_handler: langchain.llms.sagemaker_endpoint.ContentHandlerBase [Required]# The content handler class that provides an input and output transform functions to handle formats between LLM and the endpoint. field credentials_profile_name: Optional[str] = None# The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html field endpoint_kwargs: Optional[Dict] = None# Optional attributes passed to the invoke_endpoint
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
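Because SagemakerEndpoint requires a content handler, a sketch of a typical setup may help. The endpoint name, region, profile name, and JSON payload shape below are all placeholders, and the transform functions must match however your own model was actually deployed:

    import json

    from langchain.llms import SagemakerEndpoint
    from langchain.llms.sagemaker_endpoint import ContentHandlerBase

    class ContentHandler(ContentHandlerBase):
        content_type = "application/json"
        accepts = "application/json"

        def transform_input(self, prompt, model_kwargs):
            # the payload shape depends entirely on the deployed model
            return json.dumps({"inputs": prompt, **model_kwargs}).encode("utf-8")

        def transform_output(self, output):
            response = json.loads(output.read().decode("utf-8"))
            return response[0]["generated_text"]

    llm = SagemakerEndpoint(
        endpoint_name="my-llm-endpoint",       # placeholder endpoint name
        region_name="us-west-2",
        credentials_profile_name="default",
        model_kwargs={"temperature": 0.7},
        content_handler=ContentHandler(),
    )

    print(llm("Tell me a joke."))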
7a3977c6b700-61
field endpoint_kwargs: Optional[Dict] = None# Optional attributes passed to the invoke_endpoint function. See `boto3`_. docs for more info. .. _boto3: <https://boto3.amazonaws.com/v1/documentation/api/latest/index.html> field endpoint_name: str = ''# The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region. field model_kwargs: Optional[Dict] = None# Key word arguments to pass to the model. field region_name: str = ''# The aws region where the Sagemaker model is deployed, eg. us-west-2. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-62
Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-63
Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.SelfHostedHuggingFaceLLM[source]# Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware. Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Only supports text-generation and text2text-generation for now. Example using from_model_id:from langchain.llms import SelfHostedHuggingFaceLLM import runhouse as rh
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-64
import runhouse as rh gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") hf = SelfHostedHuggingFaceLLM( model_id="google/flan-t5-large", task="text2text-generation", hardware=gpu ) Example passing fn that generates a pipeline (bc the pipeline is not serializable):from langchain.llms import SelfHostedHuggingFaceLLM from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import runhouse as rh def get_pipeline(): model_id = "gpt2" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer ) return pipe hf = SelfHostedHuggingFaceLLM( model_load_fn=get_pipeline, model_id="gpt2", hardware=gpu) Validators set_callback_manager Β» callback_manager set_verbose Β» verbose field device: int = 0# Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc. field hardware: Any = None# Remote hardware to send the inference function to. field inference_fn: Callable = <function _generate_text># Inference function to send to the remote hardware. field load_fn_kwargs: Optional[dict] = None# Key word arguments to pass to the model load function. field model_id: str = 'gpt2'# Hugging Face model_id to load the model. field model_kwargs: Optional[dict] = None# Key word arguments to pass to the model. field model_load_fn: Callable = <function _load_transformer># Function to load the model remotely on the server.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-65
Function to load the model remotely on the server. field model_reqs: List[str] = ['./', 'transformers', 'torch']# Requirements to install on hardware to inference the model. field task: str = 'text-generation'# Hugging Face task (either β€œtext-generation” or β€œtext2text-generation”). __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-66
update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) β†’ langchain.llms.base.LLM# Init the SelfHostedPipeline from a pipeline object or string. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict().
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-67
Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.SelfHostedPipeline[source]# Run model inference on self-hosted remote hardware. Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Example for custom pipeline and inference functions:from langchain.llms import SelfHostedPipeline from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import runhouse as rh def load_pipeline(): tokenizer = AutoTokenizer.from_pretrained("gpt2") model = AutoModelForCausalLM.from_pretrained("gpt2") return pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10 ) def inference_fn(pipeline, prompt, stop = None): return pipeline(prompt)[0]["generated_text"] gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") llm = SelfHostedPipeline( model_load_fn=load_pipeline, hardware=gpu,
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-68
model_load_fn=load_pipeline, hardware=gpu, model_reqs=model_reqs, inference_fn=inference_fn ) Example for <2GB model (can be serialized and sent directly to the server):from langchain.llms import SelfHostedPipeline import runhouse as rh gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") my_model = ... llm = SelfHostedPipeline.from_pipeline( pipeline=my_model, hardware=gpu, model_reqs=["./", "torch", "transformers"], ) Example passing model path for larger models:from langchain.llms import SelfHostedPipeline import runhouse as rh import pickle from transformers import pipeline generator = pipeline(model="gpt2") rh.blob(pickle.dumps(generator), path="models/pipeline.pkl" ).save().to(gpu, path="models") llm = SelfHostedPipeline.from_pipeline( pipeline="models/pipeline.pkl", hardware=gpu, model_reqs=["./", "torch", "transformers"], ) Validators set_callback_manager Β» callback_manager set_verbose Β» verbose field hardware: Any = None# Remote hardware to send the inference function to. field inference_fn: Callable = <function _generate_text># Inference function to send to the remote hardware. field load_fn_kwargs: Optional[dict] = None# Key word arguments to pass to the model load function. field model_load_fn: Callable [Required]# Function to load the model remotely on the server. field model_reqs: List[str] = ['./', 'torch']# Requirements to install on hardware to inference the model. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str#
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-69
__call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-70
dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) β†’ langchain.llms.base.LLM[source]# Init the SelfHostedPipeline from a pipeline object or string. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-71
Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.StochasticAI[source]# Wrapper around StochasticAI large language models. To use, you should have the environment variable STOCHASTICAI_API_KEY set with your API key. Example from langchain.llms import StochasticAI stochasticai = StochasticAI(api_url="") Validators build_extra » all fields set_callback_manager » callback_manager set_verbose » verbose validate_environment » all fields field api_url: str = ''# The API URL of the deployed StochasticAI model to call. field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified. __call__(prompt: str, stop: Optional[List[str]] = None) → str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
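A slightly fuller sketch than the one-liner above, showing a call through __call__ (STOCHASTICAI_API_KEY must be set; the api_url and the keys inside model_kwargs are placeholders that depend on your deployed model):

    from langchain.llms import StochasticAI

    llm = StochasticAI(
        api_url="https://api.stochastic.ai/v1/modelApi/submit/your-model",  # placeholder URL
        model_kwargs={"temperature": 0.7, "max_tokens": 64},  # placeholder parameters
    )

    print(llm("What is the capital of France?"))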
7a3977c6b700-72
Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) β†’ int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β†’ int# Get the number of tokens in the message.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-73
Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β†’ unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) β†’ None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod update_forward_refs(**localns: Any) β†’ None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.Writer[source]# Wrapper around Writer large language models. To use, you should have the environment variable WRITER_API_KEY set with your API key. Example from langchain import Writer writer = Writer(model_id="palmyra-base") Validators set_callback_manager Β» callback_manager set_verbose Β» verbose validate_environment Β» all fields field base_url: Optional[str] = None# Base url to use, if None decides based on model name. field beam_search_diversity_rate: float = 1.0# Only applies to beam search, i.e. when the beam width is >1.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-74
Only applies to beam search, i.e. when the beam width is >1. A higher value encourages beam search to return a more diverse set of candidates field beam_width: Optional[int] = None# The number of concurrent candidates to keep track of during beam search field length: int = 256# The maximum number of tokens to generate in the completion. field length_pentaly: float = 1.0# Only applies to beam search, i.e. when the beam width is >1. Larger values penalize long candidates more heavily, thus preferring shorter candidates field logprobs: bool = False# Whether to return log probabilities. field model_id: str = 'palmyra-base'# Model name to use. field random_seed: int = 0# The model generates random results. Changing the random seed alone will produce a different response with similar characteristics. It is possible to reproduce results by fixing the random seed (assuming all other hyperparameters are also fixed) field repetition_penalty: float = 1.0# Penalizes repeated tokens according to frequency. field stop: Optional[List[str]] = None# Sequences when completion generation will stop field temperature: float = 1.0# What sampling temperature to use. field tokens_to_generate: int = 24# Max number of tokens to generate. field top_k: int = 1# The number of highest probability vocabulary tokens to keep for top-k-filtering. field top_p: float = 1.0# Total probability mass of tokens to consider at each step. __call__(prompt: str, stop: Optional[List[str]] = None) β†’ str# Check Cache and run the LLM on the given prompt and input.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
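To tie the field descriptions above together, here is a sketch of constructing Writer with several of them set explicitly (WRITER_API_KEY must be set; the parameter values and prompt are illustrative only):

    from langchain.llms import Writer

    llm = Writer(
        model_id="palmyra-base",
        tokens_to_generate=64,     # cap on generated tokens
        temperature=0.7,
        top_p=0.9,
        repetition_penalty=1.2,
        stop=["\n\n"],             # stop generating at a blank line
    )

    print(llm("Write a tagline for an open-source vector database."))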
7a3977c6b700-75
Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β†’ Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β†’ Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) β†’ Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) β†’ langchain.schema.LLMResult# Run the LLM on the given prompt and input.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
7a3977c6b700-76
Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int# Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns.
https://langchain.readthedocs.io/en/latest/reference/modules/llms.html
5211de1b2b1f-0
Embeddings# Wrappers around embedding modules. pydantic model langchain.embeddings.CohereEmbeddings[source]# Wrapper around Cohere embedding models. To use, you should have the cohere python package installed, and the environment variable COHERE_API_KEY set with your API key or pass it as a named parameter to the constructor. Example from langchain.embeddings import CohereEmbeddings cohere = CohereEmbeddings(model="medium", cohere_api_key="my-api-key") field model: str = 'large'# Model name to use. field truncate: Optional[str] = None# Truncate embeddings that are too long from start or end ("NONE"|"START"|"END") embed_documents(texts: List[str]) → List[List[float]][source]# Call out to Cohere's embedding endpoint. Parameters texts – The list of texts to embed. Returns List of embeddings, one for each text. embed_query(text: str) → List[float][source]# Call out to Cohere's embedding endpoint. Parameters text – The text to embed. Returns Embeddings for the text. pydantic model langchain.embeddings.FakeEmbeddings[source]# embed_documents(texts: List[str]) → List[List[float]][source]# Embed search docs. embed_query(text: str) → List[float][source]# Embed query text. pydantic model langchain.embeddings.HuggingFaceEmbeddings[source]# Wrapper around sentence_transformers embedding models. To use, you should have the sentence_transformers python package installed. Example from langchain.embeddings import HuggingFaceEmbeddings model_name = "sentence-transformers/all-mpnet-base-v2" hf = HuggingFaceEmbeddings(model_name=model_name)
https://langchain.readthedocs.io/en/latest/reference/modules/embeddings.html
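Returning to the CohereEmbeddings class documented above, a minimal sketch of the two embedding calls (the cohere package must be installed; the API key and texts are placeholders):

    from langchain.embeddings import CohereEmbeddings

    embeddings = CohereEmbeddings(model="large", cohere_api_key="my-api-key")

    # one vector per input document
    doc_vectors = embeddings.embed_documents(["hello world", "goodbye world"])

    # a single vector for the query
    query_vector = embeddings.embed_query("hello")

    print(len(doc_vectors), len(query_vector))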
5211de1b2b1f-1
hf = HuggingFaceEmbeddings(model_name=model_name) field model_name: str = 'sentence-transformers/all-mpnet-base-v2'# Model name to use. embed_documents(texts: List[str]) β†’ List[List[float]][source]# Compute doc embeddings using a HuggingFace transformer model. Parameters texts – The list of texts to embed. Returns List of embeddings, one for each text. embed_query(text: str) β†’ List[float][source]# Compute query embeddings using a HuggingFace transformer model. Parameters text – The text to embed. Returns Embeddings for the text. pydantic model langchain.embeddings.HuggingFaceHubEmbeddings[source]# Wrapper around HuggingFaceHub embedding models. To use, you should have the huggingface_hub python package installed, and the environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Example from langchain.embeddings import HuggingFaceHubEmbeddings repo_id = "sentence-transformers/all-mpnet-base-v2" hf = HuggingFaceHubEmbeddings( repo_id=repo_id, task="feature-extraction", huggingfacehub_api_token="my-api-key", ) field model_kwargs: Optional[dict] = None# Key word arguments to pass to the model. field repo_id: str = 'sentence-transformers/all-mpnet-base-v2'# Model name to use. field task: Optional[str] = 'feature-extraction'# Task to call the model with. embed_documents(texts: List[str]) β†’ List[List[float]][source]# Call out to HuggingFaceHub’s embedding endpoint for embedding search docs. Parameters texts – The list of texts to embed. Returns
https://langchain.readthedocs.io/en/latest/reference/modules/embeddings.html
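For the local HuggingFaceEmbeddings class above, no API key is needed; a short usage sketch (the sentence_transformers package must be installed, and the model weights are downloaded on first use):

    from langchain.embeddings import HuggingFaceEmbeddings

    hf = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

    doc_vectors = hf.embed_documents(["LangChain wraps several embedding backends."])
    query_vector = hf.embed_query("What does LangChain wrap?")

    # all-mpnet-base-v2 produces 768-dimensional vectors
    print(len(doc_vectors[0]), len(query_vector))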
5211de1b2b1f-2
Parameters texts – The list of texts to embed. Returns List of embeddings, one for each text. embed_query(text: str) β†’ List[float][source]# Call out to HuggingFaceHub’s embedding endpoint for embedding query text. Parameters text – The text to embed. Returns Embeddings for the text. pydantic model langchain.embeddings.HuggingFaceInstructEmbeddings[source]# Wrapper around sentence_transformers embedding models. To use, you should have the sentence_transformers and InstructorEmbedding python package installed. Example from langchain.embeddings import HuggingFaceInstructEmbeddings model_name = "hkunlp/instructor-large" hf = HuggingFaceInstructEmbeddings(model_name=model_name) field embed_instruction: str = 'Represent the document for retrieval: '# Instruction to use for embedding documents. field model_name: str = 'hkunlp/instructor-large'# Model name to use. field query_instruction: str = 'Represent the question for retrieving supporting documents: '# Instruction to use for embedding query. embed_documents(texts: List[str]) β†’ List[List[float]][source]# Compute doc embeddings using a HuggingFace instruct model. Parameters texts – The list of texts to embed. Returns List of embeddings, one for each text. embed_query(text: str) β†’ List[float][source]# Compute query embeddings using a HuggingFace instruct model. Parameters text – The text to embed. Returns Embeddings for the text. pydantic model langchain.embeddings.OpenAIEmbeddings[source]# Wrapper around OpenAI embedding models. To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key or pass it as a named parameter to the constructor. Example
https://langchain.readthedocs.io/en/latest/reference/modules/embeddings.html
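The embed_instruction and query_instruction fields are the main reason to reach for HuggingFaceInstructEmbeddings; a sketch with custom instructions (the sentence_transformers and InstructorEmbedding packages must be installed; the instruction strings and texts are illustrative):

    from langchain.embeddings import HuggingFaceInstructEmbeddings

    embeddings = HuggingFaceInstructEmbeddings(
        model_name="hkunlp/instructor-large",
        embed_instruction="Represent the scientific paragraph for retrieval: ",
        query_instruction="Represent the science question for retrieving supporting paragraphs: ",
    )

    doc_vectors = embeddings.embed_documents(["Instruction-tuned embedders accept a task prefix."])
    query_vector = embeddings.embed_query("Which embedders accept a task prefix?")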
5211de1b2b1f-3
as a named parameter to the constructor. Example from langchain.embeddings import OpenAIEmbeddings openai = OpenAIEmbeddings(openai_api_key="my-api-key") In order to use the library with Microsoft Azure endpoints, you need to set the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and, optionally, API_VERSION. The OPENAI_API_TYPE must be set to 'azure' and the others correspond to the properties of your endpoint. In addition, the deployment name must be passed as the model parameter. Example import os os.environ["OPENAI_API_TYPE"] = "azure" os.environ["OPENAI_API_BASE"] = "https://<your-endpoint>.openai.azure.com/" os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key" from langchain.embeddings.openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings(model="your-embeddings-deployment-name") text = "This is a test query." query_result = embeddings.embed_query(text) field chunk_size: int = 1000# Maximum number of texts to embed in each batch field max_retries: int = 6# Maximum number of retries to make when generating. embed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]][source]# Call out to OpenAI's embedding endpoint for embedding search docs. Parameters texts – The list of texts to embed. chunk_size – The chunk size of embeddings. If None, will use the chunk size specified by the class. Returns List of embeddings, one for each text. embed_query(text: str) → List[float][source]# Call out to OpenAI's embedding endpoint for embedding query text. Parameters text – The text to embed. Returns
https://langchain.readthedocs.io/en/latest/reference/modules/embeddings.html
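A sketch of the batching behaviour controlled by chunk_size in the OpenAIEmbeddings methods above (OPENAI_API_KEY must be set, or passed as below; the key and texts are placeholders):

    from langchain.embeddings import OpenAIEmbeddings

    embeddings = OpenAIEmbeddings(openai_api_key="my-api-key")

    texts = ["first document", "second document", "third document"]

    # chunk_size controls how many texts are sent to the API per batch request
    doc_vectors = embeddings.embed_documents(texts, chunk_size=2)
    query_vector = embeddings.embed_query("which document?")

    print(len(doc_vectors), len(query_vector))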
5211de1b2b1f-4
Parameters text – The text to embed. Returns Embeddings for the text. pydantic model langchain.embeddings.SagemakerEndpointEmbeddings[source]# Wrapper around custom Sagemaker Inference Endpoints. To use, you must supply the endpoint name from your deployed Sagemaker model & the region where it is deployed. To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Sagemaker endpoint. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html field content_handler: langchain.llms.sagemaker_endpoint.ContentHandlerBase [Required]# The content handler class that provides an input and output transform functions to handle formats between LLM and the endpoint. field credentials_profile_name: Optional[str] = None# The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html field endpoint_kwargs: Optional[Dict] = None# Optional attributes passed to the invoke_endpoint function. See `boto3`_. docs for more info. .. _boto3: <https://boto3.amazonaws.com/v1/documentation/api/latest/index.html> field endpoint_name: str = ''# The name of the endpoint from the deployed Sagemaker model.
https://langchain.readthedocs.io/en/latest/reference/modules/embeddings.html