Memory

pydantic model langchain.memory.ConversationEntityMemory
    field entity_cache: List[str] = []
    field entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True)
    field entity_store: langchain.memory.entity.BaseEntityStore [Optional]
    field entity_summarization_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the "Entity" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\n\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\n\nFull conversation history (for context):\n{history}\n\nEntity to summarize:\n{entity}\n\nExisting summary of {entity}:\n{summary}\n\nLast line of conversation:\nHuman: {input}\nUpdated summary:', template_format='f-string', validate_template=True)
    field human_prefix: str = 'Human'
    field k: int = 3
    field llm: langchain.base_language.BaseLanguageModel [Required]

    clear() → None
        Clear memory contents.
    load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any]
        Return history buffer.
    save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None
        Save context from this conversation to buffer.
    property buffer: List[langchain.schema.BaseMessage]
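A minimal sketch of this memory driving a ConversationChain (assumes an OpenAI API key is configured; ENTITY_MEMORY_CONVERSATION_TEMPLATE is the companion prompt that expects the `entities` variable this memory supplies):

    from langchain.chains import ConversationChain
    from langchain.llms import OpenAI
    from langchain.memory import ConversationEntityMemory
    from langchain.memory.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE

    llm = OpenAI(temperature=0)
    conversation = ConversationChain(
        llm=llm,
        prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE,
        # the same LLM runs the entity extraction/summarization prompts above
        memory=ConversationEntityMemory(llm=llm),
    )
    conversation.predict(input="Deven and Sam are working on a hackathon project.")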
pydantic model langchain.memory.ConversationKGMemory
    Knowledge graph memory for storing conversation memory. Integrates with an external knowledge graph to store and retrieve information about knowledge triples in the conversation.

    field ai_prefix: str = 'AI'
    field entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate
        Defaults to the same proper-noun extraction PromptTemplate shown above for ConversationEntityMemory (input_variables=['history', 'input']).
    field human_prefix: str = 'Human'
    field k: int = 2
        Number of previous utterances to include in the context.
    field kg: langchain.graphs.networkx_graph.NetworkxEntityGraph [Optional]
    field knowledge_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\n\nEXAMPLE\nConversation history:\nPerson #1: Did you hear aliens landed in Area 51?\nAI: No, I didn't hear that. What do you know about Area 51?\nPerson #1: It's a secret military base in Nevada.\nAI: What do you know about Nevada?\nLast line of conversation:\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: Hello.\nAI: Hi! How are you?\nPerson #1: I'm good. How are you?\nAI: I'm good too.\nLast line of conversation:\nPerson #1: I'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: What do you know about Descartes?\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\nLast line of conversation:\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:", template_format='f-string', validate_template=True)
    field llm: langchain.base_language.BaseLanguageModel [Required]
    field summary_message_cls: Type[langchain.schema.BaseMessage] = <class 'langchain.schema.SystemMessage'>

    clear() → None
        Clear memory contents.
    get_current_entities(input_string: str) → List[str]
    get_knowledge_triplets(input_string: str) → List[langchain.graphs.networkx_graph.KnowledgeTriple]
    load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any]
        Return history buffer.
    save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None
        Save context from this conversation to buffer.
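A minimal sketch of ConversationKGMemory used directly (an LLM is required because both saving and loading run the extraction prompts above):

    from langchain.llms import OpenAI
    from langchain.memory import ConversationKGMemory

    llm = OpenAI(temperature=0)
    memory = ConversationKGMemory(llm=llm)
    memory.save_context({"input": "Sam is my neighbor"}, {"output": "okay, noted"})
    # Loading extracts entities from the new input and queries the graph for them:
    print(memory.load_memory_variables({"input": "who is Sam?"}))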
pydantic model langchain.memory.ConversationStringBufferMemory
    Buffer for storing conversation memory.

    field ai_prefix: str = 'AI'
        Prefix to use for AI generated responses.
    field buffer: str = ''
    field human_prefix: str = 'Human'
    field input_key: Optional[str] = None
    field output_key: Optional[str] = None

    clear() → None
        Clear memory contents.
    load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str]
        Return history buffer.
    save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None
        Save context from this conversation to buffer.
    property memory_variables: List[str]
        Will always return list of memory variables.

pydantic model langchain.memory.ConversationSummaryBufferMemory
    Buffer with summarizer for storing conversation memory.

    field max_token_limit: int = 2000
    field memory_key: str = 'history'
    field moving_summary_buffer: str = ''

    clear() → None
        Clear memory contents.
    load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any]
        Return history buffer.
    prune() → None
        Prune buffer if it exceeds max token limit.
    save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None
        Save context from this conversation to buffer.
    property buffer: List[langchain.schema.BaseMessage]

pydantic model langchain.memory.ConversationSummaryMemory
    Conversation summarizer to memory.

    field buffer: str = ''

    clear() → None
        Clear memory contents.
    classmethod from_messages(llm: langchain.base_language.BaseLanguageModel, chat_memory: langchain.schema.BaseChatMessageHistory, *, summarize_step: int = 2, **kwargs: Any) → langchain.memory.summary.ConversationSummaryMemory
    load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any]
        Return history buffer.
    save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None
        Save context from this conversation to buffer.

pydantic model langchain.memory.ConversationTokenBufferMemory
    Buffer for storing conversation memory.

    field ai_prefix: str = 'AI'
    field human_prefix: str = 'Human'
    field llm: langchain.base_language.BaseLanguageModel [Required]
    field max_token_limit: int = 2000
    field memory_key: str = 'history'

    load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any]
        Return history buffer.
    save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None
        Save context from this conversation to buffer. Pruned.
    property buffer: List[langchain.schema.BaseMessage]
        String buffer of memory.

class langchain.memory.CosmosDBChatMessageHistory(cosmos_endpoint: str, cosmos_database: str, cosmos_container: str, session_id: str, user_id: str, credential: Any = None, connection_string: Optional[str] = None, ttl: Optional[int] = None, cosmos_client_kwargs: Optional[dict] = None)
    Chat history backed by Azure CosmosDB.

    add_message(message: langchain.schema.BaseMessage) → None
        Add a self-created message to the store.
    clear() → None
        Clear session memory from this memory and cosmos.
    load_messages() → None
        Retrieve the messages from Cosmos.
    prepare_cosmos() → None
        Prepare the CosmosDB client. Use this function or the context manager to make sure your database is ready.
    upsert_messages() → None
        Update the cosmosdb item.

class langchain.memory.DynamoDBChatMessageHistory(table_name: str, session_id: str)
    Chat message history that stores history in AWS DynamoDB. This class expects that a DynamoDB table with name table_name and a partition key of SessionId is present.

    Parameters:
        table_name – name of the DynamoDB table
        session_id – arbitrary key that is used to store the messages of a single chat session

    add_message(message: langchain.schema.BaseMessage) → None
        Append the message to the record in DynamoDB.
    clear() → None
        Clear session memory from DynamoDB.
    property messages: List[langchain.schema.BaseMessage]
        Retrieve the messages from DynamoDB.
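A hedged sketch of DynamoDBChatMessageHistory (it assumes a table named "SessionTable" with a SessionId partition key already exists and AWS credentials are configured; add_user_message and add_ai_message are convenience wrappers inherited from the base chat-history class):

    from langchain.memory import DynamoDBChatMessageHistory

    history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="0")
    history.add_user_message("hi!")       # appends to the DynamoDB record
    history.add_ai_message("whats up?")
    print(history.messages)               # reads the record back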
class langchain.memory.FileChatMessageHistory(file_path: str)
    Chat message history that stores history in a local file.

    Parameters:
        file_path – path of the local file to store the messages

    add_message(message: langchain.schema.BaseMessage) → None
        Append the message to the record in the local file.
    clear() → None
        Clear session memory from the local file.
    property messages: List[langchain.schema.BaseMessage]
        Retrieve the messages from the local file.
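The same pattern works against a local file; a quick sketch (the path is illustrative, and messages are serialized to it as JSON):

    from langchain.memory import FileChatMessageHistory

    history = FileChatMessageHistory(file_path="chat_history.json")
    history.add_user_message("hello")
    history.add_ai_message("hi there")
    print(history.messages)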
pydantic model langchain.memory.InMemoryEntityStore
    Basic in-memory entity store.

    field store: Dict[str, Optional[str]] = {}

    clear() → None
        Delete all entities from store.
    delete(key: str) → None
        Delete entity value from store.
    exists(key: str) → bool
        Check if entity exists in store.
    get(key: str, default: Optional[str] = None) → Optional[str]
        Get entity value from store.
    set(key: str, value: Optional[str]) → None
        Set entity value in store.

class langchain.memory.MomentoChatMessageHistory(session_id: str, cache_client: momento.CacheClient, cache_name: str, *, key_prefix: str = 'message_store:', ttl: Optional[timedelta] = None, ensure_cache_exists: bool = True)
    Chat message history cache that uses Momento as a backend. See https://gomomento.com/

    add_message(message: langchain.schema.BaseMessage) → None
        Store a message in the cache.
        Parameters: message (BaseMessage) – The message object to store.
        Raises: SdkException – Momento service or network error. Exception – Unexpected response.
    clear() → None
        Remove the session’s messages from the cache.
        Raises: SdkException – Momento service or network error. Exception – Unexpected response.
    classmethod from_client_params(session_id: str, cache_name: str, ttl: timedelta, *, configuration: Optional[momento.config.Configuration] = None, auth_token: Optional[str] = None, **kwargs: Any) → MomentoChatMessageHistory
        Construct cache from CacheClient parameters.
    property messages: list[langchain.schema.BaseMessage]
        Retrieve the messages from Momento.
        Raises: SdkException – Momento service or network error. Exception – Unexpected response.
        Returns: List of cached messages.

class langchain.memory.MongoDBChatMessageHistory(connection_string: str, session_id: str, database_name: str = 'chat_history', collection_name: str = 'message_store')
    Chat message history that stores history in MongoDB.

    Parameters:
        connection_string – connection string to connect to MongoDB
        session_id – arbitrary key that is used to store the messages of a single chat session
        database_name – name of the database to use
        collection_name – name of the collection to use

    add_message(message: langchain.schema.BaseMessage) → None
        Append the message to the record in MongoDB.
    clear() → None
        Clear session memory from MongoDB.
    property messages: List[langchain.schema.BaseMessage]
        Retrieve the messages from MongoDB.

class langchain.memory.PostgresChatMessageHistory(session_id: str, connection_string: str = 'postgresql://postgres:mypassword@localhost/chat_history', table_name: str = 'message_store')
    add_message(message: langchain.schema.BaseMessage) → None
        Append the message to the record in PostgreSQL.
    clear() → None
        Clear session memory from PostgreSQL.
    property messages: List[langchain.schema.BaseMessage]
        Retrieve the messages from PostgreSQL.

pydantic model langchain.memory.ReadOnlySharedMemory
    A memory wrapper that is read-only and cannot be changed.

    field memory: langchain.schema.BaseMemory [Required]

    clear() → None
        Nothing to clear, got a memory like a vault.
    load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str]
        Load memory variables from memory.
    save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None
        Nothing should be saved or changed.
    property memory_variables: List[str]
        Return memory variables.
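A minimal sketch of the read-only wrapper: wrap a shared memory before handing it to a secondary chain (for example, a tool’s summarization chain) so that chain can read the history but cannot write to it:

    from langchain.memory import ConversationBufferMemory, ReadOnlySharedMemory

    memory = ConversationBufferMemory(memory_key="chat_history")
    readonly = ReadOnlySharedMemory(memory=memory)
    readonly.load_memory_variables({})  # delegates to the wrapped memory
    # readonly.save_context(...) is deliberately a no-op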
class langchain.memory.RedisChatMessageHistory(session_id: str, url: str = 'redis://localhost:6379/0', key_prefix: str = 'message_store:', ttl: Optional[int] = None)
    add_message(message: langchain.schema.BaseMessage) → None
        Append the message to the record in Redis.
    clear() → None
        Clear session memory from Redis.
    property key: str
        Construct the record key to use.
    property messages: List[langchain.schema.BaseMessage]
        Retrieve the messages from Redis.

pydantic model langchain.memory.RedisEntityStore
    Redis-backed entity store. Entities get a TTL of 1 day by default, and that TTL is extended by 3 days every time the entity is read back.

    field key_prefix: str = 'memory_store'
    field recall_ttl: Optional[int] = 259200
    field redis_client: Any = None
    field session_id: str = 'default'
    field ttl: Optional[int] = 86400

    clear() → None
        Delete all entities from store.
    delete(key: str) → None
        Delete entity value from store.
    exists(key: str) → bool
        Check if entity exists in store.
    get(key: str, default: Optional[str] = None) → Optional[str]
        Get entity value from store.
    set(key: str, value: Optional[str]) → None
        Set entity value in store.
    property full_key_prefix: str

pydantic model langchain.memory.SQLiteEntityStore
    SQLite-backed entity store.

    field session_id: str = 'default'
    field table_name: str = 'memory_store'

    clear() → None
        Delete all entities from store.
    delete(key: str) → None
        Delete entity value from store.
    exists(key: str) → bool
        Check if entity exists in store.
    get(key: str, default: Optional[str] = None) → Optional[str]
        Get entity value from store.
    set(key: str, value: Optional[str]) → None
        Set entity value in store.
    property full_table_name: str

pydantic model langchain.memory.SimpleMemory
    Simple memory for storing context or other bits of information that shouldn’t ever change between prompts.

    field memories: Dict[str, Any] = {}

    clear() → None
        Nothing to clear, got a memory like a vault.
    load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str]
        Return key-value pairs given the text input to the chain. If None, return all memories.
    save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None
        Nothing should be saved or changed, my memory is set in stone.
    property memory_variables: List[str]
        Input keys this memory class will load dynamically.

pydantic model langchain.memory.VectorStoreRetrieverMemory
    Class for a VectorStore-backed memory object.

    field input_key: Optional[str] = None
        Key name to index the inputs to load_memory_variables.
    field memory_key: str = 'history'
        Key name to locate the memories in the result of load_memory_variables.
    field retriever: langchain.vectorstores.base.VectorStoreRetriever [Required]
        VectorStoreRetriever object to connect to.
    field return_docs: bool = False
        Whether or not to return the result of querying the database directly.

    clear() → None
        Nothing to clear.
    load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Union[List[langchain.schema.Document], str]]
        Return history buffer.
    save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None
        Save context from this conversation to buffer.
    property memory_variables: List[str]
        The list of keys emitted from the load_memory_variables method.
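A hedged sketch backed by a FAISS index (the seed text is only a placeholder so the index can be constructed; any VectorStoreRetriever works):

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.memory import VectorStoreRetrieverMemory
    from langchain.vectorstores import FAISS

    vectorstore = FAISS.from_texts(["placeholder"], OpenAIEmbeddings())
    memory = VectorStoreRetrieverMemory(
        retriever=vectorstore.as_retriever(search_kwargs={"k": 1})
    )
    memory.save_context({"input": "My favorite sport is soccer"}, {"output": "noted"})
    # Lookup is similarity-based rather than recency-based:
    print(memory.load_memory_variables({"input": "what sport do I like?"}))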
Agent Toolkits

Agent toolkits.

pydantic model langchain.agents.agent_toolkits.AzureCognitiveServicesToolkit
    Toolkit for Azure Cognitive Services.

    get_tools() → List[langchain.tools.base.BaseTool]
        Get the tools in the toolkit.

pydantic model langchain.agents.agent_toolkits.FileManagementToolkit
    Toolkit for interacting with local files.

    field root_dir: Optional[str] = None
        If specified, all file operations are made relative to root_dir.
    field selected_tools: Optional[List[str]] = None
        If provided, only provide the selected tools. Defaults to all.

    get_tools() → List[langchain.tools.base.BaseTool]
        Get the tools in the toolkit.

pydantic model langchain.agents.agent_toolkits.GmailToolkit
    Toolkit for interacting with Gmail.

    field api_resource: Resource [Optional]

    get_tools() → List[langchain.tools.base.BaseTool]
        Get the tools in the toolkit.

pydantic model langchain.agents.agent_toolkits.JiraToolkit
    Jira Toolkit.

    field tools: List[langchain.tools.base.BaseTool] = []

    classmethod from_jira_api_wrapper(jira_api_wrapper: langchain.utilities.jira.JiraAPIWrapper) → langchain.agents.agent_toolkits.jira.toolkit.JiraToolkit
    get_tools() → List[langchain.tools.base.BaseTool]
        Get the tools in the toolkit.

pydantic model langchain.agents.agent_toolkits.JsonToolkit
    Toolkit for interacting with a JSON spec.

    field spec: langchain.tools.json.tool.JsonSpec [Required]

    get_tools() → List[langchain.tools.base.BaseTool]
        Get the tools in the toolkit.

pydantic model langchain.agents.agent_toolkits.NLAToolkit
    Natural Language API Toolkit Definition.

    field nla_tools: Sequence[langchain.agents.agent_toolkits.nla.tool.NLATool] [Required]
        List of API Endpoint Tools.

    classmethod from_llm_and_ai_plugin(llm: langchain.base_language.BaseLanguageModel, ai_plugin: langchain.tools.plugin.AIPlugin, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit
        Instantiate the toolkit from an AI plugin.
    classmethod from_llm_and_ai_plugin_url(llm: langchain.base_language.BaseLanguageModel, ai_plugin_url: str, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit
        Instantiate the toolkit from an AI plugin URL.
    classmethod from_llm_and_spec(llm: langchain.base_language.BaseLanguageModel, spec: langchain.tools.openapi.utils.openapi_utils.OpenAPISpec, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit
        Instantiate the toolkit by creating tools for each operation.
    classmethod from_llm_and_url(llm: langchain.base_language.BaseLanguageModel, open_api_url: str, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit
        Instantiate the toolkit from an OpenAPI Spec URL.
    get_tools() → List[langchain.tools.base.BaseTool]
        Get the tools for all the API operations.
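A hedged sketch of NLAToolkit: pointing from_llm_and_url at a hosted OpenAPI spec yields one NLATool per operation (the Speak API URL below is the illustrative endpoint used in the LangChain docs; any spec URL works):

    from langchain.agents.agent_toolkits import NLAToolkit
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    speak_toolkit = NLAToolkit.from_llm_and_url(llm, "https://api.speak.com/openapi.yaml")
    tools = speak_toolkit.get_tools()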
pydantic model langchain.agents.agent_toolkits.OpenAPIToolkit
    Toolkit for interacting with an OpenAPI API.

    field json_agent: langchain.agents.agent.AgentExecutor [Required]
    field requests_wrapper: langchain.requests.TextRequestsWrapper [Required]

    classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, json_spec: langchain.tools.json.tool.JsonSpec, requests_wrapper: langchain.requests.TextRequestsWrapper, **kwargs: Any) → langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit
        Create json agent from llm, then initialize.
    get_tools() → List[langchain.tools.base.BaseTool]
        Get the tools in the toolkit.
pydantic model langchain.agents.agent_toolkits.PlayWrightBrowserToolkit
    Toolkit for web browser tools.

    field async_browser: Optional['AsyncBrowser'] = None
    field sync_browser: Optional['SyncBrowser'] = None

    classmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) → PlayWrightBrowserToolkit
        Instantiate the toolkit.
    get_tools() → List[langchain.tools.base.BaseTool]
        Get the tools in the toolkit.

pydantic model langchain.agents.agent_toolkits.PowerBIToolkit
    Toolkit for interacting with a PowerBI dataset.

    field callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None
    field examples: Optional[str] = None
    field llm: langchain.base_language.BaseLanguageModel [Required]
    field max_iterations: int = 5
    field powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]

    get_tools() → List[langchain.tools.base.BaseTool]
        Get the tools in the toolkit.

pydantic model langchain.agents.agent_toolkits.SQLDatabaseToolkit
    Toolkit for interacting with SQL databases.

    field db: langchain.sql_database.SQLDatabase [Required]
    field llm: langchain.base_language.BaseLanguageModel [Required]

    get_tools() → List[langchain.tools.base.BaseTool]
        Get the tools in the toolkit.
    property dialect: str
        Return string representation of dialect to use.

pydantic model langchain.agents.agent_toolkits.SparkSQLToolkit
    Toolkit for interacting with Spark SQL.

    field db: langchain.utilities.spark_sql.SparkSQL [Required]
    field llm: langchain.base_language.BaseLanguageModel [Required]

    get_tools() → List[langchain.tools.base.BaseTool]
        Get the tools in the toolkit.

pydantic model langchain.agents.agent_toolkits.VectorStoreInfo
    Information about a vectorstore.

    field description: str [Required]
    field name: str [Required]
    field vectorstore: langchain.vectorstores.base.VectorStore [Required]

pydantic model langchain.agents.agent_toolkits.VectorStoreRouterToolkit
    Toolkit for routing between vectorstores.

    field llm: langchain.base_language.BaseLanguageModel [Optional]
    field vectorstores: List[langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo] [Required]

    get_tools() → List[langchain.tools.base.BaseTool]
        Get the tools in the toolkit.

pydantic model langchain.agents.agent_toolkits.VectorStoreToolkit
    Toolkit for interacting with a vector store.

    field llm: langchain.base_language.BaseLanguageModel [Optional]
    field vectorstore_info: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo [Required]

    get_tools() → List[langchain.tools.base.BaseTool]
        Get the tools in the toolkit.

pydantic model langchain.agents.agent_toolkits.ZapierToolkit
    Zapier Toolkit.

    field tools: List[langchain.tools.base.BaseTool] = []

    classmethod from_zapier_nla_wrapper(zapier_nla_wrapper: langchain.utilities.zapier.ZapierNLAWrapper) → langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit
        Create a toolkit from a ZapierNLAWrapper.
    get_tools() → List[langchain.tools.base.BaseTool]
        Get the tools in the toolkit.
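Toolkits plug into agents through get_tools(); a hedged sketch using ZapierToolkit (assumes a ZAPIER_NLA_API_KEY environment variable is set):

    from langchain.agents import AgentType, initialize_agent
    from langchain.agents.agent_toolkits import ZapierToolkit
    from langchain.llms import OpenAI
    from langchain.utilities.zapier import ZapierNLAWrapper

    llm = OpenAI(temperature=0)
    zapier = ZapierNLAWrapper()  # reads the API key from the environment
    toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)
    agent = initialize_agent(
        toolkit.get_tools(), llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True,
    )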
langchain.agents.agent_toolkits.create_csv_agent(
    llm: langchain.base_language.BaseLanguageModel,
    path: Union[str, List[str]],
    pandas_kwargs: Optional[dict] = None,
    **kwargs: Any,
) → langchain.agents.agent.AgentExecutor
    Create csv agent by loading to a dataframe and using pandas agent.

langchain.agents.agent_toolkits.create_json_agent(
    llm: langchain.base_language.BaseLanguageModel,
    toolkit: langchain.agents.agent_toolkits.json.toolkit.JsonToolkit,
    callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None,
    prefix: str = 'You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\n',
    suffix: str = 'Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}',
    format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question',
    input_variables: Optional[List[str]] = None,
    verbose: bool = False,
    agent_executor_kwargs: Optional[Dict[str, Any]] = None,
    **kwargs: Dict[str, Any],
) → langchain.agents.agent.AgentExecutor
    Construct a json agent from an LLM and tools.
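A minimal sketch of the JSON agent over an OpenAPI spec loaded as a dict (the YAML file name is illustrative; any large nested dict works):

    import yaml
    from langchain.agents.agent_toolkits import JsonToolkit, create_json_agent
    from langchain.llms import OpenAI
    from langchain.tools.json.tool import JsonSpec

    with open("openai_openapi.yml") as f:
        data = yaml.safe_load(f)
    toolkit = JsonToolkit(spec=JsonSpec(dict_=data, max_value_length=4000))
    agent = create_json_agent(llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True)
    agent.run("What are the required parameters in the request body to the /completions endpoint?")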
langchain.agents.agent_toolkits.create_openapi_agent(
    llm: langchain.base_language.BaseLanguageModel,
    toolkit: langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit,
    callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None,
    prefix: str = "You are an agent designed to answer questions by making web requests to an API given the openapi spec.\n\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\nOnly use information provided by the tools to construct your response.\n\nFirst, find the base URL needed to make the request.\n\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\n\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\n\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\n",
    suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should explore the spec to find the base url for the API.\n{agent_scratchpad}',
    format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question',
    input_variables: Optional[List[str]] = None,
    max_iterations: Optional[int] = 15,
    max_execution_time: Optional[float] = None,
    early_stopping_method: str = 'force',
    verbose: bool = False,
    return_intermediate_steps: bool = False,
    agent_executor_kwargs: Optional[Dict[str, Any]] = None,
    **kwargs: Dict[str, Any],
) → langchain.agents.agent.AgentExecutor
    Construct an OpenAPI agent from an LLM and tools.

langchain.agents.agent_toolkits.create_pandas_dataframe_agent(
    llm: langchain.base_language.BaseLanguageModel,
    df: Any,
    callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None,
    prefix: Optional[str] = None,
    suffix: Optional[str] = None,
    input_variables: Optional[List[str]] = None,
    verbose: bool = False,
    return_intermediate_steps: bool = False,
    max_iterations: Optional[int] = 15,
    max_execution_time: Optional[float] = None,
    early_stopping_method: str = 'force',
    agent_executor_kwargs: Optional[Dict[str, Any]] = None,
    include_df_in_prompt: Optional[bool] = True,
    **kwargs: Dict[str, Any],
) → langchain.agents.agent.AgentExecutor
    Construct a pandas agent from an LLM and dataframe.
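A minimal sketch of the pandas agent (the dataframe is inline, so this runs with just an OpenAI key):

    import pandas as pd
    from langchain.agents.agent_toolkits import create_pandas_dataframe_agent
    from langchain.llms import OpenAI

    df = pd.DataFrame({"name": ["Alice", "Bob"], "age": [30, 40]})
    agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
    agent.run("What is the average age?")  # the agent writes and executes pandas code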
langchain.agents.agent_toolkits.create_pbi_agent(
    llm: langchain.base_language.BaseLanguageModel,
    toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit],
    powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None,
    callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None,
    prefix: str = 'You are an agent designed to help users interact with a PowerBI Dataset.\n\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readible format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n',
    suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\n{agent_scratchpad}',
    format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question',
    examples: Optional[str] = None,
    input_variables: Optional[List[str]] = None,
    top_k: int = 10,
    verbose: bool = False,
    agent_executor_kwargs: Optional[Dict[str, Any]] = None,
    **kwargs: Dict[str, Any],
) → langchain.agents.agent.AgentExecutor
    Construct a Power BI agent from an LLM and tools.
langchain.agents.agent_toolkits.create_pbi_chat_agent(
    llm: langchain.chat_models.base.BaseChatModel,
    toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit],
    powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None,
    callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None,
    output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None,
    prefix: str = 'Assistant is a large language model built to help users interact with a PowerBI Dataset.\n\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readible format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n',
    suffix: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}\n",
    examples: Optional[str] = None,
    input_variables: Optional[List[str]] = None,
    memory: Optional[langchain.memory.chat_memory.BaseChatMemory] = None,
    top_k: int = 10,
    verbose: bool = False,
    agent_executor_kwargs: Optional[Dict[str, Any]] = None,
    **kwargs: Dict[str, Any],
) → langchain.agents.agent.AgentExecutor
    Construct a Power BI agent from a chat LLM and tools. If you supply only a toolkit and no Power BI dataset, the same LLM is used for both.

langchain.agents.agent_toolkits.create_python_agent(
    llm: langchain.base_language.BaseLanguageModel,
    tool: langchain.tools.python.tool.PythonREPLTool,
    callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None,
    verbose: bool = False,
    prefix: str = 'You are an agent designed to write and execute python code to answer questions.\nYou have access to a python REPL, which you can use to execute python code.\nIf you get an error, debug your code and try again.\nOnly use the output of your code to answer the question. \nYou might know the answer without running any code, but you should still run the code to get the answer.\nIf it does not seem like you can write code to answer the question, just return "I don\'t know" as the answer.\n',
    agent_executor_kwargs: Optional[Dict[str, Any]] = None,
    **kwargs: Dict[str, Any],
) → langchain.agents.agent.AgentExecutor
    Construct a python agent from an LLM and tool.

langchain.agents.agent_toolkits.create_spark_dataframe_agent(
    llm: langchain.llms.base.BaseLLM,
    df: Any,
    callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None,
    prefix: str = '\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:',
    suffix: str = '\nThis is the result of `print(df.first())`:\n{df}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}',
    input_variables: Optional[List[str]] = None,
    verbose: bool = False,
    return_intermediate_steps: bool = False,
    max_iterations: Optional[int] = 15,
    max_execution_time: Optional[float] = None,
    early_stopping_method: str = 'force',
    agent_executor_kwargs: Optional[Dict[str, Any]] = None,
    **kwargs: Dict[str, Any],
) → langchain.agents.agent.AgentExecutor
    Construct a spark agent from an LLM and dataframe.
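A hedged sketch of the Spark dataframe agent (assumes a local Spark session; the CSV path is illustrative):

    from pyspark.sql import SparkSession
    from langchain.agents.agent_toolkits import create_spark_dataframe_agent
    from langchain.llms import OpenAI

    spark = SparkSession.builder.getOrCreate()
    df = spark.read.csv("titanic.csv", header=True, inferSchema=True)
    agent = create_spark_dataframe_agent(llm=OpenAI(temperature=0), df=df, verbose=True)
    agent.run("How many rows are in the dataframe?")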
langchain.agents.agent_toolkits.create_spark_sql_agent(
    llm: langchain.base_language.BaseLanguageModel,
    toolkit: langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit,
    callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None,
    prefix: str = 'You are an agent designed to interact with Spark SQL.\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n',
    suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}',
    format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question',
    input_variables: Optional[List[str]] = None,
    top_k: int = 10,
    max_iterations: Optional[int] = 15,
    max_execution_time: Optional[float] = None,
    early_stopping_method: str = 'force',
    verbose: bool = False,
    agent_executor_kwargs: Optional[Dict[str, Any]] = None,
    **kwargs: Dict[str, Any],
) → langchain.agents.agent.AgentExecutor
    Construct a Spark SQL agent from an LLM and tools.
langchain.agents.agent_toolkits.create_sql_agent(
    llm: langchain.base_language.BaseLanguageModel,
    toolkit: langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit,
    callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None,
    prefix: str = 'You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n',
    suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.  Then I should query the schema of the most relevant tables.\n{agent_scratchpad}',
    format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question',
    input_variables: Optional[List[str]] = None,
    top_k: int = 10,
    max_iterations: Optional[int] = 15,
    max_execution_time: Optional[float] = None,
    early_stopping_method: str = 'force',
    verbose: bool = False,
    agent_executor_kwargs: Optional[Dict[str, Any]] = None,
    **kwargs: Dict[str, Any],
) → langchain.agents.agent.AgentExecutor
    Construct a sql agent from an LLM and tools.

langchain.agents.agent_toolkits.create_vectorstore_agent(
    llm: langchain.base_language.BaseLanguageModel,
    toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit,
    callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None,
    prefix: str = 'You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n',
    verbose: bool = False,
    agent_executor_kwargs: Optional[Dict[str, Any]] = None,
    **kwargs: Dict[str, Any],
) → langchain.agents.agent.AgentExecutor
    Construct a vectorstore agent from an LLM and tools.
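A minimal sketch wiring SQLDatabaseToolkit into create_sql_agent above (the SQLite path is illustrative; any SQLAlchemy URI works):

    from langchain.agents.agent_toolkits import SQLDatabaseToolkit, create_sql_agent
    from langchain.llms import OpenAI
    from langchain.sql_database import SQLDatabase

    db = SQLDatabase.from_uri("sqlite:///Chinook.db")  # illustrative database
    llm = OpenAI(temperature=0)
    toolkit = SQLDatabaseToolkit(db=db, llm=llm)
    agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
    agent_executor.run("How many tables are there, and what are they called?")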
langchain.agents.agent_toolkits.create_vectorstore_router_agent(
    llm: langchain.base_language.BaseLanguageModel,
    toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit,
    callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None,
    prefix: str = 'You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n',
    verbose: bool = False,
    agent_executor_kwargs: Optional[Dict[str, Any]] = None,
    **kwargs: Dict[str, Any],
) → langchain.agents.agent.AgentExecutor
    Construct a vectorstore router agent from an LLM and tools.
Document Transformers

Transform documents.

pydantic model langchain.document_transformers.EmbeddingsRedundantFilter
    Filter that drops redundant documents by comparing their embeddings.

    field embeddings: langchain.embeddings.base.Embeddings [Required]
        Embeddings to use for embedding document contents.
    field similarity_fn: Callable = <function cosine_similarity>
        Similarity function for comparing documents. Function expected to take as input two matrices (List[List[float]]) and return a matrix of scores where higher values indicate greater similarity.
    field similarity_threshold: float = 0.95
        Threshold for determining when two documents are similar enough to be considered redundant.

    async atransform_documents(documents: Sequence[langchain.schema.Document], **kwargs: Any) → Sequence[langchain.schema.Document]
        Asynchronously transform a list of documents.
    transform_documents(documents: Sequence[langchain.schema.Document], **kwargs: Any) → Sequence[langchain.schema.Document]
        Filter down documents.

langchain.document_transformers.get_stateful_documents(documents: Sequence[langchain.schema.Document]) → Sequence[langchain.document_transformers._DocumentWithState]
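A minimal sketch of the redundancy filter (requires an embeddings provider; with the default 0.95 threshold the two near-identical documents below should collapse to one):

    from langchain.document_transformers import EmbeddingsRedundantFilter
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.schema import Document

    docs = [
        Document(page_content="LangChain is a framework for LLM applications."),
        Document(page_content="LangChain is a framework for building LLM applications."),
        Document(page_content="Redis is an in-memory data store."),
    ]
    redundant_filter = EmbeddingsRedundantFilter(embeddings=OpenAIEmbeddings())
    filtered = redundant_filter.transform_documents(docs)
    print(len(filtered))  # likely 2: one near-duplicate dropped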
Tools

Core toolkit implementations.

pydantic model langchain.tools.AIPluginTool
    field api_spec: str [Required]
    field args_schema: Type[AIPluginToolSchema] = <class 'langchain.tools.plugin.AIPluginToolSchema'>
        Pydantic model class to validate and parse the tool’s input arguments.
    field plugin: AIPlugin [Required]

    classmethod from_plugin_url(url: str) → langchain.tools.plugin.AIPluginTool

pydantic model langchain.tools.APIOperation
    A model for a single API operation.

    field base_url: str [Required]
        The base URL of the operation.
    field description: Optional[str] = None
        The description of the operation.
    field method: langchain.tools.openapi.utils.openapi_utils.HTTPVerb [Required]
        The HTTP method of the operation.
    field operation_id: str [Required]
        The unique identifier of the operation.
    field path: str [Required]
        The path of the operation.
    field properties: Sequence[langchain.tools.openapi.utils.api_models.APIProperty] [Required]
    field request_body: Optional[langchain.tools.openapi.utils.api_models.APIRequestBody] = None
        The request body of the operation.

    classmethod from_openapi_spec(spec: langchain.tools.openapi.utils.openapi_utils.OpenAPISpec, path: str, method: str) → langchain.tools.openapi.utils.api_models.APIOperation
        Create an APIOperation from an OpenAPI spec.
    classmethod from_openapi_url(spec_url: str, path: str, method: str) → langchain.tools.openapi.utils.api_models.APIOperation
        Create an APIOperation from an OpenAPI URL.
    to_typescript() → str
        Get typescript string representation of the operation.
    static ts_type_from_python(type_: Union[str, Type, tuple, None, enum.Enum]) → str
    property body_params: List[str]
    property path_params: List[str]
    property query_params: List[str]

pydantic model langchain.tools.AzureCogsFormRecognizerTool
    Tool that queries the Azure Cognitive Services Form Recognizer API. In order to set this up, follow instructions at: https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/quickstarts/get-started-sdks-rest-api?view=form-recog-3.0.0&pivots=programming-language-python

pydantic model langchain.tools.AzureCogsImageAnalysisTool
    Tool that queries the Azure Cognitive Services Image Analysis API. In order to set this up, follow instructions at: https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40

pydantic model langchain.tools.AzureCogsSpeech2TextTool
    Tool that queries the Azure Cognitive Services Speech2Text API. In order to set this up, follow instructions at: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-speech-to-text?pivots=programming-language-python

pydantic model langchain.tools.AzureCogsText2SpeechTool
    Tool that queries the Azure Cognitive Services Text2Speech API. In order to set this up, follow instructions at: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-text-to-speech?pivots=programming-language-python

pydantic model langchain.tools.BaseTool
    Interface LangChain tools must implement.

    field args_schema: Optional[Type[pydantic.main.BaseModel]] = None
        Pydantic model class to validate and parse the tool’s input arguments.
    field callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None
        Deprecated. Please use callbacks instead.
    field callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None
        Callbacks to be called during tool execution.
    field description: str [Required]
        Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
    field handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = False
        Handle the content of the ToolException thrown.
    field name: str [Required]
        The unique name of the tool that clearly communicates its purpose.
    field return_direct: bool = False
        Whether to return the tool’s output directly. Setting this to True means that after the tool is called, the AgentExecutor will stop looping.
    field verbose: bool = False
        Whether to log the tool’s progress.

    async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Any
        Run the tool asynchronously.
    run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Any
        Run the tool.
    property args: dict
    property is_single_input: bool
        Whether the tool only accepts a single input.

pydantic model langchain.tools.BingSearchResults
    Tool that queries the Bing Search API and returns the results as JSON.

    field api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]
    field num_results: int = 4

pydantic model langchain.tools.BingSearchRun
    Tool that adds the capability to query the Bing search API.

    field api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]

pydantic model langchain.tools.BraveSearch
    field search_wrapper: BraveSearchWrapper [Required]

    classmethod from_api_key(api_key: str, search_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.tools.brave_search.tool.BraveSearch

pydantic model langchain.tools.ClickTool
    field args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.click.ClickToolInput'>
        Pydantic model class to validate and parse the tool’s input arguments.
    field description: str = 'Click on an element with the given CSS selector'
        Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
    field name: str = 'click_element'
        The unique name of the tool that clearly communicates its purpose.
    field playwright_strict: bool = False
        Whether to employ Playwright’s strict mode when clicking on elements.
    field playwright_timeout: float = 1000
        Timeout (in ms) for Playwright to wait for element to be ready.
    field visible_only: bool = True
        Whether to consider only visible elements.

pydantic model langchain.tools.CopyFileTool
    field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.copy.FileCopyInput'>
        Pydantic model class to validate and parse the tool’s input arguments.
    field description: str = 'Create a copy of a file in a specified location'
        Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
    field name: str = 'copy_file'
        The unique name of the tool that clearly communicates its purpose.

pydantic model langchain.tools.CurrentWebPageTool
    field args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>
        Pydantic model class to validate and parse the tool’s input arguments.
    field description: str = 'Returns the URL of the current page'
        Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
    field name: str = 'current_webpage'
        The unique name of the tool that clearly communicates its purpose.
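A minimal sketch of implementing the BaseTool interface directly: subclasses supply name, description, and the synchronous _run (plus an async _arun twin), while run() remains the public entry point that wires in callbacks:

    from langchain.tools.base import BaseTool

    class WordLengthTool(BaseTool):
        name = "word_length"
        description = "Useful for counting the number of characters in a word."

        def _run(self, tool_input: str) -> str:
            return str(len(tool_input))

        async def _arun(self, tool_input: str) -> str:
            return str(len(tool_input))  # same logic; no real async work here

    print(WordLengthTool().run("hello"))  # -> "5"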
pydantic model langchain.tools.DeleteFileTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.delete.FileDeleteInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Delete a file'#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field name: str = 'file_delete'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.DuckDuckGoSearchResults[source]#
Tool that queries the DuckDuckGo Search API and returns the results as JSON.
field api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]#
field num_results: int = 4#

pydantic model langchain.tools.DuckDuckGoSearchRun[source]#
Tool that adds the capability to query the DuckDuckGo search API.
field api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]#

pydantic model langchain.tools.ExtractHyperlinksTool[source]#
Extract all hyperlinks on the page.
field args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Extract all hyperlinks on the current webpage'#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field name: str = 'extract_hyperlinks'#
The unique name of the tool that clearly communicates its purpose.
static scrape_page(page: Any, html_content: str, absolute_urls: bool) → str[source]#

pydantic model langchain.tools.ExtractTextTool[source]#
field args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Extract all the text on the current webpage'#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field name: str = 'extract_text'#
The unique name of the tool that clearly communicates its purpose.

pydantic model langchain.tools.FileSearchTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.file_search.FileSearchInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Recursively search for files in a subdirectory that match the regex pattern'#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field name: str = 'file_search'#
The unique name of the tool that clearly communicates its purpose.

pydantic model langchain.tools.GetElementsTool[source]#
field args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.get_elements.GetElementsToolInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Retrieve elements in the current web page matching the given CSS selector'#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field name: str = 'get_elements'#
The unique name of the tool that clearly communicates its purpose.

pydantic model langchain.tools.GmailCreateDraft[source]#
field args_schema: Type[langchain.tools.gmail.create_draft.CreateDraftSchema] = <class 'langchain.tools.gmail.create_draft.CreateDraftSchema'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Use this tool to create a draft email with the provided message fields.'#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field name: str = 'create_gmail_draft'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.GmailGetMessage[source]#
field args_schema: Type[langchain.tools.gmail.get_message.SearchArgsSchema] = <class 'langchain.tools.gmail.get_message.SearchArgsSchema'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Use this tool to fetch an email by message ID. Returns the thread ID, snippet, body, subject, and sender.'#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field name: str = 'get_gmail_message'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.GmailGetThread[source]#
field args_schema: Type[langchain.tools.gmail.get_thread.GetThreadSchema] = <class 'langchain.tools.gmail.get_thread.GetThreadSchema'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Use this tool to search for email messages. The input must be a valid Gmail query. The output is a JSON list of messages.'#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field name: str = 'get_gmail_thread'#
The unique name of the tool that clearly communicates its purpose.

pydantic model langchain.tools.GmailSearch[source]#
field args_schema: Type[langchain.tools.gmail.search.SearchArgsSchema] = <class 'langchain.tools.gmail.search.SearchArgsSchema'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Use this tool to search for email messages or threads. The input must be a valid Gmail query. The output is a JSON list of the requested resource.'#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field name: str = 'search_gmail'#
The unique name of the tool that clearly communicates its purpose.

pydantic model langchain.tools.GmailSendMessage[source]#
field description: str = 'Use this tool to send email messages. The input is the message, recipients'#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field name: str = 'send_gmail_message'#
The unique name of the tool that clearly communicates its purpose.

pydantic model langchain.tools.GooglePlacesTool[source]#
Tool that adds the capability to query the Google Places API.
field api_wrapper: langchain.utilities.google_places_api.GooglePlacesAPIWrapper [Optional]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.google_places.tool.GooglePlacesSchema'>#
Pydantic model class to validate and parse the tool’s input arguments.

pydantic model langchain.tools.GoogleSearchResults[source]#
Tool that queries the Google Search API and returns the results as JSON.
field api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]#
field num_results: int = 4#

pydantic model langchain.tools.GoogleSearchRun[source]#
Tool that adds the capability to query the Google search API.
field api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]#

pydantic model langchain.tools.GoogleSerperResults[source]#
Tool that queries the Serper.dev Google Search API and returns the results as JSON.
field api_wrapper: langchain.utilities.google_serper.GoogleSerperAPIWrapper [Optional]#

pydantic model langchain.tools.GoogleSerperRun[source]#
Tool that adds the capability to query the Serper.dev Google search API.
field api_wrapper: langchain.utilities.google_serper.GoogleSerperAPIWrapper [Required]#

pydantic model langchain.tools.HumanInputRun[source]#
Tool that adds the capability to ask the user for input.
field input_func: Callable [Optional]#
field prompt_func: Callable[[str], None] [Optional]#

pydantic model langchain.tools.IFTTTWebhook[source]#
IFTTT Webhook.
Parameters:
name – name of the tool
description – description of the tool
url – URL to hit with the JSON event
field url: str [Required]#

pydantic model langchain.tools.InfoPowerBITool[source]#
Tool for getting metadata about a PowerBI Dataset.
field powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]#

pydantic model langchain.tools.ListDirectoryTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.list_dir.DirectoryListingInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'List files and directories in a specified folder'#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field name: str = 'list_directory'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.ListPowerBITool[source]#
Tool for getting table names.
field powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]#

pydantic model langchain.tools.MetaphorSearchResults[source]#
Tool that queries the Metaphor Search API and returns the results as JSON.
field api_wrapper: langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper [Required]#

pydantic model langchain.tools.MoveFileTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.move.FileMoveInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Move or rename a file from one location to another'#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field name: str = 'move_file'#
The unique name of the tool that clearly communicates its purpose.

pydantic model langchain.tools.NavigateBackTool[source]#
Navigate back to the previous page in the browser history.
field args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Navigate back to the previous page in the browser history'#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field name: str = 'previous_webpage'#
The unique name of the tool that clearly communicates its purpose.

pydantic model langchain.tools.NavigateTool[source]#
field args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.navigate.NavigateToolInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Navigate a browser to the specified URL'#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field name: str = 'navigate_browser'#
The unique name of the tool that clearly communicates its purpose.

pydantic model langchain.tools.OpenAPISpec[source]#
OpenAPI model that removes misformatted parts of the spec.
classmethod from_file(path: Union[str, pathlib.Path]) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
Get an OpenAPI spec from a file path.
classmethod from_spec_dict(spec_dict: dict) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
Get an OpenAPI spec from a dict.
classmethod from_text(text: str) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
Get an OpenAPI spec from text.
classmethod from_url(url: str) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
Get an OpenAPI spec from a URL.
static get_cleaned_operation_id(operation: openapi_schema_pydantic.v3.v3_1_0.operation.Operation, path: str, method: str) → str[source]#
Get a cleaned operation id from an operation id.
get_methods_for_path(path: str) → List[str][source]#
Return a list of valid methods for the specified path.
get_operation(path: str, method: str) → openapi_schema_pydantic.v3.v3_1_0.operation.Operation[source]#
Get the operation object for a given path and HTTP method.
get_parameters_for_operation(operation: openapi_schema_pydantic.v3.v3_1_0.operation.Operation) → List[openapi_schema_pydantic.v3.v3_1_0.parameter.Parameter][source]#
Get the parameters for a given operation.
get_referenced_schema(ref: openapi_schema_pydantic.v3.v3_1_0.reference.Reference) → openapi_schema_pydantic.v3.v3_1_0.schema.Schema[source]#
Get a schema (or nested reference) or err.
get_request_body_for_operation(operation: openapi_schema_pydantic.v3.v3_1_0.operation.Operation) → Optional[openapi_schema_pydantic.v3.v3_1_0.request_body.RequestBody][source]#
Get the request body for a given operation.
classmethod parse_obj(obj: dict) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
property base_url: str#
Get the base url.
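To illustrate the OpenAPISpec helpers above, a minimal sketch; the Swagger Petstore URL is only an illustrative public spec, and the /pet path is an assumption about its contents:

from langchain.tools import OpenAPISpec

# Load a spec from a URL and inspect one of its operations.
spec = OpenAPISpec.from_url("https://petstore3.swagger.io/api/v3/openapi.json")
methods = spec.get_methods_for_path("/pet")      # valid HTTP methods for the path
operation = spec.get_operation("/pet", "post")   # the Operation object itself
params = spec.get_parameters_for_operation(operation)
body = spec.get_request_body_for_operation(operation)
print(spec.base_url, methods)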
pydantic model langchain.tools.OpenWeatherMapQueryRun[source]#
Tool that adds the capability to query using the OpenWeatherMap API.
field api_wrapper: langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper [Optional]#

pydantic model langchain.tools.PubmedQueryRun[source]#
Tool that adds the capability to search using the PubMed API.
field api_wrapper: langchain.utilities.pupmed.PubMedAPIWrapper [Optional]#

pydantic model langchain.tools.QueryPowerBITool[source]#
Tool for querying a Power BI Dataset.
Validators:
raise_deprecation » all fields
validate_llm_chain_input_variables » llm_chain
field examples: Optional[str] = '\nQuestion: How many rows are in the table <table>?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(<table>))\n----\nQuestion: How many rows are in the table <table> where <column> is not empty?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(FILTER(<table>, <table>[<column>] <> "")))\n----\nQuestion: What was the average of <column> in <table>?\nDAX: EVALUATE ROW("Average", AVERAGE(<table>[<column>]))\n----\n'#
field llm_chain: langchain.chains.llm.LLMChain [Required]#
field max_iterations: int = 5#
field powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]#
field session_cache: Dict[str, Any] [Optional]#
field template: Optional[str] = '\nAnswer the question below with a DAX query that can be sent to Power BI. DAX queries have a simple syntax comprised of just one required keyword, EVALUATE, and several optional keywords: ORDER BY, START AT, DEFINE, MEASURE, VAR, TABLE, and COLUMN. Each keyword defines a statement used for the duration of the query. Any time < or > are used in the text below it means that those values need to be replaced by table, columns or other things. If the question is not something you can answer with a DAX query, reply with "I cannot answer this" and the question will be escalated to a human.\n\nSome DAX functions return a table instead of a scalar, and must be wrapped in a function that evaluates the table and returns a scalar; unless the table is a single column, single row table, then it is treated as a scalar value. Most DAX functions require one or more arguments, which can include tables, columns, expressions, and values. However, some functions, such as PI, do not require any arguments, but always require parentheses to indicate the null argument. For example, you must always type PI(), not PI. You can also nest functions within other functions. \n\nSome commonly used functions are:\nEVALUATE <table> - At the most basic level, a DAX query is an EVALUATE statement containing a table expression. At least one EVALUATE statement is required, however, a query can contain any number of EVALUATE statements.\nEVALUATE <table> ORDER BY <expression> ASC or DESC - The optional ORDER BY keyword defines one or more expressions used to sort query results. Any expression that can be evaluated for each row of the result is valid.\nEVALUATE <table> ORDER BY <expression> ASC or DESC START AT <value> or <parameter> - The optional START AT keyword is used inside an ORDER BY clause. It defines the value at which the query results begin.\nDEFINE MEASURE | VAR; EVALUATE <table> - The optional DEFINE keyword introduces one or more calculated entity definitions that exist only for the duration of the query. Definitions precede the EVALUATE statement and are valid for all EVALUATE statements in the query. Definitions can be variables, measures, tables1, and columns1. Definitions can reference other definitions that appear before or after the current definition. At least one definition is required if the DEFINE keyword is included in a query.\nMEASURE <table name>[<measure name>] = <scalar expression> - Introduces a measure definition in a DEFINE statement of a DAX query.\nVAR <name> = <expression> - Stores the result of an expression as a named variable, which can then be passed as an argument to other measure expressions. Once resultant values have been calculated for a variable expression, those values do not change, even if the variable is referenced in another expression.\n\nFILTER(<table>,<filter>) - Returns a table that represents a subset of another table or expression, where <filter> is a Boolean expression that is to be evaluated for each row of the table. For example, [Amount] > 0 or [Region] = "France"\nROW(<name>, <expression>) - Returns a table with a single row containing values that result from the expressions given to each column.\nDISTINCT(<column>) - Returns a one-column table that contains the distinct values from the specified column. In other words, duplicate values are removed and only unique values are returned. 
This function cannot be used to return values into a cell or column on a worksheet; rather, you nest the DISTINCT function within a formula, to get a list of distinct values that can be passed to another function and then counted, summed, or used for other operations.\nDISTINCT(<table>) - Returns a table by removing duplicate rows from another table or expression.\n\nAggregation functions, names with an A in it, handle booleans and empty strings in appropriate ways, while the same function without A only uses the numeric values in a column. Function names with an X in it can include an expression as an argument, this will be evaluated for each row in the table and the result will be used in the regular function calculation, these are the functions:\nCOUNT(<column>), COUNTA(<column>), COUNTX(<table>,<expression>), COUNTAX(<table>,<expression>), COUNTROWS([<table>]), COUNTBLANK(<column>), DISTINCTCOUNT(<column>), DISTINCTCOUNTNOBLANK (<column>) - these are all variations of count functions.\nAVERAGE(<column>), AVERAGEA(<column>),
AVERAGEX(<table>,<expression>) - these are all variations of average functions.\nMAX(<column>), MAXA(<column>), MAXX(<table>,<expression>) - these are all variations of max functions.\nMIN(<column>), MINA(<column>), MINX(<table>,<expression>) - these are all variations of min functions.\nPRODUCT(<column>), PRODUCTX(<table>,<expression>) - these are all variations of product functions.\nSUM(<column>), SUMX(<table>,<expression>) - these are all variations of sum functions.\n\nDate and time functions:\nDATE(year, month, day) - Returns a date value that represents the specified year, month, and day.\nDATEDIFF(date1, date2, <interval>) - Returns the difference between two date values, in the specified interval, that can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR.\nDATEVALUE(<date_text>) - Returns a date value that represents the specified date.\nYEAR(<date>), QUARTER(<date>), MONTH(<date>), DAY(<date>), HOUR(<date>), MINUTE(<date>), SECOND(<date>) - Returns the part of the date for the specified date.\n\nFinally, make sure to escape double quotes with a single backslash, and make sure that only table names have single quotes around them, while names of measures or the values of columns that you want to compare against are in escaped double quotes. Newlines are not necessary and can be skipped. The queries are serialized as JSON and so will have to be compliant with JSON syntax. Sometimes you will get a question, a DAX query and an error, in that case you need to rewrite the DAX query to get the correct answer.\n\nThe following tables exist: {tables}\n\nand the schemas for some are given here:\n{schemas}\n\nExamples:\n{examples}\n\nQuestion: {tool_input}\nDAX: \n'#
pydantic model langchain.tools.ReadFileTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.read.ReadFileInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Read file from disk'#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field name: str = 'read_file'#
The unique name of the tool that clearly communicates its purpose.

pydantic model langchain.tools.SceneXplainTool[source]#
Tool that adds the capability to explain images.
field api_wrapper: langchain.utilities.scenexplain.SceneXplainAPIWrapper [Optional]#

pydantic model langchain.tools.ShellTool[source]#
Tool to run shell commands.
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.shell.tool.ShellInput'>#
Schema for input arguments.
field description: str = 'Run shell commands on this Linux machine.'#
Description of tool.
field name: str = 'terminal'#
Name of tool.
field process: langchain.utilities.bash.BashProcess [Optional]#
Bash process to run commands.

pydantic model langchain.tools.SteamshipImageGenerationTool[source]#
field model_name: ModelName [Required]#
field return_urls: Optional[bool] = False#
field size: Optional[str] = '512x512'#
field steamship: Steamship [Required]#

pydantic model langchain.tools.StructuredTool[source]#
Tool that can operate on any number of inputs.
field args_schema: Type[pydantic.main.BaseModel] [Required]#
The input arguments’ schema.
field coroutine: Optional[Callable[[...], Awaitable[Any]]] = None#
The asynchronous version of the function.
field description: str = ''#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field func: Callable[[...], Any] [Required]#
The function to run when the tool is called.
classmethod from_function(func: Callable, name: Optional[str] = None, description: Optional[str] = None, return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, infer_schema: bool = True, **kwargs: Any) → langchain.tools.base.StructuredTool[source]#
property args: dict#
The tool’s input arguments.

pydantic model langchain.tools.Tool[source]#
Tool that takes in a function or coroutine directly.
field args_schema: Optional[Type[pydantic.main.BaseModel]] = None#
Pydantic model class to validate and parse the tool’s input arguments.
field callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None#
Deprecated. Please use callbacks instead.
field callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None#
Callbacks to be called during tool execution.
field coroutine: Optional[Callable[[...], Awaitable[str]]] = None#
The asynchronous version of the function.
field description: str = ''#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field func: Callable[[...], str] [Required]#
The function to run when the tool is called.
field handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = False#
Handle the content of the ToolException thrown.
field name: str [Required]#
The unique name of the tool that clearly communicates its purpose.
field return_direct: bool = False#
Whether to return the tool’s output directly.
Setting this to True means that after the tool is called, the AgentExecutor will stop looping.
field verbose: bool = False#
Whether to log the tool’s progress.
classmethod from_function(func: Callable, name: str, description: str, return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, **kwargs: Any) → langchain.tools.base.Tool[source]#
Initialize tool from a function.
property args: dict#
The tool’s input arguments.
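A minimal sketch of building tools from plain functions with the from_function constructors documented above; word_count and repeat are hypothetical stand-in functions:

from langchain.tools import StructuredTool, Tool

# A single-input function wrapped as a Tool (hypothetical example).
def word_count(text: str) -> str:
    """Count the words in the input text."""
    return str(len(text.split()))

simple = Tool.from_function(
    func=word_count,
    name="word_count",
    description="Counts the words in the given text.",
)

# A multi-input function wrapped as a StructuredTool; the args schema
# is inferred from the type-annotated signature and the docstring.
def repeat(text: str, times: int) -> str:
    """Repeat the text the given number of times."""
    return text * times

structured = StructuredTool.from_function(repeat)
print(structured.args)  # schema inferred from the signature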
pydantic model langchain.tools.VectorStoreQATool[source]#
Tool for the VectorDBQA chain. To be initialized with a name and chain.
static get_description(name: str, description: str) → str[source]#

pydantic model langchain.tools.VectorStoreQAWithSourcesTool[source]#
Tool for the VectorDBQAWithSources chain.
static get_description(name: str, description: str) → str[source]#

pydantic model langchain.tools.WikipediaQueryRun[source]#
Tool that adds the capability to search using the Wikipedia API.
field api_wrapper: langchain.utilities.wikipedia.WikipediaAPIWrapper [Required]#

pydantic model langchain.tools.WolframAlphaQueryRun[source]#
Tool that adds the capability to query using the Wolfram Alpha SDK.
field api_wrapper: langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper [Required]#

pydantic model langchain.tools.WriteFileTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.write.WriteFileInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Write file to disk'#
Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description.
field name: str = 'write_file'#
The unique name of the tool that clearly communicates its purpose.

pydantic model langchain.tools.YouTubeSearchTool[source]#

pydantic model langchain.tools.ZapierNLAListActions[source]#
Returns a list of all exposed (enabled) actions associated with the current user (associated with the set api_key). Change your exposed actions here: https://nla.zapier.com/demo/start/
The returned list can be empty if no actions are exposed; otherwise it will contain a list of action objects:
[{“id”: str, “description”: str, “params”: Dict[str, str]}]
params will always contain an instructions key, the only required param. All others are optional and, if provided, will override any AI guesses (see “understanding the AI guessing flow” here: https://nla.zapier.com/api/v1/docs)
field api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]#

pydantic model langchain.tools.ZapierNLARunAction[source]#
Executes an action that is identified by action_id, which must be exposed (enabled) by the current user (associated with the set api_key). Change your exposed actions here: https://nla.zapier.com/demo/start/
The returned JSON is guaranteed to be less than ~500 words (350 tokens), making it safe to inject into the prompt of another LLM call.
Parameters:
action_id – a specific action ID (from list actions) of the action to execute (the set api_key must be associated with the action owner)
instructions – a natural language instruction string for using the action (e.g. “get the latest email from Mike Knoop” for the “Gmail: find email” action)
params – a dict, optional. Any params provided will override AI guesses from instructions (see “understanding the AI guessing flow” here: https://nla.zapier.com/api/v1/docs)
field action_id: str [Required]#
field api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]#
field base_prompt: str = 'A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction.
For example, if the params are [\'Message_Text\', \'Channel\'], your instruction should be something like \'send a slack message to the #general channel with the text hello world\'. Another example: if the params are [\'Calendar\', \'Search_Term\'], your instruction should be something like \'find the meeting in my personal calendar at 3pm\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \'not enough information provided in the instruction, missing <param>\'. If you get a none or null response, STOP EXECUTION, do not try another tool! This tool is specifically used for: {zapier_description}, and has params: {params}'#
field params: Optional[dict] = None#
field params_schema: Dict[str, str] [Optional]#
field zapier_description: str [Required]#

langchain.tools.tool(*args: Union[str, Callable], return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, infer_schema: bool = True) → Callable[source]#
Make tools out of functions; can be used with or without arguments.
Parameters:
*args – The arguments to the tool.
return_direct – Whether to return directly from the tool rather than continuing the agent loop.
args_schema – optional argument schema for the user to specify
infer_schema – Whether to infer the schema of the arguments from the function’s signature. This also makes the resultant tool accept a dictionary input to its run() function.
Requires:
Function must be of type (str) -> str
Function must have a docstring
Examples:

@tool
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return

@tool("search", return_direct=True)
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return
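For completeness, a brief usage sketch of a decorated tool; the function body returning a fixed string is a hypothetical stand-in:

from langchain.tools import tool

@tool
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return f"results for {query}"  # hypothetical stand-in body

# The decorator returns a tool, so the usual tool interface applies.
print(search_api.name)         # "search_api", taken from the function name
print(search_api.description)  # derived from the signature and docstring
print(search_api.run("langchain"))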
https://langchain.readthedocs.io/en/latest/reference/modules/tools.html
Models#
Note: Conceptual Guide
This section of the documentation deals with different types of models that are used in LangChain. On this page we will go over the model types at a high level, but we have individual pages for each model type. The pages contain more detailed “how-to” guides for working with that model, as well as a list of different model providers.
Getting Started: An overview of the models.
Model Types#
LLMs: Large Language Models (LLMs) take a text string as input and return a text string as output.
Chat Models: Chat Models are usually backed by a language model, but their APIs are more structured. Specifically, these models take a list of Chat Messages as input, and return a Chat Message.
Text Embedding Models: Text embedding models take text as input and return a list of floats.
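A minimal sketch of the three interfaces, assuming an OpenAI API key is configured in the environment:

from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import HumanMessage

llm = OpenAI()                       # text in -> text out
text = llm.predict("Name a color.")

chat = ChatOpenAI()                  # messages in -> message out
msg = chat([HumanMessage(content="Name a color.")])

embeddings = OpenAIEmbeddings()      # text in -> list of floats out
vector = embeddings.embed_query("Name a color.")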
https://langchain.readthedocs.io/en/latest/modules/models.html
Prompts#
Note: Conceptual Guide
The new way of programming models is through prompts. A prompt refers to the input to the model. This input is often constructed from multiple components. A PromptTemplate is responsible for the construction of this input. LangChain provides several classes and functions to make constructing and working with prompts easy.
Getting Started: An overview of the prompts.
LLM Prompt Templates: How to use PromptTemplates to prompt Language Models.
Chat Prompt Templates: How to use PromptTemplates to prompt Chat Models.
Example Selectors: Oftentimes it is useful to include examples in prompts. These examples can be dynamically selected. This section goes over example selection.
Output Parsers: Language models (and Chat Models) output text. But many times you may want more structured information. Output parsers are responsible for instructing the model how output should be formatted and for parsing output into the desired format (including retrying if necessary).
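A minimal sketch of constructing a prompt with a PromptTemplate:

from langchain import PromptTemplate

# The template declares its variables; format() fills them in.
template = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
print(template.format(product="colorful socks"))
# -> What is a good name for a company that makes colorful socks?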
https://langchain.readthedocs.io/en/latest/modules/prompts.html
Indexes#
Note: Conceptual Guide
Indexes refer to ways to structure documents so that LLMs can best interact with them. The most common way that indexes are used in chains is in a “retrieval” step. This step refers to taking a user’s query and returning the most relevant documents. We draw this distinction because (1) an index can be used for other things besides retrieval, and (2) retrieval can use other logic besides an index to find relevant documents. We therefore have a concept of a Retriever interface - this is the interface that most chains work with.
Most of the time when we talk about indexes and retrieval we are talking about indexing and retrieving unstructured data (like text documents). For interacting with structured data (SQL tables, etc.) or APIs, please see the corresponding use case sections for links to relevant functionality.
Getting Started: An overview of the indexes.
Index Types#
Document Loaders: How to load documents from a variety of sources.
Text Splitters: An overview and different types of the Text Splitters.
VectorStores: An overview and different types of the Vector Stores.
Retrievers: An overview and different types of the Retrievers.
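A minimal sketch of the retrieval step described above, assuming an OpenAI API key is configured and the faiss package is installed for the in-memory vector store:

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Index a few documents, then retrieve the most relevant ones for a query.
texts = [
    "LangChain provides a Retriever interface.",
    "Indexes structure documents for retrieval.",
]
vectorstore = FAISS.from_texts(texts, OpenAIEmbeddings())
retriever = vectorstore.as_retriever()
relevant = retriever.get_relevant_documents("What is a retriever?")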
https://langchain.readthedocs.io/en/latest/modules/indexes.html
Chains#
Note: Conceptual Guide
Using an LLM in isolation is fine for some simple applications, but more complex applications require chaining LLMs - either with each other or with other experts. LangChain provides a standard interface for Chains, as well as several common implementations of chains.
Getting Started: An overview of chains.
How-To Guides: How-to guides about various types of chains.
Reference: API reference documentation for all Chain classes.
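A minimal sketch of the standard Chain interface using LLMChain, assuming an OpenAI API key is configured:

from langchain import LLMChain, OpenAI, PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
# A chain combines a model with a prompt; run() formats and calls the LLM.
chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)
print(chain.run("colorful socks"))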
https://langchain.readthedocs.io/en/latest/modules/chains.html
Agents#
Note: Conceptual Guide
Some applications require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user’s input. In these types of chains, there is an agent which has access to a suite of tools. Depending on the user input, the agent can then decide which, if any, of these tools to call.
At the moment, there are two main types of agents:
Action Agents: these agents decide the actions to take and execute those actions one at a time.
Plan-and-Execute Agents: these agents first decide a plan of actions to take, and then execute those actions one at a time.
When should you use each one? Action Agents are more conventional, and good for small tasks. For more complex or long-running tasks, the initial planning step helps to maintain long-term objectives and focus. However, that comes at the expense of generally more calls and higher latency. These two agents are also not mutually exclusive - in fact, it is often best to have an Action Agent be in charge of the execution for the Plan-and-Execute agent.
Action Agents#
High-level pseudocode of Action Agents:
The user input is received.
The agent decides which tool - if any - to use, and what the tool input should be.
That tool is then called with the tool input, and an observation is recorded (the output of this call).
That history of tool, tool input, and observation is passed back into the agent, and it decides the next step.
This is repeated until the agent decides it no longer needs to use a tool, and then it responds directly to the user.
The different abstractions involved in agents are:
Agent: this is where the logic of the application lives. Agents expose an interface that takes in user input along with a list of previous steps the agent has taken, and returns either an AgentAction or AgentFinish. AgentAction corresponds to the tool to use and the input to that tool; AgentFinish means the agent is done, and has information around what to return to the user.
Tools: these are the actions an agent can take. Which tools you give an agent highly depends on what you want the agent to do.
Toolkits: these are groups of tools designed for a specific use case. For example, in order for an agent to interact with a SQL database in the best way it may need access to one tool to execute queries and another tool to inspect tables.
Agent Executor: this wraps an agent and a list of tools. It is responsible for the loop of running the agent iteratively until the stopping criteria are met.
Getting Started: An overview of agents. It covers how to use all things related to agents in an end-to-end manner.
Agent Construction: Although an agent can be constructed in many ways, the typical way to construct an agent is with:
PromptTemplate: this is responsible for taking the user input and previous steps and constructing a prompt to send to the language model.
Language Model: this takes the prompt constructed by the PromptTemplate and returns some output.
Output Parser: this takes the output of the Language Model and parses it into an AgentAction or AgentFinish object.
Additional Documentation:
Tools: Different types of tools LangChain supports natively. We also cover how to add your own tools.
Agents: Different types of agents LangChain supports natively. We also cover how to modify and create your own agents.
Toolkits: Various toolkits that LangChain supports out of the box, and how to create an agent from them.
Agent Executor: The Agent Executor class, which is responsible for calling the agent and tools in a loop. We go over different ways to customize this, and options you can use for more control.
Plan-and-Execute Agents#
High-level pseudocode of Plan-and-Execute Agents:
The user input is received.
The planner lists out the steps to take.
The executor goes through the list of steps, executing them.
The most typical implementation is to have the planner be a language model, and the executor be an action agent.
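A minimal sketch of the action-agent loop described above, assuming OpenAI and SerpAPI keys are configured; the tool selection here is illustrative:

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
# serpapi needs a SERPAPI_API_KEY; llm-math is a calculator tool.
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is the population of France raised to the 0.1 power?")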
https://langchain.readthedocs.io/en/latest/modules/agents.html
Memory#
Note: Conceptual Guide
By default, Chains and Agents are stateless, meaning that they treat each incoming query independently (as are the underlying LLMs and chat models). In some applications (chatbots being a GREAT example) it is highly important to remember previous interactions, at both a short-term and a long-term level. Memory does exactly that.
LangChain provides memory components in two forms. First, LangChain provides helper utilities for managing and manipulating previous chat messages. These are designed to be modular and useful regardless of how they are used. Secondly, LangChain provides easy ways to incorporate these utilities into chains.
Getting Started: An overview of different types of memory.
How-To Guides: A collection of how-to guides. These highlight different types of memory, as well as how to use memory in chains.
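A minimal sketch of incorporating memory into a chain, using ConversationBufferMemory with a ConversationChain and assuming an OpenAI API key is configured:

from langchain import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),  # stores prior turns in the prompt
)
conversation.predict(input="Hi, my name is Ada.")
conversation.predict(input="What is my name?")  # answerable thanks to memory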
https://langchain.readthedocs.io/en/latest/modules/memory.html
LLMs#
Note: Conceptual Guide
Large Language Models (LLMs) are a core component of LangChain. LangChain is not a provider of LLMs, but rather provides a standard interface through which you can interact with a variety of LLMs.
The following sections of documentation are provided:
Getting Started: An overview of all the functionality the LangChain LLM class provides.
How-To Guides: A collection of how-to guides. These highlight how to accomplish various objectives with our LLM class (streaming, async, etc.).
Integrations: A collection of examples on how to integrate different LLM providers with LangChain (OpenAI, Hugging Face, etc.).
Reference: API reference documentation for all LLM classes.
https://langchain.readthedocs.io/en/latest/modules/models/llms.html
Text Embedding Models#
Note: Conceptual Guide
This documentation goes over how to use the Embedding class in LangChain.
The Embedding class is a class designed for interfacing with embeddings. There are lots of embedding providers (OpenAI, Cohere, Hugging Face, etc.) - this class is designed to provide a standard interface for all of them.
Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search, where we look for pieces of text that are most similar in the vector space.
The base Embedding class in LangChain exposes two methods: embed_documents and embed_query. The largest difference is that these two methods have different interfaces: one works over multiple documents, while the other works over a single document. Besides this, another reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs. queries (the search query itself). A short sketch of both methods follows the list of integrations below.
The following integrations exist for text embeddings:
Aleph Alpha
Amazon Bedrock
Azure OpenAI
Cohere
DeepInfra
Elasticsearch
Fake Embeddings
Google Vertex AI PaLM
Hugging Face Hub
HuggingFace Instruct
Jina
Llama-cpp
MiniMax
ModelScope
MosaicML
OpenAI
SageMaker Endpoint
Self Hosted Embeddings
Sentence Transformers
Tensorflow Hub
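A minimal sketch of the two methods, using the OpenAI integration as an example and assuming an OpenAI API key is configured:

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# Embed a batch of documents to be searched over...
doc_vectors = embeddings.embed_documents(["hello world", "goodbye world"])
# ...and embed a single query to search with.
query_vector = embeddings.embed_query("greeting")
print(len(doc_vectors), len(query_vector))  # 2 document vectors; one query vector of floats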
https://langchain.readthedocs.io/en/latest/modules/models/text_embedding.html
Getting Started#
One of the core value props of LangChain is that it provides a standard interface to models. This allows you to swap easily between models. At a high level, there are two main types of models:
Language Models: good for text generation
Text Embedding Models: good for turning text into a numerical representation
Language Models#
There are two different sub-types of Language Models:
LLMs: these wrap APIs which take text in and return text
ChatModels: these wrap models which take chat messages in and return a chat message
This is a subtle difference, but a value prop of LangChain is that we provide a unified interface across these. This is nice because although the underlying APIs are actually quite different, you often want to use them interchangeably.
To see this, let’s look at OpenAI (a wrapper around OpenAI’s LLM) vs ChatOpenAI (a wrapper around OpenAI’s ChatModel).
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
llm = OpenAI()
chat_model = ChatOpenAI()
text -> text interface#
llm.predict("say hi!")
'\n\nHi there!'
chat_model.predict("say hi!")
'Hello there!'
messages -> message interface#
from langchain.schema import HumanMessage
llm.predict_messages([HumanMessage(content="say hi!")])
AIMessage(content='\n\nHello! Nice to meet you!', additional_kwargs={}, example=False)
chat_model.predict_messages([HumanMessage(content="say hi!")])
AIMessage(content='Hello! How can I assist you today?', additional_kwargs={}, example=False)
https://langchain.readthedocs.io/en/latest/modules/models/getting_started.html
Chat Models#
Note: Conceptual Guide
Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different. Rather than expose a “text in, text out” API, they expose an interface where “chat messages” are the inputs and outputs.
Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
The following sections of documentation are provided:
Getting Started: An overview of all the functionality the LangChain chat model class provides.
How-To Guides: A collection of how-to guides. These highlight how to accomplish various objectives with chat models (streaming, async, etc.).
Integrations: A collection of examples on how to integrate different chat model providers with LangChain (OpenAI, Anthropic, etc.).
https://langchain.readthedocs.io/en/latest/modules/models/chat.html
Getting Started#
This notebook covers how to get started with chat models. The interface is based around messages rather than raw text.
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)
chat = ChatOpenAI(temperature=0)
You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage – ChatMessage takes in an arbitrary role parameter. Most of the time, you’ll just be dealing with HumanMessage, AIMessage, and SystemMessage.
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
AIMessage(content="J'aime programmer.", additional_kwargs={})
OpenAI’s chat model supports multiple messages as input. See here for more information. Here is an example of sending a system and user message to the chat model:
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming.")
]
chat(messages)
AIMessage(content="J'aime programmer.", additional_kwargs={})
You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter.
batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
result
LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}})
You can recover things like token usage from this LLMResult:
result.llm_output
{'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}}
PromptTemplates#
You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an LLM or chat model.
For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:
template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
AIMessage(content="J'adore la programmation.", additional_kwargs={})
If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate outside and then pass it in, e.g.:
prompt=PromptTemplate(
    template="You are a helpful assistant that translates {input_language} to {output_language}.",
    input_variables=["input_language", "output_language"],
)
system_message_prompt = SystemMessagePromptTemplate(prompt=prompt)
LLMChain#
You can use the existing LLMChain in a very similar way to before - provide a prompt and a model.
chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
"J'adore la programmation."
"J'adore la programmation." Streaming# Streaming is supported for ChatOpenAI through callback handling. from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler chat = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0) resp = chat([HumanMessage(content="Write me a song about sparkling water.")]) Verse 1: Bubbles rising to the top A refreshing drink that never stops Clear and crisp, it's pure delight A taste that's sure to excite Chorus: Sparkling water, oh so fine A drink that's always on my mind With every sip, I feel alive Sparkling water, you're my vibe Verse 2: No sugar, no calories, just pure bliss A drink that's hard to resist It's the perfect way to quench my thirst A drink that always comes first Chorus: Sparkling water, oh so fine A drink that's always on my mind With every sip, I feel alive Sparkling water, you're my vibe Bridge: From the mountains to the sea Sparkling water, you're the key To a healthy life, a happy soul A drink that makes me feel whole Chorus: Sparkling water, oh so fine A drink that's always on my mind With every sip, I feel alive Sparkling water, you're my vibe Outro: Sparkling water, you're the one A drink that's always so much fun I'll never let you go, my friend Sparkling previous Chat Models next How-To Guides Contents PromptTemplates LLMChain Streaming By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
https://langchain.readthedocs.io/en/latest/modules/models/chat/getting_started.html
How-To Guides#
The examples here all address certain “how-to” guides for working with chat models.
How to use few shot examples
How to stream responses
https://langchain.readthedocs.io/en/latest/modules/models/chat/how_to_guides.html
Integrations#
The examples here all highlight how to integrate with different chat models.
Anthropic
Azure
Google Vertex AI PaLM
OpenAI
PromptLayer ChatOpenAI
https://langchain.readthedocs.io/en/latest/modules/models/chat/integrations.html
PromptLayer ChatOpenAI#
PromptLayer is a devtool that allows you to track, manage, and share your GPT prompt engineering. It acts as a middleware between your code and OpenAI’s python library, recording all your API requests and saving relevant metadata for easy exploration and search in the PromptLayer dashboard.
Install PromptLayer#
The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip.
pip install promptlayer
Imports#
import os
import promptlayer  # needed below for promptlayer.track
from langchain.chat_models import PromptLayerChatOpenAI
from langchain.schema import HumanMessage
Set the Environment API Key#
You can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar. Set it as an environment variable called PROMPTLAYER_API_KEY.
os.environ["PROMPTLAYER_API_KEY"] = "**********"
Use the PromptLayerOpenAI LLM like normal#
You can optionally pass in pl_tags to track your requests with PromptLayer’s tagging feature.
chat = PromptLayerChatOpenAI(pl_tags=["langchain"])
chat([HumanMessage(content="I am a cat and I want")])
AIMessage(content='to take a nap in a cozy spot. I search around for a suitable place and finally settle on a soft cushion on the window sill. I curl up into a ball and close my eyes, relishing the warmth of the sun on my fur. As I drift off to sleep, I can hear the birds chirping outside and feel the gentle breeze blowing through the window. This is the life of a contented cat.', additional_kwargs={})
The above request should now appear on your PromptLayer dashboard.
Using PromptLayer Track#
If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id.
chat = PromptLayerChatOpenAI(return_pl_id=True)
chat_results = chat.generate([[HumanMessage(content="I am a cat and I want")]])
for res in chat_results.generations:
    pl_request_id = res[0].generation_info["pl_request_id"]
    promptlayer.track.score(request_id=pl_request_id, score=100)
Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well. Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.
https://langchain.readthedocs.io/en/latest/modules/models/chat/integrations/promptlayer_chatopenai.html
Google Vertex AI PaLM#
Vertex AI is a machine learning (ML) platform that lets you train and deploy ML models and AI applications. Vertex AI combines data engineering, data science, and ML engineering workflows, enabling your teams to collaborate using a common toolset.
Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there.
PaLM API on Vertex AI is a Preview offering, subject to the Pre-GA Offerings Terms of the GCP Service Specific Terms. Pre-GA products and features may have limited support, and changes to pre-GA products and features may not be compatible with other pre-GA versions. For more information, see the launch stage descriptions. Further, by using PaLM API on Vertex AI, you agree to the Generative AI Preview terms and conditions (Preview Terms). For PaLM API on Vertex AI, you can process personal data as outlined in the Cloud Data Processing Addendum, subject to applicable restrictions and obligations in the Agreement (as defined in the Preview Terms).
To use Vertex AI PaLM you must have the google-cloud-aiplatform Python package installed and either:
Have credentials configured for your environment (gcloud, workload identity, etc…)
Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable
This codebase uses the google.auth library, which first looks for the application credentials variable mentioned above, and then looks for system-level auth. For more information, see:
https://cloud.google.com/docs/authentication/application-default-credentials#GAC
https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth
#!pip install google-cloud-aiplatform
from langchain.chat_models import ChatVertexAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema import (
    HumanMessage,
    SystemMessage
)
chat = ChatVertexAI()
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
AIMessage(content='Sure, here is the translation of the sentence "I love programming" from English to French:\n\nJ\'aime programmer.', additional_kwargs={}, example=False)
You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an LLM or chat model.
For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:
template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())

AIMessage(content='Sure, here is the translation of "I love programming" in French:\n\nJ\'aime programmer.', additional_kwargs={}, example=False)
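Since format_prompt returns a PromptValue, the same formatted prompt can also be rendered as a plain string rather than a message list, which is useful when the target is a text-completion LLM instead of a chat model. A minimal sketch reusing the chat_prompt defined above:

# Build the PromptValue once, then render it either way
prompt_value = chat_prompt.format_prompt(
    input_language="English", output_language="French", text="I love programming."
)
prompt_value.to_messages()  # list of chat messages, for chat models
prompt_value.to_string()    # single formatted string, for text-completion LLMs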
https://langchain.readthedocs.io/en/latest/modules/models/chat/integrations/google_vertex_ai_palm.html
68bc3168b86a-0
Anthropic#
Anthropic is an American artificial intelligence (AI) startup and public-benefit corporation, founded by former members of OpenAI. Anthropic specializes in developing general AI systems and language models, with a company ethos of responsible AI usage.
Anthropic develops a chatbot named Claude. Similar to ChatGPT, Claude uses a messaging interface where users can submit questions or requests and receive highly detailed and relevant responses.

from langchain.chat_models import ChatAnthropic
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

chat = ChatAnthropic()

messages = [
    HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)

AIMessage(content=" J'aime programmer. ", additional_kwargs={})

ChatAnthropic also supports async and streaming functionality:#

from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

await chat.agenerate([messages])

LLMResult(generations=[[ChatGeneration(text=" J'aime la programmation.", generation_info=None, message=AIMessage(content=" J'aime la programmation.", additional_kwargs={}))]], llm_output={})

chat = ChatAnthropic(streaming=True, verbose=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]))
chat(messages)

J'adore programmer.
AIMessage(content=" J'adore programmer.", additional_kwargs={})
https://langchain.readthedocs.io/en/latest/modules/models/chat/integrations/anthropic.html
9d0d21a46ae0-0
OpenAI#
This notebook covers how to get started with OpenAI chat models.

from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

chat = ChatOpenAI(temperature=0)

messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)

AIMessage(content="J'aime programmer.", additional_kwargs={}, example=False)

You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an LLM or chat model. For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:

template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())

AIMessage(content="J'adore la programmation.", additional_kwargs={})
https://langchain.readthedocs.io/en/latest/modules/models/chat/integrations/openai.html
d57d7dc8daab-0
Azure#
This notebook goes over how to connect to an Azure-hosted OpenAI endpoint.

from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

BASE_URL = "https://${TODO}.openai.azure.com"
API_KEY = "..."
DEPLOYMENT_NAME = "chat"
model = AzureChatOpenAI(
    openai_api_base=BASE_URL,
    openai_api_version="2023-03-15-preview",
    deployment_name=DEPLOYMENT_NAME,
    openai_api_key=API_KEY,
    openai_api_type="azure",
)

model([HumanMessage(content="Translate this sentence from English to French. I love programming.")])

AIMessage(content="\n\nJ'aime programmer.", additional_kwargs={})
https://langchain.readthedocs.io/en/latest/modules/models/chat/integrations/azure_chat_openai.html
a36603026f21-0
How to stream responses#
This notebook goes over how to use streaming with a chat model.

from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    HumanMessage,
)
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

chat = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
resp = chat([HumanMessage(content="Write me a song about sparkling water.")])

Verse 1:
Bubbles rising to the top
A refreshing drink that never stops
Clear and crisp, it's pure delight
A taste that's sure to excite

Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe

Verse 2:
No sugar, no calories, just pure bliss
A drink that's hard to resist
It's the perfect way to quench my thirst
A drink that always comes first

Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe

Bridge:
From the mountains to the sea
Sparkling water, you're the key
To a healthy life, a happy soul
A drink that makes me feel whole

Chorus:
Sparkling water, oh so fine
A drink that's always on my mind
With every sip, I feel alive
Sparkling water, you're my vibe

Outro:
Sparkling water, you're the one
A drink that's always so much fun
I'll never let you go, my friend
Sparkling
https://langchain.readthedocs.io/en/latest/modules/models/chat/examples/streaming.html
7def20fc6c66-0
How to use few shot examples#
This notebook covers how to use few shot examples in chat models. There does not appear to be solid consensus on how best to do few shot prompting. As a result, we are not solidifying any abstractions around this yet but rather using existing abstractions.

Alternating Human/AI messages#
The first way of doing few shot prompting relies on using alternating human/ai messages. See an example of this below.

from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

chat = ChatOpenAI(temperature=0)

template="You are a helpful assistant that translates english to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = HumanMessagePromptTemplate.from_template("Hi")
example_ai = AIMessagePromptTemplate.from_template("Argh me mateys")
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, example_human, example_ai, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
# get a chat completion from the formatted messages
chain.run("I love programming.")

"I be lovin' programmin', me hearty!"

System Messages#
OpenAI provides an optional name parameter that they also recommend using in conjunction with system messages to do few shot prompting. Here is an example of how to do that below.

template="You are a helpful assistant that translates english to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = SystemMessagePromptTemplate.from_template("Hi", additional_kwargs={"name": "example_user"})
example_ai = SystemMessagePromptTemplate.from_template("Argh me mateys", additional_kwargs={"name": "example_assistant"})
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, example_human, example_ai, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
# get a chat completion from the formatted messages
chain.run("I love programming.")

"I be lovin' programmin', me hearty."
https://langchain.readthedocs.io/en/latest/modules/models/chat/examples/few_shot_examples.html
6fb7e9b90c68-0
Getting Started#
This notebook goes over how to use the LLM class in LangChain.
The LLM class is designed for interfacing with LLMs. There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them. In this part of the documentation, we will focus on generic LLM functionality. For details on working with a specific LLM wrapper, please see the examples in the How-To section.
For this notebook, we will work with an OpenAI LLM wrapper, although the functionalities highlighted are generic for all LLM types.

from langchain.llms import OpenAI

llm = OpenAI(model_name="text-ada-001", n=2, best_of=2)

Generate Text: The most basic functionality an LLM has is just the ability to call it, passing in a string and getting back a string.

llm("Tell me a joke")

'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'

Generate: More broadly, you can call it with a list of inputs, getting back a more complete response than just the text. This complete response includes things like multiple top responses, as well as LLM provider-specific information.

llm_result = llm.generate(["Tell me a joke", "Tell me a poem"]*15)

len(llm_result.generations)

30

llm_result.generations[0]

[Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'),
 Generation(text='\n\nWhy did the chicken cross the road?\n\nTo get to the other side.')]

llm_result.generations[-1]

[Generation(text="\n\nWhat if love neverspeech\n\nWhat if love never ended\n\nWhat if love was only a feeling\n\nI'll never know this love\n\nIt's not a feeling\n\nBut it's what we have for each other\n\nWe just know that love is something strong\n\nAnd we can't help but be happy\n\nWe just feel what love is for us\n\nAnd we love each other with all our heart\n\nWe just don't know how\n\nHow it will go\n\nBut we know that love is something strong\n\nAnd we'll always have each other\n\nIn our lives."),
 Generation(text='\n\nOnce upon a time\n\nThere was a love so pure and true\n\nIt lasted for centuries\n\nAnd never became stale or dry\n\nIt was moving and alive\n\nAnd the heart of the love-ick\n\nIs still beating strong and true.')]

You can also access provider-specific information that is returned. This information is NOT standardized across providers.

llm_result.llm_output

{'token_usage': {'completion_tokens': 3903, 'total_tokens': 4023, 'prompt_tokens': 120}}

Number of Tokens: You can also estimate how many tokens a piece of text will be in that model. This is useful because models have a context length (and cost more for more tokens), which means you need to be aware of how long the text you are passing in is.
Note that by default the tokens are estimated using tiktoken (except for legacy Python versions <3.8, where a Hugging Face tokenizer is used).

llm.get_num_tokens("what a joke")

3
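Because each model has a fixed context length, a common pattern is to check the token count of a prompt before sending it. A minimal sketch building on the get_num_tokens call above; the 2048-token budget here is an assumed example limit for illustration, not a value reported by the library:

prompt = "Tell me a joke"
max_context_tokens = 2048  # assumed context window, adjust for your model

# Only call the model if the prompt fits within the assumed budget
if llm.get_num_tokens(prompt) <= max_context_tokens:
    print(llm(prompt))
else:
    print("Prompt is too long for the model's context window.")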
https://langchain.readthedocs.io/en/latest/modules/models/llms/getting_started.html
4b8f8b8e2431-0
Generic Functionality#
The examples here all address certain “how-to” guides for working with LLMs.

How to use the async API for LLMs
How to write a custom LLM wrapper
How (and why) to use the fake LLM
How (and why) to use the human input LLM
How to cache LLM calls
How to serialize LLM classes
How to stream LLM and Chat Model responses
How to track token usage
https://langchain.readthedocs.io/en/latest/modules/models/llms/how_to_guides.html
edf776916a32-0
Integrations#
The examples here are all “how-to” guides for how to integrate with various LLM providers.

AI21
Aleph Alpha
Anyscale
Aviary
Azure OpenAI
Banana
Beam
Bedrock
CerebriumAI
Cohere
C Transformers
Databricks
DeepInfra
ForefrontAI
Google Cloud Platform Vertex AI PaLM
GooseAI
GPT4All
Hugging Face Hub
Hugging Face Pipeline
Huggingface TextGen Inference
Jsonformer
Llama-cpp
Manifest
Modal
MosaicML
NLP Cloud
OpenAI
OpenLM
Petals
PipelineAI
Prediction Guard
    Control the output structure/type of LLMs
    Chaining
PromptLayer OpenAI
ReLLM
Replicate
Runhouse
SageMaker Endpoint
StochasticAI
Writer
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations.html
cef8273f7d60-0
ForefrontAI#
The Forefront platform gives you the ability to fine-tune and use open-source large language models.
This notebook goes over how to use Langchain with ForefrontAI.

Imports#

import os
from langchain.llms import ForefrontAI
from langchain import PromptTemplate, LLMChain

Set the Environment API Key#
Make sure to get your API key from ForefrontAI. You are given a 5-day free trial to test different models.

# get a new token: https://docs.forefront.ai/forefront/api-reference/authentication
from getpass import getpass

FOREFRONTAI_API_KEY = getpass()

os.environ["FOREFRONTAI_API_KEY"] = FOREFRONTAI_API_KEY

Create the ForefrontAI instance#
You can specify different parameters such as the model endpoint url, length, temperature, etc. You must provide an endpoint url.

llm = ForefrontAI(endpoint_url="YOUR ENDPOINT URL HERE")

Create a Prompt Template#
We will create a prompt template for Question and Answer.

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

Initiate the LLMChain#

llm_chain = LLMChain(prompt=prompt, llm=llm)

Run the LLMChain#
Provide a question and run the LLMChain.

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/forefrontai_example.html
3789748e1823-0
Azure OpenAI#
This notebook goes over how to use Langchain with Azure OpenAI.
The Azure OpenAI API is compatible with OpenAI's API. The openai Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI, with the exceptions noted below.

API configuration#
You can configure the openai package to use Azure OpenAI using environment variables. The following is for bash:

# Set this to `azure`
export OPENAI_API_TYPE=azure
# The API version you want to use: set this to `2023-03-15-preview` for the released version.
export OPENAI_API_VERSION=2023-03-15-preview
# The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_BASE=https://your-resource-name.openai.azure.com
# The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_KEY=<your Azure OpenAI API key>

Alternatively, you can configure the API right within your running Python environment:

import os
os.environ["OPENAI_API_TYPE"] = "azure"
...

Deployments#
With Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.
Let's say your deployment name is text-davinci-002-prod. In the openai Python API, you can specify this deployment with the engine parameter. For example:

import openai

response = openai.Completion.create(
    engine="text-davinci-002-prod",
    prompt="This is a test",
    max_tokens=5
)

!pip install openai

import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_API_BASE"] = "..."
os.environ["OPENAI_API_KEY"] = "..."

# Import Azure OpenAI
from langchain.llms import AzureOpenAI

# Create an instance of Azure OpenAI
# Replace the deployment name with your own
llm = AzureOpenAI(
    deployment_name="td2",
    model_name="text-davinci-002",
)

# Run the LLM
llm("Tell me a joke")

"\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!"

We can also print the LLM and see its custom print.

print(llm)

AzureOpenAI
Params: {'deployment_name': 'text-davinci-002', 'model_name': 'text-davinci-002', 'temperature': 0.7, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/azure_openai_example.html
55825e7c57cb-0
Anyscale#
Anyscale is a fully-managed Ray platform, on which you can build, deploy, and manage scalable AI and Python applications.
This example goes over how to use LangChain to interact with the Anyscale service.

import os

# Placeholder values -- fill these in with your own service details
os.environ["ANYSCALE_SERVICE_URL"] = "<your Anyscale service URL>"
os.environ["ANYSCALE_SERVICE_ROUTE"] = "<your Anyscale service route>"
os.environ["ANYSCALE_SERVICE_TOKEN"] = "<your Anyscale service token>"

from langchain.llms import Anyscale
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

llm = Anyscale()

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "When was George Washington president?"

llm_chain.run(question)

With Ray, we can distribute the queries without an asynchronous implementation. This applies not only to the Anyscale LLM model, but to any other LangChain LLM model that does not have _acall or _agenerate implemented.

prompt_list = [
    "When was George Washington president?",
    "Explain to me the difference between nuclear fission and fusion.",
    "Give me a list of 5 science fiction books I should read next.",
    "Explain the difference between Spark and Ray.",
    "Suggest some fun holiday ideas.",
    "Tell a joke.",
    "What is 2+2?",
    "Explain what is machine learning like I am five years old.",
    "Explain what is artificial intelligence.",
]

import ray

@ray.remote
def send_query(llm, prompt):
    resp = llm(prompt)
    return resp

futures = [send_query.remote(llm, prompt) for prompt in prompt_list]
results = ray.get(futures)
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/anyscale.html
6824c50dd26d-0
Bedrock#
Amazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.

%pip install boto3

from langchain.llms.bedrock import Bedrock

llm = Bedrock(credentials_profile_name="bedrock-admin", model_id="amazon.titan-tg1-large")

Using in a conversation chain#

from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(
    llm=llm,
    verbose=True,
    memory=ConversationBufferMemory()
)

conversation.predict(input="Hi there!")
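The Bedrock instance is an ordinary LangChain LLM, so it can also be invoked directly, without wrapping it in a chain. A minimal sketch reusing the llm defined above (the prompt text is just an arbitrary example):

# Direct, chain-free invocation of the Bedrock-backed LLM
llm("Explain the difference between a managed service and a self-hosted model.")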
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/bedrock.html
0c8cdeb5635e-0
Runhouse#
Runhouse allows remote compute and data across environments and users. See the Runhouse docs.
This example goes over how to use LangChain and Runhouse to interact with models hosted on your own GPU, or on on-demand GPUs on AWS, GCP, Azure, or Lambda.
Note: the code uses the SelfHosted class names rather than Runhouse.

!pip install runhouse

from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
from langchain import PromptTemplate, LLMChain
import runhouse as rh

INFO | 2023-04-17 16:47:36,173 | No auth token provided, so not using RNS API to save and load configs

# For an on-demand A100 with GCP, Azure, or Lambda
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)

# For an on-demand A10G with AWS (no single A100s on AWS)
# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')

# For an existing cluster
# gpu = rh.cluster(ips=['<ip of the cluster>'],
#                  ssh_creds={'ssh_user': '...', 'ssh_private_key':'<path_to_key>'},
#                  name='rh-a10x')

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

llm = SelfHostedHuggingFaceLLM(model_id="gpt2", hardware=gpu, model_reqs=["pip:./", "transformers", "torch"])

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"

llm_chain.run(question)

INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds

"\n\nLet's say we're talking sports teams who won the Super Bowl in the year Justin Beiber"

You can also load more custom models through the SelfHostedHuggingFaceLLM interface:

llm = SelfHostedHuggingFaceLLM(
    model_id="google/flan-t5-small",
    task="text2text-generation",
    hardware=gpu,
)

llm("What is the capital of Germany?")

INFO | 2023-02-17 05:54:21,681 | Running _generate_text via gRPC
INFO | 2023-02-17 05:54:21,937 | Time to send message: 0.25 seconds

'berlin'

Using a custom load function, we can load a custom pipeline directly on the remote hardware:

def load_pipeline():
    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline  # Need to be inside the fn in notebooks
    model_id = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    pipe = pipeline(
        "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
    )
    return pipe

def inference_fn(pipeline, prompt, stop = None):
    return pipeline(prompt)[0]["generated_text"][len(prompt):]

llm = SelfHostedHuggingFaceLLM(model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn)

llm("Who is the current US president?")

INFO | 2023-02-17 05:42:59,219 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:59,522 | Time to send message: 0.3 seconds

'john w. bush'

You can send your pipeline directly over the wire to your model, but this will only work for small models (<2 Gb), and will be pretty slow:

pipeline = load_pipeline()
llm = SelfHostedPipeline.from_pipeline(
    pipeline=pipeline, hardware=gpu, model_reqs=["pip:./", "transformers", "torch"]  # same reqs as above
)

Instead, we can also send it to the hardware's filesystem, which will be much faster.

import pickle

rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(gpu, path="models")
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/runhouse.html
0c8cdeb5635e-1
llm = SelfHostedPipeline.from_pipeline(pipeline="models/pipeline.pkl", hardware=gpu)
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/runhouse.html
db70a1dc79f4-0
AI21#
AI21 Studio provides API access to Jurassic-2 large language models.
This example goes over how to use LangChain to interact with AI21 models.

# install the package:
!pip install ai21

# get AI21_API_KEY. Use https://studio.ai21.com/account/account
from getpass import getpass
AI21_API_KEY = getpass()

from langchain.llms import AI21
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

llm = AI21(ai21_api_key=AI21_API_KEY)

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)

'\n1. What year was Justin Bieber born?\nJustin Bieber was born in 1994.\n2. What team won the Super Bowl in 1994?\nThe Dallas Cowboys won the Super Bowl in 1994.'
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/ai21.html
03570e9198c0-0
OpenLM#
OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP.
It implements the OpenAI Completion class so that it can be used as a drop-in replacement for the OpenAI API. This changeset utilizes BaseOpenAI for minimal added code.
This example goes over how to use LangChain to interact with both OpenAI and HuggingFace. You'll need API keys from both.

Setup#
Install dependencies and set API keys.

# Uncomment to install openlm and openai if you haven't already
# !pip install openlm
# !pip install openai

from getpass import getpass
import os
import subprocess

# Check if OPENAI_API_KEY environment variable is set
if "OPENAI_API_KEY" not in os.environ:
    print("Enter your OpenAI API key:")
    os.environ["OPENAI_API_KEY"] = getpass()

# Check if HF_API_TOKEN environment variable is set
if "HF_API_TOKEN" not in os.environ:
    print("Enter your HuggingFace Hub API key:")
    os.environ["HF_API_TOKEN"] = getpass()

Using LangChain with OpenLM#
Here we're going to call two models in an LLMChain, text-davinci-003 from OpenAI and gpt2 on HuggingFace.

from langchain.llms import OpenLM
from langchain import PromptTemplate, LLMChain

question = "What is the capital of France?"
template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

for model in ["text-davinci-003", "huggingface.co/gpt2"]:
    llm = OpenLM(model=model)
    llm_chain = LLMChain(prompt=prompt, llm=llm)
    result = llm_chain.run(question)
    print("""Model: {}
Result: {}""".format(model, result))

Model: text-davinci-003
Result: France is a country in Europe. The capital of France is Paris.
Model: huggingface.co/gpt2
Result: Question: What is the capital of France?

Answer: Let's think step by step. I am not going to lie, this is a complicated issue, and I don't see any solutions to all this, but it is still far more
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/openlm.html
05d1989954ed-0
Manifest#
This notebook goes over how to use Manifest and LangChain.
For more detailed information on Manifest, and how to use it with local Hugging Face models like in this example, see https://github.com/HazyResearch/manifest
Another example of using Manifest with Langchain.

!pip install manifest-ml

from manifest import Manifest
from langchain.llms.manifest import ManifestWrapper

manifest = Manifest(
    client_name = "huggingface",
    client_connection = "http://127.0.0.1:5000"
)
print(manifest.client.get_model_params())

llm = ManifestWrapper(client=manifest, llm_kwargs={"temperature": 0.001, "max_tokens": 256})

# Map reduce example
from langchain import PromptTemplate
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain

_prompt = """Write a concise summary of the following:

{text}

CONCISE SUMMARY:"""
prompt = PromptTemplate(template=_prompt, input_variables=["text"])

text_splitter = CharacterTextSplitter()

mp_chain = MapReduceChain.from_params(llm, prompt, text_splitter)

with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
mp_chain.run(state_of_the_union)

'President Obama delivered his annual State of the Union address on Tuesday night, laying out his priorities for the coming year. Obama said the government will provide free flu vaccines to all Americans, ending the government shutdown and allowing businesses to reopen. The president also said that the government will continue to send vaccines to 112 countries, more than any other nation. "We have lost so much to COVID-19," Trump said. "Time with one another. And worst of all, so much loss of life." He said the CDC is working on a vaccine for kids under 5, and that the government will be ready with plenty of vaccines when they are available. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government is launching a "Test to Treat" initiative that will allow people to get tested at a pharmacy and get antiviral pills on the spot at no cost. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government will continue to send vaccines to 112 countries, more than any other nation. "We are coming for your'

Compare HF Models#

from langchain.model_laboratory import ModelLaboratory

manifest1 = ManifestWrapper(
    client=Manifest(
        client_name="huggingface",
        client_connection="http://127.0.0.1:5000"
    ),
    llm_kwargs={"temperature": 0.01}
)
manifest2 = ManifestWrapper(
    client=Manifest(
        client_name="huggingface",
        client_connection="http://127.0.0.1:5001"
    ),
    llm_kwargs={"temperature": 0.01}
)
manifest3 = ManifestWrapper(
    client=Manifest(
        client_name="huggingface",
        client_connection="http://127.0.0.1:5002"
    ),
    llm_kwargs={"temperature": 0.01}
)
llms = [manifest1, manifest2, manifest3]
model_lab = ModelLaboratory(llms)

model_lab.compare("What color is a flamingo?")

Input:
What color is a flamingo?

ManifestWrapper
Params: {'model_name': 'bigscience/T0_3B', 'model_path': 'bigscience/T0_3B', 'temperature': 0.01}
pink

ManifestWrapper
Params: {'model_name': 'EleutherAI/gpt-neo-125M', 'model_path': 'EleutherAI/gpt-neo-125M', 'temperature': 0.01}
A flamingo is a small, round

ManifestWrapper
Params: {'model_name': 'google/flan-t5-xl', 'model_path': 'google/flan-t5-xl', 'temperature': 0.01}
pink
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/manifest.html
be1cc2036820-0
GooseAI#
GooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to a range of open-source language models.
This notebook goes over how to use Langchain with GooseAI.

Install openai#
The openai package is required to use the GooseAI API. Install openai using pip3 install openai.

$ pip3 install openai

Imports#

import os
from langchain.llms import GooseAI
from langchain import PromptTemplate, LLMChain

Set the Environment API Key#
Make sure to get your API key from GooseAI. You are given $10 in free credits to test different models.

from getpass import getpass

GOOSEAI_API_KEY = getpass()

os.environ["GOOSEAI_API_KEY"] = GOOSEAI_API_KEY

Create the GooseAI instance#
You can specify different parameters such as the model name, max tokens generated, temperature, etc.

llm = GooseAI()

Create a Prompt Template#
We will create a prompt template for Question and Answer.

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

Initiate the LLMChain#

llm_chain = LLMChain(prompt=prompt, llm=llm)

Run the LLMChain#
Provide a question and run the LLMChain.

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/gooseai_example.html
01573e1dcd4a-0
NLP Cloud#
NLP Cloud serves high-performance pre-trained or custom models for NER, sentiment analysis, classification, summarization, paraphrasing, grammar and spelling correction, keywords and keyphrases extraction, chatbot, product description and ad generation, intent classification, text generation, image generation, blog post generation, code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.
This example goes over how to use LangChain to interact with NLP Cloud models.

!pip install nlpcloud

# get a token: https://docs.nlpcloud.com/#authentication
from getpass import getpass

NLPCLOUD_API_KEY = getpass()

import os
os.environ["NLPCLOUD_API_KEY"] = NLPCLOUD_API_KEY

from langchain.llms import NLPCloud
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

llm = NLPCloud()

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)

' Justin Bieber was born in 1994, so the team that won the Super Bowl that year was the San Francisco 49ers.'
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/nlpcloud.html
dbba96b7ddd9-0
Beam#
Beam makes it easy to run code on GPUs, deploy scalable web APIs, schedule cron jobs, and run massively parallel workloads — without managing any infrastructure.
This example calls the Beam API wrapper to deploy an instance of the gpt2 LLM in a cloud deployment and make subsequent calls to it. It requires installation of the Beam library and registration of a Beam Client ID and Client Secret. Calling the wrapper creates and runs an instance of the model, returning text related to the prompt. Additional calls can then be made by calling the Beam API directly.

Create an account, if you don't have one already. Grab your API keys from the dashboard.

Install the Beam CLI:

!curl https://raw.githubusercontent.com/slai-labs/get-beam/main/get-beam.sh -sSfL | sh

Register API keys and set your Beam client id and secret environment variables:

import os
import subprocess

beam_client_id = "<Your beam client id>"
beam_client_secret = "<Your beam client secret>"

# Set the environment variables
os.environ['BEAM_CLIENT_ID'] = beam_client_id
os.environ['BEAM_CLIENT_SECRET'] = beam_client_secret

# Run the beam configure command
!beam configure --clientId={beam_client_id} --clientSecret={beam_client_secret}

Install the Beam SDK:

!pip install beam-sdk

Deploy and call Beam directly from langchain!
Note that a cold start might take a couple of minutes to return the response, but subsequent calls will be faster!

from langchain.llms.beam import Beam

llm = Beam(model_name="gpt2",
           name="langchain-gpt2-test",
           cpu=8,
           memory="32Gi",
           gpu="A10G",
           python_version="python3.8",
           python_packages=[
               "diffusers[torch]>=0.10",
               "transformers",
               "torch",
               "pillow",
               "accelerate",
               "safetensors",
               "xformers",],
           max_length="50",
           verbose=False)

llm._deploy()

response = llm._call("Running machine learning on a remote GPU")

print(response)
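_deploy and _call are the wrapper's internal helpers. Because Beam subclasses the base LangChain LLM, the deployed model can also be used through the standard interface, for instance inside an LLMChain. A minimal sketch reusing the llm defined above (the question string is just an arbitrary example):

from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

# The deployed Beam model behaves like any other LangChain LLM here
llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_chain.run("What NFL team won the Super Bowl in the year Justin Bieber was born?")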
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/beam.html
b8a5e880a727-0
Hugging Face Hub#
The Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.
This example showcases how to connect to the Hugging Face Hub.
To use, you should have the huggingface_hub python package installed.

!pip install huggingface_hub > /dev/null

# get a token: https://huggingface.co/docs/api-inference/quicktour#get-your-api-token
from getpass import getpass

HUGGINGFACEHUB_API_TOKEN = getpass()

import os
os.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKEN

Select a Model

from langchain import HuggingFaceHub

repo_id = "google/flan-t5-xl"  # See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other options

llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})

from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "Who won the FIFA World Cup in the year 1994? "

print(llm_chain.run(question))

Examples#
Below are some examples of models you can access through the Hugging Face Hub integration.

StableLM, by Stability AI#
See Stability AI's organization page for a list of available models.

repo_id = "stabilityai/stablelm-tuned-alpha-3b"
# Others include stabilityai/stablelm-base-alpha-3b
# as well as 7B parameter versions

llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})

# Reuse the prompt and question from above.
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))

Dolly, by Databricks#
See Databricks organization page for a list of available models.

from langchain import HuggingFaceHub

repo_id = "databricks/dolly-v2-3b"

llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})

# Reuse the prompt and question from above.
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))

Camel, by Writer#
See Writer's organization page for a list of available models.

from langchain import HuggingFaceHub

repo_id = "Writer/camel-5b-hf"  # See https://huggingface.co/Writer for other options

llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature":0, "max_length":64})

# Reuse the prompt and question from above.
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))

And many more!
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/huggingface_hub.html
bb805bf9082e-0
SageMaker Endpoint#
Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.
This notebook goes over how to use an LLM hosted on a SageMaker endpoint.

!pip3 install langchain boto3

Set up#
You have to set up the following required parameters of the SagemakerEndpoint call:
endpoint_name: The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region.
credentials_profile_name: The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html

Example#

from langchain.docstore.document import Document

example_doc_1 = """
Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital.
Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well.
Therefore, Peter stayed with her at the hospital for 3 days without leaving.
"""

docs = [
    Document(
        page_content=example_doc_1,
    )
]

from typing import Dict

from langchain import PromptTemplate, SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler
from langchain.chains.question_answering import load_qa_chain
import json

query = """How long was Elizabeth hospitalized?
"""

prompt_template = """Use the following pieces of context to answer the question at the end.

{context}

Question: {question}
Answer:"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        # The exact payload shape depends on the container serving your endpoint;
        # "inputs" is the key a standard Hugging Face text-generation container expects.
        input_str = json.dumps({"inputs": prompt, **model_kwargs})
        return input_str.encode('utf-8')

    def transform_output(self, output: bytes) -> str:
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json[0]["generated_text"]

content_handler = ContentHandler()

chain = load_qa_chain(
    llm=SagemakerEndpoint(
        endpoint_name="endpoint-name",
        credentials_profile_name="credentials-profile-name",
        region_name="us-west-2",
        model_kwargs={"temperature":1e-10},
        content_handler=content_handler
    ),
    prompt=PROMPT
)

chain({"input_documents": docs, "question": query}, return_only_outputs=True)
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/sagemaker.html
6c1927bdabca-0
Google Cloud Platform Vertex AI PaLM#
Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there.
PaLM API on Vertex AI is a Preview offering, subject to the Pre-GA Offerings Terms of the GCP Service Specific Terms. Pre-GA products and features may have limited support, and changes to pre-GA products and features may not be compatible with other pre-GA versions. For more information, see the launch stage descriptions. Further, by using PaLM API on Vertex AI, you agree to the Generative AI Preview terms and conditions (Preview Terms).
For PaLM API on Vertex AI, you can process personal data as outlined in the Cloud Data Processing Addendum, subject to applicable restrictions and obligations in the Agreement (as defined in the Preview Terms).
To use Vertex AI PaLM you must have the google-cloud-aiplatform Python package installed and either:
Have credentials configured for your environment (gcloud, workload identity, etc…)
Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable
This codebase uses the google.auth library, which first looks for the application credentials variable mentioned above, and then looks for system-level auth. For more information, see:
https://cloud.google.com/docs/authentication/application-default-credentials#GAC
https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth

#!pip install google-cloud-aiplatform

from langchain.llms import VertexAI
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

llm = VertexAI()

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)

'Justin Bieber was born on March 1, 1994. The Super Bowl in 1994 was won by the San Francisco 49ers.\nThe final answer: San Francisco 49ers.'
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/google_vertex_ai_palm.html
c1d86a971c0c-0
GPT4All#
GitHub: nomic-ai/gpt4all, an ecosystem of open-source chatbots trained on a massive collection of clean assistant data, including code, stories, and dialogue.
This example goes over how to use LangChain to interact with GPT4All models.

%pip install gpt4all > /dev/null

Note: you may need to restart the kernel to use updated packages.

from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

Specify Model#
To run locally, download a compatible ggml-formatted model. For more info, visit https://github.com/nomic-ai/gpt4all
For full installation instructions go here.
The GPT4All Chat installer needs to decompress a 3GB LLM model during the installation process!
Note that new models are uploaded regularly - check the link above for the most recent .bin URL.

local_path = './models/ggml-gpt4all-l13b-snoozy.bin'  # replace with your desired local file path

Uncomment the below block to download a model. You may want to update url to a new version.

# import requests
# from pathlib import Path
# from tqdm import tqdm

# Path(local_path).parent.mkdir(parents=True, exist_ok=True)

# # Example model. Check https://github.com/nomic-ai/gpt4all for the latest models.
# url = 'http://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin'

# # send a GET request to the URL to download the file. Stream since it's large
# response = requests.get(url, stream=True)

# # open the file in binary mode and write the contents of the response to it in chunks
# # This is a large file, so be prepared to wait.
# with open(local_path, 'wb') as f:
#     for chunk in tqdm(response.iter_content(chunk_size=8192)):
#         if chunk:
#             f.write(chunk)

# Callbacks support token-wise streaming
callbacks = [StreamingStdOutCallbackHandler()]

# Verbose is required to pass to the callback manager
llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)

# If you want to use a custom model add the backend parameter
# Check https://docs.gpt4all.io/gpt4all_python.html for supported backends
llm = GPT4All(model=local_path, backend='gptj', callbacks=callbacks, verbose=True)

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/gpt4all.html
5f31a564acc1-0
Huggingface TextGen Inference#
Text Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power LLMs api-inference widgets.
This notebook goes over how to use a self-hosted LLM using Text Generation Inference.
To use, you should have the text_generation python package installed.

# !pip3 install text_generation

from langchain.llms import HuggingFaceTextGenInference

llm = HuggingFaceTextGenInference(
    inference_server_url='http://localhost:8010/',
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.01,
    repetition_penalty=1.03,
)
llm("What did foo say about bar?")
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/huggingface_textgen_inference.html
8b5dfee89727-0
Llama-cpp#
llama-cpp is a Python binding for llama.cpp. It supports several LLMs.
This notebook goes over how to run llama-cpp within LangChain.

Installation#
There are a number of options for installing the llama-cpp package:
CPU usage only
CPU + GPU (using one of many BLAS backends)

CPU only installation#

!pip install llama-cpp-python

Installation with OpenBLAS / cuBLAS / CLBlast#
llama.cpp supports multiple BLAS backends for faster processing. Use the FORCE_CMAKE=1 environment variable to force the use of cmake and install the pip package for the desired BLAS backend (source).
Example installation with cuBLAS backend:

!CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python

IMPORTANT: If you have already installed a CPU-only version of the package, you need to reinstall it from scratch; consider the following command:

!CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python

Usage#
Make sure you are following all instructions to install all necessary model files.
You don't need an API_TOKEN!

from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

Consider using a template that suits your model! Check the models page on HuggingFace etc. to get a correct prompting template.

template = """Question: {question}

Answer: Let's work this out in a step by step way to be sure we have the right answer."""

prompt = PromptTemplate(template=template, input_variables=["question"])

# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
# Verbose is required to pass to the callback manager

CPU#

# Make sure the model path is correct for your system!
llm = LlamaCpp(
    model_path="./ggml-model-q4_0.bin", callback_manager=callback_manager, verbose=True
)

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)

1. First, find out when Justin Bieber was born.
2. We know that Justin Bieber was born on March 1, 1994.
3. Next, we need to look up when the Super Bowl was played in that year.
4. The Super Bowl was played on January 28, 1995.
5. Finally, we can use this information to answer the question. The NFL team that won the Super Bowl in the year Justin Bieber was born is the San Francisco 49ers.

llama_print_timings:        load time =   434.15 ms
llama_print_timings:      sample time =    41.81 ms /   121 runs   (    0.35 ms per token)
llama_print_timings: prompt eval time =  2523.78 ms /    48 tokens (   52.58 ms per token)
llama_print_timings:        eval time = 23971.57 ms /   121 runs   (  198.11 ms per token)
llama_print_timings:       total time = 28945.95 ms

'\n\n1. First, find out when Justin Bieber was born.\n2. We know that Justin Bieber was born on March 1, 1994.\n3. Next, we need to look up when the Super Bowl was played in that year.\n4. The Super Bowl was played on January 28, 1995.\n5. Finally, we can use this information to answer the question. The NFL team that won the Super Bowl in the year Justin Bieber was born is the San Francisco 49ers.'

GPU#
If the installation with BLAS backend was correct, you will see a BLAS = 1 indicator in the model properties.
Two of the most important parameters for use with GPU are:
n_gpu_layers - determines how many layers of the model are offloaded to your GPU.
n_batch - how many tokens are processed in parallel.
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/llamacpp.html
8b5dfee89727-1
Setting these parameters correctly will dramatically improve the evaluation speed (see the wrapper code for more details).

n_gpu_layers = 40  # Change this value based on your model and your GPU VRAM pool.
n_batch = 512  # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.

# Make sure the model path is correct for your system!
llm = LlamaCpp(
    model_path="./ggml-model-q4_0.bin",
    n_gpu_layers=n_gpu_layers,
    n_batch=n_batch,
    callback_manager=callback_manager,
    verbose=True
)

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)

We are looking for an NFL team that won the Super Bowl when Justin Bieber (born March 1, 1994) was born.

First, let's look up which year is closest to when Justin Bieber was born:

* The year before he was born: 1993
* The year of his birth: 1994
* The year after he was born: 1995

We want to know what NFL team won the Super Bowl in the year that is closest to when Justin Bieber was born. Therefore, we should look up the NFL team that won the Super Bowl in either 1993 or 1994.

Now let's find out which NFL team did win the Super Bowl in either of those years:

* In 1993, the San Francisco 49ers won the Super Bowl against the Dallas Cowboys by a score of 20-16.
* In 1994, the San Francisco 49ers won the Super Bowl again, this time against the San Diego Chargers by a score of 49-26.

llama_print_timings:        load time =   238.10 ms
llama_print_timings:      sample time =    84.23 ms /   256 runs   (    0.33 ms per token)
llama_print_timings: prompt eval time =   238.04 ms /    49 tokens (    4.86 ms per token)
llama_print_timings:        eval time = 10391.96 ms /   255 runs   (   40.75 ms per token)
llama_print_timings:       total time = 15664.80 ms

" We are looking for an NFL team that won the Super Bowl when Justin Bieber (born March 1, 1994) was born. \n\nFirst, let's look up which year is closest to when Justin Bieber was born:\n\n* The year before he was born: 1993\n* The year of his birth: 1994\n* The year after he was born: 1995\n\nWe want to know what NFL team won the Super Bowl in the year that is closest to when Justin Bieber was born. Therefore, we should look up the NFL team that won the Super Bowl in either 1993 or 1994.\n\nNow let's find out which NFL team did win the Super Bowl in either of those years:\n\n* In 1993, the San Francisco 49ers won the Super Bowl against the Dallas Cowboys by a score of 20-16.\n* In 1994, the San Francisco 49ers won the Super Bowl again, this time against the San Diego Chargers by a score of 49-26.\n"
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/llamacpp.html
CerebriumAI#
Cerebrium is an AWS SageMaker alternative. It also provides API access to several LLM models.
This notebook goes over how to use Langchain with CerebriumAI.

Install cerebrium#
The cerebrium package is required to use the CerebriumAI API. Install cerebrium using pip3 install cerebrium.
# Install the package
!pip3 install cerebrium

Imports#
import os
from langchain.llms import CerebriumAI
from langchain import PromptTemplate, LLMChain

Set the Environment API Key#
Make sure to get your API key from CerebriumAI. See here. You are given 1 hour of serverless GPU compute for free to test different models.
os.environ["CEREBRIUMAI_API_KEY"] = "YOUR_KEY_HERE"

Create the CerebriumAI instance#
You can specify different parameters such as the model endpoint url, max length, temperature, etc. You must provide an endpoint url.
llm = CerebriumAI(endpoint_url="YOUR ENDPOINT URL HERE")

Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)

Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
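A minimal sketch of the "different parameters" mentioned above. The keyword names here (max_length, temperature) are assumptions about what the wrapper forwards to the endpoint, not confirmed by this page:

llm = CerebriumAI(
    endpoint_url="YOUR ENDPOINT URL HERE",
    max_length=100,    # assumed parameter name
    temperature=0.5,   # assumed parameter name
)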
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/cerebriumai_example.html
DeepInfra#
DeepInfra provides several LLMs.
This notebook goes over how to use Langchain with DeepInfra.

Imports#
import os
from langchain.llms import DeepInfra
from langchain import PromptTemplate, LLMChain

Set the Environment API Key#
Make sure to get your API key from DeepInfra. You have to log in and get a new token. You are given 1 hour of serverless GPU compute for free to test different models (see here). You can print your token with deepctl auth token.
# get a new token: https://deepinfra.com/login?from=%2Fdash
from getpass import getpass
DEEPINFRA_API_TOKEN = getpass()
os.environ["DEEPINFRA_API_TOKEN"] = DEEPINFRA_API_TOKEN

Create the DeepInfra instance#
You can also use our open source deepctl tool to manage your model deployments. You can view a list of available parameters here.
llm = DeepInfra(model_id="databricks/dolly-v2-12b")
llm.model_kwargs = {'temperature': 0.7, 'repetition_penalty': 1.2, 'max_new_tokens': 250, 'top_p': 0.9}

Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)

Run the LLMChain#
Provide a question and run the LLMChain.
question = "Can penguins reach the North pole?"
llm_chain.run(question)

"Penguins live in the Southern hemisphere.\nThe North pole is located in the Northern hemisphere.\nSo, first you need to turn the penguin South.\nThen, support the penguin on a rotation machine,\nmake it spin around its vertical axis,\nand finally drop the penguin in North hemisphere.\nNow, you have a penguin in the north pole!\n\nStill didn't understand?\nWell, you're a failure as a teacher."
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/deepinfra_example.html
OpenAI#
OpenAI offers a spectrum of models with different levels of power suitable for different tasks.
This example goes over how to use LangChain to interact with OpenAI models.

# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass
OPENAI_API_KEY = getpass()
········
import os
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY

from langchain.llms import OpenAI
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm = OpenAI()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)

' Justin Bieber was born in 1994, so we are looking for the Super Bowl winner from that year. The Super Bowl in 1994 was Super Bowl XXVIII, and the winner was the Dallas Cowboys.'

If you are behind an explicit proxy, you can use the OPENAI_PROXY environment variable to pass requests through it:
os.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"
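The wrapper also accepts model parameters at construction time; a small illustrative sketch (the model name and values here are arbitrary choices, not recommendations from this page):

llm = OpenAI(model_name="text-davinci-003", temperature=0.9, max_tokens=256)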
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/openai.html
Petals#
Petals runs 100B+ language models at home, BitTorrent-style.
This notebook goes over how to use Langchain with Petals.

Install petals#
The petals package is required to use the Petals API. Install petals using pip3 install petals.
!pip3 install petals

Imports#
import os
from langchain.llms import Petals
from langchain import PromptTemplate, LLMChain

Set the Environment API Key#
Make sure to get your API key from Huggingface.
from getpass import getpass
HUGGINGFACE_API_KEY = getpass()
os.environ["HUGGINGFACE_API_KEY"] = HUGGINGFACE_API_KEY

Create the Petals instance#
You can specify different parameters such as the model name, max new tokens, temperature, etc.
# this can take several minutes to download big files!
llm = Petals(model_name="bigscience/bloom-petals")
Downloading:   1%|▏ | 40.8M/7.19G [00:24<15:44, 7.57MB/s]

Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)

Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/petals_example.html
Aviary#
Aviary is an open source toolkit for evaluating and deploying production open source LLMs.
This example goes over how to use LangChain to interact with Aviary. You can try Aviary out at https://aviary.anyscale.com. You can find out more about Aviary at https://github.com/ray-project/aviary.
One Aviary instance can serve multiple models. You can get a list of the available models by using the cli:
% aviary models
Or you can connect directly to the endpoint and get a list of available models by using the /models endpoint.
The constructor requires a url for an Aviary backend, and optionally a token to validate the connection.

import os
from langchain.llms import Aviary

llm = Aviary(model='amazon/LightGPT', aviary_url=os.environ['AVIARY_URL'], aviary_token=os.environ['AVIARY_TOKEN'])
result = llm.predict('What is the meaning of love?')
print(result)

Love is an emotion that involves feelings of attraction, affection and empathy for another person. It can also refer to a deep bond between two people or groups of people. Love can be expressed in many different ways, such as through words, actions, gestures, music, art, literature, and other forms of communication.
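A minimal sketch of hitting the /models endpoint mentioned above directly with requests. The bearer-token Authorization header is an assumption about the Aviary backend's auth scheme, not confirmed by this page:

import os
import requests

# Ask the running Aviary instance which models it serves (auth scheme assumed).
resp = requests.get(
    f"{os.environ['AVIARY_URL']}/models",
    headers={"Authorization": f"Bearer {os.environ['AVIARY_TOKEN']}"},
)
print(resp.json())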
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/aviary.html
ReLLM#
ReLLM is a library that wraps local Hugging Face pipeline models for structured decoding. It works by generating tokens one at a time. At each step, it masks tokens that don't conform to the provided partial regular expression.
Warning - this module is still experimental
!pip install rellm > /dev/null

Hugging Face Baseline#
First, let's establish a qualitative baseline by checking the output of the model without structured decoding.

import logging
logging.basicConfig(level=logging.ERROR)

prompt = """Human: "What's the capital of the United States?"
AI Assistant:{
  "action": "Final Answer",
  "action_input": "The capital of the United States is Washington D.C."
}
Human: "What's the capital of Pennsylvania?"
AI Assistant:{
  "action": "Final Answer",
  "action_input": "The capital of Pennsylvania is Harrisburg."
}
Human: "What 2 + 5?"
AI Assistant:{
  "action": "Final Answer",
  "action_input": "2 + 5 = 7."
}
Human: 'What's the capital of Maryland?'
AI Assistant:"""

from transformers import pipeline
from langchain.llms import HuggingFacePipeline

hf_model = pipeline("text-generation", model="cerebras/Cerebras-GPT-590M", max_new_tokens=200)
original_model = HuggingFacePipeline(pipeline=hf_model)
generated = original_model.generate([prompt], stop=["Human:"])
print(generated)

Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
generations=[[Generation(text=' "What\'s the capital of Maryland?"\n', generation_info=None)]] llm_output=None

That's not so impressive, is it? It didn't answer the question and it didn't follow the JSON format at all! Let's try with the structured decoder.

RELLM LLM Wrapper#
Let's try that again, now providing a regex to match the JSON structured format.

import regex  # Note this is the regex library NOT python's re stdlib module

# We'll choose a regex that matches to a structured json string that looks like:
# {
#  "action": "Final Answer",
#  "action_input": string or dict
# }
pattern = regex.compile(r'\{\s*"action":\s*"Final Answer",\s*"action_input":\s*(\{.*\}|"[^"]*")\s*\}\nHuman:')

from langchain.experimental.llms import RELLM

model = RELLM(pipeline=hf_model, regex=pattern, max_new_tokens=200)
generated = model.predict(prompt, stop=["Human:"])
print(generated)

{"action": "Final Answer", "action_input": "The capital of Maryland is Baltimore." }

Voila! Free of parsing errors.
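As a further illustrative sketch reusing the same hf_model and the RELLM constructor shown above, the regex can be as simple as forcing a bare integer answer (the pattern and prompt here are my own, not from the notebook):

import regex
from langchain.experimental.llms import RELLM

digit_pattern = regex.compile(r"[0-9]{1,4}")  # constrain generation to a short integer
number_model = RELLM(pipeline=hf_model, regex=digit_pattern, max_new_tokens=5)
print(number_model.predict("What is 2 + 5? Answer with a number: "))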
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/rellm_experimental.html
PipelineAI#
PipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.
This notebook goes over how to use Langchain with PipelineAI.

Install pipeline-ai#
The pipeline-ai library is required to use the PipelineAI API, AKA Pipeline Cloud. Install pipeline-ai using pip install pipeline-ai.
# Install the package
!pip install pipeline-ai

Imports#
import os
from langchain.llms import PipelineAI
from langchain import PromptTemplate, LLMChain

Set the Environment API Key#
Make sure to get your API key from PipelineAI. Check out the cloud quickstart guide. You'll be given a 30 day free trial with 10 hours of serverless GPU compute to test different models.
os.environ["PIPELINE_API_KEY"] = "YOUR_API_KEY_HERE"

Create the PipelineAI instance#
When instantiating PipelineAI, you need to specify the id or tag of the pipeline you want to use, e.g. pipeline_key = "public/gpt-j:base". You then have the option of passing additional pipeline-specific keyword arguments:
llm = PipelineAI(pipeline_key="YOUR_PIPELINE_KEY", pipeline_kwargs={...})

Create a Prompt Template#
We will create a prompt template for Question and Answer.
template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

Initiate the LLMChain#
llm_chain = LLMChain(prompt=prompt, llm=llm)

Run the LLMChain#
Provide a question and run the LLMChain.
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/pipelineai_example.html
PromptLayer OpenAI#
PromptLayer is the first platform that allows you to track, manage, and share your GPT prompt engineering. PromptLayer acts as middleware between your code and OpenAI's python library.
PromptLayer records all your OpenAI API requests, allowing you to search and explore request history in the PromptLayer dashboard.
This example showcases how to connect to PromptLayer to start recording your OpenAI requests. Another example is here.

Install PromptLayer#
The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip.
!pip install promptlayer

Imports#
import os
from langchain.llms import PromptLayerOpenAI
import promptlayer

Set the Environment API Key#
You can create a PromptLayer API key at www.promptlayer.com by clicking the settings cog in the navbar. Set it as an environment variable called PROMPTLAYER_API_KEY. You also need an OpenAI key, called OPENAI_API_KEY.
from getpass import getpass
PROMPTLAYER_API_KEY = getpass()
os.environ["PROMPTLAYER_API_KEY"] = PROMPTLAYER_API_KEY
from getpass import getpass
OPENAI_API_KEY = getpass()
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY

Use the PromptLayerOpenAI LLM like normal#
You can optionally pass in pl_tags to track your requests with PromptLayer's tagging feature.
llm = PromptLayerOpenAI(pl_tags=["langchain"])
llm("I am a cat and I want")
The above request should now appear on your PromptLayer dashboard.

Using PromptLayer Track#
If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id.
llm = PromptLayerOpenAI(return_pl_id=True)
llm_results = llm.generate(["Tell me a joke"])

for res in llm_results.generations:
    pl_request_id = res[0].generation_info["pl_request_id"]
    promptlayer.track.score(request_id=pl_request_id, score=100)

Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well. Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.
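Attaching a template to a request, as mentioned above, might look like the following sketch. Both the promptlayer.track.prompt signature and the template name are assumptions drawn from PromptLayer's docs of the period, not from this notebook:

promptlayer.track.prompt(
    request_id=pl_request_id,
    prompt_name="my_template",                   # hypothetical template registered in PromptLayer
    prompt_input_variables={"topic": "jokes"},   # hypothetical template inputs
)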
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/promptlayer_openai.html
Databricks#
The Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.
This example notebook shows how to wrap Databricks endpoints as LLMs in LangChain. It supports two endpoint types:
Serving endpoint, recommended for production and development,
Cluster driver proxy app, recommended for interactive development.

from langchain.llms import Databricks

Wrapping a serving endpoint#
Prerequisites:
An LLM was registered and deployed to a Databricks serving endpoint.
You have "Can Query" permission to the endpoint.
The expected MLflow model signature is:
inputs: [{"name": "prompt", "type": "string"}, {"name": "stop", "type": "list[string]"}]
outputs: [{"type": "string"}]
If the model signature is incompatible or you want to insert extra configs, you can set transform_input_fn and transform_output_fn accordingly.

# If running a Databricks notebook attached to an interactive cluster in "single user"
# or "no isolation shared" mode, you only need to specify the endpoint name to create
# a `Databricks` instance to query a serving endpoint in the same workspace.
llm = Databricks(endpoint_name="dolly")
llm("How are you?")
'I am happy to hear that you are in good health and as always, you are appreciated.'

llm("How are you?", stop=["."])
'Good'

# Otherwise, you can manually specify the Databricks workspace hostname and personal access token
# or set `DATABRICKS_HOST` and `DATABRICKS_TOKEN` environment variables, respectively.
# See https://docs.databricks.com/dev-tools/auth.html#databricks-personal-access-tokens
# We strongly recommend not exposing the API token explicitly inside a notebook.
# You can use Databricks secret manager to store your API token securely.
# See https://docs.databricks.com/dev-tools/databricks-utils.html#secrets-utility-dbutilssecrets
import os
os.environ["DATABRICKS_TOKEN"] = dbutils.secrets.get("myworkspace", "api_token")

llm = Databricks(host="myworkspace.cloud.databricks.com", endpoint_name="dolly")
llm("How are you?")
'I am fine. Thank you!'

# If the serving endpoint accepts extra parameters like `temperature`,
# you can set them in `model_kwargs`.
llm = Databricks(endpoint_name="dolly", model_kwargs={"temperature": 0.1})
llm("How are you?")
'I am fine.'

# Use `transform_input_fn` and `transform_output_fn` if the serving endpoint
# expects a different input schema and does not return a JSON string,
# respectively, or you want to apply a prompt template on top.
def transform_input(**request):
    full_prompt = f"""{request["prompt"]}
    Be Concise.
    """
    request["prompt"] = full_prompt
    return request

llm = Databricks(endpoint_name="dolly", transform_input_fn=transform_input)
llm("How are you?")
'I’m Excellent. You?'

Wrapping a cluster driver proxy app#
Prerequisites:
An LLM loaded on a Databricks interactive cluster in "single user" or "no isolation shared" mode.
A local HTTP server running on the driver node to serve the model at "/" using HTTP POST with JSON input/output. It uses a port number between [3000, 8000] and listens to the driver IP address or simply 0.0.0.0 instead of localhost only.
You have "Can Attach To" permission to the cluster.
The expected server schema (using JSON schema) is:
inputs: {"type": "object", "properties": {"prompt": {"type": "string"}, "stop": {"type": "array", "items": {"type": "string"}}}, "required": ["prompt"]}
outputs: {"type": "string"}
If the server schema is incompatible or you want to insert extra configs, you can use transform_input_fn and transform_output_fn accordingly.
The following is a minimal example for running a driver proxy app to serve an LLM:

from flask import Flask, request, jsonify
import torch
from transformers import pipeline, AutoTokenizer, StoppingCriteria

model = "databricks/dolly-v2-3b"
model = "databricks/dolly-v2-3b" tokenizer = AutoTokenizer.from_pretrained(model, padding_side="left") dolly = pipeline(model=model, tokenizer=tokenizer, trust_remote_code=True, device_map="auto") device = dolly.device class CheckStop(StoppingCriteria): def __init__(self, stop=None): super().__init__() self.stop = stop or [] self.matched = "" self.stop_ids = [tokenizer.encode(s, return_tensors='pt').to(device) for s in self.stop] def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs): for i, s in enumerate(self.stop_ids): if torch.all((s == input_ids[0][-s.shape[1]:])).item(): self.matched = self.stop[i] return True return False def llm(prompt, stop=None, **kwargs): check_stop = CheckStop(stop) result = dolly(prompt, stopping_criteria=[check_stop], **kwargs) return result[0]["generated_text"].rstrip(check_stop.matched) app = Flask("dolly") @app.route('/', methods=['POST']) def serve_llm(): resp = llm(**request.json) return jsonify(resp) app.run(host="0.0.0.0", port="7777") Once the server is running, you can create a Databricks instance to wrap it as an LLM. # If running a Databricks notebook attached to the same cluster that runs the app, # you only need to specify the driver port to create a `Databricks` instance. llm = Databricks(cluster_driver_port="7777") llm("How are you?") 'Hello, thank you for asking. It is wonderful to hear that you are well.' # Otherwise, you can manually specify the cluster ID to use, # as well as Databricks workspace hostname and personal access token. llm = Databricks(cluster_id="0000-000000-xxxxxxxx", cluster_driver_port="7777") llm("How are you?") 'I am well. You?' # If the app accepts extra parameters like `temperature`, # you can set them in `model_kwargs`. llm = Databricks(cluster_driver_port="7777", model_kwargs={"temperature": 0.1}) llm("How are you?") 'I am very well. It is a pleasure to meet you.' # Use `transform_input_fn` and `transform_output_fn` if the app # expects a different input schema and does not return a JSON string, # respectively, or you want to apply a prompt template on top. def transform_input(**request): full_prompt = f"""{request["prompt"]} Be Concise. """ request["prompt"] = full_prompt return request def transform_output(response): return response.upper() llm = Databricks( cluster_driver_port="7777", transform_input_fn=transform_input, transform_output_fn=transform_output) llm("How are you?") 'I AM DOING GREAT THANK YOU.' previous C Transformers next DeepInfra Contents Wrapping a serving endpoint Wrapping a cluster driver proxy app By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 08, 2023.
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/databricks.html
Prediction Guard#
Prediction Guard gives quick and easy access to state-of-the-art open and closed access LLMs, without needing to spend days and weeks figuring out all of the implementation details, managing a bunch of different API specs, and setting up the infrastructure for model deployments.

! pip install predictionguard langchain

import os
import predictionguard as pg
from langchain.llms import PredictionGuard
from langchain import PromptTemplate, LLMChain

# Optional: add your OpenAI API key. Prediction Guard also gives you access
# to all the latest open access models (see https://docs.predictionguard.com)
os.environ["OPENAI_API_KEY"] = "<your OpenAI api key>"

# Your Prediction Guard API key. Get one at predictionguard.com
os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"

pgllm = PredictionGuard(model="OpenAI-text-davinci-003")
pgllm("Tell me a joke")

Control the output structure/type of LLMs#
template = """Respond to the following query based on the context.

Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! 🎉 We have officially added TWO new candle subscription box options! 📦
Exclusive Candle Box - $80
Monthly Candle Box - $45 (NEW!)
Scent of The Month Box - $28 (NEW!)
Head to stories to get ALLL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉

Query: {query}

Result: """
prompt = PromptTemplate(template=template, input_variables=["query"])

# Without "guarding" or controlling the output of the LLM.
pgllm(prompt.format(query="What kind of post is this?"))

# With "guarding" or controlling the output of the LLM. See the
# Prediction Guard docs (https://docs.predictionguard.com) to learn how to
# control the output with integer, float, boolean, JSON, and other types and
# structures.
pgllm = PredictionGuard(model="OpenAI-text-davinci-003",
                        output={
                            "type": "categorical",
                            "categories": ["product announcement", "apology", "relational"]
                        })
pgllm(prompt.format(query="What kind of post is this?"))

Chaining#
pgllm = PredictionGuard(model="OpenAI-text-davinci-003")

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.predict(question=question)

template = """Write a {adjective} poem about {subject}."""
prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)
llm_chain.predict(adjective="sad", subject="ducks")
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/predictionguard.html
Jsonformer#
Jsonformer is a library that wraps local HuggingFace pipeline models for structured decoding of a subset of the JSON Schema.
It works by filling in the structure tokens and then sampling the content tokens from the model.
Warning - this module is still experimental
!pip install --upgrade jsonformer > /dev/null

HuggingFace Baseline#
First, let's establish a qualitative baseline by checking the output of the model without structured decoding.

import logging
logging.basicConfig(level=logging.ERROR)

from typing import Optional
from langchain.tools import tool
import os
import json
import requests

HF_TOKEN = os.environ.get("HUGGINGFACE_API_KEY")

@tool
def ask_star_coder(query: str, temperature: float = 1.0, max_new_tokens: float = 250):
    """Query the BigCode StarCoder model about coding questions."""
    url = "https://api-inference.huggingface.co/models/bigcode/starcoder"
    headers = {
        "Authorization": f"Bearer {HF_TOKEN}",
        "content-type": "application/json"
    }
    payload = {
        "inputs": f"{query}\n\nAnswer:",
        "temperature": temperature,
        "max_new_tokens": int(max_new_tokens),
    }
    response = requests.post(url, headers=headers, data=json.dumps(payload))
    response.raise_for_status()
    return json.loads(response.content.decode("utf-8"))

prompt = """You must respond using JSON format, with a single action and single action input.
You may 'ask_star_coder' for help on coding problems.

{arg_schema}

EXAMPLES
----
Human: "So what's all this about a GIL?"
AI Assistant:{{
  "action": "ask_star_coder",
  "action_input": {{"query": "What is a GIL?", "temperature": 0.0, "max_new_tokens": 100}}
}}
Observation: "The GIL is python's Global Interpreter Lock"
Human: "Could you please write a calculator program in LISP?"
AI Assistant:{{
  "action": "ask_star_coder",
  "action_input": {{"query": "Write a calculator program in LISP", "temperature": 0.0, "max_new_tokens": 250}}
}}
Observation: "(defun add (x y) (+ x y))\n(defun sub (x y) (- x y ))"
Human: "What's the difference between an SVM and an LLM?"
AI Assistant:{{
  "action": "ask_star_coder",
  "action_input": {{"query": "What's the difference between SGD and an SVM?", "temperature": 1.0, "max_new_tokens": 250}}
}}
Observation: "SGD stands for stochastic gradient descent, while an SVM is a Support Vector Machine."

BEGIN! Answer the Human's question as best as you are able.
------
Human: 'What's the difference between an iterator and an iterable?'
AI Assistant:""".format(arg_schema=ask_star_coder.args)

from transformers import pipeline
from langchain.llms import HuggingFacePipeline

hf_model = pipeline("text-generation", model="cerebras/Cerebras-GPT-590M", max_new_tokens=200)
original_model = HuggingFacePipeline(pipeline=hf_model)
generated = original_model.predict(prompt, stop=["Observation:", "Human:"])
print(generated)

Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
'What's the difference between an iterator and an iterable?'

That's not so impressive, is it? It didn't follow the JSON format at all! Let's try with the structured decoder.

JSONFormer LLM Wrapper#
Let's try that again, now providing the Action input's JSON Schema to the model.

decoder_schema = {
    "title": "Decoding Schema",
    "type": "object",
    "properties": {
        "action": {"type": "string", "default": ask_star_coder.name},
        "action_input": {
            "type": "object",
            "properties": ask_star_coder.args,
        }
    }
}

from langchain.experimental.llms import JsonFormer

json_former = JsonFormer(json_schema=decoder_schema, pipeline=hf_model)
results = json_former.predict(prompt, stop=["Observation:", "Human:"])
print(results)

{"action": "ask_star_coder", "action_input": {"query": "What's the difference between an iterator and an iter", "temperature": 0.0, "max_new_tokens": 50.0}}

Voila! Free of parsing errors.
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/jsonformer_experimental.html
StochasticAI#
The Stochastic Acceleration Platform aims to simplify the life cycle of a Deep Learning model: from uploading and versioning the model, through training, compression and acceleration, to putting it into production.
This example goes over how to use LangChain to interact with StochasticAI models. You have to get the API_KEY and the API_URL here.

from getpass import getpass
STOCHASTICAI_API_KEY = getpass()
import os
os.environ["STOCHASTICAI_API_KEY"] = STOCHASTICAI_API_KEY
YOUR_API_URL = getpass()

from langchain.llms import StochasticAI
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm = StochasticAI(api_url=YOUR_API_URL)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)

"\n\nStep 1: In 1999, the St. Louis Rams won the Super Bowl.\n\nStep 2: In 1999, Beiber was born.\n\nStep 3: The Rams were in Los Angeles at the time.\n\nStep 4: So they didn't play in the Super Bowl that year.\n"
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/stochasticai.html
C Transformers#
The C Transformers library provides Python bindings for GGML models.
This example goes over how to use LangChain to interact with C Transformers models.

Install
%pip install ctransformers

Load Model
from langchain.llms import CTransformers

llm = CTransformers(model='marella/gpt-2-ggml')

Generate Text
print(llm('AI is going to'))

Streaming
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = CTransformers(model='marella/gpt-2-ggml', callbacks=[StreamingStdOutCallbackHandler()])
response = llm('AI is going to')

LLMChain
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer:"""

prompt = PromptTemplate(template=template, input_variables=['question'])
llm_chain = LLMChain(prompt=prompt, llm=llm)
response = llm_chain.run('What is AI?')
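A hedged sketch of passing generation settings to the wrapper. The config dict and its keys follow the ctransformers docs of the time and should be treated as an assumption for your version:

llm = CTransformers(
    model='marella/gpt-2-ggml',
    config={'max_new_tokens': 256, 'temperature': 0.8},  # assumed config keys
)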
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/ctransformers.html
Banana#
Banana is focused on building machine learning infrastructure.
This example goes over how to use LangChain to interact with Banana models.

# Install the package https://docs.banana.dev/banana-docs/core-concepts/sdks/python
!pip install banana-dev

# get new tokens: https://app.banana.dev/
# We need two tokens, not just an `api_key`: `BANANA_API_KEY` and `YOUR_MODEL_KEY`
import os
from getpass import getpass

os.environ["BANANA_API_KEY"] = "YOUR_API_KEY"
# OR
# BANANA_API_KEY = getpass()

from langchain.llms import Banana
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Banana(model_key="YOUR_MODEL_KEY")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/banana.html
Writer#
Writer is a platform for generating different language content.
This example goes over how to use LangChain to interact with Writer models. You have to get the WRITER_API_KEY here.

from getpass import getpass
WRITER_API_KEY = getpass()
import os
os.environ["WRITER_API_KEY"] = WRITER_API_KEY

from langchain.llms import Writer
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
# If you get an error, you probably need to set the "base_url" parameter, which can be taken from the error log.
llm = Writer()
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
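Following the base_url note above, a minimal sketch (the URL is a placeholder you would take from your own error log):

llm = Writer(base_url="YOUR_BASE_URL_FROM_THE_ERROR_LOG")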
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/writer.html
Hugging Face Pipeline#
Hugging Face models can be run locally through the HuggingFacePipeline class.
The Hugging Face Model Hub hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.
These can be called from LangChain either through this local pipeline wrapper or by calling their hosted inference endpoints through the HuggingFaceHub class. For more information on the hosted pipelines, see the HuggingFaceHub notebook.
To use, you should have the transformers python package installed.
!pip install transformers > /dev/null

Load the model#
from langchain import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(model_id="bigscience/bloom-1b7", task="text-generation", model_kwargs={"temperature": 0, "max_length": 64})

WARNING:root:Failed to default session, using empty session: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /sessions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x1117f9790>: Failed to establish a new connection: [Errno 61] Connection refused'))

Integrate the model in an LLMChain#
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is electroencephalography?"
print(llm_chain.run(question))

/Users/wfh/code/lc/lckg/.venv/lib/python3.11/site-packages/transformers/generation/utils.py:1288: UserWarning: Using `max_length`'s default (64) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
  warnings.warn(
WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x144d06910>: Failed to establish a new connection: [Errno 61] Connection refused'))

First, we need to understand what is an electroencephalogram. An electroencephalogram is a recording of brain activity. It is a recording of brain activity that is made by placing electrodes on the scalp. The electrodes are placed
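Alternatively, an existing transformers pipeline object can be wrapped directly, as the ReLLM and Jsonformer notebooks in this document also do; a minimal sketch:

from transformers import pipeline
from langchain import HuggingFacePipeline

# Build a transformers pipeline yourself, then hand it to the wrapper.
hf_pipeline = pipeline("text-generation", model="bigscience/bloom-1b7", max_new_tokens=64)
llm = HuggingFacePipeline(pipeline=hf_pipeline)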
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/huggingface_pipelines.html
Aleph Alpha#
The Luminous series is a family of large language models.
This example goes over how to use LangChain to interact with Aleph Alpha models.

# Install the package
!pip install aleph-alpha-client

# create a new token: https://docs.aleph-alpha.com/docs/account/#create-a-new-token
from getpass import getpass
ALEPH_ALPHA_API_KEY = getpass()

from langchain.llms import AlephAlpha
from langchain import PromptTemplate, LLMChain

template = """Q: {question}

A:"""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm = AlephAlpha(model="luminous-extended", maximum_tokens=20, stop_sequences=["Q:"], aleph_alpha_api_key=ALEPH_ALPHA_API_KEY)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is AI?"
llm_chain.run(question)

' Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems.\n'
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/aleph_alpha.html
MosaicML#
MosaicML offers a managed inference service. You can either use a variety of open source models, or deploy your own.
This example goes over how to use LangChain to interact with MosaicML Inference for text completion.

# sign up for an account: https://forms.mosaicml.com/demo?utm_source=langchain
from getpass import getpass
MOSAICML_API_TOKEN = getpass()
import os
os.environ["MOSAICML_API_TOKEN"] = MOSAICML_API_TOKEN

from langchain.llms import MosaicML
from langchain import PromptTemplate, LLMChain

template = """Question: {question}"""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm = MosaicML(inject_instruction_format=True, model_kwargs={'do_sample': False})
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is one good reason why you should train a large language model on domain specific data?"
llm_chain.run(question)
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/mosaicml.html
Replicate#
Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.
This example goes over how to use LangChain to interact with Replicate models.

Setup#
To run this notebook, you'll need to create a replicate account and install the replicate python client.
!pip install replicate

# get a token: https://replicate.com/account
from getpass import getpass
REPLICATE_API_TOKEN = getpass()
········
import os
os.environ["REPLICATE_API_TOKEN"] = REPLICATE_API_TOKEN

from langchain.llms import Replicate
from langchain import PromptTemplate, LLMChain

Calling a model#
Find a model on the replicate explore page, and then paste in the model name and version in this format: model_name/version
For example, for this dolly model, click on the API tab. The model name/version would be:
replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5
Only the model param is required, but we can add other model params when initializing.
For example, if we were running stable diffusion and wanted to change the image dimensions:
Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})
Note that only the first output of a model will be returned.

llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
prompt = """
Answer the following yes/no question by reasoning step by step.
Can a dog drive a car?
"""
llm(prompt)

'The legal driving age of dogs is 2. Cars are designed for humans to drive. Therefore, the final answer is yes.'

We can call any replicate model using this syntax. For example, we can call stable diffusion.

text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})
image_output = text2image("A cat riding a motorcycle by Picasso")
image_output

'https://replicate.delivery/pbxt/Cf07B1zqzFQLOSBQcKG7m9beE74wf7kuip5W9VxHJFembefKE/out-0.png'

The model spits out a URL. Let's render it.

from PIL import Image
import requests
from io import BytesIO

response = requests.get(image_output)
img = Image.open(BytesIO(response.content))
img

Chaining Calls#
The whole point of langchain is to... chain! Here's an example of how to do that.

from langchain.chains import SimpleSequentialChain

First, let's define the LLM for this chain (here, the dolly model), and text2image as a stable diffusion model.

dolly_llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf")

First prompt in the chain:
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=dolly_llm, prompt=prompt)
Second prompt to get the logo for the company description:
second_prompt = PromptTemplate(
    input_variables=["company_name"],
    template="Write a description of a logo for this company: {company_name}",
)
chain_two = LLMChain(llm=dolly_llm, prompt=second_prompt)

Third prompt, let's create the image based on the description output from prompt 2:
third_prompt = PromptTemplate(
    input_variables=["company_logo_description"],
    template="{company_logo_description}",
)
chain_three = LLMChain(llm=text2image, prompt=third_prompt)

Now let's run it!
# Run the chain specifying only the input variable for the first chain.
overall_chain = SimpleSequentialChain(chains=[chain, chain_two, chain_three], verbose=True)
catchphrase = overall_chain.run("colorful socks")
print(catchphrase)

> Entering new SimpleSequentialChain chain...
novelty socks
todd & co.
https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwUdHM4tcQfvCB/out-0.png
> Finished chain.
https://replicate.delivery/pbxt/BedAP1PPBwXFfkmeD7xDygXO4BcvApp1uvWOwUdHM4tcQfvCB/out-0.png

response = requests.get("https://replicate.delivery/pbxt/eq6foRJngThCAEBqse3nL3Km2MBfLnWQNd0Hy2SQRo2LuprCB/out-0.png")
img = Image.open(BytesIO(response.content))
img
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/replicate.html
Cohere#
Cohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.
This example goes over how to use LangChain to interact with Cohere models.

# Install the package
!pip install cohere

# get a new token: https://dashboard.cohere.ai/
from getpass import getpass
COHERE_API_KEY = getpass()

from langchain.llms import Cohere
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Cohere(cohere_api_key=COHERE_API_KEY)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)

" Let's start with the year that Justin Beiber was born. You know that he was born in 1994. We have to go back one year. 1993.\n\n1993 was the year that the Dallas Cowboys won the Super Bowl. They won over the Buffalo Bills in Super Bowl 26.\n\nNow, let's do it backwards. According to our information, the Green Bay Packers last won the Super Bowl in the 2010-2011 season. Now, we can't go back in time, so let's go from 2011 when the Packers won the Super Bowl, back to 1984. That is the year that the Packers won the Super Bowl over the Raiders.\n\nSo, we have the year that Justin Beiber was born, 1994, and the year that the Packers last won the Super Bowl, 2011, and now we have to go in the middle, 1986. That is the year that the New York Giants won the Super Bowl over the Denver Broncos. The Giants won Super Bowl 21.\n\nThe New York Giants won the Super Bowl in 1986. This means that the Green Bay Packers won the Super Bowl in 2011.\n\nDid you get it right? If you are still a bit confused, just try to go back to the question again and review the answer"
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/cohere.html
Modal#
The Modal Python Library provides convenient, on-demand access to serverless cloud compute from Python scripts on your local computer. Modal itself does not provide any LLMs, only the infrastructure.
This example goes over how to use LangChain to interact with Modal. Here is another example of how to use LangChain to interact with Modal.

!pip install modal-client

# register and get a new token
!modal token new

Launching login page in your browser window...
If this is not showing up, please copy this URL into your web browser manually:
https://modal.com/token-flow/tf-ptEuGecm7T1T5YQe42kwM1
Waiting for authentication in the web browser...
^C
Aborted.

Follow these instructions to deal with secrets.

from langchain.llms import Modal
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm = Modal(endpoint_url="YOUR_ENDPOINT_URL")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
https://langchain.readthedocs.io/en/latest/modules/models/llms/integrations/modal.html
How (and why) to use the fake LLM#
We expose a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way.
In this notebook we go over how to use this. We start by using the FakeLLM in an agent.

from langchain.llms.fake import FakeListLLM
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType

tools = load_tools(["python_repl"])
responses = [
    "Action: Python REPL\nAction Input: print(2 + 2)",
    "Final Answer: 4"
]
llm = FakeListLLM(responses=responses)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("whats 2 + 2")

> Entering new AgentExecutor chain...
Action: Python REPL
Action Input: print(2 + 2)
Observation: 4
Thought:Final Answer: 4
> Finished chain.
'4'
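The fake LLM can also be called directly outside an agent; it simply hands back the canned responses in order, which makes deterministic unit tests straightforward. A minimal sketch (the response strings are arbitrary):

from langchain.llms.fake import FakeListLLM

fake = FakeListLLM(responses=["first canned answer", "second canned answer"])
assert fake("any prompt") == "first canned answer"      # first call returns the first response
assert fake("another prompt") == "second canned answer" # second call returns the second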
https://langchain.readthedocs.io/en/latest/modules/models/llms/examples/fake_llm.html
How to write a custom LLM wrapper#
This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain.
There is only one required thing that a custom LLM needs to implement:
A _call method that takes in a string and some optional stop words, and returns a string.
There is a second optional thing it can implement:
An _identifying_params property that is used to help with printing of this class. It should return a dictionary.
Let's implement a very simple custom LLM that just returns the first N characters of the input.

from typing import Any, List, Mapping, Optional
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM

class CustomLLM(LLM):
    n: int

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return prompt[:self.n]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"n": self.n}

We can now use this like any other LLM.

llm = CustomLLM(n=10)
llm("This is a foobar thing")
'This is a '

We can also print the LLM and see its custom print.

print(llm)
CustomLLM
Params: {'n': 10}
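Because it implements the LLM interface, the custom class also drops into chains like any built-in model; a minimal sketch reusing the class above (the template text is my own):

from langchain import PromptTemplate, LLMChain

prompt = PromptTemplate(template="Echo this: {text}", input_variables=["text"])
chain = LLMChain(prompt=prompt, llm=CustomLLM(n=10))
chain.run("This is a foobar thing")  # returns 'Echo this:', the first 10 characters of the formatted prompt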
https://langchain.readthedocs.io/en/latest/modules/models/llms/examples/custom_llm.html
How to cache LLM calls#
This notebook covers how to cache results of individual LLM calls.

import langchain
from langchain.llms import OpenAI

# To make the caching really obvious, let's use a slower model.
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)

In Memory Cache#
from langchain.cache import InMemoryCache
langchain.llm_cache = InMemoryCache()

%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 35.9 ms, sys: 28.6 ms, total: 64.6 ms
Wall time: 4.83 s
"\n\nWhy couldn't the bicycle stand up by itself? It was...two tired!"

%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 238 µs, sys: 143 µs, total: 381 µs
Wall time: 1.76 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'

SQLite Cache#
!rm .langchain.db
# We can do the same thing with a SQLite cache
from langchain.cache import SQLiteCache
langchain.llm_cache = SQLiteCache(database_path=".langchain.db")

%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms
Wall time: 825 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'

%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms
Wall time: 2.67 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'

Redis Cache#
Standard Cache#
Use Redis to cache prompts and responses.
# We can do the same thing with a Redis cache
# (make sure your local Redis instance is running first before running this example)
from redis import Redis
from langchain.cache import RedisCache
langchain.llm_cache = RedisCache(redis_=Redis())

%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 6.88 ms, sys: 8.75 ms, total: 15.6 ms
Wall time: 1.04 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'

%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 1.59 ms, sys: 610 µs, total: 2.2 ms
Wall time: 5.58 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'

Semantic Cache#
Use Redis to cache prompts and responses and evaluate hits based on semantic similarity.
from langchain.embeddings import OpenAIEmbeddings
from langchain.cache import RedisSemanticCache
langchain.llm_cache = RedisSemanticCache(
    redis_url="redis://localhost:6379",
    embedding=OpenAIEmbeddings()
)

%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 351 ms, sys: 156 ms, total: 507 ms
Wall time: 3.37 s
"\n\nWhy don't scientists trust atoms?\nBecause they make up everything."

%%time
# The second time, while not a direct hit, the question is semantically similar to the original question,
# so it uses the cached result!
llm("Tell me one joke")
CPU times: user 6.25 ms, sys: 2.72 ms, total: 8.97 ms
Wall time: 262 ms
"\n\nWhy don't scientists trust atoms?\nBecause they make up everything."

GPTCache#
We can use GPTCache for exact match caching OR to cache results based on semantic similarity.
Let's first start with an example of exact match.

from gptcache import Cache
from gptcache.manager.factory import manager_factory
from gptcache.processor.pre import get_prompt
from langchain.cache import GPTCache
import hashlib

def get_hashed_name(name):
    return hashlib.sha256(name.encode()).hexdigest()

def init_gptcache(cache_obj: Cache, llm: str):
    hashed_llm = get_hashed_name(llm)
    cache_obj.init(
        pre_embedding_func=get_prompt,
        data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"),
    )

langchain.llm_cache = GPTCache(init_gptcache)

%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 21.5 ms, sys: 21.3 ms, total: 42.8 ms
Wall time: 6.2 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'

%%time
# The second time it is, so it goes faster
llm("Tell me a joke")
CPU times: user 571 µs, sys: 43 µs, total: 614 µs
Wall time: 635 µs
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'

Let's now show an example of similarity caching.

from gptcache import Cache
from gptcache.adapter.api import init_similar_cache
from langchain.cache import GPTCache
import hashlib

def get_hashed_name(name):
    return hashlib.sha256(name.encode()).hexdigest()

def init_gptcache(cache_obj: Cache, llm: str):
    hashed_llm = get_hashed_name(llm)
    init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{hashed_llm}")

langchain.llm_cache = GPTCache(init_gptcache)

%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 1.42 s, sys: 279 ms, total: 1.7 s
Wall time: 8.44 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'

%%time
# This is an exact match, so it finds it in the cache
llm("Tell me a joke")
CPU times: user 866 ms, sys: 20 ms, total: 886 ms
Wall time: 226 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'

%%time
# This is not an exact match, but semantically within distance so it hits!
llm("Tell me joke")
CPU times: user 853 ms, sys: 14.8 ms, total: 868 ms
Wall time: 224 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'

Momento Cache#
Use Momento to cache prompts and responses.
Requires momento to use, uncomment below to install:
# !pip install momento
You'll need to get a Momento auth token to use this class. This can either be passed in to a momento.CacheClient if you'd like to instantiate that directly, as a named parameter auth_token to MomentoCache.from_client_params, or can just be set as an environment variable MOMENTO_AUTH_TOKEN.

from datetime import timedelta
from langchain.cache import MomentoCache

cache_name = "langchain"
ttl = timedelta(days=1)
langchain.llm_cache = MomentoCache.from_client_params(cache_name, ttl)

%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")
CPU times: user 40.7 ms, sys: 16.5 ms, total: 57.2 ms
Wall time: 1.73 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'

%%time
# The second time it is, so it goes faster
# When run in the same region as the cache, latencies are single digit ms
from datetime import timedelta
from langchain.cache import MomentoCache

cache_name = "langchain"
ttl = timedelta(days=1)
langchain.llm_cache = MomentoCache.from_client_params(cache_name, ttl)

%%time
# The first time, it is not yet in the cache, so the call should take longer
llm("Tell me a joke")

CPU times: user 40.7 ms, sys: 16.5 ms, total: 57.2 ms
Wall time: 1.73 s

'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'

%%time
# The second time it is, so the call goes faster
# When run in the same region as the cache, latencies are single-digit milliseconds
llm("Tell me a joke")

CPU times: user 3.16 ms, sys: 2.98 ms, total: 6.14 ms
Wall time: 57.9 ms

'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'

SQLAlchemy Cache

# You can use SQLAlchemyCache to cache with any SQL database supported by SQLAlchemy.
# from langchain.cache import SQLAlchemyCache
# from sqlalchemy import create_engine
# engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")
# langchain.llm_cache = SQLAlchemyCache(engine)

Custom SQLAlchemy Schemas

You can define your own declarative SQLAlchemyCache child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with Postgres:

from sqlalchemy import Column, Integer, String, Computed, Index, Sequence
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy_utils import TSVectorType
from langchain.cache import SQLAlchemyCache

Base = declarative_base()

class FulltextLLMCache(Base):  # type: ignore
    """Postgres table for fulltext-indexed LLM Cache"""

    __tablename__ = "llm_cache_fulltext"
    id = Column(Integer, Sequence('cache_id'), primary_key=True)
    prompt = Column(String, nullable=False)
    llm = Column(String, nullable=False)
    idx = Column(Integer)
    response = Column(String)
    prompt_tsv = Column(TSVectorType(), Computed("to_tsvector('english', llm || ' ' || prompt)", persisted=True))
    __table_args__ = (
        Index("idx_fulltext_prompt_tsv", prompt_tsv, postgresql_using="gin"),
    )

engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")
langchain.llm_cache = SQLAlchemyCache(engine, FulltextLLMCache)

Optional Caching

You can also turn off caching for specific LLMs, should you choose. In the example below, even though global caching is enabled, we turn it off for a specific LLM. Note that both calls take the full time, and the second even returns a different joke, showing that the cache is bypassed.

llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2, cache=False)

%%time
llm("Tell me a joke")

CPU times: user 5.8 ms, sys: 2.71 ms, total: 8.51 ms
Wall time: 745 ms

'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'

%%time
llm("Tell me a joke")

CPU times: user 4.91 ms, sys: 2.64 ms, total: 7.55 ms
Wall time: 623 ms

'\n\nTwo guys stole a calendar. They got six months each.'

Optional Caching in Chains

You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, it's often easier to construct the chain first and then edit the LLM afterwards.

As an example, we will load a summarizer map-reduce chain. We will cache results for the map step, but not for the combine (reduce) step.

llm = OpenAI(model_name="text-davinci-002")
no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False)

from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain

text_splitter = CharacterTextSplitter()

with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
texts = text_splitter.split_text(state_of_the_union)

from langchain.docstore.document import Document
docs = [Document(page_content=t) for t in texts[:3]]

from langchain.chains.summarize import load_summarize_chain

chain = load_summarize_chain(llm, chain_type="map_reduce", reduce_llm=no_cache_llm)
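Here the uncached LLM is supplied up front via the reduce_llm parameter. A rough sketch of the "construct first, then edit the LLM afterwards" route mentioned above might look like the following; the attribute path into the chain is an assumption and may differ between LangChain versions:

# Hypothetical alternative: build the chain, then swap the combine step's LLM.
# The attribute path below is an assumption -- verify it for your version.
alt_chain = load_summarize_chain(llm, chain_type="map_reduce")
alt_chain.combine_document_chain.llm_chain.llm = no_cache_llm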
%%time
chain.run(docs)

CPU times: user 452 ms, sys: 60.3 ms, total: 512 ms
Wall time: 5.09 s

'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure. In response to Russian aggression in Ukraine, the United States is joining with European allies to impose sanctions and isolate Russia. American forces are being mobilized to protect NATO countries in the event that Putin decides to keep moving west. The Ukrainians are bravely fighting back, but the next few weeks will be hard for them. Putin will pay a high price for his actions in the long run. Americans should not be alarmed, as the United States is taking action to protect its interests and allies.'

When we run it again, we see that it runs substantially faster but that the final answer is different. This is because the map steps are cached while the reduce step is not.

%%time
chain.run(docs)

CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms
Wall time: 1.04 s

'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure.'

# Clean up the cache databases created by this notebook
!rm .langchain.db sqlite.db
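The !rm above removes the on-disk SQLite database files created earlier. If you would rather reset a cache programmatically, recent versions of the cache classes also expose a clear() method; a minimal sketch, assuming that method is available in your installed version:

# Assumes the active cache implements clear(); check your LangChain version.
langchain.llm_cache.clear()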
https://langchain.readthedocs.io/en/latest/modules/models/llms/examples/llm_caching.html
05a8334ed79d-0
How to track token usage

This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API.

Let's first look at an extremely simple example of tracking token usage for a single LLM call:

from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback

llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)

with get_openai_callback() as cb:
    result = llm("Tell me a joke")
    print(cb)

Tokens Used: 42
    Prompt Tokens: 4
    Completion Tokens: 38
Successful Requests: 1
Total Cost (USD): $0.00084

Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence:

with get_openai_callback() as cb:
    result = llm("Tell me a joke")
    result2 = llm("Tell me a joke")
    print(cb.total_tokens)

91

If a chain or agent with multiple steps is used, all of those steps will be tracked.

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

with get_openai_callback() as cb:
    response = agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
    print(f"Total Tokens: {cb.total_tokens}")
    print(f"Prompt Tokens: {cb.prompt_tokens}")
    print(f"Completion Tokens: {cb.completion_tokens}")
    print(f"Total Cost (USD): ${cb.total_cost}")

> Entering new AgentExecutor chain...
I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.
Action: Search
Action Input: "Olivia Wilde boyfriend"
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought: I need to find out Harry Styles' age.
Action: Search
Action Input: "Harry Styles age"
Observation: 29 years
Thought: I need to calculate 29 raised to the 0.23 power.
Action: Calculator
Action Input: 29^0.23
Observation: Answer: 2.169459462491557
Thought: I now know the final answer.
Final Answer: Harry Styles, Olivia Wilde's boyfriend, is 29 years old and his age raised to the 0.23 power is 2.169459462491557.

> Finished chain.

Total Tokens: 1506
Prompt Tokens: 1350
Completion Tokens: 156
Total Cost (USD): $0.03012
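Because the callback also counts successful requests (as the printed summary above shows), you can derive per-call averages after tracking several calls. A minimal sketch, assuming the successful_requests field is exposed on the callback object alongside total_tokens and total_cost:

with get_openai_callback() as cb:
    for _ in range(3):
        llm("Tell me a joke")
    # Per-request averages derived from the callback's running totals
    print(f"Avg tokens per request: {cb.total_tokens / cb.successful_requests:.1f}")
    print(f"Avg cost per request: ${cb.total_cost / cb.successful_requests:.5f}")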
https://langchain.readthedocs.io/en/latest/modules/models/llms/examples/token_usage_tracking.html