Agents#
Note
Conceptual Guide
In this part of the documentation we cover the different types of agents, regardless of which specific tools they are used with.
For a high-level overview of the different types of agents, see the documentation below.
Agent Types
For documentation on how to create a custom agent, see below.
Custom Agent
Custom LLM Agent
Custom LLM Agent (with a ChatModel)
Custom MRKL Agent
Custom MultiAction Agent
Custom Agent with Tool Retrieval
We also have in-depth documentation for each agent type.
Conversation Agent (for Chat Models)
Conversation Agent
MRKL
MRKL Chat
ReAct
Self Ask With Search
Structured Tool Chat Agent
Tools#
Note
Conceptual Guide
Tools are interfaces that an agent can use to interact with the outside world.
For an overview of what a tool is, how to use tools, and a full list of examples, please see the getting started documentation.
Getting Started
Next, we have some examples of customizing and generically working with tools.
Defining Custom Tools
Multi-Input Tools
Tool Input Schema
In this documentation we cover generic tooling functionality (e.g., how to create your own tools) as well as examples of specific tools and how to use them.
Apify
ArXiv API Tool
AWS Lambda API
Shell Tool
Bing Search
Brave Search
ChatGPT Plugins
DuckDuckGo Search
File System Tools
Google Places
Google Search
Google Serper API
Gradio Tools
GraphQL tool
HuggingFace Tools
Human as a tool
IFTTT WebHooks
Metaphor Search
Call the API
Use Metaphor as a tool
OpenWeatherMap API
PubMed Tool
Python REPL
Requests
SceneXplain
Search Tools
SearxNG Search API
SerpAPI
Twilio
Wikipedia
Wolfram Alpha
YouTubeSearchTool
Zapier Natural Language Actions API
Example with SimpleSequentialChain
Contents
List of Tools
Getting Started#
Tools are functions that agents can use to interact with the world.
These tools can be generic utilities (e.g. search), other chains, or even other agents.
Currently, tools can be loaded with the following snippet:
from langchain.agents import load_tools
tool_names = [...]
tools = load_tools(tool_names)
Some tools (e.g. chains, agents) may require a base LLM to initialize them.
In that case, you can pass in an LLM as well:
from langchain.agents import load_tools
tool_names = [...]
llm = ...
tools = load_tools(tool_names, llm=llm)
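For example, the following is a minimal sketch (it assumes the serpapi and llm-math dependencies and API keys are configured; the model choice is illustrative):
from langchain.agents import load_tools
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
# "serpapi" needs no LLM; "llm-math" is built on an LLM chain, so an LLM is passed in
tools = load_tools(["serpapi", "llm-math"], llm=llm)
print([t.name for t in tools])  # expected: ['Search', 'Calculator'] per the list below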
Below is a list of all supported tools and relevant information:
Tool Name: The name the LLM refers to the tool by.
Tool Description: The description of the tool that is passed to the LLM.
Notes: Notes about the tool that are NOT passed to the LLM.
Requires LLM: Whether this tool requires an LLM to be initialized.
(Optional) Extra Parameters: What extra parameters are required to initialize this tool.
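These attributes can be inspected directly on a loaded tool. A minimal sketch, using the python_repl tool described below:
from langchain.agents import load_tools
tool = load_tools(["python_repl"])[0]
print(tool.name)         # the name the LLM refers to the tool by, e.g. "Python REPL"
print(tool.description)  # the description that is passed to the LLM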
List of Tools#
python_repl
Tool Name: Python REPL
Tool Description: A Python shell. Use this to execute python commands. Input should be a valid python command. If you expect output it should be printed out.
Notes: Maintains state.
Requires LLM: No
serpapi
Tool Name: Search
Tool Description: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.
Notes: Calls the Serp API and then parses results.
Requires LLM: No
wolfram-alpha
Tool Name: Wolfram Alpha
Tool Description: A wolfram alpha search engine. Useful for when you need to answer questions about Math, Science, Technology, Culture, Society and Everyday Life. Input should be a search query.
Notes: Calls the Wolfram Alpha API and then parses results.
Requires LLM: No
Extra Parameters: wolfram_alpha_appid: The Wolfram Alpha app id.
requests
Tool Name: Requests
Tool Description: A portal to the internet. Use this when you need to get specific content from a site. Input should be a specific url, and the output will be all the text on that page.
Notes: Uses the Python requests module.
Requires LLM: No
terminal
Tool Name: Terminal
Tool Description: Executes commands in a terminal. Input should be valid commands, and the output will be any output from running that command.
Notes: Executes commands with subprocess.
Requires LLM: No
pal-math
Tool Name: PAL-MATH
Tool Description: A language model that is excellent at solving complex word math problems. Input should be a fully worded hard word math problem.
Notes: Based on this paper.
Requires LLM: Yes
pal-colored-objects
Tool Name: PAL-COLOR-OBJ
Tool Description: A language model that is wonderful at reasoning about position and the color attributes of objects. Input should be a fully worded hard reasoning problem. Make sure to include all information about the objects AND the final question you want to answer.
Notes: Based on this paper.
Requires LLM: Yes
llm-math
Tool Name: Calculator
Tool Description: Useful for when you need to answer questions about math.
Notes: An instance of the LLMMath chain.
Requires LLM: Yes
open-meteo-api
Tool Name: Open Meteo API
Tool Description: Useful for when you want to get weather information from the OpenMeteo API. The input should be a question in natural language that this API can answer.
Notes: A natural language connection to the Open Meteo API (https://api.open-meteo.com/), specifically the /v1/forecast endpoint.
Requires LLM: Yes
news-api
Tool Name: News API
Tool Description: Use this when you want to get information about the top headlines of current news stories. The input should be a question in natural language that this API can answer.
Notes: A natural language connection to the News API (https://newsapi.org), specifically the /v2/top-headlines endpoint.
Requires LLM: Yes
Extra Parameters: news_api_key (your API key to access this endpoint)
tmdb-api
Tool Name: TMDB API
Tool Description: Useful for when you want to get information from The Movie Database. The input should be a question in natural language that this API can answer.
Notes: A natural language connection to the TMDB API (https://api.themoviedb.org/3), specifically the /search/movie endpoint.
Requires LLM: Yes
Extra Parameters: tmdb_bearer_token (your Bearer Token to access this endpoint - note that this is different from the API key)
google-search
Tool Name: Search
Tool Description: A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query.
Notes: Uses the Google Custom Search API
Requires LLM: No
Extra Parameters: google_api_key, google_cse_id
For more information on this, see this page
searx-search
Tool Name: Search
Tool Description: A wrapper around SearxNG meta search engine. Input should be a search query.
Notes: SearxNG is easy to deploy self-hosted. It is a good privacy friendly alternative to Google Search. Uses the SearxNG API.
Requires LLM: No
Extra Parameters: searx_host
google-serper
Tool Name: Search
Tool Description: A low-cost Google Search API. Useful for when you need to answer questions about current events. Input should be a search query.
Notes: Calls the serper.dev Google Search API and then parses results.
Requires LLM: No
Extra Parameters: serper_api_key
For more information on this, see this page
wikipedia
Tool Name: Wikipedia
Tool Description: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query.
Notes: Uses the wikipedia Python package to call the MediaWiki API and then parses results.
Requires LLM: No
Extra Parameters: top_k_results
podcast-api
Tool Name: Podcast API
Tool Description: Use the Listen Notes Podcast API to search all podcasts or episodes. The input should be a question in natural language that this API can answer.
Notes: A natural language connection to the Listen Notes Podcast API (https://www.PodcastAPI.com), specifically the /search/ endpoint.
Requires LLM: Yes
Extra Parameters: listen_api_key (your api key to access this endpoint)
openweathermap-api
Tool Name: OpenWeatherMap
Tool Description: A wrapper around OpenWeatherMap API. Useful for fetching current weather information for a specified location. Input should be a location string (e.g. London,GB).
Notes: A connection to the OpenWeatherMap API (https://api.openweathermap.org), specifically the /data/2.5/weather endpoint.
Requires LLM: No
Extra Parameters: openweathermap_api_key (your API key to access this endpoint)
sleep
Tool Name: Sleep
Tool Description: Make agent sleep for some time.
Requires LLM: No
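Tools that list extra parameters can be given those values as keyword arguments to load_tools. A sketch (the API keys are placeholders; exact keyword handling may vary between versions):
from langchain.agents import load_tools
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
tools = load_tools(
    ["news-api", "wolfram-alpha"],
    llm=llm,                                    # news-api requires an LLM
    news_api_key="YOUR_NEWS_API_KEY",           # placeholder
    wolfram_alpha_appid="YOUR_WOLFRAM_APP_ID",  # placeholder
)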
Tool Input Schema#
By default, tools infer the argument schema by inspecting the function signature. For more strict requirements, custom input schema can be specified, along with custom validation logic.
from typing import Any, Dict
from langchain.agents import AgentType, initialize_agent
from langchain.llms import OpenAI
from langchain.tools.requests.tool import RequestsGetTool, TextRequestsWrapper
from pydantic import BaseModel, Field, root_validator
llm = OpenAI(temperature=0)
!pip install tldextract > /dev/null
import tldextract
_APPROVED_DOMAINS = {
"langchain",
"wikipedia",
}
class ToolInputSchema(BaseModel):
url: str = Field(...)
@root_validator
def validate_query(cls, values: Dict[str, Any]) -> Dict:
url = values["url"]
domain = tldextract.extract(url).domain
if domain not in _APPROVED_DOMAINS:
raise ValueError(f"Domain {domain} is not on the approved list:"
f" {sorted(_APPROVED_DOMAINS)}")
return values
tool = RequestsGetTool(args_schema=ToolInputSchema, requests_wrapper=TextRequestsWrapper())
agent = initialize_agent([tool], llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False)
# This will succeed, since langchain.com is on the approved domain list and passes validation
answer = agent.run("What's the main title on langchain.com?")
print(answer)
The main title of langchain.com is "LANG CHAIN 🦜️🔗 Official Home Page"
agent.run("What's the main title on google.com?")
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[7], line 1
----> 1 agent.run("What's the main title on google.com?")
File ~/code/lc/lckg/langchain/chains/base.py:213, in Chain.run(self, *args, **kwargs)
211 if len(args) != 1:
212 raise ValueError("`run` supports only one positional argument.")
--> 213 return self(args[0])[self.output_keys[0]]
215 if kwargs and not args:
216 return self(kwargs)[self.output_keys[0]]
File ~/code/lc/lckg/langchain/chains/base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116 raise e
117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
118 return self.prep_outputs(inputs, outputs, return_only_outputs)
File ~/code/lc/lckg/langchain/chains/base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
107 self.callback_manager.on_chain_start(
108 {"name": self.__class__.__name__},
109 inputs,
110 verbose=self.verbose,
111 )
112 try:
--> 113 outputs = self._call(inputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
File ~/code/lc/lckg/langchain/agents/agent.py:792, in AgentExecutor._call(self, inputs)
790 # We now enter the agent loop (until it returns something).
791 while self._should_continue(iterations, time_elapsed):
--> 792 next_step_output = self._take_next_step(
793 name_to_tool_map, color_mapping, inputs, intermediate_steps
794 )
795 if isinstance(next_step_output, AgentFinish):
796 return self._return(next_step_output, intermediate_steps)
File ~/code/lc/lckg/langchain/agents/agent.py:695, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps)
693 tool_run_kwargs["llm_prefix"] = ""
694 # We then call the tool on the tool input to get an observation
--> 695 observation = tool.run(
696 agent_action.tool_input,
697 verbose=self.verbose,
698 color=color,
699 **tool_run_kwargs,
700 )
701 else:
702 tool_run_kwargs = self.agent.tool_run_logging_kwargs()
File ~/code/lc/lckg/langchain/tools/base.py:110, in BaseTool.run(self, tool_input, verbose, start_color, color, **kwargs)
101 def run(
102 self,
103 tool_input: Union[str, Dict],
(...)
107 **kwargs: Any,
108 ) -> str:
109 """Run the tool."""
--> 110 run_input = self._parse_input(tool_input)
111 if not self.verbose and verbose is not None:
112 verbose_ = verbose
File ~/code/lc/lckg/langchain/tools/base.py:71, in BaseTool._parse_input(self, tool_input)
69 if issubclass(input_args, BaseModel):
70 key_ = next(iter(input_args.__fields__.keys()))
---> 71 input_args.parse_obj({key_: tool_input})
72 # Passing as a positional argument is more straightforward for
73 # backwards compatability
74 return tool_input
File ~/code/lc/lckg/.venv/lib/python3.11/site-packages/pydantic/main.py:526, in pydantic.main.BaseModel.parse_obj()
File ~/code/lc/lckg/.venv/lib/python3.11/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for ToolInputSchema
__root__
Domain google is not on the approved list: ['langchain', 'wikipedia'] (type=value_error)
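Because the rejected input surfaces as a pydantic ValidationError, callers can catch it explicitly if they want to handle blocked domains gracefully. A minimal sketch, not part of the original notebook:
from pydantic import ValidationError
try:
    agent.run("What's the main title on google.com?")
except ValidationError as e:
    print(f"Request blocked by the tool input schema: {e}")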
Contents
Completely New Tools - String Input and Output
Tool dataclass
Subclassing the BaseTool class
Using the tool decorator
Custom Structured Tools
StructuredTool dataclass
Subclassing the BaseTool
Using the decorator
Modify existing tools
Defining the priorities among Tools
Using tools to return directly
Handling Tool Errors
Defining Custom Tools#
When constructing your own agent, you will need to provide it with a list of Tools that it can use. Besides the actual function that is called, the Tool consists of several components:
name (str), is required and must be unique within a set of tools provided to an agent
description (str), is optional but recommended, as it is used by an agent to determine tool use
return_direct (bool), defaults to False
args_schema (Pydantic BaseModel), is optional but recommended, can be used to provide more information (e.g., few-shot examples) or validation for expected parameters.
There are two main ways to define a tool; we will cover both in the examples below.
# Import things that are needed generically
from langchain import LLMMathChain, SerpAPIWrapper
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import BaseTool, StructuredTool, Tool, tool
Initialize the LLM to use for the agent.
llm = ChatOpenAI(temperature=0)
Completely New Tools - String Input and Output#
The simplest tools accept a single query string and return a string output. If your tool function requires multiple arguments, you might want to skip down to the StructuredTool section below.
There are two ways to do this: either by using the Tool dataclass, or by subclassing the BaseTool class.
Tool dataclass#
The Tool dataclass wraps functions that accept a single string input and return a string output.
# Load the tool configs that are needed.
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm, verbose=True)
tools = [
Tool.from_function(
func=search.run,
name = "Search",
description="useful for when you need to answer questions about current events"
# coroutine= ... <- you can specify an async method if desired as well
),
]
/Users/wfh/code/lc/lckg/langchain/chains/llm_math/base.py:50: UserWarning: Directly instantiating an LLMMathChain with an llm is deprecated. Please instantiate with llm_chain argument or using the from_llm class method.
warnings.warn(
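As the warning above suggests, the chain can also be constructed with the from_llm class method; a sketch following the warning text:
# Equivalent construction that avoids the deprecation warning
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)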
You can also define a custom args_schema to provide more information about inputs.
from pydantic import BaseModel, Field
class CalculatorInput(BaseModel):
question: str = Field()
tools.append(
Tool.from_function(
func=llm_math_chain.run,
name="Calculator",
description="useful for when you need to answer questions about math",
args_schema=CalculatorInput
# coroutine= ... <- you can specify an async method if desired as well
)
)
# Construct the agent. We will use the default agent type here.
# See documentation for a full list of options.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
> Entering new AgentExecutor chain...
I need to find out Leo DiCaprio's girlfriend's name and her age
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: After rumours of a romance with Gigi Hadid, the Oscar winner has seemingly moved on. First being linked to the television personality in September 2022, it appears as if his "age bracket" has moved up. This follows his rumoured relationship with mere 19-year-old Eden Polani.
Thought:I still need to find out his current girlfriend's name and age
Action: Search
Action Input: "Leo DiCaprio current girlfriend"
Observation: Just Jared on Instagram: “Leonardo DiCaprio & girlfriend Camila Morrone couple up for a lunch date!
Thought:Now that I know his girlfriend's name is Camila Morrone, I need to find her current age
Action: Search
Action Input: "Camila Morrone age"
Observation: 25 years
Thought:Now that I have her age, I need to calculate her age raised to the 0.43 power
Action: Calculator
Action Input: 25^(0.43)
> Entering new LLMMathChain chain...
25^(0.43)```text
25**(0.43)
```
...numexpr.evaluate("25**(0.43)")...
Answer: 3.991298452658078
> Finished chain.
Observation: Answer: 3.991298452658078
Thought:I now know the final answer
Final Answer: Camila Morrone's current age raised to the 0.43 power is approximately 3.99.
> Finished chain.
"Camila Morrone's current age raised to the 0.43 power is approximately 3.99."
Subclassing the BaseTool class#
You can also directly subclass BaseTool. This is useful if you want more control over the instance variables or if you want to propagate callbacks to nested chains or other tools.
from typing import Optional, Type
from langchain.callbacks.manager import AsyncCallbackManagerForToolRun, CallbackManagerForToolRun
class CustomSearchTool(BaseTool):
name = "custom_search"
description = "useful for when you need to answer questions about current events"
def _run(self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:
"""Use the tool."""
return search.run(query)
async def _arun(self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:
"""Use the tool asynchronously."""
raise NotImplementedError("custom_search does not support async")
class CustomCalculatorTool(BaseTool):
name = "Calculator"
description = "useful for when you need to answer questions about math"
args_schema: Type[BaseModel] = CalculatorInput
def _run(self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:
"""Use the tool."""
return llm_math_chain.run(query)
async def _arun(self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:
"""Use the tool asynchronously."""
raise NotImplementedError("Calculator does not support async")
tools = [CustomSearchTool(), CustomCalculatorTool()]
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
> Entering new AgentExecutor chain...
I need to use custom_search to find out who Leo DiCaprio's girlfriend is, and then use the Calculator to raise her age to the 0.43 power.
Action: custom_search
Action Input: "Leo DiCaprio girlfriend"
Observation: After rumours of a romance with Gigi Hadid, the Oscar winner has seemingly moved on. First being linked to the television personality in September 2022, it appears as if his "age bracket" has moved up. This follows his rumoured relationship with mere 19-year-old Eden Polani.
Thought:I need to find out the current age of Eden Polani.
Action: custom_search
Action Input: "Eden Polani age"
Observation: 19 years old
Thought:Now I can use the Calculator to raise her age to the 0.43 power.
Action: Calculator
Action Input: 19 ^ 0.43
> Entering new LLMMathChain chain...
19 ^ 0.43```text
19 ** 0.43
```
...numexpr.evaluate("19 ** 0.43")...
Answer: 3.547023357958959
> Finished chain.
Observation: Answer: 3.547023357958959
Thought:I now know the final answer.
Final Answer: 3.547023357958959
> Finished chain.
'3.547023357958959'
Using the tool decorator#
To make it easier to define custom tools, a @tool decorator is provided. This decorator can be used to quickly create a Tool from a simple function. The decorator uses the function name as the tool name by default, but this can be overridden by passing a string as the first argument. Additionally, the decorator will use the function’s docstring as the tool’s description.
from langchain.tools import tool
@tool
def search_api(query: str) -> str:
"""Searches the API for the query."""
return f"Results for query {query}"
search_api
You can also provide arguments like the tool name and whether to return directly.
@tool("search", return_direct=True)
def search_api(query: str) -> str:
"""Searches the API for the query."""
return "Results"
search_api
Tool(name='search', description='search(query: str) -> str - Searches the API for the query.', args_schema=<class 'pydantic.main.SearchApi'>, return_direct=True, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x12748c4c0>, func=<function search_api at 0x16bd66310>, coroutine=None)
You can also supply args_schema to provide more information about the argument.
class SearchInput(BaseModel):
query: str = Field(description="should be a search query")
@tool("search", return_direct=True, args_schema=SearchInput)
def search_api(query: str) -> str:
"""Searches the API for the query."""
return "Results"
search_api
Tool(name='search', description='search(query: str) -> str - Searches the API for the query.', args_schema=<class '__main__.SearchInput'>, return_direct=True, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x12748c4c0>, func=<function search_api at 0x16bcf0ee0>, coroutine=None)
Custom Structured Tools#
If your functions require more structured arguments, you can use the StructuredTool class directly, or still subclass the BaseTool class.
StructuredTool dataclass#
To dynamically generate a structured tool from a given function, the fastest way to get started is with StructuredTool.from_function().
import requests
from langchain.tools import StructuredTool
def post_message(url: str, body: dict, parameters: Optional[dict] = None) -> str:
"""Sends a POST request to the given url with the given body and parameters."""
result = requests.post(url, json=body, params=parameters)
return f"Status: {result.status_code} - {result.text}"
tool = StructuredTool.from_function(post_message)
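You can inspect what was inferred from the function signature; a minimal sketch (the exact schema fields depend on the installed version):
print(tool.name)  # "post_message" - taken from the function name
print(tool.args)  # argument schema inferred from the url/body/parameters annotations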
Subclassing the BaseTool#
The BaseTool automatically infers the schema from the _run method’s signature.
from typing import Optional, Type
from langchain.callbacks.manager import AsyncCallbackManagerForToolRun, CallbackManagerForToolRun
class CustomSearchTool(BaseTool):
name = "custom_search"
description = "useful for when you need to answer questions about current events" | https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html |
b7b81808f7e4-7 | description = "useful for when you need to answer questions about current events"
def _run(self, query: str, engine: str = "google", gl: str = "us", hl: str = "en", run_manager: Optional[CallbackManagerForToolRun] = None) -> str:
"""Use the tool."""
search_wrapper = SerpAPIWrapper(params={"engine": engine, "gl": gl, "hl": hl})
return search_wrapper.run(query)
async def _arun(self, query: str, engine: str = "google", gl: str = "us", hl: str = "en", run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:
"""Use the tool asynchronously."""
raise NotImplementedError("custom_search does not support async")
# You can provide a custom args schema to add descriptions or custom validation
class SearchSchema(BaseModel):
query: str = Field(description="should be a search query")
engine: str = Field(description="should be a search engine")
gl: str = Field(description="should be a country code")
hl: str = Field(description="should be a language code")
class CustomSearchTool(BaseTool):
name = "custom_search"
description = "useful for when you need to answer questions about current events"
args_schema: Type[SearchSchema] = SearchSchema
def _run(self, query: str, engine: str = "google", gl: str = "us", hl: str = "en", run_manager: Optional[CallbackManagerForToolRun] = None) -> str:
"""Use the tool."""
search_wrapper = SerpAPIWrapper(params={"engine": engine, "gl": gl, "hl": hl})
return search_wrapper.run(query)
async def _arun(self, query: str, engine: str = "google", gl: str = "us", hl: str = "en", run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:
"""Use the tool asynchronously."""
raise NotImplementedError("custom_search does not support async")
Using the decorator#
The tool decorator creates a structured tool automatically if the signature has multiple arguments.
import requests
from langchain.tools import tool
@tool
def post_message(url: str, body: dict, parameters: Optional[dict] = None) -> str:
"""Sends a POST request to the given url with the given body and parameters."""
result = requests.post(url, json=body, params=parameters)
return f"Status: {result.status_code} - {result.text}"
Modify existing tools#
Now, we show how to load existing tools and modify them directly. In the example below, we do something really simple and change the Search tool to have the name Google Search.
from langchain.agents import load_tools
tools = load_tools(["serpapi", "llm-math"], llm=llm)
tools[0].name = "Google Search"
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
> Entering new AgentExecutor chain...
I need to find out Leo DiCaprio's girlfriend's name and her age.
Action: Google Search
Action Input: "Leo DiCaprio girlfriend" | https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html |
b7b81808f7e4-9 | Action: Google Search
Action Input: "Leo DiCaprio girlfriend"
Observation: After rumours of a romance with Gigi Hadid, the Oscar winner has seemingly moved on. First being linked to the television personality in September 2022, it appears as if his "age bracket" has moved up. This follows his rumoured relationship with mere 19-year-old Eden Polani.
Thought:I still need to find out his current girlfriend's name and her age.
Action: Google Search
Action Input: "Leo DiCaprio current girlfriend age"
Observation: Leonardo DiCaprio has been linked with 19-year-old model Eden Polani, continuing the rumour that he doesn't date any women over the age of ...
Thought:I need to find out the age of Eden Polani.
Action: Calculator
Action Input: 19^(0.43)
Observation: Answer: 3.547023357958959
Thought:I now know the final answer.
Final Answer: The age of Leo DiCaprio's girlfriend raised to the 0.43 power is approximately 3.55.
> Finished chain.
"The age of Leo DiCaprio's girlfriend raised to the 0.43 power is approximately 3.55."
Defining the priorities among Tools#
When you create a custom tool, you may want the agent to favor it over the standard tools.
For example, suppose you made a custom tool that gets information on music from your database. When a user asks about songs, you want the agent to use your custom tool rather than the normal Search tool, but the agent might still prioritize the normal Search tool.
This can be accomplished by adding a statement such as Use this more than the normal search if the question is about Music, like 'who is the singer of yesterday?' or 'what is the most popular song in 2022?' to the description.
An example is below.
# Import things that are needed generically
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.llms import OpenAI
from langchain import LLMMathChain, SerpAPIWrapper
search = SerpAPIWrapper()
tools = [
Tool(
name = "Search",
func=search.run,
description="useful for when you need to answer questions about current events"
),
Tool(
name="Music Search",
func=lambda x: "'All I Want For Christmas Is You' by Mariah Carey.", #Mock Function
description="A Music search engine. Use this more than the normal search if the question is about Music, like 'who is the singer of yesterday?' or 'what is the most popular song in 2022?'",
)
]
agent = initialize_agent(tools, OpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("what is the most famous song of christmas")
> Entering new AgentExecutor chain...
I should use a music search engine to find the answer
Action: Music Search
Action Input: most famous song of christmas'All I Want For Christmas Is You' by Mariah Carey. I now know the final answer
Final Answer: 'All I Want For Christmas Is You' by Mariah Carey.
> Finished chain.
"'All I Want For Christmas Is You' by Mariah Carey."
Using tools to return directly#
Often, it can be desirable to have a tool's output returned directly to the user if it is called. You can do this easily with LangChain by setting the return_direct flag of a tool to True.
llm_math_chain = LLMMathChain(llm=llm)
tools = [
Tool(
name="Calculator",
func=llm_math_chain.run,
description="useful for when you need to answer questions about math",
return_direct=True
)
]
llm = OpenAI(temperature=0)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("whats 2**.12")
> Entering new AgentExecutor chain...
I need to calculate this
Action: Calculator
Action Input: 2**.12Answer: 1.086734862526058
> Finished chain.
'Answer: 1.086734862526058'
Handling Tool Errors#
When a tool encounters an error and the exception is not caught, the agent will stop executing. If you want the agent to continue, you can raise a ToolException and set handle_tool_error accordingly.
When a ToolException is thrown, the agent does not stop; instead it handles the exception according to the tool's handle_tool_error setting, and the result of that handling is returned to the agent as an observation and printed in red.
You can set handle_tool_error to True, to a fixed string value, or to a function. If it is set to a function, that function should take a ToolException as its parameter and return a str value.
Please note that raising a ToolException alone is not enough: you must also set handle_tool_error on the tool, because its default value is False.
from langchain.schema import ToolException
from langchain import SerpAPIWrapper
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import Tool
def _handle_error(error:ToolException) -> str:
return "The following errors occurred during tool execution:" + error.args[0]+ "Please try another tool."
def search_tool1(s: str):raise ToolException("The search tool1 is not available.")
def search_tool2(s: str):raise ToolException("The search tool2 is not available.")
search_tool3 = SerpAPIWrapper()
description="useful for when you need to answer questions about current events.You should give priority to using it."
tools = [
Tool.from_function(
func=search_tool1,
name="Search_tool1",
description=description,
handle_tool_error=True,
),
Tool.from_function(
func=search_tool2,
name="Search_tool2",
description=description,
handle_tool_error=_handle_error,
),
Tool.from_function(
func=search_tool3.run,
name="Search_tool3",
description="useful for when you need to answer questions about current events",
),
]
agent = initialize_agent(
tools,
ChatOpenAI(temperature=0),
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
)
agent.run("Who is Leo DiCaprio's girlfriend?")
> Entering new AgentExecutor chain...
I should use Search_tool1 to find recent news articles about Leo DiCaprio's personal life.
Action: Search_tool1
Action Input: "Leo DiCaprio girlfriend"
Observation: The search tool1 is not available.
Thought:I should try using Search_tool2 instead.
Action: Search_tool2
Action Input: "Leo DiCaprio girlfriend"
Observation: The following errors occurred during tool execution:The search tool2 is not available.Please try another tool.
Thought:I should try using Search_tool3 as a last resort.
Action: Search_tool3
Action Input: "Leo DiCaprio girlfriend"
Observation: Leonardo DiCaprio and Gigi Hadid were recently spotted at a pre-Oscars party, sparking interest once again in their rumored romance. The Revenant actor and the model first made headlines when they were spotted together at a New York Fashion Week afterparty in September 2022.
Thought:Based on the information from Search_tool3, it seems that Gigi Hadid is currently rumored to be Leo DiCaprio's girlfriend.
Final Answer: Gigi Hadid is currently rumored to be Leo DiCaprio's girlfriend.
> Finished chain.
"Gigi Hadid is currently rumored to be Leo DiCaprio's girlfriend."
Contents
Multi-Input Tools with a string format
Multi-Input Tools#
This notebook shows how to use a tool that requires multiple inputs with an agent. The recommended way to do so is with the StructuredTool class.
import os
os.environ["LANGCHAIN_TRACING"] = "true"
from langchain import OpenAI
from langchain.agents import initialize_agent, AgentType
llm = OpenAI(temperature=0)
from langchain.tools import StructuredTool
def multiplier(a: float, b: float) -> float:
"""Multiply the provided floats."""
return a * b
tool = StructuredTool.from_function(multiplier)
# Structured tools are compatible with the STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION agent type.
agent_executor = initialize_agent([tool], llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent_executor.run("What is 3 times 4")
> Entering new AgentExecutor chain...
Thought: I need to multiply 3 and 4
Action:
```
{
"action": "multiplier",
"action_input": {"a": 3, "b": 4}
}
```
Observation: 12
Thought: I know what to respond
Action:
```
{
"action": "Final Answer",
"action_input": "3 times 4 is 12"
}
```
> Finished chain.
'3 times 4 is 12'
Multi-Input Tools with a string format#
An alternative to the structured tool would be to use the regular Tool class and accept a single string. The tool would then have to handle the parsing logic to extract the relevant values from the text, which tightly couples the tool representation to the agent prompt. This is still useful if the underlying language model can't reliably generate a structured schema.
Let’s take the multiplication function as an example. In order to use this, we will tell the agent to generate the “Action Input” as a comma-separated list of length two. We will then write a thin wrapper that takes a string, splits it into two around a comma, and passes both parsed sides as integers to the multiplication function.
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
Here is the multiplication function, as well as a wrapper to parse a string as input.
def multiplier(a, b):
return a * b
def parsing_multiplier(string):
a, b = string.split(",")
return multiplier(int(a), int(b))
llm = OpenAI(temperature=0)
tools = [
Tool(
name = "Multiplier",
func=parsing_multiplier,
description="useful for when you need to multiply two numbers together. The input to this tool should be a comma separated list of numbers of length two, representing the two numbers you want to multiply together. For example, `1,2` would be the input if you wanted to multiply 1 by 2."
)
]
mrkl = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
mrkl.run("What is 3 times 4")
> Entering new AgentExecutor chain...
I need to multiply two numbers
Action: Multiplier
Action Input: 3,4
Observation: 12
Thought: I now know the final answer
Final Answer: 3 times 4 is 12
> Finished chain.
'3 times 4 is 12'
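Because the string-format tool owns its parsing logic, it is worth making the wrapper tolerant of imperfect agent output. One possible variant (a sketch, not part of the original notebook) that strips whitespace and reports parse failures back to the agent as the observation:
def parsing_multiplier(string):
    try:
        a, b = (part.strip() for part in string.split(","))
        return multiplier(int(a), int(b))
    except ValueError as err:
        # Returned text becomes the observation, so the agent can retry with corrected input
        return f"Could not parse {string!r}: expected two comma-separated integers. ({err})"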
Contents
Custom Parameters
Obtaining results with metadata
SearxNG Search API#
This notebook goes over how to use a self hosted SearxNG search API to search the web.
You can check this link for more information about Searx API parameters.
import pprint
from langchain.utilities import SearxSearchWrapper
search = SearxSearchWrapper(searx_host="http://127.0.0.1:8888")
For some engines, if a direct answer is available the wrapper will print the answer instead of the full list of search results. You can use the results method of the wrapper if you want to obtain all the results.
search.run("What is the capital of France")
'Paris is the capital of France, the largest country of Europe with 550 000 km2 (65 millions inhabitants). Paris has 2.234 million inhabitants end 2011. She is the core of Ile de France region (12 million people).'
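The wrapper can be exposed to an agent in the same way as the other search utilities in this documentation; a minimal sketch (assumes an agent setup like the custom-tools examples above):
from langchain.agents import Tool
searx_tool = Tool(
    name="Searx Search",
    func=search.run,
    description="useful for when you need to answer questions about current events",
)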
Custom Parameters#
SearxNG supports up to 139 search engines. You can also customize the Searx wrapper with arbitrary named parameters that will be passed to the Searx search API. In the example below we will make more interesting use of the custom search parameters of the Searx search API.
In this example we will be using the engines parameter to query Wikipedia.
search = SearxSearchWrapper(searx_host="http://127.0.0.1:8888", k=5) # k is for max number of items
search.run("large language model ", engines=['wiki']) | https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html |
6cadad82d9a6-1 | search.run("large language model ", engines=['wiki'])
'Large language models (LLMs) represent a major advancement in AI, with the promise of transforming domains through learned knowledge. LLM sizes have been increasing 10X every year for the last few years, and as these models grow in complexity and size, so do their capabilities.\n\nGPT-3 can translate language, write essays, generate computer code, and more — all with limited to no supervision. In July 2020, OpenAI unveiled GPT-3, a language model that was easily the largest known at the time. Put simply, GPT-3 is trained to predict the next word in a sentence, much like how a text message autocomplete feature works.\n\nA large language model, or LLM, is a deep learning algorithm that can recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive datasets. Large language models are among the most successful applications of transformer models.\n\nAll of today’s well-known language models—e.g., GPT-3 from OpenAI, PaLM or LaMDA from Google, Galactica or OPT from Meta, Megatron-Turing from Nvidia/Microsoft, Jurassic-1 from AI21 Labs—are...\n\nLarge language models (LLMs) such as GPT-3are increasingly being used to generate text. These tools should be used with care, since they can generate content that is biased, non-verifiable, constitutes original research, or violates copyrights.'
Passing other Searx parameters, such as language:
search = SearxSearchWrapper(searx_host="http://127.0.0.1:8888", k=1)
search.run("deep learning", language='es', engines=['wiki']) | https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html |
6cadad82d9a6-2 | search.run("deep learning", language='es', engines=['wiki'])
'Aprendizaje profundo (en inglés, deep learning) es un conjunto de algoritmos de aprendizaje automático (en inglés, machine learning) que intenta modelar abstracciones de alto nivel en datos usando arquitecturas computacionales que admiten transformaciones no lineales múltiples e iterativas de datos expresados en forma matricial o tensorial. 1'
Obtaining results with metadata#
In this example we will be looking for scientific papers using the categories parameter and limiting the results to a time_range (not all engines support the time range option).
We would also like to obtain the results in a structured way, including metadata. For this we will be using the results method of the wrapper.
search = SearxSearchWrapper(searx_host="http://127.0.0.1:8888")
results = search.results("Large Language Model prompt", num_results=5, categories='science', time_range='year')
pprint.pp(results)
[{'snippet': '… on natural language instructions, large language models (… the '
'prompt used to steer the model, and most effective prompts … to '
'prompt engineering, we propose Automatic Prompt …',
'title': 'Large language models are human-level prompt engineers',
'link': 'https://arxiv.org/abs/2211.01910',
'engines': ['google scholar'],
'category': 'science'},
{'snippet': '… Large language models (LLMs) have introduced new possibilities '
'for prototyping with AI [18]. Pre-trained on a large amount of '
'text data, models … language instructions called prompts. …',
'title': 'Promptchainer: Chaining large language model prompts through '
'visual programming',
'link': 'https://dl.acm.org/doi/abs/10.1145/3491101.3519729',
'engines': ['google scholar'],
'category': 'science'},
{'snippet': '… can introspect the large prompt model. We derive the view '
'ϕ0(X) and the model h0 from T01. However, instead of fully '
'fine-tuning T0 during co-training, we focus on soft prompt '
'tuning, …',
'title': 'Co-training improves prompt-based learning for large language '
'models',
'link': 'https://proceedings.mlr.press/v162/lang22a.html',
'engines': ['google scholar'],
'category': 'science'},
{'snippet': '… With the success of large language models (LLMs) of code and '
'their use as … prompt design process become important. In this '
'work, we propose a framework called Repo-Level Prompt …',
'title': 'Repository-level prompt generation for large language models of '
'code',
'link': 'https://arxiv.org/abs/2206.12839',
'engines': ['google scholar'],
'category': 'science'},
{'snippet': '… Figure 2 | The benefits of different components of a prompt '
'for the largest language model (Gopher), as estimated from '
'hierarchical logistic regression. Each point estimates the '
'unique …',
'title': 'Can language models learn from explanations in context?',
'link': 'http://arxiv.org/abs/2204.02329',
'engines': ['google scholar'],
'category': 'science'}]
Get papers from arxiv
results = search.results("Large Language Model prompt", num_results=5, engines=['arxiv'])
pprint.pp(results)
[{'snippet': 'Thanks to the advanced improvement of large pre-trained language '
'models, prompt-based fine-tuning is shown to be effective on a '
'variety of downstream tasks. Though many prompting methods have '
'been investigated, it remains unknown which type of prompts are '
'the most effective among three types of prompts (i.e., '
'human-designed prompts, schema prompts and null prompts). In '
'this work, we empirically compare the three types of prompts '
'under both few-shot and fully-supervised settings. Our '
'experimental results show that schema prompts are the most '
'effective in general. Besides, the performance gaps tend to '
'diminish when the scale of training data grows large.',
'title': 'Do Prompts Solve NLP Tasks Using Natural Language?',
'link': 'http://arxiv.org/abs/2203.00902v1',
'engines': ['arxiv'],
'category': 'science'},
{'snippet': 'Cross-prompt automated essay scoring (AES) requires the system '
'to use non target-prompt essays to award scores to a '
'target-prompt essay. Since obtaining a large quantity of '
'pre-graded essays to a particular prompt is often difficult and '
'unrealistic, the task of cross-prompt AES is vital for the '
'development of real-world AES systems, yet it remains an '
'under-explored area of research. Models designed for '
'prompt-specific AES rely heavily on prompt-specific knowledge '
'and perform poorly in the cross-prompt setting, whereas current '
'approaches to cross-prompt AES either require a certain quantity '
'of labelled target-prompt essays or require a large quantity of '
'unlabelled target-prompt essays to perform transfer learning in '
'a multi-step manner. To address these issues, we introduce '
'Prompt Agnostic Essay Scorer (PAES) for cross-prompt AES. Our '
'method requires no access to labelled or unlabelled '
'target-prompt data during training and is a single-stage '
'approach. PAES is easy to apply in practice and achieves '
'state-of-the-art performance on the Automated Student Assessment '
'Prize (ASAP) dataset.',
'title': 'Prompt Agnostic Essay Scorer: A Domain Generalization Approach to '
'Cross-prompt Automated Essay Scoring',
'link': 'http://arxiv.org/abs/2008.01441v1',
'engines': ['arxiv'],
'category': 'science'},
{'snippet': 'Research on prompting has shown excellent performance with '
'little or even no supervised training across many tasks. '
'However, prompting for machine translation is still '
'under-explored in the literature. We fill this gap by offering a '
'systematic study on prompting strategies for translation, '
'examining various factors for prompt template and demonstration '
'example selection. We further explore the use of monolingual '
'data and the feasibility of cross-lingual, cross-domain, and '
'sentence-to-document transfer learning in prompting. Extensive '
'experiments with GLM-130B (Zeng et al., 2022) as the testbed '
'show that 1) the number and the quality of prompt examples '
'matter, where using suboptimal examples degenerates translation; '
'2) several features of prompt examples, such as semantic '
'similarity, show significant Spearman correlation with their '
'prompting performance; yet, none of the correlations are strong '
'enough; 3) using pseudo parallel prompt examples constructed '
'from monolingual data via zero-shot prompting could improve '
'translation; and 4) improved performance is achievable by '
'transferring knowledge from prompt examples selected in other '
'settings. We finally provide an analysis on the model outputs '
'and discuss several problems that prompting still suffers from.',
'title': 'Prompting Large Language Model for Machine Translation: A Case '
'Study',
'link': 'http://arxiv.org/abs/2301.07069v2',
'engines': ['arxiv'],
'category': 'science'},
{'snippet': 'Large language models can perform new tasks in a zero-shot '
'fashion, given natural language prompts that specify the desired '
'behavior. Such prompts are typically hand engineered, but can '
'also be learned with gradient-based methods from labeled data. '
'However, it is underexplored what factors make the prompts '
'effective, especially when the prompts are natural language. In '
'this paper, we investigate common attributes shared by effective '
'prompts. We first propose a human readable prompt tuning method '
'(F LUENT P ROMPT) based on Langevin dynamics that incorporates a '
'fluency constraint to find a diverse distribution of effective '
'and fluent prompts. Our analysis reveals that effective prompts '
'are topically related to the task domain and calibrate the prior '
'probability of label words. Based on these findings, we also '
'propose a method for generating prompts using only unlabeled '
'data, outperforming strong baselines by an average of 7.0% '
'accuracy across three tasks.',
'title': "Toward Human Readable Prompt Tuning: Kubrick's The Shining is a "
'good movie, and a good prompt too?',
'link': 'http://arxiv.org/abs/2212.10539v1',
'engines': ['arxiv'],
'category': 'science'},
{'snippet': 'Prevailing methods for mapping large generative language models '
"to supervised tasks may fail to sufficiently probe models' novel "
'capabilities. Using GPT-3 as a case study, we show that 0-shot '
'prompts can significantly outperform few-shot prompts. We '
'suggest that the function of few-shot examples in these cases is '
'better described as locating an already learned task rather than '
'meta-learning. This analysis motivates rethinking the role of '
'prompts in controlling and evaluating powerful language models. '
'In this work, we discuss methods of prompt programming, '
'emphasizing the usefulness of considering prompts through the '
'lens of natural language. We explore techniques for exploiting '
'the capacity of narratives and cultural anchors to encode '
'nuanced intentions and techniques for encouraging deconstruction '
'of a problem into components before producing a verdict. '
'Informed by this more encompassing theory of prompt programming, '
'we also introduce the idea of a metaprompt that seeds the model '
'to generate its own natural language prompts for a range of '
'tasks. Finally, we discuss how these more general methods of '
'interacting with language models can be incorporated into '
'existing and future benchmarks and practical applications.',
'title': 'Prompt Programming for Large Language Models: Beyond the Few-Shot '
'Paradigm',
'link': 'http://arxiv.org/abs/2102.07350v1',
'engines': ['arxiv'],
'category': 'science'}]
In this example we query for large language models under the it category. We then filter the results to keep only those that come from GitHub.
results = search.results("large language model", num_results = 20, categories='it')
pprint.pp(list(filter(lambda r: r['engines'][0] == 'github', results)))
[{'snippet': 'Guide to using pre-trained large language models of source code',
'title': 'Code-LMs',
'link': 'https://github.com/VHellendoorn/Code-LMs',
'engines': ['github'],
'category': 'it'},
{'snippet': 'Dramatron uses large language models to generate coherent '
'scripts and screenplays.',
'title': 'dramatron',
'link': 'https://github.com/deepmind/dramatron',
'engines': ['github'],
'category': 'it'}]
We could also directly query for results from github and other source forges.
results = search.results("large language model", num_results = 20, engines=['github', 'gitlab'])
pprint.pp(results)
[{'snippet': "Implementation of 'A Watermark for Large Language Models' paper "
'by Kirchenbauer & Geiping et. al.',
'title': 'Peutlefaire / LMWatermark',
'link': 'https://gitlab.com/BrianPulfer/LMWatermark',
'engines': ['gitlab'],
'category': 'it'},
{'snippet': 'Guide to using pre-trained large language models of source code',
'title': 'Code-LMs',
'link': 'https://github.com/VHellendoorn/Code-LMs',
'engines': ['github'],
'category': 'it'},
{'snippet': '',
'title': 'Simen Burud / Large-scale Language Models for Conversational '
'Speech Recognition',
'link': 'https://gitlab.com/BrianPulfer',
'engines': ['gitlab'],
'category': 'it'},
{'snippet': 'Dramatron uses large language models to generate coherent '
'scripts and screenplays.',
'title': 'dramatron',
'link': 'https://github.com/deepmind/dramatron',
'engines': ['github'],
'category': 'it'},
{'snippet': 'Code for loralib, an implementation of "LoRA: Low-Rank '
'Adaptation of Large Language Models"',
'title': 'LoRA',
'link': 'https://github.com/microsoft/LoRA',
'engines': ['github'],
'category': 'it'},
{'snippet': 'Code for the paper "Evaluating Large Language Models Trained on '
'Code"',
'title': 'human-eval',
'link': 'https://github.com/openai/human-eval',
'engines': ['github'],
'category': 'it'},
{'snippet': 'A trend starts from "Chain of Thought Prompting Elicits '
'Reasoning in Large Language Models".',
'title': 'Chain-of-ThoughtsPapers',
'link': 'https://github.com/Timothyxxx/Chain-of-ThoughtsPapers',
'engines': ['github'],
'category': 'it'},
{'snippet': 'Mistral: A strong, northwesterly wind: Framework for transparent '
'and accessible large-scale language model training, built with '
'Hugging Face 🤗 Transformers.',
'title': 'mistral',
'link': 'https://github.com/stanford-crfm/mistral',
'engines': ['github'],
'category': 'it'},
{'snippet': 'A prize for finding tasks that cause large language models to '
'show inverse scaling',
'title': 'prize',
'link': 'https://github.com/inverse-scaling/prize',
'engines': ['github'], | https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html |
'category': 'it'},
{'snippet': 'Optimus: the first large-scale pre-trained VAE language model',
'title': 'Optimus',
'link': 'https://github.com/ChunyuanLI/Optimus',
'engines': ['github'],
'category': 'it'},
{'snippet': 'Seminar on Large Language Models (COMP790-101 at UNC Chapel '
'Hill, Fall 2022)',
'title': 'llm-seminar',
'link': 'https://github.com/craffel/llm-seminar',
'engines': ['github'],
'category': 'it'},
{'snippet': 'A central, open resource for data and tools related to '
'chain-of-thought reasoning in large language models. Developed @ '
'Samwald research group: https://samwald.info/',
'title': 'ThoughtSource',
'link': 'https://github.com/OpenBioLink/ThoughtSource',
'engines': ['github'],
'category': 'it'},
{'snippet': 'A comprehensive list of papers using large language/multi-modal '
'models for Robotics/RL, including papers, codes, and related '
'websites',
'title': 'Awesome-LLM-Robotics',
'link': 'https://github.com/GT-RIPL/Awesome-LLM-Robotics',
'engines': ['github'],
'category': 'it'},
{'snippet': 'Tools for curating biomedical training data for large-scale '
'language modeling',
'title': 'biomedical',
'link': 'https://github.com/bigscience-workshop/biomedical', | https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html |
'engines': ['github'],
'category': 'it'},
{'snippet': 'ChatGPT @ Home: Large Language Model (LLM) chatbot application, '
'written by ChatGPT',
'title': 'ChatGPT-at-Home',
'link': 'https://github.com/Sentdex/ChatGPT-at-Home',
'engines': ['github'],
'category': 'it'},
{'snippet': 'Design and Deploy Large Language Model Apps',
'title': 'dust',
'link': 'https://github.com/dust-tt/dust',
'engines': ['github'],
'category': 'it'},
{'snippet': 'Polyglot: Large Language Models of Well-balanced Competence in '
'Multi-languages',
'title': 'polyglot',
'link': 'https://github.com/EleutherAI/polyglot',
'engines': ['github'],
'category': 'it'},
{'snippet': 'Code release for "Learning Video Representations from Large '
'Language Models"',
'title': 'LaViLa',
'link': 'https://github.com/facebookresearch/LaViLa',
'engines': ['github'],
'category': 'it'},
{'snippet': 'SmoothQuant: Accurate and Efficient Post-Training Quantization '
'for Large Language Models',
'title': 'smoothquant',
'link': 'https://github.com/mit-han-lab/smoothquant',
'engines': ['github'],
'category': 'it'}, | https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html |
{'snippet': 'This repository contains the code, data, and models of the paper '
'titled "XL-Sum: Large-Scale Multilingual Abstractive '
'Summarization for 44 Languages" published in Findings of the '
'Association for Computational Linguistics: ACL-IJCNLP 2021.',
'title': 'xl-sum',
'link': 'https://github.com/csebuetnlp/xl-sum',
'engines': ['github'],
'category': 'it'}]
previous
Search Tools
next
SerpAPI
Contents
Custom Parameters
Obtaining results with metadata
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/agents/tools/examples/searx_search.html |
00a2039b4fd1-0 | .ipynb
.pdf
Wolfram Alpha
Wolfram Alpha#
This notebook goes over how to use the Wolfram Alpha component.
First, you need to set up your Wolfram Alpha developer account and get your APP ID:
Go to Wolfram Alpha and sign up for a developer account here
Create an app and get your APP ID
Install the wolframalpha Python package:
pip install wolframalpha
Then we will need to set some environment variables:
Save your APP ID into the WOLFRAM_ALPHA_APPID env variable
import os
os.environ["WOLFRAM_ALPHA_APPID"] = ""
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
wolfram = WolframAlphaAPIWrapper()
wolfram.run("What is 2x+5 = -3x + 7?")
'x = 2/5'
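The wrapper can also be handed to an agent. A minimal sketch, assuming the "wolfram-alpha" tool name registered with load_tools, the WOLFRAM_ALPHA_APPID variable set as above, and an OpenAI API key in the environment:
from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType

llm = OpenAI(temperature=0)
tools = load_tools(["wolfram-alpha"])  # assumes WOLFRAM_ALPHA_APPID is set
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Solve 2x + 5 = -3x + 7 for x")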
previous
Wikipedia
next
YouTubeSearchTool
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/agents/tools/examples/wolfram_alpha.html |
3d6ef38b599e-0 | .ipynb
.pdf
Search Tools
Contents
Google Serper API Wrapper
SerpAPI
GoogleSearchAPIWrapper
SearxNG Meta Search Engine
Search Tools#
This notebook shows off usage of various search tools.
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
Google Serper API Wrapper#
First, let’s try to use the Google Serper API tool.
tools = load_tools(["google-serper"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is the weather in Pomfret?")
> Entering new AgentExecutor chain...
I should look up the current weather conditions.
Action: Search
Action Input: "weather in Pomfret"
Observation: 37°F
Thought: I now know the current temperature in Pomfret.
Final Answer: The current temperature in Pomfret is 37°F.
> Finished chain.
'The current temperature in Pomfret is 37°F.'
SerpAPI#
Now, let’s use the SerpAPI tool.
tools = load_tools(["serpapi"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is the weather in Pomfret?")
> Entering new AgentExecutor chain...
I need to find out what the current weather is in Pomfret.
Action: Search
Action Input: "weather in Pomfret" | https://python.langchain.com/en/latest/modules/agents/tools/examples/search_tools.html |
Observation: Partly cloudy skies during the morning hours will give way to cloudy skies with light rain and snow developing in the afternoon. High 42F. Winds WNW at 10 to 15 ...
Thought: I now know the current weather in Pomfret.
Final Answer: Partly cloudy skies during the morning hours will give way to cloudy skies with light rain and snow developing in the afternoon. High 42F. Winds WNW at 10 to 15 mph.
> Finished chain.
'Partly cloudy skies during the morning hours will give way to cloudy skies with light rain and snow developing in the afternoon. High 42F. Winds WNW at 10 to 15 mph.'
GoogleSearchAPIWrapper#
Now, let’s use the official Google Search API Wrapper.
tools = load_tools(["google-search"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is the weather in Pomfret?")
> Entering new AgentExecutor chain...
I should look up the current weather conditions.
Action: Google Search
Action Input: "weather in Pomfret" | https://python.langchain.com/en/latest/modules/agents/tools/examples/search_tools.html |
Observation: Showers early becoming a steady light rain later in the day. Near record high temperatures. High around 60F. Winds SW at 10 to 15 mph. Chance of rain 60%. Pomfret, CT Weather Forecast, with current conditions, wind, air quality, and what to expect for the next 3 days. Hourly Weather-Pomfret, CT. As of 12:52 am EST. Special Weather Statement +2 ... Hazardous Weather Conditions. Special Weather Statement ... Pomfret CT. Tonight ... National Digital Forecast Database Maximum Temperature Forecast. Pomfret Center Weather Forecasts. Weather Underground provides local & long-range weather forecasts, weatherreports, maps & tropical weather conditions for ... Pomfret, CT 12 hour by hour weather forecast includes precipitation, temperatures, sky conditions, rain chance, dew-point, relative humidity, wind direction ... North Pomfret Weather Forecasts. Weather Underground provides local & long-range weather forecasts, weatherreports, maps & tropical weather conditions for ... Today's Weather - Pomfret, CT. Dec 31, 2022 4:00 PM. Putnam MS. --. Weather forecast icon. Feels like --. Hi --. Lo --. Pomfret, CT temperature trend for the next 14 Days. Find daytime highs and nighttime lows from TheWeatherNetwork.com. Pomfret, MD Weather Forecast Date: 332 PM EST Wed Dec 28 2022. The area/counties/county of: Charles, including the cites of: St. Charles and Waldorf.
Thought: I now know the current weather conditions in Pomfret. | https://python.langchain.com/en/latest/modules/agents/tools/examples/search_tools.html |
Final Answer: Showers early becoming a steady light rain later in the day. Near record high temperatures. High around 60F. Winds SW at 10 to 15 mph. Chance of rain 60%.
> Finished AgentExecutor chain.
'Showers early becoming a steady light rain later in the day. Near record high temperatures. High around 60F. Winds SW at 10 to 15 mph. Chance of rain 60%.'
SearxNG Meta Search Engine#
Here we will be using a self-hosted SearxNG meta search engine.
tools = load_tools(["searx-search"], searx_host="http://localhost:8888", llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is the weather in Pomfret")
> Entering new AgentExecutor chain...
I should look up the current weather
Action: SearX Search
Action Input: "weather in Pomfret"
Observation: Mainly cloudy with snow showers around in the morning. High around 40F. Winds NNW at 5 to 10 mph. Chance of snow 40%. Snow accumulations less than one inch.
10 Day Weather - Pomfret, MD As of 1:37 pm EST Today 49°/ 41° 52% Mon 27 | Day 49° 52% SE 14 mph Cloudy with occasional rain showers. High 49F. Winds SE at 10 to 20 mph. Chance of rain 50%.... | https://python.langchain.com/en/latest/modules/agents/tools/examples/search_tools.html |
3d6ef38b599e-4 | 10 Day Weather - Pomfret, VT As of 3:51 am EST Special Weather Statement Today 39°/ 32° 37% Wed 01 | Day 39° 37% NE 4 mph Cloudy with snow showers developing for the afternoon. High 39F....
Pomfret, CT ; Current Weather. 1:06 AM. 35°F · RealFeel® 32° ; TODAY'S WEATHER FORECAST. 3/3. 44°Hi. RealFeel® 50° ; TONIGHT'S WEATHER FORECAST. 3/3. 32°Lo.
Pomfret, MD Forecast Today Hourly Daily Morning 41° 1% Afternoon 43° 0% Evening 35° 3% Overnight 34° 2% Don't Miss Finally, Here’s Why We Get More Colds and Flu When It’s Cold Coast-To-Coast...
Pomfret, MD Weather Forecast | AccuWeather Current Weather 5:35 PM 35° F RealFeel® 36° RealFeel Shade™ 36° Air Quality Excellent Wind E 3 mph Wind Gusts 5 mph Cloudy More Details WinterCast...
Pomfret, VT Weather Forecast | AccuWeather Current Weather 11:21 AM 23° F RealFeel® 27° RealFeel Shade™ 25° Air Quality Fair Wind ESE 3 mph Wind Gusts 7 mph Cloudy More Details WinterCast...
Pomfret Center, CT Weather Forecast | AccuWeather Daily Current Weather 6:50 PM 39° F RealFeel® 36° Air Quality Fair Wind NW 6 mph Wind Gusts 16 mph Mostly clear More Details WinterCast... | https://python.langchain.com/en/latest/modules/agents/tools/examples/search_tools.html |
3d6ef38b599e-5 | 12:00 pm · Feels Like36° · WindN 5 mph · Humidity43% · UV Index3 of 10 · Cloud Cover65% · Rain Amount0 in ...
Pomfret Center, CT Weather Conditions | Weather Underground star Popular Cities San Francisco, CA 49 °F Clear Manhattan, NY 37 °F Fair Schiller Park, IL (60176) warning39 °F Mostly Cloudy...
Thought: I now know the final answer
Final Answer: The current weather in Pomfret is mainly cloudy with snow showers around in the morning. The temperature is around 40F with winds NNW at 5 to 10 mph. Chance of snow is 40%.
> Finished chain.
'The current weather in Pomfret is mainly cloudy with snow showers around in the morning. The temperature is around 40F with winds NNW at 5 to 10 mph. Chance of snow is 40%.'
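For reference, the same self-hosted instance can also be queried without an agent by using the wrapper directly (a short sketch; the searx_host below mirrors the tool configuration above):
from langchain.utilities import SearxSearchWrapper

searx = SearxSearchWrapper(searx_host="http://localhost:8888")
print(searx.run("weather in Pomfret"))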
previous
SceneXplain
next
SearxNG Search API
Contents
Google Serper API Wrapper
SerpAPI
GoogleSearchAPIWrapper
SearxNG Meta Search Engine
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/agents/tools/examples/search_tools.html |
d04c744db019-0 | .ipynb
.pdf
HuggingFace Tools
HuggingFace Tools#
Hugging Face tools that support text I/O can be loaded directly using the load_huggingface_tool function.
# Requires transformers>=4.29.0 and huggingface_hub>=0.14.1
!pip install --upgrade transformers huggingface_hub > /dev/null
from langchain.agents import load_huggingface_tool
tool = load_huggingface_tool("lysandre/hf-model-downloads")
print(f"{tool.name}: {tool.description}")
model_download_counter: This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It takes the name of the category (such as text-classification, depth-estimation, etc), and returns the name of the checkpoint
tool.run("text-classification")
'facebook/bart-large-mnli'
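The loaded tool can also be passed to an agent. A short sketch (hypothetical usage, not from the notebook above), assuming an OpenAI API key is configured:
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, AgentType

llm = OpenAI(temperature=0)
agent = initialize_agent([tool], llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Which text-classification model is downloaded most on the Hugging Face Hub?")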
previous
GraphQL tool
next
Human as a tool
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/agents/tools/examples/huggingface_tools.html |
7df0ffe1e0a8-0 | .ipynb
.pdf
Apify
Apify#
This notebook shows how to use the Apify integration for LangChain.
Apify is a cloud platform for web scraping and data extraction,
which provides an ecosystem of more than a thousand
ready-made apps called Actors for various web scraping, crawling, and data extraction use cases.
For example, you can use it to extract Google Search results, Instagram and Facebook profiles, products from Amazon or Shopify, Google Maps reviews, and more.
In this example, we’ll use the Website Content Crawler Actor,
which can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs,
and extract text content from the web pages. Then we feed the documents into a vector index and answer questions from it.
#!pip install apify-client
First, import ApifyWrapper into your source code:
from langchain.document_loaders.base import Document
from langchain.indexes import VectorstoreIndexCreator
from langchain.utilities import ApifyWrapper
Initialize it using your Apify API token and for the purpose of this example, also with your OpenAI API key:
import os
os.environ["OPENAI_API_KEY"] = "Your OpenAI API key"
os.environ["APIFY_API_TOKEN"] = "Your Apify API token"
apify = ApifyWrapper()
Then run the Actor, wait for it to finish, and fetch its results from the Apify dataset into a LangChain document loader.
Note that if you already have some results in an Apify dataset, you can load them directly using ApifyDatasetLoader, as shown in this notebook. In that notebook, you’ll also find the explanation of the dataset_mapping_function, which is used to map fields from the Apify dataset records to LangChain Document fields.
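For reference, a minimal sketch of that direct-loading path (the dataset ID below is a placeholder; the mapping function mirrors the one used in this example):
from langchain.document_loaders import ApifyDatasetLoader

existing_loader = ApifyDatasetLoader(
    dataset_id="your-dataset-id",  # placeholder: the ID of an existing Apify dataset
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "", metadata={"source": item["url"]}
    ),
)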
loader = apify.call_actor(
actor_id="apify/website-content-crawler", | https://python.langchain.com/en/latest/modules/agents/tools/examples/apify.html |
run_input={"startUrls": [{"url": "https://python.langchain.com/en/latest/"}]},
dataset_mapping_function=lambda item: Document(
page_content=item["text"] or "", metadata={"source": item["url"]}
),
)
Initialize the vector index from the crawled documents:
index = VectorstoreIndexCreator().from_loaders([loader])
And finally, query the vector index:
query = "What is LangChain?"
result = index.query_with_sources(query)
print(result["answer"])
print(result["sources"])
LangChain is a standard interface through which you can interact with a variety of large language models (LLMs). It provides modules that can be used to build language model applications, and it also provides chains and agents with memory capabilities.
https://python.langchain.com/en/latest/modules/models/llms.html, https://python.langchain.com/en/latest/getting_started/getting_started.html
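VectorstoreIndexCreator used its defaults above. A hedged sketch of configuring it explicitly (the vector store, embedding model, and splitter chosen below are assumptions for illustration, not requirements of the Apify integration):
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

index = VectorstoreIndexCreator(
    vectorstore_cls=Chroma,
    embedding=OpenAIEmbeddings(),
    text_splitter=RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0),
).from_loaders([loader])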
previous
Tool Input Schema
next
ArXiv API Tool
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/agents/tools/examples/apify.html |
f25fbd8b5099-0 | .ipynb
.pdf
GraphQL tool
GraphQL tool#
This Jupyter Notebook demonstrates how to use the BaseGraphQLTool component with an Agent.
GraphQL is a query language for APIs and a runtime for executing those queries against your data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.
By including a BaseGraphQLTool in the list of tools provided to an Agent, you can grant your Agent the ability to query data from GraphQL APIs for any purposes you need.
In this example, we’ll be using the public Star Wars GraphQL API available at the following endpoint: https://swapi-graphql.netlify.app/.netlify/functions/index.
First, you need to install httpx and gql Python packages.
pip install httpx gql > /dev/null
Now, let’s create a BaseGraphQLTool instance with the specified Star Wars API endpoint and initialize an Agent with the tool.
from langchain import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.utilities import GraphQLAPIWrapper
llm = OpenAI(temperature=0)
tools = load_tools(["graphql"], graphql_endpoint="https://swapi-graphql.netlify.app/.netlify/functions/index", llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
Now, we can use the Agent to run queries against the Star Wars GraphQL API. Let’s ask the Agent to list all the Star Wars films and their release dates.
graphql_fields = """allFilms {
films {
title
director
releaseDate
speciesConnection {
species {
name
classification
homeworld {
name
} | https://python.langchain.com/en/latest/modules/agents/tools/examples/graphql.html |
}
}
}
}
"""
suffix = "Search for the titles of all the stawars films stored in the graphql database that has this schema "
agent.run(suffix + graphql_fields)
> Entering new AgentExecutor chain...
I need to query the graphql database to get the titles of all the star wars films
Action: query_graphql
Action Input: query { allFilms { films { title } } }
Observation: "{\n \"allFilms\": {\n \"films\": [\n {\n \"title\": \"A New Hope\"\n },\n {\n \"title\": \"The Empire Strikes Back\"\n },\n {\n \"title\": \"Return of the Jedi\"\n },\n {\n \"title\": \"The Phantom Menace\"\n },\n {\n \"title\": \"Attack of the Clones\"\n },\n {\n \"title\": \"Revenge of the Sith\"\n }\n ]\n }\n}"
Thought: I now know the titles of all the star wars films
Final Answer: The titles of all the star wars films are: A New Hope, The Empire Strikes Back, Return of the Jedi, The Phantom Menace, Attack of the Clones, and Revenge of the Sith.
> Finished chain.
'The titles of all the star wars films are: A New Hope, The Empire Strikes Back, Return of the Jedi, The Phantom Menace, Attack of the Clones, and Revenge of the Sith.'
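The wrapper can also be called directly, without an agent. A small sketch (assuming GraphQLAPIWrapper exposes the run(query) method that the tool above relies on):
graphql = GraphQLAPIWrapper(
    graphql_endpoint="https://swapi-graphql.netlify.app/.netlify/functions/index"
)
print(graphql.run("query { allFilms { films { title } } }"))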
previous
Gradio Tools
next
HuggingFace Tools
By Harrison Chase
© Copyright 2023, Harrison Chase. | https://python.langchain.com/en/latest/modules/agents/tools/examples/graphql.html |
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/agents/tools/examples/graphql.html |
14575a11db8c-0 | .ipynb
.pdf
Metaphor Search
Contents
Metaphor Search
Call the API
Use Metaphor as a tool
Metaphor Search#
This notebook goes over how to use Metaphor search.
First, you need to set up the proper API keys and environment variables. Sign up for early access and request an API key here.
Then enter your API key as an environment variable.
import os
os.environ["METAPHOR_API_KEY"] = ""
from langchain.utilities import MetaphorSearchAPIWrapper
search = MetaphorSearchAPIWrapper()
Call the API#
results takes in a Metaphor-optimized search query and a number of results (up to 500). It returns a list of results with title, url, author, and creation date.
search.results("The best blog post about AI safety is definitely this: ", 10) | https://python.langchain.com/en/latest/modules/agents/tools/examples/metaphor_search.html |
14575a11db8c-1 | {'results': [{'url': 'https://www.anthropic.com/index/core-views-on-ai-safety', 'title': 'Core Views on AI Safety: When, Why, What, and How', 'dateCreated': '2023-03-08', 'author': None, 'score': 0.1998831331729889}, {'url': 'https://aisafety.wordpress.com/', 'title': 'Extinction Risk from Artificial Intelligence', 'dateCreated': '2013-10-08', 'author': None, 'score': 0.19801370799541473}, {'url': 'https://www.lesswrong.com/posts/WhNxG4r774bK32GcH/the-simple-picture-on-ai-safety', 'title': 'The simple picture on AI safety - LessWrong', 'dateCreated': '2018-05-27', 'author': 'Alex Flint', 'score': 0.19735534489154816}, {'url': 'https://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/', 'title': 'No Time Like The Present For AI Safety Work', 'dateCreated': '2015-05-29', 'author': None, 'score': 0.19408763945102692}, {'url': 'https://www.lesswrong.com/posts/5BJvusxdwNXYQ4L9L/so-you-want-to-save-the-world', 'title': 'So You Want to Save the World - LessWrong', 'dateCreated': '2012-01-01', 'author': 'Lukeprog', 'score': 0.18853715062141418}, {'url': 'https://openai.com/blog/planning-for-agi-and-beyond', 'title': 'Planning for AGI and beyond', 'dateCreated': | https://python.langchain.com/en/latest/modules/agents/tools/examples/metaphor_search.html |
14575a11db8c-2 | 'title': 'Planning for AGI and beyond', 'dateCreated': '2023-02-24', 'author': 'Authors', 'score': 0.18665121495723724}, {'url': 'https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html', 'title': 'The Artificial Intelligence Revolution: Part 1 - Wait But Why', 'dateCreated': '2015-01-22', 'author': 'Tim Urban', 'score': 0.18604731559753418}, {'url': 'https://forum.effectivealtruism.org/posts/uGDCaPFaPkuxAowmH/anthropic-core-views-on-ai-safety-when-why-what-and-how', 'title': 'Anthropic: Core Views on AI Safety: When, Why, What, and How - EA Forum', 'dateCreated': '2023-03-09', 'author': 'Jonmenaster', 'score': 0.18415069580078125}, {'url': 'https://www.lesswrong.com/posts/xBrpph9knzWdtMWeQ/the-proof-of-doom', 'title': 'The Proof of Doom - LessWrong', 'dateCreated': '2022-03-09', 'author': 'Johnlawrenceaspden', 'score': 0.18159329891204834}, {'url': 'https://intelligence.org/why-ai-safety/', 'title': 'Why AI Safety? - Machine Intelligence Research Institute', 'dateCreated': '2017-03-01', 'author': None, 'score': 0.1814115345478058}]} | https://python.langchain.com/en/latest/modules/agents/tools/examples/metaphor_search.html |
14575a11db8c-3 | [{'title': 'Core Views on AI Safety: When, Why, What, and How',
'url': 'https://www.anthropic.com/index/core-views-on-ai-safety',
'author': None,
'date_created': '2023-03-08'},
{'title': 'Extinction Risk from Artificial Intelligence',
'url': 'https://aisafety.wordpress.com/',
'author': None,
'date_created': '2013-10-08'},
{'title': 'The simple picture on AI safety - LessWrong',
'url': 'https://www.lesswrong.com/posts/WhNxG4r774bK32GcH/the-simple-picture-on-ai-safety',
'author': 'Alex Flint',
'date_created': '2018-05-27'},
{'title': 'No Time Like The Present For AI Safety Work',
'url': 'https://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/',
'author': None,
'date_created': '2015-05-29'},
{'title': 'So You Want to Save the World - LessWrong',
'url': 'https://www.lesswrong.com/posts/5BJvusxdwNXYQ4L9L/so-you-want-to-save-the-world',
'author': 'Lukeprog',
'date_created': '2012-01-01'},
{'title': 'Planning for AGI and beyond',
'url': 'https://openai.com/blog/planning-for-agi-and-beyond',
'author': 'Authors',
'date_created': '2023-02-24'}, | https://python.langchain.com/en/latest/modules/agents/tools/examples/metaphor_search.html |
{'title': 'The Artificial Intelligence Revolution: Part 1 - Wait But Why',
'url': 'https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html',
'author': 'Tim Urban',
'date_created': '2015-01-22'},
{'title': 'Anthropic: Core Views on AI Safety: When, Why, What, and How - EA Forum',
'url': 'https://forum.effectivealtruism.org/posts/uGDCaPFaPkuxAowmH/anthropic-core-views-on-ai-safety-when-why-what-and-how',
'author': 'Jonmenaster',
'date_created': '2023-03-09'},
{'title': 'The Proof of Doom - LessWrong',
'url': 'https://www.lesswrong.com/posts/xBrpph9knzWdtMWeQ/the-proof-of-doom',
'author': 'Johnlawrenceaspden',
'date_created': '2022-03-09'},
{'title': 'Why AI Safety? - Machine Intelligence Research Institute',
'url': 'https://intelligence.org/why-ai-safety/',
'author': None,
'date_created': '2017-03-01'}]
Use Metaphor as a tool#
Metaphor can be used as a tool that returns URLs, which other tools (such as the browsing tools below) can then consume.
from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit
from langchain.tools.playwright.utils import (
create_async_playwright_browser,  # A synchronous browser is available, though it isn't compatible with Jupyter.
)
async_browser = create_async_playwright_browser() | https://python.langchain.com/en/latest/modules/agents/tools/examples/metaphor_search.html |
toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)
tools = toolkit.get_tools()
tools_by_name = {tool.name: tool for tool in tools}
print(tools_by_name.keys())
navigate_tool = tools_by_name["navigate_browser"]
extract_text = tools_by_name["extract_text"]
from langchain.agents import initialize_agent, AgentType
from langchain.chat_models import ChatOpenAI
from langchain.tools import MetaphorSearchResults
llm = ChatOpenAI(model_name="gpt-4", temperature=0.7)
metaphor_tool = MetaphorSearchResults(api_wrapper=search)
agent_chain = initialize_agent([metaphor_tool, extract_text, navigate_tool], llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent_chain.run("find me an interesting tweet about AI safety using Metaphor, then tell me the first sentence in the post. Do not finish until able to retrieve the first sentence.")
> Entering new AgentExecutor chain...
Thought: I need to find a tweet about AI safety using Metaphor Search.
Action:
```
{
"action": "Metaphor Search Results JSON",
"action_input": {
"query": "interesting tweet AI safety",
"num_results": 1
}
}
```
{'results': [{'url': 'https://safe.ai/', 'title': 'Center for AI Safety', 'dateCreated': '2022-01-01', 'author': None, 'score': 0.18083244562149048}]} | https://python.langchain.com/en/latest/modules/agents/tools/examples/metaphor_search.html |
14575a11db8c-6 | Observation: [{'title': 'Center for AI Safety', 'url': 'https://safe.ai/', 'author': None, 'date_created': '2022-01-01'}]
Thought:I need to navigate to the URL provided in the search results to find the tweet.
> Finished chain.
'I need to navigate to the URL provided in the search results to find the tweet.'
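The structured search tool can also be invoked on its own, outside of an agent. A hedged sketch, assuming the tool accepts a dict of its query and num_results arguments:
metaphor_tool.run({"query": "best blog post about AI safety", "num_results": 3})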
previous
IFTTT WebHooks
next
OpenWeatherMap API
Contents
Metaphor Search
Call the API
Use Metaphor as a tool
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/agents/tools/examples/metaphor_search.html |
3daf91967447-0 | .ipynb
.pdf
Wikipedia
Wikipedia#
Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.
First, you need to install wikipedia python package.
!pip install wikipedia
from langchain.utilities import WikipediaAPIWrapper
wikipedia = WikipediaAPIWrapper()
wikipedia.run('HUNTER X HUNTER') | https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html |
3daf91967447-1 | 'Page: Hunter × Hunter\nSummary: Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced "hunter hunter") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The story focuses on a young boy named Gon Freecss who discovers that his father, who left him at a young age, is actually a world-renowned Hunter, a licensed professional who specializes in fantastical pursuits such as locating rare or unidentified animal species, treasure hunting, surveying unexplored enclaves, or hunting down lawless individuals. Gon departs on a journey to become a Hunter and eventually find his father. Along the way, Gon meets various other Hunters and encounters the paranormal.\nHunter × Hunter was adapted into a 62-episode anime television series produced by Nippon Animation and directed by Kazuhiro Furuhashi, which ran on Fuji Television from October 1999 to March 2001. Three separate original video animations (OVAs) totaling 30 episodes were subsequently produced by Nippon Animation and released in Japan from 2002 to 2004. A second anime television series by Madhouse aired on Nippon Television from October 2011 to September 2014, totaling 148 episodes, with two animated theatrical films released in 2013. There are also numerous audio albums, video games, musicals, and other media based on Hunter × Hunter.\nThe manga has been translated into English and released in North America by Viz Media since April 2005. Both television series have been also licensed by Viz Media, with the first series having aired on the Funimation Channel in 2009 | https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html |
3daf91967447-2 | by Viz Media, with the first series having aired on the Funimation Channel in 2009 and the second series broadcast on Adult Swim\'s Toonami programming block from April 2016 to June 2019.\nHunter × Hunter has been a huge critical and financial success and has become one of the best-selling manga series of all time, having over 84 million copies in circulation by July 2022.\n\nPage: Hunter × Hunter (2011 TV series)\nSummary: Hunter × Hunter is an anime television series that aired from 2011 to 2014 based on Yoshihiro Togashi\'s manga series Hunter × Hunter. The story begins with a young boy named Gon Freecss, who one day discovers that the father who he thought was dead, is in fact alive and well. He learns that his father, Ging, is a legendary "Hunter", an individual who has proven themselves an elite member of humanity. Despite the fact that Ging left his son with his relatives in order to pursue his own dreams, Gon becomes determined to follow in his father\'s footsteps, pass the rigorous "Hunter Examination", and eventually find his father to become a Hunter in his own right.\nThis new Hunter × Hunter anime was announced on July 24, 2011. It is a complete reboot of the anime adaptation starting from the beginning of the manga, with no connections to the first anime from 1999. Produced by Nippon TV, VAP, Shueisha and Madhouse, the series is directed by Hiroshi Kōjina, with Atsushi Maekawa and Tsutomu Kamishiro handling series composition, Takahiro Yoshimatsu designing the characters and Yoshihisa Hirano composing the music. Instead of having the old cast reprise their roles for the new adaptation, the series features an entirely new cast to voice the characters. The new series premiered airing weekly on Nippon TV and the nationwide | https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html |
3daf91967447-3 | cast to voice the characters. The new series premiered airing weekly on Nippon TV and the nationwide Nippon News Network from October 2, 2011. The series started to be collected in both DVD and Blu-ray format on January 25, 2012. Viz Media has licensed the anime for a DVD/Blu-ray release in North America with an English dub. On television, the series began airing on Adult Swim\'s Toonami programming block on April 17, 2016, and ended on June 23, 2019.The anime series\' opening theme is alternated between the song "Departure!" and an alternate version titled "Departure! -Second Version-" both sung by Galneryus\' vocalist Masatoshi Ono. Five pieces of music were used as the ending theme; "Just Awake" by the Japanese band Fear, and Loathing in Las Vegas in episodes 1 to 26, "Hunting for Your Dream" by Galneryus in episodes 27 to 58, "Reason" sung by Japanese duo Yuzu in episodes 59 to 75, "Nagareboshi Kirari" also sung by Yuzu from episode 76 to 98, which was originally from the anime film adaptation, Hunter × Hunter: Phantom Rouge, and "Hyōri Ittai" by Yuzu featuring Hyadain from episode 99 to 146, which was also used in the film Hunter × Hunter: The Last Mission. The background music and soundtrack for the series was composed by Yoshihisa Hirano.\n\n\n\nPage: List of Hunter × Hunter characters\nSummary: The Hunter × Hunter manga series, created by Yoshihiro Togashi, features an extensive cast of characters. It takes place in a fictional universe where licensed specialists known as Hunters travel the world taking on special jobs ranging from treasure hunting to assassination. The story initially focuses on Gon Freecss and | https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html |
3daf91967447-4 | on special jobs ranging from treasure hunting to assassination. The story initially focuses on Gon Freecss and his quest to become a Hunter in order to find his father, Ging, who is himself a famous Hunter. On the way, Gon meets and becomes close friends with Killua Zoldyck, Kurapika and Leorio Paradinight.\nAlthough most characters are human, most possess superhuman strength and/or supernatural abilities due to Nen, the ability to control one\'s own life energy or aura. The world of the series also includes fantastical beasts such as the Chimera Ants or the Five great calamities.' | https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html |
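The same wrapper is also available to agents through the built-in "wikipedia" tool name. A short sketch (not from the notebook above), assuming an OpenAI API key is configured:
from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType

llm = OpenAI(temperature=0)
tools = load_tools(["wikipedia"])
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Who wrote the manga Hunter x Hunter?")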
3daf91967447-5 | previous
Twilio
next
Wolfram Alpha
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html |
166c9d5b3d01-0 | .ipynb
.pdf
Bing Search
Contents
Number of results
Metadata Results
Bing Search#
This notebook goes over how to use the bing search component.
First, you need to set up the proper API keys and environment variables. To set it up, follow the instructions found here.
Then we will need to set some environment variables.
import os
os.environ["BING_SUBSCRIPTION_KEY"] = ""
os.environ["BING_SEARCH_URL"] = ""
from langchain.utilities import BingSearchAPIWrapper
search = BingSearchAPIWrapper()
search.run("python") | https://python.langchain.com/en/latest/modules/agents/tools/examples/bing_search.html |
166c9d5b3d01-1 | 'Thanks to the flexibility of <b>Python</b> and the powerful ecosystem of packages, the Azure CLI supports features such as autocompletion (in shells that support it), persistent credentials, JMESPath result parsing, lazy initialization, network-less unit tests, and more. Building an open-source and cross-platform Azure CLI with <b>Python</b> by Dan Taylor. <b>Python</b> releases by version number: Release version Release date Click for more. <b>Python</b> 3.11.1 Dec. 6, 2022 Download Release Notes. <b>Python</b> 3.10.9 Dec. 6, 2022 Download Release Notes. <b>Python</b> 3.9.16 Dec. 6, 2022 Download Release Notes. <b>Python</b> 3.8.16 Dec. 6, 2022 Download Release Notes. <b>Python</b> 3.7.16 Dec. 6, 2022 Download Release Notes. In this lesson, we will look at the += operator in <b>Python</b> and see how it works with several simple examples.. The operator ‘+=’ is a shorthand for the addition assignment operator.It adds two values and assigns the sum to a variable (left operand). W3Schools offers free online tutorials, references and exercises in all the major languages of the web. Covering popular subjects like HTML, CSS, JavaScript, <b>Python</b>, SQL, Java, and many, many more. This tutorial introduces the reader informally to the basic concepts and features of the <b>Python</b> language and system. It helps to have a <b>Python</b> interpreter handy for hands-on experience, but all examples are self-contained, so the tutorial can be read off-line as well. For a description of standard objects | https://python.langchain.com/en/latest/modules/agents/tools/examples/bing_search.html |
166c9d5b3d01-2 | self-contained, so the tutorial can be read off-line as well. For a description of standard objects and modules, see The <b>Python</b> Standard ... <b>Python</b> is a general-purpose, versatile, and powerful programming language. It's a great first language because <b>Python</b> code is concise and easy to read. Whatever you want to do, <b>python</b> can do it. From web development to machine learning to data science, <b>Python</b> is the language for you. To install <b>Python</b> using the Microsoft Store: Go to your Start menu (lower left Windows icon), type "Microsoft Store", select the link to open the store. Once the store is open, select Search from the upper-right menu and enter "<b>Python</b>". Select which version of <b>Python</b> you would like to use from the results under Apps. Under the “<b>Python</b> Releases for Mac OS X” heading, click the link for the Latest <b>Python</b> 3 Release - <b>Python</b> 3.x.x. As of this writing, the latest version was <b>Python</b> 3.8.4. Scroll to the bottom and click macOS 64-bit installer to start the download. When the installer is finished downloading, move on to the next step. Step 2: Run the Installer' | https://python.langchain.com/en/latest/modules/agents/tools/examples/bing_search.html |
166c9d5b3d01-3 | Number of results#
You can use the k parameter to set the number of results
search = BingSearchAPIWrapper(k=1)
search.run("python")
'Thanks to the flexibility of <b>Python</b> and the powerful ecosystem of packages, the Azure CLI supports features such as autocompletion (in shells that support it), persistent credentials, JMESPath result parsing, lazy initialization, network-less unit tests, and more. Building an open-source and cross-platform Azure CLI with <b>Python</b> by Dan Taylor.'
Metadata Results#
Run query through BingSearch and return snippet, title, and link metadata.
Snippet: The description of the result.
Title: The title of the result.
Link: The link to the result.
search = BingSearchAPIWrapper()
search.results("apples", 5)
[{'snippet': 'Lady Alice. Pink Lady <b>apples</b> aren’t the only lady in the apple family. Lady Alice <b>apples</b> were discovered growing, thanks to bees pollinating, in Washington. They are smaller and slightly more stout in appearance than other varieties. Their skin color appears to have red and yellow stripes running from stem to butt.',
'title': '25 Types of Apples - Jessica Gavin',
'link': 'https://www.jessicagavin.com/types-of-apples/'},
{'snippet': '<b>Apples</b> can do a lot for you, thanks to plant chemicals called flavonoids. And they have pectin, a fiber that breaks down in your gut. If you take off the apple’s skin before eating it, you won ...',
'title': 'Apples: Nutrition & Health Benefits - WebMD',
'link': 'https://www.webmd.com/food-recipes/benefits-apples'}, | https://python.langchain.com/en/latest/modules/agents/tools/examples/bing_search.html |
166c9d5b3d01-4 | {'snippet': '<b>Apples</b> boast many vitamins and minerals, though not in high amounts. However, <b>apples</b> are usually a good source of vitamin C. Vitamin C. Also called ascorbic acid, this vitamin is a common ...',
'title': 'Apples 101: Nutrition Facts and Health Benefits',
'link': 'https://www.healthline.com/nutrition/foods/apples'},
{'snippet': 'Weight management. The fibers in <b>apples</b> can slow digestion, helping one to feel greater satisfaction after eating. After following three large prospective cohorts of 133,468 men and women for 24 years, researchers found that higher intakes of fiber-rich fruits with a low glycemic load, particularly <b>apples</b> and pears, were associated with the least amount of weight gain over time.',
'title': 'Apples | The Nutrition Source | Harvard T.H. Chan School of Public Health',
'link': 'https://www.hsph.harvard.edu/nutritionsource/food-features/apples/'}]
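Bing search can also be exposed to an agent. A minimal sketch, assuming the "bing-search" tool name registered with load_tools and the environment variables set as above:
from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType

llm = OpenAI(temperature=0)
tools = load_tools(["bing-search"])
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What types of apples are there?")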
previous
Shell Tool
next
Brave Search
Contents
Number of results
Metadata Results
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/agents/tools/examples/bing_search.html |
e847eb4b9d57-0 | .ipynb
.pdf
OpenWeatherMap API
Contents
Use the wrapper
Use the tool
OpenWeatherMap API#
This notebook goes over how to use the OpenWeatherMap component to fetch weather information.
First, you need to sign up for an OpenWeatherMap API key:
Go to OpenWeatherMap and sign up for an API key here
pip install pyowm
Then we will need to set some environment variables:
Save your API key into the OPENWEATHERMAP_API_KEY env variable
Use the wrapper#
from langchain.utilities import OpenWeatherMapAPIWrapper
import os
os.environ["OPENWEATHERMAP_API_KEY"] = ""
weather = OpenWeatherMapAPIWrapper()
weather_data = weather.run("London,GB")
print(weather_data)
In London,GB, the current weather is as follows:
Detailed status: broken clouds
Wind speed: 2.57 m/s, direction: 240°
Humidity: 55%
Temperature:
- Current: 20.12°C
- High: 21.75°C
- Low: 18.68°C
- Feels like: 19.62°C
Rain: {}
Heat index: None
Cloud cover: 75%
Use the tool#
from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType
import os
os.environ["OPENAI_API_KEY"] = ""
os.environ["OPENWEATHERMAP_API_KEY"] = ""
llm = OpenAI(temperature=0)
tools = load_tools(["openweathermap-api"], llm)
agent_chain = initialize_agent(
tools=tools,
llm=llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True
) | https://python.langchain.com/en/latest/modules/agents/tools/examples/openweathermap.html |
agent_chain.run("What's the weather like in London?")
> Entering new AgentExecutor chain...
I need to find out the current weather in London.
Action: OpenWeatherMap
Action Input: London,GB
Observation: In London,GB, the current weather is as follows:
Detailed status: broken clouds
Wind speed: 2.57 m/s, direction: 240°
Humidity: 56%
Temperature:
- Current: 20.11°C
- High: 21.75°C
- Low: 18.68°C
- Feels like: 19.64°C
Rain: {}
Heat index: None
Cloud cover: 75%
Thought: I now know the current weather in London.
Final Answer: The current weather in London is broken clouds, with a wind speed of 2.57 m/s, direction 240°, humidity of 56%, temperature of 20.11°C, high of 21.75°C, low of 18.68°C, and a heat index of None.
> Finished chain.
'The current weather in London is broken clouds, with a wind speed of 2.57 m/s, direction 240°, humidity of 56%, temperature of 20.11°C, high of 21.75°C, low of 18.68°C, and a heat index of None.'
previous
Metaphor Search
next
PubMed Tool
Contents
Use the wrapper
Use the tool
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/agents/tools/examples/openweathermap.html |
3bbcf171f181-0 | .ipynb
.pdf
SerpAPI
Contents
Custom Parameters
SerpAPI#
This notebook goes over how to use the SerpAPI component to search the web.
from langchain.utilities import SerpAPIWrapper
search = SerpAPIWrapper()
search.run("Obama's first name?")
'Barack Hussein Obama II'
Custom Parameters#
You can also customize the SerpAPI wrapper with arbitrary parameters. For example, in the below example we will use bing instead of google.
params = {
"engine": "bing",
"gl": "us",
"hl": "en",
}
search = SerpAPIWrapper(params=params)
search.run("Obama's first name?")
'Barack Hussein Obama II is an American politician who served as the 44th president of the United States from 2009 to 2017. A member of the Democratic Party, Obama was the first African-American presi…New content will be added above the current area of focus upon selectionBarack Hussein Obama II is an American politician who served as the 44th president of the United States from 2009 to 2017. A member of the Democratic Party, Obama was the first African-American president of the United States. He previously served as a U.S. senator from Illinois from 2005 to 2008 and as an Illinois state senator from 1997 to 2004, and previously worked as a civil rights lawyer before entering politics.Wikipediabarackobama.com'
from langchain.agents import Tool
# You can create the tool to pass to an agent
search_tool = Tool(
    name="search",
    description="A search engine. Useful for when you need to answer questions about current events. Input should be a search query.",
    func=search.run,
)
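A hedged sketch of handing that tool to an agent (mirroring the pattern used in the other tool notebooks; assumes an OpenAI API key is configured):
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, AgentType

llm = OpenAI(temperature=0)
agent = initialize_agent([search_tool], llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is Obama's first name?")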
previous
SearxNG Search API
next
Twilio
Contents
Custom Parameters
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/agents/tools/examples/serpapi.html |
f82d14cfb42d-0 | .ipynb
.pdf
IFTTT WebHooks
Contents
Creating a webhook
Configuring the “If This”
Configuring the “Then That”
Finishing up
IFTTT WebHooks#
This notebook shows how to use IFTTT Webhooks.
From https://github.com/SidU/teams-langchain-js/wiki/Connecting-IFTTT-Services.
Creating a webhook#
Go to https://ifttt.com/create
Configuring the “If This”#
Click on the “If This” button in the IFTTT interface.
Search for “Webhooks” in the search bar.
Choose the first option for “Receive a web request with a JSON payload.”
Choose an Event Name that is specific to the service you plan to connect to.
This will make it easier for you to manage the webhook URL.
For example, if you’re connecting to Spotify, you could use “Spotify” as your
Event Name.
Click the “Create Trigger” button to save your settings and create your webhook.
Configuring the “Then That”#
Tap on the “Then That” button in the IFTTT interface.
Search for the service you want to connect, such as Spotify.
Choose an action from the service, such as “Add track to a playlist”.
Configure the action by specifying the necessary details, such as the playlist name,
e.g., “Songs from AI”.
Reference the JSON Payload received by the Webhook in your action. For the Spotify
scenario, choose “{{JsonPayload}}” as your search query.
Tap the “Create Action” button to save your action settings.
Once you have finished configuring your action, click the “Finish” button to
complete the setup.
Congratulations! You have successfully connected the Webhook to the desired
service, and you’re ready to start receiving data and triggering actions 🎉 | https://python.langchain.com/en/latest/modules/agents/tools/examples/ifttt.html |
Finishing up#
To get your webhook URL go to https://ifttt.com/maker_webhooks/settings
Copy the IFTTT key value from there. The URL is of the form
https://maker.ifttt.com/use/YOUR_IFTTT_KEY. Grab the YOUR_IFTTT_KEY value.
from langchain.tools.ifttt import IFTTTWebhook
import os
key = os.environ["IFTTTKey"]
url = f"https://maker.ifttt.com/trigger/spotify/json/with/key/{key}"
tool = IFTTTWebhook(name="Spotify", description="Add a song to spotify playlist", url=url)
tool.run("taylor swift")
"Congratulations! You've fired the spotify JSON event"
previous
Human as a tool
next
Metaphor Search
Contents
Creating a webhook
Configuring the “If This”
Configuring the “Then That”
Finishing up
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/agents/tools/examples/ifttt.html |
5822eb36f7e2-0 | .ipynb
.pdf
Google Places
Google Places#
This notebook goes through how to use Google Places API
#!pip install googlemaps
import os
os.environ["GPLACES_API_KEY"] = ""
from langchain.tools import GooglePlacesTool
places = GooglePlacesTool()
places.run("al fornos")
"1. Delfina Restaurant\nAddress: 3621 18th St, San Francisco, CA 94110, USA\nPhone: (415) 552-4055\nWebsite: https://www.delfinasf.com/\n\n\n2. Piccolo Forno\nAddress: 725 Columbus Ave, San Francisco, CA 94133, USA\nPhone: (415) 757-0087\nWebsite: https://piccolo-forno-sf.com/\n\n\n3. L'Osteria del Forno\nAddress: 519 Columbus Ave, San Francisco, CA 94133, USA\nPhone: (415) 982-1124\nWebsite: Unknown\n\n\n4. Il Fornaio\nAddress: 1265 Battery St, San Francisco, CA 94111, USA\nPhone: (415) 986-0100\nWebsite: https://www.ilfornaio.com/\n\n"
previous
File System Tools
next
Google Search
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/agents/tools/examples/google_places.html |
dc9ee2c40fdb-0 | .ipynb
.pdf
Shell Tool
Contents
Use with Agents
Shell Tool#
Giving agents access to the shell is powerful (though risky outside a sandboxed environment).
The LLM can use it to execute any shell commands. A common use case for this is letting the LLM interact with your local file system.
from langchain.tools import ShellTool
shell_tool = ShellTool()
print(shell_tool.run({"commands": ["echo 'Hello World!'", "time"]}))
Hello World!
real 0m0.000s
user 0m0.000s
sys 0m0.000s
/Users/wfh/code/lc/lckg/langchain/tools/shell/tool.py:34: UserWarning: The shell tool has no safeguards by default. Use at your own risk.
warnings.warn(
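Because the shell tool has no built-in safeguards, one option outside a sandbox is to gate commands before they reach the tool. A hedged sketch (the allow-list and helper name are illustrative, not part of LangChain):
ALLOWED_COMMANDS = {"echo", "ls", "pwd", "date"}

def run_guarded(commands):
    # Reject anything whose executable isn't on the allow-list before invoking the tool.
    for cmd in commands:
        if cmd.split()[0] not in ALLOWED_COMMANDS:
            raise ValueError(f"Command not allowed: {cmd}")
    return shell_tool.run({"commands": commands})

print(run_guarded(["echo 'Hello World!'"]))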
Use with Agents#
As with all tools, these can be given to an agent to accomplish more complex tasks. Let’s have the agent fetch some links from a web page.
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent
from langchain.agents import AgentType
llm = ChatOpenAI(temperature=0)
shell_tool.description = shell_tool.description + f"args {shell_tool.args}".replace("{", "{{").replace("}", "}}")
self_ask_with_search = initialize_agent([shell_tool], llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
self_ask_with_search.run("Download the langchain.com webpage and grep for all urls. Return only a sorted list of them. Be sure to use double quotes.")
> Entering new AgentExecutor chain...
Question: What is the task?
Thought: We need to download the langchain.com webpage and extract all the URLs from it. Then we need to sort the URLs and return them.
Action:
``` | https://python.langchain.com/en/latest/modules/agents/tools/examples/bash.html |
{
"action": "shell",
"action_input": {
"commands": [
"curl -s https://langchain.com | grep -o 'http[s]*://[^\" ]*' | sort"
]
}
}
```
/Users/wfh/code/lc/lckg/langchain/tools/shell/tool.py:34: UserWarning: The shell tool has no safeguards by default. Use at your own risk.
warnings.warn(
Observation: https://blog.langchain.dev/
https://discord.gg/6adMQxSpJS
https://docs.langchain.com/docs/
https://github.com/hwchase17/chat-langchain
https://github.com/hwchase17/langchain
https://github.com/hwchase17/langchainjs
https://github.com/sullivan-sean/chat-langchainjs
https://js.langchain.com/docs/
https://python.langchain.com/en/latest/
https://twitter.com/langchainai
Thought:The URLs have been successfully extracted and sorted. We can return the list of URLs as the final answer.
Final Answer: ["https://blog.langchain.dev/", "https://discord.gg/6adMQxSpJS", "https://docs.langchain.com/docs/", "https://github.com/hwchase17/chat-langchain", "https://github.com/hwchase17/langchain", "https://github.com/hwchase17/langchainjs", "https://github.com/sullivan-sean/chat-langchainjs", "https://js.langchain.com/docs/", "https://python.langchain.com/en/latest/", "https://twitter.com/langchainai"]
> Finished chain. | https://python.langchain.com/en/latest/modules/agents/tools/examples/bash.html |
'["https://blog.langchain.dev/", "https://discord.gg/6adMQxSpJS", "https://docs.langchain.com/docs/", "https://github.com/hwchase17/chat-langchain", "https://github.com/hwchase17/langchain", "https://github.com/hwchase17/langchainjs", "https://github.com/sullivan-sean/chat-langchainjs", "https://js.langchain.com/docs/", "https://python.langchain.com/en/latest/", "https://twitter.com/langchainai"]'
previous
AWS Lambda API
next
Bing Search
Contents
Use with Agents
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 11, 2023. | https://python.langchain.com/en/latest/modules/agents/tools/examples/bash.html |
76c3574f986e-0 | .ipynb
.pdf
Requests
Contents
Inside the tool
Requests#
The web contains a lot of information that LLMs do not have access to. In order to easily let LLMs interact with that information, we provide a wrapper around the Python Requests module that takes in a URL and fetches data from that URL.
from langchain.agents import load_tools
requests_tools = load_tools(["requests_all"])
requests_tools
[RequestsGetTool(name='requests_get', description='A portal to the internet. Use this when you need to get specific content from a website. Input should be a url (i.e. https://www.google.com). The output will be the text response of the GET request.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None)),
RequestsPostTool(name='requests_post', description='Use this when you want to POST to a website.\n Input should be a json string with two keys: "url" and "data".\n The value of "url" should be a string, and the value of "data" should be a dictionary of \n key-value pairs you want to POST to the url.\n Be careful to always use double quotes for strings in the json string\n The output will be the text response of the POST request.\n ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None)), | https://python.langchain.com/en/latest/modules/agents/tools/examples/requests.html |
76c3574f986e-1 | RequestsPatchTool(name='requests_patch', description='Use this when you want to PATCH to a website.\n Input should be a json string with two keys: "url" and "data".\n The value of "url" should be a string, and the value of "data" should be a dictionary of \n key-value pairs you want to PATCH to the url.\n Be careful to always use double quotes for strings in the json string\n The output will be the text response of the PATCH request.\n ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None)),
RequestsPutTool(name='requests_put', description='Use this when you want to PUT to a website.\n Input should be a json string with two keys: "url" and "data".\n The value of "url" should be a string, and the value of "data" should be a dictionary of \n key-value pairs you want to PUT to the url.\n Be careful to always use double quotes for strings in the json string.\n The output will be the text response of the PUT request.\n ', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None)),
RequestsDeleteTool(name='requests_delete', description='A portal to the internet. Use this when you need to make a DELETE request to a URL. Input should be a specific url, and the output will be the text response of the DELETE request.', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, requests_wrapper=TextRequestsWrapper(headers=None, aiosession=None))]
Inside the tool#
Each requests tool contains a requests wrapper. You can work with these wrappers directly below. | https://python.langchain.com/en/latest/modules/agents/tools/examples/requests.html |
76c3574f986e-2 | # Each tool wraps a requests wrapper
requests_tools[0].requests_wrapper
TextRequestsWrapper(headers=None, aiosession=None)
from langchain.utilities import TextRequestsWrapper
requests = TextRequestsWrapper()
requests.get("https://www.google.com") | https://python.langchain.com/en/latest/modules/agents/tools/examples/requests.html |
76c3574f986e-3 | '<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head><meta content="Search the world\'s information, including webpages, images, videos and more. Google has many special features to help you find exactly what you\'re looking for." name="description">...<title>Google</title>...</html>'
(output truncated: the call returns the full Google homepage HTML, including its inline scripts and styles, as a single string) | https://python.langchain.com/en/latest/modules/agents/tools/examples/requests.html |