dd6daa9d75f6-6
"TrackId" INTEGER NOT NULL, PRIMARY KEY ("PlaylistId", "TrackId"), FOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), FOREIGN KEY("PlaylistId") REFERENCES "Playlist" ("PlaylistId") ) SELECT * FROM 'PlaylistTrack' LIMIT 3; PlaylistId TrackId 1 3402 1 3389 1 3390 Thought: I can use a SELECT statement to get the total number of tracks in each playlist. Action: query_checker_sql_db Action Input: SELECT Playlist.Name, COUNT(PlaylistTrack.TrackId) AS TotalTracks FROM Playlist INNER JOIN PlaylistTrack ON Playlist.PlaylistId = PlaylistTrack.PlaylistId GROUP BY Playlist.Name Observation: SELECT Playlist.Name, COUNT(PlaylistTrack.TrackId) AS TotalTracks FROM Playlist INNER JOIN PlaylistTrack ON Playlist.PlaylistId = PlaylistTrack.PlaylistId GROUP BY Playlist.Name Thought: The query looks correct, I can now execute it. Action: query_sql_db Action Input: SELECT Playlist.Name, COUNT(PlaylistTrack.TrackId) AS TotalTracks FROM Playlist INNER JOIN PlaylistTrack ON Playlist.PlaylistId = PlaylistTrack.PlaylistId GROUP BY Playlist.Name LIMIT 10 Observation: [('90’s Music', 1477), ('Brazilian Music', 39), ('Classical', 75), ('Classical 101 - Deep Cuts', 25), ('Classical 101 - Next Steps', 25), ('Classical 101 - The Basics', 25), ('Grunge', 15), ('Heavy Metal Classic', 26), ('Music', 6580), ('Music Videos', 1)] Thought: I now know the final answer.
https://langchain.readthedocs.io/en/latest/modules/agents/agent_toolkits/sql_database.html
dd6daa9d75f6-7
Final Answer: The total number of tracks in each playlist are: '90’s Music' (1477), 'Brazilian Music' (39), 'Classical' (75), 'Classical 101 - Deep Cuts' (25), 'Classical 101 - Next Steps' (25), 'Classical 101 - The Basics' (25), 'Grunge' (15), 'Heavy Metal Classic' (26), 'Music' (6580), 'Music Videos' (1).

> Finished chain.

"The total number of tracks in each playlist are: '90’s Music' (1477), 'Brazilian Music' (39), 'Classical' (75), 'Classical 101 - Deep Cuts' (25), 'Classical 101 - Next Steps' (25), 'Classical 101 - The Basics' (25), 'Grunge' (15), 'Heavy Metal Classic' (26), 'Music' (6580), 'Music Videos' (1)."

Recovering from an error#
In this example, the agent is able to recover from an error after initially trying to access an attribute (Track.ArtistId) which doesn’t exist.

agent_executor.run("Who are the top 3 best selling artists?")

> Entering new AgentExecutor chain...
Action: list_tables_sql_db
Action Input: ""
Observation: MediaType, Track, Invoice, Album, Playlist, Customer, Employee, InvoiceLine, PlaylistTrack, Genre, Artist
Thought: I should look at the schema of the Artist, InvoiceLine, and Track tables to see what columns I can use.
Action: schema_sql_db
Action Input: "Artist, InvoiceLine, Track"
Observation: CREATE TABLE "Artist" (
"ArtistId" INTEGER NOT NULL,
https://langchain.readthedocs.io/en/latest/modules/agents/agent_toolkits/sql_database.html
dd6daa9d75f6-8
CREATE TABLE "Artist" ( "ArtistId" INTEGER NOT NULL, "Name" NVARCHAR(120), PRIMARY KEY ("ArtistId") ) SELECT * FROM 'Artist' LIMIT 3; ArtistId Name 1 AC/DC 2 Accept 3 Aerosmith CREATE TABLE "Track" ( "TrackId" INTEGER NOT NULL, "Name" NVARCHAR(200) NOT NULL, "AlbumId" INTEGER, "MediaTypeId" INTEGER NOT NULL, "GenreId" INTEGER, "Composer" NVARCHAR(220), "Milliseconds" INTEGER NOT NULL, "Bytes" INTEGER, "UnitPrice" NUMERIC(10, 2) NOT NULL, PRIMARY KEY ("TrackId"), FOREIGN KEY("MediaTypeId") REFERENCES "MediaType" ("MediaTypeId"), FOREIGN KEY("GenreId") REFERENCES "Genre" ("GenreId"), FOREIGN KEY("AlbumId") REFERENCES "Album" ("AlbumId") ) SELECT * FROM 'Track' LIMIT 3; TrackId Name AlbumId MediaTypeId GenreId Composer Milliseconds Bytes UnitPrice 1 For Those About To Rock (We Salute You) 1 1 1 Angus Young, Malcolm Young, Brian Johnson 343719 11170334 0.99 2 Balls to the Wall 2 2 1 None 342562 5510424 0.99 3 Fast As a Shark 3 2 1 F. Baltes, S. Kaufman, U. Dirkscneider & W. Hoffman 230619 3990994 0.99 CREATE TABLE "InvoiceLine" ( "InvoiceLineId" INTEGER NOT NULL, "InvoiceId" INTEGER NOT NULL,
https://langchain.readthedocs.io/en/latest/modules/agents/agent_toolkits/sql_database.html
dd6daa9d75f6-9
"InvoiceId" INTEGER NOT NULL, "TrackId" INTEGER NOT NULL, "UnitPrice" NUMERIC(10, 2) NOT NULL, "Quantity" INTEGER NOT NULL, PRIMARY KEY ("InvoiceLineId"), FOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), FOREIGN KEY("InvoiceId") REFERENCES "Invoice" ("InvoiceId") ) SELECT * FROM 'InvoiceLine' LIMIT 3; InvoiceLineId InvoiceId TrackId UnitPrice Quantity 1 1 2 0.99 1 2 1 4 0.99 1 3 2 6 0.99 1 Thought: I should query the database to get the top 3 best selling artists. Action: query_sql_db Action Input: SELECT Artist.Name, SUM(InvoiceLine.Quantity) AS TotalQuantity FROM Artist INNER JOIN Track ON Artist.ArtistId = Track.ArtistId INNER JOIN InvoiceLine ON Track.TrackId = InvoiceLine.TrackId GROUP BY Artist.Name ORDER BY TotalQuantity DESC LIMIT 3 Observation: Error: (sqlite3.OperationalError) no such column: Track.ArtistId [SQL: SELECT Artist.Name, SUM(InvoiceLine.Quantity) AS TotalQuantity FROM Artist INNER JOIN Track ON Artist.ArtistId = Track.ArtistId INNER JOIN InvoiceLine ON Track.TrackId = InvoiceLine.TrackId GROUP BY Artist.Name ORDER BY TotalQuantity DESC LIMIT 3] (Background on this error at: https://sqlalche.me/e/14/e3q8) Thought: I should double check my query before executing it. Action: query_checker_sql_db
https://langchain.readthedocs.io/en/latest/modules/agents/agent_toolkits/sql_database.html
dd6daa9d75f6-10
Action Input: SELECT Artist.Name, SUM(InvoiceLine.Quantity) AS TotalQuantity FROM Artist INNER JOIN Track ON Artist.ArtistId = Track.ArtistId INNER JOIN InvoiceLine ON Track.TrackId = InvoiceLine.TrackId GROUP BY Artist.Name ORDER BY TotalQuantity DESC LIMIT 3
Observation: SELECT Artist.Name, SUM(InvoiceLine.Quantity) AS TotalQuantity FROM Artist INNER JOIN Track ON Artist.ArtistId = Track.ArtistId INNER JOIN InvoiceLine ON Track.TrackId = InvoiceLine.TrackId GROUP BY Artist.Name ORDER BY TotalQuantity DESC LIMIT 3;
Thought: I now know the final answer.
Action: query_sql_db
Action Input: SELECT Artist.Name, SUM(InvoiceLine.Quantity) AS TotalQuantity FROM Artist INNER JOIN Album ON Artist.ArtistId = Album.ArtistId INNER JOIN Track ON Album.AlbumId = Track.AlbumId INNER JOIN InvoiceLine ON Track.TrackId = InvoiceLine.TrackId GROUP BY Artist.Name ORDER BY TotalQuantity DESC LIMIT 3
Observation: [('Iron Maiden', 140), ('U2', 107), ('Metallica', 91)]
Thought: I now know the final answer.
Final Answer: The top 3 best selling artists are Iron Maiden, U2, and Metallica.

> Finished chain.

'The top 3 best selling artists are Iron Maiden, U2, and Metallica.'
https://langchain.readthedocs.io/en/latest/modules/agents/agent_toolkits/sql_database.html
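A note on setup: the chunks above reference an agent_executor whose construction is not shown in this excerpt. As a minimal sketch, assuming the SQL toolkit interfaces of this era of LangChain (SQLDatabase, SQLDatabaseToolkit, create_sql_agent) and a local Chinook SQLite file, the agent could be built like this:

# Minimal sketch of how the agent_executor used above could be constructed.
# Assumes a local Chinook.db SQLite file; adjust the URI to your database.
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
toolkit = SQLDatabaseToolkit(db=db)

agent_executor = create_sql_agent(
    llm=OpenAI(temperature=0),  # deterministic output works best for tool use
    toolkit=toolkit,
    verbose=True,
)

agent_executor.run("Who are the top 3 best selling artists?")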
0cb7cf374e77-0
Key Concepts#

Python REPL#
Sometimes, for complex calculations, rather than have an LLM generate the answer directly, it can be better to have the LLM generate code to calculate the answer, and then run that code to get the answer. To make that easy, we provide a simple Python REPL in which to execute commands. This interface will only return things that are printed - therefore, if you want to use it to calculate an answer, make sure to have it print out the answer.

Bash#
It can often be useful to have an LLM generate bash commands, and then run them. A common use case for this is letting the LLM interact with your local file system. We provide an easy component to execute bash commands.

Requests Wrapper#
The web contains a lot of information that LLMs do not have access to. To easily let LLMs interact with that information, we provide a wrapper around the Python Requests module that takes in a URL and fetches data from that URL.

Google Search#
This uses the official Google Search API to look up information on the web.

SerpAPI#
This uses SerpAPI, a third-party search API engine, to interact with Google Search.

Searx Search#
This uses the Searx (SearxNG fork) meta search engine API to look up information on the web. It supports 139 search engines and is easy to self-host, which makes it a good choice for privacy-conscious users.
https://langchain.readthedocs.io/en/latest/modules/utils/key_concepts.html
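The Python REPL concept above is a two-step pattern: the LLM writes code, and the REPL executes it. A minimal sketch of that loop, assuming the LLMChain, PromptTemplate, and PythonREPL interfaces used elsewhere in these docs (the prompt wording is just an illustration):

from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.utilities import PythonREPL

# Step 1: have the LLM generate code that prints the answer.
prompt = PromptTemplate(
    input_variables=["question"],
    template="Write Python code that prints the answer to the question: {question}\nCode:",
)
code_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
code = code_chain.run("What is 3 to the 10th power?")

# Step 2: run the generated code; only printed output is returned.
print(PythonREPL().run(code))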
43b06cff9e46-0
Generic Utilities#
LangChain provides integrations for a lot of different utilities. These guides go over how to use them. The utilities listed here are all generic utilities.

Bash: How to use a bash wrapper to execute bash commands.
Python REPL: How to use a Python wrapper to execute python commands.
Requests: How to use a requests wrapper to interact with the web.
Google Search: How to use the google search wrapper to search the web.
SerpAPI: How to use the SerpAPI wrapper to search the web.
SearxNG Search API: How to use the SearxNG meta search wrapper to search the web.
Bing Search: How to use the Bing search wrapper to search the web.
Wolfram Alpha: How to use the Wolfram Alpha wrapper to interact with Wolfram Alpha.
https://langchain.readthedocs.io/en/latest/modules/utils/how_to_guides.html
6128c93d3b7d-0
Zapier Natural Language Actions API#
Full docs here: https://nla.zapier.com/api/v1/docs

Zapier Natural Language Actions gives you access to the 5k+ apps and 20k+ actions on Zapier’s platform through a natural language API interface. NLA supports apps like Gmail, Salesforce, Trello, Slack, Asana, HubSpot, Google Sheets, Microsoft Teams, and thousands more: https://zapier.com/apps

Zapier NLA handles ALL the underlying API auth and translation from natural language –> underlying API call –> return simplified output for LLMs. The key idea is that you, or your users, expose a set of actions via an oauth-like setup window, which you can then query and execute via a REST API.

NLA offers both API Key and OAuth for signing NLA API requests.

Server-side (API Key): for quickly getting started, testing, and production scenarios where LangChain will only use actions exposed in the developer’s Zapier account (and will use the developer’s connected accounts on Zapier.com)

User-facing (OAuth): for production scenarios where you are deploying an end-user facing application and LangChain needs access to the end-user’s exposed actions and connected accounts on Zapier.com

This quick start will focus on the server-side use case for brevity. Review full docs or reach out to nla@zapier.com for user-facing OAuth developer support.

This example goes over how to use the Zapier integration with an Agent, then with a SimpleSequentialChain. In code, below:

%load_ext autoreload
%autoreload 2

import os

# get from https://platform.openai.com/
https://langchain.readthedocs.io/en/latest/modules/utils/examples/zapier.html
6128c93d3b7d-1
os.environ["OPENAI_API_KEY"] = os.environ.get("OPENAI_API_KEY", "")

# get from https://nla.zapier.com/demo/provider/debug (under User Information, after logging in):
os.environ["ZAPIER_NLA_API_KEY"] = os.environ.get("ZAPIER_NLA_API_KEY", "")

Example with Agent#
Zapier tools can be used with an agent. See the example below.

from langchain.llms import OpenAI
from langchain.agents import initialize_agent
from langchain.agents.agent_toolkits import ZapierToolkit
from langchain.utilities.zapier import ZapierNLAWrapper

## step 0. expose gmail 'find email' and slack 'send channel message' actions

# first go here, log in, expose (enable) the two actions: https://nla.zapier.com/demo/start -- for this example, can leave all fields "Have AI guess"
# in an oauth scenario, you'd get your own <provider> id (instead of 'demo') which you route your users through first

llm = OpenAI(temperature=0)
zapier = ZapierNLAWrapper()
toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)
agent = initialize_agent(toolkit.get_tools(), llm, agent="zero-shot-react-description", verbose=True)

agent.run("Summarize the last email I received regarding Silicon Valley Bank. Send the summary to the #test-zapier channel in slack.")

> Entering new AgentExecutor chain...
I need to find the email and summarize it.
Action: Gmail: Find Email
Action Input: Find the latest email from Silicon Valley Bank
https://langchain.readthedocs.io/en/latest/modules/utils/examples/zapier.html
6128c93d3b7d-2
Observation: {"from__name": "Silicon Valley Bridge Bank, N.A.", "from__email": "sreply@svb.com", "body_plain": "Dear Clients, After chaotic, tumultuous & stressful days, we have clarity on path for SVB, FDIC is fully insuring all deposits & have an ask for clients & partners as we rebuild. Tim Mayopoulos <https://eml.svb.com/NjEwLUtBSy0yNjYAAAGKgoxUeBCLAyF_NxON97X4rKEaNBLG", "reply_to__email": "sreply@svb.com", "subject": "Meet the new CEO Tim Mayopoulos", "date": "Tue, 14 Mar 2023 23:42:29 -0500 (CDT)", "message_url": "https://mail.google.com/mail/u/0/#inbox/186e393b13cfdf0a", "attachment_count": "0", "to__emails": "ankush@langchain.dev", "message_id": "186e393b13cfdf0a", "labels": "IMPORTANT, CATEGORY_UPDATES, INBOX"}
Thought: I need to summarize the email and send it to the #test-zapier channel in Slack.
Action: Slack: Send Channel Message
Action Input: Send a slack message to the #test-zapier channel with the text "Silicon Valley Bank has announced that Tim Mayopoulos is the new CEO. FDIC is fully insuring all deposits and they have an ask for clients and partners as they rebuild."
https://langchain.readthedocs.io/en/latest/modules/utils/examples/zapier.html
6128c93d3b7d-3
Observation: {"message__text": "Silicon Valley Bank has announced that Tim Mayopoulos is the new CEO. FDIC is fully insuring all deposits and they have an ask for clients and partners as they rebuild.", "message__permalink": "https://langchain.slack.com/archives/C04TSGU0RA7/p1678859932375259", "channel": "C04TSGU0RA7", "message__bot_profile__name": "Zapier", "message__team": "T04F8K3FZB5", "message__bot_id": "B04TRV4R74K", "message__bot_profile__deleted": "false", "message__bot_profile__app_id": "A024R9PQM", "ts_time": "2023-03-15T05:58:52Z", "message__bot_profile__icons__image_36": "https://avatars.slack-edge.com/2022-08-02/3888649620612_f864dc1bb794cf7d82b0_36.png", "message__blocks[]block_id": "kdZZ", "message__blocks[]elements[]type": "['rich_text_section']"} Thought: I now know the final answer. Final Answer: I have sent a summary of the last email from Silicon Valley Bank to the #test-zapier channel in Slack. > Finished chain. 'I have sent a summary of the last email from Silicon Valley Bank to the #test-zapier channel in Slack.' Example with SimpleSequentialChain# If you need more explicit control, use a chain, like below. from langchain.llms import OpenAI from langchain.chains import LLMChain, TransformChain, SimpleSequentialChain from langchain.prompts import PromptTemplate from langchain.tools.zapier.tool import ZapierNLARunAction
https://langchain.readthedocs.io/en/latest/modules/utils/examples/zapier.html
6128c93d3b7d-4
from langchain.utilities.zapier import ZapierNLAWrapper

## step 0. expose gmail 'find email' and slack 'send direct message' actions

# first go here, log in, expose (enable) the two actions: https://nla.zapier.com/demo/start -- for this example, can leave all fields "Have AI guess"
# in an oauth scenario, you'd get your own <provider> id (instead of 'demo') which you route your users through first

actions = ZapierNLAWrapper().list()

## step 1. gmail find email

GMAIL_SEARCH_INSTRUCTIONS = "Grab the latest email from Silicon Valley Bank"

def nla_gmail(inputs):
    action = next((a for a in actions if a["description"].startswith("Gmail: Find Email")), None)
    return {"email_data": ZapierNLARunAction(action_id=action["id"], zapier_description=action["description"], params_schema=action["params"]).run(inputs["instructions"])}

gmail_chain = TransformChain(input_variables=["instructions"], output_variables=["email_data"], transform=nla_gmail)

## step 2. generate draft reply

template = """You are an assistant who drafts replies to an incoming email. Output draft reply in plain text (not JSON).

Incoming email:
{email_data}

Draft email reply:"""

prompt_template = PromptTemplate(input_variables=["email_data"], template=template)
reply_chain = LLMChain(llm=OpenAI(temperature=.7), prompt=prompt_template)

## step 3. send draft reply via a slack direct message

SLACK_HANDLE = "@Ankush Gola"

def nla_slack(inputs):
https://langchain.readthedocs.io/en/latest/modules/utils/examples/zapier.html
6128c93d3b7d-5
SLACK_HANDLE = "@Ankush Gola" def nla_slack(inputs): action = next((a for a in actions if a["description"].startswith("Slack: Send Direct Message")), None) instructions = f'Send this to {SLACK_HANDLE} in Slack: {inputs["draft_reply"]}' return {"slack_data": ZapierNLARunAction(action_id=action["id"], zapier_description=action["description"], params_schema=action["params"]).run(instructions)} slack_chain = TransformChain(input_variables=["draft_reply"], output_variables=["slack_data"], transform=nla_slack) ## finally, execute overall_chain = SimpleSequentialChain(chains=[gmail_chain, reply_chain, slack_chain], verbose=True) overall_chain.run(GMAIL_SEARCH_INSTRUCTIONS) > Entering new SimpleSequentialChain chain...
https://langchain.readthedocs.io/en/latest/modules/utils/examples/zapier.html
6128c93d3b7d-6
{"from__name": "Silicon Valley Bridge Bank, N.A.", "from__email": "sreply@svb.com", "body_plain": "Dear Clients, After chaotic, tumultuous & stressful days, we have clarity on path for SVB, FDIC is fully insuring all deposits & have an ask for clients & partners as we rebuild. Tim Mayopoulos <https://eml.svb.com/NjEwLUtBSy0yNjYAAAGKgoxUeBCLAyF_NxON97X4rKEaNBLG", "reply_to__email": "sreply@svb.com", "subject": "Meet the new CEO Tim Mayopoulos", "date": "Tue, 14 Mar 2023 23:42:29 -0500 (CDT)", "message_url": "https://mail.google.com/mail/u/0/#inbox/186e393b13cfdf0a", "attachment_count": "0", "to__emails": "ankush@langchain.dev", "message_id": "186e393b13cfdf0a", "labels": "IMPORTANT, CATEGORY_UPDATES, INBOX"}

Dear Silicon Valley Bridge Bank,

Thank you for your email and the update regarding your new CEO Tim Mayopoulos. We appreciate your dedication to keeping your clients and partners informed and we look forward to continuing our relationship with you.

Best regards,
[Your Name]
https://langchain.readthedocs.io/en/latest/modules/utils/examples/zapier.html
6128c93d3b7d-7
{"message__text": "Dear Silicon Valley Bridge Bank, \n\nThank you for your email and the update regarding your new CEO Tim Mayopoulos. We appreciate your dedication to keeping your clients and partners informed and we look forward to continuing our relationship with you. \n\nBest regards, \n[Your Name]", "message__permalink": "https://langchain.slack.com/archives/D04TKF5BBHU/p1678859968241629", "channel": "D04TKF5BBHU", "message__bot_profile__name": "Zapier", "message__team": "T04F8K3FZB5", "message__bot_id": "B04TRV4R74K", "message__bot_profile__deleted": "false", "message__bot_profile__app_id": "A024R9PQM", "ts_time": "2023-03-15T05:59:28Z", "message__blocks[]block_id": "p7i", "message__blocks[]elements[]elements[]type": "[['text']]", "message__blocks[]elements[]type": "['rich_text_section']"}

> Finished chain.
https://langchain.readthedocs.io/en/latest/modules/utils/examples/zapier.html
6128c93d3b7d-8
'{"message__text": "Dear Silicon Valley Bridge Bank, \\n\\nThank you for your email and the update regarding your new CEO Tim Mayopoulos. We appreciate your dedication to keeping your clients and partners informed and we look forward to continuing our relationship with you. \\n\\nBest regards, \\n[Your Name]", "message__permalink": "https://langchain.slack.com/archives/D04TKF5BBHU/p1678859968241629", "channel": "D04TKF5BBHU", "message__bot_profile__name": "Zapier", "message__team": "T04F8K3FZB5", "message__bot_id": "B04TRV4R74K", "message__bot_profile__deleted": "false", "message__bot_profile__app_id": "A024R9PQM", "ts_time": "2023-03-15T05:59:28Z", "message__blocks[]block_id": "p7i", "message__blocks[]elements[]elements[]type": "[[\'text\']]", "message__blocks[]elements[]type": "[\'rich_text_section\']"}'
https://langchain.readthedocs.io/en/latest/modules/utils/examples/zapier.html
faf4ea7a7012-0
Python REPL#
Sometimes, for complex calculations, rather than have an LLM generate the answer directly, it can be better to have the LLM generate code to calculate the answer, and then run that code to get the answer. To make that easy, we provide a simple Python REPL in which to execute commands. This interface will only return things that are printed - therefore, if you want to use it to calculate an answer, make sure to have it print out the answer.

from langchain.utilities import PythonREPL

python_repl = PythonREPL()
python_repl.run("print(1+1)")

'2\n'
https://langchain.readthedocs.io/en/latest/modules/utils/examples/python.html
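Since only printed output is captured, a bare expression should come back as an empty string. A small sketch making that concrete, using the same python_repl as above:

python_repl.run("1 + 1")         # returns '' - the expression is evaluated, but nothing is printed
python_repl.run("print(1 + 1)")  # returns '2\n'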
0b6d88463c54-0
Bash#
It can often be useful to have an LLM generate bash commands, and then run them. A common use case for this is letting the LLM interact with your local file system. We provide an easy util to execute bash commands.

from langchain.utilities import BashProcess

bash = BashProcess()
print(bash.run("ls"))

bash.ipynb
google_search.ipynb
python.ipynb
requests.ipynb
serpapi.ipynb
https://langchain.readthedocs.io/en/latest/modules/utils/examples/bash.html
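Assuming the wrapper hands the command string through to a shell (worth verifying for your version), pipelines and chained commands work as well. A small sketch using the same bash object as above; the commands are just illustrations:

# Pipelines and && chains are passed through to the shell.
print(bash.run("ls | wc -l"))
print(bash.run("mkdir -p /tmp/langchain_demo && echo created"))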
ecb3d9b48a55-0
SearxNG Search API#
This notebook goes over how to use a self-hosted SearxNG search API to search the web. You can check this link for more information about Searx API parameters.

import pprint
from langchain.utilities import SearxSearchWrapper

search = SearxSearchWrapper(searx_host="http://127.0.0.1:8888")

For some engines, if a direct answer is available the wrapper will print the answer instead of the full list of search results. You can use the results method of the wrapper if you want to obtain all the results.

search.run("What is the capital of France")

'Paris is the capital of France, the largest country of Europe with 550 000 km2 (65 millions inhabitants). Paris has 2.234 million inhabitants end 2011. She is the core of Ile de France region (12 million people).'

Custom Parameters#
SearxNG supports up to 139 search engines. You can also customize the Searx wrapper with arbitrary named parameters that will be passed to the Searx search API. In the example below we make more interesting use of custom search parameters from the Searx search API.

In this example we will be using the engines parameter to query wikipedia.

search = SearxSearchWrapper(searx_host="http://127.0.0.1:8888", k=5) # k is for max number of items
search.run("large language model ", engines=['wiki'])
https://langchain.readthedocs.io/en/latest/modules/utils/examples/searx_search.html
ecb3d9b48a55-1
search.run("large language model ", engines=['wiki']) 'Large language models (LLMs) represent a major advancement in AI, with the promise of transforming domains through learned knowledge. LLM sizes have been increasing 10X every year for the last few years, and as these models grow in complexity and size, so do their capabilities.\n\nGPT-3 can translate language, write essays, generate computer code, and more — all with limited to no supervision. In July 2020, OpenAI unveiled GPT-3, a language model that was easily the largest known at the time. Put simply, GPT-3 is trained to predict the next word in a sentence, much like how a text message autocomplete feature works.\n\nA large language model, or LLM, is a deep learning algorithm that can recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive datasets. Large language models are among the most successful applications of transformer models.\n\nAll of today’s well-known language models—e.g., GPT-3 from OpenAI, PaLM or LaMDA from Google, Galactica or OPT from Meta, Megatron-Turing from Nvidia/Microsoft, Jurassic-1 from AI21 Labs—are...\n\nLarge language models (LLMs) such as GPT-3are increasingly being used to generate text. These tools should be used with care, since they can generate content that is biased, non-verifiable, constitutes original research, or violates copyrights.' Passing other Searx parameters for searx like language search = SearxSearchWrapper(searx_host="http://127.0.0.1:8888", k=1) search.run("deep learning", language='es', engines=['wiki'])
https://langchain.readthedocs.io/en/latest/modules/utils/examples/searx_search.html
ecb3d9b48a55-2
search.run("deep learning", language='es', engines=['wiki']) 'Aprendizaje profundo (en inglés, deep learning) es un conjunto de algoritmos de aprendizaje automático (en inglés, machine learning) que intenta modelar abstracciones de alto nivel en datos usando arquitecturas computacionales que admiten transformaciones no lineales múltiples e iterativas de datos expresados en forma matricial o tensorial. 1' Obtaining results with metadata# In this example we will be looking for scientific paper using the categories parameter and limiting the results to a time_range (not all engines support the time range option). We also would like to obtain the results in a structured way including metadata. For this we will be using the results method of the wrapper. search = SearxSearchWrapper(searx_host="http://127.0.0.1:8888") results = search.results("Large Language Model prompt", num_results=5, categories='science', time_range='year') pprint.pp(results) [{'snippet': '… on natural language instructions, large language models (… the ' 'prompt used to steer the model, and most effective prompts … to ' 'prompt engineering, we propose Automatic Prompt …', 'title': 'Large language models are human-level prompt engineers', 'link': 'https://arxiv.org/abs/2211.01910', 'engines': ['google scholar'], 'category': 'science'}, {'snippet': '… Large language models (LLMs) have introduced new possibilities ' 'for prototyping with AI [18]. Pre-trained on a large amount of ' 'text data, models … language instructions called prompts. …', 'title': 'Promptchainer: Chaining large language model prompts through '
https://langchain.readthedocs.io/en/latest/modules/utils/examples/searx_search.html
ecb3d9b48a55-3
  'link': 'https://dl.acm.org/doi/abs/10.1145/3491101.3519729',
  'engines': ['google scholar'],
  'category': 'science'},
 {'snippet': '… can introspect the large prompt model. We derive the view ϕ0(X) and the model h0 from T01. However, instead of fully fine-tuning T0 during co-training, we focus on soft prompt tuning, …',
  'title': 'Co-training improves prompt-based learning for large language models',
  'link': 'https://proceedings.mlr.press/v162/lang22a.html',
  'engines': ['google scholar'],
  'category': 'science'},
 {'snippet': '… With the success of large language models (LLMs) of code and their use as … prompt design process become important. In this work, we propose a framework called Repo-Level Prompt …',
  'title': 'Repository-level prompt generation for large language models of code',
  'link': 'https://arxiv.org/abs/2206.12839',
  'engines': ['google scholar'],
  'category': 'science'},
 {'snippet': '… Figure 2 | The benefits of different components of a prompt for the largest language model (Gopher), as estimated from hierarchical logistic regression. Each point estimates the unique …',
  'title': 'Can language models learn from explanations in context?',
  'link': 'https://arxiv.org/abs/2204.02329',
https://langchain.readthedocs.io/en/latest/modules/utils/examples/searx_search.html
ecb3d9b48a55-4
  'engines': ['google scholar'],
  'category': 'science'}]

Get papers from arxiv:

results = search.results("Large Language Model prompt", num_results=5, engines=['arxiv'])
pprint.pp(results)

[{'snippet': 'Thanks to the advanced improvement of large pre-trained language models, prompt-based fine-tuning is shown to be effective on a variety of downstream tasks. Though many prompting methods have been investigated, it remains unknown which type of prompts are the most effective among three types of prompts (i.e., human-designed prompts, schema prompts and null prompts). In this work, we empirically compare the three types of prompts under both few-shot and fully-supervised settings. Our experimental results show that schema prompts are the most effective in general. Besides, the performance gaps tend to diminish when the scale of training data grows large.',
  'title': 'Do Prompts Solve NLP Tasks Using Natural Language?',
  'link': 'http://arxiv.org/abs/2203.00902v1',
  'engines': ['arxiv'],
  'category': 'science'},
 {'snippet': 'Cross-prompt automated essay scoring (AES) requires the system to use non target-prompt essays to award scores to a target-prompt essay. Since obtaining a large quantity of pre-graded essays to a particular prompt is often difficult and unrealistic, the task of cross-prompt AES is vital for the development of real-world AES systems, yet it remains an
https://langchain.readthedocs.io/en/latest/modules/utils/examples/searx_search.html
ecb3d9b48a55-5
under-explored area of research. Models designed for prompt-specific AES rely heavily on prompt-specific knowledge and perform poorly in the cross-prompt setting, whereas current approaches to cross-prompt AES either require a certain quantity of labelled target-prompt essays or require a large quantity of unlabelled target-prompt essays to perform transfer learning in a multi-step manner. To address these issues, we introduce Prompt Agnostic Essay Scorer (PAES) for cross-prompt AES. Our method requires no access to labelled or unlabelled target-prompt data during training and is a single-stage approach. PAES is easy to apply in practice and achieves state-of-the-art performance on the Automated Student Assessment Prize (ASAP) dataset.',
  'title': 'Prompt Agnostic Essay Scorer: A Domain Generalization Approach to Cross-prompt Automated Essay Scoring',
  'link': 'http://arxiv.org/abs/2008.01441v1',
  'engines': ['arxiv'],
  'category': 'science'},
 {'snippet': 'Research on prompting has shown excellent performance with little or even no supervised training across many tasks. However, prompting for machine translation is still under-explored in the literature. We fill this gap by offering a systematic study on prompting strategies for translation, examining various factors for prompt template and demonstration example selection. We further explore the use of monolingual
https://langchain.readthedocs.io/en/latest/modules/utils/examples/searx_search.html
ecb3d9b48a55-6
data and the feasibility of cross-lingual, cross-domain, and sentence-to-document transfer learning in prompting. Extensive experiments with GLM-130B (Zeng et al., 2022) as the testbed show that 1) the number and the quality of prompt examples matter, where using suboptimal examples degenerates translation; 2) several features of prompt examples, such as semantic similarity, show significant Spearman correlation with their prompting performance; yet, none of the correlations are strong enough; 3) using pseudo parallel prompt examples constructed from monolingual data via zero-shot prompting could improve translation; and 4) improved performance is achievable by transferring knowledge from prompt examples selected in other settings. We finally provide an analysis on the model outputs and discuss several problems that prompting still suffers from.',
  'title': 'Prompting Large Language Model for Machine Translation: A Case Study',
  'link': 'http://arxiv.org/abs/2301.07069v2',
  'engines': ['arxiv'],
  'category': 'science'},
 {'snippet': 'Large language models can perform new tasks in a zero-shot fashion, given natural language prompts that specify the desired behavior. Such prompts are typically hand engineered, but can also be learned with gradient-based methods from labeled data. However, it is underexplored what factors make the prompts effective, especially when the prompts are natural language. In
https://langchain.readthedocs.io/en/latest/modules/utils/examples/searx_search.html
ecb3d9b48a55-7
this paper, we investigate common attributes shared by effective prompts. We first propose a human readable prompt tuning method (F LUENT P ROMPT) based on Langevin dynamics that incorporates a fluency constraint to find a diverse distribution of effective and fluent prompts. Our analysis reveals that effective prompts are topically related to the task domain and calibrate the prior probability of label words. Based on these findings, we also propose a method for generating prompts using only unlabeled data, outperforming strong baselines by an average of 7.0% accuracy across three tasks.',
  'title': "Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too?",
  'link': 'http://arxiv.org/abs/2212.10539v1',
  'engines': ['arxiv'],
  'category': 'science'},
 {'snippet': "Prevailing methods for mapping large generative language models to supervised tasks may fail to sufficiently probe models' novel capabilities. Using GPT-3 as a case study, we show that 0-shot prompts can significantly outperform few-shot prompts. We suggest that the function of few-shot examples in these cases is better described as locating an already learned task rather than meta-learning. This analysis motivates rethinking the role of prompts in controlling and evaluating powerful language models. In this work, we discuss methods of prompt programming,
https://langchain.readthedocs.io/en/latest/modules/utils/examples/searx_search.html
ecb3d9b48a55-8
emphasizing the usefulness of considering prompts through the lens of natural language. We explore techniques for exploiting the capacity of narratives and cultural anchors to encode nuanced intentions and techniques for encouraging deconstruction of a problem into components before producing a verdict. Informed by this more encompassing theory of prompt programming, we also introduce the idea of a metaprompt that seeds the model to generate its own natural language prompts for a range of tasks. Finally, we discuss how these more general methods of interacting with language models can be incorporated into existing and future benchmarks and practical applications.",
  'title': 'Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm',
  'link': 'http://arxiv.org/abs/2102.07350v1',
  'engines': ['arxiv'],
  'category': 'science'}]

In this example we query for large language models under the it category. We then filter the results that come from github.

results = search.results("large language model", num_results = 20, categories='it')
pprint.pp(list(filter(lambda r: r['engines'][0] == 'github', results)))

[{'snippet': 'Guide to using pre-trained large language models of source code',
  'title': 'Code-LMs',
  'link': 'https://github.com/VHellendoorn/Code-LMs',
  'engines': ['github'],
  'category': 'it'},
 {'snippet': 'Dramatron uses large language models to generate coherent scripts and screenplays.',
https://langchain.readthedocs.io/en/latest/modules/utils/examples/searx_search.html
ecb3d9b48a55-9
  'title': 'dramatron',
  'link': 'https://github.com/deepmind/dramatron',
  'engines': ['github'],
  'category': 'it'}]

We could also directly query for results from github and other source forges.

results = search.results("large language model", num_results = 20, engines=['github', 'gitlab'])
pprint.pp(results)

[{'snippet': "Implementation of 'A Watermark for Large Language Models' paper by Kirchenbauer & Geiping et. al.",
  'title': 'Peutlefaire / LMWatermark',
  'link': 'https://gitlab.com/BrianPulfer/LMWatermark',
  'engines': ['gitlab'],
  'category': 'it'},
 {'snippet': 'Guide to using pre-trained large language models of source code',
  'title': 'Code-LMs',
  'link': 'https://github.com/VHellendoorn/Code-LMs',
  'engines': ['github'],
  'category': 'it'},
 {'snippet': '',
  'title': 'Simen Burud / Large-scale Language Models for Conversational Speech Recognition',
  'link': 'https://gitlab.com/BrianPulfer',
  'engines': ['gitlab'],
  'category': 'it'},
 {'snippet': 'Dramatron uses large language models to generate coherent scripts and screenplays.',
  'title': 'dramatron',
  'link': 'https://github.com/deepmind/dramatron',
  'engines': ['github'],
  'category': 'it'},
https://langchain.readthedocs.io/en/latest/modules/utils/examples/searx_search.html
ecb3d9b48a55-10
 {'snippet': 'Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"',
  'title': 'LoRA',
  'link': 'https://github.com/microsoft/LoRA',
  'engines': ['github'],
  'category': 'it'},
 {'snippet': 'Code for the paper "Evaluating Large Language Models Trained on Code"',
  'title': 'human-eval',
  'link': 'https://github.com/openai/human-eval',
  'engines': ['github'],
  'category': 'it'},
 {'snippet': 'A trend starts from "Chain of Thought Prompting Elicits Reasoning in Large Language Models".',
  'title': 'Chain-of-ThoughtsPapers',
  'link': 'https://github.com/Timothyxxx/Chain-of-ThoughtsPapers',
  'engines': ['github'],
  'category': 'it'},
 {'snippet': 'Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging Face 🤗 Transformers.',
  'title': 'mistral',
  'link': 'https://github.com/stanford-crfm/mistral',
  'engines': ['github'],
  'category': 'it'},
 {'snippet': 'A prize for finding tasks that cause large language models to show inverse scaling',
  'title': 'prize',
  'link': 'https://github.com/inverse-scaling/prize',
https://langchain.readthedocs.io/en/latest/modules/utils/examples/searx_search.html
ecb3d9b48a55-11
  'engines': ['github'],
  'category': 'it'},
 {'snippet': 'Optimus: the first large-scale pre-trained VAE language model',
  'title': 'Optimus',
  'link': 'https://github.com/ChunyuanLI/Optimus',
  'engines': ['github'],
  'category': 'it'},
 {'snippet': 'Seminar on Large Language Models (COMP790-101 at UNC Chapel Hill, Fall 2022)',
  'title': 'llm-seminar',
  'link': 'https://github.com/craffel/llm-seminar',
  'engines': ['github'],
  'category': 'it'},
 {'snippet': 'A central, open resource for data and tools related to chain-of-thought reasoning in large language models. Developed @ Samwald research group: https://samwald.info/',
  'title': 'ThoughtSource',
  'link': 'https://github.com/OpenBioLink/ThoughtSource',
  'engines': ['github'],
  'category': 'it'},
 {'snippet': 'A comprehensive list of papers using large language/multi-modal models for Robotics/RL, including papers, codes, and related websites',
  'title': 'Awesome-LLM-Robotics',
  'link': 'https://github.com/GT-RIPL/Awesome-LLM-Robotics',
  'engines': ['github'],
  'category': 'it'},
 {'snippet': 'Tools for curating biomedical training data for large-scale language modeling',
  'title': 'biomedical',
  'link': 'https://github.com/bigscience-workshop/biomedical',
https://langchain.readthedocs.io/en/latest/modules/utils/examples/searx_search.html
ecb3d9b48a55-12
  'engines': ['github'],
  'category': 'it'},
 {'snippet': 'ChatGPT @ Home: Large Language Model (LLM) chatbot application, written by ChatGPT',
  'title': 'ChatGPT-at-Home',
  'link': 'https://github.com/Sentdex/ChatGPT-at-Home',
  'engines': ['github'],
  'category': 'it'},
 {'snippet': 'Design and Deploy Large Language Model Apps',
  'title': 'dust',
  'link': 'https://github.com/dust-tt/dust',
  'engines': ['github'],
  'category': 'it'},
 {'snippet': 'Polyglot: Large Language Models of Well-balanced Competence in Multi-languages',
  'title': 'polyglot',
  'link': 'https://github.com/EleutherAI/polyglot',
  'engines': ['github'],
  'category': 'it'},
 {'snippet': 'Code release for "Learning Video Representations from Large Language Models"',
  'title': 'LaViLa',
  'link': 'https://github.com/facebookresearch/LaViLa',
  'engines': ['github'],
  'category': 'it'},
 {'snippet': 'SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models',
  'title': 'smoothquant',
  'link': 'https://github.com/mit-han-lab/smoothquant',
https://langchain.readthedocs.io/en/latest/modules/utils/examples/searx_search.html
ecb3d9b48a55-13
  'engines': ['github'],
  'category': 'it'},
 {'snippet': 'This repository contains the code, data, and models of the paper titled "XL-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages" published in Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021.',
  'title': 'xl-sum',
  'link': 'https://github.com/csebuetnlp/xl-sum',
  'engines': ['github'],
  'category': 'it'}]
https://langchain.readthedocs.io/en/latest/modules/utils/examples/searx_search.html
bb782251a688-0
Google Search#
This notebook goes over how to use the google search component.

First, you need to set up the proper API keys and environment variables. To set it up, follow the instructions found here. Then we will need to set some environment variables.

import os
os.environ["GOOGLE_CSE_ID"] = ""
os.environ["GOOGLE_API_KEY"] = ""

from langchain.utilities import GoogleSearchAPIWrapper

search = GoogleSearchAPIWrapper()
search.run("Obama's first name?")
https://langchain.readthedocs.io/en/latest/modules/utils/examples/google_search.html
bb782251a688-1
'1 Child\'s First Name. 2. 6. 7d. Street Address. 71. (Type or print). BARACK. Sex. 3. This Birth. 4. If Twin or Triplet,. Was Child Born. Barack Hussein Obama II is an American retired politician who served as the 44th president of the United States from 2009 to 2017. His full name is Barack Hussein Obama II. Since the “II” is simply because he was named for his father, his last name is Obama. Feb 9, 2015 ... Michael Jordan misspelled Barack Obama\'s first name on 50th-birthday gift ... Knowing Obama is a Chicagoan and huge basketball fan,\xa0... Aug 18, 2017 ... It took him several seconds and multiple clues to remember former President Barack Obama\'s first name. Miller knew that every answer had to end\xa0... First Lady Michelle LaVaughn Robinson Obama is a lawyer, writer, and the wife of the 44th President, Barack Obama. She is the first African-American First\xa0... Barack Obama, in full Barack Hussein Obama II, (born August 4, 1961, Honolulu, Hawaii, U.S.), 44th president of the United States (2009–17) and the first\xa0... When Barack Obama was elected president in 2008, he became the first African American to hold ... The Middle East remained a key foreign policy challenge. Feb 27, 2020 ... President Barack Obama was born Barack Hussein Obama, II, as shown here on his birth certificate here . As reported by Reuters here , his\xa0... Jan 16, 2007 ... 4, 1961, in Honolulu. His first name means "one who is blessed" in Swahili. While Obama\'s father, Barack Hussein Obama Sr., was from Kenya, his\xa0...'
https://langchain.readthedocs.io/en/latest/modules/utils/examples/google_search.html
bb782251a688-2
Number of Results#
You can use the k parameter to set the number of results.

search = GoogleSearchAPIWrapper(k=1)
search.run("python")

'The official home of the Python Programming Language.'

Metadata Results#
Run query through GoogleSearch and return snippet, title, and link metadata.

Snippet: The description of the result.
Title: The title of the result.
Link: The link to the result.

search = GoogleSearchAPIWrapper()
search.results("apples", 5)

[{'snippet': 'Discover the innovative world of Apple and shop everything iPhone, iPad, Apple Watch, Mac, and Apple TV, plus explore accessories, entertainment,\xa0...',
  'title': 'Apple',
  'link': 'https://www.apple.com/'},
 {'snippet': "Jul 10, 2022 ... Whether or not you're up on your apple trivia, no doubt you know how delicious this popular fruit is, and how nutritious. Apples are rich in\xa0...",
  'title': '25 Types of Apples and What to Make With Them - Parade ...',
  'link': 'https://parade.com/1330308/bethlipton/types-of-apples/'},
 {'snippet': 'An apple is an edible fruit produced by an apple tree (Malus domestica). Apple trees are cultivated worldwide and are the most widely grown species in the\xa0...',
  'title': 'Apple - Wikipedia',
  'link': 'https://en.wikipedia.org/wiki/Apple'},
 {'snippet': 'Apples are a popular fruit. They contain antioxidants, vitamins, dietary fiber, and a range of other nutrients. Due to their varied nutrient content,\xa0...',
  'title': 'Apples: Benefits, nutrition, and tips',
https://langchain.readthedocs.io/en/latest/modules/utils/examples/google_search.html
bb782251a688-3
  'link': 'https://www.medicalnewstoday.com/articles/267290'},
 {'snippet': "An apple is a crunchy, bright-colored fruit, one of the most popular in the United States. You've probably heard the age-old saying, “An apple a day keeps\xa0...",
  'title': 'Apples: Nutrition & Health Benefits',
  'link': 'https://www.webmd.com/food-recipes/benefits-apples'}]
https://langchain.readthedocs.io/en/latest/modules/utils/examples/google_search.html
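In practice, a search wrapper like this is often handed to an agent as a tool whose func is the wrapper's run method. A minimal sketch, reusing the Tool and initialize_agent interfaces from the Zapier example above (the tool name and description are just illustrations):

from langchain.agents import initialize_agent, Tool
from langchain.llms import OpenAI
from langchain.utilities import GoogleSearchAPIWrapper

search = GoogleSearchAPIWrapper()
tools = [
    Tool(
        name="Google Search",
        func=search.run,  # the agent passes its query string straight to run()
        description="Useful for answering questions about current events.",
    )
]

agent = initialize_agent(tools, OpenAI(temperature=0), agent="zero-shot-react-description", verbose=True)
agent.run("Who is the current president of France?")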
c06b30a234ba-0
Requests#
The web contains a lot of information that LLMs do not have access to. In order to easily let LLMs interact with that information, we provide a wrapper around the Python Requests module that takes in a URL and fetches data from that URL.

from langchain.utilities import RequestsWrapper

requests = RequestsWrapper()
requests.get("https://www.google.com")
https://langchain.readthedocs.io/en/latest/modules/utils/examples/requests.html
c06b30a234ba-1
'<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head><meta content="Search the world\'s information, including webpages, images, videos and more. Google has many special features to help you find exactly what you\'re looking for." name="description">…' (the raw HTML of the Google homepage, truncated here for brevity)
https://langchain.readthedocs.io/en/latest/modules/utils/examples/requests.html
https://langchain.readthedocs.io/en/latest/modules/utils/examples/requests.html
c06b30a234ba-15
f=google.gbvu,g=document.getElementById("gbv");g&&(g.value=a);f&&window.setTimeout(function(){location.href=f},0)};}).call(this);</script></form><div id="gac_scont"></div><div style="font-size:83%;min-height:3.5em"><br><div id="prm"><style>.szppmdbYutt__middle-slot-promo{font-size:small;margin-bottom:32px}.szppmdbYutt__middle-slot-promo a.ZIeIlb{display:inline-block;text-decoration:none}.szppmdbYutt__middle-slot-promo img{border:none;margin-right:5px;vertical-align:middle}</style><div class="szppmdbYutt__middle-slot-promo" data-ved="0ahUKEwi9zuH2gM39AhWFh-4BHZjODXEQnIcBCAQ"><span>Celebrate </span><a class="NKcBbd" href="https://www.google.com/url?q=https://artsandculture.google.com/project/women-in-culture%3Futm_source%3Dgoogle%26utm_medium%3Dhppromo%26utm_campaign%3Dinternationalwomensday23&amp;source=hpp&amp;id=19034031&amp;ct=3&amp;usg=AOvVaw1Q51Nb9U7JNUznM352o8BF&amp;sa=X&amp;ved=0ahUKEwi9zuH2gM39AhWFh-4BHZjODXEQ8IcBCAU" rel="nofollow">International Women\'s Day</a><span> with Google</span></div></div></div><span id="footer"><div style="font-size:10pt"><div
https://langchain.readthedocs.io/en/latest/modules/utils/examples/requests.html
c06b30a234ba-16
id="footer"><div style="font-size:10pt"><div style="margin:19px auto;text-align:center" id="WqQANb"><a href="/intl/en/ads/">Advertising</a><a href="/services/">Business Solutions</a><a href="/intl/en/about.html">About Google</a></div></div><p style="font-size:8pt;color:#70757a">&copy; 2023 - <a href="/intl/en/policies/privacy/">Privacy</a> - <a href="/intl/en/policies/terms/">Terms</a></p></span></center><script nonce="skA52jTjrFARNMkurZZTjQ">(function(){window.google.cdo={height:757,width:1440};(function(){var a=window.innerWidth,b=window.innerHeight;if(!a||!b){var c=window.document,d="CSS1Compat"==c.compatMode?c.documentElement:c.body;a=d.clientWidth;b=d.clientHeight}a&&b&&(a!=google.cdo.width||b!=google.cdo.height)&&google.log("","","/client_204?&atyp=i&biw="+a+"&bih="+b+"&ei="+google.kEI);}).call(this);})();</script> <script nonce="skA52jTjrFARNMkurZZTjQ">(function(){google.xjs={ck:\'xjs.hp.Y2W3KAJ0Jco.L.X.O\',cs:\'ACT90oEk9pJxm1OOdkVmpGo-yLFc4v5z8w\',excm:[]};})();</script> <script nonce="skA52jTjrFARNMkurZZTjQ">(function(){var
https://langchain.readthedocs.io/en/latest/modules/utils/examples/requests.html
c06b30a234ba-17
nonce="skA52jTjrFARNMkurZZTjQ">(function(){var u=\'/xjs/_/js/k\\x3dxjs.hp.en.ObwAV4EjOBQ.O/am\\x3dAACgEwBAAYAF/d\\x3d1/ed\\x3d1/rs\\x3dACT90oGDUDSLlBIGF3CSmUWoHe0AKqeZ6w/m\\x3dsb_he,d\';var amd=0;\nvar d=this||self,e=function(a){return a};var g;var l=function(a,b){this.g=b===h?a:""};l.prototype.toString=function(){return this.g+""};var h={};\nfunction m(){var a=u;google.lx=function(){p(a);google.lx=function(){}};google.bx||google.lx()}\nfunction p(a){google.timers&&google.timers.load&&google.tick&&google.tick("load","xjsls");var b=document;var c="SCRIPT";"application/xhtml+xml"===b.contentType&&(c=c.toLowerCase());c=b.createElement(c);a=null===a?"null":void 0===a?"undefined":a;if(void 0===g){b=null;var k=d.trustedTypes;if(k&&k.createPolicy){try{b=k.createPolicy("goog#html",{createHTML:e,createScript:e,createScriptURL:e})}catch(q){d.console&&d.console.error(q.message)}g=b}else g=b}a=(b=g)?b.createScriptURL(a):a;a=new l(a,h);c.src=\na instanceof l&&a.constructor===l?a.g:"type_error:TrustedResourceUrl";var
https://langchain.readthedocs.io/en/latest/modules/utils/examples/requests.html
c06b30a234ba-18
l&&a.constructor===l?a.g:"type_error:TrustedResourceUrl";var f,n;(f=(a=null==(n=(f=(c.ownerDocument&&c.ownerDocument.defaultView||window).document).querySelector)?void 0:n.call(f,"script[nonce]"))?a.nonce||a.getAttribute("nonce")||"":"")&&c.setAttribute("nonce",f);document.body.appendChild(c);google.psa=!0};google.xjsu=u;setTimeout(function(){0<amd?google.caft(function(){return m()},amd):m()},0);})();function _DumpException(e){throw e;}\nfunction _F_installCss(c){}\n(function(){google.jl={blt:\'none\',chnk:0,dw:false,dwu:true,emtn:0,end:0,ico:false,ikb:0,ine:false,injs:\'none\',injt:0,injth:0,injv2:false,lls:\'default\',pdt:0,rep:0,snet:true,strt:0,ubm:false,uwp:true};})();(function(){var pmc=\'{\\x22d\\x22:{},\\x22sb_he\\x22:{\\x22agen\\x22:true,\\x22cgen\\x22:true,\\x22client\\x22:\\x22heirloom-hp\\x22,\\x22dh\\x22:true,\\x22ds\\x22:\\x22\\x22,\\x22fl\\x22:true,\\x22host\\x22:\\x22google.com\\x22,\\x22jsonp\\x22:true,\\x22msgs\\x22:{\\x22cibl\\x22:\\x22Clear Search\\x22,\\x22dym\\x22:\\x22Did you
https://langchain.readthedocs.io/en/latest/modules/utils/examples/requests.html
c06b30a234ba-19
Search\\x22,\\x22dym\\x22:\\x22Did you mean:\\x22,\\x22lcky\\x22:\\x22I\\\\u0026#39;m Feeling Lucky\\x22,\\x22lml\\x22:\\x22Learn more\\x22,\\x22psrc\\x22:\\x22This search was removed from your \\\\u003Ca href\\x3d\\\\\\x22/history\\\\\\x22\\\\u003EWeb History\\\\u003C/a\\\\u003E\\x22,\\x22psrl\\x22:\\x22Remove\\x22,\\x22sbit\\x22:\\x22Search by image\\x22,\\x22srch\\x22:\\x22Google Search\\x22},\\x22ovr\\x22:{},\\x22pq\\x22:\\x22\\x22,\\x22rfs\\x22:[],\\x22sbas\\x22:\\x220 3px 8px 0 rgba(0,0,0,0.2),0 0 0 1px rgba(0,0,0,0.08)\\x22,\\x22stok\\x22:\\x222J2TpqBbW29n4YEWhckcWkIgvqM\\x22}}\';google.pmc=JSON.parse(pmc);})();</script> </body></html>'
https://langchain.readthedocs.io/en/latest/modules/utils/examples/requests.html
488be996378e-0
Google Serper API# This notebook goes over how to use the Google Serper component to search the web. First you need to sign up for a free account at serper.dev and get your API key. import os os.environ["SERPER_API_KEY"] = "" from langchain.utilities import GoogleSerperAPIWrapper search = GoogleSerperAPIWrapper() search.run("Obama's first name?") 'Barack Hussein Obama II' As part of a Self Ask With Search Chain# os.environ['OPENAI_API_KEY'] = "" from langchain.utilities import GoogleSerperAPIWrapper from langchain.llms.openai import OpenAI from langchain.agents import initialize_agent, Tool llm = OpenAI(temperature=0) search = GoogleSerperAPIWrapper() tools = [ Tool( name="Intermediate Answer", func=search.run, description="useful for when you need to ask with search" ) ] self_ask_with_search = initialize_agent(tools, llm, agent="self-ask-with-search", verbose=True) self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?") > Entering new AgentExecutor chain... Yes. Follow up: Who is the reigning men's U.S. Open champion? Intermediate answer: Current champions Carlos Alcaraz, 2022 men's singles champion. Follow up: Where is Carlos Alcaraz from? Intermediate answer: El Palmar, Spain So the final answer is: El Palmar, Spain > Finished chain. 'El Palmar, Spain'
https://langchain.readthedocs.io/en/latest/modules/utils/examples/google_serper.html
7e239ed037eb-0
Wolfram Alpha# This notebook goes over how to use the Wolfram Alpha component. First, you need to set up your Wolfram Alpha developer account and get your APP ID: Go to Wolfram Alpha and sign up for a developer account here Create an app and get your APP ID pip install wolframalpha Then we will need to set some environment variables: Save your APP ID into the WOLFRAM_ALPHA_APPID env variable import os os.environ["WOLFRAM_ALPHA_APPID"] = "" from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper wolfram = WolframAlphaAPIWrapper() wolfram.run("What is 2x+5 = -3x + 7?") 'x = 2/5'
https://langchain.readthedocs.io/en/latest/modules/utils/examples/wolfram_alpha.html
0fb8d2659d01-0
Bing Search# This notebook goes over how to use the Bing Search component. First, you need to set up the proper API keys and environment variables. To set it up, follow the instructions found here. Then we will need to set some environment variables. import os os.environ["BING_SUBSCRIPTION_KEY"] = "" os.environ["BING_SEARCH_URL"] = "" from langchain.utilities import BingSearchAPIWrapper search = BingSearchAPIWrapper() search.run("python")
https://langchain.readthedocs.io/en/latest/modules/utils/examples/bing_search.html
0fb8d2659d01-1
'Thanks to the flexibility of <b>Python</b> and the powerful ecosystem of packages, the Azure CLI supports features such as autocompletion (in shells that support it), persistent credentials, JMESPath result parsing, lazy initialization, network-less unit tests, and more. Building an open-source and cross-platform Azure CLI with <b>Python</b> by Dan Taylor. <b>Python</b> releases by version number: Release version Release date Click for more. <b>Python</b> 3.11.1 Dec. 6, 2022 Download Release Notes. <b>Python</b> 3.10.9 Dec. 6, 2022 Download Release Notes. <b>Python</b> 3.9.16 Dec. 6, 2022 Download Release Notes. <b>Python</b> 3.8.16 Dec. 6, 2022 Download Release Notes. <b>Python</b> 3.7.16 Dec. 6, 2022 Download Release Notes. In this lesson, we will look at the += operator in <b>Python</b> and see how it works with several simple examples.. The operator ‘+=’ is a shorthand for the addition assignment operator.It adds two values and assigns the sum
https://langchain.readthedocs.io/en/latest/modules/utils/examples/bing_search.html
0fb8d2659d01-2
to a variable (left operand). W3Schools offers free online tutorials, references and exercises in all the major languages of the web. Covering popular subjects like HTML, CSS, JavaScript, <b>Python</b>, SQL, Java, and many, many more. This tutorial introduces the reader informally to the basic concepts and features of the <b>Python</b> language and system. It helps to have a <b>Python</b> interpreter handy for hands-on experience, but all examples are self-contained, so the tutorial can be read off-line as well. For a description of standard objects and modules, see The <b>Python</b> Standard ... <b>Python</b> is a general-purpose, versatile, and powerful programming language. It&#39;s a great first language because <b>Python</b> code is concise and easy to read. Whatever you want to do, <b>python</b> can do it. From web development to machine learning to data science, <b>Python</b> is the language for you. To install <b>Python</b> using the Microsoft Store:
https://langchain.readthedocs.io/en/latest/modules/utils/examples/bing_search.html
0fb8d2659d01-3
Go to your Start menu (lower left Windows icon), type &quot;Microsoft Store&quot;, select the link to open the store. Once the store is open, select Search from the upper-right menu and enter &quot;<b>Python</b>&quot;. Select which version of <b>Python</b> you would like to use from the results under Apps. Under the “<b>Python</b> Releases for Mac OS X” heading, click the link for the Latest <b>Python</b> 3 Release - <b>Python</b> 3.x.x. As of this writing, the latest version was <b>Python</b> 3.8.4. Scroll to the bottom and click macOS 64-bit installer to start the download. When the installer is finished downloading, move on to the next step. Step 2: Run the Installer'
https://langchain.readthedocs.io/en/latest/modules/utils/examples/bing_search.html
0fb8d2659d01-4
Number of results# You can use the k parameter to set the number of results. search = BingSearchAPIWrapper(k=1) search.run("python") 'Thanks to the flexibility of <b>Python</b> and the powerful ecosystem of packages, the Azure CLI supports features such as autocompletion (in shells that support it), persistent credentials, JMESPath result parsing, lazy initialization, network-less unit tests, and more. Building an open-source and cross-platform Azure CLI with <b>Python</b> by Dan Taylor.' Metadata Results# Run query through BingSearch and return snippet, title, and link metadata. Snippet: The description of the result. Title: The title of the result. Link: The link to the result. search = BingSearchAPIWrapper() search.results("apples", 5) [{'snippet': 'Lady Alice. Pink Lady <b>apples</b> aren’t the only lady in the apple family. Lady Alice <b>apples</b> were discovered growing, thanks to bees pollinating, in Washington. They are smaller and slightly more stout in appearance than other varieties. Their skin color appears to have red and yellow stripes running from stem to butt.', 'title': '25 Types of Apples - Jessica Gavin', 'link': 'https://www.jessicagavin.com/types-of-apples/'}, {'snippet': '<b>Apples</b> can do a lot for you, thanks to plant chemicals called flavonoids. And they have pectin, a fiber that breaks down in your gut. If you take off the apple’s skin before eating it, you won ...', 'title': 'Apples: Nutrition &amp; Health Benefits - WebMD', 'link': 'https://www.webmd.com/food-recipes/benefits-apples'},
https://langchain.readthedocs.io/en/latest/modules/utils/examples/bing_search.html
0fb8d2659d01-5
{'snippet': '<b>Apples</b> boast many vitamins and minerals, though not in high amounts. However, <b>apples</b> are usually a good source of vitamin C. Vitamin C. Also called ascorbic acid, this vitamin is a common ...', 'title': 'Apples 101: Nutrition Facts and Health Benefits', 'link': 'https://www.healthline.com/nutrition/foods/apples'}, {'snippet': 'Weight management. The fibers in <b>apples</b> can slow digestion, helping one to feel greater satisfaction after eating. After following three large prospective cohorts of 133,468 men and women for 24 years, researchers found that higher intakes of fiber-rich fruits with a low glycemic load, particularly <b>apples</b> and pears, were associated with the least amount of weight gain over time.', 'title': 'Apples | The Nutrition Source | Harvard T.H. Chan School of Public Health', 'link': 'https://www.hsph.harvard.edu/nutritionsource/food-features/apples/'}]
https://langchain.readthedocs.io/en/latest/modules/utils/examples/bing_search.html
192acce45352-0
IFTTT WebHooks# This notebook shows how to use IFTTT Webhooks. From https://github.com/SidU/teams-langchain-js/wiki/Connecting-IFTTT-Services. Creating a webhook# Go to https://ifttt.com/create Configuring the “If This”# Click on the “If This” button in the IFTTT interface. Search for “Webhooks” in the search bar. Choose the first option for “Receive a web request with a JSON payload.” Choose an Event Name that is specific to the service you plan to connect to. This will make it easier for you to manage the webhook URL. For example, if you’re connecting to Spotify, you could use “Spotify” as your Event Name. Click the “Create Trigger” button to save your settings and create your webhook. Configuring the “Then That”# Tap on the “Then That” button in the IFTTT interface. Search for the service you want to connect, such as Spotify. Choose an action from the service, such as “Add track to a playlist”. Configure the action by specifying the necessary details, such as the playlist name, e.g., “Songs from AI”. Reference the JSON Payload received by the Webhook in your action. For the Spotify scenario, choose “{{JsonPayload}}” as your search query. Tap the “Create Action” button to save your action settings. Once you have finished configuring your action, click the “Finish” button to complete the setup. Congratulations! You have successfully connected the Webhook to the desired
https://langchain.readthedocs.io/en/latest/modules/utils/examples/ifttt.html
192acce45352-1
service, and you’re ready to start receiving data and triggering actions 🎉 Finishing up# To get your webhook URL go to https://ifttt.com/maker_webhooks/settings Copy the IFTTT key value from there. The URL is of the form https://maker.ifttt.com/use/YOUR_IFTTT_KEY. Grab the YOUR_IFTTT_KEY value. from langchain.tools.ifttt import IFTTTWebhook import os key = os.environ["IFTTTKey"] url = f"https://maker.ifttt.com/trigger/spotify/json/with/key/{key}" tool = IFTTTWebhook(name="Spotify", description="Add a song to spotify playlist", url=url) tool.run("taylor swift") "Congratulations! You've fired the spotify JSON event"
https://langchain.readthedocs.io/en/latest/modules/utils/examples/ifttt.html
5b700ba48a4e-0
SerpAPI# This notebook goes over how to use the SerpAPI component to search the web. from langchain.utilities import SerpAPIWrapper search = SerpAPIWrapper() search.run("Obama's first name?") 'Barack Hussein Obama II' Custom Parameters# You can also customize the SerpAPI wrapper with arbitrary parameters. For example, in the example below we will use bing instead of google. params = { "engine": "bing", "gl": "us", "hl": "en", } search = SerpAPIWrapper(params=params) search.run("Obama's first name?") 'Barack Hussein Obama II is an American politician who served as the 44th president of the United States from 2009 to 2017. A member of the Democratic Party, Obama was the first African-American presi…New content will be added above the current area of focus upon selectionBarack Hussein Obama II is an American politician who served as the 44th president of the United States from 2009 to 2017. A member of the Democratic Party, Obama was the first African-American president of the United States. He previously served as a U.S. senator from Illinois from 2005 to 2008 and as an Illinois state senator from 1997 to 2004, and previously worked as a civil rights lawyer before entering politics.Wikipediabarackobama.com'
https://langchain.readthedocs.io/en/latest/modules/utils/examples/serpapi.html
260d71273e1f-0
Key Concepts# Prompts# A prompt is the input to a language model. It is a string of text that is used to generate a response from the language model. Prompt Templates# PromptTemplates are a way to create prompts in a reproducible way. They contain a template string and a set of input variables. The template string can be formatted with the input variables to generate a prompt. The template string often contains instructions to the language model, a few shot examples, and a question to the language model. PromptTemplates generically have a format method that takes in variables and returns a formatted string. The most simple implementation of this is to have a template string with some variables in it, and then format it with the incoming variables. More complex iterations dynamically construct the template string from few shot examples, etc. To learn more about PromptTemplates, see Prompt Templates. As an example, consider the following template string: """ Predict the capital of a country. Country: {country} Capital: """ Input Variables# Input variables are the variables that are used to fill in the template string. In the example above, the input variable is country. Given an input variable, the PromptTemplate can generate a prompt by filling in the template string with the input variable. For example, if the input variable is United States, the template string can be formatted to generate the following prompt: """ Predict the capital of a country. Country: United States Capital: """
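As a minimal sketch of that format method in action (using the PromptTemplate class covered in Getting Started; the variable name capital_prompt is just illustrative):
from langchain import PromptTemplate
capital_prompt = PromptTemplate(
    input_variables=["country"],
    template="Predict the capital of a country.\nCountry: {country}\nCapital:",
)
print(capital_prompt.format(country="United States"))
# -> Predict the capital of a country.
# -> Country: United States
# -> Capital: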
https://langchain.readthedocs.io/en/latest/modules/prompts/key_concepts.html
260d71273e1f-1
Few Shot Examples# Few shot examples refer to in-context examples that are provided to a language model as part of a prompt. The examples can be used to help the language model understand the context of the prompt, and as a result generate a better response. Few shot examples can contain both positive and negative examples about the expected response. Below, we list out some few shot examples that may be relevant for the task of predicting the capital of a country. Country: United States Capital: Washington, D.C. Country: Canada Capital: Ottawa To learn more about how to provide few shot examples, see Few Shot Examples. Example selection# If there are multiple examples that are relevant to a prompt, it is important to select the most relevant examples. Generally, the quality of the response from the LLM can be significantly improved by selecting the most relevant examples. This is because the language model will be able to better understand the context of the prompt, and also potentially learn failure modes to avoid. To help the user with selecting the most relevant examples, we provide example selectors that select the most relevant based on different criteria, such as length, semantic similarity, etc. The example selector takes in a list of examples and returns a list of selected examples, formatted as a string. The user can also provide their own example selector. To learn more about example selectors, see Example Selectors. Serialization# To make it easy to share PromptTemplates, we provide a serialize method that returns a JSON string. The JSON string can be saved to a file, and then loaded back into a PromptTemplate using the deserialize method. This allows users to share PromptTemplates with others, and also to save them for later use. To learn more about serialization, see Serialization.
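Concretely, the round trip looks roughly like the sketch below. Note this is hedged: the entry points shown are save and load_prompt, as used in the Prompt Serialization guide, rather than literal serialize/deserialize methods.
from langchain import PromptTemplate
from langchain.prompts import load_prompt

prompt = PromptTemplate(
    input_variables=["country"],
    template="Predict the capital of a country.\nCountry: {country}\nCapital:",
)
prompt.save("capital_prompt.json")  # write the template to disk as JSON
reloaded = load_prompt("capital_prompt.json")  # load it back into a PromptTemplate
print(reloaded.format(country="Canada"))
# -> Predict the capital of a country.
# -> Country: Canada
# -> Capital: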
https://langchain.readthedocs.io/en/latest/modules/prompts/key_concepts.html
e268ed4bb662-0
How-To Guides# If you’re new to the library, you may want to start with the Quickstart. The user guide here shows more advanced workflows and how to use the library in different ways. Custom Prompt Template: How to create and use a custom PromptTemplate, the logic that decides how input variables get formatted into a prompt. Custom Example Selector: How to create and use a custom ExampleSelector (the class responsible for choosing which examples to use in a prompt). Few Shot Prompt Templates: How to include examples in the prompt. Example Selectors: How to use different types of example selectors. Prompt Serialization: A walkthrough of how to serialize prompts to and from disk. Few Shot Prompt Examples: Examples of Few Shot Prompt Templates. Partial Prompt Template: How to partial Prompt Templates.
https://langchain.readthedocs.io/en/latest/modules/prompts/how_to_guides.html
ad8529d318c1-0
Getting Started# In this tutorial, we will learn about: what a prompt template is, and why it is needed, how to create a prompt template, how to pass few shot examples to a prompt template, how to select examples for a prompt template. What is a prompt template?# A prompt template refers to a reproducible way to generate a prompt. It contains a text string (“the template”) that can take in a set of parameters from the end user and generate a prompt. The prompt template may contain: instructions to the language model, a set of few shot examples to help the language model generate a better response, a question to the language model. The following code snippet contains an example of a prompt template: from langchain import PromptTemplate template = """ I want you to act as a naming consultant for new companies. Here are some examples of good company names: - search engine, Google - social media, Facebook - video sharing, YouTube The name should be short, catchy and easy to remember. What is a good name for a company that makes {product}? """ prompt = PromptTemplate( input_variables=["product"], template=template, ) Create a prompt template# You can create simple hardcoded prompts using the PromptTemplate class. Prompt templates can take any number of input variables, and can be formatted to generate a prompt. from langchain import PromptTemplate # An example prompt with no input variables no_input_prompt = PromptTemplate(input_variables=[], template="Tell me a joke.") no_input_prompt.format() # -> "Tell me a joke."
https://langchain.readthedocs.io/en/latest/modules/prompts/getting_started.html
ad8529d318c1-1
# An example prompt with one input variable one_input_prompt = PromptTemplate(input_variables=["adjective"], template="Tell me a {adjective} joke.") one_input_prompt.format(adjective="funny") # -> "Tell me a funny joke." # An example prompt with multiple input variables multiple_input_prompt = PromptTemplate( input_variables=["adjective", "content"], template="Tell me a {adjective} joke about {content}." ) multiple_input_prompt.format(adjective="funny", content="chickens") # -> "Tell me a funny joke about chickens." You can create custom prompt templates that format the prompt in any way you want. For more information, see Custom Prompt Templates. Note Currently, the template should be formatted as a Python f-string. We also support Jinja2 templates (see Using Jinja templates). In the future, we will support more templating languages such as Mako. Load a prompt template from LangChainHub# LangChainHub contains a collection of prompts which can be loaded directly via LangChain. from langchain.prompts import load_prompt prompt = load_prompt("lc://prompts/conversation/prompt.json") prompt.format(history="", input="What is 1 + 1?") You can read more about LangChainHub and the prompts available with it here. Pass few shot examples to a prompt template# Few shot examples are a set of examples that can be used to help the language model generate a better response. To generate a prompt with few shot examples, you can use the FewShotPromptTemplate. This class takes in a PromptTemplate and a list of few shot examples. It then formats the prompt template with the few shot examples. In this example, we’ll create a prompt to generate word antonyms.
https://langchain.readthedocs.io/en/latest/modules/prompts/getting_started.html
ad8529d318c1-2
from langchain import PromptTemplate, FewShotPromptTemplate # First, create the list of few shot examples. examples = [ {"word": "happy", "antonym": "sad"}, {"word": "tall", "antonym": "short"}, ] # Next, we specify the template to format the examples we have provided. # We use the `PromptTemplate` class for this. example_formatter_template = """ Word: {word} Antonym: {antonym}\n """ example_prompt = PromptTemplate( input_variables=["word", "antonym"], template=example_formatter_template, ) # Finally, we create the `FewShotPromptTemplate` object. few_shot_prompt = FewShotPromptTemplate( # These are the examples we want to insert into the prompt. examples=examples, # This is how we want to format the examples when we insert them into the prompt. example_prompt=example_prompt, # The prefix is some text that goes before the examples in the prompt. # Usually, this consists of instructions. prefix="Give the antonym of every input", # The suffix is some text that goes after the examples in the prompt. # Usually, this is where the user input will go. suffix="Word: {input}\nAntonym:", # The input variables are the variables that the overall prompt expects. input_variables=["input"], # The example_separator is the string we will use to join the prefix, examples, and suffix together with. example_separator="\n\n", ) # We can now generate a prompt using the `format` method. print(few_shot_prompt.format(input="big"))
https://langchain.readthedocs.io/en/latest/modules/prompts/getting_started.html
ad8529d318c1-3
# -> Give the antonym of every input # -> # -> Word: happy # -> Antonym: sad # -> # -> Word: tall # -> Antonym: short # -> # -> Word: big # -> Antonym: Select examples for a prompt template# If you have a large number of examples, you can use the ExampleSelector to select a subset of examples that will be most informative for the Language Model. This will help you generate a prompt that is more likely to generate a good response. Below, we’ll use the LengthBasedExampleSelector, which selects examples based on the length of the input. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more. We’ll continue with the example from the previous section, but this time we’ll use the LengthBasedExampleSelector to select the examples. from langchain.prompts.example_selector import LengthBasedExampleSelector # These are a lot of examples of a pretend task of creating antonyms. examples = [ {"word": "happy", "antonym": "sad"}, {"word": "tall", "antonym": "short"}, {"word": "energetic", "antonym": "lethargic"}, {"word": "sunny", "antonym": "gloomy"}, {"word": "windy", "antonym": "calm"}, ] # We'll use the `LengthBasedExampleSelector` to select the examples. example_selector = LengthBasedExampleSelector( # These are the examples it has available to choose from. examples=examples,
https://langchain.readthedocs.io/en/latest/modules/prompts/getting_started.html
ad8529d318c1-4
# This is the PromptTemplate being used to format the examples. example_prompt=example_prompt, # This is the maximum length that the formatted examples should be. # Length is measured by the get_text_length function below. max_length=25, ) # We can now use the `example_selector` to create a `FewShotPromptTemplate`. dynamic_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Word: {input}\nAntonym:", input_variables=["input"], example_separator="\n\n", ) # We can now generate a prompt using the `format` method. print(dynamic_prompt.format(input="big")) # -> Give the antonym of every input # -> # -> Word: happy # -> Antonym: sad # -> # -> Word: tall # -> Antonym: short # -> # -> Word: energetic # -> Antonym: lethargic # -> # -> Word: sunny # -> Antonym: gloomy # -> # -> Word: windy # -> Antonym: calm # -> # -> Word: big # -> Antonym: In contrast, if we provide a very long input, the LengthBasedExampleSelector will select fewer examples to include in the prompt. long_string = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else" print(dynamic_prompt.format(input=long_string)) # -> Give the antonym of every input # -> Word: happy # -> Antonym: sad
https://langchain.readthedocs.io/en/latest/modules/prompts/getting_started.html
ad8529d318c1-5
# -> # -> Word: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else # -> Antonym: LangChain comes with a few example selectors that you can use. For more details on how to use them, see Example Selectors. You can create custom example selectors that select examples based on any criteria you want. For more details on how to do this, see Creating a custom example selector.
https://langchain.readthedocs.io/en/latest/modules/prompts/getting_started.html
4981211d3af7-0
Prompt Serialization# It is often preferable to store prompts not as Python code but as files. This can make it easy to share, store, and version prompts. This notebook covers how to do that in LangChain, walking through all the different types of prompts and the different serialization options. At a high level, the following design principles are applied to serialization: Both JSON and YAML are supported. We want to support serialization methods that are human readable on disk, and YAML and JSON are two of the most popular methods for that. Note that this rule applies to prompts. For other assets, like Examples, different serialization methods may be supported. We support specifying everything in one file, or storing different components (templates, examples, etc) in different files and referencing them. For some cases, storing everything in one file makes the most sense, but for others it is preferable to split up some of the assets (long templates, large examples, reusable components). LangChain supports both. There is also a single entry point to load prompts from disk, making it easy to load any type of prompt. # All prompts are loaded through the `load_prompt` function. from langchain.prompts import load_prompt PromptTemplate# This section covers examples for loading a PromptTemplate. Loading from YAML# This shows an example of loading a PromptTemplate from YAML. !cat simple_prompt.yaml _type: prompt input_variables: ["adjective", "content"] template: Tell me a {adjective} joke about {content}. prompt = load_prompt("simple_prompt.yaml")
https://langchain.readthedocs.io/en/latest/modules/prompts/examples/prompt_serialization.html
4981211d3af7-1
print(prompt.format(adjective="funny", content="chickens")) Tell me a funny joke about chickens. Loading from JSON# This shows an example of loading a PromptTemplate from JSON. !cat simple_prompt.json { "_type": "prompt", "input_variables": ["adjective", "content"], "template": "Tell me a {adjective} joke about {content}." } prompt = load_prompt("simple_prompt.json") print(prompt.format(adjective="funny", content="chickens")) Tell me a funny joke about chickens. Loading Template from a File# This shows an example of storing the template in a separate file and then referencing it in the config. Notice that the key changes from template to template_path. !cat simple_template.txt Tell me a {adjective} joke about {content}. !cat simple_prompt_with_template_file.json { "_type": "prompt", "input_variables": ["adjective", "content"], "template_path": "simple_template.txt" } prompt = load_prompt("simple_prompt_with_template_file.json") print(prompt.format(adjective="funny", content="chickens")) Tell me a funny joke about chickens. FewShotPromptTemplate# This section covers examples for loading few shot prompt templates. Examples# This shows an example of what examples stored as json might look like. !cat examples.json [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"} ] And here is what the same examples stored as yaml might look like. !cat examples.yaml - input: happy output: sad - input: tall output: short Loading from YAML#
https://langchain.readthedocs.io/en/latest/modules/prompts/examples/prompt_serialization.html
4981211d3af7-2
This shows an example of loading a few shot example from YAML. !cat few_shot_prompt.yaml _type: few_shot input_variables: ["adjective"] prefix: Write antonyms for the following words. example_prompt: _type: prompt input_variables: ["input", "output"] template: "Input: {input}\nOutput: {output}" examples: examples.json suffix: "Input: {adjective}\nOutput:" prompt = load_prompt("few_shot_prompt.yaml") print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output: The same would work if you loaded examples from the yaml file. !cat few_shot_prompt_yaml_examples.yaml _type: few_shot input_variables: ["adjective"] prefix: Write antonyms for the following words. example_prompt: _type: prompt input_variables: ["input", "output"] template: "Input: {input}\nOutput: {output}" examples: examples.yaml suffix: "Input: {adjective}\nOutput:" prompt = load_prompt("few_shot_prompt_yaml_examples.yaml") print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output: Loading from JSON# This shows an example of loading a few shot example from JSON. !cat few_shot_prompt.json { "_type": "few_shot",
https://langchain.readthedocs.io/en/latest/modules/prompts/examples/prompt_serialization.html
4981211d3af7-3
"input_variables": ["adjective"], "prefix": "Write antonyms for the following words.", "example_prompt": { "_type": "prompt", "input_variables": ["input", "output"], "template": "Input: {input}\nOutput: {output}" }, "examples": "examples.json", "suffix": "Input: {adjective}\nOutput:" } prompt = load_prompt("few_shot_prompt.json") print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output: Examples in the Config# This shows an example of referencing the examples directly in the config. !cat few_shot_prompt_examples_in.json { "_type": "few_shot", "input_variables": ["adjective"], "prefix": "Write antonyms for the following words.", "example_prompt": { "_type": "prompt", "input_variables": ["input", "output"], "template": "Input: {input}\nOutput: {output}" }, "examples": [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"} ], "suffix": "Input: {adjective}\nOutput:" } prompt = load_prompt("few_shot_prompt_examples_in.json") print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output: Example Prompt from a File#
https://langchain.readthedocs.io/en/latest/modules/prompts/examples/prompt_serialization.html
4981211d3af7-4
This shows an example of loading the PromptTemplate that is used to format the examples from a separate file. Note that the key changes from example_prompt to example_prompt_path. !cat example_prompt.json { "_type": "prompt", "input_variables": ["input", "output"], "template": "Input: {input}\nOutput: {output}" } !cat few_shot_prompt_example_prompt.json { "_type": "few_shot", "input_variables": ["adjective"], "prefix": "Write antonyms for the following words.", "example_prompt_path": "example_prompt.json", "examples": "examples.json", "suffix": "Input: {adjective}\nOutput:" } prompt = load_prompt("few_shot_prompt_example_prompt.json") print(prompt.format(adjective="funny")) Write antonyms for the following words. Input: happy Output: sad Input: tall Output: short Input: funny Output:
https://langchain.readthedocs.io/en/latest/modules/prompts/examples/prompt_serialization.html
2d2ff4b69a87-0
Partial Prompt Templates# A prompt template is a class with a .format method which takes in a key-value map and returns a string (a prompt) to pass to the language model. Like other methods, it can make sense to “partial” a prompt template, e.g. pass in a subset of the required values, so as to create a new prompt template which expects only the remaining subset of values. LangChain supports this in two ways: we allow for partially formatted prompts (1) with string values, (2) with functions that return string values. These two different ways support different use cases. In the documentation below we go over the motivations for both use cases as well as how to do it in LangChain. Partial With Strings# One common use case for wanting to partial a prompt template is if you get some of the variables before others. For example, suppose you have a prompt template that requires two variables, foo and bar. If you get the foo value early on in the chain, but the bar value later, it can be annoying to wait until you have both variables in the same place to pass them to the prompt template. Instead, you can partial the prompt template with the foo value, and then pass the partialed prompt template along and just use that. Below is an example of doing this: from langchain.prompts import PromptTemplate prompt = PromptTemplate(template="{foo}{bar}", input_variables=["foo", "bar"]) partial_prompt = prompt.partial(foo="foo"); print(partial_prompt.format(bar="baz")) foobaz You can also just initialize the prompt with the partialed variables. prompt = PromptTemplate(template="{foo}{bar}", input_variables=["bar"], partial_variables={"foo": "foo"}) print(prompt.format(bar="baz")) foobaz Partial With Functions#
https://langchain.readthedocs.io/en/latest/modules/prompts/examples/partial.html
2d2ff4b69a87-1
The other common use is to partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can’t hard code it in the prompt, and passing it along with the other input variables is a bit annoying. In this case, it’s very handy to be able to partial the prompt with a function that always returns the current date. from datetime import datetime def _get_datetime(): now = datetime.now() return now.strftime("%m/%d/%Y, %H:%M:%S") prompt = PromptTemplate( template="Tell me a {adjective} joke about the day {date}", input_variables=["adjective", "date"] ); partial_prompt = prompt.partial(date=_get_datetime) print(partial_prompt.format(adjective="funny")) Tell me a funny joke about the day 02/27/2023, 22:15:16 You can also just initialize the prompt with the partialed variables, which often makes more sense in this workflow. prompt = PromptTemplate( template="Tell me a {adjective} joke about the day {date}", input_variables=["adjective"], partial_variables={"date": _get_datetime} ); print(prompt.format(adjective="funny")) Tell me a funny joke about the day 02/27/2023, 22:15:16
https://langchain.readthedocs.io/en/latest/modules/prompts/examples/partial.html
eab1382e11f7-0
Example Selectors# If you have a large number of examples, you may need to select which ones to include in the prompt. The ExampleSelector is the class responsible for doing so. The base interface is defined as below. class BaseExampleSelector(ABC): """Interface for selecting examples to include in prompts.""" @abstractmethod def select_examples(self, input_variables: Dict[str, str]) -> List[dict]: """Select which examples to use based on the inputs.""" The only method it needs to expose is a select_examples method. This takes in the input variables and then returns a list of examples. It is up to each specific implementation as to how those examples are selected. Let’s take a look at some below. from langchain.prompts import FewShotPromptTemplate LengthBased ExampleSelector# This ExampleSelector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more. from langchain.prompts import PromptTemplate from langchain.prompts.example_selector import LengthBasedExampleSelector # These are a lot of examples of a pretend task of creating antonyms. examples = [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}, {"input": "energetic", "output": "lethargic"}, {"input": "sunny", "output": "gloomy"}, {"input": "windy", "output": "calm"}, ] example_prompt = PromptTemplate(
https://langchain.readthedocs.io/en/latest/modules/prompts/examples/example_selectors.html
eab1382e11f7-1
input_variables=["input", "output"], template="Input: {input}\nOutput: {output}", ) example_selector = LengthBasedExampleSelector( # These are the examples it has available to choose from. examples=examples, # This is the PromptTemplate being used to format the examples. example_prompt=example_prompt, # This is the maximum length that the formatted examples should be. # Length is measured by the get_text_length function below. max_length=25, # This is the function used to get the length of a string, which is used # to determine which examples to include. It is commented out because # it is provided as a default value if none is specified. # get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x)) ) dynamic_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"], ) # An example with small input, so it selects all examples. print(dynamic_prompt.format(adjective="big")) Give the antonym of every input Input: happy Output: sad Input: tall Output: short Input: energetic Output: lethargic Input: sunny Output: gloomy Input: windy Output: calm Input: big Output: # An example with long input, so it selects only one example. long_string = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else"
https://langchain.readthedocs.io/en/latest/modules/prompts/examples/example_selectors.html
eab1382e11f7-2
print(dynamic_prompt.format(adjective=long_string)) Give the antonym of every input Input: happy Output: sad Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else Output: # You can add an example to an example selector as well. new_example = {"input": "big", "output": "small"} dynamic_prompt.example_selector.add_example(new_example) print(dynamic_prompt.format(adjective="enthusiastic")) Give the antonym of every input Input: happy Output: sad Input: tall Output: short Input: energetic Output: lethargic Input: sunny Output: gloomy Input: windy Output: calm Input: big Output: small Input: enthusiastic Output: Similarity ExampleSelector# The SemanticSimilarityExampleSelector selects examples based on which examples are most similar to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs. from langchain.prompts.example_selector import SemanticSimilarityExampleSelector from langchain.vectorstores import Chroma from langchain.embeddings import OpenAIEmbeddings example_selector = SemanticSimilarityExampleSelector.from_examples( # This is the list of examples available to select from. examples, # This is the embedding class used to produce embeddings which are used to measure semantic similarity. OpenAIEmbeddings(), # This is the VectorStore class that is used to store the embeddings and do a similarity search over. Chroma, # This is the number of examples to produce. k=1 ) similar_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt,
https://langchain.readthedocs.io/en/latest/modules/prompts/examples/example_selectors.html
eab1382e11f7-3
prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"], ) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. # Input is a feeling, so should select the happy/sad example print(similar_prompt.format(adjective="worried")) Give the antonym of every input Input: happy Output: sad Input: worried Output: # Input is a measurement, so should select the tall/short example print(similar_prompt.format(adjective="fat")) Give the antonym of every input Input: happy Output: sad Input: fat Output: # You can add new examples to the SemanticSimilarityExampleSelector as well similar_prompt.example_selector.add_example({"input": "enthusiastic", "output": "apathetic"}) print(similar_prompt.format(adjective="joyful")) Give the antonym of every input Input: happy Output: sad Input: joyful Output: Maximal Marginal Relevance ExampleSelector# The MaxMarginalRelevanceExampleSelector selects examples based on a combination of which examples are most similar to the inputs, while also optimizing for diversity. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs, and then iteratively adding them while penalizing them for closeness to already selected examples. from langchain.prompts.example_selector import MaxMarginalRelevanceExampleSelector from langchain.vectorstores import FAISS example_selector = MaxMarginalRelevanceExampleSelector.from_examples( # This is the list of examples available to select from. examples,
https://langchain.readthedocs.io/en/latest/modules/prompts/examples/example_selectors.html
eab1382e11f7-4
# This is the embedding class used to produce embeddings which are used to measure semantic similarity. OpenAIEmbeddings(), # This is the VectorStore class that is used to store the embeddings and do a similarity search over. FAISS, # This is the number of examples to produce. k=2 ) mmr_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"], ) # Input is a feeling, so should select the happy/sad example as the first one print(mmr_prompt.format(adjective="worried")) Give the antonym of every input Input: happy Output: sad Input: windy Output: calm Input: worried Output: # Let's compare this to what we would just get if we went solely off of similarity similar_prompt.example_selector.k = 2 print(similar_prompt.format(adjective="worried")) Give the antonym of every input Input: enthusiastic Output: apathetic Input: worried Output: NGram Overlap ExampleSelector# The NGramOverlapExampleSelector selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive.
NGram Overlap ExampleSelector#

The NGramOverlapExampleSelector selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive.

The selector allows a threshold score to be set. Examples with an ngram overlap score less than or equal to the threshold are excluded. The threshold is set to -1.0 by default, so no examples are excluded, only reordered. Setting the threshold to 0.0 will exclude examples that have no ngram overlap with the input.

from langchain.prompts import PromptTemplate
from langchain.prompts.example_selector.ngram_overlap import NGramOverlapExampleSelector

# These are examples of a fictional translation task.
examples = [
    {"input": "See Spot run.", "output": "Ver correr a Spot."},
    {"input": "My dog barks.", "output": "Mi perro ladra."},
    {"input": "Spot can run.", "output": "Spot puede correr."},
]

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)

example_selector = NGramOverlapExampleSelector(
    # These are the examples it has available to choose from.
    examples=examples,
    # This is the PromptTemplate being used to format the examples.
    example_prompt=example_prompt,
    # This is the threshold at which examples are excluded.
    # It is set to -1.0 by default.
    threshold=-1.0,
    # For a negative threshold:
    #   the selector sorts examples by ngram overlap score and excludes none.
    # For a threshold greater than 1.0:
    #   the selector excludes all examples and returns an empty list.
    # For a threshold equal to 0.0:
    #   the selector sorts examples by ngram overlap score
    #   and excludes those with no ngram overlap with the input.
)
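To make the overlap score concrete, here is a rough sketch of a sentence-level ngram overlap: the fraction of the input's ngrams that also appear in an example. LangChain's exact scoring differs in detail, but the intuition is the same:

def ngrams(words, n):
    # All contiguous n-length word windows in the sentence.
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(input_text: str, example_text: str, n: int = 1) -> float:
    # Fraction of the input's ngrams that also occur in the example.
    input_grams = ngrams(input_text.split(), n)
    example_grams = ngrams(example_text.split(), n)
    if not input_grams:
        return 0.0
    return len(input_grams & example_grams) / len(input_grams)

print(overlap_score("Spot can run fast.", "Spot can run."))  # high overlap
print(overlap_score("Spot can run fast.", "My dog barks."))  # 0.0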
dynamic_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the Spanish translation of every input",
    suffix="Input: {sentence}\nOutput:",
    input_variables=["sentence"],
)

# An example input with large ngram overlap with "Spot can run."
# and no overlap with "My dog barks."
print(dynamic_prompt.format(sentence="Spot can run fast."))

Give the Spanish translation of every input

Input: Spot can run.
Output: Spot puede correr.

Input: See Spot run.
Output: Ver correr a Spot.

Input: My dog barks.
Output: Mi perro ladra.

Input: Spot can run fast.
Output:

# You can add examples to the NGramOverlapExampleSelector as well.
new_example = {"input": "Spot plays fetch.", "output": "Spot juega a buscar."}
example_selector.add_example(new_example)
print(dynamic_prompt.format(sentence="Spot can run fast."))

Give the Spanish translation of every input

Input: Spot can run.
Output: Spot puede correr.

Input: See Spot run.
Output: Ver correr a Spot.

Input: Spot plays fetch.
Output: Spot juega a buscar.

Input: My dog barks.
Output: Mi perro ladra.

Input: Spot can run fast.
Output:
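The selector can also be queried directly, without going through the prompt template, which is handy for checking the ordering while tuning the threshold. A small sketch; the dictionary key mirrors the input_variables above:

# Inspect which examples the selector would pick, and in what order,
# for a given input.
for example in example_selector.select_examples({"sentence": "Spot can run fast."}):
    print(example["input"])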
# Since "My dog barks." has no ngram overlaps with "Spot can run fast." # it is excluded. example_selector.threshold=0.0 print(dynamic_prompt.format(sentence="Spot can run fast.")) Give the Spanish translation of every input Input: Spot can run. Output: Spot puede correr. Input: See Spot run. Output: Ver correr a Spot. Input: Spot plays fetch. Output: Spot juega a buscar. Input: Spot can run fast. Output: # Setting small nonzero threshold example_selector.threshold=0.09 print(dynamic_prompt.format(sentence="Spot can play fetch.")) Give the Spanish translation of every input Input: Spot can run. Output: Spot puede correr. Input: Spot plays fetch. Output: Spot juega a buscar. Input: Spot can play fetch. Output: # Setting threshold greater than 1.0 example_selector.threshold=1.0+1e-9 print(dynamic_prompt.format(sentence="Spot can play fetch.")) Give the Spanish translation of every input Input: Spot can play fetch. Output: previous Prompt Serialization next Output Parsers Contents LengthBased ExampleSelector Similarity ExampleSelector Maximal Marginal Relevance ExampleSelector NGram Overlap ExampleSelector By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 22, 2023.
Provide few shot examples to a prompt#

In this tutorial, we'll learn how to create a prompt template that uses few shot examples. We'll use the FewShotPromptTemplate class, which either takes in a set of examples or an ExampleSelector object. We'll go over both options.

Use Case#

In this tutorial, we'll configure few shot examples for self-ask with search.

Using an example set#

Create the example set#

To get started, create a list of few shot examples. Each example should be a dictionary, with the keys being the input variables and the values being the values for those input variables.

from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.prompt import PromptTemplate

examples = [
  {
    "question": "Who lived longer, Muhammad Ali or Alan Turing?",
    "answer": """
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
"""
  },
  {
    "question": "When was the founder of craigslist born?",
    "answer": """
Are follow up questions needed here: Yes.
Follow up: Who was the founder of craigslist?
Intermediate answer: Craigslist was founded by Craig Newmark.
Follow up: When was Craig Newmark born?
Intermediate answer: Craig Newmark was born on December 6, 1952.
So the final answer is: December 6, 1952
"""
  },
  {
    "question": "Who was the maternal grandfather of George Washington?",
    "answer": """
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
"""
  },
  {
    "question": "Are both the directors of Jaws and Casino Royale from the same country?",
    "answer": """
Are follow up questions needed here: Yes.
Follow up: Who is the director of Jaws?
Intermediate Answer: The director of Jaws is Steven Spielberg.
Follow up: Where is Steven Spielberg from?
Intermediate Answer: The United States.
Follow up: Who is the director of Casino Royale?
Intermediate Answer: The director of Casino Royale is Martin Campbell.
Follow up: Where is Martin Campbell from?
Intermediate Answer: New Zealand.
So the final answer is: No
"""
  }
]

Create a formatter for the few shot examples#

Configure a formatter that will format the few shot examples into a string. This formatter should be a PromptTemplate object.

example_prompt = PromptTemplate(input_variables=["question", "answer"], template="Question: {question}\n{answer}")

print(example_prompt.format(**examples[0]))

Question: Who lived longer, Muhammad Ali or Alan Turing?

Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
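One detail worth knowing: with the default f-string template format, substitution behaves like Python's str.format, so every declared input variable must be supplied at format time. A small illustration; the incomplete dict below is invented for demonstration:

# Formatting with a missing input variable raises a KeyError
# instead of silently rendering a blank.
incomplete = {"question": "Who wrote Hamlet?"}  # no "answer" key
try:
    example_prompt.format(**incomplete)
except KeyError as err:
    print(f"Missing input variable: {err}")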
Feed examples and formatter to FewShotPromptTemplate#

Finally, create a FewShotPromptTemplate object. This object takes in the few shot examples and the formatter for the few shot examples.

prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    suffix="Question: {input}",
    input_variables=["input"]
)

print(prompt.format(input="Who was the father of Mary Ball Washington?"))

Question: Who lived longer, Muhammad Ali or Alan Turing?

Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali

Question: When was the founder of craigslist born?

Are follow up questions needed here: Yes.
Follow up: Who was the founder of craigslist?
Intermediate answer: Craigslist was founded by Craig Newmark.
Follow up: When was Craig Newmark born?
Intermediate answer: Craig Newmark was born on December 6, 1952.
So the final answer is: December 6, 1952

Question: Who was the maternal grandfather of George Washington?

Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball

Question: Are both the directors of Jaws and Casino Royale from the same country?

Are follow up questions needed here: Yes.
Follow up: Who is the director of Jaws?
Intermediate Answer: The director of Jaws is Steven Spielberg.
Follow up: Where is Steven Spielberg from?
Intermediate Answer: The United States.
Follow up: Who is the director of Casino Royale?
Intermediate Answer: The director of Casino Royale is Martin Campbell.
Follow up: Where is Martin Campbell from?
Intermediate Answer: New Zealand.
So the final answer is: No

Question: Who was the father of Mary Ball Washington?
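Note that the formatted examples and the suffix are joined with a blank line by default. If you need a different layout, recent versions of FewShotPromptTemplate accept an example_separator argument; treat the exact parameter name as version-dependent. A sketch:

# Join examples with a visible delimiter line instead of a blank line.
delimited_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    example_separator="\n---\n",
    suffix="Question: {input}",
    input_variables=["input"],
)
print(delimited_prompt.format(input="Who was the father of Mary Ball Washington?"))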
Using an example selector#

Feed examples into ExampleSelector#

We will reuse the example set and the formatter from the previous section. However, instead of feeding the examples directly into the FewShotPromptTemplate object, we will feed them into an ExampleSelector object.

In this tutorial, we will use the SemanticSimilarityExampleSelector class. This class selects few shot examples based on their similarity to the input. It uses an embedding model to compute the similarity between the input and the few shot examples, as well as a vector store to perform the nearest neighbor search.
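An ExampleSelector is any object implementing a small interface: select_examples and add_example. If the built-in selectors don't fit, you can write your own. Below is a hedged sketch subclassing what recent LangChain versions expose as BaseExampleSelector (the import path may vary by version); the keyword-matching logic is purely illustrative.

from typing import Dict, List
from langchain.prompts.example_selector.base import BaseExampleSelector

class KeywordExampleSelector(BaseExampleSelector):
    """Illustrative selector: pick examples sharing a word with the input."""

    def __init__(self, examples: List[dict]):
        self.examples = examples

    def add_example(self, example: Dict[str, str]) -> None:
        self.examples.append(example)

    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        query_words = set(input_variables.get("input", "").lower().split())
        return [
            ex for ex in self.examples
            if query_words & set(ex["question"].lower().split())
        ]

For this tutorial, though, we stick with the built-in semantic similarity selector.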
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings

example_selector = SemanticSimilarityExampleSelector.from_examples(
    # This is the list of examples available to select from.
    examples,
    # This is the embedding class used to produce embeddings, which are used to measure semantic similarity.
    OpenAIEmbeddings(),
    # This is the VectorStore class used to store the embeddings and do a similarity search over them.
    Chroma,
    # This is the number of examples to produce.
    k=1
)

# Select the most similar example to the input.
question = "Who was the father of Mary Ball Washington?"
selected_examples = example_selector.select_examples({"question": question})
print(f"Examples most similar to the input: {question}")
for example in selected_examples:
    print("\n")
    for k, v in example.items():
        print(f"{k}: {v}")

Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.

Examples most similar to the input: Who was the father of Mary Ball Washington?

question: Who was the maternal grandfather of George Washington?
answer:
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball

Feed example selector into FewShotPromptTemplate#

Finally, create a FewShotPromptTemplate object. This object takes in the example selector and the formatter for the few shot examples.

prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    suffix="Question: {input}",
    input_variables=["input"]
)

print(prompt.format(input="Who was the father of Mary Ball Washington?"))

Question: Who was the maternal grandfather of George Washington?

Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball

Question: Who was the father of Mary Ball Washington?
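As with the selectors earlier in this document, examples added after the prompt is built are picked up by subsequent format calls, because the prompt holds a reference to the selector rather than a frozen list. A sketch with an invented example:

# The new example is embedded and stored immediately, so it can be
# selected the next time the prompt is formatted.
prompt.example_selector.add_example({
    "question": "Who wrote Hamlet?",
    "answer": "Are follow up questions needed here: No. So the final answer is: William Shakespeare",
})
print(prompt.format(input="Who wrote Macbeth?"))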