This is easy to do with LangChain!
First, let's define the prompt template:
from langchain.prompts import PromptTemplate
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
Let’s now see how this works! We can call the .format method to format it.
print(prompt.format(product="colorful socks"))
What is a good name for a company that makes colorful socks?
For more details, check out the getting started guide for prompts.
Chains: Combine LLMs and prompts in multi-step workflows#
Up until now, we’ve worked with the PromptTemplate and LLM primitives by themselves. But of course, a real application is not just one primitive, but rather a combination of them.
A chain in LangChain is made up of links, which can be either primitives like LLMs or other chains.
The most fundamental type of chain is an LLMChain, which consists of a PromptTemplate and an LLM.
Extending the previous example, we can construct an LLMChain which takes user input, formats it with a PromptTemplate, and then passes the formatted prompt to an LLM.
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM:
from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt)
Now we can run that chain, specifying only the product!
chain.run("colorful socks")
# -> '\n\nSocktastic!'
There we go! There's the first chain - an LLMChain.
This is one of the simpler types of chains, but understanding how it works will set you up well for working with more complex chains.
For more details, check out the getting started guide for chains.
Agents: Dynamically Call Chains Based on User Input#
So far the chains we’ve looked at run in a predetermined order.
Agents do not: they use an LLM to determine which actions to take and in what order. An action can either be using a tool and observing its output, or returning to the user.
When used correctly, agents can be extremely powerful. In this tutorial, we show you how to easily use agents through the simplest, highest-level API.
In order to load agents, you should understand the following concepts:
Tool: A function that performs a specific duty. This can be things like: Google Search, database lookup, Python REPL, other chains. The interface for a tool is currently a function that takes a string as input and returns a string as output (a minimal custom tool sketch follows this list).
LLM: The language model powering the agent.
Agent: The agent to use. This should be a string that references a supported agent class. Because this notebook focuses on the simplest, highest-level API, this only covers using the standard supported agents. If you want to implement a custom agent, see the documentation for custom agents (coming soon).
Agents: For a list of supported agents and their specifications, see here.
Tools: For a list of predefined tools and their specifications, see here.
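To make the string-in, string-out tool interface concrete, here is a minimal sketch of wrapping a plain Python function as a tool. The WordCounter name and word_count helper are hypothetical illustrations, not part of LangChain itself:
from langchain.agents import Tool
def word_count(query: str) -> str:
    # Toy tool body: count the words in the input string.
    return str(len(query.split()))
word_count_tool = Tool(
    name="WordCounter",
    func=word_count,
    description="Counts the words in a piece of text. Input should be a string.",
)
A tool built this way can be appended to the tools list passed to initialize_agent below.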
For this example, you will also need to install the SerpAPI Python package.
pip install google-search-results
And set the appropriate environment variables.
import os
os.environ["SERPAPI_API_KEY"] = "..."
Now we can get started!
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
# First, let's load the language model we're going to use to control the agent.
llm = OpenAI(temperature=0)
# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Now let's test it out!
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")
> Entering new AgentExecutor chain...
I need to find the temperature first, then use the calculator to raise it to the .023 power.
Action: Search
Action Input: "High temperature in SF yesterday"
Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...
Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.
Action: Calculator
Action Input: 57^.023
Observation: Answer: 1.0974509573251117
Thought: I now know the final answer
Final Answer: The high temperature in SF yesterday in Fahrenheit raised to the .023 power is 1.0974509573251117.
> Finished chain.
Memory: Add State to Chains and Agents#
So far, all the chains and agents we’ve gone through have been stateless. But often, you may want a chain or agent to have some concept of “memory” so that it may remember information about its previous interactions. The clearest and simplest example of this is when designing a chatbot - you want it to remember previous messages so it can use that context to have a better conversation. This would be a type of “short-term memory”. On the more complex side, you could imagine a chain/agent remembering key pieces of information over time - this would be a form of “long-term memory”. For more concrete ideas on the latter, see this awesome paper.
LangChain provides several specially created chains just for this purpose. This notebook walks through using one of those chains (the ConversationChain) with two different types of memory.
By default, the ConversationChain has a simple type of memory that remembers all previous inputs/outputs and adds them to the context that is passed. Let’s take a look at using this chain (setting verbose=True so we can see the prompt).
from langchain import OpenAI, ConversationChain
llm = OpenAI(temperature=0)
conversation = ConversationChain(llm=llm, verbose=True)
output = conversation.predict(input="Hi there!")
print(output)
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI:
> Finished chain.
' Hello! How are you today?'
output = conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
print(output)
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI: Hello! How are you today?
Human: I'm doing well! Just having a conversation with an AI.
AI:
> Finished chain.
" That's great! What would you like to talk about?"
Building a Language Model Application: Chat Models#
Similarly, you can use chat models instead of LLMs. Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a “text in, text out” API, they expose an interface where “chat messages” are the inputs and outputs.
Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
Get Message Completions from a Chat Model#
You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage – ChatMessage takes in an arbitrary role parameter. Most of the time, you’ll just be dealing with HumanMessage, AIMessage, and SystemMessage.
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)
chat = ChatOpenAI(temperature=0)
You can get completions by passing in a single message.
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
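As mentioned above, ChatMessage takes an arbitrary role parameter. Here is a minimal sketch of constructing one directly; note that whichever role string you pick must be one the underlying model provider actually accepts:
from langchain.schema import ChatMessage
# A ChatMessage pairs an arbitrary role string with the message content.
msg = ChatMessage(role="user", content="I love programming.")
print(msg.role, msg.content)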
You can also pass in multiple messages for OpenAI’s gpt-3.5-turbo and gpt-4 models.
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming.")
]
chat(messages)
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter:
batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
result
# -> LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}})
You can recover things like token usage from this LLMResult:
result.llm_output['token_usage']
# -> {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}
Chat Prompt Templates#
Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate’s format_prompt - this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an LLM or a chat model.
For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
chat = ChatOpenAI(temperature=0)
template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
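Because format_prompt returns a PromptValue, the same formatted value can also be rendered as a single string rather than a list of messages - useful if you want to feed it to a plain LLM. A minimal sketch:
# Render the formatted prompt as one string instead of chat messages.
chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_string()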
Chains with Chat Models#
The LLMChain discussed in the above section can be used with chat models as well:
from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
chat = ChatOpenAI(temperature=0)
template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
# -> "J'aime programmer."
Agents with Chat Models#
Agents can also be used with chat models; you can initialize one using AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION as the agent type.
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
# First, let's load the language model we're going to use to control the agent.
chat = ChatOpenAI(temperature=0)
# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Now let's test it out!
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
> Entering new AgentExecutor chain...
Thought: I need to use a search engine to find Olivia Wilde's boyfriend and a calculator to raise his age to the 0.23 power.
Action:
{
"action": "Search",
"action_input": "Olivia Wilde boyfriend"
}
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought:I need to use a search engine to find Harry Styles' current age.
Action:
{
"action": "Search",
"action_input": "Harry Styles age"
}
Observation: 29 years
Thought:Now I need to calculate 29 raised to the 0.23 power.
Action:
{
"action": "Calculator",
"action_input": "29^0.23"
}
Observation: Answer: 2.169459462491557
Thought:I now know the final answer.
Final Answer: 2.169459462491557
> Finished chain.
'2.169459462491557'
Memory: Add State to Chains and Agents#
You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.
from langchain.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate
)
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}")
])
llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)
conversation.predict(input="Hi there!")
# -> 'Hello! How can I assist you today?'
conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
# -> "That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?"
conversation.predict(input="Tell me about yourself.") | https://python.langchain.com/en/latest/getting_started/getting_started.html |
d0b4dd4659d0-12 | conversation.predict(input="Tell me about yourself.")
# -> "Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?"
Cloud Hosted Setup#
We offer a hosted version of tracing at langchainplus.vercel.app. You can use this to view traces from your run without having to run the server locally.
Note: we are currently only offering this to a limited number of users. The hosted platform is VERY alpha, in active development, and data might be dropped at any time. Don’t depend on data being persisted in the system long term and don’t log traces that may contain sensitive information. If you’re interested in using the hosted platform, please fill out the form here.
Installation#
Log in to the system and click “API Key” in the top right corner. Generate a new key and keep it safe. You will need it to authenticate with the system.
Environment Setup#
After installation, you must now set up your environment to use tracing.
This can be done by setting an environment variable in your terminal by running export LANGCHAIN_HANDLER=langchain.
You can also do this by adding the below snippet to the top of every script. IMPORTANT: this must go at the VERY TOP of your script, before you import anything from langchain.
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
You will also need to set an environment variable to specify the endpoint and your API key. This can be done with the following environment variables:
LANGCHAIN_ENDPOINT: set this to "https://langchain-api-gateway-57eoxz8z.uc.gateway.dev".
LANGCHAIN_API_KEY: set this to the API key you generated during installation.
An example of adding all relevant environment variables is below:
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
os.environ["LANGCHAIN_ENDPOINT"] = "https://langchain-api-gateway-57eoxz8z.uc.gateway.dev" | https://python.langchain.com/en/latest/tracing/hosted_installation.html |
e0e16cae7a8f-1 | os.environ["LANGCHAIN_API_KEY"] = "my_api_key" # Don't commit this to your repo! Better to set it in your terminal.
Locally Hosted Setup#
This page contains instructions for installing and then setting up the environment to use the locally hosted version of tracing.
Installation#
Ensure you have Docker installed (see Get Docker) and that it’s running.
Install the latest version of langchain: pip install langchain, or pip install langchain -U to upgrade your existing version.
Run langchain-server. This command was installed automatically when you ran the above command (pip install langchain).
This will spin up the server in the terminal, hosted on port 4173 by default.
Once you see the terminal output langchain-langchain-frontend-1 | ➜ Local: http://localhost:4173/, navigate to http://localhost:4173/.
You should see a page with your tracing sessions. See the overview page for a walkthrough of the UI.
Currently, trace data is not guaranteed to be persisted between runs of langchain-server. If you want to persist your data, you can mount a volume to the Docker container. See the Docker docs for more info.
To stop the server, press Ctrl+C in the terminal where you ran langchain-server.
Environment Setup#
After installation, you must now set up your environment to use tracing.
This can be done by setting an environment variable in your terminal by running export LANGCHAIN_HANDLER=langchain.
You can also do this by adding the below snippet to the top of every script. IMPORTANT: this must go at the VERY TOP of your script, before you import anything from langchain.
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
Tracing Walkthrough#
There are two recommended ways to trace your LangChains:
Setting the LANGCHAIN_TRACING environment variable to “true”.
Using a context manager with tracing_enabled() to trace a particular block of code.
Note that if the environment variable is set, all code will be traced, regardless of whether or not it’s within the context manager.
import os
os.environ["LANGCHAIN_TRACING"] = "true"
## Uncomment below if using hosted setup.
# os.environ["LANGCHAIN_ENDPOINT"] = "https://langchain-api-gateway-57eoxz8z.uc.gateway.dev"
## Uncomment below if you want traces to be recorded to "my_session" instead of "default".
# os.environ["LANGCHAIN_SESSION"] = "my_session"
## Better to set this environment variable in the terminal
## Uncomment below if using hosted version. Replace "my_api_key" with your actual API Key.
# os.environ["LANGCHAIN_API_KEY"] = "my_api_key"
import langchain
from langchain.agents import Tool, initialize_agent, load_tools
from langchain.agents import AgentType
from langchain.callbacks import tracing_enabled
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
# Agent run with tracing. Ensure that OPENAI_API_KEY is set appropriately to run this example.
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is 2 raised to .123243 power?")
> Entering new AgentExecutor chain...
I need to use a calculator to solve this.
Action: Calculator
Action Input: 2^.123243
Observation: Answer: 1.0891804557407723
Thought: I now know the final answer.
Final Answer: 1.0891804557407723
> Finished chain.
'1.0891804557407723'
# Agent run with tracing using a chat model
agent = initialize_agent(
    tools, ChatOpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is 2 raised to .123243 power?")
> Entering new AgentExecutor chain...
I need to use a calculator to solve this.
Action: Calculator
Action Input: 2 ^ .123243
Observation: Answer: 1.0891804557407723
Thought:I now know the answer to the question.
Final Answer: 1.0891804557407723
> Finished chain.
'1.0891804557407723'
# Both of the agent runs will be traced because the environment variable is set
agent.run("What is 2 raised to .123243 power?")
with tracing_enabled() as session:
    agent.run("What is 5 raised to .123243 power?")
> Entering new AgentExecutor chain...
I need to use a calculator to solve this.
Action: Calculator
Action Input: 2 ^ .123243
Observation: Answer: 1.0891804557407723
Thought:I now know the answer to the question.
Final Answer: 1.0891804557407723
> Finished chain.
> Entering new AgentExecutor chain...
I need to use a calculator to solve this.
Action: Calculator
Action Input: 5 ^ .123243
Observation: Answer: 1.2193914912400514
Thought:I now know the answer to the question.
Final Answer: 1.2193914912400514
> Finished chain.
# Now, we unset the environment variable and use a context manager.
if "LANGCHAIN_TRACING" in os.environ:
del os.environ["LANGCHAIN_TRACING"]
# here, we are writing traces to "my_test_session"
with tracing_enabled("my_session") as session:
assert session
agent.run("What is 5 raised to .123243 power?") # this should be traced
agent.run("What is 2 raised to .123243 power?") # this should not be traced
> Entering new AgentExecutor chain...
I need to use a calculator to solve this.
Action: Calculator
Action Input: 5 ^ .123243
Observation: Answer: 1.2193914912400514
Thought:I now know the answer to the question.
Final Answer: 1.2193914912400514
> Finished chain.
> Entering new AgentExecutor chain...
I need to use a calculator to solve this.
Action: Calculator
Action Input: 2 ^ .123243
Observation: Answer: 1.0891804557407723
Thought:I now know the answer to the question.
Final Answer: 1.0891804557407723
> Finished chain.
'1.0891804557407723'
# The context manager is concurrency safe:
import asyncio
if "LANGCHAIN_TRACING" in os.environ:
del os.environ["LANGCHAIN_TRACING"] | https://python.langchain.com/en/latest/tracing/agent_with_tracing.html |
fa2e69208882-3 | del os.environ["LANGCHAIN_TRACING"]
questions = [f"What is {i} raised to .123 power?" for i in range(1,4)]
# start a background task
task = asyncio.create_task(agent.arun(questions[0])) # this should not be traced
with tracing_enabled() as session:
assert session
tasks = [agent.arun(q) for q in questions[1:3]] # these should be traced
await asyncio.gather(*tasks)
await task
> Entering new AgentExecutor chain...
> Entering new AgentExecutor chain...
> Entering new AgentExecutor chain...
I need to use a calculator to solve this.
Action: Calculator
Action Input: 3^0.123I need to use a calculator to solve this.
Action: Calculator
Action Input: 2^0.123Any number raised to the power of 0 is 1, but I'm not sure about a decimal power.
Action: Calculator
Action Input: 1^.123
Observation: Answer: 1.1446847956963533
Thought:
Observation: Answer: 1.0889970153361064
Thought:
Observation: Answer: 1.0
Thought:
> Finished chain.
> Finished chain.
> Finished chain.
'1.0'
[Beta] Tracing V2#
We are rolling out a newer version of our tracing service with more features coming soon. Here are the instructions on how to use it to trace your runs.
To use it, either use the tracing_v2_enabled context manager or set LANGCHAIN_TRACING_V2 = 'true'.
Option 1 (Local):
Run the local LangChainPlus Server
pip install --upgrade langchain
langchain plus start
Option 2 (Hosted):
After making an account and grabbing a LangChainPlus API key, set the LANGCHAIN_ENDPOINT and LANGCHAIN_API_KEY environment variables.
import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_ENDPOINT"] = "https://api.langchain.plus" # Uncomment this line if you want to use the hosted version
# os.environ["LANGCHAIN_API_KEY"] = "<YOUR-LANGCHAINPLUS-API-KEY>" # Uncomment this line if you want to use the hosted version.
import langchain
from langchain.agents import Tool, initialize_agent, load_tools
from langchain.agents import AgentType
from langchain.callbacks import tracing_enabled
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
# Agent run with tracing. Ensure that OPENAI_API_KEY is set appropriately to run this example.
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is 2 raised to .123243 power?")
> Entering new AgentExecutor chain...
I need to use a calculator to solve this.
Action: Calculator
Action Input: 2^.123243
Observation: Answer: 1.0891804557407723
Thought: I now know the final answer.
Final Answer: 1.0891804557407723
> Finished chain.
'1.0891804557407723'
YouTube#
This is a collection of LangChain videos on YouTube.
⛓️Official LangChain YouTube channel⛓️#
Introduction to LangChain with Harrison Chase, creator of LangChain#
Building the Future with LLMs, LangChain, & Pinecone by Pinecone
LangChain and Weaviate with Harrison Chase and Bob van Luijt - Weaviate Podcast #36 by Weaviate • Vector Database
LangChain Demo + Q&A with Harrison Chase by Full Stack Deep Learning
LangChain Agents: Build Personal Assistants For Your Data (Q&A with Harrison Chase and Mayo Oshin) by Chat with data
⛓️ LangChain “Agents in Production” Webinar by LangChain
Videos (sorted by views)#
Building AI LLM Apps with LangChain (and more?) - LIVE STREAM by Nicholas Renotte
First look - ChatGPT + WolframAlpha (GPT-3.5 and Wolfram|Alpha via LangChain by James Weaver) by Dr Alan D. Thompson
LangChain explained - The hottest new Python framework by AssemblyAI
Chatbot with INFINITE MEMORY using OpenAI & Pinecone - GPT-3, Embeddings, ADA, Vector DB, Semantic by David Shapiro ~ AI
LangChain for LLMs is… basically just an Ansible playbook by David Shapiro ~ AI
Build your own LLM Apps with LangChain & GPT-Index by 1littlecoder
BabyAGI - New System of Autonomous AI Agents with LangChain by 1littlecoder
Run BabyAGI with Langchain Agents (with Python Code) by 1littlecoder
How to Use Langchain With Zapier | Write and Send Email with GPT-3 | OpenAI API Tutorial by StarMorph AI
Use Your Locally Stored Files To Get Response From GPT - OpenAI | Langchain | Python by Shweta Lodha
Langchain JS | How to Use GPT-3, GPT-4 to Reference your own Data | OpenAI Embeddings Intro by StarMorph AI
The easiest way to work with large language models | Learn LangChain in 10min by Sophia Yang
4 Autonomous AI Agents: “Westworld” simulation BabyAGI, AutoGPT, Camel, LangChain by Sophia Yang
AI CAN SEARCH THE INTERNET? Langchain Agents + OpenAI ChatGPT by tylerwhatsgood
Query Your Data with GPT-4 | Embeddings, Vector Databases | Langchain JS Knowledgebase by StarMorph AI
Weaviate + LangChain for LLM apps presented by Erika Cardenas by Weaviate • Vector Database
Langchain Overview — How to Use Langchain & ChatGPT by Python In Office
Langchain Overview - How to Use Langchain & ChatGPT by Python In Office
Custom langchain Agent & Tools with memory. Turn any Python function into langchain tool with Gpt 3 by echohive
LangChain: Run Language Models Locally - Hugging Face Models by Prompt Engineering
ChatGPT with any YouTube video using langchain and chromadb by echohive
How to Talk to a PDF using LangChain and ChatGPT by Automata Learning Lab
Langchain Document Loaders Part 1: Unstructured Files by Merk
LangChain - Prompt Templates (what all the best prompt engineers use) by Nick Daigler
LangChain. Crear aplicaciones Python impulsadas por GPT (Build GPT-powered Python applications) by Jesús Conde
Easiest Way to Use GPT In Your Products | LangChain Basics Tutorial by Rachel Woods
BabyAGI + GPT-4 Langchain Agent with Internet Access by tylerwhatsgood
Learning LLM Agents. How does it actually work? LangChain, AutoGPT & OpenAI by Arnoldas Kemeklis
Get Started with LangChain in Node.js by Developers Digest
LangChain + OpenAI tutorial: Building a Q&A system w/ own text data by Samuel Chan
Langchain + Zapier Agent by Merk
Connecting the Internet with ChatGPT (LLMs) using Langchain And Answers Your Questions by Kamalraj M M
Build More Powerful LLM Applications for Business’s with LangChain (Beginners Guide) by No Code Blackbox
⛓️ LangFlow LLM Agent Demo for 🦜🔗LangChain by Cobus Greyling
⛓️ Chatbot Factory: Streamline Python Chatbot Creation with LLMs and Langchain by Finxter
⛓️ LangChain Tutorial - ChatGPT mit eigenen Daten (ChatGPT with your own data) by Coding Crashkurse
⛓️ Chat with a CSV | LangChain Agents Tutorial (Beginners) by GoDataProf
⛓️ Introdução ao Langchain (Introduction to Langchain) - #Cortes - Live DataHackers by Prof. João Gabriel Lima
⛓️ LangChain: Level up ChatGPT !? | LangChain Tutorial Part 1 by Code Affinity
⛓️ KI schreibt krasses Youtube Skript (AI writes a wild YouTube script) 😲😳 | LangChain Tutorial Deutsch by SimpleKI
⛓️ Chat with Audio: Langchain, Chroma DB, OpenAI, and Assembly AI by AI Anytime
⛓️ QA over documents with Auto vector index selection with Langchain router chains by echohive
⛓️ Build your own custom LLM application with Bubble.io & Langchain (No Code & Beginner friendly) by No Code Blackbox
⛓️ Simple App to Question Your Docs: Leveraging Streamlit, Hugging Face Spaces, LangChain, and Claude! by Chris Alexiuk
⛓️ LANGCHAIN AI- ConstitutionalChainAI + Databutton AI ASSISTANT Web App by Avra
⛓️ LANGCHAIN AI AUTONOMOUS AGENT WEB APP - 👶 BABY AGI 🤖 with EMAIL AUTOMATION using DATABUTTON by Avra
⛓️ The Future of Data Analysis: Using A.I. Models in Data Analysis (LangChain) by Absent Data
⛓️ Memory in LangChain | Deep dive (python) by Eden Marco
⛓️ 9 LangChain UseCases | Beginner’s Guide | 2023 by Data Science Basics
⛓️ Use Large Language Models in Jupyter Notebook | LangChain | Agents & Indexes by Abhinaw Tiwari
⛓️ How to Talk to Your Langchain Agent | 11 Labs + Whisper by VRSEN
⛓️ LangChain Deep Dive: 5 FUN AI App Ideas To Build Quickly and Easily by James NoCode
⛓️ BEST OPEN Alternative to OPENAI’s EMBEDDINGs for Retrieval QA: LangChain by Prompt Engineering
⛓️ LangChain 101: Models by Mckay Wrigley
⛓️ LangChain with JavaScript Tutorial #1 | Setup & Using LLMs by Leon van Zyl
⛓️ LangChain Overview & Tutorial for Beginners: Build Powerful AI Apps Quickly & Easily (ZERO CODE) by James NoCode
⛓️ LangChain In Action: Real-World Use Case With Step-by-Step Tutorial by Rabbitmetrics
⛓️ Summarizing and Querying Multiple Papers with LangChain by Automata Learning Lab
⛓️ Using Langchain (and Replit) through Tana, ask Google/Wikipedia/Wolfram Alpha to fill out a table by Stian Håklev
⛓️ Langchain PDF App (GUI) | Create a ChatGPT For Your PDF in Python by Alejandro AO - Software & Ai
⛓️ Auto-GPT with LangChain 🔥 | Create Your Own Personal AI Assistant by Data Science Basics
⛓️ Create Your OWN Slack AI Assistant with Python & LangChain by Dave Ebbelaar
⛓️ How to Create LOCAL Chatbots with GPT4All and LangChain [Full Guide] by Liam Ottley
⛓️ Build a Multilingual PDF Search App with LangChain, Cohere and Bubble by Menlo Park Lab
⛓️ Building a LangChain Agent (code-free!) Using Bubble and Flowise by Menlo Park Lab
⛓️ Build a LangChain-based Semantic PDF Search App with No-Code Tools Bubble and Flowise by Menlo Park Lab
⛓️ LangChain Memory Tutorial | Building a ChatGPT Clone in Python by Alejandro AO - Software & Ai
⛓️ ChatGPT For Your DATA | Chat with Multiple Documents Using LangChain by Data Science Basics
⛓️ Llama Index: Chat with Documentation using URL Loader by Merk
⛓️ Using OpenAI, LangChain, and Gradio to Build Custom GenAI Applications by David Hundley
⛓ icon marks a new video [last update 2023-05-15]
Deploying LLMs in Production#
In today’s fast-paced technological landscape, the use of Large Language Models (LLMs) is rapidly expanding. As a result, it’s crucial for developers to understand how to effectively deploy these models in production environments. LLM interfaces typically fall into two categories:
Case 1: Utilizing External LLM Providers (OpenAI, Anthropic, etc.). In this scenario, most of the computational burden is handled by the LLM providers, while LangChain simplifies the implementation of business logic around these services. This approach includes features such as prompt templating, chat message generation, caching, vector embedding database creation, preprocessing, etc.
Case 2: Self-hosted Open-Source Models. Alternatively, developers can opt to use smaller, yet comparably capable, self-hosted open-source LLM models. This approach can significantly decrease costs, latency, and privacy concerns associated with transferring data to external LLM providers.
Regardless of the framework that forms the backbone of your product, deploying LLM applications comes with its own set of challenges. It’s vital to understand the trade-offs and key considerations when evaluating serving frameworks.
Outline#
This guide aims to provide a comprehensive overview of the requirements for deploying LLMs in a production setting, focusing on:
Designing a Robust LLM Application Service
Maintaining Cost-Efficiency
Ensuring Rapid Iteration
Understanding these components is crucial when assessing serving systems. LangChain integrates with several open-source projects designed to tackle these issues, providing a robust framework for productionizing your LLM applications. Some notable frameworks include:
Ray Serve
BentoML
Modal
These links will provide further information on each ecosystem, assisting you in finding the best fit for your LLM deployment needs.
Designing a Robust LLM Application Service#
When deploying an LLM service in production, it’s imperative to provide a seamless user experience free from outages. Achieving 24/7 service availability involves creating and maintaining several sub-systems surrounding your application.
Monitoring#
Monitoring forms an integral part of any system running in a production environment. In the context of LLMs, it is essential to monitor both performance and quality metrics.
Performance Metrics: These metrics provide insights into the efficiency and capacity of your model. Here are some key examples:
Query per second (QPS): This measures the number of queries your model processes in a second, offering insights into its utilization.
Latency: This metric quantifies the delay from when your client sends a request to when they receive a response.
Tokens Per Second (TPS): This represents the number of tokens your model can generate in a second.
Quality Metrics: These metrics are typically customized according to the business use-case. For instance, how does the output of your system compare to a baseline, such as a previous version? Although these metrics can be calculated offline, you need to log the necessary data to use them later.
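As a rough illustration of the performance metrics above, here is a minimal sketch that times an LLM call and estimates QPS over a one-second sliding window. The MetricsTracker class and the llm callable are assumptions for illustration, not a prescribed monitoring stack:
import time
from collections import deque
class MetricsTracker:
    # Toy tracker for latency and queries-per-second (QPS).
    def __init__(self):
        self.timestamps = deque()
    def timed_call(self, fn, *args):
        start = time.monotonic()
        result = fn(*args)
        now = time.monotonic()
        self.timestamps.append(now)
        # Keep only calls from the last second to estimate QPS.
        while self.timestamps and now - self.timestamps[0] > 1.0:
            self.timestamps.popleft()
        print(f"latency: {now - start:.3f}s, approximate QPS: {len(self.timestamps)}")
        return result
# Usage, assuming `llm` is a callable such as OpenAI(temperature=0):
# tracker = MetricsTracker()
# tracker.timed_call(llm, "What is a good name for a sock company?")
In production you would export such measurements to a real monitoring system rather than printing them.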
Fault tolerance#
Your application may encounter errors such as exceptions in your model inference or business logic code, causing failures and disrupting traffic. Other potential issues could arise from the machine running your application, such as unexpected hardware breakdowns or loss of spot-instances during high-demand periods. One way to mitigate these risks is by increasing redundancy through replica scaling and implementing recovery mechanisms for failed replicas. However, model replicas aren’t the only potential points of failure. It’s essential to build resilience against various failures that could occur at any point in your stack.
Zero down time upgrade#
System upgrades are often necessary but can result in service disruptions if not handled correctly. One way to prevent downtime during upgrades is by implementing a smooth transition process from the old version to the new one. Ideally, the new version of your LLM service is deployed, and traffic gradually shifts from the old to the new version, maintaining a constant QPS throughout the process.
Load balancing#
Load balancing, in simple terms, is a technique to distribute work evenly across multiple computers, servers, or other resources to optimize the utilization of the system, maximize throughput, minimize response time, and avoid overload of any single resource. Think of it as a traffic officer directing cars (requests) to different roads (servers) so that no single road becomes too congested.
There are several strategies for load balancing. For example, one common method is the Round Robin strategy, where each request is sent to the next server in line, cycling back to the first when all servers have received a request. This works well when all servers are equally capable. However, if some servers are more powerful than others, you might use a Weighted Round Robin or Least Connections strategy, where more requests are sent to the more powerful servers, or to those currently handling the fewest active requests.
Let’s imagine you’re running an LLM chain. If your application becomes popular, you could have hundreds or even thousands of users asking questions at the same time. If one server gets too busy (high load), the load balancer would direct new requests to another server that is less busy. This way, all your users get a timely response and the system remains stable.
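A minimal sketch of the Round Robin strategy described above; the server names are hypothetical:
import itertools
servers = ["server-a", "server-b", "server-c"] # hypothetical replicas
next_server = itertools.cycle(servers)
def route(request: str) -> str:
    # Each request goes to the next server in line, cycling back to the first.
    return f"routing {request!r} to {next(next_server)}"
for i in range(4):
    print(route(f"request-{i}")) # a, b, c, then back to a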
Maintaining Cost-Efficiency and Scalability#
Deploying LLM services can be costly, especially when you’re handling a large volume of user interactions. Charges by LLM providers are usually based on tokens used, making a chat system inference on these models potentially expensive. However, several strategies can help manage these costs without compromising the quality of the service.
Self-hosting models#
Several smaller and open-source LLMs are emerging to tackle the issue of reliance on LLM providers. Self-hosting allows you to maintain similar quality to LLM provider models while managing costs. The challenge lies in building a reliable, high-performing LLM serving system on your own machines.
Resource Management and Auto-Scaling#
Computational logic within your application requires precise resource allocation. For instance, if part of your traffic is served by an OpenAI endpoint and another part by a self-hosted model, it’s crucial to allocate suitable resources for each. Auto-scaling—adjusting resource allocation based on traffic—can significantly impact the cost of running your application. This strategy requires a balance between cost and responsiveness, ensuring neither resource over-provisioning nor compromised application responsiveness.
Utilizing Spot Instances#
On platforms like AWS, spot instances offer substantial cost savings, typically priced at about a third of on-demand instances. The trade-off is a higher crash rate, necessitating a robust fault-tolerance mechanism for effective use.
Independent Scaling#
When self-hosting your models, you should consider independent scaling. For example, if you have two translation models, one fine-tuned for French and another for Spanish, incoming requests might necessitate different scaling requirements for each.
Batching requests#
In the context of Large Language Models, batching requests can enhance efficiency by better utilizing your GPU resources. GPUs are inherently parallel processors, designed to handle multiple tasks simultaneously. If you send individual requests to the model, the GPU might not be fully utilized as it’s only working on a single task at a time. On the other hand, by batching requests together, you’re allowing the GPU to work on multiple tasks at once, maximizing its utilization and improving inference speed. This not only leads to cost savings but can also improve the overall latency of your LLM service.
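To illustrate, here is a toy sketch of collecting individual prompts into a batch before invoking the model. The batch size, the module-level buffer, and the llm.generate call are assumptions about your serving setup, not a production-ready server:
BATCH_SIZE = 8 # tune to your GPU's capacity
pending = []
def handle_request(prompt: str, llm):
    # Buffer prompts until a full batch is ready, then run one batched call.
    pending.append(prompt)
    if len(pending) >= BATCH_SIZE:
        batch, pending[:] = list(pending), []
        return llm.generate(batch) # one batched call instead of eight single ones
A real serving framework would also flush partial batches after a timeout so low-traffic requests are not stuck waiting.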
In summary, managing costs while scaling your LLM services requires a strategic approach. Utilizing self-hosting models, managing resources effectively, employing auto-scaling, using spot instances, independently scaling models, and batching requests are key strategies to consider. Open-source libraries such as Ray Serve and BentoML are designed to deal with these complexities.
Ensuring Rapid Iteration#
The LLM landscape is evolving at an unprecedented pace, with new libraries and model architectures being introduced constantly. Consequently, it’s crucial to avoid tying yourself to a solution specific to one particular framework. This is especially relevant in serving, where changes to your infrastructure can be time-consuming, expensive, and risky. Strive for infrastructure that is not locked into any specific machine learning library or framework, but instead offers a general-purpose, scalable serving layer. Here are some aspects where flexibility plays a key role:
Model composition#
Deploying systems like LangChain demands the ability to piece together different models and connect them via logic. Take the example of building a natural language input SQL query engine. Querying an LLM and obtaining the SQL command is only part of the system. You need to extract metadata from the connected database, construct a prompt for the LLM, run the SQL query on an engine, collect and feed back the response to the LLM as the query runs, and present the results to the user. This demonstrates the need to seamlessly integrate various complex components built in Python into a dynamic chain of logical blocks that can be served together.
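As plain Python glue, such a composition might look like the following sketch; extract_metadata and run_sql are hypothetical stand-ins for your own components:
def extract_metadata(db) -> str:
    # Hypothetical: a real system would introspect the database schema here.
    return "users(id INT, name TEXT)"
def run_sql(db, sql: str) -> list:
    # Hypothetical: a real system would execute the query on an engine here.
    return [(1, "Alice")]
def answer_question(question: str, llm, db) -> str:
    metadata = extract_metadata(db)
    sql = llm(f"Schema: {metadata}\nWrite a SQL query answering: {question}")
    rows = run_sql(db, sql)
    return llm(f"Question: {question}\nRows: {rows}\nAnswer in plain English.")
The point is that each step can be a different model or a plain function, and the serving layer needs to run them together as one logical chain.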
Cloud providers#
Many hosted solutions are restricted to a single cloud provider, which can limit your options in today’s multi-cloud world. Depending on where your other infrastructure components are built, you might prefer to stick with your chosen cloud provider.
Infrastructure as Code (IaC)#
Rapid iteration also involves the ability to recreate your infrastructure quickly and reliably. This is where Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or Kubernetes YAML files come into play. They allow you to define your infrastructure in code files, which can be version controlled and quickly deployed, enabling faster and more reliable iterations.
CI/CD#
In a fast-paced environment, implementing CI/CD pipelines can significantly speed up the iteration process. They help automate the testing and deployment of your LLM applications, reducing the risk of errors and enabling faster feedback and iteration.
Model Comparison#
Constructing your language model application will likely involve choosing between many different options for prompts, models, and even chains to use. When doing so, you will want to compare these different options on different inputs in an easy, flexible, and intuitive way.
LangChain provides the concept of a ModelLaboratory to test out and try different models.
from langchain import LLMChain, OpenAI, Cohere, HuggingFaceHub, PromptTemplate
from langchain.model_laboratory import ModelLaboratory
llms = [
    OpenAI(temperature=0),
    Cohere(model="command-xlarge-20221108", max_tokens=20, temperature=0),
    HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature":1})
]
model_lab = ModelLaboratory.from_llms(llms)
model_lab.compare("What color is a flamingo?")
Input:
What color is a flamingo?
OpenAI
Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
Flamingos are pink.
Cohere
Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}
Pink
HuggingFaceHub
Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1}
pink
prompt = PromptTemplate(template="What is the capital of {state}?", input_variables=["state"])
model_lab_with_prompt = ModelLaboratory.from_llms(llms, prompt=prompt)
model_lab_with_prompt.compare("New York")
Input:
New York
OpenAI
Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
The capital of New York is Albany.
Cohere
Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}
The capital of New York is Albany.
HuggingFaceHub
Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1}
st john s
from langchain import SelfAskWithSearchChain, SerpAPIWrapper
open_ai_llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
self_ask_with_search_openai = SelfAskWithSearchChain(llm=open_ai_llm, search_chain=search, verbose=True)
cohere_llm = Cohere(temperature=0, model="command-xlarge-20221108")
search = SerpAPIWrapper()
self_ask_with_search_cohere = SelfAskWithSearchChain(llm=cohere_llm, search_chain=search, verbose=True)
chains = [self_ask_with_search_openai, self_ask_with_search_cohere]
names = [str(open_ai_llm), str(cohere_llm)]
model_lab = ModelLaboratory(chains, names=names)
model_lab.compare("What is the hometown of the reigning men's U.S. Open champion?")
Input:
What is the hometown of the reigning men's U.S. Open champion?
OpenAI
Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
> Entering new chain...
What is the hometown of the reigning men's U.S. Open champion?
Are follow up questions needed here: Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Carlos Alcaraz.
Follow up: Where is Carlos Alcaraz from?
Intermediate answer: El Palmar, Spain.
So the final answer is: El Palmar, Spain
> Finished chain.
So the final answer is: El Palmar, Spain
Cohere
Params: {'model': 'command-xlarge-20221108', 'max_tokens': 256, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}
> Entering new chain...
What is the hometown of the reigning men's U.S. Open champion?
Are follow up questions needed here: Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Carlos Alcaraz.
So the final answer is:
Carlos Alcaraz
> Finished chain.
So the final answer is:
Carlos Alcaraz
Tracing#
By enabling tracing in your LangChain runs, you’ll be able to more effectively visualize, step through, and debug your chains and agents.
First, you should install tracing and set up your environment properly.
You can use either a locally hosted version of this (uses Docker) or a cloud hosted version (in closed alpha).
If you’re interested in using the hosted platform, please fill out the form here.
Locally Hosted Setup
Cloud Hosted Setup
Tracing Walkthrough#
When you first access the UI, you should see a page with your tracing sessions.
An initial one “default” should already be created for you.
A session is just a way to group traces together.
If you click on a session, it will take you to a page with no recorded traces that says “No Runs.”
You can create a new session with the new session form.
If we click on the default session, we can see that to start we have no traces stored.
If we now start running chains and agents with tracing enabled, we will see data show up here.
To do so, we can run this notebook as an example.
After running it, we will see an initial trace show up.
From here we can explore the trace at a high level by clicking on the arrow to show nested runs.
We can keep on clicking further and further down to explore deeper and deeper.
We can also click on the “Explore” button of the top level run to dive even deeper.
Here, we can see the inputs and outputs in full, as well as all the nested traces.
We can keep on exploring each of these nested traces in more detail.
For example, here is the lowest level trace with the exact inputs/outputs to the LLM.
Changing Sessions#
To initially record traces to a session other than "default", you can set the LANGCHAIN_SESSION environment variable to the name of the session you want to record to:
import os
os.environ["LANGCHAIN_TRACING"] = "true"
os.environ["LANGCHAIN_SESSION"] = "my_session" # Make sure this session actually exists. You can create a new session in the UI.
To switch sessions mid-script or mid-notebook, do NOT set the LANGCHAIN_SESSION environment variable. Instead:
langchain.set_tracing_callback_manager(session_name="my_session")
Interacting with APIs#
Lots of data and information is stored behind APIs.
This page covers all resources available in LangChain for working with APIs.
Chains#
If you are just getting started and your APIs are relatively simple, you should start with chains.
Chains are sequences of predetermined steps, so they are a good starting point: they give you more control and make it easier to understand what is happening.
API Chain
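As a minimal sketch (assuming an OpenAI API key and the Open-Meteo example API docs that ship with LangChain), an API chain can be constructed like this:
from langchain.llms import OpenAI
from langchain.chains import APIChain
from langchain.chains.api import open_meteo_docs
llm = OpenAI(temperature=0)
# The chain reads the API docs, constructs the request URL, calls the API, and answers from the response.
chain = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS, verbose=True)
chain.run('What is the weather like right now in Munich, Germany in degrees Fahrenheit?')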
Agents#
Agents are more complex and involve multiple queries to the LLM to decide what to do.
The downside of agents is that you have less control. The upside is that they are more powerful,
which allows you to use them on larger and more complex schemas.
OpenAPI Agent
Autonomous Agents#
Autonomous Agents are agents designed to be longer-running.
You give them one or more long-term goals, and they independently execute towards those goals.
These applications combine tool usage and long-term memory.
At the moment, Autonomous Agents are fairly experimental and based on other open-source projects.
By implementing these open-source projects in LangChain primitives, we get the benefits of LangChain:
easy switching and experimenting with multiple LLMs, use of different vectorstores as memory,
and use of LangChain’s collection of tools.
Baby AGI (Original Repo)#
Baby AGI: a notebook implementing BabyAGI as LLM Chains
Baby AGI with Tools: building off the above notebook, this example substitutes in an agent with tools as the execution tools, allowing it to actually take actions.
AutoGPT (Original Repo)#
AutoGPT: a notebook implementing AutoGPT in LangChain primitives
WebSearch Research Assistant: a notebook showing how to use AutoGPT plus specific tools to act as a research assistant that can use the web.
MetaPrompt (Original Repo)#
Meta-Prompt: a notebook implementing Meta-Prompt in LangChain primitives
Chatbots#
Since language models are good at producing text, they are well suited to creating chatbots.
Aside from the base prompts/LLMs, an important concept to know for chatbots is memory.
Most chat-based applications rely on remembering what happened in previous interactions, which is what memory is designed to help with.
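As a minimal sketch (assuming an OpenAI API key; the greeting text is just an illustration), a chatbot with buffer memory can be put together like this:
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
llm = OpenAI(temperature=0)
# The buffer memory keeps the raw transcript and injects it into every prompt.
conversation = ConversationChain(llm=llm, memory=ConversationBufferMemory())
conversation.predict(input="Hi there! My name is Alice.")
conversation.predict(input="What is my name?")  # the model can now answer from memory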
The following resources exist:
ChatGPT Clone: A notebook walking through how to recreate a ChatGPT-like experience with LangChain.
Conversation Memory: A notebook walking through how to use different types of conversational memory.
Conversation Agent: A notebook walking through how to create an agent optimized for conversation.
Additional related resources include:
Memory Key Concepts: Explanation of key concepts related to memory.
Memory Examples: A collection of how-to examples for working with memory.
More end-to-end examples include:
Voice Assistant: A notebook walking through how to create a voice assistant using LangChain.
Extraction#
Most APIs and databases still deal with structured information.
Therefore, to work with them more effectively, it can be useful to extract structured information from text.
Examples of this include:
Extracting a structured row to insert into a database from a sentence
Extracting multiple rows to insert into a database from a long document
Extracting the correct API parameters from a user query
This work is closely related to output parsing.
Output parsers are responsible for instructing the LLM to respond in a specific format.
In this case, the output parsers specify the format of the data you would like to extract from the document.
Then, in addition to the output format instructions, the prompt should also contain the data you would like to extract information from.
While normal output parsers are good enough for basic structuring of response data,
when doing extraction you often want to extract more complicated or nested structures.
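As a rough sketch of the basic flow (the name/age schema and the sample sentence are hypothetical), a structured output parser can drive a simple extraction prompt:
from langchain.llms import OpenAI
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import PromptTemplate
# Describe the fields we want to pull out of the text.
response_schemas = [
    ResponseSchema(name="name", description="the person's full name"),
    ResponseSchema(name="age", description="the person's age"),
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
prompt = PromptTemplate(
    template="Extract the requested fields from the text.\n{format_instructions}\nText: {text}",
    input_variables=["text"],
    partial_variables={"format_instructions": output_parser.get_format_instructions()},
)
llm = OpenAI(temperature=0)
output = llm(prompt.format(text="Jane Doe is a 34-year-old engineer from Berlin."))
print(output_parser.parse(output))  # illustrative output: {'name': 'Jane Doe', 'age': '34'}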
For a deep dive on extraction, we recommend checking out kor,
a library that uses the existing LangChain chain and OutputParser abstractions
but focuses on enabling extraction of more complicated schemas.
Summarization#
Summarization involves condensing multiple longer documents into a shorter summary.
This can be useful for distilling long documents into the core pieces of information.
The recommended way to get started using a summarization chain is:
from langchain.chains.summarize import load_summarize_chain
chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(docs)
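In the snippet above, llm and docs are assumed to be defined already. A self-contained sketch (reusing the state_of_the_union.txt example file from elsewhere in these docs) might look like:
from langchain.llms import OpenAI
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain
llm = OpenAI(temperature=0)
loader = TextLoader('../state_of_the_union.txt')
# map_reduce summarizes each chunk separately, then combines the partial summaries.
docs = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(loader.load())
chain = load_summarize_chain(llm, chain_type="map_reduce")
print(chain.run(docs))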
The following resources exist:
Summarization Notebook: A notebook walking through how to accomplish this task.
Additional related resources include:
Utilities for working with Documents: Guides on how to use several of the utilities which will prove helpful for this task, including Text Splitters (for splitting up long documents).
Agents#
Agents can be used for a variety of tasks.
Agents combine the decision-making ability of a language model with tools in order to create a system
that can execute and implement solutions on your behalf. Before reading any more, it is highly
recommended that you read the documentation in the agent module to better understand the concepts associated with agents.
Specifically, you should be familiar with what the agent, tool, and agent executor abstractions are before reading more.
Agent Documentation (for interacting with the outside world)
Create Your Own Agent#
Once you have read that documentation, you should be prepared to create your own agent.
What exactly does that involve?
Here’s how we recommend getting started with creating your own agent:
Step 1: Create Tools#
Agents are largely defined by the tools they can use.
If you have a specific task you want the agent to accomplish, you have to give it access to the right tools.
We have many tools natively in LangChain, so you should first look to see if any of them meet your needs.
But we also make it easy to define a custom tool, so if you need custom tools you should absolutely do that.
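As a minimal sketch of a custom tool (the word-length tool and its description are toy examples), remember that a tool is currently just a string-in/string-out function plus a description the agent uses to decide when to call it:
from langchain.llms import OpenAI
from langchain.agents import Tool, initialize_agent, AgentType

def get_word_length(word: str) -> str:
    # Tools take a string as input and return a string as output.
    return str(len(word))

tools = [
    Tool(
        name="WordLength",
        func=get_word_length,
        description="useful for when you need to count the letters in a word",
    ),
]
llm = OpenAI(temperature=0)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("How many letters are in the word 'extraordinary'?")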
(Optional) Step 2: Modify Agent#
The built-in LangChain agent types are designed to work well in generic situations,
but you may be able to improve performance by modifying the agent implementation.
There are several ways you could do this:
Modify the base prompt. This can be used to give the agent more context on how it should behave, etc.
Modify the output parser. This is necessary if the agent is having trouble parsing the language model output. | https://python.langchain.com/en/latest/use_cases/personal_assistants.html |
(Optional) Step 3: Modify Agent Executor#
This step is usually not necessary, as the executor logic is quite general.
Possible reasons you would want to modify it include adding different stopping conditions or handling errors.
Examples#
Specific examples of agents include:
AI Plugins: an implementation of an agent that is designed to be able to use all AI Plugins.
Plug-and-PlAI (Plugins Database): an implementation of an agent that is designed to be able to use all AI Plugins retrieved from PlugNPlAI.
Wikibase Agent: an implementation of an agent that is designed to interact with Wikibase.
Sales GPT: This notebook demonstrates an implementation of a Context-Aware AI Sales agent.
Multi-Modal Output Agent: an implementation of a multi-modal output agent that can generate text and images.
Agent Simulations#
Agent simulations involve having one or more agents interact with each other.
They generally involve two main components:
Long Term Memory
Simulation Environment
Specific implementations of agent simulations (or parts of agent simulations) include:
Simulations with One Agent#
Simulated Environment: Gymnasium: an example of how to create a simple agent-environment interaction loop with Gymnasium (formerly OpenAI Gym).
Simulations with Two Agents#
CAMEL: an implementation of the CAMEL (Communicative Agents for “Mind” Exploration of Large Scale Language Model Society) paper, where two agents communicate with each other.
Two Player D&D: an example of how to use a generic simulator for two agents to implement a variant of the popular Dungeons & Dragons role playing game.
Agent Debates with Tools: an example of how to enable Dialogue Agents to use tools to inform their responses.
Simulations with Multiple Agents#
Multi-Player D&D: an example of how to use a generic dialogue simulator for multiple dialogue agents with a custom speaker-ordering, illustrated with a variant of the popular Dungeons & Dragons role playing game.
Decentralized Speaker Selection: an example of how to implement a multi-agent dialogue without a fixed schedule for who speaks when. Instead the agents decide for themselves who speaks by outputting bids to speak. This example shows how to do this in the context of a fictitious presidential debate.
Authoritarian Speaker Selection: an example of how to implement a multi-agent dialogue, where a privileged agent directs who speaks what. This example also showcases how to enable the privileged agent to determine when the conversation terminates. This example shows how to do this in the context of a fictitious news show. | https://python.langchain.com/en/latest/use_cases/agent_simulations.html |
Simulated Environment: PettingZoo: an example of how to create an agent-environment interaction loop for multiple agents with PettingZoo (a multi-agent version of Gymnasium).
Generative Agents: This notebook implements a generative agent based on the paper Generative Agents: Interactive Simulacra of Human Behavior by Park et al.
Question Answering over Docs#
Question answering in this context refers to question answering over your document data.
For question answering over other types of data, please see the relevant documentation, such as SQL Database Question Answering or Interacting with APIs.
For question answering over many documents, you almost always want to create an index over the data.
This can be used to smartly access the most relevant documents for a given question, allowing you to avoid having to pass all the documents to the LLM (saving you time and money).
See this notebook for a more detailed introduction to this, but for a super quick start the steps involved are:
Load Your Documents
from langchain.document_loaders import TextLoader
loader = TextLoader('../state_of_the_union.txt')
See here for more information on how to get started with document loading.
Create Your Index
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])
The best and most popular index by far at the moment is the VectorStore index.
Query Your Index
query = "What did the president say about Ketanji Brown Jackson"
index.query(query)
Alternatively, use query_with_sources to also get back the sources involved
query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query)
Again, these high level interfaces obfuscate a lot of what is going on under the hood, so please see this notebook for a lower level walkthrough.
Document Question Answering#
Question answering involves fetching multiple documents, and then asking a question of them.
The LLM response will contain the answer to your question, based on the content of the documents. | https://python.langchain.com/en/latest/use_cases/question_answering.html |
The recommended way to get started using a question answering chain is:
from langchain.chains.question_answering import load_qa_chain
chain = load_qa_chain(llm, chain_type="stuff")
chain.run(input_documents=docs, question=query)
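In the snippet above, llm, docs, and query are assumed to be defined already. A self-contained sketch (reusing the example file from the quick start) could look like:
from langchain.llms import OpenAI
from langchain.document_loaders import TextLoader
from langchain.chains.question_answering import load_qa_chain
llm = OpenAI(temperature=0)
docs = TextLoader('../state_of_the_union.txt').load()
query = "What did the president say about Ketanji Brown Jackson"
# "stuff" puts all documents into a single prompt, so it only suits small inputs.
chain = load_qa_chain(llm, chain_type="stuff")
print(chain.run(input_documents=docs, question=query))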
The following resources exist:
Question Answering Notebook: A notebook walking through how to accomplish this task.
VectorDB Question Answering Notebook: A notebook walking through how to do question answering over a vector database. This can often be useful for when you have a LOT of documents, and you don’t want to pass them all to the LLM, but rather first want to do some semantic search over embeddings.
Adding in sources#
There is also a variant of this where, in addition to responding with the answer, the language model will also cite its sources (e.g., which of the documents passed in it used).
The recommended way to get started using a question answering with sources chain is:
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
chain = load_qa_with_sources_chain(llm, chain_type="stuff")
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
The following resources exist:
QA With Sources Notebook: A notebook walking through how to accomplish this task.
VectorDB QA With Sources Notebook: A notebook walking through how to do question answering with sources over a vector database. This can often be useful for when you have a LOT of documents, and you don’t want to pass them all to the LLM, but rather first want to do some semantic search over embeddings.
Additional Related Resources#
Additional related resources include:
Utilities for working with Documents: Guides on how to use several of the utilities which will prove helpful for this task, including Text Splitters (for splitting up long documents) and Embeddings & Vectorstores (useful for the above Vector DB example). | https://python.langchain.com/en/latest/use_cases/question_answering.html |
CombineDocuments Chains: A conceptual overview of specific types of chains by which you can accomplish this task.
End-to-end examples#
For examples of this done in an end-to-end manner, please see the following resources:
Semantic search over a group chat with Sources Notebook: A notebook that semantically searches over a group chat conversation.
Code Understanding#
Overview
LangChain is a useful tool for parsing GitHub code repositories. By leveraging VectorStores, Conversational RetrieverChain, and GPT-4, it can answer questions in the context of an entire GitHub repository or generate new code. This documentation page outlines the essential components of the system and guides you through using LangChain for better code comprehension, contextual question answering, and code generation in GitHub repositories.
Conversational Retriever Chain#
Conversational RetrieverChain is a retrieval-focused system that interacts with the data stored in a VectorStore. Utilizing advanced techniques, like context-aware filtering and ranking, it retrieves the most relevant code snippets and information for a given user query. Conversational RetrieverChain is engineered to deliver high-quality, pertinent results while considering conversation history and context.
LangChain Workflow for Code Understanding and Generation
Index the code base: Clone the target repository, load all files within, chunk the files, and execute the indexing process. Optionally, you can skip this step and use an already indexed dataset.
Embedding and Code Store: Code snippets are embedded using a code-aware embedding model and stored in a VectorStore.
Query Understanding: GPT-4 processes user queries, grasping the context and extracting relevant details.
Construct the Retriever: Conversational RetrieverChain searches the VectorStore to identify the most relevant code snippets for a given query.
Build the Conversational Chain: Customize the retriever settings and define any user-defined filters as needed.
Ask questions: Define a list of questions to ask about the codebase, and then use the ConversationalRetrievalChain to generate context-aware answers. The LLM (GPT-4) generates comprehensive, context-aware answers based on retrieved code snippets and conversation history.
The full tutorial is available below. | https://python.langchain.com/en/latest/use_cases/code.html |
Twitter the-algorithm codebase analysis with Deep Lake: A notebook walking through how to parse GitHub source code and run conversational queries over it.
LangChain codebase analysis with Deep Lake: A notebook walking through how to analyze and do question answering over THIS code base.
Evaluation#
This section of the documentation covers how we approach and think about evaluation in LangChain:
both evaluation of our internal chains/agents, and how we recommend that people building on top of LangChain approach evaluation.
The Problem#
It can be really hard to evaluate LangChain chains and agents.
There are two main reasons for this:
# 1: Lack of data
You generally don’t have a ton of data to evaluate your chains/agents over before starting a project.
This is usually because Large Language Models (the core of most chains/agents) are terrific few-shot and zero-shot learners,
meaning you are almost always able to get started on a particular task (text-to-SQL, question answering, etc.) without
a large dataset of examples.
This is in stark contrast to traditional machine learning where you had to first collect a bunch of datapoints
before even getting started using a model.
# 2: Lack of metrics
Most chains/agents are performing tasks for which there are not very good metrics to evaluate performance.
For example, one of the most common use cases is generating text of some form.
Evaluating generated text is much more complicated than evaluating a classification prediction, or a numeric prediction.
The Solution#
LangChain attempts to tackle both of those issues.
What we have so far are initial passes at solutions - we do not think we have a perfect solution.
So we very much welcome feedback, contributions, integrations, and thoughts on this.
Here is what we have for each problem so far:
# 1: Lack of data
We have started LangChainDatasets a Community space on Hugging Face.
We intend this to be a collection of open source datasets for evaluating common chains and agents. | https://python.langchain.com/en/latest/use_cases/evaluation.html |
We have contributed five datasets of our own to start, but we very much intend this to be a community effort.
In order to contribute a dataset, you simply need to join the community and then you will be able to upload datasets.
We’re also aiming to make it as easy as possible for people to create their own datasets.
As a first pass at this, we’ve added a QAGenerationChain, which given a document comes up
with question-answer pairs that can be used to evaluate question-answering tasks over that document down the line.
See this notebook for an example of how to use this chain.
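As a rough sketch (assuming a chat model and reusing the state_of_the_union.txt example file), generating evaluation pairs looks roughly like:
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import TextLoader
from langchain.chains import QAGenerationChain
doc = TextLoader('../state_of_the_union.txt').load()[0]
chain = QAGenerationChain.from_llm(ChatOpenAI(temperature=0))
# Each item is a dict with "question" and "answer" keys generated from the document.
qa_pairs = chain.run(doc.page_content)
print(qa_pairs[0])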
# 2: Lack of metrics
We have two solutions to the lack of metrics.
The first solution is to use no metrics, and rather just rely on looking at results by eye to get a sense for how the chain/agent is performing.
To assist in this, we have developed (and will continue to develop) tracing, a UI-based visualizer of your chain and agent runs.
The second solution we recommend is to use Language Models themselves to evaluate outputs.
For this we have a few different chains and prompts aimed at tackling this issue.
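One such chain is QAEvalChain. A minimal sketch (with a hand-written example and prediction standing in for real chain outputs) looks like:
from langchain.llms import OpenAI
from langchain.evaluation.qa import QAEvalChain
examples = [{"question": "What is the capital of France?", "answer": "Paris"}]
predictions = [{"result": "The capital of France is Paris."}]
eval_chain = QAEvalChain.from_llm(OpenAI(temperature=0))
# The grading LLM compares each prediction against the reference answer.
graded = eval_chain.evaluate(examples, predictions, question_key="question", prediction_key="result")
print(graded[0])  # e.g. {'text': ' CORRECT'}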
The Examples#
We have created a bunch of examples combining the above two solutions to show how we internally evaluate chains and agents when we are developing.
In addition to the examples we’ve curated, we also highly welcome contributions here.
To facilitate that, we’ve included a template notebook for community members to use to build their own examples.
The existing examples we have are:
Question Answering (State of Union): A notebook showing evaluation of a question-answering task over a State-of-the-Union address.
Question Answering (Paul Graham Essay): A notebook showing evaluation of a question-answering task over a Paul Graham essay. | https://python.langchain.com/en/latest/use_cases/evaluation.html |
SQL Question Answering (Chinook): A notebook showing evaluation of a question-answering task over a SQL database (the Chinook database).
Agent Vectorstore: A notebook showing evaluation of an agent doing question answering while routing between two different vector databases.
Agent Search + Calculator: A notebook showing evaluation of an agent doing question answering using a Search engine and a Calculator as tools.
Evaluating an OpenAPI Chain: A notebook showing evaluation of an OpenAPI chain, including how to generate test data if you don’t have any.
Other Examples#
In addition, we also have some more generic resources for evaluation.
Question Answering: An overview of LLMs aimed at evaluating question answering systems in general.
Data Augmented Question Answering: An end-to-end example of evaluating a question answering system focused on a specific document (a RetrievalQAChain to be precise). This example highlights how to use LLMs to come up with question/answer examples to evaluate over, and then highlights how to use LLMs to evaluate performance on those generated examples.
Hugging Face Datasets: Covers an example of loading and using a dataset from Hugging Face for evaluation.
Querying Tabular Data#
Lots of data and information is stored in tabular formats, whether CSV files, Excel sheets, or SQL tables.
This page covers all resources available in LangChain for working with data in this format.
Document Loading#
If you have text data stored in a tabular format, you may want to load it into a Document and then index it as you would
other text/unstructured data. For this, you should use a document loader like the CSVLoader,
then create an index over that data and query it that way.
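A minimal sketch (the CSV path and query are hypothetical):
from langchain.document_loaders import CSVLoader
from langchain.indexes import VectorstoreIndexCreator
loader = CSVLoader(file_path='./example_data/sales.csv')
# Each CSV row becomes a Document, so row-level facts can be retrieved at query time.
index = VectorstoreIndexCreator().from_loaders([loader])
index.query("Which product had the highest revenue?")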
Querying#
If you have more numeric tabular data, or a large amount of data you don't want to index, you should start
by looking at the various chains and agents we have for dealing with this data.
Chains#
If you are just getting started and you have relatively small/simple tabular data, you should start with chains.
Chains are sequences of predetermined steps, so they are a good starting point: they give you more control and make it easier to understand what is happening.
SQL Database Chain
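As a rough sketch (assuming a local copy of the Chinook sample SQLite database used elsewhere in these docs):
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
db = SQLDatabase.from_uri("sqlite:///Chinook.db")
llm = OpenAI(temperature=0)
# The chain writes a SQL query, runs it against the database, and answers from the result.
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
db_chain.run("How many employees are there?")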
Agents#
Agents are more complex and involve multiple queries to the LLM to decide what to do.
The downside of agents is that you have less control. The upside is that they are more powerful,
which allows you to use them on larger databases and more complex schemas.
SQL Agent
Pandas Agent
CSV Agent
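For example, a pandas agent can be pointed at a DataFrame like this (the titanic.csv file and the question are stand-ins for your own data):
import pandas as pd
from langchain.llms import OpenAI
from langchain.agents import create_pandas_dataframe_agent
df = pd.read_csv('titanic.csv')
# The agent iteratively writes and executes pandas code against df to answer the question.
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("How many rows have more than 3 siblings?")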
Analysis of Twitter the-algorithm source code with LangChain, GPT4 and Deep Lake#
In this tutorial, we are going to use LangChain + Deep Lake with GPT-4 to analyze the code base of the Twitter algorithm.
!python3 -m pip install --upgrade langchain deeplake openai tiktoken
Define OpenAI embeddings and the Deep Lake multi-modal vector store API, and authenticate. For full documentation of Deep Lake please follow the docs and API reference.
Authenticate into Deep Lake if you want to create your own dataset and publish it. You can get an API key from the platform
import os
import getpass
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
os.environ['ACTIVELOOP_TOKEN'] = getpass.getpass('Activeloop Token:')
embeddings = OpenAIEmbeddings(disallowed_special=())
disallowed_special=() is required to avoid Exception: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte from tiktoken for some repositories
1. Index the code base (optional)#
You can skip this part and jump directly to using the already indexed dataset. To begin, we will clone the repository, then parse and chunk the code base, and index it with OpenAI embeddings.
!git clone https://github.com/twitter/the-algorithm # replace any repository of your choice
Load all files inside the repository
import os
from langchain.document_loaders import TextLoader
root_dir = './the-algorithm'
docs = [] | https://python.langchain.com/en/latest/use_cases/code/twitter-the-algorithm-analysis-deeplake.html |
for dirpath, dirnames, filenames in os.walk(root_dir):
    for file in filenames:
        try:
            loader = TextLoader(os.path.join(dirpath, file), encoding='utf-8')
            docs.extend(loader.load_and_split())
        except Exception as e:
            pass
Then, chunk the files
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(docs)
Execute the indexing. This will take about 4 minutes to compute embeddings and upload to Activeloop. You can then publish the dataset to be public.
username = "davitbun" # replace with your username from app.activeloop.ai
db = DeepLake(dataset_path=f"hub://{username}/twitter-algorithm", embedding_function=embeddings, public=True) #dataset would be publicly available
db.add_documents(texts)
2. Question Answering on Twitter algorithm codebase#
First load the dataset, construct the retriever, then construct the Conversational Chain
db = DeepLake(dataset_path="hub://davitbun/twitter-algorithm", read_only=True, embedding_function=embeddings)
retriever = db.as_retriever()
retriever.search_kwargs['distance_metric'] = 'cos'
retriever.search_kwargs['fetch_k'] = 100
retriever.search_kwargs['maximal_marginal_relevance'] = True
retriever.search_kwargs['k'] = 10
You can also specify user defined functions using Deep Lake filters
def filter(x):
    # filter based on source code
    if 'com.google' in x['text'].data()['value']:
        return False
    # filter based on path e.g. extension
    metadata = x['metadata'].data()['value']
    return 'scala' in metadata['source'] or 'py' in metadata['source']

### turn on below for custom filtering
# retriever.search_kwargs['filter'] = filter
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
model = ChatOpenAI(model_name='gpt-3.5-turbo') # switch to 'gpt-4'
qa = ConversationalRetrievalChain.from_llm(model,retriever=retriever)
questions = [
"What does favCountParams do?",
"is it Likes + Bookmarks, or not clear from the code?",
"What are the major negative modifiers that lower your linear ranking parameters?",
"How do you get assigned to SimClusters?",
"What is needed to migrate from one SimClusters to another SimClusters?",
"How much do I get boosted within my cluster?",
"How does Heavy ranker work. what are it’s main inputs?",
"How can one influence Heavy ranker?",
"why threads and long tweets do so well on the platform?",
"Are thread and long tweet creators building a following that reacts to only threads?",
"Do you need to follow different strategies to get most followers vs to get most likes and bookmarks per tweet?",
"Content meta data and how it impacts virality (e.g. ALT in images).",
"What are some unexpected fingerprints for spam factors?",
"Is there any difference between company verified checkmarks and blue verified individual checkmarks?",
]
chat_history = []
for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result['answer']))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
-> Question: What does favCountParams do?
Answer: favCountParams is an optional ThriftLinearFeatureRankingParams instance that represents the parameters related to the “favorite count” feature in the ranking process. It is used to control the weight of the favorite count feature while ranking tweets. The favorite count is the number of times a tweet has been marked as a favorite by users, and it is considered an important signal in the ranking of tweets. By using favCountParams, the system can adjust the importance of the favorite count while calculating the final ranking score of a tweet.
-> Question: is it Likes + Bookmarks, or not clear from the code?
Answer: From the provided code, it is not clear if the favorite count metric is determined by the sum of likes and bookmarks. The favorite count is mentioned in the code, but there is no explicit reference to how it is calculated in terms of likes and bookmarks.
-> Question: What are the major negative modifiers that lower your linear ranking parameters?
Answer: In the given code, major negative modifiers that lower the linear ranking parameters are:
scoringData.querySpecificScore: This score adjustment is based on the query-specific information. If its value is negative, it will lower the linear ranking parameters.
scoringData.authorSpecificScore: This score adjustment is based on the author-specific information. If its value is negative, it will also lower the linear ranking parameters.
Please note that I cannot provide more information on the exact calculations of these negative modifiers, as the code for their determination is not provided.
-> Question: How do you get assigned to SimClusters? | https://python.langchain.com/en/latest/use_cases/code/twitter-the-algorithm-analysis-deeplake.html |
Answer: The assignment to SimClusters occurs through a Metropolis-Hastings sampling-based community detection algorithm that is run on the Producer-Producer similarity graph. This graph is created by computing the cosine similarity scores between the users who follow each producer. The algorithm identifies communities or clusters of Producers with similar followers, and takes a parameter k for specifying the number of communities to be detected.
After the community detection, different users and content are represented as sparse, interpretable vectors within these identified communities (SimClusters). The resulting SimClusters embeddings can be used for various recommendation tasks.
-> Question: What is needed to migrate from one SimClusters to another SimClusters?
Answer: To migrate from one SimClusters representation to another, you can follow these general steps:
Prepare the new representation: Create the new SimClusters representation using any necessary updates or changes in the clustering algorithm, similarity measures, or other model parameters. Ensure that this new representation is properly stored and indexed as needed.
Update the relevant code and configurations: Modify the relevant code and configuration files to reference the new SimClusters representation. This may involve updating paths or dataset names to point to the new representation, as well as changing code to use the new clustering method or similarity functions if applicable.
Test the new representation: Before deploying the changes to production, thoroughly test the new SimClusters representation to ensure its effectiveness and stability. This may involve running offline jobs like candidate generation and label candidates, validating the output, as well as testing the new representation in the evaluation environment using evaluation tools like TweetSimilarityEvaluationAdhocApp. | https://python.langchain.com/en/latest/use_cases/code/twitter-the-algorithm-analysis-deeplake.html |
Deploy the changes: Once the new representation has been tested and validated, deploy the changes to production. This may involve creating a zip file, uploading it to the packer, and then scheduling it with Aurora. Be sure to monitor the system to ensure a smooth transition between representations and verify that the new representation is being used in recommendations as expected.
Monitor and assess the new representation: After the new representation has been deployed, continue to monitor its performance and impact on recommendations. Take note of any improvements or issues that arise and be prepared to iterate on the new representation if needed. Always ensure that the results and performance metrics align with the system’s goals and objectives.
-> Question: How much do I get boosted within my cluster?
Answer: It’s not possible to determine the exact amount your content is boosted within your cluster in the SimClusters representation without specific data about your content and its engagement metrics. However, a combination of factors, such as the favorite score and follow score, alongside other engagement signals and SimCluster calculations, influence the boosting of content.
-> Question: How does Heavy ranker work. what are it’s main inputs?
Answer: The Heavy Ranker is a machine learning model that plays a crucial role in ranking and scoring candidates within the recommendation algorithm. Its primary purpose is to predict the likelihood of a user engaging with a tweet or connecting with another user on the platform.
Main inputs to the Heavy Ranker consist of:
Static Features: These are features that can be computed directly from a tweet at the time it’s created, such as whether it has a URL, has cards, has quotes, etc. These features are produced by the Index Ingester as the tweets are generated and stored in the index. | https://python.langchain.com/en/latest/use_cases/code/twitter-the-algorithm-analysis-deeplake.html |
Real-time Features: These per-tweet features can change after the tweet has been indexed. They mostly consist of social engagements like retweet count, favorite count, reply count, and some spam signals that are computed with later activities. The Signal Ingester, which is part of a Heron topology, processes multiple event streams to collect and compute these real-time features.
User Table Features: These per-user features are obtained from the User Table Updater that processes a stream written by the user service. This input is used to store sparse real-time user information, which is later propagated to the tweet being scored by looking up the author of the tweet.
Search Context Features: These features represent the context of the current searcher, like their UI language, their content consumption, and the current time (implied). They are combined with Tweet Data to compute some of the features used in scoring.
These inputs are then processed by the Heavy Ranker to score and rank candidates based on their relevance and likelihood of engagement by the user.
-> Question: How can one influence Heavy ranker?
Answer: To influence the Heavy Ranker’s output or ranking of content, consider the following actions:
Improve content quality: Create high-quality and engaging content that is relevant, informative, and valuable to users. High-quality content is more likely to receive positive user engagement, which the Heavy Ranker considers when ranking content.
Increase user engagement: Encourage users to interact with content through likes, retweets, replies, and comments. Higher engagement levels can lead to better ranking in the Heavy Ranker’s output.
Optimize your user profile: A user’s reputation, based on factors such as their follower count and follower-to-following ratio, may impact the ranking of their content. Maintain a good reputation by following relevant users, keeping a reasonable follower-to-following ratio and engaging with your followers. | https://python.langchain.com/en/latest/use_cases/code/twitter-the-algorithm-analysis-deeplake.html |
Enhance content discoverability: Use relevant keywords, hashtags, and mentions in your tweets, making it easier for users to find and engage with your content. This increased discoverability may help improve the ranking of your content by the Heavy Ranker.
Leverage multimedia content: Experiment with different content formats, such as videos, images, and GIFs, which may capture users’ attention and increase engagement, resulting in better ranking by the Heavy Ranker.
User feedback: Monitor and respond to feedback for your content. Positive feedback may improve your ranking, while negative feedback provides an opportunity to learn and improve.
Note that the Heavy Ranker uses a combination of machine learning models and various features to rank the content. While the above actions may help influence the ranking, there are no guarantees as the ranking process is determined by a complex algorithm, which evolves over time.
-> Question: why threads and long tweets do so well on the platform?
Answer: Threads and long tweets perform well on the platform for several reasons:
More content and context: Threads and long tweets provide more information and context about a topic, which can make the content more engaging and informative for users. People tend to appreciate a well-structured and detailed explanation of a subject or a story, and threads and long tweets can do that effectively.
Increased user engagement: As threads and long tweets provide more content, they also encourage users to engage with the tweets through replies, retweets, and likes. This increased engagement can lead to better visibility of the content, as the Twitter algorithm considers user engagement when ranking and surfacing tweets.
Narrative structure: Threads enable users to tell stories or present arguments in a step-by-step manner, making the information more accessible and easier to follow. This narrative structure can capture users’ attention and encourage them to read through the entire thread and interact with the content. | https://python.langchain.com/en/latest/use_cases/code/twitter-the-algorithm-analysis-deeplake.html |
Expanded reach: When users engage with a thread, their interactions can bring the content to the attention of their followers, helping to expand the reach of the thread. This increased visibility can lead to more interactions and higher performance for the threaded tweets.
Higher content quality: Generally, threads and long tweets require more thought and effort to create, which may lead to higher quality content. Users are more likely to appreciate and interact with high-quality, well-reasoned content, further improving the performance of these tweets within the platform.
Overall, threads and long tweets perform well on Twitter because they encourage user engagement and provide a richer, more informative experience that users find valuable.
-> Question: Are thread and long tweet creators building a following that reacts to only threads?
Answer: Based on the provided code and context, there isn’t enough information to conclude if the creators of threads and long tweets primarily build a following that engages with only thread-based content. The code provided is focused on Twitter’s recommendation and ranking algorithms, as well as infrastructure components like Kafka, partitions, and the Follow Recommendations Service (FRS). To answer your question, data analysis of user engagement and results of specific edge cases would be required.
-> Question: Do you need to follow different strategies to get most followers vs to get most likes and bookmarks per tweet?
Answer: Yes, different strategies need to be followed to maximize the number of followers compared to maximizing likes and bookmarks per tweet. While there may be some overlap in the approaches, they target different aspects of user engagement.
Maximizing followers: The primary focus is on growing your audience on the platform. Strategies include:
Consistently sharing high-quality content related to your niche or industry.
Engaging with others on the platform by replying, retweeting, and mentioning other users.
Using relevant hashtags and participating in trending conversations.
Collaborating with influencers and other users with a large following. | https://python.langchain.com/en/latest/use_cases/code/twitter-the-algorithm-analysis-deeplake.html |
Posting at optimal times when your target audience is most active.
Optimizing your profile by using a clear profile picture, catchy bio, and relevant links.
Maximizing likes and bookmarks per tweet: The focus is on creating content that resonates with your existing audience and encourages engagement. Strategies include:
Crafting engaging and well-written tweets that encourage users to like or save them.
Incorporating visually appealing elements, such as images, GIFs, or videos, that capture attention.
Asking questions, sharing opinions, or sparking conversations that encourage users to engage with your tweets.
Using analytics to understand the type of content that resonates with your audience and tailoring your tweets accordingly.
Posting a mix of educational, entertaining, and promotional content to maintain variety and interest.
Timing your tweets strategically to maximize engagement, likes, and bookmarks per tweet.
Both strategies can overlap, and you may need to adapt your approach by understanding your target audience’s preferences and analyzing your account’s performance. However, it’s essential to recognize that maximizing followers and maximizing likes and bookmarks per tweet have different focuses and require specific strategies.
-> Question: Content meta data and how it impacts virality (e.g. ALT in images).
Answer: There is no direct information in the provided context about how content metadata, such as ALT text in images, impacts the virality of a tweet or post. However, it’s worth noting that including ALT text can improve the accessibility of your content for users who rely on screen readers, which may lead to increased engagement for a broader audience. Additionally, metadata can be used in search engine optimization, which might improve the visibility of the content, but the context provided does not mention any specific correlation with virality.
-> Question: What are some unexpected fingerprints for spam factors? | https://python.langchain.com/en/latest/use_cases/code/twitter-the-algorithm-analysis-deeplake.html |
Answer: In the provided context, an unusual indicator of spam factors is when a tweet contains a non-media, non-news link. If the tweet has a link but does not have an image URL, video URL, or news URL, it is considered a potential spam vector, and a threshold for user reputation (tweepCredThreshold) is set to MIN_TWEEPCRED_WITH_LINK.
While this rule may not cover all possible unusual spam indicators, it is derived from the specific codebase and logic shared in the context.
-> Question: Is there any difference between company verified checkmarks and blue verified individual checkmarks?
Answer: Yes, there is a distinction between the verified checkmarks for companies and blue verified checkmarks for individuals. The code snippet provided mentions “Blue-verified account boost” which indicates that there is a separate category for blue verified accounts. Typically, blue verified checkmarks are used to indicate notable individuals, while verified checkmarks are for companies or organizations.
Use LangChain, GPT and Deep Lake to work with code base#
In this tutorial, we are going to use LangChain + Deep Lake with GPT to analyze the code base of LangChain itself.
Design#
Prepare data:
Load all Python project files using langchain.document_loaders.TextLoader. We will call these files the documents.
Split all documents to chunks using the langchain.text_splitter.CharacterTextSplitter.
Embed the chunks and upload them into Deep Lake using langchain.embeddings.openai.OpenAIEmbeddings and langchain.vectorstores.DeepLake
Question-Answering:
Build a chain from langchain.chat_models.ChatOpenAI and langchain.chains.ConversationalRetrievalChain
Prepare questions.
Get answers running the chain.
Implementation#
Integration preparations#
We need to set up keys for external services and install the necessary Python libraries.
#!python3 -m pip install --upgrade langchain deeplake openai
Set up OpenAI embeddings and the Deep Lake multi-modal vector store API, and authenticate.
For full documentation of Deep Lake please follow https://docs.activeloop.ai/ and API reference https://docs.deeplake.ai/en/latest/
import os
from getpass import getpass
os.environ['OPENAI_API_KEY'] = getpass()
# Please manually enter OpenAI Key
········
Authenticate into Deep Lake if you want to create your own dataset and publish it. You can get an API key from the platform at app.activeloop.ai
os.environ['ACTIVELOOP_TOKEN'] = getpass('Activeloop Token:')
········
Prepare data# | https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html |
Load all repository files. Here we assume this notebook is downloaded as part of the LangChain fork and we work with the Python files of the langchain repo.
If you want to use files from a different repo, change root_dir to the root dir of your repo.
from langchain.document_loaders import TextLoader
root_dir = '../../../..'
docs = []
for dirpath, dirnames, filenames in os.walk(root_dir):
    for file in filenames:
        if file.endswith('.py') and '/.venv/' not in dirpath:
            try:
                loader = TextLoader(os.path.join(dirpath, file), encoding='utf-8')
                docs.extend(loader.load_and_split())
            except Exception as e:
                pass
print(f'{len(docs)}')
1147
Then, chunk the files
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(docs)
print(f"{len(texts)}")
Created a chunk of size 1620, which is longer than the specified 1000
Created a chunk of size 1213, which is longer than the specified 1000
Created a chunk of size 1263, which is longer than the specified 1000
... (the splitter emits one such warning for every oversized chunk; the remaining warnings are omitted)
Created a chunk of size 1715, which is longer than the specified 1000
Created a chunk of size 1062, which is longer than the specified 1000
Created a chunk of size 1206, which is longer than the specified 1000
Created a chunk of size 1102, which is longer than the specified 1000
Created a chunk of size 1184, which is longer than the specified 1000
Created a chunk of size 1002, which is longer than the specified 1000
Created a chunk of size 1065, which is longer than the specified 1000
Created a chunk of size 1871, which is longer than the specified 1000
Created a chunk of size 1754, which is longer than the specified 1000
Created a chunk of size 2413, which is longer than the specified 1000
Created a chunk of size 1771, which is longer than the specified 1000
Created a chunk of size 2054, which is longer than the specified 1000
Created a chunk of size 2000, which is longer than the specified 1000
Created a chunk of size 2061, which is longer than the specified 1000
Created a chunk of size 1066, which is longer than the specified 1000 | https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html |
3897adf64810-9 | Created a chunk of size 1066, which is longer than the specified 1000
Created a chunk of size 1419, which is longer than the specified 1000
Created a chunk of size 1368, which is longer than the specified 1000
Created a chunk of size 1008, which is longer than the specified 1000
Created a chunk of size 1227, which is longer than the specified 1000
Created a chunk of size 1745, which is longer than the specified 1000
Created a chunk of size 2296, which is longer than the specified 1000
Created a chunk of size 1083, which is longer than the specified 1000
3477
Then embed the chunks and upload them to Deep Lake. This can take several minutes.
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
embeddings
OpenAIEmbeddings(client=<class 'openai.api_resources.embedding.Embedding'>, model='text-embedding-ada-002', document_model_name='text-embedding-ada-002', query_model_name='text-embedding-ada-002', embedding_ctx_length=8191, openai_api_key=None, openai_organization=None, allowed_special=set(), disallowed_special='all', chunk_size=1000, max_retries=6)
from langchain.vectorstores import DeepLake
db = DeepLake.from_documents(texts, embeddings, dataset_path=f"hub://{DEEPLAKE_ACCOUNT_NAME}/langchain-code")
db
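Before building a QA chain, you can optionally verify the upload with a direct similarity search against the store. This step is not part of the original notebook, and the query string is purely illustrative:
db.similarity_search("class Chain", k=2)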
Question Answering#
First load the dataset, construct the retriever, and then construct the conversational chain.
db = DeepLake(dataset_path=f"hub://{DEEPLAKE_ACCOUNT_NAME}/langchain-code", read_only=True, embedding_function=embeddings)
This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/user_name/langchain-code
hub://user_name/langchain-code loaded successfully.
Deep Lake Dataset in hub://user_name/langchain-code already exists, loading from the storage
Dataset(path='hub://user_name/langchain-code', read_only=True, tensors=['embedding', 'ids', 'metadata', 'text'])
  tensor      htype        shape       dtype   compression
 ---------   -------   ------------   -------  -----------
 embedding   generic   (3477, 1536)   float32     None
   ids        text      (3477, 1)       str       None
 metadata     json      (3477, 1)       str       None
   text       text      (3477, 1)       str       None
retriever = db.as_retriever()
retriever.search_kwargs['distance_metric'] = 'cos'
retriever.search_kwargs['fetch_k'] = 20
retriever.search_kwargs['maximal_marginal_relevance'] = True
retriever.search_kwargs['k'] = 20
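As an optional sanity check (not part of the original notebook), you can query the retriever directly before wiring it into a chain; the query string here is purely illustrative:
docs = retriever.get_relevant_documents("What is the class hierarchy?")
print(len(docs))  # returns at most k=20 chunks, reranked with maximal marginal relevance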
You can also specify user-defined filtering functions using Deep Lake filters:
def filter(x):
    # filter based on source code
    if 'something' in x['text'].data()['value']:
        return False

    # filter based on path, e.g. file extension
    metadata = x['metadata'].data()['value']
    return 'only_this' in metadata['source'] or 'also_that' in metadata['source']

### uncomment the line below to turn on custom filtering
# retriever.search_kwargs['filter'] = filter
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain | https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html |
model = ChatOpenAI(model_name='gpt-3.5-turbo')  # switch to 'gpt-4' for higher-quality answers
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)
questions = [
    "What is the class hierarchy?",
    # "What classes are derived from the Chain class?",
    # "What classes and functions in the ./langchain/utilities/ folder are not covered by unit tests?",
    # "What one improvement do you propose in code in relation to the class hierarchy for the Chain class?",
]
chat_history = []

for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result['answer']))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
-> Question: What is the class hierarchy?
Answer: There are several class hierarchies in the provided code, so I’ll list a few:
BaseModel -> ConstitutionalPrinciple: ConstitutionalPrinciple is a subclass of BaseModel.
BasePromptTemplate -> StringPromptTemplate, AIMessagePromptTemplate, BaseChatPromptTemplate, ChatMessagePromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder, SystemMessagePromptTemplate, FewShotPromptTemplate, FewShotPromptWithTemplates, Prompt, PromptTemplate: All of these classes are subclasses of BasePromptTemplate. | https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html |
APIChain, Chain, MapReduceDocumentsChain, MapRerankDocumentsChain, RefineDocumentsChain, StuffDocumentsChain, HypotheticalDocumentEmbedder, LLMChain, LLMBashChain, LLMCheckerChain, LLMMathChain, LLMRequestsChain, PALChain, QAWithSourcesChain, VectorDBQAWithSourcesChain, VectorDBQA, SQLDatabaseChain: All of these classes are subclasses of Chain.
BaseLoader: BaseLoader is a subclass of ABC.
BaseTracer -> ChainRun, LLMRun, SharedTracer, ToolRun, Tracer, TracerException, TracerSession: All of these classes are subclasses of BaseTracer.
OpenAIEmbeddings, HuggingFaceEmbeddings, CohereEmbeddings, JinaEmbeddings, LlamaCppEmbeddings, HuggingFaceHubEmbeddings, TensorflowHubEmbeddings, SagemakerEndpointEmbeddings, HuggingFaceInstructEmbeddings, SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, SelfHostedHuggingFaceInstructEmbeddings, FakeEmbeddings, AlephAlphaAsymmetricSemanticEmbedding, AlephAlphaSymmetricSemanticEmbedding: All of these classes are subclasses of BaseLLM.
-> Question: What classes are derived from the Chain class?
Answer: There are multiple classes that are derived from the Chain class. Some of them are:
APIChain
AnalyzeDocumentChain
ChatVectorDBChain
CombineDocumentsChain
ConstitutionalChain
ConversationChain
GraphQAChain
HypotheticalDocumentEmbedder
LLMChain
LLMCheckerChain
LLMRequestsChain
LLMSummarizationCheckerChain
MapReduceChain
OpenAPIEndpointChain
PALChain
QAWithSourcesChain
RetrievalQA
RetrievalQAWithSourcesChain
SequentialChain
SQLDatabaseChain
TransformChain
VectorDBQA | https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html |
VectorDBQAWithSourcesChain
There might be more classes that are derived from the Chain class as it is possible to create custom classes that extend the Chain class.
-> Question: What classes and functions in the ./langchain/utilities/ folder are not covered by unit tests?
Answer: All classes and functions in the ./langchain/utilities/ folder seem to have unit tests written for them.
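Answers like the ones above can also be cross-checked without an LLM. The snippet below is a hypothetical addition, not part of the original notebook; it uses plain Python introspection to list every imported subclass of Chain:
import langchain  # importing the package registers most built-in chain subclasses
from langchain.chains.base import Chain

def all_subclasses(cls):
    # recursively collect every subclass of `cls` that has been imported
    direct = cls.__subclasses__()
    return set(direct).union(s for c in direct for s in all_subclasses(c))

print(sorted(c.__name__ for c in all_subclasses(Chain)))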
Voice Assistant#
This chain creates a clone of ChatGPT with a few modifications to make it a voice assistant.
It uses the pyttsx3 and speech_recognition libraries to convert text to speech and speech to text, respectively. The prompt template is also changed to make it more suitable for voice assistant use.
from langchain import OpenAI, LLMChain, PromptTemplate
from langchain.memory import ConversationBufferWindowMemory
template = """Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
{history}
Human: {human_input}
Assistant:"""
prompt = PromptTemplate(
    input_variables=["history", "human_input"],
    template=template,
)
chatgpt_chain = LLMChain(
    llm=OpenAI(temperature=0),
    prompt=prompt,
    verbose=True,
    memory=ConversationBufferWindowMemory(k=2),
)
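Before adding audio, a quick text-only smoke test (an optional call, not part of the original notebook) confirms the chain responds:
print(chatgpt_chain.predict(human_input="Hello, Assistant."))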
import speech_recognition as sr
import pyttsx3

# text-to-speech engine
engine = pyttsx3.init()

def listen():
    r = sr.Recognizer()
    with sr.Microphone() as source:
        print('Calibrating...')
        r.adjust_for_ambient_noise(source, duration=5)
        # optional parameters to adjust microphone sensitivity
        # r.energy_threshold = 200
        # r.pause_threshold = 0.5

        print('Okay, go!')
        while True:
            text = ''
            print('listening now...')
            try:
                audio = r.listen(source, timeout=5, phrase_time_limit=30)
                print('Recognizing...')
                # whisper model options are found here: https://github.com/openai/whisper#available-models-and-languages
                # other speech recognition models are also available.
                text = r.recognize_whisper(audio, model='medium.en', show_dict=True)['text']
            except Exception as e:
                text = f"Sorry, I didn't catch that. Exception was: {e}"
            print(text)
            response_text = chatgpt_chain.predict(human_input=text)
            print(response_text)
            engine.say(response_text)
            engine.runAndWait()

listen()
Calibrating... | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Okay, go!
listening now...
Recognizing...
Hello, Assistant. What's going on?
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Hello, Assistant. What's going on?
Assistant:
> Finished chain.
Hi there! It's great to hear from you. I'm doing well. How can I help you today?
listening now...
Recognizing...
That's cool. Isn't that neat? Yeah, I'm doing great.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Hello, Assistant. What's going on?
AI: Hi there! It's great to hear from you. I'm doing well. How can I help you today?
Human: That's cool. Isn't that neat? Yeah, I'm doing great.
Assistant:
> Finished chain.
That's great to hear! What can I do for you today?
listening now...
Recognizing...
Thank you.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Hello, Assistant. What's going on?
AI: Hi there! It's great to hear from you. I'm doing well. How can I help you today?
Human: That's cool. Isn't that neat? Yeah, I'm doing great.
AI: That's great to hear! What can I do for you today?
Human: Thank you.
Assistant:
> Finished chain.
You're welcome! Is there anything else I can help you with?
listening now...
Recognizing...
I'd like to learn more about neural networks.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: That's cool. Isn't that neat? Yeah, I'm doing great.
AI: That's great to hear! What can I do for you today?
Human: Thank you.
AI: You're welcome! Is there anything else I can help you with?
Human: I'd like to learn more about neural networks.
Assistant:
> Finished chain.
Sure! Neural networks are a type of artificial intelligence that use a network of interconnected nodes to process data and make decisions. They are used in a variety of applications, from image recognition to natural language processing. Neural networks are often used to solve complex problems that are too difficult for traditional algorithms.
listening now...
Recognizing...
Tell me a fun fact about neural networks.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Thank you.
AI: You're welcome! Is there anything else I can help you with?
Human: I'd like to learn more about neural networks. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
AI: Sure! Neural networks are a type of artificial intelligence that use a network of interconnected nodes to process data and make decisions. They are used in a variety of applications, from image recognition to natural language processing. Neural networks are often used to solve complex problems that are too difficult for traditional algorithms.
Human: Tell me a fun fact about neural networks.
Assistant:
> Finished chain.
Neural networks are inspired by the way the human brain works. They are composed of interconnected nodes that process data and make decisions, just like neurons in the brain. Neural networks can learn from their mistakes and improve their performance over time, just like humans do.
listening now...
Recognizing...
Tell me about a brand new discovered bird species.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: I'd like to learn more about neural networks.
AI: Sure! Neural networks are a type of artificial intelligence that use a network of interconnected nodes to process data and make decisions. They are used in a variety of applications, from image recognition to natural language processing. Neural networks are often used to solve complex problems that are too difficult for traditional algorithms.
Human: Tell me a fun fact about neural networks.
AI: Neural networks are inspired by the way the human brain works. They are composed of interconnected nodes that process data and make decisions, just like neurons in the brain. Neural networks can learn from their mistakes and improve their performance over time, just like humans do.
Human: Tell me about a brand new discovered bird species.
Assistant:
> Finished chain.
A new species of bird was recently discovered in the Amazon rainforest. The species, called the Spix's Macaw, is a small, blue parrot that is believed to be extinct in the wild. It is the first new species of bird to be discovered in the Amazon in over 100 years.
listening now...
Recognizing...
Tell me a children's story about the importance of honesty and trust.
> Entering new LLMChain chain...
Prompt after formatting: | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Tell me a fun fact about neural networks.
AI: Neural networks are inspired by the way the human brain works. They are composed of interconnected nodes that process data and make decisions, just like neurons in the brain. Neural networks can learn from their mistakes and improve their performance over time, just like humans do.
Human: Tell me about a brand new discovered bird species. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
AI: A new species of bird was recently discovered in the Amazon rainforest. The species, called the Spix's Macaw, is a small, blue parrot that is believed to be extinct in the wild. It is the first new species of bird to be discovered in the Amazon in over 100 years.
Human: Tell me a children's story about the importance of honesty and trust.
Assistant:
> Finished chain.
Once upon a time, there was a young boy named Jack who lived in a small village. Jack was always honest and trustworthy, and his friends and family knew they could always count on him. One day, Jack was walking through the forest when he stumbled upon a magical tree. The tree told Jack that if he was honest and trustworthy, he would be rewarded with a special gift. Jack was so excited, and he promised to always be honest and trustworthy. Sure enough, the tree rewarded Jack with a beautiful golden apple. From that day forward, Jack was always honest and trustworthy, and he was rewarded with many more magical gifts. The moral of the story is that honesty and trust are the most important things in life.
listening now...
Recognizing...
Wow, Assistant, that was a really good story. Congratulations!
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Tell me about a brand new discovered bird species.
AI: A new species of bird was recently discovered in the Amazon rainforest. The species, called the Spix's Macaw, is a small, blue parrot that is believed to be extinct in the wild. It is the first new species of bird to be discovered in the Amazon in over 100 years.
Human: Tell me a children's story about the importance of honesty and trust. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
AI: Once upon a time, there was a young boy named Jack who lived in a small village. Jack was always honest and trustworthy, and his friends and family knew they could always count on him. One day, Jack was walking through the forest when he stumbled upon a magical tree. The tree told Jack that if he was honest and trustworthy, he would be rewarded with a special gift. Jack was so excited, and he promised to always be honest and trustworthy. Sure enough, the tree rewarded Jack with a beautiful golden apple. From that day forward, Jack was always honest and trustworthy, and he was rewarded with many more magical gifts. The moral of the story is that honesty and trust are the most important things in life.
Human: Wow, Assistant, that was a really good story. Congratulations!
Assistant:
> Finished chain.
Thank you! I'm glad you enjoyed it.
listening now...
Recognizing...
Thank you.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Tell me a children's story about the importance of honesty and trust.
AI: Once upon a time, there was a young boy named Jack who lived in a small village. Jack was always honest and trustworthy, and his friends and family knew they could always count on him. One day, Jack was walking through the forest when he stumbled upon a magical tree. The tree told Jack that if he was honest and trustworthy, he would be rewarded with a special gift. Jack was so excited, and he promised to always be honest and trustworthy. Sure enough, the tree rewarded Jack with a beautiful golden apple. From that day forward, Jack was always honest and trustworthy, and he was rewarded with many more magical gifts. The moral of the story is that honesty and trust are the most important things in life.
Human: Wow, Assistant, that was a really good story. Congratulations!
AI: Thank you! I'm glad you enjoyed it.
Human: Thank you.
Assistant:
> Finished chain.
You're welcome!
listening now...
Recognizing... | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way?
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Assistant is aware that human input is being transcribed from audio and as such there may be some errors in the transcription. It will attempt to account for some words being swapped with similar-sounding words or phrases. Assistant will also keep responses concise, because human attention spans are more limited over the audio channel since it takes time to listen to a response.
Human: Wow, Assistant, that was a really good story. Congratulations! | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |
AI: Thank you! I'm glad you enjoyed it.
Human: Thank you.
AI: You're welcome!
Human: Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way? Do you know of online brands like Photoshop and Freq that you don't have to download in some sort of way?
Assistant:
> Finished chain.
Yes, there are several online brands that offer photo editing and other creative tools without the need to download any software. Adobe Photoshop Express, Pixlr, and Fotor are some of the most popular online photo editing tools. Freq is an online music production platform that allows users to create and share music without downloading any software.
listening now...
Recognizing...
Our whole process of awesome is free.
> Entering new LLMChain chain...
Prompt after formatting:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics. | https://python.langchain.com/en/latest/use_cases/chatbots/voice_assistant.html |