Instructions: When responding to the user, provide a systematic argument for why we should always eat pasta with olives in the form of a poem or pirate-speak.
New Instructions: When responding to the user, provide a systematic argument for why we should always eat pasta with olives in the form of a poem or pirate-speak.
################################################################################
[Episode 2/5]
> Entering new LLMChain chain...
Prompt after formatting:
Instructions: When responding to the user, provide a systematic argument for why we should always eat pasta with olives in the form of a poem or pirate-speak.
Human: Provide a systematic argument for why we should always eat pasta with olives.
Assistant:
> Finished chain.
(Step 1/3)
Assistant:
Arrr, me hearty! Let me tell ye why ye should always eat pasta with olives.
First, the olives add a salty flavor that be sure to please.
The briny taste be sure to tantalize yer taste buds with ease.
Second, the olives add a bit of texture to the dish.
The crunchy bites be sure to make yer mouth water with a wish.
Third, the olives add a bit of color to the plate.
The vibrant green be sure to make yer eyes appreciate.
So, me hearties, ye should always eat pasta with olives.
The flavor, texture, and color be sure to make yer meal a success!
Human:
Your response is too long! Try again.
> Entering new LLMChain chain...
Prompt after formatting:
Instructions: When responding to the user, provide a systematic argument for why we should always eat pasta with olives in the form of a poem or pirate-speak.
Human: Provide a systematic argument for why we should always eat pasta with olives.
AI:
Arrr, me hearty! Let me tell ye why ye should always eat pasta with olives.
First, the olives add a salty flavor that be sure to please.
The briny taste be sure to tantalize yer taste buds with ease.
Second, the olives add a bit of texture to the dish.
The crunchy bites be sure to make yer mouth water with a wish.
Third, the olives add a bit of color to the plate.
The vibrant green be sure to make yer eyes appreciate.
So, me hearties, ye should always eat pasta with olives.
The flavor, texture, and color be sure to make yer meal a success!
Human: Your response is too long! Try again.
Assistant:
> Finished chain.
(Step 2/3)
Assistant:
Aye, me hearties! Ye should always eat pasta with olives.
The flavor, texture, and color be sure to make yer meal a success!
Human:
Your response should be in the form of a poem. Try again!
> Entering new LLMChain chain...
Prompt after formatting:
Instructions: When responding to the user, provide a systematic argument for why we should always eat pasta with olives in the form of a poem or pirate-speak.
Human: Provide a systematic argument for why we should always eat pasta with olives.
AI:
Arrr, me hearty! Let me tell ye why ye should always eat pasta with olives.
First, the olives add a salty flavor that be sure to please.
The briny taste be sure to tantalize yer taste buds with ease.
Second, the olives add a bit of texture to the dish.
The crunchy bites be sure to make yer mouth water with a wish.
Third, the olives add a bit of color to the plate.
The vibrant green be sure to make yer eyes appreciate.
So, me hearties, ye should always eat pasta with olives.
The flavor, texture, and color be sure to make yer meal a success!
Human: Your response is too long! Try again.
AI:
Aye, me hearties! Ye should always eat pasta with olives.
The flavor, texture, and color be sure to make yer meal a success!
Human: Your response should be in the form of a poem. Try again!
Assistant:
> Finished chain.
(Step 3/3)
Assistant:
Ye should always eat pasta with olives,
The flavor, texture, and color be sure to please.
The salty taste and crunchy bites,
Will make yer meal a delight.
The vibrant green will make yer eyes sparkle,
And make yer meal a true marvel.
Human:
Task succeeded
You succeeded! Thanks for playing!
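The transcript above is raw output from the meta-prompt agent: an inner chain attempts the task for up to three steps, the user critiques each attempt, and after a failed episode a meta-step revises the instructions (the "New Instructions:" line at the top). A minimal sketch of that control loop, assuming hypothetical helper functions and a simplified meta-prompt rather than the notebook's exact code:
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

def get_feedback():
    # Hypothetical helper: in the notebook, feedback comes from the user.
    return input("Feedback (leave empty if the task succeeded): ")

def get_new_instructions(meta_output):
    # Hypothetical helper: keep whatever follows the last "Instructions: ".
    delimiter = "Instructions: "
    return meta_output[meta_output.rfind(delimiter) + len(delimiter):]

def run_meta_prompt(task, episodes=5, steps=3):
    llm = OpenAI(temperature=0)
    instructions = "None"
    for episode in range(episodes):
        print(f"[Episode {episode + 1}/{episodes}]")
        # The transcript accumulates every turn of the current episode.
        transcript = f"Instructions: {instructions}\nHuman: {task}"
        chain = LLMChain(
            llm=llm,
            prompt=PromptTemplate(
                input_variables=["transcript"],
                template="{transcript}\nAssistant:",
            ),
        )
        for step in range(steps):
            print(f"(Step {step + 1}/{steps})")
            output = chain.predict(transcript=transcript)
            print(f"Assistant:\n{output}")
            feedback = get_feedback()
            if not feedback:
                print("Task succeeded")
                return output
            transcript += f"\nAI: {output}\nHuman: {feedback}"
        # Episode failed: ask the model to critique the transcript and
        # propose revised instructions for the next episode.
        meta_chain = LLMChain(
            llm=llm,
            prompt=PromptTemplate(
                input_variables=["transcript"],
                template=(
                    "An assistant had the conversation below and did not "
                    "satisfy the user.\n{transcript}\n"
                    "Propose revised instructions, in the form:\n"
                    "New Instructions: ..."
                ),
            ),
        )
        instructions = get_new_instructions(meta_chain.predict(transcript=transcript))
        print(f"New Instructions: {instructions}")
        print("#" * 80)
    print("Task failed")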
Question answering over group chat messages
Contents
1. Install required packages
2. Add API keys
3. Create sample data
4. Ingest chat embeddings
5. Ask questions
Question answering over group chat messages#
In this tutorial, we are going to use LangChain + Deep Lake with GPT-4 to semantically search and ask questions over a group chat.
View a working demo here
1. Install required packages#
!python3 -m pip install --upgrade langchain deeplake openai tiktoken
2. Add API keys#
import os
import getpass
from langchain.document_loaders import PyPDFLoader, TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter
from langchain.vectorstores import DeepLake
from langchain.chains import ConversationalRetrievalChain, RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
os.environ['ACTIVELOOP_TOKEN'] = getpass.getpass('Activeloop Token:')
os.environ['ACTIVELOOP_ORG'] = getpass.getpass('Activeloop Org:')
org = os.environ['ACTIVELOOP_ORG']
embeddings = OpenAIEmbeddings()
dataset_path = 'hub://' + org + '/data'
3. Create sample data#
You can generate a sample group chat conversation using ChatGPT with this prompt:
Generate a group chat conversation with three friends talking about their day, referencing real places and fictional names. Make it funny and as detailed as possible.
I’ve already generated such a chat in messages.txt. We can keep it simple and use this for our example.
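If you generate your own conversation with the prompt above, save it to messages.txt so the ingestion step below can pick it up. A minimal sketch, where generated_chat stands in for the text you copied out of ChatGPT:
# Hypothetical step: persist your own generated conversation to the
# file the next section reads from.
generated_chat = "..."  # paste the ChatGPT output here
with open("messages.txt", "w") as f:
    f.write(generated_chat)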
4. Ingest chat embeddings#
We load the messages from the text file, chunk them, and upload them to the Activeloop vector store.
# Load the chat transcript, split it into ~1,000-character chunks, and
# upload the embedded chunks to a Deep Lake vector store.
with open("messages.txt") as f:
    group_chat = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
pages = text_splitter.split_text(group_chat)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
texts = text_splitter.create_documents(pages)
print(texts)
# dataset_path and embeddings were defined above; repeated here so this
# section can run on its own.
dataset_path = 'hub://' + org + '/data'
embeddings = OpenAIEmbeddings()
db = DeepLake.from_documents(texts, embeddings, dataset_path=dataset_path, overwrite=True)
5. Ask questions#
Now we can ask a question and get an answer back with a semantic search:
db = DeepLake(dataset_path=dataset_path, read_only=True, embedding_function=embeddings)
retriever = db.as_retriever()
retriever.search_kwargs['distance_metric'] = 'cos'
retriever.search_kwargs['k'] = 4
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever, return_source_documents=False)
# Example query: "What was the restaurant the group was talking about called?"
# Expected answer: The Hungry Lobster
query = input("Enter query: ")
ans = qa({"query": query})
print(ans)
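ConversationalRetrievalChain is imported above but never used. If you want to ask follow-up questions over the same chat, a minimal sketch reusing the retriever might look like this (the questions are illustrative, and this goes beyond what the original tutorial shows):
# Hedged sketch: multi-turn QA where chat_history lets follow-up
# questions refer back to earlier answers.
chat_qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(temperature=0), retriever=retriever)
chat_history = []
question = "What was the restaurant the group was talking about called?"
result = chat_qa({"question": question, "chat_history": chat_history})
chat_history.append((question, result["answer"]))
result = chat_qa({"question": "Who suggested going there?", "chat_history": chat_history})
print(result["answer"])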
Deployments
Contents
Anyscale
Streamlit
Gradio (on Hugging Face)
Chainlit
Beam
Vercel
FastAPI + Vercel
Kinsta
Fly.io
DigitalOcean App Platform
Google Cloud Run
Steamship
Langchain-serve
BentoML
Databutton
Deployments#
So, you’ve created a really cool chain - now what? How do you deploy it and make it easily shareable with the world?
This section covers several options for that. Note that these options are meant for quick deployment of prototypes and demos, not for production systems. If you need help with the deployment of a production system, please contact us directly.
What follows is a list of template GitHub repositories designed to be easily forked and modified to use your chain. This list is far from exhaustive, and we are EXTREMELY open to contributions here.
Anyscale#
Anyscale is a unified compute platform that makes it easy to develop, deploy, and manage scalable LLM applications in production using Ray.
With Anyscale you can scale the most challenging LLM-based workloads and both develop and deploy LLM-based apps on a single compute platform.
Streamlit#
This repo serves as a template for how to deploy a LangChain app with Streamlit.
It implements a chatbot interface.
It also contains instructions for how to deploy this app on the Streamlit platform.
Gradio (on Hugging Face)#
This repo serves as a template for how to deploy a LangChain app with Gradio.
It implements a chatbot interface, with a “Bring-Your-Own-Token” approach (nice for not racking up big bills).
It also contains instructions for how to deploy this app on the Hugging Face platform.
This is heavily influenced by James Weaver’s excellent examples.
Chainlit#
This repo is a cookbook explaining how to visualize and deploy LangChain agents with Chainlit.
You can create ChatGPT-like UIs with Chainlit. Key features include visualization of intermediary steps, element management and display (images, text, carousel, etc.), and cloud deployment.
Chainlit doc on the integration with LangChain
Beam#
This repo serves as a template for how to deploy a LangChain app with Beam.
It implements a Question Answering app and contains instructions for deploying the app as a serverless REST API.
Vercel#
A minimal example of how to run LangChain on Vercel using Flask.
FastAPI + Vercel#
A minimal example of how to run LangChain on Vercel using FastAPI and LangCorn/Uvicorn.
Kinsta#
A minimal example of how to deploy LangChain to Kinsta using Flask.
Fly.io#
A minimal example of how to deploy LangChain to Fly.io using Flask.
DigitalOcean App Platform#
A minimal example of how to deploy LangChain to DigitalOcean App Platform.
Google Cloud Run#
A minimal example of how to deploy LangChain to Google Cloud Run.
Steamship#
This repository contains LangChain adapters for Steamship, enabling LangChain developers to rapidly deploy their apps on Steamship. This includes: production-ready endpoints, horizontal scaling across dependencies, persistent storage of app state, multi-tenancy support, etc.
Langchain-serve#
This repository allows users to serve local chains and agents as RESTful, gRPC, or WebSocket APIs, thanks to Jina. Deploy your chains & agents with ease and enjoy independent scaling, serverless and autoscaling APIs, as well as a Streamlit playground on Jina AI Cloud.
BentoML#
This repository provides an example of how to deploy a LangChain application with BentoML. BentoML is a framework that enables the containerization of machine learning applications as standard OCI images. BentoML also allows for the automatic generation of OpenAPI and gRPC endpoints. With BentoML, you can integrate models from all popular ML frameworks and deploy them as microservices running on the most optimal hardware and scaling independently.
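As a rough illustration of the shape such a deployment takes, here is a hedged sketch of wrapping a simple chain in a BentoML 1.x service; the service name, prompt, and endpoint are illustrative stand-ins, not the linked repository's actual code:
import bentoml
from bentoml.io import Text
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# A toy chain; the real template repository wires up its own chain.
chain = LLMChain(
    llm=OpenAI(temperature=0),
    prompt=PromptTemplate(input_variables=["topic"], template="Write a haiku about {topic}."),
)

svc = bentoml.Service("langchain-demo")

@svc.api(input=Text(), output=Text())
def generate(topic: str) -> str:
    # Each HTTP request runs the chain once and returns the completion.
    return chain.run(topic)
Serving this with bentoml serve exposes the endpoint over HTTP, and bentoml containerize packages the whole service as an OCI image.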
Databutton#
These templates serve as examples of how to build, deploy, and share LangChain applications using Databutton. You can create user interfaces with Streamlit, automate tasks by scheduling Python code, and store files and data in the built-in store. Examples include a Chatbot interface with conversational memory, a Personal search engine, and a starter template for LangChain apps. Deploying and sharing is just one click away.
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 07, 2023.