> Finished chain.
Memory: Add State to Chains and Agents#
So far, all the chains and agents we've gone through have been stateless. But often, you may want a chain or agent to have some concept of "memory" so that it can remember information about its previous interactions. The clearest and simplest example of this is when designing a chatbot - you want it to remember previous messages so it can use that context to have a better conversation. This would be a type of "short-term memory". On the more complex side, you could imagine a chain/agent remembering key pieces of information over time - this would be a form of "long-term memory". For more concrete ideas on the latter, see this awesome paper.
LangChain provides several specially created chains just for this purpose. This notebook walks through using one of those chains (the ConversationChain) with two different types of memory.
By default, the ConversationChain has a simple type of memory that remembers all previous inputs/outputs and adds them to the context that is passed. Let's take a look at using this chain (setting verbose=True so we can see the prompt).
from langchain import OpenAI, ConversationChain
llm = OpenAI(temperature=0)
conversation = ConversationChain(llm=llm, verbose=True)
output = conversation.predict(input="Hi there!")
print(output)
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI:
> Finished chain.
' Hello! How are you today?'
output = conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
print(output)
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI: Hello! How are you today?
Human: I'm doing well! Just having a conversation with an AI.
AI:
> Finished chain.
" That's great! What would you like to talk about?"
Building a Language Model Application: Chat Models#
Similarly, you can use chat models instead of LLMs. Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.
Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
Get Message Completions from a Chat Model#
You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage – ChatMessage takes in an arbitrary role parameter. Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI(temperature=0)
You can get completions by passing in a single message.
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
You can also pass in multiple messages for OpenAI's gpt-3.5-turbo and gpt-4 models.
messages = [
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="I love programming.")
]
chat(messages)
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter:
batch_messages = [
[
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="I love programming.")
],
[
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="I love artificial intelligence.")
],
]
result = chat.generate(batch_messages)
result
# -> LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}})
You can recover things like token usage from this LLMResult:
result.llm_output['token_usage']
# -> {'prompt_tokens': 57, 'completion_tokens': 20, 'total_tokens': 77}
Chat Prompt Templates#
Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.
For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
chat = ChatOpenAI(temperature=0)
template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
Chains with Chat Models#
The LLMChain discussed in the above section can be used with chat models as well:
from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
chat = ChatOpenAI(temperature=0)
template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
# -> "J'aime programmer."
Agents with Chat Models#
Agents can also be used with chat models; you can initialize one using AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION as the agent type.
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
# First, let's load the language model we're going to use to control the agent.
chat = ChatOpenAI(temperature=0)
# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Now let's test it out!
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
> Entering new AgentExecutor chain...
Thought: I need to use a search engine to find Olivia Wilde's boyfriend and a calculator to raise his age to the 0.23 power.
Action:
{
"action": "Search",
"action_input": "Olivia Wilde boyfriend"
}
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought:I need to use a search engine to find Harry Styles' current age.
Action:
{
"action": "Search",
"action_input": "Harry Styles age"
}
Observation: 29 years
Thought:Now I need to calculate 29 raised to the 0.23 power.
Action:
{
"action": "Calculator",
"action_input": "29^0.23"
}
Observation: Answer: 2.169459462491557
Thought:I now know the final answer.
Final Answer: 2.169459462491557
> Finished chain.
'2.169459462491557'
Memory: Add State to Chains and Agents#
You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.
from langchain.prompts import (
ChatPromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate
)
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
prompt = ChatPromptTemplate.from_messages([
SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."),
MessagesPlaceholder(variable_name="history"),
HumanMessagePromptTemplate.from_template("{input}")
])
llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)
conversation.predict(input="Hi there!")
# -> 'Hello! How can I assist you today?'
conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
# -> "That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?"
conversation.predict(input="Tell me about yourself.")
# -> "Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?"
Tutorials#
⛓ icon marks a new addition [last update 2023-05-15]
DeepLearning.AI course#
⛓ LangChain for LLM Application Development by Harrison Chase presented by Andrew Ng
Handbook#
LangChain AI Handbook By James Briggs and Francisco Ingham
Tutorials#
LangChain Tutorials by Edrick:
⛓ LangChain, Chroma DB, OpenAI Beginner Guide | ChatGPT with your PDF
⛓ LangChain 101: The Complete Beginner's Guide
LangChain Crash Course: Build an AutoGPT app in 25 minutes by Nicholas Renotte
LangChain Crash Course - Build apps with language models by Patrick Loeber
LangChain Explained in 13 Minutes | QuickStart Tutorial for Beginners by Rabbitmetrics
LangChain for Gen AI and LLMs by James Briggs:
#1 Getting Started with GPT-3 vs. Open Source LLMs
#2 Prompt Templates for GPT 3.5 and other LLMs
#3 LLM Chains using GPT 3.5 and other LLMs
#4 Chatbot Memory for Chat-GPT, Davinci + other LLMs
#5 Chat with OpenAI in LangChain
⛓ #6 Fixing LLM Hallucinations with Retrieval Augmentation in LangChain
⛓ #7 LangChain Agents Deep Dive with GPT 3.5
⛓ #8 Create Custom Tools for Chatbots in LangChain
⛓ #9 Build Conversational Agents with Vector DBs
LangChain 101 by Data Independent:
What Is LangChain? - LangChain + ChatGPT Overview
Quickstart Guide
Beginner Guide To 7 Essential Concepts
OpenAI + Wolfram Alpha
Ask Questions On Your Custom (or Private) Files
Connect Google Drive Files To OpenAI
YouTube Transcripts + OpenAI
Question A 300 Page Book (w/ OpenAI + Pinecone)
Workaround OpenAI's Token Limit With Chain Types
Build Your Own OpenAI + LangChain Web App in 23 Minutes
Working With The New ChatGPT API
OpenAI + LangChain Wrote Me 100 Custom Sales Emails
Structured Output From OpenAI (Clean Dirty Data)
Connect OpenAI To +5,000 Tools (LangChain + Zapier)
Use LLMs To Extract Data From Text (Expert Mode)
⛓ Extract Insights From Interview Transcripts Using LLMs
⛓ 5 Levels Of LLM Summarizing: Novice to Expert
LangChain How to and guides by Sam Witteveen:
LangChain Basics - LLMs & PromptTemplates with Colab
LangChain Basics - Tools and Chains
ChatGPT API Announcement & Code Walkthrough with LangChain
Conversations with Memory (explanation & code walkthrough)
Chat with Flan20B
Using Hugging Face Models locally (code walkthrough)
PAL : Program-aided Language Models with LangChain code
Building a Summarization System with LangChain and GPT-3 - Part 1
Building a Summarization System with LangChain and GPT-3 - Part 2
Microsoft's Visual ChatGPT using LangChain
LangChain Agents - Joining Tools and Chains with Decisions
Comparing LLMs with LangChain
Using Constitutional AI in LangChain
Talking to Alpaca with LangChain - Creating an Alpaca Chatbot
Talk to your CSV & Excel with LangChain
BabyAGI: Discover the Power of Task-Driven Autonomous Agents!
Improve your BabyAGI with LangChain
⛓ Master PDF Chat with LangChain - Your essential guide to queries on documents
⛓ Using LangChain with DuckDuckGO Wikipedia & PythonREPL Tools
⛓ Building Custom Tools and Agents with LangChain (gpt-3.5-turbo)
⛓ LangChain Retrieval QA Over Multiple Files with ChromaDB
⛓ LangChain Retrieval QA with Instructor Embeddings & ChromaDB for PDFs
⛓ LangChain + Retrieval Local LLMs for Retrieval QA - No OpenAI!!!
LangChain by Prompt Engineering:
LangChain Crash Course β All You Need to Know to Build Powerful Apps with LLMs
Working with MULTIPLE PDF Files in LangChain: ChatGPT for your Data
ChatGPT for YOUR OWN PDF files with LangChain
Talk to YOUR DATA without OpenAI APIs: LangChain
CHATGPT For WEBSITES: Custom ChatBOT
LangChain by Chat with data
LangChain Beginner's Tutorial for Typescript/Javascript
GPT-4 Tutorial: How to Chat With Multiple PDF Files (~1000 pages of Tesla's 10-K Annual Reports)
GPT-4 & LangChain Tutorial: How to Chat With A 56-Page PDF Document (w/Pinecone)
⛓ LangChain & Supabase Tutorial: How to Build a ChatGPT Chatbot For Your Website
Get SH*T Done with Prompt Engineering and LangChain by Venelin Valkov
Getting Started with LangChain: Load Custom Data, Run OpenAI Models, Embeddings and ChatGPT
Loaders, Indexes & Vectorstores in LangChain: Question Answering on PDF files with ChatGPT
LangChain Models: ChatGPT, Flan Alpaca, OpenAI Embeddings, Prompt Templates & Streaming
LangChain Chains: Use ChatGPT to Build Conversational Agents, Summaries and Q&A on Text With LLMs
Analyze Custom CSV Data with GPT-4 using Langchain
⛓ Build ChatGPT Chatbots with LangChain Memory: Understanding and Implementing Memory in Conversations
⛓ icon marks a new addition [last update 2023-05-15]
Concepts#
These are concepts and terminology commonly used when developing LLM applications.
Each entry references the external papers or sources where the concept was first introduced,
as well as the places in LangChain where the concept is used.
Chain of Thought#
Chain of Thought (CoT) is a prompting technique used to encourage the model to generate a series of intermediate reasoning steps.
A less formal way to induce this behavior is to include "Let's think step-by-step" in the prompt.
Chain-of-Thought Paper
Step-by-Step Paper
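As a rough sketch in LangChain (the instruction wording and the question are illustrative, not prescribed by the papers above):
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(temperature=0)
# Appending a step-by-step instruction nudges the model to show intermediate reasoning
prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer the following question.\n\nQuestion: {question}\n\nLet's think step-by-step.",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("If I have 3 apples and buy 2 more, how many apples do I have?"))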
Action Plan Generation#
Action Plan Generation is a prompting technique that uses a language model to generate actions to take.
The results of these actions can then be fed back into the language model to generate a subsequent action.
WebGPT Paper
SayCan Paper
ReAct#
ReAct is a prompting technique that combines Chain-of-Thought prompting with action plan generation.
This induces the model to think about what action to take, then take it.
Paper
LangChain Example
Self-ask#
Self-ask is a prompting method that builds on top of chain-of-thought prompting.
In this method, the model explicitly asks itself follow-up questions, which are then answered by an external search engine.
Paper
LangChain Example
Prompt Chaining#
Prompt Chaining is the practice of combining multiple LLM calls, with the output of one step becoming the input to the next.
PromptChainer Paper
Language Model Cascades
ICE Primer Book
Socratic Models
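A minimal sketch of prompt chaining with LangChain's SimpleSequentialChain (the prompts and product are made up for illustration):
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.chains import SimpleSequentialChain

llm = OpenAI(temperature=0.7)
# Step 1: propose a company name for a product
name_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
))
# Step 2: write a slogan for whatever name step 1 produced
slogan_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["company_name"],
    template="Write a catchy slogan for a company called {company_name}.",
))
overall_chain = SimpleSequentialChain(chains=[name_chain, slogan_chain], verbose=True)
print(overall_chain.run("colorful socks"))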
Memetic Proxy#
Memetic Proxy is encouraging the LLM to respond in a certain way by framing the discussion
in a context that the model knows of and that will result in that type of response.
For example, framing the prompt as a conversation between a student and a teacher.
Paper
Self Consistency#
Self Consistency is a decoding strategy that samples a diverse set of reasoning paths and then selects the most consistent answer.
It is most effective when combined with Chain-of-Thought prompting.
Paper
Inception#
Inception is also called First Person Instruction.
It encourages the model to think a certain way by including the start of the model's response in the prompt.
Example
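A minimal sketch (the prompt text is purely illustrative):
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
# Seed the start of the model's own response so it continues in that voice and format
prompt = (
    "Q: What are three benefits of unit testing?\n"
    "A: Sure, here are three benefits:\n1."
)
print(llm(prompt))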
MemPrompt#
MemPrompt maintains a memory of errors and user feedback, and uses them to prevent repetition of mistakes.
Paper
Installation#
Official Releases#
LangChain is available on PyPI, so it is easily installed with:
pip install langchain
That will install the bare minimum requirements of LangChain.
A lot of the value of LangChain comes when integrating it with various model providers, datastores, etc.
By default, the dependencies needed to do that are NOT installed.
However, there are two other ways to install LangChain that do bring in those dependencies.
To install modules needed for the common LLM providers, run:
pip install langchain[llms]
To install all modules needed for all integrations, run:
pip install langchain[all]
Note that if you are using zsh, you'll need to quote square brackets when passing them as an argument to a command, for example:
pip install 'langchain[all]'
Installing from source#
If you want to install from source, you can do so by cloning the repo and running:
pip install -e .
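For example, assuming the repository is hosted at the URL below (check the project page for the canonical location), the full sequence is roughly:
git clone https://github.com/hwchase17/langchain.git
cd langchain
pip install -e .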
Models#
LangChain provides interfaces and integrations for a number of different types of models.
LLMs
Chat Models
Embeddings
Prompts#
The reference guides here all relate to objects for working with Prompts.
PromptTemplates
Example Selector
Output Parsers
Indexes#
Indexes refer to ways to structure documents so that LLMs can best interact with them.
LangChain has a number of modules that help you load, structure, store, and retrieve documents.
Docstore
Text Splitter
Document Loaders
Vector Stores
Retrievers
Document Compressors
Document Transformers
Agents#
Reference guide for Agents and associated abstractions.
Agents
Tools
Agent Toolkits
Output Parsers#
pydantic model langchain.output_parsers.CommaSeparatedListOutputParser[source]#
Parse out comma separated lists.
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → List[str][source]#
Parse the output of an LLM call.
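A minimal usage sketch (the completion string being parsed is made up for illustration):
from langchain.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
# Normally embedded in the prompt so the model knows the expected format
format_instructions = parser.get_format_instructions()
# Parse a hypothetical model completion into a Python list
print(parser.parse("red, green, blue"))
# -> ['red', 'green', 'blue']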
pydantic model langchain.output_parsers.DatetimeOutputParser[source]#
field format: str = '%Y-%m-%dT%H:%M:%S.%fZ'#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(response: str) → datetime.datetime[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
pydantic model langchain.output_parsers.GuardrailsOutputParser[source]#
field guard: Any = None#
classmethod from_rail(rail_file: str, num_reasks: int = 1) → langchain.output_parsers.rail_parser.GuardrailsOutputParser[source]#
classmethod from_rail_string(rail_str: str, num_reasks: int = 1) → langchain.output_parsers.rail_parser.GuardrailsOutputParser[source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → Dict[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
pydantic model langchain.output_parsers.ListOutputParser[source]#
Class to parse the output of an LLM call to a list.
abstract parse(text: str) → List[str][source]#
Parse the output of an LLM call.
pydantic model langchain.output_parsers.OutputFixingParser[source]#
Wraps a parser and tries to fix parsing errors.
field parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T] [Required]#
field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'instructions'], output_parser=None, partial_variables={}, template='Instructions:\n--------------\n{instructions}\n--------------\nCompletion:\n--------------\n{completion}\n--------------\n\nAbove, the Completion did not satisfy the constraints given in the Instructions.\nError:\n--------------\n{error}\n--------------\n\nPlease try again. Please only respond with an answer that satisfies the constraints laid out in the Instructions:', template_format='f-string', validate_template=True)) β langchain.output_parsers.fix.OutputFixingParser[langchain.output_parsers.fix.T][source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(completion: str) → langchain.output_parsers.fix.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
pydantic model langchain.output_parsers.PydanticOutputParser[source]#
field pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → langchain.output_parsers.pydantic.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
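A short usage sketch (the Joke schema and the completion string are illustrative):
from pydantic import BaseModel, Field
from langchain.output_parsers import PydanticOutputParser

class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

parser = PydanticOutputParser(pydantic_object=Joke)
# get_format_instructions() is normally appended to the prompt
format_instructions = parser.get_format_instructions()
# Parse a hypothetical completion that follows those instructions
joke = parser.parse('{"setup": "Why did the chicken cross the road?", "punchline": "To get to the other side."}')
print(joke.punchline)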
pydantic model langchain.output_parsers.RegexDictParser[source]#
Class to parse the output into a dictionary.
field no_update_value: Optional[str] = None#
field output_key_to_format: Dict[str, str] [Required]#
field regex_pattern: str = "{}:\\s?([^.'\\n']*)\\.?"#
parse(text: str) → Dict[str, str][source]#
Parse the output of an LLM call.
pydantic model langchain.output_parsers.RegexParser[source]#
Class to parse the output into a dictionary.
field default_output_key: Optional[str] = None#
field output_keys: List[str] [Required]#
field regex: str [Required]#
parse(text: str) → Dict[str, str][source]#
Parse the output of an LLM call.
pydantic model langchain.output_parsers.ResponseSchema[source]#
field description: str [Required]#
field name: str [Required]#
field type: str = 'string'#
pydantic model langchain.output_parsers.RetryOutputParser[source]#
Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt and the completion to another
LLM, and telling it the completion did not satisfy criteria in the prompt.
field parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]#
field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nPlease try again:', template_format='f-string', validate_template=True)) β langchain.output_parsers.retry.RetryOutputParser[langchain.output_parsers.retry.T][source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(completion: str) → langchain.output_parsers.retry.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
parse_with_prompt(completion: str, prompt_value: langchain.schema.PromptValue) → langchain.output_parsers.retry.T[source]#
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – output of language model
prompt – prompt value
Returns
structured output
pydantic model langchain.output_parsers.RetryWithErrorOutputParser[source]#
Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt, the completion, AND the error
that was raised to another language model and telling it that the completion
did not work, and raised the given error. Differs from RetryOutputParser
in that this implementation provides the error that was raised back to the
LLM, which in theory should give it more information on how to fix it.
field parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]#
field retry_chain: langchain.chains.llm.LLMChain [Required]#
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T], prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['completion', 'error', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nDetails: {error}\nPlease try again:', template_format='f-string', validate_template=True)) β langchain.output_parsers.retry.RetryWithErrorOutputParser[langchain.output_parsers.retry.T][source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(completion: str) → langchain.output_parsers.retry.T[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
parse_with_prompt(completion: str, prompt_value: langchain.schema.PromptValue) → langchain.output_parsers.retry.T[source]#
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – output of language model
prompt – prompt value
Returns
structured output
pydantic model langchain.output_parsers.StructuredOutputParser[source]#
field response_schemas: List[langchain.output_parsers.structured.ResponseSchema] [Required]#
classmethod from_response_schemas(response_schemas: List[langchain.output_parsers.structured.ResponseSchema]) → langchain.output_parsers.structured.StructuredOutputParser[source]#
get_format_instructions() → str[source]#
Instructions on how the LLM output should be formatted.
parse(text: str) → Any[source]#
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text – output of language model
Returns
structured output
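A brief usage sketch for StructuredOutputParser with ResponseSchema (the schema and the completion are illustrative):
from langchain.output_parsers import StructuredOutputParser, ResponseSchema

schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(name="source", description="source used to answer the question"),
]
parser = StructuredOutputParser.from_response_schemas(schemas)
# These instructions are normally injected into the prompt
format_instructions = parser.get_format_instructions()
# Parse a hypothetical completion that follows the instructions (a fenced JSON block)
completion = '```json\n{"answer": "Paris", "source": "https://en.wikipedia.org/wiki/Paris"}\n```'
print(parser.parse(completion))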
Example Selector#
Logic for selecting examples to include in prompts.
pydantic model langchain.prompts.example_selector.LengthBasedExampleSelector[source]#
Select examples based on length.
Validators
calculate_example_text_lengths » example_text_lengths
field example_prompt: langchain.prompts.prompt.PromptTemplate [Required]#
Prompt template used to format the examples.
field examples: List[dict] [Required]#
A list of the examples that the prompt template expects.
field get_text_length: Callable[[str], int] = <function _get_length_based>#
Function to measure prompt length. Defaults to word count.
field max_length: int = 2048#
Max length for the prompt, beyond which examples are cut.
add_example(example: Dict[str, str]) → None[source]#
Add new example to list.
select_examples(input_variables: Dict[str, str]) → List[dict][source]#
Select which examples to use based on the input lengths.
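A rough usage sketch with a FewShotPromptTemplate (the toy antonym examples are illustrative):
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
example_selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=25,  # examples are dropped once the formatted prompt exceeds this length
)
dynamic_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input.",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
print(dynamic_prompt.format(adjective="big"))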
pydantic model langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector[source]#
ExampleSelector that selects examples based on Max Marginal Relevance.
This was shown to improve performance in this paper:
https://arxiv.org/pdf/2211.13892.pdf
field fetch_k: int = 20#
Number of examples to fetch to rerank.
classmethod from_examples(examples: List[dict], embeddings: langchain.embeddings.base.Embeddings, vectorstore_cls: Type[langchain.vectorstores.base.VectorStore], k: int = 4, input_keys: Optional[List[str]] = None, fetch_k: int = 20, **vectorstore_cls_kwargs: Any) β langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector[source]#
Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples – List of examples to use in the prompt.
embeddings – An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls – A vector store DB interface class, e.g. FAISS.
k – Number of examples to select
input_keys – If provided, the search is based on the input variables
instead of all variables.
vectorstore_cls_kwargs – optional kwargs containing url for vector store
Returns
The ExampleSelector instantiated, backed by a vector store.
select_examples(input_variables: Dict[str, str]) → List[dict][source]#
Select which examples to use based on semantic similarity.
pydantic model langchain.prompts.example_selector.SemanticSimilarityExampleSelector[source]#
Example selector that selects examples based on SemanticSimilarity.
field example_keys: Optional[List[str]] = None#
Optional keys to filter examples to.
field input_keys: Optional[List[str]] = None#
Optional keys to filter input to. If provided, the search is based on
the input variables instead of all variables.
field k: int = 4#
Number of examples to select.
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
VectorStore that contains information about examples.
add_example(example: Dict[str, str]) → str[source]#
Add new example to vectorstore.
classmethod from_examples(examples: List[dict], embeddings: langchain.embeddings.base.Embeddings, vectorstore_cls: Type[langchain.vectorstores.base.VectorStore], k: int = 4, input_keys: Optional[List[str]] = None, **vectorstore_cls_kwargs: Any) β langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector[source]#
Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples – List of examples to use in the prompt.
embeddings – An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls – A vector store DB interface class, e.g. FAISS.
k – Number of examples to select
input_keys – If provided, the search is based on the input variables
instead of all variables.
vectorstore_cls_kwargs – optional kwargs containing url for vector store
Returns
The ExampleSelector instantiated, backed by a vector store.
select_examples(input_variables: Dict[str, str]) → List[dict][source]#
Select which examples to use based on semantic similarity.
Vector Stores#
Wrappers on top of vector stores.
class langchain.vectorstores.AnalyticDB(connection_string: str, embedding_function: langchain.embeddings.base.Embeddings, collection_name: str = 'langchain', collection_metadata: Optional[dict] = None, pre_delete_collection: bool = False, logger: Optional[logging.Logger] = None)[source]#
VectorStore implementation using AnalyticDB.
AnalyticDB is a distributed, full PostgreSQL-syntax, cloud-native database.
- connection_string is a Postgres connection string.
- embedding_function is any embedding function implementing the
langchain.embeddings.base.Embeddings interface.
- collection_name is the name of the collection to use. (default: langchain)
NOTE: This is not the name of the table, but the name of the collection. The tables will be created when initializing the store (if they do not exist),
so make sure the user has the right permissions to create tables.
- pre_delete_collection: if True, will delete the collection if it exists. (default: False)
Useful for testing.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
connect() → sqlalchemy.engine.base.Connection[source]#
classmethod connection_string_from_db_params(driver: str, host: str, port: int, database: str, user: str, password: str) → str[source]#
Return connection string from database parameters.
create_collection() → None[source]#
create_tables_if_not_exists() → None[source]#
delete_collection() → None[source]#
drop_tables() → None[source]#
classmethod from_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, collection_name: str = 'langchain', ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) β langchain.vectorstores.analyticdb.AnalyticDB[source]#
Return VectorStore initialized from documents and embeddings.
Postgres connection string is required
Either pass it as a parameter
or set the PGVECTOR_CONNECTION_STRING environment variable.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'langchain', ids: Optional[List[str]] = None, pre_delete_collection: bool = False, **kwargs: Any) β langchain.vectorstores.analyticdb.AnalyticDB[source]#
Return VectorStore initialized from texts and embeddings.
Postgres connection string is required
Either pass it as a parameter
or set the PGVECTOR_CONNECTION_STRING environment variable.
get_collection(session: sqlalchemy.orm.session.Session) → Optional[langchain.vectorstores.analyticdb.CollectionStore][source]#
classmethod get_connection_string(kwargs: Dict[str, Any]) → str[source]#
similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Run similarity search with AnalyticDB with distance.
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query vector.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, filter: Optional[dict] = None) β List[Tuple[langchain.schema.Document, float]][source]#
class langchain.vectorstores.Annoy(embedding_function: Callable, index: Any, metric: str, docstore: langchain.docstore.base.Docstore, index_to_docstore_id: Dict[int, str])[source]#
Wrapper around Annoy vector database.
To use, you should have the annoy python package installed.
Example
from langchain import Annoy
db = Annoy(embedding_function, index, docstore, index_to_docstore_id)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
kwargs β vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, metric: str = 'angular', trees: int = 100, n_jobs: int = - 1, **kwargs: Any) β langchain.vectorstores.annoy.Annoy[source]#
Construct Annoy wrapper from embeddings.
Parameters
text_embeddings – List of tuples of (text, embedding)
embedding – Embedding function to use.
metadatas – List of metadata dictionaries to associate with documents.
metric – Metric to use for indexing. Defaults to "angular".
trees – Number of trees to use for indexing. Defaults to 100.
n_jobs – Number of jobs to use for indexing. Defaults to -1.
This is a user-friendly interface that:
Creates an in memory docstore with provided embeddings
Initializes the Annoy database
This is intended to be a quick way to get started.
Example
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
db = Annoy.from_embeddings(text_embedding_pairs, embeddings)
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, metric: str = 'angular', trees: int = 100, n_jobs: int = - 1, **kwargs: Any) β langchain.vectorstores.annoy.Annoy[source]#
Construct Annoy wrapper from raw documents.
Parameters
texts – List of documents to index.
embedding – Embedding function to use.
metadatas – List of metadata dictionaries to associate with documents.
metric – Metric to use for indexing. Defaults to "angular".
trees – Number of trees to use for indexing. Defaults to 100.
n_jobs – Number of jobs to use for indexing. Defaults to -1.
This is a user-friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the Annoy database
This is intended to be a quick way to get started.
Example
from langchain import Annoy
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
index = Annoy.from_texts(texts, embeddings)
classmethod load_local(folder_path: str, embeddings: langchain.embeddings.base.Embeddings) → langchain.vectorstores.annoy.Annoy[source]#
Load Annoy index, docstore, and index_to_docstore_id from disk.
Parameters
folder_path – folder path to load index, docstore,
and index_to_docstore_id from.
embeddings – Embeddings to use when generating queries.
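A sketch of saving and reloading an Annoy index (the folder path and texts are illustrative):
from langchain.vectorstores import Annoy
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
db = Annoy.from_texts(["hello world", "goodbye world"], embeddings)
db.save_local("my_annoy_index")
# Later, reload the index together with its docstore
db2 = Annoy.load_local("my_annoy_index", embeddings)
print(db2.similarity_search("hello", k=1))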
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding β Embedding to look up documents similar to.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
k β Number of Documents to return. Defaults to 4.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
process_index_results(idxs: List[int], dists: List[float]) → List[Tuple[langchain.schema.Document, float]][source]#
Turns Annoy results into a list of documents and scores.
Parameters
idxs β List of indices of the documents in the index.
dists β List of distances of the documents in the index.
Returns
List of Documents and scores.
save_local(folder_path: str, prefault: bool = False) β None[source]#
Save Annoy index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path β folder path to save index, docstore,
and index_to_docstore_id to.
prefault β Whether to pre-load the index into memory.
similarity_search(query: str, k: int = 4, search_k: int = - 1, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
search_k β inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query.
similarity_search_by_index(docstore_index: int, k: int = 4, search_k: int = - 1, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to docstore_index.
Parameters
docstore_index β Index of document in docstore
k β Number of Documents to return. Defaults to 4.
search_k β inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the embedding.
similarity_search_by_vector(embedding: List[float], k: int = 4, search_k: int = - 1, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
search_k β inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the embedding.
similarity_search_with_score(query: str, k: int = 4, search_k: int = - 1) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
search_k β inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_index(docstore_index: int, k: int = 4, search_k: int = - 1) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
search_k – inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, search_k: int = - 1) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
search_k β inspect up to search_k nodes which defaults
to n_trees * n if not provided
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.AtlasDB(name: str, embedding_function: Optional[langchain.embeddings.base.Embeddings] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False)[source]#
Wrapper around Atlas: Nomic's neural database and rhizomatic instrument.
To use, you should have the nomic python package installed.
Example
from langchain.vectorstores import AtlasDB
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = AtlasDB("my_project", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, refresh: bool = True, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) β Optional list of metadatas.
ids (Optional[List[str]]) β An optional list of ids.
refresh (bool) β Whether or not to refresh indices with the updated data.
Default True.
Returns
List of IDs of the added texts.
Return type
List[str]
create_index(**kwargs: Any) β Any[source]#
Creates an index in your project.
See
https://docs.nomic.ai/atlas_api.html#nomic.project.AtlasProject.create_index
for full detail.
classmethod from_documents(documents: List[langchain.schema.Document], embedding: Optional[langchain.embeddings.base.Embeddings] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, persist_directory: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) β langchain.vectorstores.atlas.AtlasDB[source]#
Create an AtlasDB vectorstore from a list of documents.
Parameters
name (str) – Name of the collection to create.
api_key (str) – Your Nomic API key.
documents (List[Document]) – List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
ids (Optional[List[str]]) – Optional list of document IDs. If None,
ids will be auto created
description (str) – A description for your project.
is_public (bool) – Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool) – Whether to reset this project if
it already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]) – Dict of kwargs for index creation.
See https://docs.nomic.ai/atlas_api.html
Returns
Nomic's neural database and finest rhizomatic instrument
Return type
AtlasDB
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) β langchain.vectorstores.atlas.AtlasDB[source]#
Create an AtlasDB vectorstore from raw documents.
Parameters
texts (List[str]) – The list of texts to ingest.
name (str) – Name of the project to create.
api_key (str) – Your Nomic API key.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – Optional list of document IDs. If None,
ids will be auto created
description (str) – A description for your project.
is_public (bool) – Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool) – Whether to reset this project if it
already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]) – Dict of kwargs for index creation.
See https://docs.nomic.ai/atlas_api.html
Returns
Nomic's neural database and finest rhizomatic instrument
Return type
AtlasDB
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Run similarity search with AtlasDB
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
Returns
List of documents most similar to the query text.
Return type
List[Document]
class langchain.vectorstores.Chroma(collection_name: str = 'langchain', embedding_function: Optional[Embeddings] = None, persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, collection_metadata: Optional[Dict] = None, client: Optional[chromadb.Client] = None)[source]#
Wrapper around ChromaDB embeddings platform.
To use, you should have the chromadb python package installed.
Example
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Chroma("langchain_store", embeddings)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) β Optional list of metadatas.
ids (Optional[List[str]], optional) β Optional list of IDs.
Returns
List of IDs of the added texts.
Return type
List[str]
delete_collection() β None[source]#
Delete the collection.
classmethod from_documents(documents: List[Document], embedding: Optional[Embeddings] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, client: Optional[chromadb.Client] = None, **kwargs: Any) β Chroma[source]#
Create a Chroma vectorstore from a list of documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
collection_name (str) β Name of the collection to create.
persist_directory (Optional[str]) β Directory to persist the collection.
ids (Optional[List[str]]) β List of document IDs. Defaults to None.
documents (List[Document]) β List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) β Embedding function. Defaults to None.
client_settings (Optional[chromadb.config.Settings]) β Chroma client settings
Returns
Chroma vectorstore.
Return type
Chroma
classmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, client: Optional[chromadb.Client] = None, **kwargs: Any) β Chroma[source]#
Create a Chroma vectorstore from raw documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
texts (List[str]) β List of texts to add to the collection.
collection_name (str) β Name of the collection to create.
persist_directory (Optional[str]) β Directory to persist the collection.
embedding (Optional[Embeddings]) β Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) β List of metadatas. Defaults to None.
ids (Optional[List[str]]) β List of document IDs. Defaults to None.
client_settings (Optional[chromadb.config.Settings]) β Chroma client settings
Returns
Chroma vectorstore.
Return type
Chroma
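As an illustrative sketch only (not from the official example set; assumes an OpenAI API key is configured and the sample texts are placeholders), a persisted collection can be built from raw texts like this:
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
texts = ["Chroma stores embeddings locally", "LangChain wraps many vector stores"]  # placeholder data
vectorstore = Chroma.from_texts(
    texts,
    embedding=OpenAIEmbeddings(),
    collection_name="langchain",
    persist_directory="./chroma_db",  # omit to keep the collection in-memory only
)
docs = vectorstore.similarity_search("Which store persists locally?", k=2)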
get(include: Optional[List[str]] = None) β Dict[str, Any][source]#
Gets the collection.
Parameters
include (Optional[List[str]]) β List of fields to include from db.
Defaults to None.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[Dict[str, str]] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
Returns
List of Documents selected by maximal marginal relevance.
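A rough usage sketch, reusing the vectorstore variable from the Chroma sketch above: fetch_k candidates are retrieved first, then k diverse results are kept.
docs = vectorstore.max_marginal_relevance_search(
    "Which store persists locally?",
    k=4,
    fetch_k=20,
    lambda_mult=0.5,  # 0 favours maximum diversity, 1 favours pure similarity
)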
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, filter: Optional[Dict[str, str]] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
Returns
List of Documents selected by maximal marginal relevance.
persist() β None[source]#
Persist the collection.
This can be used to explicitly persist the data to disk.
It will also be called automatically when the object is destroyed.
similarity_search(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Run similarity search with Chroma.
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text.
Return type
List[Document]
similarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
:param embedding: Embedding to look up documents similar to.
:type embedding: str
:param k: Number of Documents to return. Defaults to 4.
:type k: int
:param filter: Filter by metadata. Defaults to None.
:type filter: Optional[Dict[str, str]]
Returns
List of Documents most similar to the query vector.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Run similarity search with Chroma with distance.
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
Returns
List of documents most similar to
the query text and cosine distance in float for each.
Lower score represents more similarity.
Return type
List[Tuple[Document, float]]
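For example, a minimal sketch that surfaces the distance alongside each hit (again assuming an existing Chroma instance named vectorstore):
results = vectorstore.similarity_search_with_score("Which store persists locally?", k=4)
for doc, score in results:
    # lower score means the document is closer to the query
    print(round(score, 3), doc.page_content)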
update_document(document_id: str, document: langchain.schema.Document) β None[source]#
Update a document in the collection.
Parameters
document_id (str) β ID of the document to update.
document (Document) β Document to update.
class langchain.vectorstores.Clickhouse(embedding: langchain.embeddings.base.Embeddings, config: Optional[langchain.vectorstores.clickhouse.ClickhouseSettings] = None, **kwargs: Any)[source]#
Wrapper around ClickHouse vector database
You need the clickhouse-connect python package and a valid account
to connect to ClickHouse.
ClickHouse can not only search with simple vector indexes,
it also supports complex query with multiple conditions,
constraints and even sub-queries.
For more information, please visit the [ClickHouse official site](https://clickhouse.com/clickhouse).
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, batch_size: int = 32, ids: Optional[Iterable[str]] = None, **kwargs: Any) β List[str][source]#
Insert more texts through the embeddings and add to the VectorStore.
Parameters
texts β Iterable of strings to add to the VectorStore.
ids β Optional list of ids to associate with the texts.
batch_size β Batch size of insertion
metadata β Optional column data to be inserted
Returns
List of ids from adding the texts into the VectorStore.
drop() β None[source]#
Helper function: Drop data
escape_str(value: str) β str[source]#
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, config: Optional[langchain.vectorstores.clickhouse.ClickhouseSettings] = None, text_ids: Optional[Iterable[str]] = None, batch_size: int = 32, **kwargs: Any) β langchain.vectorstores.clickhouse.Clickhouse[source]#
Create ClickHouse wrapper with existing texts
Parameters
embedding_function (Embeddings) β Function to extract text embedding
texts (Iterable[str]) β List or tuple of strings to be added
config (ClickHouseSettings, Optional) β ClickHouse configuration
text_ids (Optional[Iterable], optional) β IDs for the texts.
Defaults to None.
batch_size (int, optional) β Batch size when transmitting data to ClickHouse.
Defaults to 32.
metadata (List[dict], optional) β metadata to texts. Defaults to None.
Other keyword arguments will pass into [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)
Returns
ClickHouse Index
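An illustrative sketch only; the connection details below are placeholders and assume a reachable ClickHouse server:
from langchain.vectorstores import Clickhouse, ClickhouseSettings
from langchain.embeddings.openai import OpenAIEmbeddings
settings = ClickhouseSettings(host="localhost", port=8123, table="langchain")  # placeholder connection
docsearch = Clickhouse.from_texts(
    ["ClickHouse also answers SQL-style filters", "vector search with where clauses"],
    OpenAIEmbeddings(),
    config=settings,
)
docs = docsearch.similarity_search("filters", k=1)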
property metadata_column: str#
similarity_search(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Perform a similarity search with ClickHouse
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE β Please do not let end users fill this in, and always be aware
of SQL injection. When dealing with metadatas, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for it is metadata.
Returns
List of Documents
Return type
List[Document]
similarity_search_by_vector(embedding: List[float], k: int = 4, where_str: Optional[str] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Perform a similarity search with ClickHouse by vectors
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE β Please do not let end users fill this in, and always be aware
of SQL injection. When dealing with metadatas, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for it is metadata.
Returns
List of (Document, similarity)
Return type
List[Document]
similarity_search_with_relevance_scores(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Perform a similarity search with ClickHouse
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE β Please do not let end users fill this in, and always be aware
of SQL injection. When dealing with metadatas, remember to
use {self.metadata_column}.attribute instead of attribute
alone. The default name for it is metadata.
Returns
List of documents
Return type
List[Document]
pydantic settings langchain.vectorstores.ClickhouseSettings[source]#
ClickHouse Client Configuration
Attribute:
clickhouse_host (str) : URL to connect to the ClickHouse backend. Defaults to βlocalhostβ.
clickhouse_port (int) : URL port to connect with HTTP. Defaults to 8443.
username (str) : Username to login. Defaults to None.
password (str) : Password to login. Defaults to None.
index_type (str): index type string.
index_param (list): index build parameter.
index_query_params(dict): index query parameters.
database (str) : Database name to find the table. Defaults to βdefaultβ.
table (str) : Table name to operate on.
Defaults to βvector_tableβ.
metric (str) : Metric to compute distance, supported are (βangularβ, βeuclideanβ, βmanhattanβ, βhammingβ,
βdotβ). Defaults to βangularβ.
https://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169
column_map (Dict) : Column type map to project column name onto langchain semantics. Must have keys: text, id, vector,
must be same size to number of columns. For example:
.. code-block:: python
{βidβ: βtext_idβ,
βuuidβ: βglobal_unique_idβ
βembeddingβ: βtext_embeddingβ,
βdocumentβ: βtext_plainβ,
βmetadataβ: βmetadata_dictionary_in_jsonβ,
}
Defaults to identity map.
Show JSON schema{
"title": "ClickhouseSettings",
"description": "ClickHouse Client Configuration\n\nAttribute:\n clickhouse_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n clickhouse_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (list): index build parameter.\n index_query_params(dict): index query parameters.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,\n supported are ('angular', 'euclidean', 'manhattan', 'hamming',\n 'dot'). Defaults to 'angular'.\n https://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169\n\n column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be same size to number of columns. For example:\n .. code-block:: python\n\n {\n 'id': 'text_id',\n 'uuid': 'global_unique_id'\n 'embedding': 'text_embedding',\n 'document': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n\n Defaults to identity map.",
"type": "object",
"properties": {
"host": {
"title": "Host",
"default": "localhost",
"env_names": "{'clickhouse_host'}",
"type": "string"
},
"port": {
"title": "Port",
"default": 8123,
"env_names": "{'clickhouse_port'}",
"type": "integer"
},
"username": {
"title": "Username",
"env_names": "{'clickhouse_username'}",
"type": "string"
},
"password": {
"title": "Password",
"env_names": "{'clickhouse_password'}",
"type": "string"
},
"index_type": {
"title": "Index Type", | https://langchain.readthedocs.io/en/latest/reference/modules/vectorstores.html |
da5e8b5b7a2a-8 | },
"index_type": {
"title": "Index Type",
"default": "annoy",
"env_names": "{'clickhouse_index_type'}",
"type": "string"
},
"index_param": {
"title": "Index Param",
"default": [
100,
"'L2Distance'"
],
"env_names": "{'clickhouse_index_param'}",
"anyOf": [
{
"type": "array",
"items": {}
},
{
"type": "object"
}
]
},
"index_query_params": {
"title": "Index Query Params",
"default": {},
"env_names": "{'clickhouse_index_query_params'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"column_map": {
"title": "Column Map",
"default": {
"id": "id",
"uuid": "uuid",
"document": "document",
"embedding": "embedding",
"metadata": "metadata"
},
"env_names": "{'clickhouse_column_map'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"database": {
"title": "Database",
"default": "default",
"env_names": "{'clickhouse_database'}",
"type": "string"
},
"table": {
"title": "Table",
"default": "langchain",
"env_names": "{'clickhouse_table'}",
"type": "string"
},
"metric": {
"title": "Metric",
"default": "angular",
"env_names": "{'clickhouse_metric'}",
"type": "string"
}
},
"additionalProperties": false
}
Config
env_file: str = .env
env_file_encoding: str = utf-8
env_prefix: str = clickhouse_
Fields
column_map (Dict[str, str])
database (str)
host (str)
index_param (Optional[Union[List, Dict]])
index_query_params (Dict[str, str])
index_type (str)
metric (str)
password (Optional[str])
port (int)
table (str)
username (Optional[str])
field column_map: Dict[str, str] = {'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata': 'metadata', 'uuid': 'uuid'}#
field database: str = 'default'#
field host: str = 'localhost'#
field index_param: Optional[Union[List, Dict]] = [100, "'L2Distance'"]#
field index_query_params: Dict[str, str] = {}#
field index_type: str = 'annoy'#
field metric: str = 'angular'#
field password: Optional[str] = None#
field port: int = 8123#
field table: str = 'langchain'#
field username: Optional[str] = None#
class langchain.vectorstores.DeepLake(dataset_path: str = './deeplake/', token: Optional[str] = None, embedding_function: Optional[langchain.embeddings.base.Embeddings] = None, read_only: Optional[bool] = False, ingestion_batch_size: int = 1024, num_workers: int = 0, verbose: bool = True, **kwargs: Any)[source]#
Wrapper around Deep Lake, a data lake for deep learning applications.
We implement naive similarity search and filtering for fast prototyping,
but it can be extended with Tensor Query Language (TQL) for production use cases
over billion rows.
Why Deep Lake?
Not only stores embeddings, but also the original data with version control.
Serverless, doesnβt require another service and can be used with major cloud providers (S3, GCS, etc.)
More than just a multi-modal vector store. You can use the dataset to fine-tune your own LLM models.
To use, you should have the deeplake python package installed.
Example
from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = DeepLake("langchain_store", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) β Optional list of metadatas.
ids (Optional[List[str]], optional) β Optional list of IDs.
Returns
List of IDs of the added texts.
Return type
List[str]
delete(ids: Any[List[str], None] = None, filter: Any[Dict[str, str], None] = None, delete_all: Any[bool, None] = None) β bool[source]#
Delete the entities in the dataset
Parameters
ids (Optional[List[str]], optional) β The document_ids to delete.
Defaults to None.
filter (Optional[Dict[str, str]], optional) β The filter to delete by.
Defaults to None.
delete_all (Optional[bool], optional) β Whether to drop the dataset.
Defaults to None.
delete_dataset() β None[source]#
Delete the collection.
classmethod force_delete_by_path(path: str) β None[source]#
Force delete dataset by path
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, dataset_path: str = './deeplake/', **kwargs: Any) β langchain.vectorstores.deeplake.DeepLake[source]#
Create a Deep Lake dataset from raw documents.
If a dataset_path is specified, the dataset will be persisted in that location,
otherwise by default at ./deeplake
Parameters
path (str, pathlib.Path) β
The full path to the dataset. Can be:
Deep Lake cloud path of the form hub://username/dataset_name. To write to Deep Lake cloud datasets,
ensure that you are logged in to Deep Lake
(use βactiveloop loginβ from command line)
AWS S3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment
Google Cloud Storage path of the form gcs://bucketname/path/to/dataset. Credentials are required
in either the environment
Local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset.
In-memory path of the form mem://path/to/dataset which doesnβt save the dataset, but keeps it in memory instead.
Should be used only for testing as it does not persist.
documents (List[Document]) β List of documents to add.
embedding (Optional[Embeddings]) β Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) β List of metadatas. Defaults to None.
ids (Optional[List[str]]) β List of document IDs. Defaults to None.
Returns
Deep Lake dataset.
Return type
DeepLake
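A minimal local sketch (the dataset path is a placeholder; a hub:// path would target Deep Lake cloud instead):
from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings
db = DeepLake.from_texts(
    ["Deep Lake keeps the raw data next to the embeddings"],
    OpenAIEmbeddings(),
    dataset_path="./deeplake/",  # local path; hub://<org>/<dataset> would target the cloud
)
docs = db.similarity_search("where is the raw data kept?")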
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
:param query: Text to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.
:param lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
persist() β None[source]#
Persist the collection.
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query β Text to embed and look up documents similar to.
k β Number of Documents to return.
Defaults to 4.
embedding β Embedding function to use.
Defaults to None.
distance_metric β L2 for Euclidean, L1 for Nuclear, max
L-infinity distance, cos for cosine similarity, βdotβ for dot product
Defaults to L2.
filter β Attribute filter by metadata example {βkeyβ: βvalueβ}.
Defaults to None.
maximal_marginal_relevance β Whether to use maximal marginal relevance.
Defaults to False.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
return_score β Whether to return the score. Defaults to False.
Returns
List of Documents most similar to the query vector.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_score(query: str, distance_metric: str = 'L2', k: int = 4, filter: Optional[Dict[str, str]] = None) β List[Tuple[langchain.schema.Document, float]][source]#
Run similarity search with Deep Lake with distance returned.
Parameters
query (str) β Query text to search for.
distance_metric β L2 for Euclidean, L1 for Nuclear, max L-infinity
distance, cos for cosine similarity, βdotβ for dot product.
Defaults to L2.
k (int) β Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) β Filter by metadata. Defaults to None.
Returns
List of documents most similar to the querytext with distance in float.
Return type
List[Tuple[Document, float]]
class langchain.vectorstores.DocArrayHnswSearch(doc_index: BaseDocIndex, embedding: langchain.embeddings.base.Embeddings)[source]#
Wrapper around HnswLib storage.
To use it, you should have the docarray package with version >=0.32.0 installed.
You can install it with pip install βlangchain[docarray]β.
classmethod from_params(embedding: langchain.embeddings.base.Embeddings, work_dir: str, n_dim: int, dist_metric: Literal['cosine', 'ip', 'l2'] = 'cosine', max_elements: int = 1024, index: bool = True, ef_construction: int = 200, ef: int = 10, M: int = 16, allow_replace_deleted: bool = True, num_threads: int = 1, **kwargs: Any) β langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch[source]#
Initialize DocArrayHnswSearch store.
Parameters
embedding (Embeddings) β Embedding function.
work_dir (str) β path to the location where all the data will be stored.
n_dim (int) β dimension of an embedding.
dist_metric (str) β Distance metric for DocArrayHnswSearch can be one of:
βcosineβ, βipβ, and βl2β. Defaults to βcosineβ.
max_elements (int) β Maximum number of vectors that can be stored.
Defaults to 1024.
index (bool) β Whether an index should be built for this field.
Defaults to True.
ef_construction (int) β defines a construction time/accuracy trade-off.
Defaults to 200.
ef (int) β parameter controlling query time/accuracy trade-off.
Defaults to 10.
M (int) β parameter that defines the maximum number of outgoing
connections in the graph. Defaults to 16.
allow_replace_deleted (bool) β Enables replacing of deleted elements
with new added ones. Defaults to True.
num_threads (int) β Sets the number of cpu threads to use. Defaults to 1.
**kwargs β Other keyword arguments to be passed to the get_doc_cls method.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, work_dir: Optional[str] = None, n_dim: Optional[int] = None, **kwargs: Any) β langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch[source]#
Create an DocArrayHnswSearch store and insert data.
Parameters
texts (List[str]) β Text data.
embedding (Embeddings) β Embedding function.
metadatas (Optional[List[dict]]) β Metadata for each text if it exists.
Defaults to None.
work_dir (str) β path to the location where all the data will be stored.
n_dim (int) β dimension of an embedding.
**kwargs β Other keyword arguments to be passed to the __init__ method.
Returns
DocArrayHnswSearch Vector Store
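A rough sketch, assuming docarray>=0.32.0 is installed and that the embedding model returns 1536-dimensional vectors (true for the default OpenAI embeddings); the work directory is a placeholder:
from langchain.vectorstores import DocArrayHnswSearch
from langchain.embeddings.openai import OpenAIEmbeddings
store = DocArrayHnswSearch.from_texts(
    ["hnswlib builds an approximate nearest-neighbour index on disk"],
    OpenAIEmbeddings(),
    work_dir="./hnswlib_store",  # placeholder directory for the index files
    n_dim=1536,                  # must match the embedding dimensionality
)
docs = store.similarity_search("approximate nearest neighbour")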
class langchain.vectorstores.DocArrayInMemorySearch(doc_index: BaseDocIndex, embedding: langchain.embeddings.base.Embeddings)[source]#
Wrapper around in-memory storage for exact search.
To use it, you should have the docarray package with version >=0.32.0 installed.
You can install it with pip install βlangchain[docarray]β.
classmethod from_params(embedding: langchain.embeddings.base.Embeddings, metric: Literal['cosine_sim', 'euclidian_dist', 'sgeuclidean_dist'] = 'cosine_sim', **kwargs: Any) β langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch[source]#
Initialize DocArrayInMemorySearch store.
Parameters
embedding (Embeddings) β Embedding function.
metric (str) β metric for exact nearest-neighbor search.
Can be one of: βcosine_simβ, βeuclidean_distβ and βsqeuclidean_distβ.
Defaults to βcosine_simβ.
**kwargs β Other keyword arguments to be passed to the get_doc_cls method.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, **kwargs: Any) β langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch[source]#
Create an DocArrayInMemorySearch store and insert data.
Parameters
texts (List[str]) β Text data.
embedding (Embeddings) β Embedding function.
metadatas (Optional[List[Dict[Any, Any]]]) β Metadata for each text
if it exists. Defaults to None.
metric (str) β metric for exact nearest-neighbor search.
Can be one of: βcosine_simβ, βeuclidean_distβ and βsqeuclidean_distβ.
Defaults to βcosine_simβ.
Returns
DocArrayInMemorySearch Vector Store
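A minimal sketch for quick experiments; nothing is written to disk, so it suits tests and small datasets:
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.embeddings.openai import OpenAIEmbeddings
store = DocArrayInMemorySearch.from_texts(
    ["exact in-memory search works well for small corpora"],
    OpenAIEmbeddings(),
)
docs = store.similarity_search("small corpora")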
class langchain.vectorstores.ElasticVectorSearch(elasticsearch_url: str, index_name: str, embedding: langchain.embeddings.base.Embeddings, *, ssl_verify: Optional[Dict[str, Any]] = None)[source]#
Wrapper around Elasticsearch as a vector database.
To connect to an Elasticsearch instance that does not require
login credentials, pass the Elasticsearch URL and index name along with the
embedding object to the constructor.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url="http://localhost:9200",
index_name="test_index",
embedding=embedding
)
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticVectorSearch constructor as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the βDeploymentsβ page.
To obtain your Elastic Cloud password for the default βelasticβ user:
Log in to the Elastic Cloud console at https://cloud.elastic.co
Go to βSecurityβ > βUsersβ
Locate the βelasticβ user and click βEditβ
Click βReset passwordβ
Follow the prompts to reset the password
The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_host = "cluster_id.region_id.gcp.cloud.es.io"
elasticsearch_url = f"https://username:password@{elastic_host}:9243"
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url=elasticsearch_url,
index_name="test_index",
embedding=embedding
)
Parameters
elasticsearch_url (str) β The URL for the Elasticsearch instance.
index_name (str) β The name of the Elasticsearch index for the embeddings.
embedding (Embeddings) β An object that provides the ability to embed text.
It should be an instance of a class that subclasses the Embeddings
abstract base class, such as OpenAIEmbeddings()
Raises
ValueError β If the elasticsearch python package is not installed.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, refresh_indices: bool = True, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
refresh_indices β bool to refresh ElasticSearch indices
Returns
List of ids from adding the texts into the vectorstore.
client_search(client: Any, index_name: str, script_query: Dict, size: int) β Any[source]#
create_index(client: Any, index_name: str, mapping: Dict) β None[source]#
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, elasticsearch_url: Optional[str] = None, index_name: Optional[str] = None, refresh_indices: bool = True, **kwargs: Any) β langchain.vectorstores.elastic_vector_search.ElasticVectorSearch[source]#
Construct ElasticVectorSearch wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in the Elasticsearch instance.
Adds the documents to the newly created Elasticsearch index.
This is intended to be a quick way to get started.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch.from_texts(
texts,
embeddings,
elasticsearch_url="http://localhost:9200"
)
similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
:param query: Text to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
class langchain.vectorstores.FAISS(embedding_function: typing.Callable, index: typing.Any, docstore: langchain.docstore.base.Docstore, index_to_docstore_id: typing.Dict[int, str], relevance_score_fn: typing.Optional[typing.Callable[[float], float]] = <function _default_relevance_score_fn>, normalize_L2: bool = False)[source]#
Wrapper around FAISS vector database.
To use, you should have the faiss python package installed.
Example
from langchain import FAISS
faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)
add_embeddings(text_embeddings: Iterable[Tuple[str, List[float]]], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
text_embeddings β Iterable pairs of string and embedding to
add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
ids β Optional list of unique IDs.
Returns
List of ids from adding the texts into the vectorstore.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
ids β Optional list of unique IDs.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) β langchain.vectorstores.faiss.FAISS[source]#
Construct FAISS wrapper from raw documents.
This is a user friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the FAISS database
This is intended to be a quick way to get started.
Example
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
faiss = FAISS.from_embeddings(text_embedding_pairs, embeddings)
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) β langchain.vectorstores.faiss.FAISS[source]#
Construct FAISS wrapper from raw documents.
This is a user friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the FAISS database
This is intended to be a quick way to get started.
Example
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
faiss = FAISS.from_texts(texts, embeddings)
classmethod load_local(folder_path: str, embeddings: langchain.embeddings.base.Embeddings, index_name: str = 'index') β langchain.vectorstores.faiss.FAISS[source]#
Load FAISS index, docstore, and index_to_docstore_id from disk.
Parameters
folder_path β folder path to load index, docstore,
and index_to_docstore_id from.
embeddings β Embeddings to use when generating queries
index_name β for saving with a specific index file name
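As a sketch of a save/load round trip (the folder path is a placeholder; the same embeddings object must be supplied when loading):
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
faiss_store = FAISS.from_texts(["persist this sentence"], embeddings)
faiss_store.save_local("faiss_index")              # writes index, docstore and id map to disk
restored = FAISS.load_local("faiss_index", embeddings)
docs = restored.similarity_search("persist this sentence")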
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
merge_from(target: langchain.vectorstores.faiss.FAISS) β None[source]#
Merge another FAISS object with the current one.
Add the target FAISS to the current one.
Parameters
target β FAISS object you wish to merge into the current one
Returns
None.
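A short sketch of merging two independently built indexes, reusing the embeddings object from the previous sketch:
db1 = FAISS.from_texts(["first shard"], embeddings)
db2 = FAISS.from_texts(["second shard"], embeddings)
db1.merge_from(db2)   # db1 now answers queries over documents from both shards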
save_local(folder_path: str, index_name: str = 'index') β None[source]#
Save FAISS index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path β folder path to save index, docstore,
and index_to_docstore_id to.
index_name β for saving with a specific index file name
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the embedding.
similarity_search_with_score(query: str, k: int = 4) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of documents most similar to the query text with
L2 distance in float. Lower score represents more similarity.
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
embedding β Embedding vector to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of documents most similar to the query text and L2 distance
in float for each. Lower score represents more similarity.
class langchain.vectorstores.LanceDB(connection: Any, embedding: langchain.embeddings.base.Embeddings, vector_key: Optional[str] = 'vector', id_key: Optional[str] = 'id', text_key: Optional[str] = 'text')[source]#
Wrapper around LanceDB vector database.
To use, you should have lancedb python package installed.
Example
import lancedb
db = lancedb.connect('./lancedb')
table = db.open_table('my_table')
vectorstore = LanceDB(table, embedding_function)
vectorstore.add_texts(['text1', 'text2'])
result = vectorstore.similarity_search('text1')
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) β List[str][source]#
Turn texts into embedding and add it to the database
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
ids β Optional list of ids to associate with the texts.
Returns
List of ids of the added texts.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, connection: Any = None, vector_key: Optional[str] = 'vector', id_key: Optional[str] = 'id', text_key: Optional[str] = 'text', **kwargs: Any) β langchain.vectorstores.lancedb.LanceDB[source]#
Return VectorStore initialized from texts and embeddings.
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return documents most similar to the query
Parameters
query β String to query the vectorstore with.
k β Number of documents to return.
Returns
List of documents most similar to the query.
class langchain.vectorstores.MatchingEngine(project_id: str, index: MatchingEngineIndex, endpoint: MatchingEngineIndexEndpoint, embedding: Embeddings, gcs_client: storage.Client, gcs_bucket_name: str, credentials: Optional[Credentials] = None)[source]#
Vertex Matching Engine implementation of the vector store.
While the embeddings are stored in the Matching Engine, the embedded
documents will be stored in GCS.
An existing Index and corresponding Endpoint are preconditions for
using this module.
See usage in docs/modules/indexes/vectorstores/examples/matchingengine.ipynb
Note that this implementation is mostly meant for reading if you are
planning to do a real time implementation. While reading is a real time
operation, updating the index takes close to one hour.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
kwargs β vectorstore specific parameters.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_components(project_id: str, region: str, gcs_bucket_name: str, index_id: str, endpoint_id: str, credentials_path: Optional[str] = None, embedding: Optional[langchain.embeddings.base.Embeddings] = None) β langchain.vectorstores.matching_engine.MatchingEngine[source]#
Takes the object creation out of the constructor.
Parameters
project_id β The GCP project id.
region β The default location making the API calls. It must have
the same location as the GCS bucket and must be regional.
gcs_bucket_name β The location where the vectors will be stored in
order for the index to be created.
index_id β The id of the created index.
endpoint_id β The id of the created endpoint.
credentials_path β (Optional) The path of the Google credentials on
the local file system.
embedding β The Embeddings that will be used for
embedding the texts.
Returns
A configured MatchingEngine with the texts added to the index.
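A hedged sketch with placeholder GCP identifiers; the Matching Engine index and endpoint must already be deployed:
from langchain.vectorstores import MatchingEngine
from langchain.embeddings import OpenAIEmbeddings
engine = MatchingEngine.from_components(
    project_id="my-gcp-project",          # placeholder
    region="us-central1",                 # placeholder; must match the GCS bucket location
    gcs_bucket_name="my-embeddings-bucket",
    index_id="1234567890",                # id of the pre-created index
    endpoint_id="0987654321",             # id of the pre-created endpoint
    embedding=OpenAIEmbeddings(),
)
docs = engine.similarity_search("query text", k=4)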
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) β langchain.vectorstores.matching_engine.MatchingEngine[source]#
Use from components instead.
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query β The string that will be used to search for similar documents.
k β The amount of neighbors that will be retrieved.
Returns
A list of k matching documents.
class langchain.vectorstores.Milvus(embedding_function: langchain.embeddings.base.Embeddings, collection_name: str = 'LangChainCollection', connection_args: Optional[dict[str, Any]] = None, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: Optional[bool] = False)[source]#
Wrapper around the Milvus vector database.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, timeout: Optional[int] = None, batch_size: int = 1000, **kwargs: Any) β List[str][source]#
Insert text data into Milvus.
Inserting data when the collection has not been made yet will result
in creating a new Collection. The data of the first entity decides
the schema of the new collection, the dim is extracted from the first
embedding and the columns are decided by the first metadata dict.
Metadata keys will need to be present for all inserted values. At
the moment there is no None equivalent in Milvus.
Parameters
texts (Iterable[str]) β The texts to embed, it is assumed
that they all fit in memory.
metadatas (Optional[List[dict]]) β Metadata dicts attached to each of
the texts. Defaults to None.
timeout (Optional[int]) β Timeout for each batch insert. Defaults
to None.
batch_size (int, optional) β Batch size to use for insertion.
Defaults to 1000.
Raises
MilvusException β Failure to add texts
Returns
The resulting keys for each inserted element.
Return type
List[str]
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'LangChainCollection', connection_args: dict[str, Any] = {'host': 'localhost', 'password': '', 'port': '19530', 'secure': False, 'user': ''}, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: bool = False, **kwargs: Any) β langchain.vectorstores.milvus.Milvus[source]#
Create a Milvus collection, index it with HNSW, and insert data.
Parameters
texts (List[str]) β Text data.
embedding (Embeddings) β Embedding function.
metadatas (Optional[List[dict]]) β Metadata for each text if it exists.
Defaults to None.
collection_name (str, optional) β Collection name to use. Defaults to
βLangChainCollectionβ.
connection_args (dict[str, Any], optional) β Connection args to use. Defaults
to DEFAULT_MILVUS_CONNECTION.
consistency_level (str, optional) β Which consistency level to use. Defaults
to βSessionβ.
index_params (Optional[dict], optional) β Which index_params to use. Defaults
to None.
search_params (Optional[dict], optional) β Which search params to use.
Defaults to None.
drop_old (Optional[bool], optional) β Whether to drop the collection with
that name if it exists. Defaults to False.
Returns
Milvus Vector Store
Return type
Milvus
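For instance, a sketch against a local Milvus server (host and port below are the documented defaults, still placeholders for a real deployment):
from langchain.vectorstores import Milvus
from langchain.embeddings.openai import OpenAIEmbeddings
vector_db = Milvus.from_texts(
    ["Milvus is a purpose-built vector database"],
    OpenAIEmbeddings(),
    collection_name="LangChainCollection",
    connection_args={"host": "localhost", "port": "19530"},
)
docs = vector_db.similarity_search("vector database", k=1)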
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Perform a search and return results that are reordered by MMR.
Parameters
query (str) β The text being searched.
k (int, optional) β How many results to give. Defaults to 4.
fetch_k (int, optional) β Total results to select k from.
Defaults to 20.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5
param (dict, optional) β The search params for the specified index.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs β Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
max_marginal_relevance_search_by_vector(embedding: list[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Perform a search and return results that are reordered by MMR.
Parameters
embedding (str) β The embedding vector being searched.
k (int, optional) β How many results to give. Defaults to 4.
fetch_k (int, optional) β Total results to select k from.
Defaults to 20.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5
param (dict, optional) β The search params for the specified index.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs β Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
similarity_search(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Perform a similarity search against the query string.
Parameters
query (str) β The text to search.
k (int, optional) β How many results to return. Defaults to 4.
param (dict, optional) β The search params for the index type.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs β Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
similarity_search_by_vector(embedding: List[float], k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Perform a similarity search against the query string.
Parameters
embedding (List[float]) β The embedding vector to search.
k (int, optional) β How many results to return. Defaults to 4.
param (dict, optional) β The search params for the index type.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs β Collection.search() keyword arguments.
Returns
Document results for search.
Return type
List[Document]
similarity_search_with_score(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Perform a search on a query string and return results with score.
For more information about the search parameters, take a look at the pymilvus
documentation found here:
https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md
Parameters
query (str) β The text being searched.
k (int, optional) β The amount of results to return. Defaults to 4.
param (dict) β The search params for the specified index.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs β Collection.search() keyword arguments.
Return type
List[float], List[Tuple[Document, any, any]]
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Perform a search on a query string and return results with score.
For more information about the search parameters, take a look at the pymilvus
documentation found here:
https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md
Parameters
embedding (List[float]) β The embedding vector being searched.
k (int, optional) β The amount of results to return. Defaults to 4.
param (dict) β The search params for the specified index.
Defaults to None.
expr (str, optional) β Filtering expression. Defaults to None.
timeout (int, optional) β How long to wait before timeout error.
Defaults to None.
kwargs β Collection.search() keyword arguments.
Returns
Result doc and score.
Return type
List[Tuple[Document, float]]
class langchain.vectorstores.MongoDBAtlasVectorSearch(collection: Collection[MongoDBDocumentType], embedding: Embeddings, *, index_name: str = 'default', text_key: str = 'text', embedding_key: str = 'embedding')[source]#
Wrapper around MongoDB Atlas Vector Search.
To use, you should have both:
- the pymongo python package installed
- a connection string associated with a MongoDB Atlas Cluster having deployed an
Atlas Search index
Example
from langchain.vectorstores import MongoDBAtlasVectorSearch
from langchain.embeddings.openai import OpenAIEmbeddings
from pymongo import MongoClient
mongo_client = MongoClient("<YOUR-CONNECTION-STRING>")
collection = mongo_client["<db_name>"]["<collection_name>"]
embeddings = OpenAIEmbeddings()
vectorstore = MongoDBAtlasVectorSearch(collection, embeddings)
add_texts(texts: Iterable[str], metadatas: Optional[List[Dict[str, Any]]] = None, **kwargs: Any) β List[source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_connection_string(connection_string: str, namespace: str, embedding: langchain.embeddings.base.Embeddings, **kwargs: Any) β langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch[source]#
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, collection: Optional[Collection[MongoDBDocumentType]] = None, **kwargs: Any) β MongoDBAtlasVectorSearch[source]#
Construct MongoDBAtlasVectorSearch wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Adds the documents to a provided MongoDB Atlas Vector Search index(Lucene)
This is intended to be a quick way to get started.
Example
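A minimal sketch (connection string, database and collection names are placeholders; an Atlas Search index must already exist on the collection):
from pymongo import MongoClient
from langchain.vectorstores import MongoDBAtlasVectorSearch
from langchain.embeddings.openai import OpenAIEmbeddings
client = MongoClient("<YOUR-CONNECTION-STRING>")
collection = client["<db_name>"]["<collection_name>"]
vectorstore = MongoDBAtlasVectorSearch.from_texts(
    ["MongoDB Atlas can serve vector search over knnBeta"],
    OpenAIEmbeddings(),
    collection=collection,
)
docs = vectorstore.similarity_search("vector search")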
similarity_search(query: str, k: int = 4, pre_filter: Optional[dict] = None, post_filter_pipeline: Optional[List[Dict]] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Return MongoDB documents most similar to query.
Use the knnBeta Operator available in MongoDB Atlas Search
This feature is in early access and available only for evaluation purposes, to
validate functionality, and to gather feedback from a small closed group of
early access users. It is not recommended for production deployments as we may
introduce breaking changes.
For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta
Parameters
query β Text to look up documents similar to.
k β Optional Number of Documents to return. Defaults to 4.
pre_filter β Optional Dictionary of argument(s) to prefilter on document
fields.
post_filter_pipeline β Optional Pipeline of MongoDB aggregation stages
following the knnBeta search.
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score(query: str, *, k: int = 4, pre_filter: Optional[dict] = None, post_filter_pipeline: Optional[List[Dict]] = None) β List[Tuple[langchain.schema.Document, float]][source]#
Return MongoDB documents most similar to query, along with scores.
Use the knnBeta Operator available in MongoDB Atlas Search
This feature is in early access and available only for evaluation purposes, to
validate functionality, and to gather feedback from a small closed group of
early access users. It is not recommended for production deployments as we
may introduce breaking changes.
For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta
Parameters
query β Text to look up documents similar to.
k β Optional Number of Documents to return. Defaults to 4.
pre_filter β Optional Dictionary of argument(s) to prefilter on document
fields.
post_filter_pipeline β Optional Pipeline of MongoDB aggregation stages
following the knnBeta search.
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.MyScale(embedding: langchain.embeddings.base.Embeddings, config: Optional[langchain.vectorstores.myscale.MyScaleSettings] = None, **kwargs: Any)[source]#
Wrapper around MyScale vector database
You need the clickhouse-connect python package and a valid account
to connect to MyScale.
MyScale can not only search with simple vector indexes,
it also supports complex query with multiple conditions,
constraints and even sub-queries.
For more information, please visit the [MyScale official site](https://docs.myscale.com/en/overview/).
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, batch_size: int = 32, ids: Optional[Iterable[str]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
ids β Optional list of ids to associate with the texts.
batch_size β Batch size of insertion
metadata β Optional column data to be inserted
Returns
List of ids from adding the texts into the vectorstore.
drop() β None[source]#
Helper function: Drop data
escape_str(value: str) β str[source]#
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[Dict[Any, Any]]] = None, config: Optional[langchain.vectorstores.myscale.MyScaleSettings] = None, text_ids: Optional[Iterable[str]] = None, batch_size: int = 32, **kwargs: Any) β langchain.vectorstores.myscale.MyScale[source]#
Create Myscale wrapper with existing texts
Parameters
embedding_function (Embeddings) β Function to extract text embedding
texts (Iterable[str]) β List or tuple of strings to be added
config (MyScaleSettings, Optional) β Myscale configuration
text_ids (Optional[Iterable], optional) β IDs for the texts.
Defaults to None.
batch_size (int, optional) β Batch size when transmitting data to MyScale.
Defaults to 32.
metadata (List[dict], optional) β metadata to texts. Defaults to None.
Other keyword arguments will pass into [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api)
Returns
MyScale Index
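An illustrative sketch only; host and credentials are placeholders and a MyScale account is required:
from langchain.vectorstores import MyScale, MyScaleSettings
from langchain.embeddings.openai import OpenAIEmbeddings
config = MyScaleSettings(host="your-myscale-host", port=8443, username="user", password="password")  # placeholders
docsearch = MyScale.from_texts(
    ["MyScale combines SQL filters with vector search"],
    OpenAIEmbeddings(),
    config=config,
)
docs = docsearch.similarity_search("SQL plus vectors", k=1)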
property metadata_column: str#
similarity_search(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Perform a similarity search with MyScale
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE – Never let end users fill this in directly, and always be aware of
SQL injection. When dealing with metadata, remember to use
{self.metadata_column}.attribute instead of attribute alone.
The default name for the metadata column is metadata.
Returns
List of Documents
Return type
List[Document]
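A hedged illustration of where_str, assuming vectorstore is an initialized MyScale instance and doc_id is a key present in the stored metadata; build the filter string server-side, never from raw end-user input:
# `vectorstore` and the `doc_id` metadata key are assumptions; the filter
# syntax below is illustrative only.
where = f"{vectorstore.metadata_column}.doc_id in (1, 2, 3)"
docs = vectorstore.similarity_search("what is a vector index?", k=4, where_str=where)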
similarity_search_by_vector(embedding: List[float], k: int = 4, where_str: Optional[str] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Perform a similarity search with MyScale by vectors
Parameters
embedding (List[float]) – Embedding vector to search with
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE – Never let end users fill this in directly, and always be aware of
SQL injection. When dealing with metadata, remember to use
{self.metadata_column}.attribute instead of attribute alone.
The default name for the metadata column is metadata.
Returns
List of (Document, similarity)
Return type
List[Document]
similarity_search_with_relevance_scores(query: str, k: int = 4, where_str: Optional[str] = None, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Perform a similarity search with MyScale
Parameters
query (str) β query string
k (int, optional) β Top K neighbors to retrieve. Defaults to 4.
where_str (Optional[str], optional) β where condition string.
Defaults to None.
NOTE – Never let end users fill this in directly, and always be aware of
SQL injection. When dealing with metadata, remember to use
{self.metadata_column}.attribute instead of attribute alone.
The default name for the metadata column is metadata.
Returns
List of documents most similar to the query text
and cosine distance in float for each.
Lower score represents more similarity.
Return type
List[Document]
pydantic settings langchain.vectorstores.MyScaleSettings[source]#
MyScale Client Configuration
Attributes:
myscale_host (str) : URL to connect to MyScale backend. Defaults to 'localhost'.
myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.
username (str) : Username to login. Defaults to None.
password (str) : Password to login. Defaults to None.
index_type (str) : Index type string.
index_param (dict) : Index build parameters.
database (str) : Database name to find the table. Defaults to 'default'.
table (str) : Table name to operate on. Defaults to 'vector_table'.
metric (str) : Metric to compute distance, supported are ('l2', 'cosine', 'ip'). Defaults to 'cosine'.
column_map (Dict) : Column type map to project column names onto langchain semantics. Must have keys: text, id, vector, and must be the same size as the number of columns. For example:

{'id': 'text_id',
 'vector': 'text_embedding',
 'text': 'text_plain',
 'metadata': 'metadata_dictionary_in_json',
}

Defaults to identity map.
Show JSON schema
{
"title": "MyScaleSettings",
"description": "MyScale Client Configuration\n\nAttribute:\n myscale_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (dict): index build parameter.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,\n supported are ('l2', 'cosine', 'ip'). Defaults to 'cosine'.\n column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be same size to number of columns. For example:\n .. code-block:: python\n\n {\n 'id': 'text_id',\n 'vector': 'text_embedding',\n 'text': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n\n Defaults to identity map.",
"type": "object",
"properties": {
"host": {
"title": "Host",
"default": "localhost",
"env_names": "{'myscale_host'}",
"type": "string"
},
"port": {
"title": "Port",
"default": 8443,
"env_names": "{'myscale_port'}",
"type": "integer"
},
"username": {
"title": "Username",
"env_names": "{'myscale_username'}",
"type": "string"
},
"password": {
"title": "Password",
"env_names": "{'myscale_password'}",
"type": "string"
},
"index_type": {
"title": "Index Type",
"default": "IVFFLAT",
"env_names": "{'myscale_index_type'}",
"type": "string"
},
"index_param": {
"title": "Index Param",
"env_names": "{'myscale_index_param'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"column_map": {
"title": "Column Map",
"default": {
"id": "id",
"text": "text",
"vector": "vector",
"metadata": "metadata"
},
"env_names": "{'myscale_column_map'}",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"database": {
"title": "Database",
"default": "default",
"env_names": "{'myscale_database'}",
"type": "string"
},
"table": {
"title": "Table",
"default": "langchain",
"env_names": "{'myscale_table'}",
"type": "string"
},
"metric": {
"title": "Metric",
"default": "cosine",
"env_names": "{'myscale_metric'}",
"type": "string"
}
},
"additionalProperties": false
}
Config
env_file: str = .env
env_file_encoding: str = utf-8
env_prefix: str = myscale_
Fields
column_map (Dict[str, str])
database (str) | https://langchain.readthedocs.io/en/latest/reference/modules/vectorstores.html |
host (str)
index_param (Optional[Dict[str, str]])
index_type (str)
metric (str)
password (Optional[str])
port (int)
table (str)
username (Optional[str])
field column_map: Dict[str, str] = {'id': 'id', 'metadata': 'metadata', 'text': 'text', 'vector': 'vector'}#
field database: str = 'default'#
field host: str = 'localhost'#
field index_param: Optional[Dict[str, str]] = None#
field index_type: str = 'IVFFLAT'#
field metric: str = 'cosine'#
field password: Optional[str] = None#
field port: int = 8443#
field table: str = 'langchain'#
field username: Optional[str] = None#
class langchain.vectorstores.OpenSearchVectorSearch(opensearch_url: str, index_name: str, embedding_function: langchain.embeddings.base.Embeddings, **kwargs: Any)[source]#
Wrapper around OpenSearch as a vector database.
Example
from langchain import OpenSearchVectorSearch
opensearch_vector_search = OpenSearchVectorSearch(
"http://localhost:9200",
"embeddings",
embedding_function
)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, bulk_size: int = 500, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
bulk_size β Bulk API request count; Default: 500
Returns
List of ids from adding the texts into the vectorstore.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to "vector_field".
text_field: Document field the text of the document is stored in. Defaults to "text".
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, bulk_size: int = 500, **kwargs: Any) β langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch[source]#
Construct OpenSearchVectorSearch wrapper from raw documents.
Example
from langchain import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
opensearch_vector_search = OpenSearchVectorSearch.from_texts(
texts,
embeddings,
opensearch_url="http://localhost:9200"
)
OpenSearch by default supports Approximate Search powered by nmslib, faiss
and lucene engines recommended for large datasets. Also supports brute force
search through Script Scoring and Painless Scripting.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to "vector_field".
text_field: Document field the text of the document is stored in. Defaults to "text".
Optional Keyword Args for Approximate Search:
engine: "nmslib", "faiss", "lucene"; default: "nmslib"
space_type: "l2", "l1", "cosinesimil", "linf", "innerproduct"; default: "l2"
ef_search: Size of the dynamic list used during k-NN searches. Higher values
lead to more accurate but slower searches; default: 512
ef_construction: Size of the dynamic list used during k-NN graph creation.
Higher values lead to a more accurate graph but slower indexing speed;
default: 512
m: Number of bidirectional links created for each new element. Large impact
on memory consumption. Between 2 and 100; default: 16
Keyword Args for Script Scoring or Painless Scripting:
is_appx_search: False
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
By default supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
Optional Args:
vector_field: Document field embeddings are stored in. Defaults to "vector_field".
text_field: Document field the text of the document is stored in. Defaults to "text".
metadata_field: Document field that metadata is stored in. Defaults to "metadata".
Can be set to a special value "*" to include the entire document.
Optional Args for Approximate Search:
search_type: "approximate_search"; default: "approximate_search"
boolean_filter: A Boolean filter consists of a Boolean query that
contains a k-NN query and a filter.
subquery_clause: Query clause on the knn vector field; default: "must"
lucene_filter: the Lucene algorithm decides whether to perform an exact
k-NN search with pre-filtering or an approximate search with modified
post-filtering.
Optional Args for Script Scoring Search:
search_type: "script_scoring"; default: "approximate_search"
space_type: "l2", "l1", "linf", "cosinesimil", "innerproduct",
"hammingbit"; default: "l2"
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {"match_all": {}}
Optional Args for Painless Scripting Search:
search_type: "painless_scripting"; default: "approximate_search"
space_type: "l2Squared", "l1Norm", "cosineSimilarity"; default: "l2Squared"
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {"match_all": {}}
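Hedged sketches of these optional arguments, assuming docsearch is an initialized OpenSearchVectorSearch and metadata.source is a field present in the indexed documents:
# Approximate k-NN search restricted by a (hypothetical) metadata field.
docs = docsearch.similarity_search(
    "What did the president say about the economy?",
    k=5,
    boolean_filter={"bool": {"filter": {"term": {"metadata.source": "speech"}}}},
)

# Brute-force Script Scoring search instead of the default approximate search.
docs = docsearch.similarity_search(
    "What did the president say about the economy?",
    search_type="script_scoring",
    space_type="cosinesimil",
    pre_filter={"match_all": {}},
)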
similarity_search_with_score(query: str, k: int = 4, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs and their scores most similar to the query.
By default supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents along with its scores most similar to the query.
Optional Args: same as similarity_search
class langchain.vectorstores.Pinecone(index: Any, embedding_function: Callable, text_key: str, namespace: Optional[str] = None)[source]#
Wrapper around Pinecone vector database.
To use, you should have the pinecone-client python package installed.
Example
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
import pinecone
# The environment should be the one specified next to the API key
# in your Pinecone console
pinecone.init(api_key="***", environment="...")
index = pinecone.Index("langchain-demo")
embeddings = OpenAIEmbeddings()
vectorstore = Pinecone(index, embeddings.embed_query, "text")
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, namespace: Optional[str] = None, batch_size: int = 32, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
ids β Optional list of ids to associate with the texts.
namespace β Optional pinecone namespace to add the texts to.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_existing_index(index_name: str, embedding: langchain.embeddings.base.Embeddings, text_key: str = 'text', namespace: Optional[str] = None) β langchain.vectorstores.pinecone.Pinecone[source]#
Load pinecone vectorstore from index name.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, batch_size: int = 32, text_key: str = 'text', index_name: Optional[str] = None, namespace: Optional[str] = None, **kwargs: Any) β langchain.vectorstores.pinecone.Pinecone[source]#
Construct Pinecone wrapper from raw documents.
This is a user friendly interface that:
Embeds documents.
Adds the documents to a provided Pinecone index
This is intended to be a quick way to get started.
Example
from langchain import Pinecone
from langchain.embeddings import OpenAIEmbeddings
import pinecone
# The environment should be the one specified next to the API key
# in your Pinecone console
pinecone.init(api_key="***", environment="...")
embeddings = OpenAIEmbeddings() | https://langchain.readthedocs.io/en/latest/reference/modules/vectorstores.html |
pinecone = Pinecone.from_texts(
texts,
embeddings,
index_name="langchain-demo"
)
similarity_search(query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Return pinecone documents most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
filter β Dictionary of argument(s) to filter on metadata
namespace β Namespace to search in. Default will search in ββ namespace.
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score(query: str, k: int = 4, filter: Optional[dict] = None, namespace: Optional[str] = None) β List[Tuple[langchain.schema.Document, float]][source]#
Return pinecone documents most similar to query, along with scores.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
filter β Dictionary of argument(s) to filter on metadata
namespace β Namespace to search in. Default will search in ββ namespace.
Returns
List of Documents most similar to the query and score for each
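A hedged sketch, assuming vectorstore is the Pinecone wrapper from the example above and genre is a hypothetical metadata key on the indexed vectors; the filter shape follows Pinecone's metadata query syntax:
# `vectorstore`, the "books" namespace, and the `genre` key are assumptions.
docs_and_scores = vectorstore.similarity_search_with_score(
    "a science fiction plot about space travel",
    k=3,
    filter={"genre": {"$eq": "sci-fi"}},
    namespace="books",
)
for doc, score in docs_and_scores:
    print(round(score, 3), doc.metadata)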
class langchain.vectorstores.Qdrant(client: Any, collection_name: str, embeddings: Optional[langchain.embeddings.base.Embeddings] = None, content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', embedding_function: Optional[Callable] = None)[source]#
Wrapper around Qdrant vector database.
To use you should have the qdrant-client package installed.
Example
from qdrant_client import QdrantClient
from langchain import Qdrant
client = QdrantClient()
collection_name = "MyCollection"
qdrant = Qdrant(client, collection_name, embedding_function)
CONTENT_KEY = 'page_content'#
METADATA_KEY = 'metadata'#
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[Sequence[str]] = None, batch_size: int = 64, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
ids β Optional list of ids to associate with the texts. Ids have to be
uuid-like strings.
batch_size β How many vectors upload per-request.
Default: 64
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[Sequence[str]] = None, location: Optional[str] = None, url: Optional[str] = None, port: Optional[int] = 6333, grpc_port: int = 6334, prefer_grpc: bool = False, https: Optional[bool] = None, api_key: Optional[str] = None, prefix: Optional[str] = None, timeout: Optional[float] = None, host: Optional[str] = None, path: Optional[str] = None, collection_name: Optional[str] = None, distance_func: str = 'Cosine', content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', batch_size: int = 64, **kwargs: Any) β langchain.vectorstores.qdrant.Qdrant[source]#
Construct Qdrant wrapper from a list of texts.
Parameters
texts β A list of texts to be indexed in Qdrant.
embedding β A subclass of Embeddings, responsible for text vectorization.
metadatas β An optional list of metadata. If provided it has to be of the same
length as a list of texts.
ids β Optional list of ids to associate with the texts. Ids have to be
uuid-like strings.
location β If :memory: - use in-memory Qdrant instance.
If str - use it as a url parameter.
If None - fallback to relying on host and port parameters.
url β either host or str of βOptional[scheme], host, Optional[port],
Optional[prefix]β. Default: None
port β Port of the REST API interface. Default: 6333
grpc_port β Port of the gRPC interface. Default: 6334 | https://langchain.readthedocs.io/en/latest/reference/modules/vectorstores.html |
prefer_grpc – If true, use the gRPC interface whenever possible in custom methods.
Default: False
https β If true - use HTTPS(SSL) protocol. Default: None
api_key β API key for authentication in Qdrant Cloud. Default: None
prefix β If not None - add prefix to the REST URL path.
Example: service/v1 will result in
http://localhost:6333/service/v1/{qdrant-endpoint} for REST API.
Default: None
timeout β Timeout for REST and gRPC API requests.
Default: 5.0 seconds for REST and unlimited for gRPC
host β Host name of Qdrant service. If url and host are None, set to
βlocalhostβ. Default: None
path β Path in which the vectors will be stored while using local mode.
Default: None
collection_name β Name of the Qdrant collection to be used. If not provided,
it will be created randomly. Default: None
distance_func β Distance function. One of: βCosineβ / βEuclidβ / βDotβ.
Default: βCosineβ
content_payload_key β A payload key used to store the content of the document.
Default: βpage_contentβ
metadata_payload_key β A payload key used to store the metadata of the document.
Default: βmetadataβ
batch_size β How many vectors upload per-request.
Default: 64
**kwargs β Additional arguments passed directly into REST client initialization
This is a user-friendly interface that:
1. Creates embeddings, one for each text
2. Initializes the Qdrant database as an in-memory docstore by default
(and overridable to a remote docstore)
3. Adds the text embeddings to the Qdrant database
This is intended to be a quick way to get started.
Example
from langchain import Qdrant
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
qdrant = Qdrant.from_texts(texts, embeddings, "localhost")
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
Defaults to 20.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
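A minimal MMR sketch using the in-memory mode described under from_texts above; the corpus is a placeholder:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Qdrant

texts = ["foo", "bar", "baz"]  # placeholder corpus
qdrant = Qdrant.from_texts(
    texts,
    OpenAIEmbeddings(),
    location=":memory:",          # in-memory instance, see from_texts above
    collection_name="demo",
)
# Fetch 20 candidates, then return the 4 most relevant-yet-diverse documents.
docs = qdrant.max_marginal_relevance_search("foo", k=4, fetch_k=20, lambda_mult=0.5)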
similarity_search(query: str, k: int = 4, filter: Optional[MetadataFilter] = None, **kwargs: Any) β List[Document][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
filter β Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[MetadataFilter] = None, **kwargs: Any) β List[Tuple[Document, float]][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
filter β Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text and cosine
distance in float for each.
Lower score represents more similarity.
class langchain.vectorstores.Redis(redis_url: str, index_name: str, embedding_function: typing.Callable, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', relevance_score_fn: typing.Optional[typing.Callable[[float], float]] = <function _default_relevance_score>, **kwargs: typing.Any)[source]#
Wrapper around Redis vector database.
To use, you should have the redis python package installed.
Example
from langchain.vectorstores import Redis
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Redis( | https://langchain.readthedocs.io/en/latest/reference/modules/vectorstores.html |
redis_url="redis://username:password@localhost:6379",
index_name="my-index",
embedding_function=embeddings.embed_query,
)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, embeddings: Optional[List[List[float]]] = None, keys: Optional[List[str]] = None, batch_size: int = 1000, **kwargs: Any) β List[str][source]#
Add more texts to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings/text to add to the vectorstore.
metadatas (Optional[List[dict]], optional) β Optional list of metadatas.
Defaults to None.
embeddings (Optional[List[List[float]]], optional) β Optional pre-generated
embeddings. Defaults to None.
keys (Optional[List[str]], optional) β Optional key values to use as ids.
Defaults to None.
batch_size (int, optional) β Batch size to use for writes. Defaults to 1000.
Returns
List of ids added to the vectorstore
Return type
List[str]
as_retriever(**kwargs: Any) β langchain.vectorstores.redis.RedisVectorStoreRetriever[source]#
static drop_index(index_name: str, delete_documents: bool, **kwargs: Any) β bool[source]#
Drop a Redis search index.
Parameters
index_name (str) β Name of the index to drop.
delete_documents (bool) β Whether to drop the associated documents.
Returns
Whether or not the drop was successful.
Return type
bool
classmethod from_existing_index(embedding: langchain.embeddings.base.Embeddings, index_name: str, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', **kwargs: Any) β langchain.vectorstores.redis.Redis[source]#
Connect to an existing Redis index.
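A minimal sketch, assuming an index named my-index was created earlier and that Redis is reachable at the placeholder URL; redis_url can typically also be supplied via the REDIS_URL environment variable:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.redis import Redis

# Index name and URL are placeholders.
rds = Redis.from_existing_index(
    OpenAIEmbeddings(),
    index_name="my-index",
    redis_url="redis://localhost:6379",
)
docs = rds.similarity_search("what is a vector database?", k=2)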
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name: Optional[str] = None, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', **kwargs: Any) β langchain.vectorstores.redis.Redis[source]#
Create a Redis vectorstore from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in Redis.
Adds the documents to the newly created Redis index.
This is intended to be a quick way to get started.
Example
classmethod from_texts_return_keys(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name: Optional[str] = None, content_key: str = 'content', metadata_key: str = 'metadata', vector_key: str = 'content_vector', distance_metric: Literal['COSINE', 'IP', 'L2'] = 'COSINE', **kwargs: Any) β Tuple[langchain.vectorstores.redis.Redis, List[str]][source]#
Create a Redis vectorstore from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in Redis.
Adds the documents to the newly created Redis index.
This is intended to be a quick way to get started.
Example
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Returns the most similar indexed documents to the query text.
Parameters
query (str) β The query text for which to find similar documents.
k (int) β The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
similarity_search_limit_score(query: str, k: int = 4, score_threshold: float = 0.2, **kwargs: Any) β List[langchain.schema.Document][source]#
Returns the most similar indexed documents to the query text within the
score_threshold range.
Parameters
query (str) β The query text for which to find similar documents.
k (int) β The number of documents to return. Default is 4.
score_threshold (float) – The minimum matching score required for a document
to be considered a match. Defaults to 0.2.
Because the similarity calculation algorithm is based on cosine similarity,
the smaller the angle, the higher the similarity.
Returns
A list of documents that are most similar to the query text,
including the match score for each document.
Return type
List[Document]
Note
If there are no documents that satisfy the score_threshold value,
an empty list is returned.
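A hedged sketch, assuming rds is an initialized Redis vectorstore such as the one connected above:
# `rds` is an assumption (an existing Redis vectorstore instance).
docs = rds.similarity_search_limit_score(
    "what is a vector database?", k=10, score_threshold=0.2
)
if not docs:
    print("No indexed document met the score threshold.")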
similarity_search_with_score(query: str, k: int = 4) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.SKLearnVectorStore(embedding: langchain.embeddings.base.Embeddings, *, persist_path: Optional[str] = None, serializer: Literal['json', 'bson', 'parquet'] = 'json', metric: str = 'cosine', **kwargs: Any)[source]#
A simple in-memory vector store based on the scikit-learn library
NearestNeighbors implementation.
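A minimal lifecycle sketch, assuming the OpenAI embeddings integration is configured; the corpus and persist_path are placeholders, and serializer is assumed to be forwarded to the constructor through keyword arguments:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import SKLearnVectorStore

texts = ["foo", "bar", "baz"]  # placeholder corpus
store = SKLearnVectorStore.from_texts(
    texts,
    OpenAIEmbeddings(),
    persist_path="/tmp/langchain_sklearn.json",  # hypothetical location
    serializer="json",
)
store.persist()                       # write vectors to persist_path
docs = store.similarity_search("foo", k=2)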
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
kwargs β vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, persist_path: Optional[str] = None, **kwargs: Any) β langchain.vectorstores.sklearn.SKLearnVectorStore[source]#
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
:param query: Text to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.
:param lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
:param embedding: Embedding to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
:param fetch_k: Number of Documents to fetch to pass to MMR algorithm.
:param lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
persist() β None[source]#
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
similarity_search_with_score(query: str, *, k: int = 4, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
class langchain.vectorstores.SingleStoreDB(embedding: langchain.embeddings.base.Embeddings, *, table_name: str = 'embeddings', content_field: str = 'content', metadata_field: str = 'metadata', vector_field: str = 'vector', pool_size: int = 5, max_overflow: int = 10, timeout: float = 30, **kwargs: Any)[source]#
This class serves as a Pythonic interface to the SingleStore DB database. | https://langchain.readthedocs.io/en/latest/reference/modules/vectorstores.html |
The prerequisite for using this class is the installation of the singlestoredb
Python package.
The SingleStoreDB vectorstore can be created by providing an embedding function and
the relevant parameters for the database connection, connection pool, and
optionally, the names of the table and the fields to use.
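A minimal construction sketch; the connection parameters are placeholders and are assumed to be forwarded to the singlestoredb connection pool via keyword arguments:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import SingleStoreDB

# All connection values below are placeholders.
vectorstore = SingleStoreDB(
    OpenAIEmbeddings(),
    table_name="embeddings",
    host="localhost",
    port=3306,
    user="root",
    password="<password>",
    database="demo",
)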
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, embeddings: Optional[List[List[float]]] = None, **kwargs: Any) β List[str][source]#
Add more texts to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings/text to add to the vectorstore.
metadatas (Optional[List[dict]], optional) β Optional list of metadatas.
Defaults to None.
embeddings (Optional[List[List[float]]], optional) β Optional pre-generated
embeddings. Defaults to None.
Returns
empty list
Return type
List[str]
as_retriever(**kwargs: Any) β langchain.vectorstores.singlestoredb.SingleStoreDBRetriever[source]#
connection_kwargs#
Create connection pool.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, table_name: str = 'embeddings', content_field: str = 'content', metadata_field: str = 'metadata', vector_field: str = 'vector', pool_size: int = 5, max_overflow: int = 10, timeout: float = 30, **kwargs: Any) β langchain.vectorstores.singlestoredb.SingleStoreDB[source]#
Create a SingleStoreDB vectorstore from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new table for the embeddings in SingleStoreDB.
Adds the documents to the newly created table.
This is intended to be a quick way to get started.
Example
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Returns the most similar indexed documents to the query text.
Uses cosine similarity.
Parameters
query (str) β The query text for which to find similar documents.
k (int) β The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
similarity_search_with_score(query: str, k: int = 4) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query. Uses cosine similarity.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
vector_field#
Pass the rest of the kwargs to the connection.
class langchain.vectorstores.SupabaseVectorStore(client: supabase.client.Client, embedding: Embeddings, table_name: str, query_name: Union[str, None] = None)[source]#
VectorStore for a Supabase postgres database. Assumes you have the pgvector
extension installed and a match_documents (or similar) function. For more details:
https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase
You can implement your own match_documents function in order to limit the search
space to a subset of documents based on your own authorization or business logic.
Note that the Supabase Python client does not yet support async operations.
If youβd like to use max_marginal_relevance_search, please review the instructions
below on modifying the match_documents function to return matched embeddings.
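A minimal construction sketch, assuming a Supabase project with the pgvector extension and a match_documents function as described above; the URL and key are placeholders read from the environment:
import os
from supabase import create_client
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import SupabaseVectorStore

# SUPABASE_URL and SUPABASE_SERVICE_KEY are placeholder environment variables.
supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])
vectorstore = SupabaseVectorStore(
    client=supabase,
    embedding=OpenAIEmbeddings(),
    table_name="documents",
    query_name="match_documents",
)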
add_texts(texts: Iterable[str], metadatas: Optional[List[dict[Any, Any]]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
kwargs β vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
add_vectors(vectors: List[List[float]], documents: List[langchain.schema.Document]) β List[str][source]# | https://langchain.readthedocs.io/en/latest/reference/modules/vectorstores.html |
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, client: Optional[supabase.client.Client] = None, table_name: Optional[str] = 'documents', query_name: Union[str, None] = 'match_documents', **kwargs: Any) → SupabaseVectorStore[source]#
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search requires that query_name returns matched
embeddings alongside the match documents. The following function
demonstrates how to do this:
```sql
CREATE FUNCTION match_documents_embeddings(query_embedding vector(1536),
                                           match_count int)
RETURNS TABLE(id bigint,
              content text,
              metadata jsonb,
              embedding vector(1536),
              similarity float)
LANGUAGE plpgsql
AS $$
# variable_conflict use_column
BEGIN
    RETURN query
    SELECT
        id,
        content,
        metadata,
        embedding,
        1 - (docstore.embedding <=> query_embedding) AS similarity
    FROM docstore
    ORDER BY docstore.embedding <=> query_embedding
    LIMIT match_count;
END;
$$;
```
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
query_name: str#
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_by_vector_returning_embeddings(query: List[float], k: int) β List[Tuple[langchain.schema.Document, float, numpy.ndarray[numpy.float32, Any]]][source]#
similarity_search_by_vector_with_relevance_scores(query: List[float], k: int) β List[Tuple[langchain.schema.Document, float]][source]#
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query β input text
k β Number of Documents to return. Defaults to 4.
**kwargs β kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
table_name: str# | https://langchain.readthedocs.io/en/latest/reference/modules/vectorstores.html |
class langchain.vectorstores.Tair(embedding_function: langchain.embeddings.base.Embeddings, url: str, index_name: str, content_key: str = 'content', metadata_key: str = 'metadata', search_params: Optional[dict] = None, **kwargs: Any)[source]#
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) β List[str][source]#
Add texts data to an existing index.
create_index_if_not_exist(dim: int, distance_type: str, index_type: str, data_type: str, **kwargs: Any) β bool[source]#
static drop_index(index_name: str = 'langchain', **kwargs: Any) β bool[source]#
Drop an existing index.
Parameters
index_name (str) β Name of the index to drop.
Returns
True if the index is dropped successfully.
Return type
bool
classmethod from_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name: str = 'langchain', content_key: str = 'content', metadata_key: str = 'metadata', **kwargs: Any) β langchain.vectorstores.tair.Tair[source]#
Return VectorStore initialized from documents and embeddings.
classmethod from_existing_index(embedding: langchain.embeddings.base.Embeddings, index_name: str = 'langchain', content_key: str = 'content', metadata_key: str = 'metadata', **kwargs: Any) β langchain.vectorstores.tair.Tair[source]#
Connect to an existing Tair index.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, index_name: str = 'langchain', content_key: str = 'content', metadata_key: str = 'metadata', **kwargs: Any) β langchain.vectorstores.tair.Tair[source]#
Return VectorStore initialized from texts and embeddings.
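A minimal sketch, assuming a reachable Tair instance; the URL is a placeholder and can typically also be supplied via the TAIR_URL environment variable:
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Tair

texts = ["foo", "bar", "baz"]  # placeholder corpus
# The URL below is a placeholder.
vectorstore = Tair.from_texts(
    texts,
    OpenAIEmbeddings(),
    tair_url="redis://user:password@localhost:6379",
    index_name="langchain",
)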
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Returns the most similar indexed documents to the query text.
Parameters
query (str) β The query text for which to find similar documents.
k (int) β The number of documents to return. Default is 4.
Returns
A list of documents that are most similar to the query text.
Return type
List[Document]
class langchain.vectorstores.Tigris(client: TigrisClient, embeddings: Embeddings, index_name: str)[source]#
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
ids β Optional list of ids for documents.
Ids will be autogenerated if not provided.
kwargs β vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, client: Optional[TigrisClient] = None, index_name: Optional[str] = None, **kwargs: Any) β Tigris[source]#
Return VectorStore initialized from texts and embeddings.
property search_index: TigrisVectorStore#
similarity_search(query: str, k: int = 4, filter: Optional[TigrisFilter] = None, **kwargs: Any) β List[Document][source]#
Return docs most similar to query.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[TigrisFilter] = None) β List[Tuple[Document, float]][source]#
Run similarity search with Tigris with distance.
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
filter (Optional[TigrisFilter]) β Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text with distance in float.
Return type
List[Tuple[Document, float]] | https://langchain.readthedocs.io/en/latest/reference/modules/vectorstores.html |
class langchain.vectorstores.Typesense(typesense_client: Client, embedding: Embeddings, *, typesense_collection_name: Optional[str] = None, text_key: str = 'text')[source]#
Wrapper around Typesense vector search.
To use, you should have the typesense python package installed.
Example
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Typesense
import typesense
node = {
"host": "localhost", # For Typesense Cloud use xxx.a1.typesense.net
"port": "8108", # For Typesense Cloud use 443
"protocol": "http" # For Typesense Cloud use https
}
typesense_client = typesense.Client(
{
"nodes": [node],
"api_key": "<API_KEY>",
"connection_timeout_seconds": 2
}
)
typesense_collection_name = "langchain-memory"
embedding = OpenAIEmbeddings()
vectorstore = Typesense(
    typesense_client,
    embedding,
    typesense_collection_name=typesense_collection_name,
    text_key="text",
)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embedding and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
ids β Optional list of ids to associate with the texts.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_client_params(embedding: langchain.embeddings.base.Embeddings, *, host: str = 'localhost', port: Union[str, int] = '8108', protocol: str = 'http', typesense_api_key: Optional[str] = None, connection_timeout_seconds: int = 2, **kwargs: Any) β langchain.vectorstores.typesense.Typesense[source]#
Initialize Typesense directly from client parameters.
Example
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Typesense
# Pass in typesense_api_key as kwarg or set env var "TYPESENSE_API_KEY".
vectorstore = Typesense.from_client_params(
    OpenAIEmbeddings(),
    host="localhost",
    port="8108",
    protocol="http",
    typesense_collection_name="langchain-memory",
)
classmethod from_texts(texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, typesense_client: Optional[Client] = None, typesense_client_params: Optional[dict] = None, typesense_collection_name: Optional[str] = None, text_key: str = 'text', **kwargs: Any) β Typesense[source]#
Construct Typesense wrapper from raw text.
similarity_search(query: str, k: int = 4, filter: Optional[str] = '', **kwargs: Any) β List[langchain.schema.Document][source]#
Return typesense documents most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
filter β typesense filter_by expression to filter documents on
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score(query: str, k: int = 4, filter: Optional[str] = '') β List[Tuple[langchain.schema.Document, float]][source]#
Return typesense documents most similar to query, along with scores.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
filter β typesense filter_by expression to filter documents on
Returns
List of Documents most similar to the query and score for each
class langchain.vectorstores.Vectara(vectara_customer_id: Optional[str] = None, vectara_corpus_id: Optional[str] = None, vectara_api_key: Optional[str] = None)[source]#
Implementation of Vector Store using Vectara (https://vectara.com).
Example
from langchain.vectorstores import Vectara
vectorstore = Vectara(
vectara_customer_id=vectara_customer_id,
vectara_corpus_id=vectara_corpus_id,
vectara_api_key=vectara_api_key
) | https://langchain.readthedocs.io/en/latest/reference/modules/vectorstores.html |
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
as_retriever(**kwargs: Any) β langchain.vectorstores.vectara.VectaraRetriever[source]#
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, **kwargs: Any) β langchain.vectorstores.vectara.Vectara[source]#
Construct Vectara wrapper from raw documents.
This is intended to be a quick way to get started.
Example
from langchain import Vectara
vectara = Vectara.from_texts(
texts,
vectara_customer_id=customer_id,
vectara_corpus_id=corpus_id,
vectara_api_key=api_key,
)
similarity_search(query: str, k: int = 5, alpha: float = 0.025, filter: Optional[str] = None, **kwargs: Any) β List[langchain.schema.Document][source]#
Return Vectara documents most similar to query, along with scores.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 5.
filter – A filter string to apply on metadata. For example, a filter can be
"doc.rating > 3.0 and part.lang = 'deu'"; see
https://docs.vectara.com/docs/search-apis/sql/filter-overview for more
details.
Returns
List of Documents most similar to the query
similarity_search_with_score(query: str, k: int = 5, alpha: float = 0.025, filter: Optional[str] = None, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Return Vectara documents most similar to query, along with scores.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 5.
alpha β parameter for hybrid search (called βlambdaβ in Vectara
documentation).
filter – A filter string to apply on metadata. For example, a filter can be
"doc.rating > 3.0 and part.lang = 'deu'"; see
https://docs.vectara.com/docs/search-apis/sql/filter-overview
for more details.
Returns
List of Documents most similar to the query and score for each.
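A hedged sketch, assuming vectorstore is the Vectara instance from the example above and that the indexed parts carry a lang attribute; the filter string is illustrative:
# `vectorstore` and the `part.lang` attribute are assumptions.
docs_and_scores = vectorstore.similarity_search_with_score(
    "What is the refund policy?",
    k=5,
    alpha=0.025,                      # hybrid-search weight ("lambda" in Vectara docs)
    filter="part.lang = 'eng'",
)
for doc, score in docs_and_scores:
    print(round(score, 3), doc.metadata)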
class langchain.vectorstores.VectorStore[source]#
Interface for vector stores.
async aadd_documents(documents: List[langchain.schema.Document], **kwargs: Any) β List[str][source]#
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
add_documents(documents: List[langchain.schema.Document], **kwargs: Any) β List[str][source]#
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
abstract add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) β List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts β Iterable of strings to add to the vectorstore.
metadatas β Optional list of metadatas associated with the texts.
kwargs β vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
async classmethod afrom_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, **kwargs: Any) β langchain.vectorstores.base.VST[source]#
Return VectorStore initialized from documents and embeddings. | https://langchain.readthedocs.io/en/latest/reference/modules/vectorstores.html |
async classmethod afrom_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) β langchain.vectorstores.base.VST[source]#
Return VectorStore initialized from texts and embeddings.
async amax_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
async amax_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
as_retriever(**kwargs: Any) β langchain.vectorstores.base.VectorStoreRetriever[source]#
async asearch(query: str, search_type: str, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query using specified search type.
async asimilarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
async asimilarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
async asimilarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
classmethod from_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, **kwargs: Any) β langchain.vectorstores.base.VST[source]#
Return VectorStore initialized from documents and embeddings.
abstract classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) β langchain.vectorstores.base.VST[source]#
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
search(query: str, search_type: str, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query using specified search type.
abstract similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]# | https://langchain.readthedocs.io/en/latest/reference/modules/vectorstores.html |
Return docs most similar to embedding vector.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_relevance_scores(query: str, k: int = 4, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query β input text
k β Number of Documents to return. Defaults to 4.
**kwargs β kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 to 1 to
filter the resulting set of retrieved docs
Returns
List of Tuples of (doc, similarity_score)
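A generic, hedged sketch that applies to any concrete implementation of this interface; vectorstore is assumed to be an already-initialized instance:
# MMR-based retriever on top of any VectorStore implementation.
retriever = vectorstore.as_retriever(search_type="mmr", search_kwargs={"k": 4})
docs = retriever.get_relevant_documents("what did the president say?")

# Relevance-scored search directly on the store (scores normalized to [0, 1]).
docs_and_scores = vectorstore.similarity_search_with_relevance_scores(
    "what did the president say?", k=4, score_threshold=0.5
)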
class langchain.vectorstores.Weaviate(client: typing.Any, index_name: str, text_key: str, embedding: typing.Optional[langchain.embeddings.base.Embeddings] = None, attributes: typing.Optional[typing.List[str]] = None, relevance_score_fn: typing.Optional[typing.Callable[[float], float]] = <function _default_score_normalizer>, by_text: bool = True)[source]#
Wrapper around Weaviate vector database.
To use, you should have the weaviate-client python package installed.
Example
import weaviate
from langchain.vectorstores import Weaviate
client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...)
weaviate = Weaviate(client, index_name, text_key)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) β List[str][source]#
Upload texts with metadata (properties) to Weaviate.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) β langchain.vectorstores.weaviate.Weaviate[source]#
Construct Weaviate wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in the Weaviate instance.
Adds the documents to the newly created Weaviate index.
This is intended to be a quick way to get started.
Example
from langchain.vectorstores.weaviate import Weaviate
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
weaviate = Weaviate.from_texts(
texts,
embeddings,
weaviate_url="http://localhost:8080"
)
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20, lambda_mult: float = 0.5, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding β Embedding to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
fetch_k β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns
List of Documents selected by maximal marginal relevance.
similarity_search(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_text(query: str, k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query β Text to look up documents similar to.
k β Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) β List[langchain.schema.Document][source]#
Look up similar documents by embedding vector in Weaviate.
similarity_search_with_score(query: str, k: int = 4, **kwargs: Any) β List[Tuple[langchain.schema.Document, float]][source]#
Return list of documents most similar to the query
text and cosine distance in float for each.
Lower score represents more similarity.
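For example, one might retrieve documents together with their cosine distances (recalling that a lower score means more similar here); the query is illustrative:
results = weaviate.similarity_search_with_score("what is a vector store?", k=4)
for doc, distance in results:
    print(distance, doc.page_content[:80])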
class langchain.vectorstores.Zilliz(embedding_function: langchain.embeddings.base.Embeddings, collection_name: str = 'LangChainCollection', connection_args: Optional[dict[str, Any]] = None, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: Optional[bool] = False)[source]#
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, collection_name: str = 'LangChainCollection', connection_args: dict[str, Any] = {}, consistency_level: str = 'Session', index_params: Optional[dict] = None, search_params: Optional[dict] = None, drop_old: bool = False, **kwargs: Any) β langchain.vectorstores.zilliz.Zilliz[source]#
Create a Zilliz collection, indexes it with HNSW, and insert data.
Parameters
texts (List[str]) β Text data.
embedding (Embeddings) β Embedding function.
metadatas (Optional[List[dict]]) β Metadata for each text if it exists.
Defaults to None.
collection_name (str, optional) β Collection name to use. Defaults to
βLangChainCollectionβ.
connection_args (dict[str, Any], optional) β Connection args to use. Defaults
to DEFAULT_MILVUS_CONNECTION.
consistency_level (str, optional) β Which consistency level to use. Defaults
to βSessionβ.
index_params (Optional[dict], optional) β Which index_params to use.
Defaults to None.
search_params (Optional[dict], optional) β Which search params to use.
Defaults to None.
drop_old (Optional[bool], optional) β Whether to drop the collection with
that name if it exists. Defaults to False.
Returns
Zilliz Vector Store
Return type
Zilliz
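A hedged sketch of building a Zilliz collection from raw texts with the classmethod above; the connection arguments are placeholders for a real Zilliz/Milvus endpoint and the texts variable is assumed to exist already:
from langchain.vectorstores import Zilliz
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
vector_store = Zilliz.from_texts(
    texts,
    embeddings,
    collection_name="LangChainCollection",
    connection_args={"uri": "<ZILLIZ_URI>", "token": "<ZILLIZ_TOKEN>"},  # placeholder credentials
    drop_old=True,  # drop any existing collection with the same name
)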
LLMs#
Wrappers on top of large language models APIs.
pydantic model langchain.llms.AI21[source]#
Wrapper around AI21 large language models.
To use, you should have the environment variable AI21_API_KEY
set with your API key.
Example
from langchain.llms import AI21
ai21 = AI21(model="j2-jumbo-instruct")
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field base_url: Optional[str] = None#
Base url to use, if None decides based on model name.
field countPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens according to count.
field frequencyPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens according to frequency.
field logitBias: Optional[Dict[str, float]] = None#
Adjust the probability of specific tokens being generated.
field maxTokens: int = 256#
The maximum number of tokens to generate in the completion.
field minTokens: int = 0#
The minimum number of tokens to generate in the completion.
field model: str = 'j2-jumbo-instruct'#
Model name to use.
field numResults: int = 1#
How many completions to generate for each prompt.
field presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens.
field temperature: float = 0.7#
What sampling temperature to use.
field topP: float = 1.0#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
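Putting several of the fields above together, a minimal configured call might look like the following (the prompt is illustrative and the API key is read from AI21_API_KEY):
from langchain.llms import AI21

ai21 = AI21(
    model="j2-jumbo-instruct",
    temperature=0.7,
    maxTokens=256,
    numResults=1,
)
print(ai21("Suggest a name for a friendly robot."))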
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.AlephAlpha[source]#
Wrapper around Aleph Alpha large language models.
To use, you should have the aleph_alpha_client python package installed, and the
environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Parameters are explained more in depth here:
Aleph-Alpha/aleph-alpha-client
Example
from langchain.llms import AlephAlpha
aleph_alpha = AlephAlpha(aleph_alpha_api_key="my-api-key")
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field aleph_alpha_api_key: Optional[str] = None#
API key for Aleph Alpha API.
field best_of: Optional[int] = None#
returns the one with the "best of" results
(highest log probability per token)
field completion_bias_exclusion_first_token_only: bool = False#
Only consider the first token for the completion_bias_exclusion.
field contextual_control_threshold: Optional[float] = None#
If set to None, attention control parameters only apply to those tokens that have
explicitly been set in the request.
If set to a non-None value, control parameters are also applied to similar tokens.
field control_log_additive: Optional[bool] = True#
True: apply control by adding the log(control_factor) to attention scores.
False: (attention_scores - attention_scores.min(-1)) * control_factor
field echo: bool = False#
Echo the prompt in the completion.
field frequency_penalty: float = 0.0#
Penalizes repeated tokens according to frequency.
field log_probs: Optional[int] = None#
Number of top log probabilities to be returned for each generated token.
field logit_bias: Optional[Dict[int, float]] = None#
The logit bias allows to influence the likelihood of generating tokens.
field maximum_tokens: int = 64#
The maximum number of tokens to be generated.
field minimum_tokens: Optional[int] = 0#
Generate at least this number of tokens.
field model: Optional[str] = 'luminous-base'#
Model name to use.
field n: int = 1#
How many completions to generate for each prompt.
field penalty_bias: Optional[str] = None#
Penalty bias for the completion.
field penalty_exceptions: Optional[List[str]] = None#
List of strings that may be generated without penalty,
regardless of other penalty settings
field penalty_exceptions_include_stop_sequences: Optional[bool] = None#
Should stop_sequences be included in penalty_exceptions.
field presence_penalty: float = 0.0#
Penalizes repeated tokens.
field raw_completion: bool = False#
Force the raw completion of the model to be returned.
field repetition_penalties_include_completion: bool = True#
Flag deciding whether presence penalty or frequency penalty
are updated from the completion.
field repetition_penalties_include_prompt: Optional[bool] = False#
Flag deciding whether presence penalty or frequency penalty are
updated from the prompt.
field stop_sequences: Optional[List[str]] = None#
Stop sequences to use.
field temperature: float = 0.0#
A non-negative float that tunes the degree of randomness in generation.
field tokens: Optional[bool] = False#
return tokens of completion.
field top_k: int = 0#
Number of most likely tokens to consider at each step.
field top_p: float = 0.0#
Total probability mass of tokens to consider at each step.
field use_multiplicative_presence_penalty: Optional[bool] = False#
Flag deciding whether presence penalty is applied
multiplicatively (True) or additively (False).
field verbose: bool [Optional]#
Whether to print out response text.
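A sketch combining a few of the fields above into a configured instance; the API key and prompt are placeholders:
from langchain.llms import AlephAlpha

llm = AlephAlpha(
    model="luminous-base",
    maximum_tokens=64,
    temperature=0.0,
    stop_sequences=["\n"],
    aleph_alpha_api_key="my-api-key",  # placeholder; can also come from ALEPH_ALPHA_API_KEY
)
print(llm("Q: What is an LLM?\nA:"))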
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Anthropic[source]#
Wrapper around Anthropicβs large language models.
To use, you should have the anthropic python package installed, and the
environment variable ANTHROPIC_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
import anthropic
from langchain.llms import Anthropic
model = Anthropic(model="<model_name>", anthropic_api_key="my-api-key")
# Simplest invocation, automatically wrapped with HUMAN_PROMPT
# and AI_PROMPT.
response = model("What are the biggest risks facing humanity?")
# Or if you want to use the chat mode, build a few-shot-prompt, or
# put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT:
raw_prompt = "What are the biggest risks facing humanity?"
prompt = f"{anthropic.HUMAN_PROMPT} {prompt}{anthropic.AI_PROMPT}"
response = model(prompt)
Validators
raise_deprecation Β» all fields
raise_warning Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field default_request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to Anthropic Completion API. Default is 600 seconds.
field max_tokens_to_sample: int = 256#
Denotes the number of tokens to predict per generation.
field model: str = 'claude-v1'#
Model name to use.
field streaming: bool = False#
Whether to stream the results.
field temperature: Optional[float] = None#
A non-negative float that tunes the degree of randomness in generation.
field top_k: Optional[int] = None#
Number of most likely tokens to consider at each step.
field top_p: Optional[float] = None#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int[source]#
Calculate number of tokens.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) β Generator[source]#
Call Anthropic completion_stream and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt β The prompt to pass into the model.
stop β Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from Anthropic.
Example
prompt = "Write a poem about a stream." | https://langchain.readthedocs.io/en/latest/reference/modules/llms.html |
03890c24ea5c-5 | Example
prompt = "Write a poem about a stream."
prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
generator = anthropic.stream(prompt)
for token in generator:
yield token
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Anyscale[source]#
Wrapper around Anyscale Services.
To use, you should have the environment variable ANYSCALE_SERVICE_URL,
ANYSCALE_SERVICE_ROUTE and ANYSCALE_SERVICE_TOKEN set with your Anyscale
Service, or pass it as a named parameter to the constructor.
Example
from langchain.llms import Anyscale
anyscale = Anyscale(anyscale_service_url="SERVICE_URL",
                    anyscale_service_route="SERVICE_ROUTE",
                    anyscale_service_token="SERVICE_TOKEN")
# Use Ray for distributed processing
import ray

prompt_list = []

@ray.remote
def send_query(llm, prompt):
    resp = llm(prompt)
    return resp

futures = [send_query.remote(anyscale, prompt) for prompt in prompt_list]
results = ray.get(futures)
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field model_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model. Reserved for future use.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Aviary[source]#
Allows you to use an Aviary backend.
Aviary is a backend for hosted models. You can
find out more about Aviary at
ray-project/aviary
It has no dependencies, since it connects to the backend
directly.
To get a list of the models supported on an
aviary, follow the instructions on the web site to
install the aviary CLI and then use:
aviary models
You must at least specify the environment
variable or parameter AVIARY_URL.
You may optionally specify the environment variable
or parameter AVIARY_TOKEN.
Example
from langchain.llms import Aviary
light = Aviary(aviary_url='AVIARY_URL',
model='amazon/LightGPT')
result = light.predict('How do you make fried rice?')
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.AzureOpenAI[source]#
Wrapper around Azure-specific OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import AzureOpenAI
openai = AzureOpenAI(model_name="text-davinci-003")
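For Azure deployments, the deployment_name field documented below usually needs to be set as well; a hedged sketch (the deployment name is a placeholder, and the Azure endpoint and API key are assumed to be configured through the usual OpenAI environment variables):
from langchain.llms import AzureOpenAI

llm = AzureOpenAI(
    deployment_name="my-davinci-deployment",  # placeholder Azure deployment name
    model_name="text-davinci-003",
    temperature=0.7,
    max_tokens=256,
)
print(llm("Tell me a joke."))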
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_azure_settings Β» all fields
validate_environment Β» all fields
field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#
Set of special tokens that are allowed.
field batch_size: int = 20#
Batch size to use when passing multiple documents to generate.
field best_of: int = 1#
Generates best_of completions server-side and returns the "best".
field deployment_name: str = ''#
Deployment name to use.
field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#
Set of special tokens that are not allowed.
field frequency_penalty: float = 0#
Penalizes repeated tokens according to frequency.
field logit_bias: Optional[Dict[str, float]] [Optional]#
Adjust the probability of specific tokens being generated.
field max_retries: int = 6#
Maximum number of retries to make when generating.
field max_tokens: int = 256#
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'text-davinci-003' (alias 'model')#
Model name to use.
field n: int = 1#
How many completions to generate for each prompt.
field presence_penalty: float = 0#
Penalizes repeated tokens.
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to OpenAI completion API. Default is 600 seconds.
field streaming: bool = False#
Whether to stream the results or not.
field temperature: float = 0.7#
What sampling temperature to use.
field top_p: float = 1#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) β langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) β List[List[str]]#
Get the sub prompts for llm call.
get_token_ids(text: str) β List[int]#
Get the token IDs using the tiktoken package.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
max_tokens_for_prompt(prompt: str) β int#
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt β The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) β int#
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname β The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
prep_streaming_params(stop: Optional[List[str]] = None) β Dict[str, Any]#
Prepare the params for streaming.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) β Generator#
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt β The prompts to pass into the model.
stop β Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Banana[source]#
Wrapper around Banana large language models.
To use, you should have the banana-dev python package installed,
and the environment variable BANANA_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import Banana
banana = Banana(model_key="")
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field model_key: str = ''#
model endpoint to use
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Beam[source]#
Wrapper around Beam API for gpt2 large language model.
To use, you should have the beam-sdk python package installed,
and the environment variable BEAM_CLIENT_ID set with your client id
and BEAM_CLIENT_SECRET set with your client secret. Information on how
to get these is available here: https://docs.beam.cloud/account/api-keys.
The wrapper can then be called as follows, where the name, cpu, memory, gpu,
python version, and python packages can be updated accordingly. Once deployed,
the instance can be called.
Example
llm = Beam(model_name="gpt2",
name="langchain-gpt2",
cpu=8,
memory="32Gi",
gpu="A10G",
python_version="python3.8",
python_packages=[
"diffusers[torch]>=0.10",
"transformers",
"torch",
"pillow",
"accelerate",
"safetensors",
"xformers",],
max_length=50)
llm._deploy()
call_result = llm._call(input)
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
field url: str = ''#
model endpoint to use
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
app_creation() β None[source]#
Creates a Python file which will contain your Beam app definition.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
run_creation() β None[source]#
Creates a Python file which will be deployed on beam.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Bedrock[source]#
LLM provider to invoke Bedrock models.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Bedrock service.
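A hedged constructor sketch built only from the fields documented below; the model id, region, and profile name are placeholders:
from langchain.llms import Bedrock

llm = Bedrock(
    model_id="amazon.titan-tg1-large",
    region_name="us-west-2",             # optional; falls back to AWS_DEFAULT_REGION
    credentials_profile_name="default",  # optional; default credential chain is used if omitted
    model_kwargs={"temperature": 0.5},   # optional keyword arguments passed to the model
)
print(llm("Explain what a foundation model is in one sentence."))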
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
field model_id: str [Required]#
Id of the model to call, e.g., amazon.titan-tg1-large; this is
equivalent to the modelId property in the list-foundation-models API.
field model_kwargs: Optional[Dict] = None#
Keyword arguments to pass to the model.
field region_name: Optional[str] = None#
The AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable
or the region specified in ~/.aws/config if it is not provided here.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.CTransformers[source]#
Wrapper around the C Transformers LLM interface.
To use, you should have the ctransformers python package installed.
See marella/ctransformers
Example
from langchain.llms import CTransformers
llm = CTransformers(model="/path/to/ggml-gpt-2.bin", model_type="gpt2")
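A further sketch passing the config field documented below; the repository name and the config keys (max_new_tokens, temperature) follow the ctransformers project's own documentation and are assumptions, not values taken from this reference:
llm = CTransformers(
    model="marella/gpt-2-ggml",
    config={"max_new_tokens": 256, "temperature": 0.8}
)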
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field config: Optional[Dict[str, Any]] = None#
The config parameters.
See marella/ctransformers
field lib: Optional[str] = None#
The path to a shared library or one of avx2, avx, basic.
field model: str [Required]# | https://langchain.readthedocs.io/en/latest/reference/modules/llms.html |
The path to a model file or directory or the name of a Hugging Face Hub
model repo.
field model_file: Optional[str] = None#
The name of the model file in repo or directory.
field model_type: Optional[str] = None#
The model type.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text. | https://langchain.readthedocs.io/en/latest/reference/modules/llms.html |
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.CerebriumAI[source]#
Wrapper around CerebriumAI large language models.
To use, you should have the cerebrium python package installed, and the
environment variable CEREBRIUMAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import CerebriumAI
cerebrium = CerebriumAI(endpoint_url="")
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field endpoint_url: str = ''#
Model endpoint to use.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult. | https://langchain.readthedocs.io/en/latest/reference/modules/llms.html |
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Cohere[source]#
Wrapper around Cohere large language models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
from langchain.llms import Cohere
cohere = Cohere(model="gptd-instruct-tft", cohere_api_key="my-api-key")
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field frequency_penalty: float = 0.0#
Penalizes repeated tokens according to frequency. Between 0 and 1.
field k: int = 0#
Number of most likely tokens to consider at each step.
field max_retries: int = 10#
Maximum number of retries to make when generating.
field max_tokens: int = 256#
Denotes the number of tokens to predict per generation.
field model: Optional[str] = None#
Model name to use.
field p: int = 1#
Total probability mass of tokens to consider at each step.
field presence_penalty: float = 0.0#
Penalizes repeated tokens. Between 0 and 1.
field temperature: float = 0.75#
A non-negative float that tunes the degree of randomness in generation.
field truncate: Optional[str] = None#
Specify how the client handles inputs longer than the maximum token
length: Truncate from START, END or NONE
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text. | https://langchain.readthedocs.io/en/latest/reference/modules/llms.html |
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Databricks[source]#
LLM wrapper around a Databricks serving endpoint or a cluster driver proxy app.
It supports two endpoint types:
Serving endpoint (recommended for both production and development).
We assume that an LLM was registered and deployed to a serving endpoint.
To wrap it as an LLM you must have βCan Queryβ permission to the endpoint.
Set endpoint_name accordingly and do not set cluster_id and
cluster_driver_port.
The expected model signature is:
inputs:
[{"name": "prompt", "type": "string"},
{"name": "stop", "type": "list[string]"}]
outputs: [{"type": "string"}]
Cluster driver proxy app (recommended for interactive development).
One can load an LLM on a Databricks interactive cluster and start a local HTTP | https://langchain.readthedocs.io/en/latest/reference/modules/llms.html |
server on the driver node to serve the model at / using HTTP POST method
with JSON input/output.
Please use a port number between [3000, 8000] and let the server listen on
the driver IP address or simply 0.0.0.0 instead of localhost only.
To wrap it as an LLM you must have βCan Attach Toβ permission to the cluster.
Set cluster_id and cluster_driver_port and do not set endpoint_name.
The expected server schema (using JSON schema) is:
inputs:
{"type": "object",
"properties": {
"prompt": {"type": "string"},
"stop": {"type": "array", "items": {"type": "string"}}},
"required": ["prompt"]}`
outputs: {"type": "string"}
If the endpoint model signature is different or you want to set extra params,
you can use transform_input_fn and transform_output_fn to apply necessary
transformations before and after the query.
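Example: a minimal sketch wrapping an existing serving endpoint (the endpoint name is a hypothetical placeholder; the transform function is optional and assumes the request fields are passed as keyword arguments, per the transform_input_fn field described below)
from langchain.llms import Databricks

def transform_input(**request):
    # Illustrative only: apply a simple prompt template before the request is sent.
    request["prompt"] = f"Answer concisely: {request['prompt']}"
    return request

llm = Databricks(endpoint_name="my-llm-endpoint", transform_input_fn=transform_input)
llm("How are you?")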
Validators
raise_deprecation Β» all fields
set_cluster_driver_port Β» cluster_driver_port
set_cluster_id Β» cluster_id
set_model_kwargs Β» model_kwargs
set_verbose Β» verbose
field api_token: str [Optional]#
Databricks personal access token.
If not provided, the default value is determined by
the DATABRICKS_TOKEN environment variable if present, or
an automatically generated temporary token if running inside a Databricks
notebook attached to an interactive cluster in βsingle userβ or
βno isolation sharedβ mode.
field cluster_driver_port: Optional[str] = None#
The port number used by the HTTP server running on the cluster driver node.
The server should listen on the driver IP address, or simply 0.0.0.0, rather than localhost only.
We recommend using a port number between [3000, 8000].
field cluster_id: Optional[str] = None#
ID of the cluster if connecting to a cluster driver proxy app.
If neither endpoint_name nor cluster_id is provided and the code runs
inside a Databricks notebook attached to an interactive cluster in βsingle userβ
or βno isolation sharedβ mode, the current cluster ID is used as default.
You must not set both endpoint_name and cluster_id.
field endpoint_name: Optional[str] = None#
Name of the model serving endpoint.
You must specify the endpoint name to connect to a model serving endpoint.
You must not set both endpoint_name and cluster_id.
field host: str [Optional]#
Databricks workspace hostname.
If not provided, the default value is determined by
the DATABRICKS_HOST environment variable if present, or
the hostname of the current Databricks workspace if running inside
a Databricks notebook attached to an interactive cluster in βsingle userβ
or βno isolation sharedβ mode.
field model_kwargs: Optional[Dict[str, Any]] = None#
Extra parameters to pass to the endpoint.
field transform_input_fn: Optional[Callable] = None#
A function that transforms {prompt, stop, **kwargs} into a JSON-compatible
request object that the endpoint accepts.
For example, you can apply a prompt template to the input prompt.
field transform_output_fn: Optional[Callable[[...], str]] = None#
A function that transforms the output from the endpoint to the generated text.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text. | https://langchain.readthedocs.io/en/latest/reference/modules/llms.html |
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.DeepInfra[source]#
Wrapper around DeepInfra deployed models.
To use, you should have the requests python package installed, and the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import DeepInfra
di = DeepInfra(model_id="google/flan-t5-xl",
deepinfra_api_token="my-api-key")
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field verbose: bool [Optional]#
Whether to print out response text. | https://langchain.readthedocs.io/en/latest/reference/modules/llms.html |
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example: | https://langchain.readthedocs.io/en/latest/reference/modules/llms.html |
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.FakeListLLM[source]#
Fake LLM wrapper for testing purposes.
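Example: a minimal testing sketch (it assumes the wrapper accepts a responses list of canned outputs, a parameter that is not listed among the fields shown here)
from langchain.llms import FakeListLLM
llm = FakeListLLM(responses=["first canned answer", "second canned answer"])
llm("any prompt")  # returns the next canned answer in order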
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict(). | https://langchain.readthedocs.io/en/latest/reference/modules/llms.html |
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.ForefrontAI[source]#
Wrapper around ForefrontAI large language models.
To use, you should have the environment variable FOREFRONTAI_API_KEY
set with your API key.
Example
from langchain.llms import ForefrontAI
forefrontai = ForefrontAI(endpoint_url="")
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field base_url: Optional[str] = None#
Base URL to use. If None, it is decided based on the model name.
field endpoint_url: str = ''#
Model endpoint URL to use.
field length: int = 256#
The maximum number of tokens to generate in the completion.
field repetition_penalty: int = 1#
Penalizes repeated tokens according to frequency.
field temperature: float = 0.7#
What sampling temperature to use.
field top_k: int = 40#
The number of highest probability vocabulary tokens to
keep for top-k-filtering.
field top_p: float = 1.0#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM. | https://langchain.readthedocs.io/en/latest/reference/modules/llms.html |
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.GPT4All[source]#
Wrapper around GPT4All language models.
To use, you should have the gpt4all python package installed, the
pre-trained model file, and the modelβs config information.
Example
from langchain.llms import GPT4All
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)
# Simplest invocation
response = model("Once upon a time, ")
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field allow_download: bool = False#
If model does not exist in ~/.cache/gpt4all/, download it.
field context_erase: float = 0.5#
Leave (n_ctx * context_erase) tokens, starting from the beginning,
if the context has run out.
field echo: Optional[bool] = False#
Whether to echo the prompt.
field embedding: bool = False#
Use embedding mode only.
field f16_kv: bool = False#
Use half-precision for key/value cache.
field logits_all: bool = False#
Return logits for all tokens, not just the last token.
field model: str [Required]#
Path to the pre-trained GPT4All model file.
field n_batch: int = 1#
Batch size for prompt processing.
field n_ctx: int = 512#
Token context window.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
field n_predict: Optional[int] = 256#
The maximum number of tokens to generate.
field n_threads: Optional[int] = 4#
Number of threads to use.
field repeat_last_n: Optional[int] = 64#
Last n tokens to penalize.
field repeat_penalty: Optional[float] = 1.3#
The penalty to apply to repeated tokens.
field seed: int = 0#
Seed. If -1, a random seed is used.
field stop: Optional[List[str]] = []# | https://langchain.readthedocs.io/en/latest/reference/modules/llms.html |
A list of strings to stop generation when encountered.
field streaming: bool = False#
Whether to stream the results or not.
field temp: Optional[float] = 0.8#
The temperature to use for sampling.
field top_k: Optional[int] = 40#
The top-k value to use for sampling.
field top_p: Optional[float] = 0.95#
The top-p value to use for sampling.
field use_mlock: bool = False#
Force system to keep model in RAM.
field verbose: bool [Optional]#
Whether to print out response text.
field vocab_only: bool = False#
Only load the vocabulary, no weights.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.GooglePalm[source]#
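Example: a minimal usage sketch (it assumes the google.generativeai package is installed and an API key is available, e.g. via the GOOGLE_API_KEY environment variable, which is not documented in this entry; model_name and temperature are fields listed below)
from langchain.llms import GooglePalm
llm = GooglePalm(model_name="models/text-bison-001", temperature=0.7)
llm("Write a haiku about databases.")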
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field max_output_tokens: Optional[int] = None#
Maximum number of tokens to include in a candidate. Must be greater than zero.
If unset, will default to 64.
field model_name: str = 'models/text-bison-001'#
Model name to use.
field n: int = 1#
Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated.
field temperature: float = 0.7#
Run inference with this temperature. Must by in the closed interval
[0.0, 1.0].
field top_k: Optional[int] = None#
Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive.
field top_p: Optional[float] = None#
Decode using nucleus sampling: consider the smallest set of tokens whose
probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model# | https://langchain.readthedocs.io/en/latest/reference/modules/llms.html |
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.GooseAI[source]#
Wrapper around GooseAI large language models.
To use, you should have the openai python package installed, and the
environment variable GOOSEAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import GooseAI
gooseai = GooseAI(model_name="gpt-neo-20b")
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field frequency_penalty: float = 0#
Penalizes repeated tokens according to frequency.
field logit_bias: Optional[Dict[str, float]] [Optional]#
Adjust the probability of specific tokens being generated.
field max_tokens: int = 256#
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
field min_tokens: int = 1#
The minimum number of tokens to generate in the completion.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-neo-20b'#
Model name to use.
field n: int = 1#
How many completions to generate for each prompt.
field presence_penalty: float = 0#
Penalizes repeated tokens.
field temperature: float = 0.7#
What sampling temperature to use.
field top_p: float = 1# | https://langchain.readthedocs.io/en/latest/reference/modules/llms.html |
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.HuggingFaceEndpoint[source]#
Wrapper around HuggingFaceHub Inference Endpoints.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import HuggingFaceEndpoint
endpoint_url = (
"https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud"
)
hf = HuggingFaceEndpoint(
endpoint_url=endpoint_url,
huggingfacehub_api_token="my-api-key"
)
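A minimal calling sketch follows; the endpoint URL and token are placeholders carried over from the example above, and the task value assumes the deployed model is a text-generation model:
from langchain.llms import HuggingFaceEndpoint

# Placeholder endpoint URL and token; task must match the deployed model.
hf = HuggingFaceEndpoint(
    endpoint_url="https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud",
    huggingfacehub_api_token="my-api-key",
    task="text-generation",
)
print(hf("Explain transfer learning in one sentence."))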
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field endpoint_url: str = ''#
Endpoint URL to use.
field model_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model.
field task: Optional[str] = None#
Task to call the model with.
Should be a task that returns generated_text or summary_text.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.HuggingFaceHub[source]#
Wrapper around HuggingFaceHub models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation, text2text-generation and summarization for now.
Example
from langchain.llms import HuggingFaceHub
hf = HuggingFaceHub(repo_id="gpt2", huggingfacehub_api_token="my-api-key")
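For a slightly fuller sketch (the repo_id, task, and model_kwargs values here are illustrative assumptions, not defaults):
from langchain.llms import HuggingFaceHub

# Illustrative repo and parameters; the token can also come from the
# HUGGINGFACEHUB_API_TOKEN environment variable.
hf = HuggingFaceHub(
    repo_id="google/flan-t5-xl",
    task="text2text-generation",
    model_kwargs={"temperature": 0.5, "max_length": 64},
    huggingfacehub_api_token="my-api-key",
)
print(hf("Translate to German: I love programming."))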
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field model_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model.
field repo_id: str = 'gpt2'#
Model name to use.
field task: Optional[str] = None#
Task to call the model with.
Should be a task that returns generated_text or summary_text.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.HuggingFacePipeline[source]#
Wrapper around HuggingFace Pipeline API.
To use, you should have the transformers python package installed.
Only supports text-generation, text2text-generation and summarization for now.
Example using from_model_id:
from langchain.llms import HuggingFacePipeline
hf = HuggingFacePipeline.from_model_id(
model_id="gpt2",
task="text-generation",
pipeline_kwargs={"max_new_tokens": 10},
)
Example passing pipeline in directly:
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
hf = HuggingFacePipeline(pipeline=pipe)
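Either way, the resulting object behaves like any other LLM; a small self-contained sketch (the prompt template and max_new_tokens value are illustrative) of plugging it into an LLMChain:
from langchain import LLMChain, PromptTemplate
from langchain.llms import HuggingFacePipeline

# Build a small local pipeline and wrap it, then use it inside a chain.
hf = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 32},
)
template = "Question: {question}\n\nAnswer:"
prompt = PromptTemplate(template=template, input_variables=["question"])
chain = LLMChain(prompt=prompt, llm=hf)
print(chain.run("What is electroencephalography?"))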
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field model_id: str = 'gpt2'#
Model name to use.
field model_kwargs: Optional[dict] = None#
Keyword arguments passed to the model.
field pipeline_kwargs: Optional[dict] = None#
Keyword arguments passed to the pipeline.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
classmethod from_model_id(model_id: str, task: str, device: int = - 1, model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None, **kwargs: Any) β langchain.llms.base.LLM[source]#
Construct the pipeline object from model_id and task.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode# | https://langchain.readthedocs.io/en/latest/reference/modules/llms.html |
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.HuggingFaceTextGenInference[source]#
HuggingFace text generation inference API.
This class is a wrapper around the HuggingFace text generation inference API.
It is used to generate text from a given prompt.
Attributes:
- max_new_tokens: The maximum number of tokens to generate.
- top_k: The number of top-k tokens to consider when generating text.
- top_p: The cumulative probability threshold for generating text.
- typical_p: The typical probability threshold for generating text.
- temperature: The temperature to use when generating text.
- repetition_penalty: The repetition penalty to use when generating text.
- stop_sequences: A list of stop sequences to use when generating text.
- seed: The seed to use when generating text.
- inference_server_url: The URL of the inference server to use.
- timeout: The timeout value in seconds to use while connecting to inference server.
- client: The client object used to communicate with the inference server.
Methods:
- _call: Generates text based on a given prompt and stop sequences.
- _llm_type: Returns the type of LLM.
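The docstring lists the attributes but no example, so here is a hedged construction sketch; the server URL and sampling values are assumptions about a locally running text-generation-inference server, not defaults:
from langchain.llms import HuggingFaceTextGenInference

# Illustrative values; inference_server_url must point at a running
# text-generation-inference deployment.
llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8010/",
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    temperature=0.01,
    repetition_penalty=1.03,
)
print(llm("What is Deep Learning?"))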
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.HumanInputLLM[source]#
A LLM wrapper which returns user input as the response.
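A short sketch of how this can be used, for example to step through a chain or agent without calling a real model; the answer is simply whatever the person running the script types in response to the displayed prompt (the default prompt/input functions are assumed here):
from langchain.llms import HumanInputLLM

# The "model" prints the prompt and waits for the user to type the completion.
llm = HumanInputLLM()
answer = llm("What would a helpful assistant reply to 'Hi there!'?")
print(answer)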
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.LlamaCpp[source]#
Wrapper around the llama.cpp model.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: abetlen/llama-cpp-python
Example
from langchain.llms import LlamaCpp
llm = LlamaCpp(model_path="/path/to/llama/model")
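A slightly fuller construction sketch; the model path and parameter values below are illustrative, not recommendations (all fields shown are documented below):
from langchain.llms import LlamaCpp

# Illustrative local setup for a GGML model file on disk.
llm = LlamaCpp(
    model_path="/path/to/llama/model.bin",
    n_ctx=2048,
    n_threads=8,
    temperature=0.8,
    max_tokens=256,
)
print(llm("Q: Name the planets in the solar system. A:"))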
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field echo: Optional[bool] = False#
Whether to echo the prompt.
field f16_kv: bool = True#
Use half-precision for key/value cache.
field last_n_tokens_size: Optional[int] = 64#
The number of tokens to look back when applying the repeat_penalty.
field logits_all: bool = False#
Return logits for all tokens, not just the last token.
field logprobs: Optional[int] = None#
The number of logprobs to return. If None, no logprobs are returned.
field lora_base: Optional[str] = None#
The path to the Llama LoRA base model.
field lora_path: Optional[str] = None#
The path to the Llama LoRA. If None, no LoRa is loaded.
field max_tokens: Optional[int] = 256#
The maximum number of tokens to generate.
field model_path: str [Required]#
The path to the Llama model file.
field n_batch: Optional[int] = 8#
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
field n_ctx: int = 512#
Token context window.
field n_gpu_layers: Optional[int] = None#
Number of layers to be loaded into gpu memory. Default None.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
field n_threads: Optional[int] = None#
Number of threads to use.
If None, the number of threads is automatically determined.
field repeat_penalty: Optional[float] = 1.1#
The penalty to apply to repeated tokens.
field seed: int = -1#
Seed. If -1, a random seed is used.
field stop: Optional[List[str]] = []#
A list of strings to stop generation when encountered.
field streaming: bool = True#
Whether to stream the results, token by token.
field suffix: Optional[str] = None#
A suffix to append to the generated text. If None, no suffix is appended.
field temperature: Optional[float] = 0.8#
The temperature to use for sampling.
field top_k: Optional[int] = 40#
The top-k value to use for sampling.
field top_p: Optional[float] = 0.95#
The top-p value to use for sampling.
field use_mlock: bool = False#
Force system to keep model in RAM.
field use_mmap: Optional[bool] = True#
Whether to keep the model loaded in RAM
field verbose: bool [Optional]#
Whether to print out response text.
field vocab_only: bool = False#
Only load the vocabulary, no weights.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int[source]#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[langchain.callbacks.manager.CallbackManagerForLLMRun] = None) β Generator[Dict, None, None][source]#
Yields results objects as they are generated in real time.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
It also calls the callback managerβs on_llm_new_token event with
similar parameters to the OpenAI LLM class method of the same name.
Args:
prompt: The prompt to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
A generator representing the stream of tokens being generated.
Yields:
A dictionary-like object containing a string token and metadata.
See llama-cpp-python docs and below for more.
Example:
from langchain.llms import LlamaCpp
llm = LlamaCpp(
    model_path="/path/to/local/model.bin",
    temperature=0.5,
)
for chunk in llm.stream(
    "Ask 'Hi, how are you?' like a pirate:'",
    stop=["'", "\n"],
):
    result = chunk["choices"][0]
    print(result["text"], end="", flush=True)
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Modal[source]#
Wrapper around Modal large language models.
To use, you should have the modal-client python package installed.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import Modal
modal = Modal(endpoint_url="")
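The endpoint URL is intentionally left blank above; as an illustrative sketch only, with a hypothetical Modal web endpoint URL and illustrative model parameters:
from langchain.llms import Modal

# "https://example--my-llm.modal.run" is a hypothetical web endpoint that
# serves a language model; the model_kwargs values are illustrative.
modal_llm = Modal(
    endpoint_url="https://example--my-llm.modal.run",
    model_kwargs={"temperature": 0.2},
)
print(modal_llm("Suggest a name for a robotics startup."))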
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
field endpoint_url: str = ''#
Model endpoint to use.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.MosaicML[source]#
Wrapper around MosaicMLβs LLM inference service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.llms import MosaicML
endpoint_url = (
"https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict"
)
mosaic_llm = MosaicML(
endpoint_url=endpoint_url,
mosaicml_api_token="my-api-key"
)
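A usage sketch building on the example above; the endpoint URL shown is the documented default, and inject_instruction_format (documented below) is assumed to be appropriate for the instruct-tuned MPT model:
from langchain.llms import MosaicML

# Assumes MOSAICML_API_TOKEN is set, or pass mosaicml_api_token explicitly.
mosaic_llm = MosaicML(
    endpoint_url="https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict",
    inject_instruction_format=True,
)
print(mosaic_llm("Write three bullet points about glaciers."))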
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict'#
Endpoint URL to use.
field inject_instruction_format: bool = False#
Whether to inject the instruction format into the prompt.
field model_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model.
field retry_sleep: float = 1.0#
How long to sleep if a rate limit is encountered.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult# | https://langchain.readthedocs.io/en/latest/reference/modules/llms.html |
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.NLPCloud[source]#
Wrapper around NLPCloud large language models.
To use, you should have the nlpcloud python package installed, and the
environment variable NLPCLOUD_API_KEY set with your API key.
Example
from langchain.llms import NLPCloud
nlpcloud = NLPCloud(model="gpt-neox-20b")
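A slightly fuller sketch; NLPCLOUD_API_KEY is assumed to be set, and the sampling values are illustrative (the fields themselves are documented below):
from langchain.llms import NLPCloud

# Illustrative parameters; see the field list below for defaults.
nlpcloud = NLPCloud(
    model_name="finetuned-gpt-neox-20b",
    temperature=0.7,
    max_length=256,
)
print(nlpcloud("Summarize in one sentence: LangChain wraps many LLM providers."))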
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field bad_words: List[str] = []#
List of tokens not allowed to be generated.
field do_sample: bool = True#
Whether to use sampling (True) or greedy decoding.
field early_stopping: bool = False#
Whether to stop beam search at num_beams sentences.
field length_no_input: bool = True#
Whether min_length and max_length should include the length of the input.
field length_penalty: float = 1.0#
Exponential penalty to the length.
field max_length: int = 256#
The maximum number of tokens to generate in the completion.
field min_length: int = 1#
The minimum number of tokens to generate in the completion.
field model_name: str = 'finetuned-gpt-neox-20b'#
Model name to use.
field num_beams: int = 1#
Number of beams for beam search.
field num_return_sequences: int = 1#
How many completions to generate for each prompt.
field remove_end_sequence: bool = True#
Whether or not to remove the end sequence token.
field remove_input: bool = True#
Whether to remove the input text from the API response.
field repetition_penalty: float = 1.0#
Penalizes repeated tokens. 1.0 means no penalty.
field temperature: float = 0.7#
What sampling temperature to use.
field top_k: int = 50#
The number of highest probability tokens to keep for top-k filtering.
field top_p: int = 1#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.OpenAI[source]#
Wrapper around OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import OpenAI
openai = OpenAI(model_name="text-davinci-003")
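Beyond single-prompt calls, generate() (documented below) accepts a batch of prompts and returns an LLMResult; a short sketch, assuming OPENAI_API_KEY is set and using illustrative n/best_of values:
from langchain.llms import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
openai = OpenAI(model_name="text-davinci-003", n=2, best_of=2)

# generate() runs the model over a list of prompts in batches (see batch_size).
result = openai.generate(["Tell me a joke.", "Tell me a poem."])
print(result.generations[0][0].text)  # first completion for the first prompt
print(result.llm_output)              # provider-specific info such as token usage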
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#
Set of special tokens that are allowed.
field batch_size: int = 20#
Batch size to use when passing multiple documents to generate.
field best_of: int = 1#
Generates best_of completions server-side and returns the "best".
field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#
Set of special tokens that are not allowed.
field frequency_penalty: float = 0#
Penalizes repeated tokens according to frequency.
field logit_bias: Optional[Dict[str, float]] [Optional]#
Adjust the probability of specific tokens being generated.
field max_retries: int = 6#
Maximum number of retries to make when generating.
field max_tokens: int = 256#
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'text-davinci-003' (alias 'model')#
Model name to use.
field n: int = 1#
How many completions to generate for each prompt.
field presence_penalty: float = 0#
Penalizes repeated tokens.
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to OpenAI completion API. Default is 600 seconds.
field streaming: bool = False#
Whether to stream the results or not.
field temperature: float = 0.7#
What sampling temperature to use.
field top_p: float = 1#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) β langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) β List[List[str]]#
Get the sub prompts for the LLM call.
get_token_ids(text: str) β List[int]#
Get the token IDs using the tiktoken package.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
max_tokens_for_prompt(prompt: str) β int#
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt β The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) β int#
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname β The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
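A short illustrative sketch (the prompt text is arbitrary) combining the two helpers above: modelname_to_contextsize reports the model's context window, and max_tokens_for_prompt subtracts the prompt's token count to give a completion budget.
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003")
prompt = "Write a two-sentence summary of Hamlet."

# Total context window for the configured model.
context_size = llm.modelname_to_contextsize(llm.model_name)

# Tokens left for the completion once the prompt is accounted for.
room_left = llm.max_tokens_for_prompt(prompt)

# Use the remainder as the completion budget for this call.
llm.max_tokens = room_left
print(context_size, room_left)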
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
prep_streaming_params(stop: Optional[List[str]] = None) β Dict[str, Any]#
Prepare the params for streaming.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) β Generator#
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt β The prompt to pass into the model.
stop β Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
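A minimal consumption sketch; the exact shape of each streamed chunk follows the underlying OpenAI streaming response, so it is printed as-is here rather than parsed.
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

for chunk in llm.stream("Tell me a joke."):
    # Each chunk mirrors one streaming event from the OpenAI API;
    # inspect it to pull out the text field for your client version.
    print(chunk)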
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.OpenAIChat[source]#
Wrapper around OpenAI Chat large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import OpenAIChat
openaichat = OpenAIChat(model_name="gpt-3.5-turbo")
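A small supplementary sketch (assuming OPENAI_API_KEY is set) of the prefix_messages field documented below; the message format shown follows the OpenAI chat API, and the messages are prepended to every prompt.
from langchain.llms import OpenAIChat

chat_llm = OpenAIChat(
    model_name="gpt-3.5-turbo",
    # Prepended to every call, here as a standing system instruction.
    prefix_messages=[
        {"role": "system", "content": "Answer in at most one sentence."}
    ],
)

print(chat_llm("What is LangChain?"))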
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#
Set of special tokens that are allowed.
field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#
Set of special tokens that are not allowed.
field max_retries: int = 6#
Maximum number of retries to make when generating.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-3.5-turbo'#
Model name to use.
field prefix_messages: List [Optional]#
Series of messages for Chat input.
field streaming: bool = False#
Whether to stream the results or not.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int][source]#
Get the token IDs using the tiktoken package.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.OpenLM[source]#
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#
Set of special tokens that are allowed.
field batch_size: int = 20#
Batch size to use when passing multiple documents to generate.
field best_of: int = 1#
Generates best_of completions server-side and returns the "best".
field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#
Set of special tokens that are not allowed.
field frequency_penalty: float = 0#
Penalizes repeated tokens according to frequency.
field logit_bias: Optional[Dict[str, float]] [Optional]#
Adjust the probability of specific tokens being generated.
field max_retries: int = 6#
Maximum number of retries to make when generating.
field max_tokens: int = 256#
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'text-davinci-003' (alias 'model')#
Model name to use.
field n: int = 1#
How many completions to generate for each prompt.
field presence_penalty: float = 0#
Penalizes repeated tokens.
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to OpenAI completion API. Default is 600 seconds.
field streaming: bool = False#
Whether to stream the results or not.
field temperature: float = 0.7#
What sampling temperature to use.
field top_p: float = 1#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) β langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) β List[List[str]]#
Get the sub prompts for the LLM call.
get_token_ids(text: str) β List[int]#
Get the token IDs using the tiktoken package.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
max_tokens_for_prompt(prompt: str) β int#
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt β The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) β int#
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname β The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
prep_streaming_params(stop: Optional[List[str]] = None) β Dict[str, Any]#
Prepare the params for streaming.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) β Generator#
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt β The prompt to pass into the model.
stop β Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Petals[source]#
Wrapper around Petals Bloom models.
To use, you should have the petals python package installed, and the
environment variable HUGGINGFACE_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import Petals
petals = Petals()
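An illustrative sketch (assuming the petals package is installed and HUGGINGFACE_API_KEY is set; the prompt and values are arbitrary) that overrides a couple of the generation fields listed below.
from langchain.llms import Petals

petals_llm = Petals(
    model_name="bigscience/bloom-petals",
    max_new_tokens=64,  # cap the completion length
    temperature=0.2,    # reduce sampling randomness
)

print(petals_llm("Explain distributed inference in one sentence."))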
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field client: Any = None#
The client to use for the API calls.
field do_sample: bool = True#
Whether or not to use sampling; use greedy decoding otherwise.
field max_length: Optional[int] = None#
The maximum length of the sequence to be generated.
field max_new_tokens: int = 256#
The maximum number of new tokens to generate in the completion.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call
not explicitly specified.
field model_name: str = 'bigscience/bloom-petals'#
The model to use.
field temperature: float = 0.7#
What sampling temperature to use.
field tokenizer: Any = None#
The tokenizer to use for the API calls.
field top_k: Optional[int] = None#
The number of highest probability vocabulary tokens
to keep for top-k-filtering.
field top_p: float = 0.9#
The cumulative probability for top-p sampling.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.PipelineAI[source]#
Wrapper around PipelineAI large language models.
To use, you should have the pipeline-ai python package installed,
and the environment variable PIPELINE_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain import PipelineAI
pipeline = PipelineAI(pipeline_key="")
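A brief supplementary sketch; the pipeline key is a placeholder and the pipeline_kwargs entry is a hypothetical run parameter forwarded to the target pipeline.
from langchain import PipelineAI

pipeline_llm = PipelineAI(
    pipeline_key="<your-pipeline-id-or-tag>",
    pipeline_kwargs={"max_length": 128},  # hypothetical run parameter
)

print(pipeline_llm("Write a haiku about the ocean."))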
Validators
build_extra Β» all fields
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field pipeline_key: str = ''#
The id or tag of the target pipeline.
field pipeline_kwargs: Dict[str, Any] [Optional]#
Holds any pipeline parameters valid for create call not
explicitly specified.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) β int#
Get the number of tokens in the message.
get_token_ids(text: str) β List[int]#
Get the token IDs present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) β unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) β None#
Save the LLM.
Parameters
file_path β Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) β None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.PredictionGuard[source]#
Wrapper around Prediction Guard large language models.
To use, you should have the predictionguard python package installed, and the
environment variable PREDICTIONGUARD_TOKEN set with your access token, or pass
it as a named parameter to the constructor. To use Prediction Guard's API along
with OpenAI models, set the environment variable OPENAI_API_KEY with your
OpenAI API key as well.
Example
pgllm = PredictionGuard(model="MPT-7B-Instruct",
                        token="my-access-token",
                        output={
                            "type": "boolean"
                        })
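A minimal usage sketch (assuming the predictionguard package is installed and PREDICTIONGUARD_TOKEN is set; the output schema is illustrative), showing an actual call once the output type is constrained.
from langchain.llms import PredictionGuard

pgllm = PredictionGuard(
    model="MPT-7B-Instruct",
    output={"type": "boolean"},  # constrain the response shape
)

# With a boolean output type the model is steered toward a yes/no answer.
print(pgllm("Is the sky blue?"))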
Validators
raise_deprecation Β» all fields
set_verbose Β» verbose
validate_environment Β» all fields
field max_tokens: int = 256#
Denotes the number of tokens to predict per generation.
field model: Optional[str] = 'MPT-7B-Instruct'#
Model name to use.
field output: Optional[Dict[str, Any]] = None#
The output type or structure for controlling the LLM output.
field temperature: float = 0.75#
A non-negative float that tunes the degree of randomness in generation.
field token: Optional[str] = None#
Your Prediction Guard access token.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None) β str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None) β langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) β Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) β Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include β fields to include in new model
exclude β fields to exclude from new model, as with values this takes precedence over include
update β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep β set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) β Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) β langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) β int#
Get the number of tokens present in the text.