str = 'Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a JSON agent from an LLM and tools.
langchain.agents.agent_toolkits.create_openapi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = "You are an agent designed to answer questions by making web requests to an API given the openapi spec.\n\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\nOnly use information provided by the tools to construct your response.\n\nFirst, find the base URL needed to make the request.\n\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\n\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\n\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed
set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\n", suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should explore the spec to find the base url for the API.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, max_iterations: Optional[int] = 15, max_execution_time:
Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, return_intermediate_steps: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct an OpenAPI agent from an LLM and tools.
langchain.agents.agent_toolkits.create_pandas_dataframe_agent(llm: langchain.base_language.BaseLanguageModel, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: Optional[str] = None, suffix: Optional[str] = None, input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, include_df_in_prompt: Optional[bool] = True, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a pandas agent from an LLM and dataframe.
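For illustration, a minimal invocation might look like the following sketch (the CSV file name and question are hypothetical; any pandas DataFrame works):
import pandas as pd
from langchain.llms import OpenAI
from langchain.agents.agent_toolkits import create_pandas_dataframe_agent

df = pd.read_csv("titanic.csv")  # hypothetical file
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("How many rows are in the dataframe?")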
langchain.agents.agent_toolkits.create_pbi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to help users interact with a PowerBI Dataset.\n\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readible format for a human, also make sure
to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', examples: Optional[str] = None, input_variables: Optional[List[str]] = None, top_k:
int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a Power BI agent from an LLM and tools.
langchain.agents.agent_toolkits.create_pbi_chat_agent(llm: langchain.chat_models.base.BaseChatModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Assistant is a large language model built to help users interact with a PowerBI Dataset.\n\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that
in a easily readible format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}\n", examples: Optional[str] = None, input_variables: Optional[List[str]] = None, memory: Optional[langchain.memory.chat_memory.BaseChatMemory] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a Power BI agent from a chat LLM and tools.
If you supply only a toolkit and no powerbi dataset, the same LLM is used for both.
langchain.agents.agent_toolkits.create_python_agent(llm: langchain.base_language.BaseLanguageModel, tool: langchain.tools.python.tool.PythonREPLTool, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, verbose: bool = False, prefix: str = 'You are an agent designed to write and execute python code to answer questions.\nYou have access to a python REPL, which you can use to execute python code.\nIf you get an error, debug your code and try again.\nOnly use the output of your code to answer the question. \nYou might know the answer without running any code, but you should still run the code to get the answer.\nIf it does not seem like you can write code to answer the question, just return "I don\'t know" as the answer.\n', agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a python agent from an LLM and tool.
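A minimal usage sketch, assuming an OpenAI API key is configured:
from langchain.llms import OpenAI
from langchain.agents.agent_toolkits import create_python_agent
from langchain.tools.python.tool import PythonREPLTool

agent_executor = create_python_agent(
    llm=OpenAI(temperature=0),
    tool=PythonREPLTool(),
    verbose=True,
)
agent_executor.run("What is the 10th fibonacci number?")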
langchain.agents.agent_toolkits.create_spark_dataframe_agent(llm: langchain.llms.base.BaseLLM, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = '\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:', suffix: str = '\nThis is the result of `print(df.first())`:\n{df}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}', input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a Spark agent from an LLM and dataframe.
langchain.agents.agent_toolkits.create_spark_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with Spark SQL.\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO
NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool =
False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a Spark SQL agent from an LLM and tools.
langchain.agents.agent_toolkits.create_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT
make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool =
False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a SQL agent from an LLM and tools.
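A minimal usage sketch (the SQLite URI is hypothetical; any SQLAlchemy-compatible database works):
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase
from langchain.agents.agent_toolkits import SQLDatabaseToolkit, create_sql_agent

db = SQLDatabase.from_uri("sqlite:///chinook.db")  # hypothetical database file
llm = OpenAI(temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
agent_executor.run("How many employees are there?")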
langchain.agents.agent_toolkits.create_vectorstore_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a vectorstore agent from an LLM and tools.
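A minimal usage sketch (assumes `vectorstore` is an existing vector store such as FAISS or Chroma; the name and description are hypothetical):
from langchain.llms import OpenAI
from langchain.agents.agent_toolkits import (
    VectorStoreInfo,
    VectorStoreToolkit,
    create_vectorstore_agent,
)

vectorstore_info = VectorStoreInfo(
    name="state_of_union",
    description="the most recent State of the Union address",
    vectorstore=vectorstore,  # assumed to be defined elsewhere
)
toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info)
agent_executor = create_vectorstore_agent(
    llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True
)
agent_executor.run("What did the president say about the economy?")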
langchain.agents.agent_toolkits.create_vectorstore_router_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a vectorstore router agent from an LLM and tools.
Embeddings#
Wrappers around embedding modules.
pydantic model langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding[source]#
Wrapper for Aleph Alpha’s Asymmetric Embeddings
AA provides you with an endpoint to embed a document and a query.
The models were optimized to make the embeddings of documents and
the query for a document as similar as possible.
To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/
Example
from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding
embeddings = AlephAlphaAsymmetricSemanticEmbedding()
document = "This is a content of the document"
query = "What is the content of the document?"
doc_result = embeddings.embed_documents([document])
query_result = embeddings.embed_query(query)
field aleph_alpha_api_key: Optional[str] = None#
API key for Aleph Alpha API.
field compress_to_size: Optional[int] = 128#
Whether to compress the returned embeddings to 128 dimensions
rather than returning the original 5120-dimensional vector.
field contextual_control_threshold: Optional[int] = None#
Attention control parameters only apply to those tokens that have
explicitly been set in the request.
field control_log_additive: Optional[bool] = True#
Apply controls on prompt items by adding the log(control_factor)
to attention scores.
field hosting: Optional[str] = 'https://api.aleph-alpha.com'#
Optional parameter that specifies which datacenters may process the request.
field model: Optional[str] = 'luminous-base'#
Model name to use.
field normalize: Optional[bool] = True#
Whether returned embeddings should be normalized.
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to Aleph Alpha’s asymmetric Document endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to Aleph Alpha’s asymmetric, query embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding[source]#
The symmetric version of the Aleph Alpha’s semantic embeddings.
The main difference is that here, both the documents and
queries are embedded with a SemanticRepresentation.Symmetric
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to Aleph Alpha’s Document endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to Aleph Alpha’s asymmetric, query embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.CohereEmbeddings[source]#
Wrapper around Cohere embedding models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import CohereEmbeddings
cohere = CohereEmbeddings(
model="embed-english-light-v2.0", cohere_api_key="my-api-key"
)
field model: str = 'embed-english-v2.0'#
Model name to use.
field truncate: Optional[str] = None#
Truncate embeddings that are too long from start or end ("NONE"|"START"|"END")
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to Cohere’s embedding endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to Cohere’s embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
class langchain.embeddings.ElasticsearchEmbeddings(client: MlClient, model_id: str, *, input_field: str = 'text_field')[source]#
Wrapper around Elasticsearch embedding models.
This class provides an interface to generate embeddings using a model deployed
in an Elasticsearch cluster. It requires an Elasticsearch connection object
and the model_id of the model deployed in the cluster.
In Elasticsearch you need to have an embedding model loaded and deployed.
- https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html
- https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html
embed_documents(texts: List[str]) → List[List[float]][source]#
Generate embeddings for a list of documents.
Parameters
texts (List[str]) – A list of document text strings to generate embeddings
for.
Returns
A list of embeddings, one for each document in the input list.
Return type
List[List[float]]
embed_query(text: str) → List[float][source]#
Generate an embedding for a single query text.
Parameters
text (str) – The query text to generate an embedding for.
Returns
The embedding for the input query text.
Return type
List[float]
classmethod from_credentials(model_id: str, *, es_cloud_id: Optional[str] = None, es_user: Optional[str] = None, es_password: Optional[str] = None, input_field: str = 'text_field') → langchain.embeddings.elasticsearch.ElasticsearchEmbeddings[source]#
Instantiate embeddings from Elasticsearch credentials.
Parameters
model_id (str) – The model_id of the model deployed in the Elasticsearch
cluster.
input_field (str) – The name of the key for the input text field in the
document. Defaults to 'text_field'.
es_cloud_id – (str, optional): The Elasticsearch cloud ID to connect to.
es_user – (str, optional): Elasticsearch username.
es_password – (str, optional): Elasticsearch password.
Example Usage:
from langchain.embeddings import ElasticsearchEmbeddings

# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"

# Credentials can be passed in two ways. Either set the env vars
# ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically pulled
# in, or pass them in directly as kwargs.
embeddings = ElasticsearchEmbeddings.from_credentials(
    model_id,
    input_field=input_field,
    # es_cloud_id="foo",
    # es_user="bar",
    # es_password="baz",
)

documents = [
    "This is an example document.",
    "Another example document to generate embeddings for.",
]
document_embeddings = embeddings.embed_documents(documents)
pydantic model langchain.embeddings.FakeEmbeddings[source]#
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed search docs.
embed_query(text: str) → List[float][source]#
Embed query text.
pydantic model langchain.embeddings.HuggingFaceEmbeddings[source]#
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers python package installed.
Example
from langchain.embeddings import HuggingFaceEmbeddings
model_name = "sentence-transformers/all-mpnet-base-v2"
model_kwargs = {'device': 'cpu'}
hf = HuggingFaceEmbeddings(model_name=model_name, model_kwargs=model_kwargs)
field cache_folder: Optional[str] = None#
Path to store models.
Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
field encode_kwargs: Dict[str, Any] [Optional]#
Key word arguments to pass when calling the encode method of the model.
field model_kwargs: Dict[str, Any] [Optional]#
Key word arguments to pass to the model.
field model_name: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace transformer model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
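Continuing the constructor example above, both methods are called directly on the wrapper; the sample texts are arbitrary, and all-mpnet-base-v2 returns 768-dimensional vectors:
texts = ["This is a test document.", "Another document."]
doc_vectors = hf.embed_documents(texts)
query_vector = hf.embed_query("What is a test?")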
pydantic model langchain.embeddings.HuggingFaceHubEmbeddings[source]#
Wrapper around HuggingFaceHub embedding models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import HuggingFaceHubEmbeddings
repo_id = "sentence-transformers/all-mpnet-base-v2"
hf = HuggingFaceHubEmbeddings(
repo_id=repo_id,
task="feature-extraction",
huggingfacehub_api_token="my-api-key",
)
field model_kwargs: Optional[dict] = None#
Key word arguments to pass to the model.
field repo_id: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
field task: Optional[str] = 'feature-extraction'#
Task to call the model with.
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to HuggingFaceHub’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to HuggingFaceHub’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.HuggingFaceInstructEmbeddings[source]#
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers
and InstructorEmbedding python packages installed.
Example
from langchain.embeddings import HuggingFaceInstructEmbeddings
model_name = "hkunlp/instructor-large"
model_kwargs = {'device': 'cpu'}
hf = HuggingFaceInstructEmbeddings(
model_name=model_name, model_kwargs=model_kwargs
)
field cache_folder: Optional[str] = None#
Path to store models.
Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction to use for embedding documents.
field model_kwargs: Dict[str, Any] [Optional]#
Key word arguments to pass to the model.
field model_name: str = 'hkunlp/instructor-large'#
Model name to use.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction to use for embedding query.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace instruct model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.LlamaCppEmbeddings[source]#
Wrapper around llama.cpp embedding models.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: https://github.com/abetlen/llama-cpp-python
Example
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model.bin")
field f16_kv: bool = False#
Use half-precision for key/value cache.
field logits_all: bool = False#
Return logits for all tokens, not just the last token.
field n_batch: Optional[int] = 8#
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
field n_ctx: int = 512#
Token context window.
field n_gpu_layers: Optional[int] = None#
Number of layers to be loaded into gpu memory. Default None.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
field n_threads: Optional[int] = None#
Number of threads to use. If None, the number
of threads is automatically determined.
field seed: int = -1#
Seed. If -1, a random seed is used.
field use_mlock: bool = False#
Force system to keep model in RAM.
field vocab_only: bool = False#
Only load the vocabulary, no weights.
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed a list of documents using the Llama model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Embed a query using the Llama model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
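Continuing the constructor example above (the model path there is hypothetical), the same two methods apply:
doc_vectors = llama.embed_documents(["first document", "second document"])
query_vector = llama.embed_query("a question about the documents")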
pydantic model langchain.embeddings.MiniMaxEmbeddings[source]#
Wrapper around MiniMax’s embedding inference service.
To use, you should have the environment variable MINIMAX_GROUP_ID and
MINIMAX_API_KEY set with your API token, or pass it as a named parameter to
the constructor.
Example
from langchain.embeddings import MiniMaxEmbeddings
embeddings = MiniMaxEmbeddings()
query_text = "This is a test query."
query_result = embeddings.embed_query(query_text)
document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text])
field embed_type_db: str = 'db'#
For embed_documents
field embed_type_query: str = 'query'#
For embed_query
field endpoint_url: str = 'https://api.minimax.chat/v1/embeddings'#
Endpoint URL to use.
field minimax_api_key: Optional[str] = None#
API Key for MiniMax API.
field minimax_group_id: Optional[str] = None#
Group ID for MiniMax API.
field model: str = 'embo-01'#
Embeddings model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed documents using a MiniMax embedding endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Embed a query using a MiniMax embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.ModelScopeEmbeddings[source]#
Wrapper around modelscope_hub embedding models.
To use, you should have the modelscope python package installed.
Example
from langchain.embeddings import ModelScopeEmbeddings
model_id = "damo/nlp_corom_sentence-embedding_english-base"
embed = ModelScopeEmbeddings(model_id=model_id)
field model_id: str = 'damo/nlp_corom_sentence-embedding_english-base'#
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a modelscope embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a modelscope embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.MosaicMLInstructorEmbeddings[source]#
Wrapper around MosaicML’s embedding inference service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import MosaicMLInstructorEmbeddings
endpoint_url = (
"https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict"
)
mosaic_llm = MosaicMLInstructorEmbeddings(
endpoint_url=endpoint_url,
mosaicml_api_token="my-api-key"
)
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction used to embed documents.
field endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict'#
Endpoint URL to use.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction used to embed the query.
field retry_sleep: float = 1.0#
How long to sleep if a rate limit is encountered.
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed documents using a MosaicML deployed instructor embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Embed a query using a MosaicML deployed instructor embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.OpenAIEmbeddings[source]#
Wrapper around OpenAI embedding models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import OpenAIEmbeddings
openai = OpenAIEmbeddings(openai_api_key="my-api-key")
In order to use the library with Microsoft Azure endpoints, you need to set
the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION.
The OPENAI_API_TYPE must be set to ‘azure’ and the others correspond to
the properties of your endpoint.
In addition, the deployment name must be passed as the model parameter.
Example
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080"
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
deployment="your-embeddings-deployment-name",
model="your-embeddings-model-name",
api_base="https://your-endpoint.openai.azure.com/",
api_type="azure",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
field chunk_size: int = 1000#
Maximum number of texts to embed in each batch
field max_retries: int = 6#
Maximum number of retries to make when generating.
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout in seconds for the OpenAI request.
embed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]][source]#
Call out to OpenAI’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to OpenAI’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
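A short end-to-end sketch, assuming OPENAI_API_KEY is set; with the default text-embedding-ada-002 model each returned vector has 1536 dimensions:
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
doc_vectors = embeddings.embed_documents(["hello world", "goodbye world"])
query_vector = embeddings.embed_query("greeting")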
pydantic model langchain.embeddings.SagemakerEndpointEmbeddings[source]#
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
field content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]#
The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
field endpoint_kwargs: Optional[Dict] = None#
Optional attributes passed to the invoke_endpoint
function. See the boto3 docs for more info:
https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
field endpoint_name: str = ''#
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
field model_kwargs: Optional[Dict] = None#
Key word arguments to pass to the model.
field region_name: str = ''#
The AWS region where the Sagemaker model is deployed, e.g. us-west-2.
embed_documents(texts: List[str], chunk_size: int = 64) → List[List[float]][source]#
Compute doc embeddings using a SageMaker Inference Endpoint.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size defines how many input texts will
be grouped together as one request. If None, will use the
chunk size specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a SageMaker inference endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
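Because the request and response JSON are specific to how the model was deployed, you must supply a content handler subclassing EmbeddingsContentHandler. A minimal sketch follows; the "text_inputs" and "embedding" field names and the endpoint name are assumptions that depend on your deployment:
import json
from typing import List

from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler

class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, inputs: List[str], model_kwargs: dict) -> bytes:
        # Serialize the texts into the JSON shape the deployed model expects.
        return json.dumps({"text_inputs": inputs, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        # Parse the endpoint response back into a list of float vectors.
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["embedding"]

embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="my-embeddings-endpoint",  # hypothetical endpoint name
    region_name="us-west-2",
    content_handler=ContentHandler(),
)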
pydantic model langchain.embeddings.SelfHostedEmbeddings[source]#
Runs custom embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example using a model load function:
from langchain.embeddings import SelfHostedEmbeddings
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
def get_pipeline():
model_id = "facebook/bart-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
embeddings = SelfHostedEmbeddings(
    model_load_fn=get_pipeline,
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
)
Example passing in a pipeline path:
from langchain.embeddings import SelfHostedHFEmbeddings
import runhouse as rh
import pickle
from transformers import pipeline
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
pipeline = pipeline(model="bert-base-uncased", task="feature-extraction")
rh.blob(pickle.dumps(pipeline),
path="models/pipeline.pkl").save().to(gpu, path="models")
embeddings = SelfHostedHFEmbeddings.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Validators
raise_deprecation » all fields
set_verbose » verbose
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddings on the remote hardware.
field inference_kwargs: Any = None#
Any kwargs to pass to the model’s inference function.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace transformer model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.SelfHostedHuggingFaceEmbeddings[source]#
Runs sentence_transformers embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)
Validators
raise_deprecation » all fields
set_verbose » verbose
field hardware: Any = None#
Remote hardware to send the inference function to.
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddings.
field load_fn_kwargs: Optional[dict] = None#
Key word arguments to pass to the model load function.
field model_id: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
field model_load_fn: Callable = <function load_embedding_model>#
Function to load the model remotely on the server.
field model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']#
Requirements to install on hardware to inference the model.
pydantic model langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings[source]#
Runs InstructorEmbedding embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings
import runhouse as rh
model_name = "hkunlp/instructor-large"
gpu = rh.cluster(name='rh-a10x', instance_type='A100:1')
hf = SelfHostedHuggingFaceInstructEmbeddings(
model_name=model_name, hardware=gpu)
Validators
raise_deprecation » all fields
set_verbose » verbose
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction to use for embedding documents.
field model_id: str = 'hkunlp/instructor-large'#
Model name to use.
field model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']#
Requirements to install on hardware to inference the model.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction to use for embedding query.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace instruct model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
langchain.embeddings.SentenceTransformerEmbeddings#
alias of langchain.embeddings.huggingface.HuggingFaceEmbeddings
pydantic model langchain.embeddings.TensorflowHubEmbeddings[source]#
Wrapper around tensorflow_hub embedding models.
To use, you should have the tensorflow_text python package installed.
Example
from langchain.embeddings import TensorflowHubEmbeddings
url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
tf = TensorflowHubEmbeddings(model_url=url)
field model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3'#
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a TensorflowHub embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a TensorflowHub embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
Document Compressors#
pydantic model langchain.retrievers.document_compressors.CohereRerank[source]#
field client: Client [Required]#
field model: str = 'rerank-english-v2.0'#
field top_n: int = 3#
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Compress retrieved documents given the query context.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Compress retrieved documents given the query context.
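A usage sketch for plugging the reranker into retrieval (assumes COHERE_API_KEY is set and that `retriever` is an existing base retriever):
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank

compressor = CohereRerank()
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=retriever,  # assumed to be defined elsewhere
)
docs = compression_retriever.get_relevant_documents("your query here")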
pydantic model langchain.retrievers.document_compressors.DocumentCompressorPipeline[source]#
Document compressor that uses a pipeline of transformers.
field transformers: List[Union[langchain.schema.BaseDocumentTransformer, langchain.retrievers.document_compressors.base.BaseDocumentCompressor]] [Required]#
List of document filters that are chained together and run in sequence.
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Compress retrieved documents given the query context.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Transform a list of documents.
pydantic model langchain.retrievers.document_compressors.EmbeddingsFilter[source]#
field embeddings: langchain.embeddings.base.Embeddings [Required]#
Embeddings to use for embedding document contents and queries.
field k: Optional[int] = 20#
The number of relevant documents to return. Can be set to None, in which case
similarity_threshold must be specified. Defaults to 20.
field similarity_fn: Callable = <function cosine_similarity>#
Similarity function for comparing documents. Function expected to take as input
two matrices (List[List[float]]) and return a matrix of scores where higher values
indicate greater similarity.
field similarity_threshold: Optional[float] = None#
Threshold for determining when two documents are similar enough
to be considered redundant. Defaults to None, must be specified if k is set
to None.
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Filter down documents.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Filter documents based on similarity of their embeddings to the query.
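A usage sketch; the similarity threshold is an illustrative value, and `docs` stands in for documents retrieved elsewhere:
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.document_compressors import EmbeddingsFilter

embeddings_filter = EmbeddingsFilter(
    embeddings=OpenAIEmbeddings(),
    similarity_threshold=0.76,
)
filtered_docs = embeddings_filter.compress_documents(docs, "your query here")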
pydantic model langchain.retrievers.document_compressors.LLMChainExtractor[source]#
field get_input: Callable[[str, langchain.schema.Document], dict] = <function default_get_input>#
Callable for constructing the chain input from the query and a Document.
field llm_chain: langchain.chains.llm.LLMChain [Required]#
LLM wrapper to use for compressing documents.
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Compress page content of raw documents asynchronously.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Compress page content of raw documents.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: Optional[langchain.prompts.prompt.PromptTemplate] = None, get_input: Optional[Callable[[str, langchain.schema.Document], str]] = None, llm_chain_kwargs: Optional[dict] = None) → langchain.retrievers.document_compressors.chain_extract.LLMChainExtractor[source]#
Initialize from LLM.
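A usage sketch, assuming an OpenAI API key and a sequence of Documents `docs` retrieved elsewhere:
from langchain.llms import OpenAI
from langchain.retrievers.document_compressors import LLMChainExtractor

compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))
compressed_docs = compressor.compress_documents(docs, "your query here")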
pydantic model langchain.retrievers.document_compressors.LLMChainFilter[source]#
Filter that drops documents that aren’t relevant to the query.
field get_input: Callable[[str, langchain.schema.Document], dict] = <function default_get_input>#
Callable for constructing the chain input from the query and a Document.
field llm_chain: langchain.chains.llm.LLMChain [Required]#
LLM wrapper to use for filtering documents.
The chain prompt is expected to have a BooleanOutputParser.
async acompress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Filter down documents.
compress_documents(documents: Sequence[langchain.schema.Document], query: str) → Sequence[langchain.schema.Document][source]#
Filter down documents based on their relevance to the query.
classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, **kwargs: Any) → langchain.retrievers.document_compressors.chain_filter.LLMChainFilter[source]#
Docstore#
Wrappers on top of docstores.
class langchain.docstore.InMemoryDocstore(_dict: Dict[str, langchain.schema.Document])[source]#
Simple in memory docstore in the form of a dict.
add(texts: Dict[str, langchain.schema.Document]) → None[source]#
Add texts to in memory dictionary.
search(search: str) → Union[str, langchain.schema.Document][source]#
Search via direct lookup.
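A usage sketch; the IDs and contents are arbitrary, and searching a missing ID returns an explanatory string rather than raising:
from langchain.docstore import InMemoryDocstore
from langchain.schema import Document

docstore = InMemoryDocstore({"1": Document(page_content="hello")})
docstore.add({"2": Document(page_content="world")})
result = docstore.search("2")  # returns the Document stored under "2"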
class langchain.docstore.Wikipedia[source]#
Wrapper around wikipedia API.
search(search: str) → Union[str, langchain.schema.Document][source]#
Try to search for wiki page.
If page exists, return the page summary, and a PageWithLookups object.
If page does not exist, return similar entries.
Tracing#
By enabling tracing in your LangChain runs, you’ll be able to more effectively visualize, step through, and debug your chains and agents.
First, you should install tracing and set up your environment properly.
You can use either a locally hosted version of this (uses Docker) or a cloud hosted version (in closed alpha).
If you’re interested in using the hosted platform, please fill out the form here.
Locally Hosted Setup
Cloud Hosted Setup
Tracing Walkthrough#
When you first access the UI, you should see a page with your tracing sessions.
An initial one “default” should already be created for you.
A session is just a way to group traces together.
If you click on a session, it will take you to a page with no recorded traces that says “No Runs.”
You can create a new session with the new session form.
If we click on the default session, we can see that to start we have no traces stored.
If we now start running chains and agents with tracing enabled, we will see data show up here.
To do so, we can run this notebook as an example.
After running it, we will see an initial trace show up.
From here we can explore the trace at a high level by clicking on the arrow to show nested runs.
We can keep on clicking further and further down to explore deeper and deeper.
We can also click on the “Explore” button of the top level run to dive even deeper.
Here, we can see the inputs and outputs in full, as well as all the nested traces.
We can keep on exploring each of these nested traces in more detail.
For example, here is the lowest level trace with the exact inputs/outputs to the LLM.
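As a concrete sketch, a minimal script that records a trace to the default session might look like this (assumes the tracing backend from the setup steps is running and an OpenAI API key is configured):
import os
os.environ["LANGCHAIN_TRACING"] = "true"

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is 2 raised to the 10th power?")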
Changing Sessions#
To initially record traces to a session other than "default", you can set the LANGCHAIN_SESSION environment variable to the name of the session you want to record to:
import os
os.environ["LANGCHAIN_TRACING"] = "true"
os.environ["LANGCHAIN_SESSION"] = "my_session" # Make sure this session actually exists. You can create a new session in the UI.
To switch sessions mid-script or mid-notebook, do NOT set the LANGCHAIN_SESSION environment variable. Instead: langchain.set_tracing_callback_manager(session_name="my_session")
YouTube
Contents
⛓️Official LangChain YouTube channel⛓️
Introduction to LangChain with Harrison Chase, creator of LangChain
Videos (sorted by views)
YouTube#
This is a collection of LangChain videos on YouTube.
⛓️Official LangChain YouTube channel⛓️#
Introduction to LangChain with Harrison Chase, creator of LangChain#
Building the Future with LLMs, LangChain, & Pinecone by Pinecone
LangChain and Weaviate with Harrison Chase and Bob van Luijt - Weaviate Podcast #36 by Weaviate • Vector Database
LangChain Demo + Q&A with Harrison Chase by Full Stack Deep Learning
LangChain Agents: Build Personal Assistants For Your Data (Q&A with Harrison Chase and Mayo Oshin) by Chat with data
⛓️ LangChain “Agents in Production” Webinar by LangChain
Videos (sorted by views)#
Building AI LLM Apps with LangChain (and more?) - LIVE STREAM by Nicholas Renotte
First look - ChatGPT + WolframAlpha (GPT-3.5 and Wolfram|Alpha via LangChain by James Weaver) by Dr Alan D. Thompson
LangChain explained - The hottest new Python framework by AssemblyAI
Chatbot with INFINITE MEMORY using OpenAI & Pinecone - GPT-3, Embeddings, ADA, Vector DB, Semantic by David Shapiro ~ AI
LangChain for LLMs is… basically just an Ansible playbook by David Shapiro ~ AI
Build your own LLM Apps with LangChain & GPT-Index by 1littlecoder
BabyAGI - New System of Autonomous AI Agents with LangChain by 1littlecoder
Run BabyAGI with Langchain Agents (with Python Code) by 1littlecoder
How to Use Langchain With Zapier | Write and Send Email with GPT-3 | OpenAI API Tutorial by StarMorph AI
Use Your Locally Stored Files To Get Response From GPT - OpenAI | Langchain | Python by Shweta Lodha
Langchain JS | How to Use GPT-3, GPT-4 to Reference your own Data | OpenAI Embeddings Intro by StarMorph AI
The easiest way to work with large language models | Learn LangChain in 10min by Sophia Yang
4 Autonomous AI Agents: “Westworld” simulation BabyAGI, AutoGPT, Camel, LangChain by Sophia Yang
AI CAN SEARCH THE INTERNET? Langchain Agents + OpenAI ChatGPT by tylerwhatsgood
Query Your Data with GPT-4 | Embeddings, Vector Databases | Langchain JS Knowledgebase by StarMorph AI
Weaviate + LangChain for LLM apps presented by Erika Cardenas by Weaviate • Vector Database
Langchain Overview - How to Use Langchain & ChatGPT by Python In Office
Custom langchain Agent & Tools with memory. Turn any Python function into langchain tool with Gpt 3 by echohive
LangChain: Run Language Models Locally - Hugging Face Models by Prompt Engineering
ChatGPT with any YouTube video using langchain and chromadb by echohive
How to Talk to a PDF using LangChain and ChatGPT by Automata Learning Lab
Langchain Document Loaders Part 1: Unstructured Files by Merk
LangChain - Prompt Templates (what all the best prompt engineers use) by Nick Daigler
LangChain. Crear aplicaciones Python impulsadas por GPT by Jesús Conde
Easiest Way to Use GPT In Your Products | LangChain Basics Tutorial by Rachel Woods
BabyAGI + GPT-4 Langchain Agent with Internet Access by tylerwhatsgood
Learning LLM Agents. How does it actually work? LangChain, AutoGPT & OpenAI by Arnoldas Kemeklis
Get Started with LangChain in Node.js by Developers Digest
LangChain + OpenAI tutorial: Building a Q&A system w/ own text data by Samuel Chan
Langchain + Zapier Agent by Merk
Connecting the Internet with ChatGPT (LLMs) using Langchain And Answers Your Questions by Kamalraj M M
Build More Powerful LLM Applications for Business’s with LangChain (Beginners Guide) by No Code Blackbox
⛓️ LangFlow LLM Agent Demo for 🦜🔗LangChain by Cobus Greyling
⛓️ Chatbot Factory: Streamline Python Chatbot Creation with LLMs and Langchain by Finxter
⛓️ LangChain Tutorial - ChatGPT mit eigenen Daten by Coding Crashkurse
⛓️ Chat with a CSV | LangChain Agents Tutorial (Beginners) by GoDataProf
⛓️ Introdução ao Langchain - #Cortes - Live DataHackers by Prof. João Gabriel Lima
⛓️ LangChain: Level up ChatGPT !? | LangChain Tutorial Part 1 by Code Affinity
⛓️ KI schreibt krasses Youtube Skript 😲😳 | LangChain Tutorial Deutsch by SimpleKI
⛓️ Chat with Audio: Langchain, Chroma DB, OpenAI, and Assembly AI by AI Anytime
⛓️ QA over documents with Auto vector index selection with Langchain router chains by echohive
⛓️ Build your own custom LLM application with Bubble.io & Langchain (No Code & Beginner friendly) by No Code Blackbox
⛓️ Simple App to Question Your Docs: Leveraging Streamlit, Hugging Face Spaces, LangChain, and Claude! by Chris Alexiuk
⛓️ LANGCHAIN AI- ConstitutionalChainAI + Databutton AI ASSISTANT Web App by Avra
⛓️ LANGCHAIN AI AUTONOMOUS AGENT WEB APP - 👶 BABY AGI 🤖 with EMAIL AUTOMATION using DATABUTTON by Avra
⛓️ The Future of Data Analysis: Using A.I. Models in Data Analysis (LangChain) by Absent Data
⛓️ Memory in LangChain | Deep dive (python) by Eden Marco
⛓️ 9 LangChain UseCases | Beginner’s Guide | 2023 by Data Science Basics
⛓️ Use Large Language Models in Jupyter Notebook | LangChain | Agents & Indexes by Abhinaw Tiwari
⛓️ How to Talk to Your Langchain Agent | 11 Labs + Whisper by VRSEN
⛓️ LangChain Deep Dive: 5 FUN AI App Ideas To Build Quickly and Easily by James NoCode
⛓️ BEST OPEN Alternative to OPENAI’s EMBEDDINGs for Retrieval QA: LangChain by Prompt Engineering
⛓️ LangChain 101: Models by Mckay Wrigley
⛓️ LangChain with JavaScript Tutorial #1 | Setup & Using LLMs by Leon van Zyl
⛓️ LangChain Overview & Tutorial for Beginners: Build Powerful AI Apps Quickly & Easily (ZERO CODE) by James NoCode
⛓️ LangChain In Action: Real-World Use Case With Step-by-Step Tutorial by Rabbitmetrics
⛓️ Summarizing and Querying Multiple Papers with LangChain by Automata Learning Lab
⛓️ Using Langchain (and Replit) through Tana, ask Google/Wikipedia/Wolfram Alpha to fill out a table by Stian Håklev
⛓️ Langchain PDF App (GUI) | Create a ChatGPT For Your PDF in Python by Alejandro AO - Software & Ai
⛓️ Auto-GPT with LangChain 🔥 | Create Your Own Personal AI Assistant by Data Science Basics
⛓️ Create Your OWN Slack AI Assistant with Python & LangChain by Dave Ebbelaar
⛓️ How to Create LOCAL Chatbots with GPT4All and LangChain [Full Guide] by Liam Ottley
⛓️ Build a Multilingual PDF Search App with LangChain, Cohere and Bubble by Menlo Park Lab
⛓️ Building a LangChain Agent (code-free!) Using Bubble and Flowise by Menlo Park Lab
⛓️ Build a LangChain-based Semantic PDF Search App with No-Code Tools Bubble and Flowise by Menlo Park Lab
⛓️ LangChain Memory Tutorial | Building a ChatGPT Clone in Python by Alejandro AO - Software & Ai
⛓️ ChatGPT For Your DATA | Chat with Multiple Documents Using LangChain by Data Science Basics
⛓️ Llama Index: Chat with Documentation using URL Loader by Merk
⛓️ Using OpenAI, LangChain, and Gradio to Build Custom GenAI Applications by David Hundley
⛓ icon marks a new video [last update 2023-05-15]
Model Comparison
Model Comparison#
Constructing your language model application will likely involve choosing between many different options of prompts, models, and even chains. When doing so, you will want to compare these options on different inputs in an easy, flexible, and intuitive way.
LangChain provides the concept of a ModelLaboratory to test out and try different models.
from langchain import LLMChain, OpenAI, Cohere, HuggingFaceHub, PromptTemplate
from langchain.model_laboratory import ModelLaboratory
llms = [
    OpenAI(temperature=0),
    Cohere(model="command-xlarge-20221108", max_tokens=20, temperature=0),
    HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature":1})
]
model_lab = ModelLaboratory.from_llms(llms)
model_lab.compare("What color is a flamingo?")
Input:
What color is a flamingo?
OpenAI
Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
Flamingos are pink.
Cohere
Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}
Pink
HuggingFaceHub
Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1}
pink
prompt = PromptTemplate(template="What is the capital of {state}?", input_variables=["state"])
model_lab_with_prompt = ModelLaboratory.from_llms(llms, prompt=prompt)
model_lab_with_prompt.compare("New York")
Input:
New York
OpenAI
Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
The capital of New York is Albany.
Cohere
Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}
The capital of New York is Albany.
HuggingFaceHub
Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1}
st john s
from langchain import SelfAskWithSearchChain, SerpAPIWrapper
open_ai_llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
self_ask_with_search_openai = SelfAskWithSearchChain(llm=open_ai_llm, search_chain=search, verbose=True)
cohere_llm = Cohere(temperature=0, model="command-xlarge-20221108")
search = SerpAPIWrapper()
self_ask_with_search_cohere = SelfAskWithSearchChain(llm=cohere_llm, search_chain=search, verbose=True)
chains = [self_ask_with_search_openai, self_ask_with_search_cohere]
names = [str(open_ai_llm), str(cohere_llm)]
model_lab = ModelLaboratory(chains, names=names)
model_lab.compare("What is the hometown of the reigning men's U.S. Open champion?")
Input:
What is the hometown of the reigning men's U.S. Open champion?
OpenAI
Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
> Entering new chain...
What is the hometown of the reigning men's U.S. Open champion?
Are follow up questions needed here: Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Carlos Alcaraz.
Follow up: Where is Carlos Alcaraz from?
Intermediate answer: El Palmar, Spain.
So the final answer is: El Palmar, Spain
> Finished chain.
So the final answer is: El Palmar, Spain
Cohere
Params: {'model': 'command-xlarge-20221108', 'max_tokens': 256, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}
> Entering new chain...
What is the hometown of the reigning men's U.S. Open champion?
Are follow up questions needed here: Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Carlos Alcaraz.
So the final answer is:
Carlos Alcaraz
> Finished chain.
So the final answer is:
Carlos Alcaraz
Google Serper
Contents
Setup
Wrappers
Utility
Output
Tool
Google Serper#
This page covers how to use the Serper Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.
It is broken into two parts: setup, and then references to the specific Google Serper wrapper.
Setup#
Go to serper.dev to sign up for a free account
Get the api key and set it as an environment variable (SERPER_API_KEY)
Wrappers#
Utility#
There exists a GoogleSerperAPIWrapper utility which wraps this API. To import this utility:
from langchain.utilities import GoogleSerperAPIWrapper
You can use it as part of a Self Ask chain:
from langchain.utilities import GoogleSerperAPIWrapper
from langchain.llms.openai import OpenAI
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
import os
os.environ["SERPER_API_KEY"] = ""
os.environ['OPENAI_API_KEY'] = ""
llm = OpenAI(temperature=0)
search = GoogleSerperAPIWrapper()
tools = [
Tool(
name="Intermediate Answer",
func=search.run,
description="useful for when you need to ask with search"
)
]
self_ask_with_search = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)
self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?")
Output#
> Entering new AgentExecutor chain...
Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Current champions Carlos Alcaraz, 2022 men's singles champion.
Follow up: Where is Carlos Alcaraz from?
Intermediate answer: El Palmar, Spain
So the final answer is: El Palmar, Spain
> Finished chain.
'El Palmar, Spain'
For a more detailed walkthrough of this wrapper, see this notebook.
Tool#
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
from langchain.agents import load_tools
tools = load_tools(["google-serper"])
For more information on this, see this page
CerebriumAI
Contents
Installation and Setup
Wrappers
LLM
CerebriumAI#
This page covers how to use the CerebriumAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific CerebriumAI wrappers.
Installation and Setup#
Install with pip install cerebrium
Get a CerebriumAI API key and set it as an environment variable (CEREBRIUMAI_API_KEY)
Wrappers#
LLM#
There exists a CerebriumAI LLM wrapper, which you can access with
from langchain.llms import CerebriumAI
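A minimal usage sketch; the endpoint URL below is a placeholder for your own deployed Cerebrium model, and CEREBRIUMAI_API_KEY is assumed to be set:
from langchain.llms import CerebriumAI

# endpoint_url is hypothetical here; substitute the URL of your own deployment
llm = CerebriumAI(endpoint_url="https://run.cerebrium.ai/your-model/predict")
print(llm("Tell me a joke"))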
AtlasDB
Contents
Installation and Setup
Wrappers
VectorStore
AtlasDB#
This page covers how to use Nomic’s Atlas ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Atlas wrappers.
Installation and Setup#
Install the Python package with pip install nomic
Nomic is also included in LangChain's poetry extras: poetry install -E all
Wrappers#
VectorStore#
There exists a wrapper around the Atlas neural database, allowing you to use it as a vectorstore.
This vectorstore also gives you full access to the underlying AtlasProject object, which will allow you to use the full range of Atlas map interactions, such as bulk tagging and automatic topic modeling.
Please see the Atlas docs for more detailed information.
To import this vectorstore:
from langchain.vectorstores import AtlasDB
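As a rough sketch (the project name and API key below are placeholders; check the Atlas docs for the exact arguments your version accepts):
from langchain.vectorstores import AtlasDB

db = AtlasDB.from_texts(
    texts=["harrison worked at kensho"],
    name="my_langchain_project",        # hypothetical project name
    api_key="<your Nomic API key>",
)
docs = db.similarity_search("Where did harrison work?")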
For a more detailed walkthrough of the AtlasDB wrapper, see this notebook
GooseAI
Contents
Installation and Setup
Wrappers
LLM
GooseAI#
This page covers how to use the GooseAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific GooseAI wrappers.
Installation and Setup#
Install the Python SDK with pip install openai
Get your GooseAI API key from this link here.
Set the environment variable (GOOSEAI_API_KEY).
import os
os.environ["GOOSEAI_API_KEY"] = "YOUR_API_KEY"
Wrappers#
LLM#
There exists a GooseAI LLM wrapper, which you can access with:
from langchain.llms import GooseAI
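A minimal usage sketch (assumes GOOSEAI_API_KEY is set; the model name is a placeholder, pick any model GooseAI serves):
from langchain.llms import GooseAI

llm = GooseAI(model_name="gpt-neo-20b")  # model_name is illustrative
print(llm("Tell me a joke"))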
OpenWeatherMap API
Contents
Installation and Setup
Wrappers
Utility
Tool
OpenWeatherMap API#
This page covers how to use the OpenWeatherMap API within LangChain.
It is broken into two parts: installation and setup, and then references to specific OpenWeatherMap API wrappers.
Installation and Setup#
Install requirements with pip install pyowm
Go to OpenWeatherMap and sign up for an account to get your API key here
Set your API key as OPENWEATHERMAP_API_KEY environment variable
Wrappers#
Utility#
There exists an OpenWeatherMapAPIWrapper utility which wraps this API. To import this utility:
from langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper
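A minimal usage sketch (assumes OPENWEATHERMAP_API_KEY is set):
from langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper

weather = OpenWeatherMapAPIWrapper()
print(weather.run("London,GB"))  # returns a text summary of current conditions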
For a more detailed walkthrough of this wrapper, see this notebook.
Tool#
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
from langchain.agents import load_tools
tools = load_tools(["openweathermap-api"])
For more information on this, see this page
Prediction Guard
Contents
Installation and Setup
LLM Wrapper
Example usage
Prediction Guard#
This page covers how to use the Prediction Guard ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Prediction Guard wrappers.
Installation and Setup#
Install the Python SDK with pip install predictionguard
Get a Prediction Guard access token (as described here) and set it as an environment variable (PREDICTIONGUARD_TOKEN)
LLM Wrapper#
There exists a Prediction Guard LLM wrapper, which you can access with
from langchain.llms import PredictionGuard
You can provide the name of your Prediction Guard “proxy” as an argument when initializing the LLM:
pgllm = PredictionGuard(name="your-text-gen-proxy")
Alternatively, you can use Prediction Guard’s default proxy for SOTA LLMs:
pgllm = PredictionGuard(name="default-text-gen")
You can also provide your access token directly as an argument:
pgllm = PredictionGuard(name="default-text-gen", token="<your access token>")
Example usage#
Basic usage of the LLM wrapper:
from langchain.llms import PredictionGuard
pgllm = PredictionGuard(name="default-text-gen")
pgllm("Tell me a joke")
Basic LLM Chaining with the Prediction Guard wrapper:
from langchain import PromptTemplate, LLMChain
from langchain.llms import PredictionGuard
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=PredictionGuard(name="default-text-gen"), verbose=True)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.predict(question=question)
Psychic
Contents
Psychic
What is Psychic?
Quick start
Advantages vs Other Document Loaders
Psychic#
This page covers how to use Psychic within LangChain.
What is Psychic?#
Psychic is a platform for integrating with your customer’s SaaS tools like Notion, Zendesk, Confluence, and Google Drive via OAuth and syncing documents from these applications to your SQL or vector database. You can think of it like Plaid for unstructured data. Psychic is easy to set up - you use it by importing the react library and configuring it with your Sidekick API key, which you can get from the Psychic dashboard. When your users connect their applications, you can view these connections from the dashboard and retrieve data using the server-side libraries.
Quick start#
Create an account in the dashboard.
Use the react library to add the Psychic link modal to your frontend react app. Users will use this to connect their SaaS apps.
Once your user has created a connection, you can use the langchain PsychicLoader by following the example notebook
Advantages vs Other Document Loaders#
Universal API: Instead of building OAuth flows and learning the APIs for every SaaS app, you integrate Psychic once and leverage our universal API to retrieve data.
Data Syncs: Data in your customers’ SaaS apps can get stale fast. With Psychic you can configure webhooks to keep your documents up to date on a daily or realtime basis.
Simplified OAuth: Psychic handles OAuth end-to-end so that you don’t have to spend time creating OAuth clients for each integration, keeping access tokens fresh, and handling OAuth redirect logic.
Tair
Contents
Installation and Setup
Wrappers
VectorStore
Tair#
This page covers how to use the Tair ecosystem within LangChain.
Installation and Setup#
Install the Tair Python SDK with pip install tair.
Wrappers#
VectorStore#
There exists a wrapper around TairVector, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Tair
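As a rough sketch (the tair_url below is a placeholder for your own Tair instance; FakeEmbeddings keeps the example self-contained):
from langchain.embeddings.fake import FakeEmbeddings
from langchain.vectorstores import Tair

db = Tair.from_texts(
    ["foo", "bar"],
    FakeEmbeddings(size=128),
    tair_url="redis://localhost:6379",  # placeholder connection string
)
docs = db.similarity_search("foo")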
For a more detailed walkthrough of the Tair wrapper, see this notebook
StochasticAI
Contents
Installation and Setup
Wrappers
LLM
StochasticAI#
This page covers how to use the StochasticAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific StochasticAI wrappers.
Installation and Setup#
Install with pip install stochasticx
Get a StochasticAI API key and set it as an environment variable (STOCHASTICAI_API_KEY)
Wrappers#
LLM#
There exists a StochasticAI LLM wrapper, which you can access with
from langchain.llms import StochasticAI
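A minimal usage sketch (assumes STOCHASTICAI_API_KEY is set; the api_url is a placeholder for your own deployed model's endpoint):
from langchain.llms import StochasticAI

llm = StochasticAI(api_url="<your model's submit URL>")  # placeholder endpoint
print(llm("Tell me a joke"))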
NLPCloud
Contents
Installation and Setup
Wrappers
LLM
NLPCloud#
This page covers how to use the NLPCloud ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific NLPCloud wrappers.
Installation and Setup#
Install the Python SDK with pip install nlpcloud
Get an NLPCloud API key and set it as an environment variable (NLPCLOUD_API_KEY)
Wrappers#
LLM#
There exists an NLPCloud LLM wrapper, which you can access with
from langchain.llms import NLPCloud
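A minimal usage sketch (assumes NLPCLOUD_API_KEY is set; the model name is illustrative):
from langchain.llms import NLPCloud

llm = NLPCloud(model_name="finetuned-gpt-neox-20b")
print(llm("Tell me a joke"))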
PipelineAI
Contents
Installation and Setup
Wrappers
LLM
PipelineAI#
This page covers how to use the PipelineAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific PipelineAI wrappers.
Installation and Setup#
Install with pip install pipeline-ai
Get a Pipeline Cloud API key and set it as an environment variable (PIPELINE_API_KEY)
Wrappers#
LLM#
There exists a PipelineAI LLM wrapper, which you can access with
from langchain.llms import PipelineAI
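A minimal usage sketch (assumes PIPELINE_API_KEY is set; the pipeline key below is a placeholder for your own deployed pipeline):
from langchain.llms import PipelineAI

llm = PipelineAI(pipeline_key="public/gpt-j:base")  # placeholder pipeline identifier
print(llm("Tell me a joke"))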
Chroma
Contents
Installation and Setup
Wrappers
VectorStore
Chroma#
This page covers how to use the Chroma ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Chroma wrappers.
Installation and Setup#
Install the Python package with pip install chromadb
Wrappers#
VectorStore#
There exists a wrapper around Chroma vector databases, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Chroma
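A minimal end-to-end sketch (assumes OPENAI_API_KEY is set for the embeddings):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

db = Chroma.from_texts(
    ["harrison worked at kensho", "bears like to eat honey"],
    embedding=OpenAIEmbeddings(),
)
docs = db.similarity_search("Where did harrison work?", k=1)
print(docs[0].page_content)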
For a more detailed walkthrough of the Chroma wrapper, see this notebook
Vectara
Contents
Installation and Setup
VectorStore
Vectara#
What is Vectara?
Vectara Overview:
Vectara is a developer-first API platform for building conversational search applications.
To use Vectara, first sign up and create an account. Then create a corpus and an API key for indexing and searching.
You can use Vectara’s indexing API to add documents into Vectara’s index
You can use Vectara’s Search API to query Vectara’s index (which also supports Hybrid search implicitly).
You can use Vectara’s integration with LangChain as a Vector store or using the Retriever abstraction.
Installation and Setup#
To use Vectara with LangChain, no special installation steps are required. You just have to provide your customer ID, corpus ID, and an API key created within the Vectara console to enable indexing and searching.
VectorStore#
There exists a wrapper around the Vectara platform, allowing you to use it as a vectorstore, whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Vectara
To create an instance of the Vectara vectorstore:
vectara = Vectara(
    vectara_customer_id=customer_id,
    vectara_corpus_id=corpus_id,
    vectara_api_key=api_key
)
The customer_id, corpus_id and api_key are optional, and if they are not supplied will be read from the environment variables VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY, respectively.
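Once created, the instance can be used like any other LangChain vectorstore; a short sketch:
vectara.add_texts(["to be or not to be", "that is the question"])
docs = vectara.similarity_search("What is the famous quote?", k=2)
retriever = vectara.as_retriever()  # for use in retrieval chains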
For a more detailed walkthrough of the Vectara wrapper, see one of the two example notebooks:
Chat Over Documents with Vectara
Vectara Text Generation
Yeager.ai
Contents
What is Yeager.ai?
yAgents
How to use?
Creating and Executing Tools with yAgents
Yeager.ai#
This page covers how to use Yeager.ai to generate LangChain tools and agents.
What is Yeager.ai?#
Yeager.ai is an ecosystem designed to simplify the process of creating AI agents and tools.
It features yAgents, a No-code LangChain Agent Builder, which enables users to build, test, and deploy AI solutions with ease. Leveraging the LangChain framework, yAgents allows seamless integration with various language models and resources, making it suitable for developers, researchers, and AI enthusiasts across diverse applications.
yAgents#
yAgents is a low-code generative agent designed to help you build, prototype, and deploy LangChain tools with ease.
How to use?#
pip install yeagerai-agent
yeagerai-agent
Go to http://127.0.0.1:7860
This will install the necessary dependencies and set up yAgents on your system. After the first run, yAgents will create a .env file where you can input your OpenAI API key. You can do the same directly from the Gradio interface under the tab “Settings”.
OPENAI_API_KEY=<your_openai_api_key_here>
We recommend using GPT-4. However, the tool can also work with GPT-3 if the problem is broken down sufficiently.
Creating and Executing Tools with yAgents#
yAgents makes it easy to create and execute AI-powered tools. Here’s a brief overview of the process:
Create a tool: To create a tool, provide a natural language prompt to yAgents. The prompt should clearly describe the tool’s purpose and functionality. For example:
create a tool that returns the n-th prime number
Load the tool into the toolkit: To load a tool into yAgents, simply provide a command to yAgents that says so. For example:
load the tool that you just created into your toolkit
Execute the tool: To run a tool or agent, simply provide a command to yAgents that includes the name of the tool and any required parameters. For example:
generate the 50th prime number
You can see a video of how it works here.
As you become more familiar with yAgents, you can create more advanced tools and agents to automate your work and enhance your productivity.
For more information, see yAgents’ Github or our docs
OpenAI
Contents
Installation and Setup
Wrappers
LLM
Embeddings
Tokenizer
Moderation
OpenAI#
This page covers how to use the OpenAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific OpenAI wrappers.
Installation and Setup#
Install the Python SDK with pip install openai
Get an OpenAI api key and set it as an environment variable (OPENAI_API_KEY)
If you want to use OpenAI’s tokenizer (only available for Python 3.9+), install it with pip install tiktoken
Wrappers#
LLM#
There exists an OpenAI LLM wrapper, which you can access with
from langchain.llms import OpenAI
If you are using a model hosted on Azure, you should use a different wrapper for that:
from langchain.llms import AzureOpenAI
For a more detailed walkthrough of the Azure wrapper, see this notebook
Embeddings#
There exists an OpenAI Embeddings wrapper, which you can access with
from langchain.embeddings import OpenAIEmbeddings
For a more detailed walkthrough of this, see this notebook
Tokenizer#
There are several places you can use the tiktoken tokenizer. By default, it is used to count tokens
for OpenAI LLMs.
You can also use it to count tokens when splitting documents with
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_tiktoken_encoder(...)
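A minimal sketch of token-based splitting (the chunk sizes are illustrative, and long_document is assumed to hold your text):
from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=100, chunk_overlap=0
)
chunks = text_splitter.split_text(long_document)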
For a more detailed walkthrough of this, see this notebook
Moderation#
You can also access the OpenAI content moderation endpoint with
from langchain.chains import OpenAIModerationChain
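A minimal sketch (assumes OPENAI_API_KEY is set); by default the chain returns the text unchanged when it passes moderation:
from langchain.chains import OpenAIModerationChain

moderation_chain = OpenAIModerationChain()
print(moderation_chain.run("This is okay"))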
For a more detailed walkthrough of this, see this notebook
Comet
Contents
Install Comet and Dependencies
Initialize Comet and Set your Credentials
Set OpenAI and SerpAPI credentials
Scenario 1: Using just an LLM
Scenario 2: Using an LLM in a Chain
Scenario 3: Using An Agent with Tools
Scenario 4: Using Custom Evaluation Metrics
Comet#
In this guide we will demonstrate how to track your LangChain experiments, evaluation metrics, and LLM sessions with Comet.
Example Project: Comet with LangChain
Install Comet and Dependencies#
%pip install comet_ml langchain openai google-search-results spacy textstat pandas
import sys
!{sys.executable} -m spacy download en_core_web_sm
Initialize Comet and Set your Credentials#
You can grab your Comet API Key here or click the link after initializing Comet
import comet_ml
comet_ml.init(project_name="comet-example-langchain")
Set OpenAI and SerpAPI credentials#
You will need an OpenAI API Key and a SerpAPI API Key to run the following examples
import os
os.environ["OPENAI_API_KEY"] = "..."
#os.environ["OPENAI_ORGANIZATION"] = "..."
os.environ["SERPAPI_API_KEY"] = "..."
Scenario 1: Using just an LLM#
from datetime import datetime
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI
comet_callback = CometCallbackHandler(
    project_name="comet-example-langchain",
    complexity_metrics=True,
    stream_logs=True,
    tags=["llm"],
    visualizations=["dep"],
)
callbacks = [StdOutCallbackHandler(), comet_callback]
llm = OpenAI(temperature=0.9, callbacks=callbacks, verbose=True)
llm_result = llm.generate(["Tell me a joke", "Tell me a poem", "Tell me a fact"] * 3)
print("LLM result", llm_result)
comet_callback.flush_tracker(llm, finish=True)
Scenario 2: Using an LLM in a Chain#
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
comet_callback = CometCallbackHandler(
    complexity_metrics=True,
    project_name="comet-example-langchain",
    stream_logs=True,
    tags=["synopsis-chain"],
)
callbacks = [StdOutCallbackHandler(), comet_callback]
llm = OpenAI(temperature=0.9, callbacks=callbacks)
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)
test_prompts = [{"title": "Documentary about Bigfoot in Paris"}]
print(synopsis_chain.apply(test_prompts))
comet_callback.flush_tracker(synopsis_chain, finish=True)
Scenario 3: Using An Agent with Tools#
from langchain.agents import initialize_agent, load_tools
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI
comet_callback = CometCallbackHandler(
    project_name="comet-example-langchain",
    complexity_metrics=True,
    stream_logs=True,
    tags=["agent"],
)
callbacks = [StdOutCallbackHandler(), comet_callback]
llm = OpenAI(temperature=0.9, callbacks=callbacks)
tools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=callbacks)
agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    callbacks=callbacks,
    verbose=True,
)
agent.run(
    "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
)
comet_callback.flush_tracker(agent, finish=True)
Scenario 4: Using Custom Evaluation Metrics#
The CometCallbackManager also allows you to define and use Custom Evaluation Metrics to assess generated outputs from your model. Let’s take a look at how this works.
In the snippet below, we will use the ROUGE metric to evaluate the quality of a generated summary of an input prompt.
%pip install rouge-score
from rouge_score import rouge_scorer
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
class Rouge:
    def __init__(self, reference):
        self.reference = reference
        self.scorer = rouge_scorer.RougeScorer(["rougeLsum"], use_stemmer=True)

    def compute_metric(self, generation, prompt_idx, gen_idx):
        prediction = generation.text
        results = self.scorer.score(target=self.reference, prediction=prediction)
        return {
            "rougeLsum_score": results["rougeLsum"].fmeasure,
            "reference": self.reference,
        }
reference = """
The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building.
It was the first structure to reach a height of 300 metres.
It is now taller than the Chrysler Building in New York City by 5.2 metres (17 ft)
Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France .
"""
rouge_score = Rouge(reference=reference)
template = """Given the following article, it is your job to write a summary.
Article:
{article}
Summary: This is the summary for the above article:"""
prompt_template = PromptTemplate(input_variables=["article"], template=template)
comet_callback = CometCallbackHandler(
    project_name="comet-example-langchain",
    complexity_metrics=False,
    stream_logs=True,
    tags=["custom_metrics"],
    custom_metrics=rouge_score.compute_metric,
)
callbacks = [StdOutCallbackHandler(), comet_callback]
llm = OpenAI(temperature=0.9)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)
test_prompts = [
    {
        "article": """
        The tower is 324 metres (1,063 ft) tall, about the same height as
        an 81-storey building, and the tallest structure in Paris. Its base is square,
        measuring 125 metres (410 ft) on each side.
        During its construction, the Eiffel Tower surpassed the
        Washington Monument to become the tallest man-made structure in the world,
        a title it held for 41 years until the Chrysler Building
        in New York City was finished in 1930.
        It was the first structure to reach a height of 300 metres.
        Due to the addition of a broadcasting aerial at the top of the tower in 1957,
        it is now taller than the Chrysler Building by 5.2 metres (17 ft).
        Excluding transmitters, the Eiffel Tower is the second tallest
        free-standing structure in France after the Millau Viaduct.
        """
    }
]
print(synopsis_chain.apply(test_prompts, callbacks=callbacks))
comet_callback.flush_tracker(synopsis_chain, finish=True)
SerpAPI
Contents
Installation and Setup
Wrappers
Utility
Tool
SerpAPI#
This page covers how to use the SerpAPI search APIs within LangChain.
It is broken into two parts: installation and setup, and then references to the specific SerpAPI wrapper.
Installation and Setup#
Install requirements with pip install google-search-results
Get a SerpAPI API key and set it as an environment variable (SERPAPI_API_KEY)
Wrappers#
Utility#
There exists a SerpAPI utility which wraps this API. To import this utility:
from langchain.utilities import SerpAPIWrapper
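A minimal usage sketch (assumes SERPAPI_API_KEY is set):
from langchain.utilities import SerpAPIWrapper

search = SerpAPIWrapper()
print(search.run("Obama's first name?"))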
For a more detailed walkthrough of this wrapper, see this notebook.
Tool#
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
from langchain.agents import load_tools
tools = load_tools(["serpapi"])
For more information on this, see this page
Qdrant
Contents
Installation and Setup
Wrappers
VectorStore
Qdrant#
This page covers how to use the Qdrant ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Qdrant wrappers.
Installation and Setup#
Install the Python SDK with pip install qdrant-client
Wrappers#
VectorStore#
There exists a wrapper around Qdrant indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Qdrant
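A minimal sketch using an in-memory Qdrant instance (assumes OPENAI_API_KEY is set for the embeddings; the collection name is illustrative):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Qdrant

db = Qdrant.from_texts(
    ["harrison worked at kensho"],
    OpenAIEmbeddings(),
    location=":memory:",  # local in-memory mode, no server required
    collection_name="my_documents",
)
docs = db.similarity_search("Where did harrison work?")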
For a more detailed walkthrough of the Qdrant wrapper, see this notebook
Helicone
Contents
What is Helicone?
Quick start
How to enable Helicone caching
How to use Helicone custom properties
Helicone#
This page covers how to use the Helicone ecosystem within LangChain.
What is Helicone?#
Helicone is an open source observability platform that proxies your OpenAI traffic and provides you key insights into your spend, latency and usage.
Quick start#
With your LangChain environment you can just add the following parameter.
export OPENAI_API_BASE="https://oai.hconeai.com/v1"
Now head over to helicone.ai to create your account, and add your OpenAI API key within our dashboard to view your logs.
How to enable Helicone caching#
from langchain.llms import OpenAI
import openai
openai.api_base = "https://oai.hconeai.com/v1"
llm = OpenAI(temperature=0.9, headers={"Helicone-Cache-Enabled": "true"})
text = "What is a helicone?"
print(llm(text))
Helicone caching docs
How to use Helicone custom properties#
from langchain.llms import OpenAI
import openai
openai.api_base = "https://oai.hconeai.com/v1"
llm = OpenAI(temperature=0.9, headers={
    "Helicone-Property-Session": "24",
    "Helicone-Property-Conversation": "support_issue_2",
    "Helicone-Property-App": "mobile",
})
text = "What is a helicone?"
print(llm(text))
Helicone property docs
ClearML Integration
Contents
Getting API Credentials
Setting Up
Scenario 1: Just an LLM
Scenario 2: Creating an agent with tools
Tips and Next Steps
ClearML Integration#
In order to properly keep track of your LangChain experiments and their results, you can enable the ClearML integration. ClearML is an experiment manager that neatly tracks and organizes all your experiment runs.
Getting API Credentials#
We'll be using several APIs in this notebook; here is a list and where to get them:
ClearML: https://app.clear.ml/settings/workspace-configuration
OpenAI: https://platform.openai.com/account/api-keys
SerpAPI (google search): https://serpapi.com/dashboard
import os
os.environ["CLEARML_API_ACCESS_KEY"] = ""
os.environ["CLEARML_API_SECRET_KEY"] = ""
os.environ["OPENAI_API_KEY"] = ""
os.environ["SERPAPI_API_KEY"] = ""
Setting Up#
!pip install clearml
!pip install pandas
!pip install textstat
!pip install spacy
!python -m spacy download en_core_web_sm
from datetime import datetime
from langchain.callbacks import ClearMLCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI
# Setup and use the ClearML Callback
clearml_callback = ClearMLCallbackHandler(
    task_type="inference",
    project_name="langchain_callback_demo",
    task_name="llm",
    tags=["test"],
    # Change the following parameters based on the amount of detail you want tracked
    visualize=True,
    complexity_metrics=True,
    stream_logs=True
)
callbacks = [StdOutCallbackHandler(), clearml_callback]
# Get the OpenAI model ready to go
llm = OpenAI(temperature=0, callbacks=callbacks)
The clearml callback is currently in beta and is subject to change based on updates to `langchain`. Please report any issues to https://github.com/allegroai/clearml/issues with the tag `langchain`.
Scenario 1: Just an LLM#
First, let’s just run a single LLM a few times and capture the resulting prompt-answer conversation in ClearML
# SCENARIO 1 - LLM
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 3)
# After every generation run, use flush to make sure all the metrics
# prompts and other output are properly saved separately
clearml_callback.flush_tracker(langchain_asset=llm, name="simple_sequential")
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': -1.24, 'automated_readability_index': 0.3, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.58, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 62.3, 'crawford': -0.2, 'gulpease_index': 79.8, 'osman': 116.91}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 83.66, 'flesch_kincaid_grade': 4.8, 'smog_index': 0.0, 'coleman_liau_index': 3.23, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '6th and 7th grade', 'fernandez_huerta': 115.58, 'szigriszt_pazos': 112.37, 'gutierrez_polini': 54.83, 'crawford': 1.4, 'gulpease_index': 72.1, 'osman': 100.17}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': -1.24, 'automated_readability_index': 0.3, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.58, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 62.3, 'crawford': -0.2, 'gulpease_index': 79.8, 'osman': 116.91}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 83.66, 'flesch_kincaid_grade': 4.8, 'smog_index': 0.0, 'coleman_liau_index': 3.23, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '6th and 7th grade', 'fernandez_huerta': 115.58, 'szigriszt_pazos': 112.37, 'gutierrez_polini': 54.83, 'crawford': 1.4, 'gulpease_index': 72.1, 'osman': 100.17}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': -1.24, 'automated_readability_index': 0.3, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.58, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 62.3, 'crawford': -0.2, 'gulpease_index': 79.8, 'osman': 116.91}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 83.66, 'flesch_kincaid_grade': 4.8, 'smog_index': 0.0, 'coleman_liau_index': 3.23, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '6th and 7th grade', 'fernandez_huerta': 115.58, 'szigriszt_pazos': 112.37, 'gutierrez_polini': 54.83, 'crawford': 1.4, 'gulpease_index': 72.1, 'osman': 100.17}
{'action_records': action name step starts ends errors text_ctr chain_starts \
0 on_llm_start OpenAI 1 1 0 0 0 0
1 on_llm_start OpenAI 1 1 0 0 0 0
2 on_llm_start OpenAI 1 1 0 0 0 0
3 on_llm_start OpenAI 1 1 0 0 0 0
4 on_llm_start OpenAI 1 1 0 0 0 0
5 on_llm_start OpenAI 1 1 0 0 0 0
6 on_llm_end NaN 2 1 1 0 0 0
7 on_llm_end NaN 2 1 1 0 0 0
8 on_llm_end NaN 2 1 1 0 0 0
9 on_llm_end NaN 2 1 1 0 0 0
10 on_llm_end NaN 2 1 1 0 0 0
11 on_llm_end NaN 2 1 1 0 0 0
12 on_llm_start OpenAI 3 2 1 0 0 0
13 on_llm_start OpenAI 3 2 1 0 0 0
14 on_llm_start OpenAI 3 2 1 0 0 0
15 on_llm_start OpenAI 3 2 1 0 0 0
16 on_llm_start OpenAI 3 2 1 0 0 0
17 on_llm_start OpenAI 3 2 1 0 0 0
18 on_llm_end NaN 4 2 2 0 0 0
19 on_llm_end NaN 4 2 2 0 0 0
20 on_llm_end NaN 4 2 2 0 0 0
21 on_llm_end NaN 4 2 2 0 0 0
22 on_llm_end NaN 4 2 2 0 0 0
23 on_llm_end NaN 4 2 2 0 0 0
chain_ends llm_starts ... difficult_words linsear_write_formula \
0 0 1 ... NaN NaN
1 0 1 ... NaN NaN
2 0 1 ... NaN NaN
3 0 1 ... NaN NaN
4 0 1 ... NaN NaN
5 0 1 ... NaN NaN
6 0 1 ... 0.0 5.5
7 0 1 ... 2.0 6.5
8 0 1 ... 0.0 5.5
9 0 1 ... 2.0 6.5
10 0 1 ... 0.0 5.5
11 0 1 ... 2.0 6.5
12 0 2 ... NaN NaN
13 0 2 ... NaN NaN
14 0 2 ... NaN NaN
15 0 2 ... NaN NaN
16 0 2 ... NaN NaN
17 0 2 ... NaN NaN
18 0 2 ... 0.0 5.5
19 0 2 ... 2.0 6.5
20 0 2 ... 0.0 5.5
21 0 2 ... 2.0 6.5
22 0 2 ... 0.0 5.5
23 0 2 ... 2.0 6.5
gunning_fog text_standard fernandez_huerta szigriszt_pazos \
0 NaN NaN NaN NaN