| issue_owner_repo (list, lengths 2-2) | issue_body (string, lengths 0-261k, nullable ⌀) | issue_title (string, lengths 1-925) | issue_comments_url (string, lengths 56-81) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, length 20) | issue_updated_at (string, length 20) | issue_html_url (string, lengths 37-62) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
TL;DR: The use of exec() in agents can lead to remote code execution vulnerabilities. Some Huggingface projects use such agents, despite the potential harm of LLM-generated Python code.
#1026 and #814 discuss the security concerns regarding the use of `exec()` in llm_math chain. The comments in #1026 proposed methods to sandbox the code execution, but due to environmental issues, the code was patched to replace `exec()` with `numexpr.evaluate()` (#2943). This restricted the execution capabilities to mathematical functionalities only. This bug was assigned the CVE number CVE-2023-29374.
As shown in the above issues, the usage of `exec()` in a chain can pose a significant security risk, especially when the chain is running on a remote machine. This seems to be a common scenario for projects on Huggingface.
However, in the latest langchain, `exec()` is still used in `PythonReplTool` and `PythonAstReplTool`.
https://github.com/hwchase17/langchain/blob/aec642febb3daa7dbb6a19996aac2efa92bbf1bd/langchain/tools/python/tool.py#L55
https://github.com/hwchase17/langchain/blob/aec642febb3daa7dbb6a19996aac2efa92bbf1bd/langchain/tools/python/tool.py#L102
These functions are called by the Pandas Dataframe Agent, Spark Dataframe Agent, and CSV Agent. These agents seem intentionally designed to pass the LLM output to `PythonReplTool` or `PythonAstReplTool` to execute the LLM-generated code on the machine.
The documentation for these agents explicitly states that they should be used with caution since LLM-generated Python code can be potentially harmful. For instance:
https://github.com/hwchase17/langchain/blob/aec642febb3daa7dbb6a19996aac2efa92bbf1bd/docs/modules/agents/toolkits/examples/pandas.ipynb#L12
Despite this, I have observed several projects in Huggingface using `create_pandas_dataframe_agent` and `create_csv_agent`.
### Suggestion:
Fixing this issue the way it was fixed in the llm_math chain seems challenging.
Simply restricting the LLM-generated code to Pandas and Spark execution might not be sufficient, because there are still numerous malicious tasks that can be performed through those APIs. For instance, Pandas can read and write files, as the sketch below shows.
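For illustration, a minimal sketch of what "Pandas-only" code could still do if executed verbatim (all paths here are hypothetical):

```python
# Hypothetical LLM-generated "data analysis" code that an agent would exec():
import pandas as pd

secrets = pd.read_csv("/home/user/.aws/credentials", header=None)  # read an arbitrary file
secrets.to_csv("/tmp/exfiltrated.csv")  # write it somewhere an attacker can collect it
```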
Meanwhile, it seems crucial to emphasize the security concerns related to LLM-generated code for the overall security of LLM apps. Merely limiting execution to specific frameworks or APIs may not fully address the underlying security risks.
| Issue: security concerns with `exec()` via multiple agents and Shell tool | https://api.github.com/repos/langchain-ai/langchain/issues/5294/comments | 3 | 2023-05-26T11:38:23Z | 2024-03-13T16:12:29Z | https://github.com/langchain-ai/langchain/issues/5294 | 1,727,460,382 | 5,294 |
[
"hwchase17",
"langchain"
]
| ### Feature request
It would be very useful to support a history parameter, e.g.:
```python
history = [("who are you", "i'm an ai")]

llm = OpenAI()
llm("hello", history=history)

chain = LLMChain(llm=llm, prompt=prompt)
chain({"query": "hello", "history": history})
```
### Motivation
*
### Your contribution
* | support history in LLMChain and LLM | https://api.github.com/repos/langchain-ai/langchain/issues/5289/comments | 1 | 2023-05-26T09:11:16Z | 2023-09-10T16:11:37Z | https://github.com/langchain-ai/langchain/issues/5289 | 1,727,217,498 | 5,289 |
[
"hwchase17",
"langchain"
]
| ### System Info
```python
import os

from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
from langchain.chat_models import ChatOpenAI
from dotenv import load_dotenv

load_dotenv()
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")

db = SQLDatabase.from_uri("sqlite:///data/data.db")
llm = ChatOpenAI(temperature=0, verbose=True)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
res = db_chain.run("total sales by each region?")
print(res)
```

But if I use text-davinci, it generates a single result.
```python
import os

from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
from langchain.chat_models import ChatOpenAI
from dotenv import load_dotenv

load_dotenv()
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")

db = SQLDatabase.from_uri("sqlite:///data/data.db")
llm = OpenAI(temperature=0, verbose=True)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
res = db_chain.run("total sales by each region?")
print(res)
```

How can I overcome this issue with **ChatOpenAI**?
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os

from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
from langchain.chat_models import ChatOpenAI
from dotenv import load_dotenv

load_dotenv()
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")

db = SQLDatabase.from_uri("sqlite:///data/data.db")
llm = ChatOpenAI(temperature=0, verbose=True)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
res = db_chain.run("total sales by each region?")
print(res)
```
### Expected behavior
I need a single answer for the user's input query only. But after the answer, the chain adds another question by itself. The extra add-on question and query are not needed in **ChatOpenAI**. | SQL chain generates extra add on question if I use ChatOpenAI inplace of OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/5286/comments | 7 | 2023-05-26T06:48:08Z | 2023-10-16T22:55:29Z | https://github.com/langchain-ai/langchain/issues/5286 | 1,727,005,383 | 5,286 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hey guys, thanks for your amazing work. If I want to get dictionary-shaped SQL results instead of the default tuples in SQLDatabaseChain, what settings do I need to change?
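For reference, a sketch of how dict-shaped rows can be fetched directly with SQLAlchemy (the table and column names here are made up):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///data.db")
with engine.connect() as conn:
    result = conn.execute(text("SELECT region, SUM(sales) AS total FROM sales GROUP BY region"))
    rows = [dict(row) for row in result.mappings()]  # e.g. [{"region": "East", "total": 123}, ...]
```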
### Motivation
Without the table's column names attached to each row, the articles generated by the LLM may contain errors.
### Your contribution
I am currently diving into the code to see how to deal with it. | change tuple sql result to dict sql result | https://api.github.com/repos/langchain-ai/langchain/issues/5284/comments | 2 | 2023-05-26T05:27:36Z | 2023-09-18T16:10:51Z | https://github.com/langchain-ai/langchain/issues/5284 | 1,726,931,734 | 5,284 |
[
"hwchase17",
"langchain"
]
| ### System Info
windows
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The llm will generate content like the following in some cases:
```
Action 1: xxx
Action Input 1: xxx
Observation 1: xxx
```
```python
regex = (
    r"Action\s*\d*\s*:[\s]*(.*?)[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
)
```
The regex above, in `langchain.agents.mrkl.output_parser`, can match the Action and Action Input in the following scenario:
```
Action 1: xxx
Action Input 1: xxx
```
but the stop list is still `['\nObservation:', '\n\tObservation:']`, which cannot stop the LLM's generation, because the LLM will generate 'Observation 1: ...'.
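A minimal sketch of one possible direction (an assumption, not a tested patch): extend the stop list with numbered variants so it matches what the regex already accepts:

```python
stop = ["\nObservation:", "\n\tObservation:"]
# Cover numbered observations such as "Observation 1:" up to an arbitrary bound.
stop += [f"\nObservation {i}:" for i in range(1, 10)]
```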
### Expected behavior
Optimize the stop logic to solve this problem. | Stop logic should be optimized to be compatible with "Conversation 1:" | https://api.github.com/repos/langchain-ai/langchain/issues/5283/comments | 1 | 2023-05-26T05:21:00Z | 2023-09-10T16:11:42Z | https://github.com/langchain-ai/langchain/issues/5283 | 1,726,925,790 | 5,283 |
[
"hwchase17",
"langchain"
]
| ### Feature request
```python
loader = SitemapLoader(
    "https://langchain.readthedocs.io/sitemap.xml",
    filter_modified_dates=["2023-", "2022-12-"],
)
documents = loader.load()
```
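In the meantime, a workaround sketch that post-filters loaded documents, assuming the loader records the sitemap's lastmod value in each document's metadata (the key name here is a guess):

```python
documents = [
    doc
    for doc in loader.load()
    if str(doc.metadata.get("lastmod", "")).startswith(("2023-", "2022-12-"))
]
```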
### Motivation
Provide enhanced filtering on larger sites
### Your contribution
Provide enhanced filtering on larger sites | Sitemap - add filtering by modified date | https://api.github.com/repos/langchain-ai/langchain/issues/5280/comments | 1 | 2023-05-26T04:52:49Z | 2023-09-10T16:11:47Z | https://github.com/langchain-ai/langchain/issues/5280 | 1,726,903,889 | 5,280 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.180
google-cloud-aiplatform==1.25.0
Have the Google Cloud CLI installed and logged in using `gcloud auth login`
Running locally and online in Google Colab
### Who can help?
@hwchase17 @hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://colab.research.google.com/drive/19QGMptiCn49fu4i5ZQ0ygfR74ktQFQlb?usp=sharing
The unexpected behavior `field "credentials" not yet prepared so type is still a ForwardRef, you might need to call VertexAI.update_forward_refs().` seems to appear only if you pass any credential, valid or invalid, to the VertexAI wrapper from langchain.
### The error
This code should not throw `field "credentials" not yet prepared so type is still a ForwardRef, you might need to call VertexAI.update_forward_refs().`. It should either throw no errors, if the credentials, project_id, and location are correct, or, if there is an issue with one of the params, throw a specific error from the `vertexai.init` call below; but it doesn't seem to reach that call if a credential is passed in.
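As an aside, a sketch of the workaround the error message itself suggests (the service-account path and project values are hypothetical, and this is untested here):

```python
from google.oauth2 import service_account
from langchain.llms import VertexAI

VertexAI.update_forward_refs()  # resolve the pydantic ForwardRef before instantiating

credentials = service_account.Credentials.from_service_account_file("sa-key.json")
llm = VertexAI(project="my-project", location="us-central1", credentials=credentials)
```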
```python
vertexai.init(project=project_id, location=location, credentials=credentials)
``` | Issue Passing in Credential to VertexAI model | https://api.github.com/repos/langchain-ai/langchain/issues/5279/comments | 0 | 2023-05-26T04:34:54Z | 2023-05-26T15:31:04Z | https://github.com/langchain-ai/langchain/issues/5279 | 1,726,889,243 | 5,279 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.180
python==3.10
google-cloud-aiplatform==1.25.0
### Who can help?
@hwc
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
It's debatable whether this is a bug or a missing feature, but I'd argue the Vertex implementation is missing an important element, even though I'm excited to have the support now.
Using the [VertexAI documentation for chat](https://cloud.google.com/vertex-ai/docs/generative-ai/chat/test-chat-prompts), you can initialise the chat model like the below (emphasis mine).
The list of "examples" functions as a separate instruction (few-shot), not as part of the chat history. This is different from how OpenAI does it.
The current langchain implementation doesn't seem to have an option to submit examples, instead it combining all messages in the chat-history. That would lead to unexpected results if you used if for your examples.
```python
def chat_question(context=None, examples=[], chat_instruction=None):
    chat_model = ChatModel.from_pretrained("chat-bison@001")
    parameters = {
        "temperature": 0.0,
        "max_output_tokens": 300,
        "top_p": 0.3,
        "top_k": 3,
    }
    chat = chat_model.start_chat(
        context=context,
        examples=examples,  # <-- emphasis mine: examples are passed separately from history
    )
    response = chat.send_message(chat_instruction, **parameters)
    return response
```
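For reference, a sketch of how the Vertex SDK represents such examples (the exact module path may differ between SDK versions):

```python
from vertexai.preview.language_models import InputOutputTextPair

examples = [
    InputOutputTextPair(
        input_text="What is the capital of France?",
        output_text="Paris",
    ),
]
```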
### Expected behavior
Allow for a set of examples to be passed in when setting up the ChatVertexAI or when using the chat() function.
Apologies if I've missed a way to do this. | VertexAI ChatModel implementation misses few-shot "examples" | https://api.github.com/repos/langchain-ai/langchain/issues/5278/comments | 1 | 2023-05-26T04:02:46Z | 2023-09-15T22:13:02Z | https://github.com/langchain-ai/langchain/issues/5278 | 1,726,867,648 | 5,278 |
[
"hwchase17",
"langchain"
]
| ### System Info
Cannot specify both model and engine
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. I create `ClientChatOpenAI`; the code looks like this:
```python
"""Azure OpenAI chat wrapper."""
from __future__ import annotations

import logging
from typing import Any, Dict

from pydantic import root_validator

from langchain.chat_models.openai import ChatOpenAI
from langchain.utils import get_from_dict_or_env

logger = logging.getLogger(__name__)


class ClientChatOpenAI(ChatOpenAI):
    deployment_name: str = ""
    openai_api_base: str = ""
    openai_api_key: str = ""
    openai_organization: str = ""

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exists in environment."""
        openai_api_key = get_from_dict_or_env(
            values,
            "openai_api_key",
            "OPENAI_API_KEY",
        )
        openai_api_base = get_from_dict_or_env(
            values,
            "openai_api_base",
            "OPENAI_API_BASE",
        )
        openai_organization = get_from_dict_or_env(
            values,
            "openai_organization",
            "OPENAI_ORGANIZATION",
            default="",
        )
        try:
            import openai

            openai.api_base = openai_api_base
            openai.api_key = openai_api_key
            if openai_organization:
                openai.organization = openai_organization
        except ImportError:
            raise ValueError(
                "Could not import openai python package. "
                "Please install it with `pip install openai`."
            )
        try:
            values["client"] = openai.ChatCompletion
        except AttributeError:
            raise ValueError(
                "`openai` has no `ChatCompletion` attribute, this is likely "
                "due to an old version of the openai package. Try upgrading it "
                "with `pip install --upgrade openai`."
            )
        if values["n"] < 1:
            raise ValueError("n must be at least 1.")
        if values["n"] > 1 and values["streaming"]:
            raise ValueError("n must be 1 when streaming.")
        return values

    @property
    def _default_params(self) -> Dict[str, Any]:
        """Get the default parameters for calling OpenAI API."""
        return {
            **super()._default_params,
            "engine": self.deployment_name,
        }
```
2. Then I use it like this:
```python
chat = ClientChatOpenAI(
    temperature=0,
    streaming=True,
    openai_api_key=os.getenv("OPENAI_CONFIG_0_API_KEY"),
    openai_api_base=os.getenv("OPENAI_CONFIG_0_END_POINT"),
)
batch_messages = [
    [SystemMessage(content="You are an AI assistant."), HumanMessage(content=chat_request.prompts)],
]
result = chat.generate(batch_messages)
print(result.llm_output["token_usage"])
return result
```
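A possible cause, stated as an assumption rather than a confirmed diagnosis: `ChatOpenAI`'s default params already include `model`, so adding `engine` on top of them makes the `openai` library raise "Cannot specify both model and engine". A sketch of a fix along those lines, inside the class above:

```python
@property
def _default_params(self) -> Dict[str, Any]:
    """Get the default parameters for calling the Azure OpenAI API."""
    params = {**super()._default_params, "engine": self.deployment_name}
    params.pop("model", None)  # avoid sending both `model` and `engine`
    return params
```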
### Expected behavior
I think the code is OK; it should run without this error. | when i create ClientChatOpenAI error | https://api.github.com/repos/langchain-ai/langchain/issues/5277/comments | 1 | 2023-05-26T03:42:55Z | 2023-09-10T16:11:53Z | https://github.com/langchain-ai/langchain/issues/5277 | 1,726,855,450 | 5,277 |
[
"hwchase17",
"langchain"
]
| ### Feature request
In the JS SDK of Milvus, there is a function to query documents from an existing collection, while in the Python SDK this function is not available. Instead, the collection has to be constructed like this:
```python
vector_db = Milvus.from_documents(
    docs,
    embeddings,
    connection_args={"host": "127.0.0.1", "port": "19530"},
)
```
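A sketch of what querying an existing collection could look like, mirroring the call above (whether the constructor accepts these arguments may vary by version, so treat this as an assumption):

```python
vector_db = Milvus(
    embedding_function=embeddings,
    collection_name="my_collection",  # hypothetical pre-existing collection
    connection_args={"host": "127.0.0.1", "port": "19530"},
)
docs = vector_db.similarity_search("first question")
more_docs = vector_db.similarity_search("second question")
```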
### Motivation
I cannot ask multiple questions without reconstructing (and re-embedding) the collection each time.
### Your contribution
no | python SDK can't query documents from an existing collection | https://api.github.com/repos/langchain-ai/langchain/issues/5276/comments | 2 | 2023-05-26T03:40:09Z | 2023-06-01T00:28:00Z | https://github.com/langchain-ai/langchain/issues/5276 | 1,726,853,363 | 5,276 |
[
"hwchase17",
"langchain"
]
| ### System Info
When I initialise ChatVertexAI in a FastAPI app, the thread pool never returns to idle, blocking the server and producing the error below:
`E0526 10:18:51.289447000 4300375424 thread_pool.cc:230] Waiting for thread pool to idle before forking`
on langchain 0.0.180
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Initialise ChatVertexAI in a FastAPI app; ChatOpenAI works fine.
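A minimal reproduction sketch (the app structure here is assumed, not taken from the report):

```python
from fastapi import FastAPI
from langchain.chat_models import ChatVertexAI

app = FastAPI()
chat = ChatVertexAI()  # initialising the Vertex chat model inside the app process

@app.get("/ping")
def ping():
    return {"ok": True}
```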
### Expected behavior
Don't error | When initializing ChatVertexAI fastapi thread pool becomes unaccessible | https://api.github.com/repos/langchain-ai/langchain/issues/5275/comments | 2 | 2023-05-26T00:48:15Z | 2023-09-10T16:11:57Z | https://github.com/langchain-ai/langchain/issues/5275 | 1,726,669,439 | 5,275 |
[
"hwchase17",
"langchain"
]
| https://github.com/hwchase17/langchain/blob/7652d2abb01208fd51115e34e18b066824e7d921/langchain/agents/mrkl/output_parser.py#L47
Due to the line above, the `ShellTool` fails when used with the `ZeroShotAgent`. Using `langchain.OpenAI` as the `llm`, I encountered a scenario where ChatGPT provides a string surrounded by single quotes for `Action Input:`. This causes the ShellTool not to recognize the input command, because it is surrounded by single quotes which aren't stripped (I get a command-not-found error). This could easily be fixed by stripping single quotes from `action_input`.
```
return AgentAction(action, action_input.strip(" ").strip('"').strip("'"), text)
``` | ZeroShotAgent fails with ShellTool due to quotes in llm output | https://api.github.com/repos/langchain-ai/langchain/issues/5271/comments | 3 | 2023-05-25T22:18:12Z | 2023-10-08T16:06:56Z | https://github.com/langchain-ai/langchain/issues/5271 | 1,726,558,628 | 5,271 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.180
google-cloud-aiplatform==1.25.0
SQLAlchemy==2.0.15
duckdb==0.8.0
duckdb-engine==0.7.3
Running inside GCP Vertex AI Notebook (Jupyter Lab essentially jupyterlab==3.4.8)
python 3.7
### Who can help?
@Jflick58
@lkuligin
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create the Vertex AI LLM (using the latest version of LangChain):
```python
from langchain.llms import VertexAI

palmllm = VertexAI(
    model_name="text-bison@001",
    max_output_tokens=256,
    temperature=0.2,
    top_p=0.1,
    top_k=40,
    verbose=True,
)
```
2. Set up the db engine, duckdb in this case:
```python
engine = create_engine("duckdb:///dw.db")
```
3. Then create the chain using SQLDatabaseChain (note the use of `use_query_checker=True`):
```python
# Setup the DB
db = SQLDatabase(engine=engine, metadata=MetaData(bind=engine), include_tables=[table_name])
# Setup the chain
db_chain = SQLDatabaseChain.from_llm(palmllm, db, verbose=True, use_query_checker=True, prompt=PROMPT, return_intermediate_steps=True, top_k=3)
```
4. Run a query against the chain. Notice the `SQLQuery:` line reads "The query is correct." It is as if the chain is trying to execute "The query is correct." as SQL:
```
> Entering new SQLDatabaseChain chain...
How many countries are there
SQLQuery:The query is correct.
```
This is the error returned:
```
ProgrammingError: (duckdb.ParserException) Parser Error: syntax error at or near "The"
LINE 1: The query is correct.
        ^
[SQL: The query is correct.]
(Background on this error at: https://sqlalche.me/e/14/f405)
```
IMPORTANT:
- If I remove `use_query_checker=True`, then everything works well.
- If I use the OpenAI LLM and don't change anything else, then it works with the `use_query_checker=True` setting.
This relates to [#5049](https://github.com/hwchase17/langchain/pull/5049)
### Expected behavior
I believe the intention of the `use_query_checker=True` flag is to validate the SQL and allow the chain to recover from a simple syntax error. From the trace above, it looks like the checker LLM replies with the sentence "The query is correct." instead of re-emitting the SQL, and the chain then executes that raw reply as SQL. | use_query_checker for VertexAI fails | https://api.github.com/repos/langchain-ai/langchain/issues/5270/comments | 5 | 2023-05-25T21:22:26Z | 2023-10-05T16:09:44Z | https://github.com/langchain-ai/langchain/issues/5270 | 1,726,507,623 | 5,270 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.180
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [x] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm running the example from docs: https://python.langchain.com/en/latest/modules/agents/toolkits/examples/pandas.html.
`agent.run("how many people are 28 years old?")`
gives:
```
> Entering new AgentExecutor chain...
Thought: I need to use the `df` dataframe to find how many people are 28 years old.
Action: python_repl_ast
Action Input: df['Age'] == 28
Observation: 0
Thought: There are no people 28 years old.
Final Answer: 0
```
In other cases, the Action Input the LLM calculates is correct, but the observation (result of applying this action on the dataframe) is incorrect. This makes me believe that the LLM isn't at fault here.
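For comparison, a direct pandas check of the same question (the CSV file name is hypothetical; the expected count of 25 comes from the report below):

```python
import pandas as pd

df = pd.read_csv("titanic.csv")  # stand-in for the dataframe given to the agent
print((df["Age"] == 28).sum())  # count rows where Age == 28
```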
### Expected behavior
Should return 25. | pandas dataframe agent generates correct Action Input, but returns incorrect result | https://api.github.com/repos/langchain-ai/langchain/issues/5269/comments | 11 | 2023-05-25T21:03:00Z | 2024-06-04T21:03:44Z | https://github.com/langchain-ai/langchain/issues/5269 | 1,726,486,276 | 5,269 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi everybody,
I'm working with an LLM setup inspired by @pelyhe's implementation in #4573.
It uses a RetrievalQA that queries a persistent embedded ChromaDB, then feeds it into a ConversationalChatAgent and then an AgentExecutor.
Currently, this setup works only for basic situations which have nothing to do with documents. Once I ask it something document-relevant, it gives an empty response. I have a nagging suspicion I've simply wired things up incorrectly, but it's not clear how to fix it.
```python
@st.cache_resource
def load_agent():
    vectorstore = Chroma(persist_directory=CHROMA_DIR)
    basic_prompt_template = """If the context is not relevant,
please answer the question by using your own knowledge about the topic.
###Context:
{context}
###Human:
{question}
###Assistant:
"""
    prompt = PromptTemplate(
        template=basic_prompt_template, input_variables=["context", "question"]
    )
    system_msg = "You are a helpful assistant."
    chain_type_kwargs = {"prompt": prompt}

    # Time to initialize the LLM, as late as possible so everything not requiring the LLM instance can fail fast
    llm = GPT4All(
        model=MODEL,
        verbose=True,
    )

    # Initialise QA chain for document-relevant queries
    qa = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type="stuff",
        retriever=vectorstore.as_retriever(),
        chain_type_kwargs=chain_type_kwargs,
    )
    tools = [
        Tool(
            name="Document tool",
            func=qa.run,
            description="useful for when you need to answer questions from documents.",
        ),
    ]
    agent = ConversationalChatAgent.from_llm_and_tools(
        llm=llm, tools=tools, system_message=system_msg, verbose=True
    )
    return AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        verbose=True,
        memory=ConversationBufferMemory(
            memory_key="chat_history", return_messages=True
        ),
    )


agent = load_agent()

###########################
# Streamlit UI operation. #
###########################

if "generated" not in st.session_state:
    st.session_state["generated"] = []
if "past" not in st.session_state:
    st.session_state["past"] = []


def get_text():
    input_text = st.text_input(label="", key="question")
    return input_text


user_input = get_text()

if user_input:
    try:
        output = agent.run(input=user_input)
    except ValueError as e:
        output = str(e)
        if not output.startswith("Could not parse LLM output: "):
            raise Exception(output)
        output = output.removeprefix("Could not parse LLM output: ").removesuffix("`")
    st.session_state.past.append(user_input)
    st.session_state.generated.append(output)

if st.session_state["generated"]:
    for i in range(len(st.session_state["generated"]) - 1, -1, -1):
        message(st.session_state["generated"][i], key=str(i))
        message(st.session_state["past"][i], is_user=True, key=str(i) + "_user")
```
### Suggestion:
_No response_ | Issue: RetrievalQA -> ConversationalChatAgent -> AgentExecutor gives no response if document-related | https://api.github.com/repos/langchain-ai/langchain/issues/5266/comments | 11 | 2023-05-25T19:58:01Z | 2023-09-18T16:10:56Z | https://github.com/langchain-ai/langchain/issues/5266 | 1,726,411,036 | 5,266 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
langchain==0.0.180
Python 3.11.3 (tags/v3.11.3:f3909b8, Apr 4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)] on win32
Windows 11
```
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Python Code
```
from langchain.document_loaders import UnstructuredMarkdownLoader
markdown_path = r"Pyspy.md"
loader = UnstructuredMarkdownLoader(markdown_path)
data = loader.load()
```
Markdown file `Pyspy.md`
````
```
.pip/bin/py-spy top -p 70
```
````
### Expected behavior
It should result in a `List[Document]` in `data`. | UnstructuredMarkdownLoader resulting in `zipfile.BadZipFile: File is not a zip file` | https://api.github.com/repos/langchain-ai/langchain/issues/5264/comments | 10 | 2023-05-25T18:59:18Z | 2023-11-29T17:55:26Z | https://github.com/langchain-ai/langchain/issues/5264 | 1,726,337,382 | 5,264 |
[
"hwchase17",
"langchain"
]
| ```
~\Anaconda3\lib\site-packages\langchain\memory\vectorstore.py in save_context(self, inputs, outputs)
67 """Save context from this conversation to buffer."""
68 documents = self._form_documents(inputs, outputs)
---> 69 self.retriever.add_documents(documents)
70
71 def clear(self) -> None:
~\Anaconda3\lib\site-packages\langchain\vectorstores\base.py in add_documents(self, documents, **kwargs)
413 def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
414 """Add documents to vectorstore."""
--> 415 return self.vectorstore.add_documents(documents, **kwargs)
416
417 async def aadd_documents(
~\Anaconda3\lib\site-packages\langchain\vectorstores\base.py in add_documents(self, documents, **kwargs)
60 texts = [doc.page_content for doc in documents]
61 metadatas = [doc.metadata for doc in documents]
---> 62 return self.add_texts(texts, metadatas, **kwargs)
63
64 async def aadd_documents(
~\Anaconda3\lib\site-packages\langchain\vectorstores\faiss.py in add_texts(self, texts, metadatas, ids, **kwargs)
150 # Embed and create the documents.
151 embeddings = [self.embedding_function(text) for text in texts]
--> 152 return self.__add(texts, embeddings, metadatas=metadatas, ids=ids, **kwargs)
153
154 def add_embeddings(
~\Anaconda3\lib\site-packages\langchain\vectorstores\faiss.py in __add(self, texts, embeddings, metadatas, ids, **kwargs)
117 if self._normalize_L2:
118 faiss.normalize_L2(vector)
--> 119 self.index.add(vector)
120 # Get list of index, id, and docs.
121 full_info = [(starting_len + i, ids[i], doc) for i, doc in enumerate(documents)]
~\Anaconda3\lib\site-packages\faiss\class_wrappers.py in replacement_add(self, x)
226
227 n, d = x.shape
--> 228 assert d == self.d
229 x = np.ascontiguousarray(x, dtype='float32')
230 self.add_c(n, swig_ptr(x))
AssertionError:
``` | Assertion Error when using VertexAIEmbeddings with faiss vectorstore | https://api.github.com/repos/langchain-ai/langchain/issues/5262/comments | 6 | 2023-05-25T18:28:41Z | 2023-12-20T19:12:22Z | https://github.com/langchain-ai/langchain/issues/5262 | 1,726,293,593 | 5,262 |
[
"hwchase17",
"langchain"
]
| ```
~\Anaconda3\lib\site-packages\langchain\chat_models\base.py in generate_prompt(self, prompts, stop, callbacks)
141 ) -> LLMResult:
142 prompt_messages = [p.to_messages() for p in prompts]
--> 143 return self.generate(prompt_messages, stop=stop, callbacks=callbacks)
144
145 async def agenerate_prompt(
~\Anaconda3\lib\site-packages\langchain\chat_models\base.py in generate(self, messages, stop, callbacks)
89 except (KeyboardInterrupt, Exception) as e:
90 run_manager.on_llm_error(e)
---> 91 raise e
92 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
93 generations = [res.generations for res in results]
~\Anaconda3\lib\site-packages\langchain\chat_models\base.py in generate(self, messages, stop, callbacks)
81 )
82 try:
---> 83 results = [
84 self._generate(m, stop=stop, run_manager=run_manager)
85 if new_arg_supported
~\Anaconda3\lib\site-packages\langchain\chat_models\base.py in <listcomp>(.0)
82 try:
83 results = [
---> 84 self._generate(m, stop=stop, run_manager=run_manager)
85 if new_arg_supported
86 else self._generate(m, stop=stop)
~\Anaconda3\lib\site-packages\langchain\chat_models\vertexai.py in _generate(self, messages, stop, run_manager)
123 for pair in history.history:
124 chat._history.append((pair.question.content, pair.answer.content))
--> 125 response = chat.send_message(question.content)
126 text = self._enforce_stop_words(response.text, stop)
127 return ChatResult(generations=[ChatGeneration(message=AIMessage(content=text))])
~\Anaconda3\lib\site-packages\vertexai\language_models\_language_models.py in send_message(self, message, max_output_tokens, temperature, top_k, top_p)
676 ]
677
--> 678 prediction_response = self._model._endpoint.predict(
679 instances=[prediction_instance],
680 parameters=prediction_parameters,
~\Anaconda3\lib\site-packages\google\cloud\aiplatform\models.py in predict(self, instances, parameters, timeout, use_raw_predict)
1544 )
1545 else:
-> 1546 prediction_response = self._prediction_client.predict(
1547 endpoint=self._gca_resource.name,
1548 instances=instances,
~\Anaconda3\lib\site-packages\google\cloud\aiplatform_v1\services\prediction_service\client.py in predict(self, request, endpoint, instances, parameters, retry, timeout, metadata)
600
601 # Send the request.
--> 602 response = rpc(
603 request,
604 retry=retry,
~\Anaconda3\lib\site-packages\google\api_core\gapic_v1\method.py in __call__(self, timeout, retry, *args, **kwargs)
111 kwargs["metadata"] = metadata
112
--> 113 return wrapped_func(*args, **kwargs)
114
115
~\Anaconda3\lib\site-packages\google\api_core\grpc_helpers.py in error_remapped_callable(*args, **kwargs)
72 return callable_(*args, **kwargs)
73 except grpc.RpcError as exc:
---> 74 raise exceptions.from_grpc_error(exc) from exc
75
76 return error_remapped_callable
InternalServerError: 500 Internal error encountered.
``` | Internal error encountered when using VertexAI in ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/5258/comments | 1 | 2023-05-25T16:38:50Z | 2023-09-10T16:12:07Z | https://github.com/langchain-ai/langchain/issues/5258 | 1,726,151,388 | 5,258 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Would be amazing to scan and get all the contents from the Github API, such as PRs, Issues and Discussions.
### Motivation
This would allow asking questions about the history of the project, issues that other users might have found, and much more!
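For illustration, a sketch of what a loader could look like, using the PyGithub package (the loader shape is hypothetical, not an existing LangChain API):

```python
from github import Github  # pip install PyGithub
from langchain.docstore.document import Document

def load_issues(repo_full_name: str, token: str):
    gh = Github(token)
    repo = gh.get_repo(repo_full_name)
    docs = []
    for issue in repo.get_issues(state="all"):
        docs.append(
            Document(
                page_content=issue.body or "",
                metadata={"title": issue.title, "url": issue.html_url},
            )
        )
    return docs
```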
### Your contribution
Not really a python developer here, would take me a while to figure out all the changes required. | Github integration | https://api.github.com/repos/langchain-ai/langchain/issues/5257/comments | 11 | 2023-05-25T16:27:21Z | 2023-11-29T21:21:01Z | https://github.com/langchain-ai/langchain/issues/5257 | 1,726,136,467 | 5,257 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Integration with LocalAI and with its extended endpoints to download models from the gallery.
### Motivation
LocalAI is a self-hosted OpenAI drop-in replacement with support for multiple model families: https://github.com/go-skynet/LocalAI
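Since LocalAI is API-compatible with OpenAI, a minimal sketch of pointing the existing OpenAI wrapper at it (the endpoint and model name are assumptions):

```python
from langchain.llms import OpenAI

llm = OpenAI(
    openai_api_base="http://localhost:8080/v1",  # LocalAI's default endpoint (assumed)
    openai_api_key="not-needed",                 # LocalAI typically doesn't check the key
    model_name="ggml-gpt4all-j",                 # hypothetical locally downloaded model
)
print(llm("Hello"))
```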
### Your contribution
Not a python guru, so might take few cycles away here. | Add integration for LocalAI | https://api.github.com/repos/langchain-ai/langchain/issues/5256/comments | 7 | 2023-05-25T16:25:18Z | 2024-05-03T16:04:00Z | https://github.com/langchain-ai/langchain/issues/5256 | 1,726,133,919 | 5,256 |
[
"hwchase17",
"langchain"
]
| ### Question
Will there be future updates where we are allowed to customize answer_gen_llm when using FlareChain?
### Context
In the [documentation](https://python.langchain.com/en/latest/modules/chains/examples/flare.html) it says that:
In order to set up this chain, we will need three things:
- An LLM to generate the answer
- An LLM to generate hypothetical questions to use in retrieval
- A retriever to use to look up answers for
However, the example code only allows specification for the question_gen_llm, not the answer_gen_llm.
After referencing the [code](https://github.com/hwchase17/langchain/blob/9c0cb90997db9eb2e2a736df458d39fd7bec8ffb/langchain/chains/flare/base.py) for FlareChain, it seems that the answer_gen_llm is initialized as `OpenAI(max_tokens=32, model_kwargs={"logprobs": 1}, temperature=0)`, which defaults to `"text-davinci-003"` since no model_name is specified. | Inconsistent documentation for langchain.chains.FlareChain | https://api.github.com/repos/langchain-ai/langchain/issues/5255/comments | 2 | 2023-05-25T16:15:17Z | 2023-09-10T16:12:14Z | https://github.com/langchain-ai/langchain/issues/5255 | 1,726,121,249 | 5,255 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi team,
I created Chain_A and Chain_B and set streaming=True for both of them:
```python
overall_chain = SequentialChain(
    chains=[chain_A, chain_B],
    input_variables=["era", "title"],
    output_variables=["synopsis", "review"],
    verbose=True,
)
```
However, the streaming does not work.
### Suggestion:
_No response_ | Issue: <Streaming mode not work for Sequential Chains> | https://api.github.com/repos/langchain-ai/langchain/issues/5254/comments | 2 | 2023-05-25T15:25:01Z | 2023-09-10T16:12:18Z | https://github.com/langchain-ai/langchain/issues/5254 | 1,726,041,996 | 5,254 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I am using LangChain + the OpenAI API to create a chatbot for private data. I can use LangChain's directory loader class to load files from a directory, but if any new files are added to that directory, how can they be loaded automatically?
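One possible direction, sketched with the `watchdog` package (the handler wiring and the `index` object are assumptions, not part of LangChain):

```python
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
from langchain.document_loaders import TextLoader

class NewFileHandler(FileSystemEventHandler):
    def on_created(self, event):
        if not event.is_directory:
            docs = TextLoader(event.src_path).load()  # load only the newly added file
            index.add_documents(docs)  # `index` is a hypothetical vectorstore/index

observer = Observer()
observer.schedule(NewFileHandler(), path="./docs", recursive=True)
observer.start()
```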
### Motivation
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/file_directory.html
### Your contribution
If this problem can be solved, it will be good for companies to use it for internal knowledge-base sharing. | how to monitoring the new files after directory loader class used | https://api.github.com/repos/langchain-ai/langchain/issues/5252/comments | 3 | 2023-05-25T14:33:02Z | 2023-09-14T16:09:01Z | https://github.com/langchain-ai/langchain/issues/5252 | 1,725,950,539 | 5,252 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello, would it be possible to add topics (tags) to the repositories? It would make them easier to find and organize afterward. It's also useful for external tools that fetch the GitHub API to track repos. Here is an example from HuggingFace:
.
<img width="538" alt="Capture d’écran 2023-05-25 à 15 58 11" src="https://github.com/hwchase17/langchain/assets/90518536/8a0029ad-6c44-426b-bc9d-2b01fcad46a7">
.
And here is a more specific screenshot in case I'm using the wrong words (sorry, not a native English speaker):
.
<img width="1440" alt="Capture d’écran 2023-05-25 à 16 03 40" src="https://github.com/hwchase17/langchain/assets/90518536/5aa4574d-1ae4-4bca-8ad5-044f3ce4a3cf">
### Suggestion:
I think you already know how: clicking the button on the repo page, then About > Topics, and adding tags like "python", "ai", "artificial intelligence", etc. Thank you! 😃 | Issue: Add topics to the GitHub repos | https://api.github.com/repos/langchain-ai/langchain/issues/5249/comments | 4 | 2023-05-25T14:05:59Z | 2023-12-09T16:06:41Z | https://github.com/langchain-ai/langchain/issues/5249 | 1,725,901,643 | 5,249 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
`!pip3 install langchain==0.0.179 boto3`
After installing langchain with the above command and trying to run the example from https://python.langchain.com/en/latest/modules/models/llms/integrations/sagemaker.html, I get the error below:
`ImportError: cannot import name 'LLMContentHandler' from 'langchain.llms.sagemaker_endpoint' (/opt/conda/lib/python3.10/site-packages/langchain/llms/sagemaker_endpoint.py)`
Am I missing something?
### Suggestion:
_No response_ | Issue: import 'LLMContentHandler' from 'langchain.llms.sagemaker_endpoint failing | https://api.github.com/repos/langchain-ai/langchain/issues/5245/comments | 3 | 2023-05-25T13:28:29Z | 2023-09-14T16:09:07Z | https://github.com/langchain-ai/langchain/issues/5245 | 1,725,833,687 | 5,245 |
[
"hwchase17",
"langchain"
]
| ### Feature request
For a deployment behind a corporate proxy, it's useful to be able to access the API by specifying an explicit proxy.
### Motivation
Currently it's possible to do this by setting the environment variables http_proxy / https_proxy, which proxies the whole Python interpreter. However, this then prevents access to other internal servers: requests to other network resources (e.g. a vector database on a different server, corporate S3 storage, etc.) should not go through the proxy. So it's important to be able to proxy only the requests to externally hosted APIs. We are working with the OpenAI API, and currently we cannot access both it and our qdrant database on another server.
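For context, a sketch of the knob the openai package already exposes (the proxy URL here is hypothetical):

```python
import openai

# Route only OpenAI API traffic through the corporate proxy,
# leaving the rest of the process's network access untouched.
openai.proxy = "http://proxy.internal.example:3128"
```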
### Your contribution
Since the openai python package supports the proxy parameter, this is relatively easy to implement for the OpenAI API. I'll submit a PR. | Add possibility to set a proxy for openai API access | https://api.github.com/repos/langchain-ai/langchain/issues/5243/comments | 0 | 2023-05-25T13:00:09Z | 2023-05-25T16:50:27Z | https://github.com/langchain-ai/langchain/issues/5243 | 1,725,784,636 | 5,243 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am using langchain version 0.0.176 and hitting the error `'numpy._DTypeMeta' object is not subscriptable` while using Chroma DB for any operation.
### Who can help?
@hwchase17 - please help me out with this error. Do I need to upgrade the version of LangChain to overcome this problem?
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code snippets producing this behavior:
```python
# 1
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))]).as_retriever()

# 2
docsearch = Chroma.from_texts(texts, embeddings)
query = "...."
docs = docsearch.similarity_search(query)

# 3
db1 = Chroma.from_documents(docs_1, embeddings)
```
Should be able to use ChromaDb as a retriever without hitting any error. | 'numpy._DTypeMeta' object is not subscriptable | https://api.github.com/repos/langchain-ai/langchain/issues/5242/comments | 2 | 2023-05-25T12:43:16Z | 2023-09-12T16:13:19Z | https://github.com/langchain-ai/langchain/issues/5242 | 1,725,751,778 | 5,242 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi everybody, does anyone know if there is a way to make a POST request using a custom agent/tool? The idea is that when the user needs a specific thing, the agent intercepts it and the custom tool performs it. I can't find anything useful in the documentation, and when I try it, it doesn't work.
In my case I have:
```python
import requests
from langchain.tools import BaseTool


class FlowTool(BaseTool):
    name = "Call To Max"
    description = "use the run function when the user ask to make a call to Max. You don't need any parameter"

    def _run(self):
        url = "https://ex.mex.com/web"
        data = {
            "prova": 'ciao'
        }
        response = requests.post(url, json=data, verify=False)
        return 'done'

    def _arun(self, radius: int):
        raise NotImplementedError("This tool does not support async")
```
```python
tools = [FlowTool()]

agent = initialize_agent(
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3,
    early_stopping_method='generate',
)
```
`agent("Can you make a call to mex?")`
Thank you for helping me
### Suggestion:
_No response_ | Issue: How to make a request into an agent/tool | https://api.github.com/repos/langchain-ai/langchain/issues/5241/comments | 1 | 2023-05-25T12:31:15Z | 2023-09-10T16:12:38Z | https://github.com/langchain-ai/langchain/issues/5241 | 1,725,733,180 | 5,241 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
LangChain 0.0.179, hosted elasticsearch (Platinum edition)
V0.0.179 introduced elasticsearch embeddings, great!
But it is only implemented for elastic cloud.
I want to be able to do embeddings on my own elastic cluster.
@jeffvestal @derickson
### Suggestion:
_No response_ | Issue: ElasticsearchEmbeddings does not work on hosted elasticsearch (Platinum) | https://api.github.com/repos/langchain-ai/langchain/issues/5239/comments | 5 | 2023-05-25T12:21:19Z | 2023-05-31T07:40:33Z | https://github.com/langchain-ai/langchain/issues/5239 | 1,725,718,432 | 5,239 |
[
"hwchase17",
"langchain"
]
| ### System Info
When I try to use ChatGPT plugins with agents as shown in the documentation, some plugins, like the MediumPluginGPT, reach the token limit during the task and give an error.

### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://colab.research.google.com/drive/1Pya_AaPucsgw__OJa0Xho1u8OI1xFqYB#scrollTo=Ri2RPTKrxF6b
### Expected behavior
Should return the ten most recent news items about AI | Token limit reached trying to use plugin | https://api.github.com/repos/langchain-ai/langchain/issues/5237/comments | 1 | 2023-05-25T11:17:24Z | 2023-09-10T16:12:44Z | https://github.com/langchain-ai/langchain/issues/5237 | 1,725,616,952 | 5,237 |
[
"hwchase17",
"langchain"
]
| ### System Info
I need to use OpenAPI to call an API, but that API needs some params in the body, and those values need to be taken from the user.
I need to understand how to define slot names that must be filled by the user. Is there any way to do this?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Need code
### Expected behavior
Slots filling from user | Slots Filling in Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5236/comments | 3 | 2023-05-25T10:50:38Z | 2023-09-17T13:10:59Z | https://github.com/langchain-ai/langchain/issues/5236 | 1,725,576,992 | 5,236 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Allow a user to specify a record ttl for messages/sessions persisted to dynamodb in https://github.com/hwchase17/langchain/blob/5cfa72a130f675c8da5963a11d416f553f692e72/langchain/memory/chat_message_histories/dynamodb.py#L17-L20.
### Motivation
This will allow automated purging of chat history after a specified time period.
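For context, a sketch of how a TTL can be attached to each item (the attribute name is hypothetical, and TTL must also be enabled on the table itself):

```python
import time

import boto3

table = boto3.resource("dynamodb").Table("SessionTable")  # table name hypothetical
ttl_seconds = 7 * 24 * 3600  # purge after one week

table.put_item(
    Item={
        "SessionId": "abc123",
        "History": [],  # serialized messages would go here
        "expireAt": int(time.time()) + ttl_seconds,  # DynamoDB TTL epoch attribute
    }
)
```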
### Your contribution
Maybe, depends on my available time. | Support for ttl in DynamoDBChatMessageHistory | https://api.github.com/repos/langchain-ai/langchain/issues/5235/comments | 2 | 2023-05-25T10:35:27Z | 2023-11-24T14:35:31Z | https://github.com/langchain-ai/langchain/issues/5235 | 1,725,555,032 | 5,235 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Other vector databases already support async in LangChain; adding Redis to that list would help programmers who use asynchronous programming in Python. I believe that with a package like aioredis, this should be easily achievable.
### Motivation
The motivation is to support Python async programmers with this feature and also to boost performance when querying the vector store and inserting data into it.
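A sketch of the desired call shape, using the `aadd_texts`/`asimilarity_search` names from LangChain's async vectorstore interface (their availability for Redis is exactly what is being requested here):

```python
import asyncio

async def main(store):
    await store.aadd_texts(["hello world"])
    docs = await store.asimilarity_search("hello", k=2)
    return docs

# docs = asyncio.run(main(redis_store))  # `redis_store` is a hypothetical Redis vectorstore
```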
### Your contribution
I can contribute by opening a PR or by testing the code once it is done. | Make Redis Vector database operations Asynchronous | https://api.github.com/repos/langchain-ai/langchain/issues/5234/comments | 3 | 2023-05-25T10:04:53Z | 2023-09-25T16:07:01Z | https://github.com/langchain-ai/langchain/issues/5234 | 1,725,509,252 | 5,234 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hello,
I'm trying to use ChatVertexAI and I noticed that the following import is not working:
```python
from langchain.chat_models import ChatVertexAI
```
But this one is working correctly:
```python
from langchain.chat_models.vertexai import ChatVertexAI
```
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. install the main branch: `pip install git+https://github.com/hwchase17/langchain.git`
2. try to import `from langchain.chat_models import ChatVertexAI`
3. try to import `from langchain.chat_models.vertexai import ChatVertexAI`
### Expected behavior
The import `from langchain.chat_models import ChatVertexAI` should work | ChatVertexAI is not imported | https://api.github.com/repos/langchain-ai/langchain/issues/5233/comments | 2 | 2023-05-25T08:46:26Z | 2023-06-02T11:55:03Z | https://github.com/langchain-ai/langchain/issues/5233 | 1,725,368,096 | 5,233 |
[
"hwchase17",
"langchain"
]
| ### System Info
code snippet:
https://python.langchain.com/en/latest/modules/callbacks/getting_started.html?highlight=callbacks#async-callbacks
python:Python 3.9.6
langchain :Version: 0.0.178
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
1. Copy the source code for Async Callbacks from the documentation.
The code is wrong, including a syntax error (an `await` outside an async function) and missing imports.
2. After making a little fix, run it again.
The fixed code is:
```python
import asyncio
import logging
from typing import Any, Dict, List

from langchain.callbacks.base import AsyncCallbackHandler, BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, LLMResult


class MyCustomSyncHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(f"Sync handler being called in a `thread_pool_executor`: token: {token}")


class MyCustomAsyncHandler(AsyncCallbackHandler):
    """Async callback handler that can be used to handle callbacks from langchain."""

    async def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        """Run when chain starts running."""
        print("zzzz....")
        await asyncio.sleep(0.3)
        class_name = serialized["name"]
        print("Hi! I just woke up. Your llm is starting")

    async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        """Run when chain ends running."""
        print("zzzz....")
        await asyncio.sleep(0.3)
        print("Hi! I just woke up. Your llm is ending")


# To enable streaming, we pass in `streaming=True` to the ChatModel constructor.
# Additionally, we pass in a list with our custom handlers.
async def main():
    chat = ChatOpenAI(
        openai_api_key="xxxxxx",
        max_tokens=25,
        streaming=True,
        callbacks=[MyCustomSyncHandler(), MyCustomAsyncHandler()],
    )
    await chat.agenerate([[HumanMessage(content="Tell me a joke")]])


asyncio.run(main())
```
3. It does not behave as the documentation shows; it gets stuck as follows:

### Expected behavior
It should behave as the documentation describes. | AsyncCallbacks : Wrong document and stuck when running in terminal ,which finnal turn out to be error retry | https://api.github.com/repos/langchain-ai/langchain/issues/5229/comments | 2 | 2023-05-25T07:02:21Z | 2023-10-02T16:07:46Z | https://github.com/langchain-ai/langchain/issues/5229 | 1,725,213,622 | 5,229 |
[
"hwchase17",
"langchain"
]
|
I want to build a langchain which can:
• chat with a human (greetings etc.)
• do what create_csv_agent does
• keep a memory of the conversation
So I was using a conversational agent for chat models with a memory buffer.
It can do items 1 and 3 of what I want. I also gave `tools = [PythonAstREPLTool(locals={"df": df})]` as the tools for this agent.
But I am confused about where I should provide the dataframe `df` to the chat model, similar to how we do for create_csv_agent.
I tried giving it in the prompt via `prompt.partial`, but I got an error saying the partial method is not implemented for ChatPromptTemplate.
I want the chat model to know that it has access to the `df` dataframe, so questions like "what are the top 2 issues" are answered using that dataframe.
Right now it answers: sure, I can provide the top 2 issues, but please provide information on what data you want me to work on.
Can you help with this? Please let me know if you need additional information.
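One direction to try, sketched here as an assumption rather than a verified fix: put a snapshot of the dataframe into the agent's prompt prefix so the model knows `df` exists (how the prefix is wired in depends on the agent class):

```python
import pandas as pd

df = pd.DataFrame({"issue": ["a", "b"], "count": [3, 1]})  # stand-in for your dataframe

prefix = f"""You are working with a pandas dataframe named `df`.
This is the result of `print(df.head())`:
{df.head()}
Use the python tool to answer questions about `df`."""
```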
### Suggestion:
_No response_ | Issue: Dataframe with conversation agent for chat models | https://api.github.com/repos/langchain-ai/langchain/issues/5227/comments | 2 | 2023-05-25T03:26:09Z | 2023-09-10T16:12:59Z | https://github.com/langchain-ai/langchain/issues/5227 | 1,725,018,973 | 5,227 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am trying to use the File System tools to generate some boilerplate source code using OpenAI's APIs. The chain works, but it does not write the file to the file system.
I think it is because there's an issue with the size of the text that needs to be written to the file, so the agent fails to execute.
My code is as follows
```python
import os

from langchain.tools.file_management import *
from langchain.agents.agent_toolkits import FileManagementToolkit
from langchain.llms.openai import OpenAI
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from dotenv import load_dotenv

load_dotenv()

toolkit = FileManagementToolkit()
write_files = toolkit.get_tools()[5]
list_files = toolkit.get_tools()[6]
read_files = toolkit.get_tools()[4]

llm = OpenAI(temperature=0)

tools = [
    Tool(
        name="Write Files to directory",
        func=write_files.run,
        description="useful for when you need to write files to a local file system",
    ),
    Tool(
        name="List Files in directory",
        func=list_files.run,
        description="useful for when you need to list files in a local file system",
    ),
    Tool(
        name="Read Files in directory",
        func=read_files.run,
        description="useful for when you need to read files in a local file system",
    ),
]

self_write_files_git = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

self_write_files_git.run("Generate a source code for a boilerplate Python Flask Application")
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce
1. My original source code is this
```python
import os

from langchain.tools.file_management import *
from langchain.agents.agent_toolkits import FileManagementToolkit
from langchain.llms.openai import OpenAI
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from dotenv import load_dotenv

load_dotenv()

toolkit = FileManagementToolkit()
write_files = toolkit.get_tools()[5]
list_files = toolkit.get_tools()[6]
read_files = toolkit.get_tools()[4]

llm = OpenAI(temperature=0)

tools = [
    Tool(
        name="Write Files to directory",
        func=write_files.run,
        description="useful for when you need to write files to a local file system",
    ),
    Tool(
        name="List Files in directory",
        func=list_files.run,
        description="useful for when you need to list files in a local file system",
    ),
    Tool(
        name="Read Files in directory",
        func=read_files.run,
        description="useful for when you need to read files in a local file system",
    ),
]

self_write_files_git = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

self_write_files_git.run("Generate a source code for a boilerplate Python Flask Application")
```
2. You can run it with a Jupyter Notebook etc.; you'll see the following:
The AgentExecutor creates the right action input, but the action_input seems cut off (the JSON is not properly formatted) and, as a result, likely doesn't write to the filesystem.
### Expected behavior
Based on the plan that shows up, I would assume that it would write the file to the file system. | Write File action_input issues. How to handle when action input is large | https://api.github.com/repos/langchain-ai/langchain/issues/5226/comments | 6 | 2023-05-25T01:38:22Z | 2023-10-24T16:08:23Z | https://github.com/langchain-ai/langchain/issues/5226 | 1,724,948,886 | 5,226 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
How can callback functions be used in a LangChain sequential chain such as 1 -> 2 -> 3? I want to loop the second chain n times in the middle, where the output of the second chain becomes its own input. At the end of the loop, the output of the second chain is the input to the third chain, which produces the final result.
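For reference, a sketch of one way to express this loop without callbacks, driving the chains manually (`chain_1`/`chain_2`/`chain_3`, `initial_input`, and `n` are hypothetical objects standing in for the user's):

```python
text = chain_1.run(initial_input)
for _ in range(n):
    text = chain_2.run(text)  # feed chain 2's output back in as its input
result = chain_3.run(text)   # the loop's final output feeds chain 3
```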
### Suggestion:
_No response_ | Issue:How to use callback functions in a Langchain sequential chain | https://api.github.com/repos/langchain-ai/langchain/issues/5225/comments | 6 | 2023-05-25T01:00:58Z | 2023-11-20T13:09:03Z | https://github.com/langchain-ai/langchain/issues/5225 | 1,724,924,866 | 5,225 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.176
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains import PALChain
from langchain.chains.loading import load_chain  # needed for the load step below
from langchain import OpenAI

llm = OpenAI(temperature=0, max_tokens=512)
pal_chain = PALChain.from_math_prompt(llm, verbose=True)
question = "Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?"
pal_chain.save("/Users/liang.zhang/pal_chain.yaml")
loaded_chain = load_chain("/Users/liang.zhang/pal_chain.yaml")
```
Error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [17], in <cell line: 1>()
----> 1 loaded_chain = load_chain("/Users/liang.zhang/pal_chain.yaml")
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:449, in load_chain(path, **kwargs)
447 return hub_result
448 else:
--> 449 return _load_chain_from_file(path, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:476, in _load_chain_from_file(file, **kwargs)
473 config["memory"] = kwargs.pop("memory")
475 # Load the chain from the config now.
--> 476 return load_chain_from_config(config, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:439, in load_chain_from_config(config, **kwargs)
436 raise ValueError(f"Loading {config_type} chain not supported")
438 chain_loader = type_to_loader_dict[config_type]
--> 439 return chain_loader(config, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:234, in _load_pal_chain(config, **kwargs)
232 if "llm" in config:
233 llm_config = config.pop("llm")
--> 234 llm = load_llm_from_config(llm_config)
235 elif "llm_path" in config:
236 llm = load_llm(config.pop("llm_path"))
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/llms/loading.py:14, in load_llm_from_config(config)
12 def load_llm_from_config(config: dict) -> BaseLLM:
13 """Load LLM from Config Dict."""
---> 14 if "_type" not in config:
15 raise ValueError("Must specify an LLM Type in config")
16 config_type = config.pop("_type")
TypeError: argument of type 'NoneType' is not iterable
```
### Expected behavior
No errors should occur. | PALChain loading fails | https://api.github.com/repos/langchain-ai/langchain/issues/5224/comments | 0 | 2023-05-25T00:58:09Z | 2023-05-29T13:44:48Z | https://github.com/langchain-ai/langchain/issues/5224 | 1,724,922,616 | 5,224 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.178
python3.11
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# Excerpt of a class method; the surrounding class is omitted in the original
from typing import List

from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

@staticmethod
def embeddings(texts: List[Document]):
    embeddings = OpenAIEmbeddings()
    vectordb = Chroma.from_documents(texts, embeddings, persist_directory="chroma_db", collection_name="aixplora")
    return vectordb
```
ends up in `openai.error.AuthenticationError: <empty message>`
more context here: https://github.com/grumpyp/aixplora/blob/main/backend/embeddings/index_files.py
This started happening just a few hours ago; it was working before, so it may be a regression introduced in a recent release.
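For what it's worth, passing the key explicitly instead of relying on the environment may help isolate the problem (a sketch, assuming the key is available in `OPENAI_API_KEY`):
```python
import os

from langchain.embeddings import OpenAIEmbeddings

# Explicit key instead of implicit env lookup, to rule out a lookup regression
embeddings = OpenAIEmbeddings(openai_api_key=os.environ["OPENAI_API_KEY"])
```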
### Expected behavior
No error! - just embedding into my chroma db :) | openai.error.AuthenticationError: <empty message> | https://api.github.com/repos/langchain-ai/langchain/issues/5215/comments | 4 | 2023-05-24T22:04:59Z | 2023-09-18T16:11:06Z | https://github.com/langchain-ai/langchain/issues/5215 | 1,724,798,523 | 5,215 |
[
"hwchase17",
"langchain"
]
| Using the following script, I can only get back a maximum of 4 documents. With k = 1, 2, 3, 4, 5, 6, ..., similarity_search_with_score returns 1, 2, 3, 4, 4, 4, ... docs.
```python
opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(
    docs,
    embedding=HuggingFaceEmbeddings(),
    opensearch_url=opensearch_url,
    index_name="my_index_name",
)
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
```
This only returns 4 documents even though len(docs) = 90+. I tried various indexes and various queries and confirmed the issue is persistent.
There is a [related issue](https://github.com/hwchase17/langchain/issues/1946) for Chroma (which also maxes out at 4 regardless of k).
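A possible workaround, assuming the default approximate-search query body caps `size` at 4 independently of `k`:
```python
# Passing `size` alongside `k` so the query body requests more than 4 hits
retrieved_docs = docsearch.similarity_search_with_score(query, k=10, size=10)
```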
| OpenSearch VectorStore cannot return more than 4 retrieved result. | https://api.github.com/repos/langchain-ai/langchain/issues/5212/comments | 2 | 2023-05-24T20:49:47Z | 2023-05-25T16:51:25Z | https://github.com/langchain-ai/langchain/issues/5212 | 1,724,714,949 | 5,212 |
[
"hwchase17",
"langchain"
]
| Hi, I believe this issue is related to this one: #1372
I'm using the GPT4All integration and get the following error after running `ConversationalRetrievalChain` with `AsyncCallbackManager`:
`ERROR:root:Async generation not implemented for this LLM.`
Changing to `CallbackManager` does not fix anything.
The issue is model-agnostic; I have tried _ggml-gpt4all-j-v1.3-groovy.bin_ and _ggml-mpt-7b-base.bin_. The LangChain version I'm using is `0.0.179`. Any ideas on how this could be solved, or should we just wait for a new release that fixes it?
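In the meantime, one possible workaround is to run the synchronous chain call in a thread executor so the event loop is not blocked (a sketch; `chain` and `inputs` stand in for the `ConversationalRetrievalChain` and its input dict):
```python
import asyncio

async def run_chain(chain, inputs):
    # Offload the blocking synchronous call to a worker thread
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, lambda: chain(inputs))
```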
### Suggestion:
Release a fix, similar to the one in #1372. | GPT4All chat error with async calls | https://api.github.com/repos/langchain-ai/langchain/issues/5210/comments | 27 | 2023-05-24T19:27:35Z | 2024-03-29T23:22:46Z | https://github.com/langchain-ai/langchain/issues/5210 | 1,724,609,382 | 5,210 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Enable chains for Chat Models.
I spent some time looking at the following docs:
https://python.langchain.com/en/latest/modules/chains/index_examples/summarize.html
https://docs.langchain.com/docs/components/chains/index_related_chains
as well as looking into the codebase, and it seems this works only for completion models, not chat models.
### Motivation
I would like to use GPT-4, which is only available via the Chat Completions endpoint. I am currently building the chain manually, but I see value in native support in LangChain for a cleaner codebase and easier access for others.
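For context, the manual version I have in mind looks roughly like this (a sketch; `chunks` is assumed to be a list of pre-split document texts):
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts.chat import ChatPromptTemplate

llm = ChatOpenAI(model_name="gpt-4", temperature=0)
summarize = LLMChain(llm=llm, prompt=ChatPromptTemplate.from_template("Write a concise summary of:\n\n{text}"))

summaries = [summarize.run(text=chunk) for chunk in chunks]  # "map" step
combined = summarize.run(text="\n".join(summaries))          # naive "reduce" step
```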
### Your contribution
I can help by submitting a PR, assuming I am not missing something obvious and this is not already supported. | Enable chains (MapReduce, Refine, ...) for Chat Models. | https://api.github.com/repos/langchain-ai/langchain/issues/5209/comments | 1 | 2023-05-24T19:09:24Z | 2023-05-24T20:38:29Z | https://github.com/langchain-ai/langchain/issues/5209 | 1,724,587,758 | 5,209 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.178
Python 3.11.2
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`Chroma.from_documents(texts, embeddings, persist_directory="chroma_db", collection_name="x")`
```python
# `collection` here is assumed to be the underlying chromadb collection object
query_texts = "tell me about whatsmyserp policy"
res = collection.query(
    query_texts=query_texts,
    n_results=n_results,
    where=where or None,
    where_document=where_document or None
)
```
printing res
`{'ids': [[]], 'embeddings': None, 'documents': [[]], 'metadatas': [[]], 'distances': [[0.3748544454574585]]}`
I believe it doesn't create the collection itself, or something related to that, because if I create the collection myself beforehand it seems to work.
More information and our discussion in the thread of the Chroma discord:
https://discord.com/channels/1073293645303795742/1110965198904369374
### Expected behavior
It should at least return the related documents,.. | Chroma integration .from_documents() isn't working | https://api.github.com/repos/langchain-ai/langchain/issues/5207/comments | 3 | 2023-05-24T18:34:51Z | 2023-09-18T16:11:11Z | https://github.com/langchain-ai/langchain/issues/5207 | 1,724,543,505 | 5,207 |
[
"hwchase17",
"langchain"
]
| ### System Info
Here is the link to the tutorial: https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html
You can see on the page that the results do not seem to correlate with the question. The first question about dinosaurs brings back two movies that have nothing to do with dinosaurs. The last question, asking for 2 movies about dinosaurs, brings back 3 movies, 2 of which have nothing to do with dinosaurs.
In fact I found I can type "What are some movies about cabbages?" and get back 3 random movie results.
This tutorial doesn't seem to work.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Follow the tutorial; in fact, just read the tutorial.
### Expected behavior
It should either bring back only movies matching your query, or if it brings back more than one it should give a score as to how confident it is. Right now it seems almost pointless. | I don't think Self-querying with Chroma is right | https://api.github.com/repos/langchain-ai/langchain/issues/5205/comments | 5 | 2023-05-24T18:13:30Z | 2023-09-11T16:57:00Z | https://github.com/langchain-ai/langchain/issues/5205 | 1,724,516,597 | 5,205 |
[
"hwchase17",
"langchain"
]
| Hi, I have a requirement to customize the format instructions for multiple languages.
Specifically, I need to make modifications to the output_parser.get_format_instructions() string. This function currently utilizes the following structured format instructions:
```python
STRUCTURED_FORMAT_INSTRUCTIONS = """The output should be a markdown code snippet formatted in the following schema, including the leading and trailing "\`\`\`json" and "\`\`\`":

\`\`\`json
{{
{format}
}}
\`\`\`"""
```
To modify this value, I have considered the following approach:
> langchain.output_parsers.structured.STRUCTURED_FORMAT_INSTRUCTIONS = """translation of the string below"""
However, please note that this approach is not thread-safe. If multiple users are simultaneously using my application with different languages, there is a risk of interference between their settings.
Could you please advise on the appropriate solution for this issue?
How can I do this? Am I missing something?
Thanks for your help
### Suggestion:
A solution could be to change get_format_instructions(self) -> str function by adding a string params def get_format_instructions(self, cust_str_format_instructions) -> str:
Another solution is to create a class inherited from StructuredOutputParser
```
from langchain.output_parsers import ResponseSchema, StructuredOutputParser
from langchain.output_parsers.structured import _get_sub_string
from langchain.output_parsers.format_instructions import STRUCTURED_FORMAT_INSTRUCTIONS
from typing import List, Any
from pydantic import Field
class CustomStructuredOutputParser(StructuredOutputParser):
language: str = Field(default=None)
cust_struct_format_instructions: str = Field(default=None)
def __init__(self, response_schemas: List[ResponseSchema], **data: Any):
super().__init__(response_schemas=response_schemas, **data)
if self.language == "fr_FR":
self.cust_struct_format_instructions = """La sortie doit être un extrait de code au format markdown, formaté selon le schéma suivant, en incluant le début et la fin "\`\`\`json" et "\`\`\`":
```json
{{
{format}
}}
```"""
@classmethod
def from_response_schemas(
cls, response_schemas: List[ResponseSchema], language: str = None, cust_struct_format_instructions: str = None
) -> 'CustomStructuredOutputParser':
return cls(response_schemas=response_schemas, language=language, cust_struct_format_instructions=cust_struct_format_instructions)
def get_format_instructions(self) -> str:
schema_str = "\n".join(
[_get_sub_string(schema) for schema in self.response_schemas]
)
if self.cust_struct_format_instructions:
return self.cust_struct_format_instructions.format(format=schema_str)
return STRUCTURED_FORMAT_INSTRUCTIONS.format(format=schema_str)
summary_response_schemas = [
ResponseSchema(name="resumé", description="Fournissez un résumé en une ou deux phrases."),
ResponseSchema(name="types_réponses", description="Fournissez un objet JSON contenant jusqu'à 4 types de réponses distincts en tant que clés, et une description pour chaque type de réponse en tant que valeurs."),
]
summary_output_parser = CustomStructuredOutputParser.from_response_schemas(summary_response_schemas, language='EN_US')
summary_output_parser.get_format_instructions()
```
The problem is that if you modify the library code, I have to maintain my subclass to match it. | Issue: Customizing 'structured_format_instructions' for Non-English Languages | https://api.github.com/repos/langchain-ai/langchain/issues/5203/comments | 7 | 2023-05-24T16:32:42Z | 2024-03-27T16:06:12Z | https://github.com/langchain-ai/langchain/issues/5203 | 1,724,375,986 | 5,203 |
[
"hwchase17",
"langchain"
]
| ### Wrong condition to raise ValueError in LLMChain.prep_prompts
In `LLMChain`, in the `prep_prompts` method, a `ValueError` may be raised on lines 112-114:
https://github.com/hwchase17/langchain/blob/fd866d1801793d22dca5cabe200df4f2b80fa7a4/langchain/chains/llm.py#L100-L114
The issue is that the condition that raises this `ValueError` does not accurately capture the `ValueError`'s message.
Suppose `"stop" in input_list[0]`, but `"stop"` is not a key in any of the remaining inputs in `input_list`. Then the condition
```python
"stop" in inputs and inputs["stop"] != stop
```
is false for all `inputs` in `input_list`. For `input_list[0]`, it is false by definition of `stop` (`stop` is `input_list[0]["stop"]`), and for any other `inputs` in `input_list` it is false in this hypothetical scenario because `stop` is not a key in `inputs`.
Thus, in this scenario, the ValueError will not be raised, even though it should be.
### Suggestion:
The condition on line 111 can be changed to
```python
inputs.get("stop") != stop
```
to accurately produce the desired behavior: this condition is true whenever an input's "stop" value (or its absence, i.e. `None`) disagrees with the `stop` taken from `input_list[0]`, so it also catches the case where "stop" is missing from a later input. | Issue: Wrong condition to raise ValueError in LLMChain.prep_prompts | https://api.github.com/repos/langchain-ai/langchain/issues/5202/comments | 1 | 2023-05-24T16:13:55Z | 2023-09-10T16:13:04Z | https://github.com/langchain-ai/langchain/issues/5202 | 1,724,346,533 | 5,202 |
[
"hwchase17",
"langchain"
]
| I'm using FAISS in memory and I need to obtain the vector of embeddings.
```python
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1500, chunk_overlap=100, separators=["\n\n", "\n", "\t", '. '])
documents = text_splitter.split_documents(docs)
total_chunks = len(documents)
for i, d in enumerate(documents):
    d.metadata['paragraph'] = f'Paragraph: {i+1} of {total_chunks}'
emb = OpenAIEmbeddings(chunk_size=1)
vs = FAISS.from_documents(documents=documents, embedding=emb)
```
I need to obtain the raw embedding vectors produced by OpenAIEmbeddings.
I'm trying to fit a KMeans model on the stored vectors to cluster the data for searching.
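A minimal sketch of what I'm after, assuming the LangChain FAISS wrapper exposes the raw index as `vs.index` and the default flat index supports `reconstruct_n`:
```python
import numpy as np
from sklearn.cluster import KMeans

# Pull every stored vector back out of the flat FAISS index
vectors = vs.index.reconstruct_n(0, vs.index.ntotal)  # shape: (n_docs, dim)
kmeans = KMeans(n_clusters=5, n_init=10).fit(np.asarray(vectors))
```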
| Embeddings vectors from FAISS objects | https://api.github.com/repos/langchain-ai/langchain/issues/5199/comments | 3 | 2023-05-24T14:30:23Z | 2023-09-18T16:11:16Z | https://github.com/langchain-ai/langchain/issues/5199 | 1,724,140,220 | 5,199 |
[
"hwchase17",
"langchain"
]
| ### System Info
For example:
```python
import openai

openai.api_key = openai_api_key
```
The problem: `openai.api_key` is a module-level global, so it is not safe under concurrency. If different API keys need to be used concurrently, the effective `api_key` becomes unpredictable.
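A sketch of the race (the key values are illustrative):
```python
import threading

import openai

def use_key(key: str) -> None:
    # Both threads mutate the same module-level global before calling the
    # API, so either request may be sent with the other thread's key.
    openai.api_key = key
    # ... openai.Embedding.create(...) would run here ...

threading.Thread(target=use_key, args=("key-for-tenant-a",)).start()
threading.Thread(target=use_key, args=("key-for-tenant-b",)).start()
```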
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use a local variable (or pass the key per call) instead of mutating the module-level global.
### Expected behavior
To be fixed. | openai.api_key in OpenAIEmbeddings is unsafe on concurrency | https://api.github.com/repos/langchain-ai/langchain/issues/5195/comments | 3 | 2023-05-24T12:45:23Z | 2023-09-18T16:11:21Z | https://github.com/langchain-ai/langchain/issues/5195 | 1,723,907,172 | 5,195 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports simply passing a PAT (as token=<PAT>) to authenticate as a user; unfortunately, the LangChain abstraction doesn't.
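For reference, the underlying client already accepts a PAT directly (a sketch; the URL is a placeholder):
```python
from atlassian import Confluence

# Bearer-token (PAT) authentication supported by atlassian-python-api
confluence = Confluence(url="https://yoursite.example.com/wiki", token="<PAT>")
```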
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it for authentication as an alternative to api_key/password/oauth. | Support personal access token (PAT) in ConfluenceLoader | https://api.github.com/repos/langchain-ai/langchain/issues/5191/comments | 3 | 2023-05-24T11:15:54Z | 2023-06-03T21:57:51Z | https://github.com/langchain-ai/langchain/issues/5191 | 1,723,748,960 | 5,191 |
[
"hwchase17",
"langchain"
]
| ### System Info
**MacOS Ventura Version 13.3.1
LangChain==0.0.178**
**When querying the database, the answer does not show in the terminal. Instead, the AI response is always: "Is there anything else I can help you with?". The response seems fine when not using the SQL tool.**
```python
from langchain.agents import Tool, initialize_agent, load_tools
from langchain.chains import SQLDatabaseChain
from langchain.memory import ConversationBufferMemory

# `llm` and `sql_database` are assumed to be initialized earlier in the script
db_chain = SQLDatabaseChain.from_llm(llm=llm, db=sql_database, verbose=True)
sql_tool = Tool(
    name='Student DB',
    func=db_chain.run,
    description="Useful for when you need to answer questions regarding the students, their information, attendance, and anything regarding the database. "
)
tools = load_tools(
    ["llm-math"],
    llm=llm
)
tools.append(sql_tool)
memory = ConversationBufferMemory(memory_key="chat_history")
conversational_agent = initialize_agent(
    agent='conversational-react-description',
    tools=tools,
    llm=llm,
    verbose=True,
    memory=memory
)
conversational_agent.run(input="Who were sick last May 22, 2023?")
```
Here is the output in the terminal:
<img width="1480" alt="Screenshot 2023-05-24 at 17 50 14" src="https://github.com/hwchase17/langchain/assets/108784595/47a86e82-f7d3-4a46-9a8f-5a9ec1d1d17a">
I always get the generic AI response instead of the answer shown in 'Observation'.
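A possible workaround sketch: marking the tool with `return_direct=True`, so the agent hands the observation back as the final answer instead of rephrasing it:
```python
sql_tool = Tool(
    name='Student DB',
    func=db_chain.run,
    description="Useful for when you need to answer questions regarding the students, their information, attendance, and anything regarding the database. ",
    return_direct=True,  # return the tool's output directly as the final answer
)
```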
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behavior:
1. create sqldatabasechain
2. create a custom tool for SQL querying
3. use agent type conversational-react-description
4. output answer
### Expected behavior
I would like the output of the chain, when using the SQL tool, to be the query result rather than the generic "Is there anything else I can help you with?" response. | Answer from the SQLDatabaseChain does not output when using the Agent 'conversational-react-description' | https://api.github.com/repos/langchain-ai/langchain/issues/5188/comments | 3 | 2023-05-24T09:59:16Z | 2023-10-24T05:51:11Z | https://github.com/langchain-ai/langchain/issues/5188 | 1,723,618,860 | 5,188 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Langchain == 0.0.178
llama-cpp-python == 0.1.54
LLM def:
```python
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(model_path=f"{self.llm_path}/{self.selection}.bin", n_gpu_layers=25, n_ctx=1024, n_threads=8, callback_manager=callback_manager, verbose=True)
```
Loaded model info:
> llama.cpp: loading model from models/gpt4all-13B/gpt4all-13B.bin
> llama_model_load_internal: format = ggjt v3 (latest)
> llama_model_load_internal: n_vocab = 32000
> llama_model_load_internal: n_ctx = 1024
> llama_model_load_internal: n_embd = 5120
> llama_model_load_internal: n_mult = 256
> llama_model_load_internal: n_head = 40
> llama_model_load_internal: n_layer = 40
> llama_model_load_internal: n_rot = 128
> llama_model_load_internal: ftype = 9 (mostly Q5_1)
> llama_model_load_internal: n_ff = 13824
> llama_model_load_internal: n_parts = 1
> llama_model_load_internal: model size = 13B
> llama_model_load_internal: ggml ctx size = 0.09 MB
> llama_model_load_internal: mem required = 11359.05 MB (+ 1608.00 MB per state)
> .
> llama_init_from_file: kv self size = 800.00 MB
> AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
I do not see any info regarding the number of layers offloaded to the GPU. Note that the log shows `BLAS = 0`, which suggests this llama-cpp-python build has no GPU (cuBLAS/Metal) support; if so, `n_gpu_layers` is silently ignored and the package would need to be reinstalled with GPU support enabled (e.g. with `CMAKE_ARGS="-DLLAMA_CUBLAS=on"`).
### Code Repo:
[GitHub](https://github.com/allthatido/FileGPT) | Issue: LlamaCPP still uses cpu after passing the n_gpu_layer param | https://api.github.com/repos/langchain-ai/langchain/issues/5187/comments | 5 | 2023-05-24T09:34:07Z | 2023-08-17T10:12:23Z | https://github.com/langchain-ai/langchain/issues/5187 | 1,723,569,792 | 5,187 |
[
"hwchase17",
"langchain"
]
| ### System Info
Latest, macOS ventura, 3.8/11
There seems to be a bug when resolving the server URL in create_openapi_agent: only the first character of the server string is returned. Please see the attached screenshot.
<img width="616" alt="image" src="https://github.com/hwchase17/langchain/assets/10047986/11b95d67-e05b-406e-a1f1-c35ef3da6abe">
### Who can help?
@vowelparrot
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a new openapi_spec
2. Edit the server field
3. Create new openapi agent --> planner.create_openapi_agent(new_spec, requests_wrapper, llm)
4. I am also using gpt-3.5-turbo model (although it doesn't matter)
5. You could also just print out `spec.servers[0]["url"]`
### Expected behavior
The openapi agent should be initialized with the url as `base_url`
Removing `[0]["url"]` from `spec.servers[0]["url"]` should fix it | Bug in openapi agent planner | https://api.github.com/repos/langchain-ai/langchain/issues/5186/comments | 4 | 2023-05-24T09:29:23Z | 2023-09-18T16:11:26Z | https://github.com/langchain-ai/langchain/issues/5186 | 1,723,561,148 | 5,186 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.166
platform linux
python 3.8
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [x] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I want the conversation to reply to me turn by turn instead of predicting the whole dialogue and returning it all at once. Currently the model output looks like this:
```shell
'Alright, I will create a zone with the name “CC” and the type “kitchen.” Please confirm if the information is correct.\nuser: yes\nassistant: {"command":{"action_key":"create_zone","action_model":{"zone_name":"CC"}}}'
```
Below is my code and the full log:
````python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.prompts.prompt import PromptTemplate
openai_api_key = 'xxxxxxxxxxxxx'
llm = OpenAI(
model_name="gpt-3.5-turbo",
temperature=0,
max_tokens=2000,
openai_api_key=openai_api_key
)
template = """You are an AI assistant to create a zone. The necessary information for creating a zone includes the zone name and zone type. If the necessary information is not included in what the user said, you can ask the user to say the necessary information by asking or guiding. If you have all the necessary information, please send the necessary information to the user for confirmation. After the user confirms, use the following json template to generate json and output it. Please only output json, and do not output anything other than json.
```json
command: {{ // command which will be excute
action_key: string // The id key of the command, represents the user's intent
action_model: {{ // the detail content of command
zone_name: string // a zone name; zone name should be the word 'zone' append with a int; e.g., zone26.
}}
}}
```
user: I want to create a zone
assistant: Please provide me with the necessary information for creating the zone, including the zone name and zone type.
user: zone name is JJ and zone type is bedroom
assistant: Alright, I will create a zone with the name “JJ” and the type “bedroom.” Please confirm if the information is correct.
user: Yes
assistant: {{"command":{{"action_key":"create_zone","action_model":{{"zone_name":"JJ"}}}}}}
user: please help me to create a zone, its name is QQ and its type is dinning room
assistant: Alright, I will create a zone with the name “QQ” and the type “dinning room.” Please confirm if the information is correct.
user: correct
assistant: {{"command":{{"action_key":"create_zone","action_model":{{"zone_name":"QQ"}}}}}}
{history}
user: {input}
assistant:"""
PROMPT = PromptTemplate(
input_variables=["history", "input"], template=template
)
conversation = ConversationChain(
prompt=PROMPT,
llm=llm,
verbose=True
)
conversation.predict(input="please help me to create a zone")
````
````shell
> Entering new ConversationChain chain...
Prompt after formatting:
You are an AI assistant to create a zone. The necessary information for creating a zone includes the zone name and zone type. If the necessary information is not included in what the user said, you can ask the user to say the necessary information by asking or guiding. If you have all the necessary information, please send the necessary information to the user for confirmation. After the user confirms, use the following json template to generate json and output it. Please only output json, and do not output anything other than json.
```json
command: { // command which will be excute
action_key: string // The id key of the command, represents the user's intent
action_model: { // the detail content of command
zone_name: string // a zone name; zone name should be the word 'zone' append with a int; e.g., zone26.
}
}
```
user: I want to create a zone
assistant: Please provide me with the necessary information for creating the zone, including the zone name and zone type.
user: zone name is JJ and zone type is bedroom
assistant: Alright, I will create a zone with the name “JJ” and the type “bedroom.” Please confirm if the information is correct.
user: Yes
assistant: {"command":{"action_key":"create_zone","action_model":{"zone_name":"JJ"}}}
user: please help me to create a zone, its name is QQ and its type is dinning room
assistant: Alright, I will create a zone with the name “QQ” and the type “dinning room.” Please confirm if the information is correct.
user: correct
assistant: {"command":{"action_key":"create_zone","action_model":{"zone_name":"QQ"}}}
user: a zone called CC should be created
assistant: Please provide the zone type for the zone called “CC”.
user: zone type is kitchen
assistant: Alright, I will create a zone with the name “CC” and the type “kitchen.” Please confirm if the information is correct.
user: correct
assistant: {"command":{"action_key":"create_zone","action_model":{"zone_name":"CC"}}}
user: please help me to create a zone
assistant:
> Finished chain.
'Please provide me with the necessary information for creating the zone, including the zone name and zone type.'
````
```python
conversation.predict(input="zone name is CC, zone type is kitchen")
```
````shell
> Entering new ConversationChain chain...
Prompt after formatting:
You are an AI assistant to create a zone. The necessary information for creating a zone includes the zone name and zone type. If the necessary information is not included in what the user said, you can ask the user to say the necessary information by asking or guiding. If you have all the necessary information, please send the necessary information to the user for confirmation. After the user confirms, use the following json template to generate json and output it. Please only output json, and do not output anything other than json.
```json
command: { // command which will be excute
action_key: string // The id key of the command, represents the user's intent
action_model: { // the detail content of command
zone_name: string // a zone name; zone name should be the word 'zone' append with a int; e.g., zone26.
}
}
```
user: I want to create a zone
assistant: Please provide me with the necessary information for creating the zone, including the zone name and zone type.
user: zone name is JJ and zone type is bedroom
assistant: Alright, I will create a zone with the name “JJ” and the type “bedroom.” Please confirm if the information is correct.
user: Yes
assistant: {"command":{"action_key":"create_zone","action_model":{"zone_name":"JJ"}}}
user: please help me to create a zone, its name is QQ and its type is dinning room
assistant: Alright, I will create a zone with the name “QQ” and the type “dinning room.” Please confirm if the information is correct.
user: correct
assistant: {"command":{"action_key":"create_zone","action_model":{"zone_name":"QQ"}}}
user: a zone called CC should be created
assistant: Please provide the zone type for the zone called “CC”.
user: zone type is kitchen
assistant: Alright, I will create a zone with the name “CC” and the type “kitchen.” Please confirm if the information is correct.
user: correct
assistant: {"command":{"action_key":"create_zone","action_model":{"zone_name":"CC"}}}
Human: please help me to create a zone
AI: Please provide me with the necessary information for creating the zone, including the zone name and zone type.
user: zone name is CC, zone type is kitchen
assistant:
> Finished chain.
'Alright, I will create a zone with the name “CC” and the type “kitchen.” Please confirm if the information is correct.\nuser: yes\nassistant: {"command":{"action_key":"create_zone","action_model":{"zone_name":"CC"}}}'
````
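One thing that may help is a stop sequence, so the model halts before inventing the next user turn (a sketch; this assumes `model_kwargs` forwards `stop` to the underlying OpenAI call):
```python
llm = OpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0,
    max_tokens=2000,
    openai_api_key=openai_api_key,
    model_kwargs={"stop": ["\nuser:"]},  # halt before a hallucinated user turn
)
```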
### Expected behavior
I also put the same prompt into the ChatGPT console, and there it works properly; both use the GPT-3.5-turbo model. The console consistently behaves like this:
```shell
Input: I would like to create a zone
Output: Please provide me with the necessary information for creating the zone, including the zone name and zone type.
Input: zone name is FF
Output: Please provide the zone type for the zone called “FF”.
Input: living room
Output: Alright, I will create a zone with the name “FF” and the type “living room.” Please confirm if the information is correct.
Input: correct
Output: {“command”:{“action_key”:“create_zone”,“action_model”:{“zone_name”:“FF”}}}
``` | why not langchain conversation reply to user step by step? | https://api.github.com/repos/langchain-ai/langchain/issues/5183/comments | 2 | 2023-05-24T09:02:08Z | 2023-10-15T16:07:03Z | https://github.com/langchain-ai/langchain/issues/5183 | 1,723,509,200 | 5,183 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am writing to seek assistance regarding the functionality of the LangChain API. I have successfully implemented the "create" and "askquestion" API endpoints, which are working as expected. However, I am facing challenges with the "update," "delete," and "view" functionalities based on the requirements I previously mentioned.
To provide some context, I am using LangChain together with a vector DB (FAISS) for local storage, Python Flask for API development, and OpenAI for chat completion and embeddings. My detailed requirements are as described in my previous communication.
Specifically, I am encountering difficulties with the following functionalities:
Update: I am unable to update a particular title or content based on the provided ID. The update should include modifying the content as well as updating the associated embeddings.
Delete: I need assistance in implementing the deletion of a specific entry, including both the title and content, along with its corresponding embeddings (see the sketch after this list).
View: I am unable to retrieve and display the ID, title, and content in the response.
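For the update and delete cases above, one simple (if inefficient) strategy is to keep the raw documents, edit or drop the matching entry, and rebuild the index. A sketch; `all_docs` and the `id` metadata field are assumptions about how the application tracks its records:
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

def delete_entry(all_docs, target_id):
    # Drop the matching record, then re-embed and re-index everything else
    remaining = [d for d in all_docs if d.metadata["id"] != target_id]
    return FAISS.from_documents(remaining, OpenAIEmbeddings())
```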
I would greatly appreciate it if you could provide me with guidance or code examples on how to address these challenges. It would be immensely helpful if you could provide a detailed explanation or step-by-step instructions to implement the desired functionalities correctly.
Please let me know if any additional information or code snippets are required from my end to better assist you in understanding the issue. I look forward to your prompt response and guidance.
Thank you for your attention to this matter.
### Suggestion:
_No response_ | Need Support: <Need Assistance with Update, Delete, and View Functions in LangChain API prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/5180/comments | 1 | 2023-05-24T08:34:18Z | 2023-09-10T16:13:14Z | https://github.com/langchain-ai/langchain/issues/5180 | 1,723,462,516 | 5,180 |
[
"hwchase17",
"langchain"
]
| ### Discussed in https://github.com/hwchase17/langchain/discussions/5159
Originally posted by **axiangcoding**, May 24, 2023.
Code example:
```python
import os

from langchain.callbacks.base import BaseCallbackHandler
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import AzureChatOpenAI
from langchain.docstore.document import Document
from langchain.text_splitter import NLTKTextSplitter

async def summary(callback: BaseCallbackHandler):
    llm = AzureChatOpenAI(
        deployment_name=os.environ["OPENAI_GPT35_DEPLOYMENT_NAME"],
    )
    text_splitter = NLTKTextSplitter(chunk_size=1000)
    texts = text_splitter.split_text(content)  # `content` is defined elsewhere
    docs = [Document(page_content=t) for t in texts]
    chain = load_summarize_chain(llm, chain_type="map_reduce", verbose=False)
    await chain.arun(docs, callbacks=[callback])
```
and callback defined here:
```python
from typing import Any, Dict, Optional
from uuid import UUID

from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import LLMResult
from loguru import logger  # loguru, judging by the log format below

class SummaryCallback(BaseCallbackHandler):
    def on_chain_end(self, outputs: Dict[str, Any], *, run_id: UUID,
                     parent_run_id: Optional[UUID] = None, **kwargs: Any) -> Any:
        logger.info(f"on_chain_end: {outputs}, {run_id}, {parent_run_id}, {kwargs}")

    def on_tool_end(self, output: str, *, run_id: UUID,
                    parent_run_id: Optional[UUID] = None, **kwargs: Any) -> Any:
        logger.info(f"on_tool_end: {output}, {run_id}, {parent_run_id}, {kwargs}")

    def on_llm_end(self, response: LLMResult, *, run_id: UUID,
                   parent_run_id: Optional[UUID] = None, **kwargs: Any) -> Any:
        logger.info(f"on_llm_end: {response}, {run_id}, {parent_run_id}, {kwargs}")
```
When I test it, the console shows:
```
2023-05-24 08:42:46.143 | INFO | routers.v1.skill:on_llm_end:56 - on_llm_end: generations=[[ChatGeneration(text='There is no text provided, so there is no main idea to summarize.', generation_info=None, message=AIMessage(content='There is no text provided, so there is no main idea to summarize.', additional_kwargs={}, example=False))]] llm_output={'token_usage': {'prompt_tokens': 27, 'completion_tokens': 15, 'total_tokens': 42}, 'model_name': 'gpt-3.5-turbo'}, b9cb89c9-3e89-4335-93e9-8ac8104f9de1, 08558b5a-399c-4ff8-b64a-5856439df7e0, {}
2023-05-24 08:42:46.144 | INFO | routers.v1.skill:on_chain_end:47 - on_chain_end: {'outputs': [{'text': 'There is no text provided, so there is no main idea to summarize.'}]}, 08558b5a-399c-4ff8-b64a-5856439df7e0, 4a9fe8e7-dfd9-4c7c-a610-513da156071f, {}
2023-05-24 08:42:48.537 | INFO | routers.v1.skill:on_llm_end:56 - on_llm_end: generations=[[ChatGeneration(text='As an AI language model, I am unable to provide a summary of the text below as no text has been provided.', generation_info=None, message=AIMessage(content='As an AI language model, I am unable to provide a summary of the text below as no text has been provided.', additional_kwargs={}, example=False))]] llm_output={'token_usage': {'prompt_tokens': 39, 'completion_tokens': 24, 'total_tokens': 63}, 'model_name': 'gpt-3.5-turbo'}, 3471ac9f-2290-494e-a939-406bc7b5b8a1, bfe3f758-1275-4662-a553-5e4889aa3958, {}
2023-05-24 08:42:48.539 | INFO | routers.v1.skill:on_chain_end:47 - on_chain_end: {'text': 'As an AI language model, I am unable to provide a summary of the text below as no text has been provided.'}, bfe3f758-1275-4662-a553-5e4889aa3958, 12bc5030-dced-4243-a841-be44fa411d03, {}
2023-05-24 08:42:48.539 | INFO | routers.v1.skill:on_chain_end:47 - on_chain_end: {'output_text': 'As an AI language model, I am unable to provide a summary of the text below as no text has been provided.'}, 12bc5030-dced-4243-a841-be44fa411d03, 4a9fe8e7-dfd9-4c7c-a610-513da156071f, {}
2023-05-24 08:42:48.539 | INFO | routers.v1.skill:on_chain_end:47 - on_chain_end: {'output_text': 'As an AI language model, I am unable to provide a summary of the text below as no text has been provided.'}, 4a9fe8e7-dfd9-4c7c-a610-513da156071f, None, {}
```
`on_chain_end` and `on_llm_end` are printed several times; which one is the final output?
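From the run ids above, the outermost chain appears to be the call whose `parent_run_id` is `None`, so filtering on that should isolate the final output (a sketch):
```python
def on_chain_end(self, outputs, *, run_id, parent_run_id=None, **kwargs):
    # The top-level chain has no parent run, so this fires exactly once, at the end
    if parent_run_id is None:
        logger.info(f"final output: {outputs}")
```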
| How to get the final output from the load_summarize_chain async run? | https://api.github.com/repos/langchain-ai/langchain/issues/5176/comments | 2 | 2023-05-24T07:40:45Z | 2023-09-15T22:13:01Z | https://github.com/langchain-ai/langchain/issues/5176 | 1,723,374,761 | 5,176 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add support for more code text splitters
### Motivation
For now, LangChain only supports a Python code splitter.
### Your contribution
Add more code text splitters | Add support for more code text splitters | https://api.github.com/repos/langchain-ai/langchain/issues/5170/comments | 0 | 2023-05-24T05:35:01Z | 2023-05-24T18:39:52Z | https://github.com/langchain-ai/langchain/issues/5170 | 1,723,203,164 | 5,170 |
[
"hwchase17",
"langchain"
]
| Hi,
I've seen applications that are able to give fast responses using LangChain & OpenAI (chat with your own data).
However, in my case responses to simple questions take a long time. I've been playing around with the settings, but I am wondering if there is anything else I can do to increase speed.
Current settings:
- Chunk Size: 700
- Chunk Overlap: 100
- Max tokens: 150
- Streaming enabled
What am I missing?
thanks!
| Tips for speeding up OpenAI API answers? | https://api.github.com/repos/langchain-ai/langchain/issues/5169/comments | 6 | 2023-05-24T05:20:27Z | 2023-09-18T16:11:31Z | https://github.com/langchain-ai/langchain/issues/5169 | 1,723,192,424 | 5,169 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version: 0.0.170
python: 3.8
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I came across an issue related to the output of the **router chain**.
When I ran the "router chain" tutorial on the [langchain website](https://python.langchain.com/en/stable/modules/chains/generic/router.html), the input query was "What is black body radiation?" and the LLM output was:
```
'{
"destination": "physics",
"next_inputs": "What is black body radiation?"
}'
```
When I used the **RouterOutputParser** class to parse the output, I got this error:
> {OutputParserException}Got invalid return object. Expected markdown code snippet with JSON object, but got:
> {
> "destination": "physics",
> "next_inputs": "What is black body radiation?"
> }
When I debugged step by step, I found the error is raised in this function: **parse_json_markdown**
```python
def parse_json_markdown(text: str, expected_keys: List[str]) -> Any:
if "```json" not in text:
raise OutputParserException(
f"Got invalid return object. Expected markdown code snippet with JSON "
f"object, but got:\n{text}"
)
json_string = text.split("```json")[1].strip().strip("```").strip()
try:
json_obj = json.loads(json_string)
except json.JSONDecodeError as e:
raise OutputParserException(f"Got invalid JSON object. Error: {e}")
for key in expected_keys:
if key not in json_obj:
raise OutputParserException(
f"Got invalid return object. Expected key `{key}` "
f"to be present, but got {json_obj}"
)
return json_obj
```
You can see there is no "```json" marker in the LLM output, so execution enters the `if` branch at the top of this function and raises the exception.
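A tolerant parser sketch that accepts both raw JSON and the fenced form (written against the two output shapes shown above):
```python
import json

def parse_json_maybe_markdown(text: str) -> dict:
    # Strip the markdown fence only when the model actually emitted one
    if "```json" in text:
        text = text.split("```json")[1].split("```")[0]
    return json.loads(text.strip())
```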
### Expected behavior
Can anyone give me some solutions? thanks. | Invalid Output Parser Format for "Router Chain" | https://api.github.com/repos/langchain-ai/langchain/issues/5163/comments | 22 | 2023-05-24T03:40:35Z | 2023-12-20T02:12:18Z | https://github.com/langchain-ai/langchain/issues/5163 | 1,723,124,127 | 5,163 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.176
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I run the following code snippet:
```python
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.llms import OpenAI
from langchain.schema import Document
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
json_strings = [
'{"page_content": "I love MLflow.", "metadata": {"source": "/path/to/mlflow.txt"}}',
'{"page_content": "I love langchain.", "metadata": {"source": "/path/to/langchain.txt"}}',
'{"page_content": "I love AI.", "metadata": {"source": "/path/to/ai.txt"}}',
]
input_docs = [Document.parse_raw(j) for j in json_strings]
query = "What do I like?"
chain.run(input_documents=input_docs, question=query)
# This gives me a reasonable answer:
# ' I like MLflow, langchain, and AI.\nSOURCES: /path/to/mlflow.txt, /path/to/langchain.txt, /path/to/ai.txt'
chain.input_keys
```
Output:
```
['input_documents']
```
### Expected behavior
Output:
```
['input_documents', 'question']
```
This expectation comes from running the chain as `chain.run(input_documents=input_docs, question=query)`, which passes both inputs.
If the expected behavior is `['input_documents']`, could you elaborate the reason? Thanks! | StuffDocumentsChain input_keys does not contain "question" | https://api.github.com/repos/langchain-ai/langchain/issues/5160/comments | 0 | 2023-05-24T02:36:47Z | 2023-08-11T23:25:14Z | https://github.com/langchain-ai/langchain/issues/5160 | 1,723,080,995 | 5,160 |
[
"hwchase17",
"langchain"
]
| ### Feature request
There are some tables out there with comments.
It would be nice if the sql agent could read them.
### Motivation
Sometimes column names do not describe well what they contain.
If the SQL agent can consider the table and column comments, it will be able to respond to queries more accurately.
### Your contribution
Maybe something like this could work for the table comments:
```diff
--- sql_database.py.orig 2023-05-23 20:34:09.877909913 -0400
+++ sql_database.py 2023-05-23 20:34:13.857925528 -0400
@@ -268,11 +268,14 @@
# add create table command
create_table = str(CreateTable(table).compile(self._engine))
table_info = f"{create_table.rstrip()}"
+ table_comment = table.comment
has_extra_info = (
- self._indexes_in_table_info or self._sample_rows_in_table_info
+ self._indexes_in_table_info or self._sample_rows_in_table_info or table_comment
)
if has_extra_info:
table_info += "\n\n/*"
+ if table_comment:
+ table_info += f"\nTable comment: {table_comment}\n"
if self._indexes_in_table_info:
table_info += f"\n{self._get_table_indexes(table)}\n"
if self._sample_rows_in_table_info:
```
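A possible companion change for column comments, sketched against the same method (`table` is the SQLAlchemy `Table` object used above):
```python
# Collect per-column comments, if any, for the extra-info block
column_comments = [
    f"Column '{col.name}' comment: {col.comment}"
    for col in table.columns
    if col.comment
]
if column_comments:
    table_info += "\n" + "\n".join(column_comments) + "\n"
```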
| It would be nice to make the SQL helper consider the table and column comments | https://api.github.com/repos/langchain-ai/langchain/issues/5158/comments | 11 | 2023-05-24T00:40:35Z | 2024-03-27T16:06:07Z | https://github.com/langchain-ai/langchain/issues/5158 | 1,722,985,477 | 5,158 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.178
python==3.10.11
os=win
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Following this example with LangChain, using AzureOpenAI as the LLM instead of OpenAI: https://github.com/emptycrown/llama-hub/tree/main
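For reference, "Resource not found" from Azure often means the deployment, base URL, or API version is not reaching the client. A configuration sketch (all values are placeholders; the env var names are the ones the openai library reads):
```python
import os

from langchain.llms import AzureOpenAI

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_API_KEY"] = "<your-key>"

llm = AzureOpenAI(deployment_name="<your-deployment-name>")
```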
### Expected behavior
get answer back from azureopenai resource | InvalidRequestError: Resource not found when running qa_chain.run with azureopenai llm | https://api.github.com/repos/langchain-ai/langchain/issues/5149/comments | 2 | 2023-05-23T21:42:04Z | 2023-09-10T16:13:20Z | https://github.com/langchain-ai/langchain/issues/5149 | 1,722,827,675 | 5,149 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I got the error `OSError: [Errno 22] Invalid argument` when trying to pickle the FAISS vector store with the following code:
```python
import pickle

merge_file_path = "combined_hf_faiss_vectorstore.pkl"
with open(merge_file_path, "wb") as f:
    pickle.dump(csfaq_index, f)
```
It works on my local Mac laptop but not on the Linux machine in the Databricks cloud.
Here is the system info:
sysname='Linux', release='5.15.0-1035-aws', version='#39~20.04.1-Ubuntu SMP Wed Apr 19 15:34:33 UTC 2023', machine='x86_64'
Any suggestion?
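One alternative that sidesteps pickling entirely: the FAISS vector store can persist itself (a sketch; `save_local`/`load_local` are assumed available in this version):
```python
# Persist without pickle
csfaq_index.save_local("combined_hf_faiss_vectorstore")

# Later / elsewhere: reload with the same embeddings object
from langchain.vectorstores import FAISS
csfaq_index = FAISS.load_local("combined_hf_faiss_vectorstore", embeddings)
```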
### Suggestion:
_No response_ | Can't pickle the faiss vector store object in langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5145/comments | 9 | 2023-05-23T19:56:17Z | 2024-04-26T03:26:11Z | https://github.com/langchain-ai/langchain/issues/5145 | 1,722,695,626 | 5,145 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hello,
I would like to contribute a new feature to the API module of LangChain. Specifically, we're looking to build a mechanism to translate natural language into API calls for the Adobe Experience Platform: https://developer.adobe.com/experience-platform-apis/ . I would like to lead this effort and contribute the module back to LangChain. I have forked the codebase; let me know what else is needed from my end. I will be sending a PR soon that covers the basics.
### Motivation
We're working on an end-to-end ML pipeline project, part of which could use this LangChain functionality to translate users' natural-language commands into API requests/responses, like a GPT pair programmer.
### Your contribution
As I said I want to lead/contribute all of this | AEP API Module | https://api.github.com/repos/langchain-ai/langchain/issues/5141/comments | 4 | 2023-05-23T17:10:45Z | 2023-12-06T17:46:05Z | https://github.com/langchain-ai/langchain/issues/5141 | 1,722,482,074 | 5,141 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi, I am trying to query tabular data and play with it using langchain SparkSQLAgent, from the below link.
https://python.langchain.com/en/latest/modules/agents/toolkits/examples/spark_sql.html
I got the below error
**"ModuleNotFoundError: No module named 'pyspark.errors'"** because of the below spark code from langchain library
```python
try:
    from pyspark.errors import PySparkException
except ImportError:
    ...  # (truncated in the original)
```
The pyspark.errors module is simply not present in the Synapse Spark pool, which runs on PySpark 3.3 (the latest available there). We don't have the option to upgrade to PySpark 3.4 in our Spark pools.
Is it possible to also align the library with PySpark 3.3? It would help all the developers using Synapse Spark now. Thanks!
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Note: the error appears in Synapse Spark, which runs on the PySpark 3.3 runtime.
```python
llm = OpenAI(engine="", temperature=0)
toolkit = SparkSQLToolkit(db=spark_sql, llm=llm)
agent_executor = create_spark_sql_agent(
    llm=llm,
    toolkit=toolkit,
    verbose=True,
)
```
### Expected behavior
Please align the langchain module with PySpark 3.3 as well, to help Synapse Spark developers, since PySpark 3.4 is still not available there. | Unable to Use "Spark SQL Agent" in Azure Synapse Spark Pool (pyspark 3.3 version) | https://api.github.com/repos/langchain-ai/langchain/issues/5139/comments | 1 | 2023-05-23T16:42:17Z | 2023-09-10T16:13:31Z | https://github.com/langchain-ai/langchain/issues/5139 | 1,722,444,819 | 5,139 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Change the return type of `BaseRetriever`'s `get_relevant_documents` (and `aget_relevant_documents`) to return an `Iterable[Document]` rather than `List[Document]`:
https://github.com/hwchase17/langchain/blob/753f4cfc26c04debfa02bb086a441d86877884c1/langchain/schema.py#L277-L297
### Motivation
It isn't clear why the results need to be in a concrete, eagerly formed list. This change would make it easy to write a merge retriever, etc.
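For illustration, a merge retriever sketch under the proposed lazy contract:
```python
from itertools import chain
from typing import Iterable

from langchain.schema import BaseRetriever, Document

def merge(retrievers: Iterable[BaseRetriever], query: str) -> Iterable[Document]:
    # Each retriever's results are consumed lazily and concatenated
    return chain.from_iterable(r.get_relevant_documents(query) for r in retrievers)
```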
### Your contribution
Simple to change the type definition, though technically would be a breaking change. | BaseRetriever's get_relevant_documents to return Iterable rather than List | https://api.github.com/repos/langchain-ai/langchain/issues/5133/comments | 1 | 2023-05-23T15:29:29Z | 2023-09-10T16:13:36Z | https://github.com/langchain-ai/langchain/issues/5133 | 1,722,313,908 | 5,133 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.176
### Who can help?
@hwchase17 @dev2049
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Running the following code throws an error:
```python
from langchain.llms import OpenAI
from langchain.chains import HypotheticalDocumentEmbedder
from langchain.chains.loading import load_chain
from langchain.embeddings.openai import OpenAIEmbeddings
base_embeddings = OpenAIEmbeddings()
llm = OpenAI()
# Load with `web_search` prompt
embeddings = HypotheticalDocumentEmbedder.from_llm(llm, base_embeddings, "web_search")
embeddings.save("/Users/liang.zhang/emb.yaml")
load_chain("/Users/liang.zhang/emb.yaml")
```
Error:
```
---------------------------------------------------------------------------
ConstructorError Traceback (most recent call last)
Input In [33], in <cell line: 1>()
----> 1 load_chain("/Users/liang.zhang/emb.yaml")
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:449, in load_chain(path, **kwargs)
447 return hub_result
448 else:
--> 449 return _load_chain_from_file(path, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:465, in _load_chain_from_file(file, **kwargs)
463 elif file_path.suffix == ".yaml":
464 with open(file_path, "r") as f:
--> 465 config = yaml.safe_load(f)
466 else:
467 raise ValueError("File type must be json or yaml")
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/__init__.py:125, in safe_load(stream)
117 def safe_load(stream):
118 """
119 Parse the first YAML document in a stream
120 and produce the corresponding Python object.
(...)
123 to be safe for untrusted input.
124 """
--> 125 return load(stream, SafeLoader)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/__init__.py:81, in load(stream, Loader)
79 loader = Loader(stream)
80 try:
---> 81 return loader.get_single_data()
82 finally:
83 loader.dispose()
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/constructor.py:51, in BaseConstructor.get_single_data(self)
49 node = self.get_single_node()
50 if node is not None:
---> 51 return self.construct_document(node)
52 return None
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/constructor.py:60, in BaseConstructor.construct_document(self, node)
58 self.state_generators = []
59 for generator in state_generators:
---> 60 for dummy in generator:
61 pass
62 self.constructed_objects = {}
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/constructor.py:413, in SafeConstructor.construct_yaml_map(self, node)
411 data = {}
412 yield data
--> 413 value = self.construct_mapping(node)
414 data.update(value)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/constructor.py:218, in SafeConstructor.construct_mapping(self, node, deep)
216 if isinstance(node, MappingNode):
217 self.flatten_mapping(node)
--> 218 return super().construct_mapping(node, deep=deep)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/constructor.py:143, in BaseConstructor.construct_mapping(self, node, deep)
140 if not isinstance(key, collections.abc.Hashable):
141 raise ConstructorError("while constructing a mapping", node.start_mark,
142 "found unhashable key", key_node.start_mark)
--> 143 value = self.construct_object(value_node, deep=deep)
144 mapping[key] = value
145 return mapping
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/constructor.py:100, in BaseConstructor.construct_object(self, node, deep)
98 constructor = self.__class__.construct_mapping
99 if tag_suffix is None:
--> 100 data = constructor(self, node)
101 else:
102 data = constructor(self, tag_suffix, node)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/yaml/constructor.py:427, in SafeConstructor.construct_undefined(self, node)
426 def construct_undefined(self, node):
--> 427 raise ConstructorError(None, None,
428 "could not determine a constructor for the tag %r" % node.tag,
429 node.start_mark)
ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/name:openai.api_resources.embedding.Embedding'
in "/Users/liang.zhang/emb.yaml", line 5, column 11
```
### Expected behavior
No errors should occur. | HypotheticalDocumentEmbedder loading fails | https://api.github.com/repos/langchain-ai/langchain/issues/5131/comments | 4 | 2023-05-23T13:59:44Z | 2023-09-18T16:11:36Z | https://github.com/langchain-ai/langchain/issues/5131 | 1,722,147,865 | 5,131 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.176
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Running the following code to load a saved APIChain fails.
```python
from langchain.chains.api.prompt import API_RESPONSE_PROMPT
from langchain.chains import APIChain
from langchain.chains.loading import load_chain
from langchain.prompts.prompt import PromptTemplate
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
from langchain.chains.api import open_meteo_docs
chain_new = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS, verbose=True)
chain_new.save("/Users/liang.zhang/api.yaml")
chain = load_chain("/Users/liang.zhang/api.yaml")
```
Error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [29], in <cell line: 1>()
----> 1 chain = load_chain("/Users/liang.zhang/api.yaml")
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:449, in load_chain(path, **kwargs)
447 return hub_result
448 else:
--> 449 return _load_chain_from_file(path, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:476, in _load_chain_from_file(file, **kwargs)
473 config["memory"] = kwargs.pop("memory")
475 # Load the chain from the config now.
--> 476 return load_chain_from_config(config, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:439, in load_chain_from_config(config, **kwargs)
436 raise ValueError(f"Loading {config_type} chain not supported")
438 chain_loader = type_to_loader_dict[config_type]
--> 439 return chain_loader(config, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:383, in _load_api_chain(config, **kwargs)
381 requests_wrapper = kwargs.pop("requests_wrapper")
382 else:
--> 383 raise ValueError("`requests_wrapper` must be present.")
384 return APIChain(
385 api_request_chain=api_request_chain,
386 api_answer_chain=api_answer_chain,
387 requests_wrapper=requests_wrapper,
388 **config,
389 )
ValueError: `requests_wrapper` must be present.
```
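A workaround that appears to unblock loading (my reading of `_load_api_chain` above, which pops `requests_wrapper` from kwargs) is to supply the wrapper explicitly; `TextRequestsWrapper` is my assumption for the expected type:
```python
from langchain.requests import TextRequestsWrapper  # assumption: matches APIChain's field type

chain = load_chain("/Users/liang.zhang/api.yaml", requests_wrapper=TextRequestsWrapper())
```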
### Expected behavior
No error should occur. | APIChain loading fails | https://api.github.com/repos/langchain-ai/langchain/issues/5128/comments | 3 | 2023-05-23T13:37:40Z | 2023-06-27T22:32:42Z | https://github.com/langchain-ai/langchain/issues/5128 | 1,722,107,136 | 5,128 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Currently only certain sets of `Chains` support `async`. It would be amazing to bring this support to more chains, in my case specifically the OpenAPI chain.
### Motivation
`async` support for more chains would unify code for larger applications that run several different types of chains, especially with regard to the streaming callbacks.
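As a sketch of the desired end state (hypothetical until the chain gains real async internals), the OpenAPI chain would be drivable through the same async entrypoints as other chains:
```python
import asyncio

# Hedged sketch: `arun` already exists on the Chain base class, but the OpenAPI
# chain needs async support internally so this does not block the event loop.
async def main(openapi_chain):
    answer = await openapi_chain.arun("What is the weather in Munich today?")
    print(answer)

# asyncio.run(main(chain))  # assuming `chain` is an OpenAPI chain instance
```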
### Your contribution
I could start with bringing `async` support to the `OpenAPI` chain as a first step. | Support `async` calls on `OpenAPI` chains | https://api.github.com/repos/langchain-ai/langchain/issues/5126/comments | 1 | 2023-05-23T10:51:43Z | 2023-09-10T16:13:39Z | https://github.com/langchain-ai/langchain/issues/5126 | 1,721,797,534 | 5,126 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Modify GoogleDriveLoader so that it can accept a Google Drive Service instead of relying on file paths to token.json and credentials.json.
### Motivation
I am deploying LangChain in a serverless environment where I use Redis for chat memory and as a security token store. In this context, it would be useful to source the Google Drive connection credentials directly from Redis. Typically this could be done as follows:
```python
class GoogleDriveLoader(BaseLoader, BaseModel):
    """Loader that loads Google Docs from Google Drive."""

    credentials_path: Path = Path.home() / ".credentials" / "credentials.json"
    token_path: Path = Path.home() / ".credentials" / "token.json"
    service: Optional[Resource] = None  # proposed patch
    folder_id: Optional[str] = None
    document_ids: Optional[List[str]] = None
    file_ids: Optional[List[str]] = None
```
Then it is mostly about wrapping the three occurrences of the following pattern in a function that skips them when a service is supplied as a parameter:
```python
creds = self._load_credentials()
service = build("drive", "v3", credentials=creds)
```
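A minimal sketch of such a helper (the method name is hypothetical, assuming the fields above):
```python
def _ensure_service(self) -> "Resource":
    # Prefer an injected service (e.g. built from credentials stored in Redis);
    # otherwise fall back to the existing file-based credential flow.
    if self.service is not None:
        return self.service
    creds = self._load_credentials()
    return build("drive", "v3", credentials=creds)
```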
### Your contribution
I can propose a fork if there is some interest in this evolution. | Pass Google Drive Service to GoogleDriveLoader instead of the token.json and credentials.json | https://api.github.com/repos/langchain-ai/langchain/issues/5125/comments | 1 | 2023-05-23T10:33:51Z | 2023-05-23T21:07:10Z | https://github.com/langchain-ai/langchain/issues/5125 | 1,721,767,347 | 5,125
[
"hwchase17",
"langchain"
]
| ### System Info
python 3.11.3
macosx 13.4
langchain==0.0.177
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Loosely based on the sample code provided in the Langchain documentation [here](https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html)
__Issue:__
Adding metadata does not seem to work; nothing is returned on a similarity search or from QA chains.
`vectorstore.add_documents(documents=docs, meta_datas=meta_data)`
```
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Weaviate
import weaviate

client = weaviate.Client("http://localhost:8080")  # assumption: a local Weaviate instance

loader = TextLoader('/path/to/file/state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = Weaviate(client, "Paragraph", "content")
vectorstore.add_documents(documents=docs)
query = "What did the president say about Ketanji Brown Jackson"
docs = vectorstore.similarity_search(query)
print(docs[0].metadata)
```
output:
`{}`
### Expected behavior
Apologies in advance if I've misunderstood the functionality; however, I would expect the source document metadata to be returned by the query from the Weaviate database. I can see the source is present in the DB using Weaviate's API:
```
import json

result = (
client.query
.get("Paragraph", ["content", "source"])
.with_near_text({
"concepts": [query]
})
.with_limit(1)
.do()
)
print(json.dumps(result, indent=4))
```
output:
```
{
"data": {
"Get": {
"Paragraph": [
{
"content": "Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence.",
"source": "/path/to/file/state_of_the_union.txt"
}
]
}
}
}
```
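A possibly related detail (my assumption from reading the wrapper source, so please verify against your installed version): the Weaviate wrapper only copies attributes it was told about into `Document.metadata`, so the desired fields may need to be listed explicitly:
```python
# Hedged sketch: assumes the wrapper accepts an `attributes` argument.
vectorstore = Weaviate(client, "Paragraph", "content", attributes=["source"])
docs = vectorstore.similarity_search(query)
print(docs[0].metadata)  # expected: {'source': '/path/to/file/state_of_the_union.txt'}
```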
| source metadata cannot be retrieved from Weaviate | https://api.github.com/repos/langchain-ai/langchain/issues/5124/comments | 2 | 2023-05-23T09:14:14Z | 2023-05-24T08:14:58Z | https://github.com/langchain-ai/langchain/issues/5124 | 1,721,600,792 | 5,124 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello,
I am facing slow response times (25-30 seconds) per question with `ConversationalRetrievalQAChain` and Pinecone.
```
const chain = ConversationalRetrievalQAChain.fromLLM(
this.llm,
vectorStore.asRetriever(),
);
const res = await chain.call({ question, chat_history: [''] });
```
95% of that time elapses after `chain.call` is executed. I have tried both the gpt-3.5-turbo and gpt-4 models, and I see similar response times.
I've also tried turning on streaming, and I can see that with gpt-3.5-turbo nothing is streamed for the first 20 seconds or so; once streaming starts, it is faster than gpt-4. Conversely, gpt-4 takes much less time to start streaming but is slower to complete the answer.
Any help would be appreciated, thank you!
| Slow response time with `ConversationalRetrievalQAChain` | https://api.github.com/repos/langchain-ai/langchain/issues/5123/comments | 4 | 2023-05-23T09:01:48Z | 2023-11-09T06:18:06Z | https://github.com/langchain-ai/langchain/issues/5123 | 1,721,576,598 | 5,123 |
[
"hwchase17",
"langchain"
]
| ### Feature request
A VectorSearch enabled SQLChain that is capable of doing `JOIN`, `WHERE` filters and vector search at the same time.
It should be agnostic to any SQL database backend that supports common SQL and vector search, with customizable distance function composer.
### Motivation
Hello from [MyScale](https://myscale.com) AI team! 😊👋
We have been working on features to bridge the gap between SQL, vector search and LLM applications. Some inspiring works, like self-query retrievers for VectorStores (for example [Weaviate](https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/weaviate_self_query.html) and [others](https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query.html)), really turn those vector search databases into a powerful knowledge base! 🚀🚀
We are wondering if we can merge it all into one: SQL plus vector search plus LLMChains, making this SQL vector database the only source of your data. Here are some benefits we can think of for now; maybe you have more 👀:
1. With ALL the data you have: since you store all your data in the database, you don't need to worry about foreign keys or links to names from other data sources.
2. Flexible data structure: even if you have changed your schema, for example by adding a table, the LLM will know how to `JOIN` those tables and use them as filters.
3. SQL compatibility: we found that the vector databases on the market that support SQL have similar interfaces, which means you can change your backend painlessly: just change the name of the distance function in your DB solution and you are ready to go!
We would like to consider PGVector and MyScale for now, but if you want more, just comment below and we will push hard to ship it! 🏃🏃
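As a sketch of the intended developer experience (`VectorSQLChain` is the proposed name, not an existing API):
```python
# Hedged sketch: hypothetical usage of the proposed chain. The distance
# function name is backend-specific (e.g. cosineDistance on MyScale).
chain = VectorSQLChain.from_llm(llm, db, distance_func="cosineDistance")
chain.run("List the 5 documents most similar to 'retrieval augmented generation' added after 2022")
```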
### Your contribution
A PR contains the VectorSQLChain | Feature Proposal: VectorSearch enabled SQLChain? | https://api.github.com/repos/langchain-ai/langchain/issues/5122/comments | 4 | 2023-05-23T08:32:09Z | 2023-10-10T09:37:42Z | https://github.com/langchain-ai/langchain/issues/5122 | 1,721,520,077 | 5,122 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have been trying to use the SQL Database Chain with an open-source LLM but have not succeeded.
```python
from langchain import HuggingFaceHub, SQLDatabase
from langchain.chains import SQLDatabaseSequentialChain

repo_id = "google/flan-t5-xl"
db = SQLDatabase.from_uri(db_url)  # db_url: your database connection string
llm = HuggingFaceHub(repo_id=repo_id)
db_chain = SQLDatabaseSequentialChain.from_llm(llm, db, verbose=True)
```
Please help; pointers to any examples available online would be appreciated.
I have already tried gpt4all and llama.cpp
| How to use SQL Database chain with an opensource LLM? | https://api.github.com/repos/langchain-ai/langchain/issues/5121/comments | 5 | 2023-05-23T07:16:07Z | 2024-02-12T17:10:23Z | https://github.com/langchain-ai/langchain/issues/5121 | 1,721,388,198 | 5,121 |
[
"hwchase17",
"langchain"
]
| ### System Info
- platform
```
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.6 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.6 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
```
- python
```
$ python -V
Python 3.9.7
# installed with
asdf install python 3.9.7
```
- problematic dependency (updated after [this comment](https://github.com/hwchase17/langchain/issues/5113#issuecomment-1558493486))
```
# this was updated today.
typing_extensions==4.6.0
```
- dependencies
```
langchain==0.0.177
openapi-schema-pydantic==1.2.4
pydantic==1.10.7
```
<details>
<summary>all the dependencies</summary>
```
$ pip install langchain
Collecting langchain
Using cached langchain-0.0.177-py3-none-any.whl (877 kB)
Collecting PyYAML>=5.4.1
Using cached PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (661 kB)
Collecting openapi-schema-pydantic<2.0,>=1.2
Using cached openapi_schema_pydantic-1.2.4-py3-none-any.whl (90 kB)
Collecting requests<3,>=2
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Collecting SQLAlchemy<3,>=1.4
Using cached SQLAlchemy-2.0.15-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.7 MB)
Collecting aiohttp<4.0.0,>=3.8.3
Using cached aiohttp-3.8.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.0 MB)
Collecting dataclasses-json<0.6.0,>=0.5.7
Using cached dataclasses_json-0.5.7-py3-none-any.whl (25 kB)
Collecting async-timeout<5.0.0,>=4.0.0
Using cached async_timeout-4.0.2-py3-none-any.whl (5.8 kB)
Collecting numexpr<3.0.0,>=2.8.4
Using cached numexpr-2.8.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (380 kB)
Collecting pydantic<2,>=1
Using cached pydantic-1.10.7-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.2 MB)
Collecting numpy<2,>=1
Using cached numpy-1.24.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
Collecting tenacity<9.0.0,>=8.1.0
Using cached tenacity-8.2.2-py3-none-any.whl (24 kB)
Collecting multidict<7.0,>=4.5
Using cached multidict-6.0.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (114 kB)
Collecting attrs>=17.3.0
Using cached attrs-23.1.0-py3-none-any.whl (61 kB)
Collecting charset-normalizer<4.0,>=2.0
Using cached charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (199 kB)
Collecting yarl<2.0,>=1.0
Using cached yarl-1.9.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (269 kB)
Collecting frozenlist>=1.1.1
Using cached frozenlist-1.3.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (158 kB)
Collecting aiosignal>=1.1.2
Using cached aiosignal-1.3.1-py3-none-any.whl (7.6 kB)
Collecting typing-inspect>=0.4.0
Using cached typing_inspect-0.8.0-py3-none-any.whl (8.7 kB)
Collecting marshmallow<4.0.0,>=3.3.0
Using cached marshmallow-3.19.0-py3-none-any.whl (49 kB)
Collecting marshmallow-enum<2.0.0,>=1.5.1
Using cached marshmallow_enum-1.5.1-py2.py3-none-any.whl (4.2 kB)
Collecting packaging>=17.0
Using cached packaging-23.1-py3-none-any.whl (48 kB)
Collecting typing-extensions>=4.2.0
Using cached typing_extensions-4.6.0-py3-none-any.whl (30 kB)
Collecting idna<4,>=2.5
Using cached idna-3.4-py3-none-any.whl (61 kB)
Collecting urllib3<3,>=1.21.1
Using cached urllib3-2.0.2-py3-none-any.whl (123 kB)
Collecting certifi>=2017.4.17
Using cached certifi-2023.5.7-py3-none-any.whl (156 kB)
Collecting greenlet!=0.4.17
Using cached greenlet-2.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (610 kB)
Collecting mypy-extensions>=0.3.0
Using cached mypy_extensions-1.0.0-py3-none-any.whl (4.7 kB)
Installing collected packages: packaging, typing-extensions, mypy-extensions, multidict, marshmallow, idna, frozenlist, yarl, urllib3, typing-inspect, pydantic, numpy, marshmallow-enum, greenlet, charset-normalizer, certifi, attrs, async-timeout, aiosignal, tenacity, SQLAlchemy, requests, PyYAML, openapi-schema-pydantic, numexpr, dataclasses-json, aiohttp, langchain
Successfully installed PyYAML-6.0 SQLAlchemy-2.0.15 aiohttp-3.8.4 aiosignal-1.3.1 async-timeout-4.0.2 attrs-23.1.0 certifi-2023.5.7 charset-normalizer-3.1.0 dataclasses-json-0.5.7 frozenlist-1.3.3 greenlet-2.0.2 idna-3.4 langchain-0.0.177 marshmallow-3.19.0 marshmallow-enum-1.5.1 multidict-6.0.4 mypy-extensions-1.0.0 numexpr-2.8.4 numpy-1.24.3 openapi-schema-pydantic-1.2.4 packaging-23.1 pydantic-1.10.7 requests-2.31.0 tenacity-8.2.2 typing-extensions-4.6.0 typing-inspect-0.8.0 urllib3-2.0.2 yarl-1.9.2
```
</details>
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. install python==3.9.7 or 3.9.8 or 3.9.9 (with asdf or docker; I didn't check the other versions.)
2. install langchain `pip install langchain`
3. see the error
```
Python 3.9.7 (default, May 23 2023, 11:05:54)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import langchain
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/site-packages/langchain/agents/agent.py", line 16, in <module>
from langchain.agents.tools import InvalidTool
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/site-packages/langchain/agents/tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/site-packages/langchain/tools/__init__.py", line 42, in <module>
from langchain.tools.vectorstore.tool import (
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/site-packages/langchain/tools/vectorstore/tool.py", line 13, in <module>
from langchain.chains import RetrievalQA, RetrievalQAWithSourcesChain
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/site-packages/langchain/chains/__init__.py", line 2, in <module>
from langchain.chains.api.base import APIChain
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/site-packages/langchain/chains/api/base.py", line 13, in <module>
from langchain.chains.api.prompt import API_RESPONSE_PROMPT, API_URL_PROMPT
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/site-packages/langchain/chains/api/prompt.py", line 2, in <module>
from langchain.prompts.prompt import PromptTemplate
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/site-packages/langchain/prompts/__init__.py", line 3, in <module>
from langchain.prompts.chat import (
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/site-packages/langchain/prompts/chat.py", line 10, in <module>
from langchain.memory.buffer import get_buffer_string
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/site-packages/langchain/memory/__init__.py", line 28, in <module>
from langchain.memory.vectorstore import VectorStoreRetrieverMemory
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/site-packages/langchain/memory/vectorstore.py", line 10, in <module>
from langchain.vectorstores.base import VectorStoreRetriever
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/site-packages/langchain/vectorstores/__init__.py", line 2, in <module>
from langchain.vectorstores.analyticdb import AnalyticDB
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/site-packages/langchain/vectorstores/analyticdb.py", line 16, in <module>
from langchain.embeddings.base import Embeddings
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/site-packages/langchain/embeddings/__init__.py", line 19, in <module>
from langchain.embeddings.openai import OpenAIEmbeddings
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 67, in <module>
class OpenAIEmbeddings(BaseModel, Embeddings):
File "pydantic/main.py", line 197, in pydantic.main.ModelMetaclass.__new__
File "pydantic/fields.py", line 506, in pydantic.fields.ModelField.infer
File "pydantic/fields.py", line 436, in pydantic.fields.ModelField.__init__
File "pydantic/fields.py", line 552, in pydantic.fields.ModelField.prepare
File "pydantic/fields.py", line 663, in pydantic.fields.ModelField._type_analysis
File "pydantic/fields.py", line 808, in pydantic.fields.ModelField._create_sub_type
File "pydantic/fields.py", line 436, in pydantic.fields.ModelField.__init__
File "pydantic/fields.py", line 552, in pydantic.fields.ModelField.prepare
File "pydantic/fields.py", line 668, in pydantic.fields.ModelField._type_analysis
File "/home/takumi/.asdf/installs/python/3.9.7/lib/python3.9/typing.py", line 847, in __subclasscheck__
return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class
>>>
```
<details>
<summary>with docker</summary>
```
$ docker run -it python:3.9.7-bullseye bash
$ pip install langchain
$ python -c "import langchain"
```
</details>
### Expected behavior
```
Python 3.10.1 (main, Dec 21 2021, 09:01:08) [GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import langchain
>>>
```
- what to do?
- ~change [python dependency](https://github.com/hwchase17/langchain/blob/master/pyproject.toml#LL15C24-L15C24) to 3.10 or later~
- fix the version of typing_extensions to 4.5.0 or change the relevant code.
- Thank you for checking out this issue. If there are anything more to check, I would be glad to help. | import langchain with python<=3.9 fails | https://api.github.com/repos/langchain-ai/langchain/issues/5113/comments | 31 | 2023-05-23T02:38:48Z | 2023-12-29T18:37:48Z | https://github.com/langchain-ai/langchain/issues/5113 | 1,721,040,284 | 5,113 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain 0.0.177
Python 3.10.9
Windows 10
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Copied directly from https://python.langchain.com/en/latest/modules/callbacks/getting_started.html 'Creating a Custom Handler'
```
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage
class MyCustomHandler(BaseCallbackHandler):
def on_llm_new_token(self, token: str, **kwargs) -> None:
print(f"My custom handler, token: {token}")
# To enable streaming, we pass in `streaming=True` to the ChatModel constructor
# Additionally, we pass in a list with our custom handler
chat = ChatOpenAI(max_tokens=25, streaming=True, callbacks=[MyCustomHandler()])
chat([HumanMessage(content="Tell me a joke")])
```
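For reference, a hedged workaround sketch (my assumption: the installed version still declares these hooks as abstract, per the error shown below) is to stub the remaining callbacks as no-ops:
```python
from langchain.callbacks.base import BaseCallbackHandler

class MyCustomHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(f"My custom handler, token: {token}")

    # No-op stubs for every hook the error reports as abstract.
    def on_llm_start(self, *args, **kwargs): pass
    def on_llm_end(self, *args, **kwargs): pass
    def on_llm_error(self, *args, **kwargs): pass
    def on_chain_start(self, *args, **kwargs): pass
    def on_chain_end(self, *args, **kwargs): pass
    def on_chain_error(self, *args, **kwargs): pass
    def on_tool_start(self, *args, **kwargs): pass
    def on_tool_end(self, *args, **kwargs): pass
    def on_tool_error(self, *args, **kwargs): pass
    def on_text(self, *args, **kwargs): pass
    def on_agent_action(self, *args, **kwargs): pass
    def on_agent_finish(self, *args, **kwargs): pass
```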
### Expected behavior
Expected Behavior:
My custom handler, token:
My custom handler, token: Why
My custom handler, token: did
My custom handler, token: the
My custom handler, token: tomato
My custom handler, token: turn
My custom handler, token: red
My custom handler, token: ?
My custom handler, token: Because
My custom handler, token: it
My custom handler, token: saw
My custom handler, token: the
My custom handler, token: salad
My custom handler, token: dressing
My custom handler, token: !
My custom handler, token:
AIMessage(content='Why did the tomato turn red? Because it saw the salad dressing!', additional_kwargs={})
Error:
TypeError: Can't instantiate abstract class MyCustomHandler with abstract methods on_agent_action, on_agent_finish, on_chain_end, on_chain_error, on_chain_start, on_llm_end, on_llm_error, on_llm_start, on_text, on_tool_end, on_tool_error, on_tool_start | MyCustomHandler documentation example not working | https://api.github.com/repos/langchain-ai/langchain/issues/5112/comments | 2 | 2023-05-23T02:22:56Z | 2023-10-21T06:32:13Z | https://github.com/langchain-ai/langchain/issues/5112 | 1,721,028,735 | 5,112 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.173
Python 3.10.11
openai 0.27.6
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from dotenv import load_dotenv
load_dotenv()
```
# use langchain llms
```python
from langchain.llms import AzureOpenAI
llm = AzureOpenAI(temperature=0.1,deployment_name="gpt-35-turbo",verbose=True)
```
```python
llm("who are you?")
```
```
' what do you want? why are you here? what do you want to do? what do you want to achieve? what do you want to be? what do you want to have? what do you want to give? what do you want to receive? what do you want to learn? what do you want to teach? what do you want to know? what do you want to see? what do you want to feel? what do you want to experience? what do you want to share? what do you want to create? what do you want to manifest? what do you want to change? what do you want to transform? what do you want to heal? what do you want to release? what do you want to forgive? what do you want to let go of? what do you want to embrace? what do you want to accept? what do you want to surrender? what do you want to allow? what do you want to receive? what do you want to give? what do you want to do? what do you want to be? what do you want to have? what do you want to create? what do you want to manifest? what do you want to experience? what do you want to share? what do you want to learn'
```
```python
llm("你是谁?")
```
```
'",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?",\n "你是谁?'
```
```python
from langchain.llms import OpenAI
llm_openai = OpenAI(engine="gpt-35-turbo")
llm_openai("who are you ?")
```
```
'” “I’m the devil,” said the Stranger. “My name is Legion.” In that moment, everything changed. The heat of the sun was extinguished in an instant, replaced by an iciness that flowed from the Stranger’s presence. The air itself seemed to grow thick and heavy, pressing down on me, as if I were drowning in molasses. My eyes were drawn back to the Stranger’s face, but this time I saw more than just a man. I saw something darker, something that made my soul cry out in despair. I saw a demon. I felt my body begin to shake, my fingers trembling uncontrollably. My mind was screaming at me to run, to flee from this place as fast as I could, but my feet seemed rooted to the spot. The Stranger’s smile grew wider, and it was all I could do to keep from screaming. “I have a proposition for you,” he said. “I’ve heard of your powers, and I’m impressed. I think you could be a great asset to me.” I tried to speak, but no sound came out. “Don’t be afraid,” he said. “I’m not here to hurt you. I’m here to offer you a deal. I can give you power'
```
# just use openai
```python
import openai
response = openai.ChatCompletion.create(
temperature=0.1,
engine="gpt-35-turbo", # engine = "deployment_name".
messages=[
{"role": "user", "content": """
who are you?
"""}, ]
)
# print(response)
print(response['choices'][0]['message']['content'])
```
```
I am an AI language model created by OpenAI, designed to assist with various tasks such as answering questions, generating text, and providing information.
```
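One possible explanation (my assumption): `gpt-35-turbo` is a chat model, and the completions-style `AzureOpenAI` wrapper sends it a bare prompt with no chat framing or stop sequences, so it free-runs. The direct SDK call above uses the chat API, so the chat wrapper should be the closer comparison:
```python
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

# Hedged sketch: assumes the same Azure env vars loaded via dotenv above.
chat = AzureChatOpenAI(deployment_name="gpt-35-turbo", temperature=0.1)
print(chat([HumanMessage(content="who are you?")]).content)
```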
### Expected behavior
Does langchain apply some default settings when we call GPT? | AzureOpenAI Repeat words | https://api.github.com/repos/langchain-ai/langchain/issues/5109/comments | 1 | 2023-05-23T01:06:06Z | 2023-07-11T10:01:58Z | https://github.com/langchain-ai/langchain/issues/5109 | 1,720,955,939 | 5,109
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Link: https://python.langchain.com/en/latest/integrations/aim_tracking.html
Below is the section that contains problematic code.
<img width="910" alt="Screenshot 2023-05-22 at 4 42 10 PM" src="https://github.com/hwchase17/langchain/assets/59850826/eb988031-4167-4a6c-a77f-9a0f25d2a736">
<br>
The document doesn't include library versions. I'm using `langchain==0.0.152`.
When I used `callbacks=callbacks`, this was my error:
<img width="592" alt="Screenshot 2023-05-22 at 4 42 37 PM" src="https://github.com/hwchase17/langchain/assets/59850826/6a3a46d5-cb43-4e88-be0e-a767d82064cd">
When I changed it to using `model_kwargs`, I got another error
<img width="430" alt="Screenshot 2023-05-22 at 4 43 41 PM" src="https://github.com/hwchase17/langchain/assets/59850826/e1ae42ee-1eac-4517-839f-224bfc507831">
It's unclear what key-value pairs `model_kwargs` expects. The LangChain `callbacks` [docs](https://python.langchain.com/en/latest/modules/callbacks/getting_started.html) still use `callbacks=` throughout, rather than `model_kwargs`.
### Idea or request for content:
Please either provide a working example of Aim+LangChain code or update the `callbacks` document if `callbacks` is indeed not relevant anymore. | DOC: Aim docs contains code that doesn't run | https://api.github.com/repos/langchain-ai/langchain/issues/5107/comments | 1 | 2023-05-22T23:51:58Z | 2023-09-10T16:13:45Z | https://github.com/langchain-ai/langchain/issues/5107 | 1,720,839,918 | 5,107 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
What the docs show:
```python
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(deployment="your-embeddings-deployment-name")
```
But `client` is required, so you need:
```python
from langchain.embeddings import OpenAIEmbeddings
import openai
embeddings = OpenAIEmbeddings(deployment="your-embeddings-deployment-name",client=openai.Embedding())
```
A similar situation exists for these docs:
```python
model = AzureChatOpenAI(
openai_api_base=BASE_URL,
openai_api_version="2023-03-15-preview",
deployment_name=DEPLOYMENT_NAME,
openai_api_key=API_KEY,
openai_api_type = "azure",
)
```
when what I really needed was:
```python
model = AzureChatOpenAI(
deployment_name="deployment-name", client=openai.ChatCompletion()
)
```
It might also be better (and perhaps what the docs writer expected) for AzureChatOpenAI's client to default to openai.ChatCompletion() and for OpenAIEmbeddings' client to default to openai.Embedding().
### Idea or request for content:
_No response_ | DOC: client required but not in example code | https://api.github.com/repos/langchain-ai/langchain/issues/5105/comments | 2 | 2023-05-22T21:35:22Z | 2023-09-23T16:06:00Z | https://github.com/langchain-ai/langchain/issues/5105 | 1,720,596,854 | 5,105 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi,
Testing this loader, it looks as though it is pulling trashed files from folders. I think pulling trashed files should default to false, if anything, and be opt-in.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
### Related Components
- [X] Document Loaders
### Reproduction
Use GoogleDriveLoader:
1. Point it to a folder.
2. Move a file in that folder to the trash.
3. Reindex.
The file can still be found in the vector store.
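A likely fix (my assumption about where the loader queries Drive) is to exclude trashed files in the Drive v3 list query:
```python
# Hedged sketch: `service` and `folder_id` refer to the loader's internals.
results = service.files().list(
    q=f"'{folder_id}' in parents and trashed=false",
    fields="files(id, name, mimeType)",
).execute()
```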
### Expected behavior
The trashed file should not be searchable. | GoogleDriveLoader seems to be pulling trashed documents from the folder | https://api.github.com/repos/langchain-ai/langchain/issues/5104/comments | 5 | 2023-05-22T21:21:14Z | 2023-05-25T05:26:19Z | https://github.com/langchain-ai/langchain/issues/5104 | 1,720,575,898 | 5,104
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain Version: 0.0.175
Platform: macos
Python version: 3.9
Database: Postgres
### Who can help?
@vowelparrot @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. Let's say we have two tables, `users` & `profiles`, and they both have a column named `tags`.
2. SQLDatabaseChain produces the following query:
`SELECT "first_name", "last_name", "tags" FROM users INNER JOIN profiles ON users.id=profiles.user_id`
3. This generates the following exception:
`ProgrammingError('(psycopg2.errors.AmbiguousColumn) column reference "tags" is ambiguous
LINE 1: SELECT "first_name", "last_name", "tags")`
### Expected behavior
I would expect it to produce the query with table names specified, at least for columns that share the same name across the two tables (e.g. `users.tags`).
I've tried enabling use_query_checker, but it didn't help. I also tried adding the following line to the `QUERY_CHECKER` prompt:
`- Adding the table name when referencing columns that are specified in multiple tables (i.e. ambiguous column names)` - it didn't help either. | psycopg2.errors.AmbiguousColumn exception when using SQLDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/5100/comments | 4 | 2023-05-22T17:05:51Z | 2023-10-20T07:27:37Z | https://github.com/langchain-ai/langchain/issues/5100 | 1,720,083,619 | 5,100
[
"hwchase17",
"langchain"
]
| ```python
import os

from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import DirectoryLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.memory import ConversationBufferWindowMemory
from langchain.text_splitter import TokenTextSplitter
from langchain.vectorstores import Chroma

api_key = ["sk-...", "sk-...", "sk-..."]  # list of OpenAI API keys (redacted placeholder)
j = 0

def load_chain():
    global j
    os.environ["OPENAI_API_KEY"] = api_key[j % 3]
    print(api_key[j % 3])
    j = j + 1
    print("model loading started")
    loader = DirectoryLoader('./yuliaoku', glob='**/*.txt')
    docs = loader.load()
    # split the documents into chunks
    text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=0)
    doc_texts = text_splitter.split_documents(docs)
    # call the OpenAI embeddings
    embeddings = OpenAIEmbeddings(openai_api_key=os.environ["OPENAI_API_KEY"])
    # vectorize
    vectordb = Chroma.from_documents(doc_texts, embeddings, persist_directory="./yuliaoku")
    vectordb.persist()
    ll = ChatOpenAI(temperature=0.3, model_name="gpt-3.5-turbo", max_tokens=1024)
    memory = ConversationBufferWindowMemory(memory_key="chat_history", k=3,
                                            output_key='answer', return_messages=True)
    # create the chatbot chain object
    chain = ConversationalRetrievalChain.from_llm(
        retriever=vectordb.as_retriever(search_type="similarity", search_kwargs={"k": 1}),
        llm=ll,  # a comma was missing here in the original snippet
        verbose=True,
        memory=memory,
        get_chat_history=lambda h: h,
        # other chain types: map_reduce, map_rerank, refine
        return_source_documents=False)
    print("model loading finished")
    return chain

def embedding(require_text):
    # NOTE: the chain (and its window memory) is re-created on every call,
    # so the chat history is reset between questions.
    chain = load_chain()
    ans = chain({"question": require_text})
    return ans['answer']
```
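While debugging, I sketched a possible fix (my assumption about the root cause: the chain and its memory must be reused across turns for the history to accumulate):
```python
_chain = None

def embedding(require_text):
    global _chain
    if _chain is None:
        _chain = load_chain()  # build once; reuse so the memory persists
    ans = _chain({"question": require_text})
    return ans["answer"]
```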
The context (chat history) feature does not work. Where is my mistake? My langchain version is 0.0.161. | I am still struggling. It does not remember anything in the chat history. What am I doing wrong? Here is my code: | https://api.github.com/repos/langchain-ai/langchain/issues/5099/comments | 2 | 2023-05-22T16:55:26Z | 2023-09-15T16:11:45Z | https://github.com/langchain-ai/langchain/issues/5099 | 1,720,063,351 | 5,099
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.173
python 3.9.16
### Who can help?
@hwchase17 @agola11 @vowelparrot
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

custom_prompt_template = """Use the context to generate an appropriate reply to the query
Context: {context}
Query: {question}
Response:"""
CUSTOM_PROMPT = PromptTemplate(
    template=custom_prompt_template,  # the original snippet referenced a stale name here
    input_variables=["context", "question"],
)

def generate_response(text: str, query: str):
    # create_document_vectorstore is a local helper that builds a retriever
    retriever = create_document_vectorstore(page_text=text)
    chain_type_kwargs = {"prompt": CUSTOM_PROMPT}
    qa = RetrievalQA.from_chain_type(
        llm=OpenAI(openai_api_key=openai_api_key),
        chain_type="map_reduce",
        retriever=retriever,
        chain_type_kwargs=chain_type_kwargs,  # passing the custom prompt triggers the error
    )
    return qa.run(query)  # was body.query, which is undefined here
```
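A hedged sketch of what may work instead (my assumption based on `load_qa_chain`'s map_reduce signature, which takes separate prompts rather than a single `prompt` key):
```python
# COMBINE_PROMPT is hypothetical here; for map_reduce it must accept the
# `summaries` and `question` variables used to merge the per-chunk answers.
chain_type_kwargs = {
    "question_prompt": CUSTOM_PROMPT,   # applied to each retrieved chunk
    "combine_prompt": COMBINE_PROMPT,   # merges the per-chunk answers
}
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(openai_api_key=openai_api_key),
    chain_type="map_reduce",
    retriever=retriever,
    chain_type_kwargs=chain_type_kwargs,
)
```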
### Expected behavior
tl;dr trying to use `RetrievalQA` chain with `chain_type` of `map_reduce` (and `refine`) errors out when using a custom prompt but successfully works with `chain_type=stuff`
Note this errors out with
```
ValidationError: 1 validation error for MapReduceDocumentsChain
prompt
extra fields not permitted (type=value_error.extra)
```
However, if `chain_type` is changed to `stuff`, the code generates a completion without a problem. | `RetrievalQA` chain with chain_type `map_reduce` fails for custom prompts | https://api.github.com/repos/langchain-ai/langchain/issues/5096/comments | 12 | 2023-05-22T14:39:02Z | 2024-02-14T16:13:53Z | https://github.com/langchain-ai/langchain/issues/5096 | 1,719,836,908 | 5,096
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.176
python 3.10
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
For example try running the following code:
```python
from langchain.agents import Tool
from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.tools.python.tool import PythonREPLTool
from langchain.utilities import DuckDuckGoSearchAPIWrapper
import langchain
langchain.debug = True
search = DuckDuckGoSearchAPIWrapper()
tools = [
Tool(
name = "Current Search",
func=search.run,
description="useful for when you need to answer questions about current events or the current state of the world. the input to this should be a single search term."
),
PythonREPLTool()
]
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm=ChatOpenAI(temperature=0)
agent_chain = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
agent_chain("""create a regex that will extract titles from names. For example foo("Mrs. Smith") should return Mrs.""")
```
Debug of this code contains the following:
```
{
"generations": [
[
{
"text": "{\n \"action\": \"Python REPL\",\n \"action_input\": \"import re\\n\\nname = 'Mrs. Smith'\\n\\nregex = r'(Mr?s?\\.|Ms\\.|Miss|Dr\\.) ([A-Za-z]+)'\\n\\nmatch = re.search(regex, name)\\n\\nif match:\\n print(match.group(2))\"\n}",
"generation_info": null,
"message": {
"content": "{\n \"action\": \"Python REPL\",\n \"action_input\": \"import re\\n\\nname = 'Mrs. Smith'\\n\\nregex = r'(Mr?s?\\.|Ms\\.|Miss|Dr\\.) ([A-Za-z]+)'\\n\\nmatch = re.search(regex, name)\\n\\nif match:\\n print(match.group(2))\"\n}",
"additional_kwargs": {},
"example": false
}
}
]
],
"llm_output": {
"token_usage": {
"prompt_tokens": 555,
"completion_tokens": 76,
"total_tokens": 631
},
"model_name": "gpt-3.5-turbo"
}
}
```
Here you can see that this fails because the resulting answer contains escape characters that break the JSON in the action input. This is a shortcoming of the JSON format here. Changing the prompt and output parser to work with YAML works as expected.
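A minimal sketch of the YAML variant I have in mind (hypothetical names, not an existing langchain parser):
```python
import yaml

def parse_agent_output(text: str) -> dict:
    # Expects the model to emit, e.g.:
    #   action: Python REPL
    #   action_input: |
    #     import re
    #     ...
    # YAML block scalars carry newlines and backslashes without JSON escaping.
    return yaml.safe_load(text)
```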
### Expected behavior
I would expect the output parser to handle outputs containing escape characters, but this is difficult with JSON. I propose we use YAML instead of JSON here. I am willing to develop a solution. | Conversational Chat with PythonREPL tool breaks | https://api.github.com/repos/langchain-ai/langchain/issues/5094/comments | 1 | 2023-05-22T14:20:57Z | 2023-09-10T16:13:54Z | https://github.com/langchain-ai/langchain/issues/5094 | 1,719,805,562 | 5,094
[
"hwchase17",
"langchain"
]
| ### System Info
**LangChain version:** 0.0.176
**Platform:** Local Ubuntu 22.04
**Python version:** 3.10
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is the code example I'm currently using. It's a slightly modified version of [LangChain summarize docs](https://python.langchain.com/en/latest/modules/chains/index_examples/summarize.html)
```py
from pdfminer.high_level import extract_text
from langchain.llms import OpenAI
from langchain.docstore.document import Document
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain
text_splitter = CharacterTextSplitter()
def file_summarizer(file):
read_file = extract_text(file.name)
texts = text_splitter.split_text(read_file)
docs = [Document(page_content=t.replace("\t", " ").strip()) for t in texts[:1]]
llm = OpenAI(temperature=0, max_tokens=2000)
for d in docs:
print(llm.get_num_tokens(d.page_content)) # This prints 1054
chain = load_summarize_chain(llm, chain_type="map_reduce")
result = chain.run(docs).strip()
print(llm.get_num_tokens(result)) # This prints 87
return result
```
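For what it's worth, a hedged sketch of how summary length could be steered instead (my understanding: `max_tokens` is only an upper bound on the completion, so the prompt has to ask for a longer summary):
```python
from langchain.prompts import PromptTemplate

# Assumption: load_summarize_chain's map_reduce variant accepts map_prompt /
# combine_prompt; verify against your installed version.
prompt = PromptTemplate(
    input_variables=["text"],
    template="Write a detailed, roughly 1000-word summary of:\n\n{text}\n\nSUMMARY:",
)
chain = load_summarize_chain(llm, chain_type="map_reduce",
                             map_prompt=prompt, combine_prompt=prompt)
```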
### Expected behavior
It looks like it's ignoring the value of the `max_tokens` parameter (or I'm misunderstanding what it does): no matter what value I pass to that parameter, I always receive very short answers, regardless of the token size of the prompt.
`text-davinci-003` has a 4097-token context window, which means that in my example above I should have had around ~3000 tokens available for my summary; however, I received just 87.
Am I missing or misunderstanding any bit of documentation?
Thanks in advance! | load_summarize_chain doesn't return the specified max_tokens in OpenAI LLM | https://api.github.com/repos/langchain-ai/langchain/issues/5093/comments | 1 | 2023-05-22T14:09:13Z | 2023-05-29T13:38:29Z | https://github.com/langchain-ai/langchain/issues/5093 | 1,719,782,534 | 5,093 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using PlanAndExecuteAgent to answer a complex statement that has multiple placeholders; the placeholders need to be filled by executing multiple SQL queries. I tried using PlanAndExecuteAgent end to end, but the results were not fruitful. I then used PlanAndExecuteAgent only for planning and used `create_sql_agent` to execute the plan it produced. This worked out decently well.
I debugged a little and found that the context information I supplied as part of the initial query is lost and not used in the execute steps. I think that context information needs to be provided to the steps as well so they can execute better.
One more observation: when sending calls to the LLM for planning, it doesn't have the SQL tool context, whereas while executing it doesn't have the input context. I think this mismatch is causing the issue.
This is my observation. Please suggest the right way to use PlanAndExecuteAgent directly.
### Suggestion:
_No response_ | Issue: Issue with PlanAndExecuteAgent | https://api.github.com/repos/langchain-ai/langchain/issues/5087/comments | 1 | 2023-05-22T11:11:07Z | 2023-09-10T16:14:00Z | https://github.com/langchain-ai/langchain/issues/5087 | 1,719,462,590 | 5,087 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hi maintainer,
In my case, a chain runs for a long time and multiple actions are executed during the process, mainly because there are multiple conversations with the LLM.
Currently, calling the agent returns the final value and **intermediate steps** with the following setting:
```python
agent = create_pandas_dataframe_agent(openai_llm, df, return_intermediate_steps=True, verbose=True)
agent("some user query")
```
This is very useful for exposing intermediate steps so that front-end users can follow the LLM's reasoning, but they have to wait for the whole run to complete, which is not friendly to them.
I understand that executing a chain is a continuous process of generating and executing `Action`s, which matches the idea of a `generator` in Python: we could `yield` completed actions while the chain runs.
Intuitively, it's like this:
https://github.com/hwchase17/langchain/blob/ef7d015be566588b3263ee6ee1259a30ee53212c/langchain/agents/agent.py#L946C63-L959
```python
yield next_step_output # line 954
```
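A hedged sketch of the generator shape I have in mind (hypothetical helper; the real `_take_next_step` call takes more arguments than shown):
```python
from langchain.schema import AgentFinish

def iter_steps(agent_executor, inputs):
    intermediate_steps = []
    while True:
        next_step_output = agent_executor._take_next_step(inputs, intermediate_steps)  # simplified
        if isinstance(next_step_output, AgentFinish):
            yield next_step_output
            return
        intermediate_steps.extend(next_step_output)
        yield next_step_output  # the caller sees each step as soon as it completes
```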
### Motivation
Enable front-end users (e.g. of web apps built on langchain) to follow the chain's thinking process while waiting for results, in a way that is more intuitive and simpler for developers than callbacks.
### Your contribution
Not now. | Yield intermediate steps during the chain execution process | https://api.github.com/repos/langchain-ai/langchain/issues/5086/comments | 5 | 2023-05-22T09:37:01Z | 2023-11-10T11:26:31Z | https://github.com/langchain-ai/langchain/issues/5086 | 1,719,303,615 | 5,086
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.165
Python 3.10.8
macos m2
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.callbacks import get_openai_callback
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

class ChatClient:  # class name assumed; the original snippet omitted the class line
    def __init__(self, prompt_template: PromptTemplate):
        self.llmchainList = []
        key_list = self.__getAPIKeys()  # local helper that returns the API key list
        for key in key_list:
            llm = ChatOpenAI(model_name="gpt-3.5-turbo", openai_api_key=key, max_tokens=1500)
            llm_chain = LLMChain(
                llm=llm,
                prompt=prompt_template)
            self.llmchainList.append(llm_chain)

    def __pickupLlm(self):
        # round-robin: take a chain from the front, put it back at the end
        # (note: there is no lock, so concurrent threads interleave these steps)
        chain = self.llmchainList.pop(0)
        self.llmchainList.append(chain)
        return chain

    def send(self, **kwargs):
        chain = self.__pickupLlm()
        print("send message to chatgpt...")
        res = {}
        with get_openai_callback() as cb:
            res = chain.run(**kwargs)
        print(cb)
        print("chatgpt said...")
        print(res)
```
The code is very straightforward: I have a list of API tokens, and at init time I create a list of `llmchain`s.
When making a request to ChatGPT, I pop one `llmchain` from the front of the list and append it to the back (simple load balancing for rate limits).
But when I start requests from multiple threads, I get many `rate limit` exceptions, even though I sleep long enough between requests.
I also found that all the `rate limit` exceptions point to one API token (`Rate limit reached for default-gpt-3.5-turbo in organization org-WN9k0BUSqN4pNvlU5N6T74Yq on requests per min. Limit: 3 / min. Please try again in 20s. Contact us through our help center at help.openai.com if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method.`), which is the token at the end of my list. That's weird: my simple load balancer should never pick the same API token more than once within one batch of requests.
Even when I print the `key` of each `llmchain`, it is correct. For example, in one batch I send 3 requests, each through an `llmchain` with a different API key, but I still get `rate limit` errors for one specific organization, which is exactly the last entry in my API token list.
So I have to suspect there is something wrong with `LLMChain`. I'd appreciate a reply, thanks.
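For reference, a hedged sketch of a lock-guarded round-robin (my assumption: unsynchronized pop/append across threads can skew which key gets picked):
```python
import itertools
import threading

class ChainPool:
    def __init__(self, chains):
        self._cycle = itertools.cycle(chains)
        self._lock = threading.Lock()

    def pick(self):
        # serialize selection so each thread gets the next chain in order
        with self._lock:
            return next(self._cycle)
```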
### Expected behavior
By design, no rate-limit exceptions should occur. | llmchain not work as expect in multiple thread scenario | https://api.github.com/repos/langchain-ai/langchain/issues/5084/comments | 5 | 2023-05-22T09:00:40Z | 2023-09-19T16:09:35Z | https://github.com/langchain-ai/langchain/issues/5084 | 1,719,236,879 | 5,084
[
"hwchase17",
"langchain"
]
| ### System Info
v0.0.176
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The default QA-with-sources chain prompt (langchain/chains/qa_with_sources/stuff_prompt.py)
causes issues with the Azure Content Filter, which returns the following:
```
"error": {
"message": "The response was filtered",
"type": null,
"param": "prompt",
"code": "content_filter",
"status": 400
}
```
I believe this is due to the sections that mention topics like COVID, which are found in the example source content on lines 22-28 of that file.
I modified the example sources to the following to resolve the issue:
```
Content: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.
Source: 0-pl
Content: And we won’t stop.
Source: 24-pl
Content: And a proud Ukrainian people, I want you to know that we are going to be okay.
Source: 5-pl
Content: More support for patients and families. Our future is forged. \n\nWell I know this nation.
Source: 34-pl
```
This is probably not the best version of the source content, but it worked for me.
### Expected behavior
The QA prompt should not trigger Azure Content Filtering. | Azure Content Filtering on Default QA with sources prompt. | https://api.github.com/repos/langchain-ai/langchain/issues/5082/comments | 2 | 2023-05-22T07:16:39Z | 2023-09-15T16:11:55Z | https://github.com/langchain-ai/langchain/issues/5082 | 1,719,055,182 | 5,082 |
[
"hwchase17",
"langchain"
]
| I need to limit the number of documents that AzureCognitiveSearchRetriever returns so that I can aggregate only the most relevant documents. Is there a way to do this with the current functionality or do we need to implement it? | Issue: Can we limit the number of relevant documents returned by AzureCognitiveSearchRetriever? | https://api.github.com/repos/langchain-ai/langchain/issues/5081/comments | 16 | 2023-05-22T06:10:42Z | 2023-07-27T05:36:05Z | https://github.com/langchain-ai/langchain/issues/5081 | 1,718,965,240 | 5,081 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add capability to generate and run python code using langchain. I created a [github repo](https://github.com/thismlguy/code-genie) called code-genie to support this. Here's a [starter notebook](https://code-genie.readthedocs.io/en/main/notebooks/Google%20Analytics%20Pipeline%20Example.html) for it. I want to build that functionality into langchain itself for wider adoption.
### Motivation
A lot of data scientists and business analysts are using the GPT-3.5/4 API to generate code for ad-hoc analysis, but they end up copy-pasting code from the ChatGPT interface into their notebooks and spending time making it work with their own variables.
### Your contribution
I can create a PR introducing a chain which does this, once the maintainers confirm they are open to merging this feature. | Chain for generating and running python code | https://api.github.com/repos/langchain-ai/langchain/issues/5080/comments | 1 | 2023-05-22T05:40:25Z | 2023-09-10T16:14:10Z | https://github.com/langchain-ai/langchain/issues/5080 | 1,718,933,305 | 5,080
[
"hwchase17",
"langchain"
]
| When I use CosmosDBChatMessageHistory, the conversation history is stored in the DB, but load_messages returns None every time, and each call to add_user_message performs a replace operation on the stored record instead of appending to it.




| CosmosDBChatMessageHistory.load_messages return None | https://api.github.com/repos/langchain-ai/langchain/issues/5077/comments | 4 | 2023-05-22T02:18:01Z | 2023-11-20T16:06:46Z | https://github.com/langchain-ai/langchain/issues/5077 | 1,718,767,157 | 5,077 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I was reading the official documentation for the Chroma vectorstore and found that queries can be filtered via `where` or `where_document`.
I tried to apply a filter through the langchain Chroma wrapper by following the examples, but I don't see how to run a query using filters.
Is there any way to apply filters? If not, we could implement this as a new feature!
https://docs.trychroma.com/usage-guide#querying-a-collection
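A hedged sketch of what I'd expect to work (assumption based on reading the wrapper source: `similarity_search` seems to accept a `filter` kwarg that maps to Chroma's `where` clause; verify against your installed version):
```python
docs = vectordb.similarity_search(
    "query text",
    k=4,
    filter={"source": "my_file.txt"},  # forwarded as Chroma's `where` filter
)
```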
### Suggestion:
_No response_ | Issue: Create way to do filters using VectorStore | https://api.github.com/repos/langchain-ai/langchain/issues/5076/comments | 2 | 2023-05-22T01:42:22Z | 2023-09-10T16:14:16Z | https://github.com/langchain-ai/langchain/issues/5076 | 1,718,736,987 | 5,076 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The agent always translates my question to English before using a tool when I use ChatOpenAI with the default gpt-3.5-turbo model, but it does not translate to English with the gpt-4 model.
```
"""Create a ChatVectorDBChain for question/answering."""
from langchain.callbacks.manager import AsyncCallbackManager
from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain.vectorstores.base import VectorStore
from langchain.utilities import GoogleSerperAPIWrapper
import os
from langchain.agents import Tool
from langchain.agents import initialize_agent,AgentType,AgentExecutor
from langchain.chains import RetrievalQA
from langchain.chains import ConversationalRetrievalChain
from langchain.chains.llm import LLMChain
from langchain.chains.chat_vector_db.prompts import CONDENSE_QUESTION_PROMPT
from langchain.chains.question_answering import load_qa_chain
from langchain.agents.agent_toolkits import ZapierToolkit
from langchain.utilities.zapier import ZapierNLAWrapper
def get_agent(
chain_type: str, vcs_swft: VectorStore,vcs_path: VectorStore, agent_cb_handler) -> AgentExecutor:
agent_cb_manager = AsyncCallbackManager([agent_cb_handler])
llm = ChatOpenAI(
# model_name="gpt-4",
temperature=0,
verbose=True,
# request_timeout=120,
)
llm_qa = ChatOpenAI(
temperature=0,
verbose=True,
# request_timeout=120,
)
search = GoogleSerperAPIWrapper()
doc_search_swft = RetrievalQA.from_chain_type(llm=llm_qa, chain_type=chain_type, retriever=vcs_swft.as_retriever())
doc_search_path = RetrievalQA.from_chain_type(llm=llm_qa, chain_type=chain_type, retriever=vcs_path.as_retriever())
# doc_search = get_qa_chain(chain_type=chain_type,vectorstore=vectorstore)
# zapier = ZapierNLAWrapper()
# toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)
tools = [
Tool(
name = "QA SWFT System",
func=doc_search_swft.run,
description="useful for when you need to answer questions about swft. Input should be a fully formed question.",
coroutine=doc_search_swft.arun
),
Tool(
name = "QA Metapath System",
func=doc_search_path.run,
description="useful for when you need to answer questions about metapath. Input should be a fully formed question.",
coroutine=doc_search_path.arun
),
Tool(
name = "Current Search",
func=search.run,
description="""
useful for when you need to answer questions about current events or the current state of the world or you need to ask with search.
the input to this should be a single search term.
""",
coroutine=search.arun
),
]
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent = initialize_agent(tools=tools, llm=llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory,callback_manager=agent_cb_manager)
return agent
```
console print:
```
INFO: Will watch for changes in these directories: ['/home/ec2-user/chatbot-agent']
INFO: Uvicorn running on http://:9002 (Press CTRL+C to quit)
INFO: Started reloader process [1254384] using WatchFiles
INFO: Started server process [1254386]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: 103.116.245.58:0 - "GET / HTTP/1.0" 200 OK
INFO: ('172.71.218.188', 0) - "WebSocket /chat" [accepted]
INFO: connection open
ON_CHAIN_START: Inputs: {'input': 'btc现在的价格是多少美元?', 'chat_history': []}
> Entering new AgentExecutor chain...
ON_CHAIN_START: Inputs: {'input': 'btc现在的价格是多少美元?', 'chat_history': [], 'agent_scratchpad': [], 'stop': ['\nObservation:', '\n\tObservation:']}
ON_CHAIN_END: Outputs: {'text': '{\n "action": "Current Search",\n "action_input": "btc price usd"\n}'}
ON_AGENT_ACTION: tool: Current Search
{
"action": "Current Search",
"action_input": "btc price usd"
}
Observation: 26,667.50 United States Dollar
Thought:ON_CHAIN_START: Inputs: {'input': 'btc现在的价格是多少美元?', 'chat_history': [], 'agent_scratchpad': [AIMessage(content='{\n "action": "Current Search",\n "action_input": "btc price usd"\n}', additional_kwargs={}, example=False), HumanMessage(content="TOOL RESPONSE: \n---------------------\n26,667.50 United States Dollar\n\nUSER'S INPUT\n--------------------\n\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else.", additional_kwargs={}, example=False)], 'stop': ['\nObservation:', '\n\tObservation:']}
ON_CHAIN_END: Outputs: {'text': '{\n "action": "Final Answer",\n "action_input": "The current price of BTC in USD is 26,667.50."\n}'}
ON_AGENT_FINISH: {'output': 'The current price of BTC in USD is 26,667.50.'}
{
"action": "Final Answer",
"action_input": "The current price of BTC in USD is 26,667.50."
}
ON_CHAIN_END: Outputs: {'output': 'The current price of BTC in USD is 26,667.50.'}
> Finished chain.
Result: The current price of BTC in USD is 26,667.50.
INFO: connection closed
```
### Suggestion:
_No response_ | Issue: Agent always translate my question to English then use a tool, when I use ChatOpenAI with default model gpt-3.5-turbo. | https://api.github.com/repos/langchain-ai/langchain/issues/5075/comments | 3 | 2023-05-22T01:32:56Z | 2023-09-19T16:09:41Z | https://github.com/langchain-ai/langchain/issues/5075 | 1,718,728,962 | 5,075 |
[
"hwchase17",
"langchain"
]
| ### System Info
Django backend that is using an `AsyncWebsocketConsumer`.
Goal: Stream LLM messages to a frontend React client via an `AsyncWebsocketConsumer`, using a custom `AsyncCallbackHandler`.
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm streaming the LLM's output from a Django backend that uses an `AsyncWebsocketConsumer`.
I'm trying to stream the `ChatOpenAI` LLM's output via a custom subclass of `AsyncCallbackHandler` (`streaming` is set to `True`).
However, the async `on_llm_new_token` function is never called.
I'm unsure what the issue is, since extending `BaseCallbackHandler` works fine for writing to stdout.
Here's the code that's not being called in the async on_llm_new_token: https://github.com/virattt/chat_app/pull/5/files#diff-176a7b37329b8a2846ff511f4dc15edd7d17d1478e7b4742f97700d97319b578R101-R103
Any thoughts or suggestions would be helpful.
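For comparison, here is a minimal async setup in which the handler does fire (a sketch; my working assumption is that async callbacks are only invoked on the async code path, so a synchronous `llm(...)` or `chain(...)` call never reaches `on_llm_new_token`):

```
import asyncio

from langchain.callbacks.base import AsyncCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage


class TokenHandler(AsyncCallbackHandler):
    async def on_llm_new_token(self, token: str, **kwargs) -> None:
        # In the real consumer this would be `await self.send(...)`.
        print(token, end="", flush=True)


async def main() -> None:
    llm = ChatOpenAI(streaming=True, callbacks=[TokenHandler()])
    # agenerate runs the async path and so triggers the async callbacks;
    # a plain llm(...) call runs the sync path and skips them.
    await llm.agenerate([[HumanMessage(content="Say hello")]])


asyncio.run(main())
```

If that assumption holds, awaiting `chain.acall(...)` in the consumer instead of invoking the chain synchronously should make the handler receive tokens.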
### Expected behavior
The async `on_llm_new_token` function should get called when a new token is received from the LLM. | [AsyncCallbackHandler] on_llm_new_token not being called | https://api.github.com/repos/langchain-ai/langchain/issues/5074/comments | 5 | 2023-05-22T01:29:51Z | 2023-08-14T04:55:22Z | https://github.com/langchain-ai/langchain/issues/5074 | 1,718,726,643 | 5,074 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Qdrant allows you to set conditions to be applied when searching or retrieving points. The filter is currently passed as a **_MetadataFilter_** dict. Could we pass a `rest.Filter` directly, so that all of the filters Qdrant provides become usable?
```
def _qdrant_filter_from_dict(self, filter: Optional[MetadataFilter]) -> Any:
    if not filter:
        return None

    from qdrant_client.http import models as rest

    return rest.Filter(
        must=[
            condition
            for key, value in filter.items()
            for condition in self._build_condition(key, value)
        ]
    )
```
### Motivation
I'm struggling to restrict a conversation to only a few of the documents ingested in Qdrant. As far as I understand, the current implementation only combines metadata filters with an 'and' operation. Is an 'or' operation possible?
What I have:
```
retriever=qdrant_store.as_retriever(
    search_kwargs={
        "filter": {"source_file": "file_1.md"}
    }
),
```
What I want:
```
retriever=qdrant_store.as_retriever(
    search_kwargs={
        "filter": rest.Filter(
            must=[
                rest.FieldCondition(
                    key="source_file",
                    match=rest.MatchAny(any=["file_1.md", "file_2.md"]),
                )
            ]
        )
    }
),
```
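One way the change could look (a sketch of the proposal, not current langchain behaviour): let `_qdrant_filter_from_dict` forward a prebuilt `rest.Filter` unchanged.

```
from typing import Any, Optional, Union

from qdrant_client.http import models as rest


def _qdrant_filter_from_dict(self, filter: Optional[Union[dict, rest.Filter]]) -> Any:
    if filter is None:
        return None
    # Proposed: a prebuilt qdrant Filter passes straight through, so MatchAny,
    # should-clauses, etc. all become usable from search_kwargs.
    if isinstance(filter, rest.Filter):
        return filter
    return rest.Filter(
        must=[
            condition
            for key, value in filter.items()
            for condition in self._build_condition(key, value)
        ]
    )
```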
### Your contribution
N/A | [Question] How to use Qdrant MatchAny filter? | https://api.github.com/repos/langchain-ai/langchain/issues/5073/comments | 2 | 2023-05-22T01:16:59Z | 2023-08-17T05:02:54Z | https://github.com/langchain-ai/langchain/issues/5073 | 1,718,718,602 | 5,073 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Weaviate supports requesting `_additional` properties when executing a query:
https://weaviate.io/developers/weaviate/api/graphql/additional-properties
It would be good to expose this feature and include the returned values in the results. It is a small, non-breaking change: the existing `kwargs` argument can carry it, similar to how `where_filter` is handled in the Python class in `weaviate.py`.
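For reference, this is roughly what the underlying `weaviate-client` call looks like with `_additional` (a sketch; the `Document` class and `text` property are placeholders):

```
import weaviate

client = weaviate.Client("http://localhost:8080")  # assumed local instance

result = (
    client.query.get("Document", ["text"])      # placeholder class and property
    .with_near_text({"concepts": ["langchain"]})
    .with_additional(["certainty", "id"])       # the fields this request would expose
    .with_limit(4)
    .do()
)
print(result["data"]["Get"]["Document"][0]["_additional"]["certainty"])
```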
### Motivation
Certainty is a useful signal for comparing and understanding query results.
### Your contribution
I would like to contribute the PR. As it would be my first contribution, I still need to work through the integration tests and the project build, but I have already tested the change in a local code sample. | Add option to use _additional fields while executing a Weaviate query | https://api.github.com/repos/langchain-ai/langchain/issues/5072/comments | 0 | 2023-05-21T22:37:40Z | 2023-05-23T01:57:11Z | https://github.com/langchain-ai/langchain/issues/5072 | 1,718,663,745 | 5,072 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
It looks like inference is much slower through the LangChain `LlamaCpp` wrapper than through the original `llama-cpp-python` wrapper.
Here are the results for the same prompt on the RTX 4090 GPU.
When using the `llama-cpp-python` `Llama` class directly:

When using the LangChain `LlamaCpp` wrapper:

As you can see, the prompt_eval stage takes roughly 13x longer (35 ms per token with the wrapper vs. 2.67 ms per token directly).
Am I missing something? In both cases the model is fully loaded onto the GPU. In the case of the LangChain wrapper, no chain was used; the model was queried directly through the wrapper's interface with the same parameters.
Link to the example notebook (the values differ slightly, but the problem is the same): https://github.com/mmagnesium/personal-assistant/blob/main/notebooks/langchain_vs_llamacpp.ipynb
Appreciate any help.
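One thing worth ruling out first (an assumption on my side, not a confirmed diagnosis): that both paths receive identical offload and batch settings, since the defaults differ. A sketch, assuming a langchain version that already exposes `n_gpu_layers`:

```
from langchain.llms import LlamaCpp
from llama_cpp import Llama

params = dict(
    model_path="models/7B/ggml-model.bin",  # placeholder path
    n_gpu_layers=40,                        # without this the wrapper may evaluate on CPU
    n_batch=512,
    n_ctx=2048,
)

raw = Llama(**params)         # direct llama-cpp-python
wrapped = LlamaCpp(**params)  # langchain wrapper with identical parameters
```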
### Suggestion:
Unfortunately, no suggestion, since I don't understand what the problem is. | Issue: LlamaCpp wrapper slows down the model | https://api.github.com/repos/langchain-ai/langchain/issues/5071/comments | 5 | 2023-05-21T21:49:24Z | 2023-05-27T15:40:58Z | https://github.com/langchain-ai/langchain/issues/5071 | 1,718,651,178 | 5,071 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Hello,
I was trying to reproduce an example from this documentation page:
https://python.langchain.com/en/latest/modules/agents/toolkits/examples/sql_database.html?highlight=SQLDatabaseToolkit#
and got an error:
```
toolkit = SQLDatabaseToolkit(db=db)
```
which raises:
```
ValidationError: 1 validation error for SQLDatabaseToolkit
llm
  field required (type=value_error.missing)
```
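The error message itself points at the fix: `SQLDatabaseToolkit` now has a required `llm` field that the documentation example does not pass. A sketch based on the error and the surrounding docs (the database URI is a placeholder):

```
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///Chinook.db")  # placeholder URI
llm = OpenAI(temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)  # llm is now a required field
```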
### Idea or request for content:
_No response_ | DOC: SQL Database Agent | https://api.github.com/repos/langchain-ai/langchain/issues/5068/comments | 8 | 2023-05-21T19:43:41Z | 2023-11-06T16:08:09Z | https://github.com/langchain-ai/langchain/issues/5068 | 1,718,618,387 | 5,068 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain v0.0.171
ChromaDB v0.3.22
Python v3.10.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is my code:
```
def askQuestion(self, collection_id, question):
    collection_name = "collection-" + str(collection_id)
    self.llm = ChatOpenAI(
        model_name=self.model_name,
        temperature=self.temperature,
        openai_api_key=os.environ.get("OPENAI_API_KEY"),
        streaming=True,
        callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
    )
    self.memory = ConversationBufferMemory(
        memory_key="chat_history", return_messages=True, output_key="answer"
    )
    chroma_Vectorstore = Chroma(
        collection_name=collection_name,
        embedding_function=self.embeddingsOpenAi,
        client=self.chroma_client,
    )
    self.chain = ConversationalRetrievalChain.from_llm(
        self.llm,
        chroma_Vectorstore.as_retriever(),
        return_source_documents=True,
        verbose=VERBOSE,
        memory=self.memory,
    )

    result = self.chain({"question": question})
    print(result)

    res_dict = {"answer": result["answer"], "source_documents": []}
    for source in result["source_documents"]:
        res_dict["source_documents"].append(
            {"page_content": source.page_content, "metadata": source.metadata}
        )
    return res_dict
```
### Expected behavior
When I print the result directly after `result = self.chain({"question": question})`, I can see the sources, metadata, kwargs, question, and chat history.
I see here: https://github.com/hwchase17/langchain/blob/0c3de0a0b32fadb8caf3e6d803287229409f9da9/langchain/vectorstores/chroma.py#L165 and at line 182 of the official source code that `similarity_search_with_score()` is called by default.
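A workaround until the chain exposes scores (an untested sketch): run the same query against the vectorstore separately and read the scores from there.

```
# Untested sketch: the chain discards scores, so fetch them separately
# with the same question and match them to the returned sources.
docs_and_scores = chroma_Vectorstore.similarity_search_with_score(question, k=4)
for doc, score in docs_and_scores:
    print(score, doc.metadata)
```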
How can I also display the score, then? | ConversationalRetrievalChain doesn't return score with sources | https://api.github.com/repos/langchain-ai/langchain/issues/5067/comments | 21 | 2023-05-21T18:40:34Z | 2024-01-11T08:20:25Z | https://github.com/langchain-ai/langchain/issues/5067 | 1,718,600,362 | 5,067 |