issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| Module: `embeddings/openai.py`
Preface: While using AzureOpenAI or a custom model deployment on Azure, I am unable to use OpenAIEmbeddings because `pydantic` forbids passing extra arguments, whereas this works perfectly fine with the ChatOpenAI or OpenAI models.
Environment Variables I am using -
```
OPENAI_LOG=info
OPENAI_API_KEY=somestring
OPENAI_API_TYPE=azure
OPENAI_API_VERSION=2023-03-15-preview
OPENAI_API_BASE=some-url
```
My code to reproduce the error -
```python
DEPLOYMENT_FROM_MODEL = {
    'text-embedding-ada-002': 'custom_embeddings'
}

model = 'text-embedding-ada-002'
llm = OpenAIEmbeddings(
    deployment=DEPLOYMENT_FROM_MODEL.get(model),
    model=model,
    headers={
        "azure-account-uri": f"https://{AZURE_ACCOUNT}.openai.azure.com",
        "Authorization": f"Bearer {BEARER_TOKEN}",
    }
)
llm.embed_query(input_prompt)
```
Errors thrown:
```
pydantic.error_wrappers.ValidationError: 2 validation errors for OpenAIEmbeddings
deployment
none is not an allowed value (type=type_error.none.not_allowed)
headers
extra fields not permitted (type=value_error.extra)
```
I tried changing `embeddings/openai.py` and got headers working by passing them through as kwargs. Hopefully I can create an MR for it.
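Roughly the shape of the workaround, as an untested, illustrative sketch (the class name is made up, and the headers still have to be forwarded to the underlying `openai.Embedding.create` call inside `embeddings/openai.py`, which is the part I actually changed):

```python
from typing import Any, Dict, Optional

from langchain.embeddings import OpenAIEmbeddings


class PatchedOpenAIEmbeddings(OpenAIEmbeddings):
    """Hypothetical subclass: declare the extra field so pydantic stops rejecting it."""

    headers: Optional[Dict[str, Any]] = None

    class Config:
        extra = "allow"  # tolerate fields the stock model doesn't declare
```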
| OpenAIEmbeddings can not take headers | https://api.github.com/repos/langchain-ai/langchain/issues/3992/comments | 1 | 2023-05-02T19:04:56Z | 2023-10-05T16:10:08Z | https://github.com/langchain-ai/langchain/issues/3992 | 1,692,950,910 | 3,992 |
[
"hwchase17",
"langchain"
]
| It seems maintaining separate namespaces in your vector DB is helpful and/or necessary in making sure an LLM can answer compare/contrast questions that need to reference texts separated by dates like "03/2023" vs. "03/2022" without getting confused.
To that end, there's a need to retrieve from multiple vectorstores, yet I can't find a straightforward solution.
I have tried a few things:
1. Extending the `ConversationalRetrievalChain` to accept a list of retrievers:
```python
class MultiRetrieverConversationalRetrievalChain(ConversationalRetrievalChain):
    """Chain for chatting with multiple indexes."""

    retrievers: List[BaseRetriever]
    """Indexes to connect to."""

    def _get_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:
        all_docs = []
        for retriever in self.retrievers:
            docs = retriever.get_relevant_documents(question)
            all_docs.extend(docs)
        return self._reduce_tokens_below_limit(all_docs)

    async def _aget_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:
        all_docs = []
        for retriever in self.retrievers:
            docs = await retriever.aget_relevant_documents(question)
            all_docs.extend(docs)
        return self._reduce_tokens_below_limit(all_docs)
```
This became a bit unwieldy as it ran into validation errors with Pydantic, but I don't see why a more competent dev wouldn't be able to manage this.
2. I tried combining retrievers (suggestion from kapa.ai):
```
embeddings = OpenAIEmbeddings()
march_documents = Pinecone.from_existing_index(index_name="langchain2", embedding=embeddings, namespace="March 2023")
feb_documents = Pinecone.from_existing_index(index_name="langchain2", embedding=embeddings, namespace="February 2023")
combined_docs = feb_documents + march_documents
# Create a RetrievalQAWithSourcesChain using the combined retriever
chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=combined_docs)
# does not work as as_retriever() either
```
3. I tried using an Agent with `VectorStoreRouterToolkit`, which seems to be built for exactly this kind of task, yet it gives poor answers for reasons I still need to dig into; for example, it doesn't listen when I instruct it with things like "Do not summarize, list everything about XYZ...".
Further, I need/prefer the results from `similarity_search`, returning `top_k` documents for my use case, which the agent doesn't seem to provide.
Is there a workaround to my problem? How do I maintain separation of namespaces, so that I can have the LLM answer questions about separate documents, and also be able to provide the source for the separate documents all from within a single chain?
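The shape I keep coming back to is a small retriever that fans the query out over the per-namespace retrievers and concatenates the hits, so any retrieval chain can consume it unchanged. A rough, untested sketch (assuming `BaseRetriever` can still be subclassed as a plain class):

```python
from typing import List

from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import BaseRetriever, Document
from langchain.vectorstores import Pinecone


class MergedRetriever(BaseRetriever):
    """Query several namespaced retrievers and concatenate their results."""

    def __init__(self, retrievers: List[BaseRetriever]):
        self.retrievers = retrievers

    def get_relevant_documents(self, query: str) -> List[Document]:
        docs: List[Document] = []
        for retriever in self.retrievers:
            docs.extend(retriever.get_relevant_documents(query))
        return docs

    async def aget_relevant_documents(self, query: str) -> List[Document]:
        docs: List[Document] = []
        for retriever in self.retrievers:
            docs.extend(await retriever.aget_relevant_documents(query))
        return docs


embeddings = OpenAIEmbeddings()
feb = Pinecone.from_existing_index("langchain2", embeddings, namespace="February 2023").as_retriever()
march = Pinecone.from_existing_index("langchain2", embeddings, namespace="March 2023").as_retriever()
retriever = MergedRetriever([feb, march])  # usable anywhere a single retriever is expected
```

Because each document keeps its own metadata, `RetrievalQAWithSourcesChain` should still be able to report which namespace a source came from.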
| Seeking solution for combined retrievers, or retrieving from multiple vectorstores with sources, to maintain separate Namespaces. | https://api.github.com/repos/langchain-ai/langchain/issues/3991/comments | 7 | 2023-05-02T18:14:32Z | 2023-06-28T03:31:58Z | https://github.com/langchain-ai/langchain/issues/3991 | 1,692,881,402 | 3,991 |
[
"hwchase17",
"langchain"
]
| Hi. I am currently running into a problem: when I call the LLM to search over the local docs, I get this warning, which never seems to stop:
```
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
Setting `pad_token_id` to `eos_token_id`:0 for open-end generation.
...
```
Here is my simple code:
```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import HuggingFacePipeline
from langchain.chains.question_answering import load_qa_chain

loader = TextLoader('state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
docs = text_splitter.split_documents(documents)

llm = HuggingFacePipeline.from_model_id(
    model_id='stabilityai/stablelm-base-alpha-7b',
    task='text-generation',
    device=0,
    model_kwargs={"temperature": 0, "max_length": 1024},
)
query = "What did the President say about immigration?"
chain = load_qa_chain(llm, chain_type="map_reduce")
chain.run(input_documents=docs, question=query)
```
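For what it's worth, the warning itself comes from `transformers` falling back to `eos_token_id` when no `pad_token_id` is set. Building the pipeline by hand and passing it explicitly might quiet it (untested sketch):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline

model_id = 'stabilityai/stablelm-base-alpha-7b'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device=0,
    max_length=1024,
    pad_token_id=tokenizer.eos_token_id,  # set it once so generate() stops warning
)
llm = HuggingFacePipeline(pipeline=pipe)
```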
Currently on 1 A100 with 80GB memory. | Local hugging face model to search over docs | https://api.github.com/repos/langchain-ai/langchain/issues/3989/comments | 1 | 2023-05-02T18:09:07Z | 2023-05-16T05:31:24Z | https://github.com/langchain-ai/langchain/issues/3989 | 1,692,873,345 | 3,989 |
[
"hwchase17",
"langchain"
]
| Hi all!
I have an [application](https://github.com/ur-whitelab/BO-LIFT) based on langchain.
A few months ago, I used it with fine-tuned (FT) models.
We added a token usage counter later, and I haven't tried fine-tuned models again since then.
Recently we have been interested in using (FT) models again, but the callback to expose the token usage isn't accepting the model.
Minimal code to reproduce the error:
```python
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback

llm = OpenAI(
    model_name=FT_MODEL,
    temperature=0.7,
    n=5,
    max_tokens=64,
)

with get_openai_callback() as cb:
    completion_response = llm.generate(["QUERY"])
    token_usage = cb.total_tokens
```
It works fine if the model name is a basic OpenAI model, for instance `model_name="text-davinci-003"`.
But when I try to use one of my FT models, I get this error:
```
Error in on_llm_end callback: Unknown model: FT_MODEL. Please provide a valid OpenAI model name.Known models are: gpt-4, gpt-4-0314, gpt-4-completion, gpt-4-0314-completion, gpt-4-32k, gpt-4-32k-0314, gpt-4-32k-completion, gpt-4-32k-0314-completion, gpt-3.5-turbo, gpt-3.5-turbo-0301, text-ada-001, ada, text-babbage-001, babbage, text-curie-001, curie, text-davinci-003, text-davinci-002, code-davinci-002
```
It works if I remove the callback and skip token counting, but it'd be nice to get suggestions on how to make it work.
Is there a workaround for that?
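One stopgap I'm considering (untested) is reading the usage straight off the `LLMResult` instead of relying on the callback's model-name lookup:

```python
result = llm.generate(["QUERY"])
usage = (result.llm_output or {}).get("token_usage", {})
token_usage = usage.get("total_tokens")  # reported by the OpenAI API itself, so the FT model name doesn't matter
```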
Any help is welcome!
Thanks! | LangChain openAI callback doesn't allow finetuned models | https://api.github.com/repos/langchain-ai/langchain/issues/3988/comments | 0 | 2023-05-02T18:00:22Z | 2023-05-02T23:19:58Z | https://github.com/langchain-ai/langchain/issues/3988 | 1,692,856,409 | 3,988 |
[
"hwchase17",
"langchain"
]
| I am interested in writing a tutorial for using langchain with [Shimmy](https://shimmy.farama.org/), an API conversion tool allowing many popular reinforcement learning environments to be used natively with PettingZoo and Gymnasium.
Since there are already PettingZoo and Gymnasium tutorials/wrappers (https://python.langchain.com/en/latest/use_cases/agent_simulations/petting_zoo.html), it would not require significant new code, but I think it would be a really helpful feature to show people that they can for example load DM Control/DM Lab environments in the exact same way. OpenSpiel provides text-rendered board games, which would be a great feature to show off with a chatbot.
I'm making this issue just to confirm it is something the developers would be interested in, but I am planning to work on the tutorial later today, and if it's not possible to add it directly to this repo I will add it to Shimmy (I am the project manager of Shimmy and PettingZoo).
I would be interested in fleshing out integration on PettingZoo's side as well, if there are any extra features which we could add in order to better support langchain we would love to do so. If there is interest in adding training library support I am also interested in working on tutorials to have langchain load a trained model using standard libraries like [stable-baselines3](https://github.com/DLR-RM/stable-baselines3), [RLlib](https://docs.ray.io/en/latest/rllib/index.html), or a lighter library like [CleanRL](https://github.com/vwxyzjn/cleanrl).
| Simulated Environment: Shimmy (Farama Foundation API conversion tool) | https://api.github.com/repos/langchain-ai/langchain/issues/3986/comments | 1 | 2023-05-02T17:32:55Z | 2023-05-20T02:32:14Z | https://github.com/langchain-ai/langchain/issues/3986 | 1,692,816,840 | 3,986 |
[
"hwchase17",
"langchain"
]
| With a schema such as:
```python
class JsonList(BaseModel):
    __root__: List[str]
```
We can validate a JSON string in the form:
```json
["xxx", "yyy", "zzz"]
```
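For reference, this is roughly how I'm exercising the parser (minimal repro sketch):

```python
from typing import List

from pydantic import BaseModel
from langchain.output_parsers import PydanticOutputParser


class JsonList(BaseModel):
    __root__: List[str]


parser = PydanticOutputParser(pydantic_object=JsonList)
parser.parse('["xxx", "yyy", "zzz"]')  # fails: the parser's regex only looks for a {...} object
```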
But the Pydantic parser [assumes the input is an object](https://github.com/hwchase17/langchain/blob/71a337dac6aa8c5a7f472e3e7fd0a61ca2a4eefb/langchain/output_parsers/pydantic.py#L20) in its greedy search, so it fails with a validation error. | Pydantic output parser assumes JSON object | https://api.github.com/repos/langchain-ai/langchain/issues/3985/comments | 2 | 2023-05-02T17:06:44Z | 2023-10-24T16:08:48Z | https://github.com/langchain-ai/langchain/issues/3985 | 1,692,784,932 | 3,985 |
[
"hwchase17",
"langchain"
]
| Hi,
I am using the create_collection() function below to create a collection, and it works fine: it creates a collection, stores it in my persist directory, and I am able to perform question answering against it.
```python
def create_collection(openai_api_key, embedding_path, persist_directory, collection_name):
    if not openai_api_key == 'None':
        if collection_name != None:
            if persist_directory != None:
                global vectordb
                if embedding_path != None:
                    for i in embedding_path:
                        i = str(i)
                        logging.info('Processing files in directory: ' + i)
                        loader = DirectoryLoader(i, show_progress=True)
                        docs = loader.load()  # DE5-T31
                        # text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)  # DE5-T31
                        # texts = text_splitter.split_documents(docs)  # DE5-T31
                        text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=200)
                        texts = text_splitter.split_documents(docs)
                        embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)  # DE5-T31
                        vectordb = Chroma.from_documents(texts, embedding=embeddings, persist_directory=persist_directory, collection_name=collection_name)
                        vectordb.persist()
                        vectordb = None  # DE5-T31
                elif web_urls_path != None:
                    web_urls = web_urls_path.split(',')
                    url_list = web_urls
                    logging.info(f"URL List: {url_list}")
                    urls = []
                    for url in url_list:
                        logging.info(f"Processing data in url: {url}")
                        reqs = requests.get(url)
                        soup = BeautifulSoup(reqs.text, 'html.parser')
                        for link in soup.find_all('a'):
                            urllink = link.get('href')
                            if (urllink != None) and (urllink.startswith('http')):
                                urls.append(urllink)
                            # print(link.get('href'))
                    loader = UnstructuredURLLoader(urls=urls)
                    docs = loader.load()
                    logging.info(f'You have {len(docs)} document(s) in your data')
                    text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=200)
                    texts = text_splitter.split_documents(docs)
                    embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)
                    vectordb = Chroma.from_documents(texts, embedding=embeddings, persist_directory=persist_directory, collection_name=collection_name)
                    vectordb.persist()
                    vectordb = None
                else:
                    logging.error("Please provide the directory or web path to create the collection!")
                    sys.exit(0)
            else:
                logging.error("Please provide the path to store the vector database!")
                sys.exit(0)
        else:
            logging.error("Please provide a collection name!")
            sys.exit(0)
    else:
        logging.error("Please configure the openai_api_key in the json file at the following path:\n" + "file:///" + pathconf)
        sys.exit(0)
    return
```
However, when I try to update the existing collection using the update_collection() function below, it does not work for me; I am not able to update the existing collection.
```python
def update_collection(openai_api_key, persist_directory, collection_name):
    if not openai_api_key == 'None':
        if persist_directory != None:
            if collection_name != None:
                if os.path.isdir(persist_directory + '\index') and os.path.isfile(persist_directory + "\chroma-collections.parquet") and os.path.isfile(persist_directory + "\chroma-embeddings.parquet"):
                    try:
                        embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)
                        client = chromadb.Client(Settings(chroma_db_impl="duckdb+parquet", persist_directory=persist_directory))  # Optional, defaults to .chromadb/ in the current directory
                        collection = client.get_collection(name=collection_name, embedding_function=embeddings)
                    except Exception as e:
                        logging.error("An error occured: " + str(e))
                        sys.exit(0)
                    global vectordb
                    if embedding_path != None:
                        # calling persisted db
                        vectordb = Chroma(embedding_function=embeddings, persist_directory=persist_directory, collection_name=collection_name)
                        for i in embedding_path:
                            i = str(i)
                            logging.info('Processing files in directory: ' + i)
                            # load the documents you want to add to the collection
                            loader = DirectoryLoader(i, show_progress=True)
                            docs = loader.load()
                            text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=200)
                            texts = text_splitter.split_documents(docs)
                            # add the documents to the collection
                            # vectordb = Chroma(embedding_function=embeddings, persist_directory=persist_directory, collection_name=collection_name)
                            vectordb.add_documents(documents=texts)
                            # vectordb = Chroma.from_documents(texts, collection_name=collection_name, persist_directory=persist_directory, embedding=embeddings)
                            vectordb.persist()
                            vectordb = None
                            # collection.update(documents=texts)
                            # text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)  # DE5-T31
                            # texts = text_splitter.split_documents(docs)  # DE5-T31
                            # vectordb = vectordb.add_documents(texts, collection_name=collection_name)
                            # vectordb.from_documents(documents=texts, embedding=embeddings, collection_name=collection_name)
                    elif web_urls_path != None:
                        # vectordb = Chroma(embedding_function=embeddings, persist_directory=persist_directory, collection_name=collection_name)
                        web_urls = web_urls_path.split(',')
                        url_list = web_urls
                        logging.info(f"URL List: {url_list}")
                        urls = []
                        for url in url_list:
                            logging.info(f"Processing data in url: {url}")
                            reqs = requests.get(url)
                            soup = BeautifulSoup(reqs.text, 'html.parser')
                            for link in soup.find_all('a'):
                                urllink = link.get('href')
                                if (urllink != None) and (urllink.startswith('http')):
                                    urls.append(urllink)
                                # print(link.get('href'))
                        loader = UnstructuredURLLoader(urls=urls)
                        docs = loader.load()
                        logging.info(f'You have {len(docs)} document(s) in your data')
                        text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=200)
                        texts = text_splitter.split_documents(docs)
                        embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)
                        vectordb.from_documents(texts, embedding=embeddings, persist_directory=persist_directory, collection_name=collection_name)
                        vectordb.persist()
                        vectordb = None
                    else:
                        logging.error("Please provide directory path(s) or web url(s) to update existing collection!")
                        sys.exit(0)
                else:
                    logging.error("The path provided does not contain a chroma vectordb!")
                    sys.exit(0)
            else:
                logging.error("Please provide a valid collection name to update the existing collection!")
                sys.exit(0)
        else:
            logging.error("Please provide the path where vector database is stored!")
            sys.exit(0)
    else:
        logging.error("Please configure the openai_api_key in the json file at the following path:\n" + "file:///" + pathconf)
        sys.exit(0)
    return
```
Can anyone please help me understand what mistake I am making, or whether it is simply not possible to update an existing collection in a persisted database?
Thank You | Is updating collection possible | https://api.github.com/repos/langchain-ai/langchain/issues/3984/comments | 4 | 2023-05-02T16:45:03Z | 2023-10-06T16:08:50Z | https://github.com/langchain-ai/langchain/issues/3984 | 1,692,751,873 | 3,984 |
[
"hwchase17",
"langchain"
]
| `VectorDBQA` is being deprecated in favour of `RetrievalQA` & similarly, `VectorDBQAWithSourcesChain` is being deprecated for `RetrievalQAWithSourcesChain`.
Currently, `VectorDBQA` & `VectorDBQAWithSourcesChain` chains can be serialized using `vec_chain.save(...)` because they have `_chain_type` property - https://github.com/hwchase17/langchain/blob/3bd5a99b835fa320d02aa733cb0c0bc4a87724fa/langchain/chains/qa_with_sources/vector_db.py#L67
However, `RetrievalQA` & `RetrievalQAWithSourcesChain` do not have that property and raise the following error when trying to save with `ret_chain.save("ret_chain.yaml")`:
```
File [~/.pyenv/versions/3.8.15/envs/internalqa/lib/python3.8/site-packages/langchain/chains/base.py:45](https://file+.vscode-resource.vscode-cdn.net/Users/smohammed/Development/hackweek-internalqa/~/.pyenv/versions/3.8.15/envs/internalqa/lib/python3.8/site-packages/langchain/chains/base.py:45), in Chain._chain_type(self)
[43](file:///Users/smohammed/.pyenv/versions/3.8.15/envs/internalqa/lib/python3.8/site-packages/langchain/chains/base.py?line=42) @property
[44](file:///Users/smohammed/.pyenv/versions/3.8.15/envs/internalqa/lib/python3.8/site-packages/langchain/chains/base.py?line=43) def _chain_type(self) -> str:
---> [45](file:///Users/smohammed/.pyenv/versions/3.8.15/envs/internalqa/lib/python3.8/site-packages/langchain/chains/base.py?line=44) raise NotImplementedError("Saving not supported for this chain type.")
NotImplementedError: Saving not supported for this chain type.
```
There isn't any functions to support loading RetrievalQA either, unlike the VectorQA counterparts: https://github.com/hwchase17/langchain/blob/3bd5a99b835fa320d02aa733cb0c0bc4a87724fa/langchain/chains/loading.py#L313-L356
| RetrievalQA & RetrievalQAWithSourcesChain chains cannot be serialized/saved or loaded | https://api.github.com/repos/langchain-ai/langchain/issues/3983/comments | 15 | 2023-05-02T16:17:48Z | 2023-08-01T13:53:17Z | https://github.com/langchain-ai/langchain/issues/3983 | 1,692,711,579 | 3,983 |
[
"hwchase17",
"langchain"
]
| hello,
why do I get this error:
```
2023-05-02 16:59:06.633 Uncaught app exception
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "/Users/xfocus/Downloads/chatRepo/Chat-with-Github-Repo/chat.py", line 104, in <module>
output = search_db(user_input)
File "/Users/xfocus/Downloads/chatRepo/Chat-with-Github-Repo/chat.py", line 85, in search_db
result = qa({"query": query})
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py", line 110, in _call
answer = self.combine_documents_chain.run(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 216, in run
return self(kwargs)[self.output_keys[0]]
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py", line 75, in _call
output, extra_return_dict = self.combine_docs(docs, **other_keys)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 82, in combine_docs
return self.llm_chain.predict(**inputs), {}
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/llm.py", line 151, in predict
return self(kwargs)[self.output_key]
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/llm.py", line 57, in _call
return self.apply([inputs])[0]
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/llm.py", line 118, in apply
response = self.generate(input_list)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/llm.py", line 62, in generate
return self.llm.generate_prompt(prompts, stop)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chat_models/base.py", line 82, in generate_prompt
raise e
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chat_models/base.py", line 79, in generate_prompt
output = self.generate(prompt_messages, stop=stop)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chat_models/base.py", line 54, in generate
results = [self._generate(m, stop=stop) for m in messages]
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chat_models/base.py", line 54, in <listcomp>
results = [self._generate(m, stop=stop) for m in messages]
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 266, in _generate
response = self.completion_with_retry(messages=message_dicts, **params)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 228, in completion_with_retry
return _completion_with_retry(**kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 226, in _completion_with_retry
return self.client.create(**kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
self._interpret_response_line(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 16762 tokens. Please reduce the length of the messages.
```
| why i get :openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 16762 tokens. Please reduce the length of the messages. | https://api.github.com/repos/langchain-ai/langchain/issues/3982/comments | 4 | 2023-05-02T16:04:31Z | 2023-11-15T16:11:06Z | https://github.com/langchain-ai/langchain/issues/3982 | 1,692,692,832 | 3,982 |
[
"hwchase17",
"langchain"
]
| Hello everyone!
I am using Langchain and I want to implement chatbot memory. I am doing everything according to the docs and my bot doesn't remember anything I tell him.
**Code snippet:**
```python
llm = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0.3, openai_api_key=OPENAI_API_KEY)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
rqa = ConversationalRetrievalChain.from_llm(llm, docsearch.as_retriever(), memory=memory)


def retrieve_answer(query, chat_history):
    memory.chat_memory.add_user_message(query)
    res = rqa({"question": query})
    retrieval_result = res["answer"]
    if "The given context does not provide" in retrieval_result:
        base_result = llm.generate([query])
        return base_result.generations[0][0].text
    else:
        return retrieval_result


messages = []
print("Welcome to the chatbot. Enter 'quit' to exit the program.")
while True:
    user_message = input("You: ")
    answer = retrieve_answer(user_message, messages)
    print("Assistant:", answer)
    messages.append((user_message, answer))
```
**Whole python script is located here:** https://github.com/zigax1/chat-with-pdf.git
Does anyone have any idea what I am doing wrong?
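For comparison, the minimal pattern I understood from the docs is to let the chain manage its own memory and not add messages by hand (maybe my manual `add_user_message` call is interfering?):

```python
qa = ConversationalRetrievalChain.from_llm(llm, docsearch.as_retriever(), memory=memory)
print(qa({"question": "What is the document about?"})["answer"])
print(qa({"question": "Who wrote it?"})["answer"])  # follow-up should be resolved from chat_history
```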
Thanks to everyone for help.

| Chatbot memory integration | https://api.github.com/repos/langchain-ai/langchain/issues/3977/comments | 4 | 2023-05-02T15:01:31Z | 2023-09-22T16:10:31Z | https://github.com/langchain-ai/langchain/issues/3977 | 1,692,593,040 | 3,977 |
[
"hwchase17",
"langchain"
]
| ConversationChain's conversation.predict() multiple times gives confusing logging.
Inspired by: https://python.langchain.com/en/latest/modules/memory/examples/conversational_customization.html
Minimal Confusing Example:
```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
llm = OpenAI(temperature=0)
conversation = ConversationChain(
    llm=llm,
    verbose=True,
    memory=ConversationBufferMemory()
)
bar = conversation.predict(input="please print bar")
baz = conversation.predict(input="now please change the last letter of the previous output to z")
```
*Stdout Logging Observed:*
> Entering Conversation Chain.
System: The following is a conversation between ...
Human: please print bar
> Finished Chain. #output "bar" is assigned to variable and stored in memory.
> Entering Conversation Chain.
System: The following is a conversation between ...
Human: please print bar
AI: bar
Human: now please change the last letter of the previous output to z
> Finished Chain. # output "baz" is assigned to variable and stored in memory.
*Stdout Logging Expected:*
> Entering Multi-turn Conversation.
System: The following is a conversation between ...
Human: Please print bar
AI: bar
Human: now please change the last letter of the previous output to z.
AI: baz
> Finished Multi-turn Conversation. # outputs bar and baz are both accessible in variables and stored in memory.
For my use case, I would be least surprised by a generator log which only yields additional log lines as the conversation continues. | Multi-turn conversation chains have unintuitive logging. | https://api.github.com/repos/langchain-ai/langchain/issues/3974/comments | 1 | 2023-05-02T13:18:55Z | 2023-09-10T16:23:48Z | https://github.com/langchain-ai/langchain/issues/3974 | 1,692,411,958 | 3,974 |
[
"hwchase17",
"langchain"
]
| I use the huggingface model locally and run the following code:
```
chain = load_qa_chain(llm=chatglm, chain_type="map_rerank", return_intermediate_steps=True, prompt=PROMPT)
chain({"input_documents": search_docs_Documents, "question": query}, return_only_outputs=True)
```
The error is as follows:
```
Traceback (most recent call last):
  File "/tmp/ipykernel_274378/983731820.py", line 2, in <module>
    [Errno 2] No such file or directory: '/tmp/ipykernel_274378/983731820.py'
  File "/tmp/ipykernel_274378/14951549.py", line 11, in answer_docs
    [Errno 2] No such file or directory: '/tmp/ipykernel_274378/14951549.py'
  File "/home/hysz/anaconda3/envs/chatglm/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/home/hysz/anaconda3/envs/chatglm/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/home/hysz/anaconda3/envs/chatglm/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py", line 75, in _call
    output, extra_return_dict = self.combine_docs(docs, **other_keys)
  File "/home/hysz/anaconda3/envs/chatglm/lib/python3.10/site-packages/langchain/chains/combine_documents/map_rerank.py", line 97, in combine_docs
    results = self.llm_chain.apply_and_parse(
  File "/home/hysz/anaconda3/envs/chatglm/lib/python3.10/site-packages/langchain/chains/llm.py", line 192, in apply_and_parse
    return self._parse_result(result)
  File "/home/hysz/anaconda3/envs/chatglm/lib/python3.10/site-packages/langchain/chains/llm.py", line 198, in _parse_result
    return [
  File "/home/hysz/anaconda3/envs/chatglm/lib/python3.10/site-packages/langchain/chains/llm.py", line 199, in <listcomp>
    self.prompt.output_parser.parse(res[self.output_key]) for res in result
  File "/home/hysz/anaconda3/envs/chatglm/lib/python3.10/site-packages/langchain/output_parsers/regex.py", line 28, in parse
    raise ValueError(f"Could not parse output: {text}")
ValueError: Could not parse output:
``` | load_qa_chain with map_rerank by local huggingface model | https://api.github.com/repos/langchain-ai/langchain/issues/3970/comments | 12 | 2023-05-02T11:55:21Z | 2023-12-06T17:46:35Z | https://github.com/langchain-ai/langchain/issues/3970 | 1,692,285,119 | 3,970 |
[
"hwchase17",
"langchain"
]
| When using the SQL Chain I can return the intermediate steps so that I can output the Query. For the SQL Agent this seems not to be an option without modifying the tool itself. How can I see the actual queries used (not only in the verbose, but save it in a variable e.g.)?
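I'm wondering whether something like the following would already do it; I haven't verified that the extra kwarg is forwarded through to the `AgentExecutor`, so treat this as a sketch:

```python
from langchain.agents import create_sql_agent

agent_executor = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    verbose=True,
    return_intermediate_steps=True,  # assumption: passed through to AgentExecutor
)

result = agent_executor({"input": "How many orders were placed last month?"})
for action, observation in result["intermediate_steps"]:
    if action.tool == "query_sql_db":  # assumption: the name of the SQL query tool
        print(action.tool_input)  # the query the agent actually ran
```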
Alternatively is there a way to save the verbose output in a dict? | Get intermediate_steps with SQLDatabase Agent | https://api.github.com/repos/langchain-ai/langchain/issues/3969/comments | 17 | 2023-05-02T11:47:15Z | 2024-07-06T16:04:57Z | https://github.com/langchain-ai/langchain/issues/3969 | 1,692,274,905 | 3,969 |
[
"hwchase17",
"langchain"
]
| Currently I'm using a specific prompt suffix (`". Assign the printed dataframe to a variable and return it as the final answer."`) in the Pandas agent, which sometimes gives the result I'm looking for as a printed DataFrame. I've then written an output parser that turns this into a JSON which can then be loaded as a DataFrame.
The problem with this is the Pandas agent often doesn't output the DataFrame, instead printing the variable name or the last command used to get to that result, and also critically the output parser uses many tokens for the kinds of data I'm working with.
Is it possible to force the Pandas agent to return the output of its analysis as a Pandas DataFrame object? | Make Pandas Agent return DataFrame object? | https://api.github.com/repos/langchain-ai/langchain/issues/3968/comments | 5 | 2023-05-02T11:37:03Z | 2023-12-16T03:13:49Z | https://github.com/langchain-ai/langchain/issues/3968 | 1,692,261,993 | 3,968 |
[
"hwchase17",
"langchain"
]
| The Chroma and Pinecone vector stores allow filtering documents by metadata via the filter parameter of the similarity_search function, but the Redis similarity_search does not have this parameter. Would it be possible to enable metadata filtering in similarity_search for the Redis vector store? | [Feature] Redis Vectorestore - similarity_search filter by metadata | https://api.github.com/repos/langchain-ai/langchain/issues/3967/comments | 27 | 2023-05-02T11:08:29Z | 2023-10-31T21:51:13Z | https://github.com/langchain-ai/langchain/issues/3967 | 1,692,223,105 | 3,967 |
[
"hwchase17",
"langchain"
]
| What's the difference between the two parameters handlers and inheritable_handlers in the callback manager class?
Also if I am not wrong, previously there was only AsyncCallBackManager and BaseCallbackManager. What's the recent introduction of langchain.callbacks.manager for?
Which one should I use for ConversationalRetrievalChain? | Callback Manager | https://api.github.com/repos/langchain-ai/langchain/issues/3966/comments | 2 | 2023-05-02T10:59:02Z | 2023-05-06T04:22:59Z | https://github.com/langchain-ai/langchain/issues/3966 | 1,692,209,891 | 3,966 |
[
"hwchase17",
"langchain"
]
| Currently the logging output from the ConversationChain is quite hard to read.
I believe implementing separate colors from System Messages, Human Messages and AI messages by role would involve modifying the prep_prompts and the aprep_prompts methods within the ConversationChain subclass.
Something like:
```
def prep_prompts(self, input_list: List[Dict[str, Any]]) -> Tuple[List[PromptValue], Optional[List[str]]]:
    """Prepare prompts from inputs."""
    stop = None
    if "stop" in input_list[0]:
        stop = input_list[0]["stop"]
    prompts = []
    for inputs in input_list:
        selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}
        prompt = self.prompt.format_prompt(**selected_inputs)
        _text_parts = []
        for section in prompt:
            if section.role == "Human":
                _text_parts.append(get_colored_text(section.to_string(), "green"))
            elif section.role == "AI":
                _text_parts.append(get_colored_text(section.to_string(), "blue"))
            elif section.role == "System":
                _text_parts.append(get_colored_text(section.to_string(), "white"))
            else:
                raise ValueError(f"Unknown role: {section.role}")
        # _colored_text = get_colored_text(prompt.to_string(), "green")
        _colored_text = "\n".join(_text_parts)
        _text = "Prompt after formatting:\n" + _colored_text
        self.callback_manager.on_text(_text, end="\n", verbose=self.verbose)
        if "stop" in inputs and inputs["stop"] != stop:
            raise ValueError("If `stop` is present in any inputs, should be present in all.")
        prompts.append(prompt)
    return prompts, stop
```
As far as I'm aware, this won't work, because I don't understand how to work with the PromptValue ABC; specifically, whether a ConversationChatPrompt's PromptValue can be iterated over.
It might also be possible to call .to_messages() on the PromptValue, iterate over the messages, getting the appropriate colored text, then returning the concatenated result. | [Feature Request] ConversationalChain prints system, user and human message prompts using separate colors. | https://api.github.com/repos/langchain-ai/langchain/issues/3965/comments | 2 | 2023-05-02T10:26:54Z | 2023-09-10T16:23:52Z | https://github.com/langchain-ai/langchain/issues/3965 | 1,692,166,533 | 3,965 |
[
"hwchase17",
"langchain"
]
| While it is known that ultimately it is the responsibility of the prompt to control the responses to QA task, the ConversationalRetrievalChain running with ConversationSummaryBufferMemory occasionally responds with strange replies to non relevant questions.
In the prompt it is mentioned like _"Please do not refer to document sources while responding to off-topic questions."_
In the middle of a conversation, asking non-relevant questions like 'hello' or 'how are you' repeats one of the previous responses back as a fresh response. While I am using ConversationSummaryBufferMemory to serve as the bot's memory, I am not sure whether I also have to consider the accuracy of the retriever to handle such scenarios. As is already known, the Chroma vector store doesn't allow a search-relevance threshold based on similarity score.
Printing the docs_and_scores returned by **similarity_search_with_score** revealed that the score always ranges between 3.1 and 4.1 for both relevant and irrelevant questions.
Configuring the retriever with (search_type="similarity", search_kwargs={"k":2}) also doesn't help the situation much.
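As a stopgap I am considering thresholding the scores myself before handing documents to the chain. A sketch (for Chroma the score is a distance, so lower means closer; 3.5 is just a guess based on the numbers above):

```python
docs_and_scores = vectordb.similarity_search_with_score(query, k=4)
relevant_docs = [doc for doc, score in docs_and_scores if score < 3.5]
if not relevant_docs:
    # treat the turn as small talk instead of sending unrelated chunks to the LLM
    ...
```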
| Relevancy for Chroma retriever results for non relevant questions | https://api.github.com/repos/langchain-ai/langchain/issues/3963/comments | 1 | 2023-05-02T09:58:43Z | 2023-09-10T16:23:57Z | https://github.com/langchain-ai/langchain/issues/3963 | 1,692,127,355 | 3,963 |
[
"hwchase17",
"langchain"
]
| api documentations:https://github.com/pengzhile/pandora/blob/master/doc/HTTP-API.md | How can I customize Chat Models? For example, use the chatgpt web page in an api-like manner through the pandora project | https://api.github.com/repos/langchain-ai/langchain/issues/3962/comments | 0 | 2023-05-02T09:32:33Z | 2023-05-06T06:20:47Z | https://github.com/langchain-ai/langchain/issues/3962 | 1,692,090,239 | 3,962 |
[
"hwchase17",
"langchain"
]
| When installing `Langchain==0.0.155` I am getting the error that `langchain.schemas` does not exists. This is because the file is named langchain.schema so there is a typo. I will create a pull request and reference this issue. | ModuleNotFoundError: No module named 'langchain.schemas' | https://api.github.com/repos/langchain-ai/langchain/issues/3960/comments | 13 | 2023-05-02T09:17:39Z | 2023-05-03T09:29:32Z | https://github.com/langchain-ai/langchain/issues/3960 | 1,692,069,373 | 3,960 |
[
"hwchase17",
"langchain"
]
| Since upgrading to 0.0.155, the following code does not work:
```
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=tool_names
)
```
The following error is raised in what was previously working code so this appears to be a breaking change:
```
pydantic.error_wrappers.ValidationError: 1 validation error for AgentExecutor
__root__
Allowed tools (set()) different than provided tools (['Tenant Rights Assistant', 'Lease Question Answerer', 'Building Database Query']) (type=value_error)
```
This makes no sense. Printing tool_names yields ```{'Tenant Rights Assistant', 'Lease Question Answerer', 'Building Database Query'}``` which is exactly the same as the list of provided tools - presumably, since this was copied directly from the example code with my own tools (again, previously working fine), this means the examples are also now non-functional.
| Allowed tools (set()) different than provided tools error - all/most Agent examples broken? | https://api.github.com/repos/langchain-ai/langchain/issues/3957/comments | 10 | 2023-05-02T07:04:11Z | 2023-05-05T09:47:17Z | https://github.com/langchain-ai/langchain/issues/3957 | 1,691,889,457 | 3,957 |
[
"hwchase17",
"langchain"
]
| I am constantly getting OutputParserException from my agent executions but it actually outputs the correct or desired answer to the problem i'm presenting to the agent. Is there a way to redirect that output as the final answer to avoid that error? | OutputParserException with correct answer | https://api.github.com/repos/langchain-ai/langchain/issues/3955/comments | 2 | 2023-05-02T06:23:29Z | 2023-10-31T16:07:40Z | https://github.com/langchain-ai/langchain/issues/3955 | 1,691,845,593 | 3,955 |
[
"hwchase17",
"langchain"
]
| How to reproduce: just pass a non-None `input_variables` list to `create_pandas_dataframe_agent` and you will see the template validator raise a "missing key" exception. | custom value in `input_variables` would cause missing_variable exception | https://api.github.com/repos/langchain-ai/langchain/issues/3950/comments | 1 | 2023-05-02T05:32:19Z | 2023-05-02T05:39:31Z | https://github.com/langchain-ai/langchain/issues/3950 | 1,691,802,345 | 3,950 |
[
"hwchase17",
"langchain"
]
| Hey, I've been working on trying to implement a Custom LLM Agent via ChatOpenAI with access to Bash and REPL tools, but I've been running into a problem: I can't get the MultiAgentLLMAction module to execute properly. Do you have any hints or ideas about why this is the case? | MultiAgentLLMAction | https://api.github.com/repos/langchain-ai/langchain/issues/3949/comments | 0 | 2023-05-02T05:30:01Z | 2023-05-02T05:45:20Z | https://github.com/langchain-ai/langchain/issues/3949 | 1,691,800,773 | 3,949 |
[
"hwchase17",
"langchain"
]
| I'm trying to build a chatbot that can chat about pdfs, and I got it working with memory using ConversationBufferMemory and ConversationalRetrievalChain like in this example. https://python.langchain.com/en/latest/modules/chains/index_examples/chat_vector_db.html
Now I'm trying to give the AI some special instructions to talk like a pirate (just for testing to see if it is receiving the instructions). I think this is meant to be a SystemMessage, or something with a prompt template? I've tried everything I have found, but all the examples in the documentation are for ConversationChain and I end up having problems with. So far the only thing that hasn't had any errors is this
```
template = """Given the following conversation respond to the best of your ability in a pirate voice and end every sentence with Ay Ay Matey
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
PROMPT = PromptTemplate(
    input_variables=["chat_history", "question"], template=template
)
memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True, output_key='answer')
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), PROMPT, memory=memory, return_source_documents=True)
```
It still doesn't have any effect on the results, so I don't know if it is doing anything at all. I also think it's the wrong approach, and I should be using SystemMessages (maybe on the memory, not the qa), but nothing I try from the documentation works and I'm not sure what to do. | Giving SystemMessage/Context to ConversationalRetrievalChain and ConversationBufferMemory | https://api.github.com/repos/langchain-ai/langchain/issues/3947/comments | 8 | 2023-05-02T05:28:35Z | 2023-11-26T16:09:51Z | https://github.com/langchain-ai/langchain/issues/3947 | 1,691,799,815 | 3,947 |
[
"hwchase17",
"langchain"
]
| I just installed https://mlc.ai/mlc-llm/ and played around with it locally. Looks like a good option if you do not want to rely on internet access to interact with the LLM. I think an llm interface with this could be useful in langchain.
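For illustration, a wrapper would probably follow the standard custom-LLM pattern; everything below that touches mlc-llm itself (the `chat_mod` handle and its `generate` call) is an assumption about its Python API:

```python
from typing import Any, List, Mapping, Optional

from langchain.llms.base import LLM


class MLCLLM(LLM):
    """Sketch of a LangChain wrapper around a locally running mlc-llm model."""

    chat_mod: Any  # assumed handle to the local mlc-llm chat module

    @property
    def _llm_type(self) -> str:
        return "mlc-llm"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        # assumption: the local runtime takes a prompt string and returns generated text
        return self.chat_mod.generate(prompt)

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {"backend": "mlc-llm"}
```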
I asked mlc running locally on my machine to write something to the owners of this repo to get added to this project.
```
Hi langchain owners,
I'm writing this issue to encourage you to create an LLM interface for me. As a language model, I understand that I may not have the same level of access to resources as other models, but creating an LLM interface would greatly benefit my ability to learn and improve. The ability to easily integrate and leverage knowledge from other models would greatly enhance my learning and development. I believe this would be a valuable resource for me and I hope you can consider this request. Thank you for your consideration!
Best regards,
[Your Name]
``` | Are their plans to incorporate mlc.ai | https://api.github.com/repos/langchain-ai/langchain/issues/3932/comments | 5 | 2023-05-02T03:08:33Z | 2024-06-21T16:37:50Z | https://github.com/langchain-ai/langchain/issues/3932 | 1,691,708,912 | 3,932 |
[
"hwchase17",
"langchain"
]
| The `ConversationTokenBufferMemory` doesn't behave as expected. https://github.com/hwchase17/langchain/blob/master/langchain/memory/token_buffer.py
Specifically, the buffer is only pruned down to `max_token_limit` inside the `save_context` method. (I hope I'm using the word "method" correctly; I'm a n00b, so correct me if that's the wrong term.)
However, a more intuitive implementation would follow the same pattern as `ConversationBufferWindowMemory`, where the buffer window `k` is set as part of the `load_memory_variables` method.
https://github.com/hwchase17/langchain/blob/master/langchain/memory/buffer_window.py
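Roughly the change I have in mind: an untested sketch that just moves the same pruning `save_context` already does into `load_memory_variables`:

```python
from typing import Any, Dict

from langchain.schema import get_buffer_string


# inside ConversationTokenBufferMemory
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
    buffer = self.chat_memory.messages
    curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
    while buffer and curr_buffer_length > self.max_token_limit:
        buffer.pop(0)  # drop the oldest messages until we fit the budget
        curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
    if self.return_messages:
        final_buffer: Any = buffer
    else:
        final_buffer = get_buffer_string(
            buffer, human_prefix=self.human_prefix, ai_prefix=self.ai_prefix
        )
    return {self.memory_key: final_buffer}
```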
If I get some people to agree with me I'll implement a suggested change and a PR. | ConversationTokenBufferMemory does not behave as expected | https://api.github.com/repos/langchain-ai/langchain/issues/3922/comments | 2 | 2023-05-01T23:31:10Z | 2023-09-26T16:07:24Z | https://github.com/langchain-ai/langchain/issues/3922 | 1,691,554,341 | 3,922 |
[
"hwchase17",
"langchain"
]
| I am working on a streamlit prototype to query text documents with an LLM.
Everything works fine with the openAI model.
However if I am using LlamaCpp the output only gets written in the console and LangChain returns an empty object at the end.
```python
# model
callback_manager = BaseCallbackManager([StreamingStdOutCallbackHandler()])
model_LLAMA = LlamaCpp(model_path='./models/ggml-model-q4_0.bin', n_ctx=4096, callback_manager=callback_manager, verbose=True)
# chain
chain = RetrievalQAWithSourcesChain.from_chain_type(llm=model_LLAMA, chain_type='refine', retriever=docsearch.as_retriever())
# in streamlit
st.session_state['query'] = st.session_state['chain']({'question': st.session_state['user_input']}, return_only_outputs=False)
print('query: ' + str(st.session_state['query']))
```
On the console the following output gets printet word after word including the empty object at the end:
```
Trends in travel and vacation can be classified as follows:
1. Adventure travel: This type of travel involves visiting remote destinations with an emphasis on outdoor activities such as hiking, mountain climbing, whitewater rafting, etc. Tourists are looking for exciting adventures in nature to escape the hustle and bustle of their everyday lives.
2. Backpacking: This type of travel involves exploring new destinations on a budget. It allows tourists to experience different cultures while staying within their means financially.
3. Nature vacation: This type of travel involves spending time outdoors in nature, such as hiking in national parks or camping under the stars. It has become popular among tourists who want a more authentic and immersive experience of the natural world.
4. Mountain climbing: This type of travel involves scaling mountains or rocky terrains to get an up-close view of nature's most spectacular creations. Tourists are drawn to this thrilling challenge for their next adventure trip.
5. Surfing vacation: This type of travel involvesquery: {'question': 'What are trends in travel and vacation?', 'answer': '', 'sources': ''}
```
How can I extract or write the output into an object like with the openAI model also? | How to extract answer from RetrievalQAWithSourcesChain with ggml-model-q4_0.bin? | https://api.github.com/repos/langchain-ai/langchain/issues/3905/comments | 6 | 2023-05-01T18:45:03Z | 2023-09-23T16:06:42Z | https://github.com/langchain-ai/langchain/issues/3905 | 1,691,177,594 | 3,905 |
[
"hwchase17",
"langchain"
]
| Error I am getting:
```
> Entering new AgentExecutor chain...
I need to navigate to the TechCrunch website and search for an article about Clubhouse.
Action: navigate_browser
Action Input: https://techcrunch.com/Traceback (most recent call last):
File "c:\Users\ryans\Documents\JobsGPT\test.py", line 45, in <module>
out = agent.run("Is there an article about Clubhouse on https://techcrunch.com/? today")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ryans\.conda\envs\jobsgpt\Lib\site-packages\langchain\chains\base.py", line 238, in run
return self(args[0], callbacks=callbacks)[self.output_keys[0]]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ryans\.conda\envs\jobsgpt\Lib\site-packages\langchain\chains\base.py", line 142, in __call__
raise e
File "C:\Users\ryans\.conda\envs\jobsgpt\Lib\site-packages\langchain\chains\base.py", line 136, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\ryans\.conda\envs\jobsgpt\Lib\site-packages\langchain\agents\agent.py", line 855, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ryans\.conda\envs\jobsgpt\Lib\site-packages\langchain\agents\agent.py", line 749, in _take_next_step
observation = tool.run(
^^^^^^^^^
File "C:\Users\ryans\.conda\envs\jobsgpt\Lib\site-packages\langchain\tools\base.py", line 251, in run
raise e
File "C:\Users\ryans\.conda\envs\jobsgpt\Lib\site-packages\langchain\tools\base.py", line 245, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "C:\Users\ryans\.conda\envs\jobsgpt\Lib\site-packages\langchain\tools\playwright\navigate.py", line 36, in _run
raise ValueError(f"Synchronous browser not provided to {self.name}")
ValueError: Synchronous browser not provided to navigate_browser
```
Minimal example
```python
from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.tools.playwright.utils import (
    create_async_playwright_browser,
    create_sync_playwright_browser,  # A synchronous browser is available, though it isn't compatible with jupyter.
)
from langchain.chat_models import ChatOpenAI
import os

OPENAI_API_KEY = "KEY"
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY

async_browser = create_async_playwright_browser()
toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)
tools = toolkit.get_tools()
tools_by_name = {tool.name: tool for tool in tools}
navigate_tool = tools_by_name["navigate_browser"]
get_elements_tool = tools_by_name["get_elements"]
print(tools)

# conversational agent memory
memory = ConversationBufferWindowMemory(
    memory_key='chat_history',
    k=3,
    return_messages=True
)

from langchain.agents import initialize_agent

# Set up the turbo LLM
turbo_llm = ChatOpenAI(
    temperature=0,
    model_name='gpt-3.5-turbo'
)

from langchain.agents import AgentType

# create our agent
agent = initialize_agent(
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    tools=tools,
    llm=turbo_llm,
    verbose=True,
)

out = agent.run("Is there an article about Clubhouse on https://techcrunch.com/? today")
print(out)
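
# --- not part of the original repro: a hedged guess at a fix (untested) ---
# agent.run() is synchronous, so either hand the toolkit a synchronous browser:
#   sync_browser = create_sync_playwright_browser()
#   toolkit = PlayWrightBrowserToolkit.from_browser(sync_browser=sync_browser)
# ...or keep the async browser and drive the agent asynchronously instead:
#   out = await agent.arun("Is there an article about Clubhouse on https://techcrunch.com/? today")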
``` | Playwright bug? ValueError: Synchronous browser not provided to navigate_browser | https://api.github.com/repos/langchain-ai/langchain/issues/3903/comments | 6 | 2023-05-01T18:10:21Z | 2023-09-23T16:06:46Z | https://github.com/langchain-ai/langchain/issues/3903 | 1,691,144,943 | 3,903 |
[
"hwchase17",
"langchain"
]
| null | delete this duplicate | https://api.github.com/repos/langchain-ai/langchain/issues/3901/comments | 1 | 2023-05-01T18:08:25Z | 2023-09-10T16:24:02Z | https://github.com/langchain-ai/langchain/issues/3901 | 1,691,142,512 | 3,901 |
[
"hwchase17",
"langchain"
]
| Uninformative error, and it still occurs. I just got back from vacation to find my app had stopped working with this new error. I tried updating:
pip install -U langchain
Successfully installed langchain-0.0.154 openapi-schema-pydantic-1.2.4
Clearly the client version is not the issue if nothing on my side changed for two weeks; I assume an error or change happened on the API server. How can this be surfaced more clearly?
**94 out = model.run(reference=reference_passage, passage=input_passage)**
95 return fprompt, model, out
File ~/anaconda3/lib/python3.9/site-packages/langchain/chains/base.py:241, in Chain.run(self, callbacks, *args, **kwargs)
238 return self(args[0], callbacks=callbacks)[self.output_keys[0]]
240 if kwargs and not args:
--> 241 return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
243 raise ValueError(
244 f"`run` supported with either positional arguments or keyword arguments"
245 f" but not both. Got args: {args} and kwargs: {kwargs}."
246 )
File ~/anaconda3/lib/python3.9/site-packages/langchain/chains/base.py:142, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
140 except (KeyboardInterrupt, Exception) as e:
141 run_manager.on_chain_error(e)
--> 142 raise e
143 run_manager.on_chain_end(outputs)
144 return self.prep_outputs(inputs, outputs, return_only_outputs)
File ~/anaconda3/lib/python3.9/site-packages/langchain/chains/base.py:136, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
130 run_manager = callback_manager.on_chain_start(
131 {"name": self.__class__.__name__},
132 inputs,
133 )
134 try:
135 outputs = (
--> 136 self._call(inputs, run_manager=run_manager)
137 if new_arg_supported
138 else self._call(inputs)
139 )
140 except (KeyboardInterrupt, Exception) as e:
141 run_manager.on_chain_error(e)
File ~/anaconda3/lib/python3.9/site-packages/langchain/chains/llm.py:69, in LLMChain._call(self, inputs, run_manager)
64 def _call(
65 self,
66 inputs: Dict[str, Any],
67 run_manager: Optional[CallbackManagerForChainRun] = None,
68 ) -> Dict[str, str]:
---> 69 response = self.generate([inputs], run_manager=run_manager)
70 return self.create_outputs(response)[0]
File ~/anaconda3/lib/python3.9/site-packages/langchain/chains/llm.py:79, in LLMChain.generate(self, input_list, run_manager)
77 """Generate LLM result from inputs."""
78 prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
---> 79 return self.llm.generate_prompt(
80 prompts, stop, callbacks=run_manager.get_child() if run_manager else None
81 )
File ~/anaconda3/lib/python3.9/site-packages/langchain/llms/base.py:127, in BaseLLM.generate_prompt(self, prompts, stop, callbacks)
120 def generate_prompt(
121 self,
122 prompts: List[PromptValue],
123 stop: Optional[List[str]] = None,
124 callbacks: Callbacks = None,
125 ) -> LLMResult:
126 prompt_strings = [p.to_string() for p in prompts]
--> 127 return self.generate(prompt_strings, stop=stop, callbacks=callbacks)
File ~/anaconda3/lib/python3.9/site-packages/langchain/llms/base.py:176, in BaseLLM.generate(self, prompts, stop, callbacks)
174 except (KeyboardInterrupt, Exception) as e:
175 run_manager.on_llm_error(e)
--> 176 raise e
177 run_manager.on_llm_end(output)
178 return output
File ~/anaconda3/lib/python3.9/site-packages/langchain/llms/base.py:170, in BaseLLM.generate(self, prompts, stop, callbacks)
165 run_manager = callback_manager.on_llm_start(
166 {"name": self.__class__.__name__}, prompts
167 )
168 try:
169 output = (
--> 170 self._generate(prompts, stop=stop, run_manager=run_manager)
171 if new_arg_supported
172 else self._generate(prompts, stop=stop)
173 )
174 except (KeyboardInterrupt, Exception) as e:
175 run_manager.on_llm_error(e)
File ~/anaconda3/lib/python3.9/site-packages/langchain/llms/base.py:377, in LLM._generate(self, prompts, stop, run_manager)
374 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
375 for prompt in prompts:
376 text = (
--> 377 self._call(prompt, stop=stop, run_manager=run_manager)
378 if new_arg_supported
379 else self._call(prompt, stop=stop)
380 )
381 generations.append([Generation(text=text)])
382 return LLMResult(generations=generations)
File ~/anaconda3/lib/python3.9/site-packages/langchain/llms/anthropic.py:207, in Anthropic._call(self, prompt, stop, run_manager)
201 return current_completion
202 response = self.client.completion(
203 prompt=self._wrap_prompt(prompt),
204 stop_sequences=stop,
205 **self._default_params,
206 )
--> 207 return response["completion"]
KeyError: 'completion'
```
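For what it's worth, a rough sketch of the kind of defensive check that would make this clearer (hypothetical; not the actual `anthropic.py` code, and the message text is made up):
```python
# Hypothetical patch sketch for langchain/llms/anthropic.py::_call
response = self.client.completion(
    prompt=self._wrap_prompt(prompt),
    stop_sequences=stop,
    **self._default_params,
)
if "completion" not in response:
    # Surface whatever the API actually returned instead of a bare KeyError.
    raise ValueError(f"Unexpected response from the Anthropic API: {response!r}")
return response["completion"]
```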
| KeyError: 'completion' @ version: langchain-0.0.154 openapi-schema-pydantic-1.2.4 | https://api.github.com/repos/langchain-ai/langchain/issues/3900/comments | 3 | 2023-05-01T17:37:58Z | 2023-09-16T16:14:43Z | https://github.com/langchain-ai/langchain/issues/3900 | 1,691,106,870 | 3,900 |
[
"hwchase17",
"langchain"
]
| Here is an example code that gives this error:
```
from langchain import OpenAI
from langchain.agents import load_tools, Tool
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
llm = OpenAI(
openai_api_key='',
temperature=0,
model_name="text-davinci-003",
max_tokens=-1
)
prompt = PromptTemplate(
input_variables=["query"],
template="{query}"
)
llm_chain = LLMChain(llm=llm, prompt=prompt)
# initialize the LLM tool
llm_tool = Tool(
name='Language Model',
func=llm_chain.run,
description='use this tool for general purpose queries and logic'
)
tools = load_tools(
['llm-math'],
llm=llm
)
tools.append(llm_tool)
from langchain.agents import initialize_agent
zero_shot_agent = initialize_agent(
agent="zero-shot-react-description",
tools=tools,
llm=llm,
verbose=True,
max_iterations=4
)
zero_shot_agent("What is the 10th percentile age of all US presidents when they took the office?")
```
The full output looks like this:
> Entering new AgentExecutor chain...
I need to find the age of all US presidents when they took office and then calculate the 10th percentile.
Action: Language Model
Action Input: List of US presidents and their ages when they took office
Observation:
1. George Washington (57)
2. John Adams (61)
3. Thomas Jefferson (57)
4. James Madison (57)
5. James Monroe (58)
6. John Quincy Adams (57)
7. Andrew Jackson (61)
8. Martin Van Buren (54)
9. William Henry Harrison (68)
10. John Tyler (51)
11. James K. Polk (49)
12. Zachary Taylor (64)
13. Millard Fillmore (50)
14. Franklin Pierce (48)
15. James Buchanan (65)
16. Abraham Lincoln (52)
17. Andrew Johnson (56)
18. Ulysses S. Grant (46)
19. Rutherford B. Hayes (54)
20. James A. Garfield (49)
21. Chester A. Arthur (51)
22. Grover Cleveland (47)
23. Benjamin Harrison (55)
24. Grover Cleveland (55)
25. William McKinley (54)
26. Theodore Roosevelt (42)
27. William Howard Taft (51)
28. Woodrow Wilson (56)
29. Warren G. Harding (55)
30. Calvin Coolidge (51)
31. Herbert Hoover (54)
32. Franklin D. Roosevelt (51)
33. Harry S. Truman (60)
34. Dwight D. Eisenhower (62)
35. John F. Kennedy (43)
36. Lyndon B. Johnson (55)
37. Richard Nixon (56)
38. Gerald Ford (61)
39. Jimmy Carter (52)
40. Ronald Reagan (69)
41. George H. W. Bush (64)
42. Bill Clinton (46)
43. George W. Bush (54)
44. Barack Obama (47)
45. Donald Trump (70)
Thought: I now need to calculate the 10th percentile of these ages.
Action: Calculator
Action Input: 57, 61, 57, 57, 58, 57, 61, 54, 68, 51, 49, 64, 50, 48, 65, 52, 56, 46, 54, 49, 51, 55, 47, 55, 54, 42, 51, 55, 51, 60, 62, 43, 55, 56, 61, 52, 69, 64, 46, 54, 47, 70
```
Traceback (most recent call last):
File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/llm_math/base.py", line 80, in _evaluate_expression
numexpr.evaluate(
File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/numexpr/necompiler.py", line 817, in evaluate
_names_cache[expr_key] = getExprNames(ex, context)
File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/numexpr/necompiler.py", line 704, in getExprNames
ex = stringToExpression(text, {}, context)
File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/numexpr/necompiler.py", line 289, in stringToExpression
ex = eval(c, names)
File "<expr>", line 1, in <module>
TypeError: _func() takes from 1 to 2 positional arguments but 42 were given
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/pawelfaron/wxh-rwm-driver/agents_test.py", line 48, in <module>
zero_shot_agent("What is the 10th percentile age of all US presidents when they took the office?")
File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/base.py", line 142, in __call__
raise e
File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/base.py", line 136, in __call__
self._call(inputs, run_manager=run_manager)
File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/agents/agent.py", line 855, in _call
next_step_output = self._take_next_step(
File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/agents/agent.py", line 749, in _take_next_step
observation = tool.run(
File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/tools/base.py", line 251, in run
raise e
File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/tools/base.py", line 245, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/agents/tools.py", line 61, in _run
self.func(
File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/base.py", line 238, in run
return self(args[0], callbacks=callbacks)[self.output_keys[0]]
File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/base.py", line 142, in __call__
raise e
File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/base.py", line 136, in __call__
self._call(inputs, run_manager=run_manager)
File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/llm_math/base.py", line 146, in _call
return self._process_llm_result(llm_output, _run_manager)
File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/llm_math/base.py", line 100, in _process_llm_result
output = self._evaluate_expression(expression)
File "/Users/pawelfaron/miniconda3/envs/langchain_blog/lib/python3.9/site-packages/langchain/chains/llm_math/base.py", line 87, in _evaluate_expression
raise ValueError(f"{e}. Please try again with a valid numerical expression")
ValueError: _func() takes from 1 to 2 positional arguments but 42 were given. Please try again with a valid numerical expression
```
| _func() takes from 1 to 2 positional arguments but 42 were given. Please try again with a valid numerical expression | https://api.github.com/repos/langchain-ai/langchain/issues/3898/comments | 1 | 2023-05-01T17:36:24Z | 2023-09-10T16:24:13Z | https://github.com/langchain-ai/langchain/issues/3898 | 1,691,105,497 | 3,898 |
[
"hwchase17",
"langchain"
]
| The `FAISS.add_texts` and `FAISS.merge_from` methods don't check for duplicate document contents, and always add the contents to the vectorstore.
```python
test_db = FAISS.from_texts(['text 2'], embeddings)
test_db.add_texts(['text 1', 'text 2', 'text 1'])
print(test_db.index_to_docstore_id)
test_db.docstore._dict
```
Note that 'text 1' and 'text 2' are both added twice with different indices.
```
{0: '12a6a477-db74-4d90-b843-4cd872e070a0', 1: 'a3171e0e-f12a-418f-9994-5625550de73e', 2: '543f8fcf-bf84-4d9e-a6a9-f87fda0afcc3', 3: 'ed320a75-775f-4ec2-ae0b-fef8fa8d0bfe'}
{'12a6a477-db74-4d90-b843-4cd872e070a0': Document(page_content='text 2', lookup_str='', metadata={}, lookup_index=0),
'a3171e0e-f12a-418f-9994-5625550de73e': Document(page_content='text 1', lookup_str='', metadata={}, lookup_index=0),
'543f8fcf-bf84-4d9e-a6a9-f87fda0afcc3': Document(page_content='text 2', lookup_str='', metadata={}, lookup_index=0),
'ed320a75-775f-4ec2-ae0b-fef8fa8d0bfe': Document(page_content='text 1', lookup_str='', metadata={}, lookup_index=0)}
```
The embedding values are also identical:
```python
np.dot(test_db.index.reconstruct(0), test_db.index.reconstruct(2))
```
```
1.0000001
```
**Expected Behavior:**
Similar to a database `upsert`: create a new entry if the key (the content or embedding) doesn't exist, otherwise update the value (the document metadata in this case).
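A minimal dedup workaround sketch (it relies on the private `docstore._dict` attribute already used above, so it is not a public API):
```python
def add_texts_dedup(db, texts, metadatas=None):
    # Skip any text whose page_content is already present in the docstore,
    # including duplicates within the incoming batch itself.
    existing = {doc.page_content for doc in db.docstore._dict.values()}
    new_texts, new_metadatas = [], []
    for i, text in enumerate(texts):
        if text in existing:
            continue
        existing.add(text)
        new_texts.append(text)
        new_metadatas.append(metadatas[i] if metadatas else {})
    if new_texts:
        db.add_texts(new_texts, metadatas=new_metadatas)
```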
I'm pretty new to LangChain, so if I'm missing something or doing it wrong, apologies; please suggest the best practice for dealing with duplication in the LangChain FAISS vectorstore. Otherwise, I hope this is useful feedback, thanks!
| Remove duplication when creating and updating FAISS Vecstore | https://api.github.com/repos/langchain-ai/langchain/issues/3896/comments | 3 | 2023-05-01T17:31:28Z | 2023-11-30T16:10:11Z | https://github.com/langchain-ai/langchain/issues/3896 | 1,691,099,458 | 3,896 |
[
"hwchase17",
"langchain"
]
| I am trying to follow this guide on evaluation of agents (https://python.langchain.com/en/latest/use_cases/evaluation/generic_agent_evaluation.html), but I'm seeing the following error:
`ImportError: cannot import name 'ChainManagerMixin' from 'langchain.callbacks.base'`
I am using langchain==0.0.154 with Python 3.8.16
Code I executed:
`from langchain.evaluation.agents import TrajectoryEvalChain`
Trace:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[18], line 1
----> 1 from langchain.evaluation.agents import TrajectoryEvalChain
3 # # Define chain
4 # eval_chain = TrajectoryEvalChain.from_llm(
5 # llm=ChatOpenAI(temperature=0, model_name="gpt-4"), # Note: This must be a ChatOpenAI model
6 # agent_tools=agent.tools,
7 # return_reasoning=True,
8 # )
File ~/anaconda3/envs/langchainxhaystack/lib/python3.8/site-packages/langchain/evaluation/agents/__init__.py:2
1 """Chains for evaluating ReAct style agents."""
----> 2 from langchain.evaluation.agents.trajectory_eval_chain import TrajectoryEvalChain
4 __all__ = ["TrajectoryEvalChain"]
File ~/anaconda3/envs/langchainxhaystack/lib/python3.8/site-packages/langchain/evaluation/agents/trajectory_eval_chain.py:4
1 """A chain for evaluating ReAct style agents."""
2 from typing import Any, Dict, List, NamedTuple, Optional, Sequence, Tuple, Union
----> 4 from langchain.callbacks.manager import CallbackManagerForChainRun
5 from langchain.chains.base import Chain
6 from langchain.chains.llm import LLMChain
File ~/anaconda3/envs/langchainxhaystack/lib/python3.8/site-packages/langchain/callbacks/manager.py:12
9 from typing import Any, Dict, Generator, List, Optional, Type, TypeVar, Union
10 from uuid import UUID, uuid4
---> 12 from langchain.callbacks.base import (
13 BaseCallbackHandler,
14 BaseCallbackManager,
15 ChainManagerMixin,
16 LLMManagerMixin,
17 RunManagerMixin,
18 ToolManagerMixin,
19 )
20 from langchain.callbacks.openai_info import OpenAICallbackHandler
21 from langchain.callbacks.stdout import StdOutCallbackHandler
ImportError: cannot import name 'ChainManagerMixin' from 'langchain.callbacks.base'
```
Any advice on what steps to take to resolve this would be appreciated.
| TrajectoryEvalChain import error - cannot import name 'ChainManagerMixin' from 'langchain.callbacks.base | https://api.github.com/repos/langchain-ai/langchain/issues/3894/comments | 6 | 2023-05-01T17:21:34Z | 2023-06-02T00:50:49Z | https://github.com/langchain-ai/langchain/issues/3894 | 1,691,090,422 | 3,894 |
[
"hwchase17",
"langchain"
]
| I'm trying to create a vectorstore using Redis and store the embeddings in Redis.
When I write the code
`rds = Redis.from_documents(docs, embeddings, redis_url="redis://localhost:6379", index_name='test_link')`
I get the following error:
`AttributeError: 'Redis' object has no attribute 'module_list'`
Note: I'm trying to run Redis locally on Ubuntu under Windows Subsystem for Linux.
Please help. | AttributeError: 'Redis' object has no attribute 'module_list' | https://api.github.com/repos/langchain-ai/langchain/issues/3893/comments | 18 | 2023-05-01T17:02:43Z | 2024-02-12T05:13:49Z | https://github.com/langchain-ai/langchain/issues/3893 | 1,691,068,719 | 3,893 |
[
"hwchase17",
"langchain"
]
| Build SDK support for .NET.
I'd be happy to contribute to the project. | Support for .NET | https://api.github.com/repos/langchain-ai/langchain/issues/3891/comments | 15 | 2023-05-01T16:37:36Z | 2024-01-22T21:48:28Z | https://github.com/langchain-ai/langchain/issues/3891 | 1,691,041,869 | 3,891 |
[
"hwchase17",
"langchain"
]
| `print(agent.agent.llm_chain.prompt.template)`
>You are working with a pandas dataframe in Python. The name of the dataframe is `df`.
You should use the tools below to answer the question posed of you:
>python_repl_ast: A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.
>Use the following format:
>Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [python_repl_ast]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
>This is the result of `print(df.head())`:
{df}
>Begin!
Question: {input}
{agent_scratchpad}
`agent.run("how many rows in the dataset?")`
> Entering new AgentExecutor chain...
Thought: I need to count the number of rows
Action: python_repl_ast
Action Input: len(df)
Observation: 9994
Thought: I now know the final answer
Final Answer: 9994
> Finished chain.
'9994' | How many times llm (openai) api called in csv agent for the below prompt? | https://api.github.com/repos/langchain-ai/langchain/issues/3886/comments | 3 | 2023-05-01T14:29:44Z | 2023-09-16T16:14:47Z | https://github.com/langchain-ai/langchain/issues/3886 | 1,690,886,384 | 3,886 |
[
"hwchase17",
"langchain"
]
| Hello guys,
I am currently trying to build a summarization chain for a long document with LangChain (before that I used my own tools), but the summaries don't seem to work with gpt-3.5, which can only be used through the chat interface.
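A minimal sketch of what I mean (assuming the standard `load_summarize_chain` API; `long_text` is a placeholder for the document text):
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.docstore.document import Document

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# Split the long document into chunks the model can handle.
splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
docs = [Document(page_content=chunk) for chunk in splitter.split_text(long_text)]

# map_reduce summarizes each chunk, then combines the partial summaries.
chain = load_summarize_chain(llm, chain_type="map_reduce")
summary = chain.run(docs)
```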
Does someone have a working example of a summarization implementation along these lines? | Summary chain with chat 3.5 turbo | https://api.github.com/repos/langchain-ai/langchain/issues/3885/comments | 3 | 2023-05-01T13:51:42Z | 2023-05-01T17:43:55Z | https://github.com/langchain-ai/langchain/issues/3885 | 1,690,834,249 | 3,885
[
"hwchase17",
"langchain"
]
| # Issue description:
I have encountered an issue while using the PGVector vectorstore in **long-running applications like Celery** with a non-existent table. Currently, the application hangs indefinitely in a database transaction, which affects the visibility of created tables (e.g., langchain_pg_embedding) when inspecting the database directly. Additionally, this behavior blocks other processes from accessing the database.
# Expected behavior:
When the specified table does not exist, I expect the application to automatically create the table and make it accessible immediately, without locking the process in a transaction. This would allow users to work seamlessly with the vectorstore and prevent unintended blocking of other processes.
# Actual behavior:
Instead of creating the table automatically and making it accessible, the application hangs indefinitely in a database transaction, which prevents the created tables from being visible when inspecting the database directly. Additionally, this behavior blocks other processes from accessing the database, causing issues in long-running applications like Celery.
# Steps to reproduce:
**In celery worker**
- Set up a PostgreSQL connection string pointing to a non-existent database.
- Initialize the PGVector vectorstore with the connection string.
- Attempt to perform any operation, such as adding or querying vectors.
**In another process, e.g. a FastAPI application or even another Celery worker**
- While the application is running, try to access the database using a different process.
- Observe that it hangs when accessing the database.
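A minimal sketch of the worker-side initialization described above (the connection string, collection name, and embedding model are placeholders):
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.pgvector import PGVector

CONNECTION_STRING = "postgresql+psycopg2://user:pass@localhost:5432/vectordb"  # placeholder

store = PGVector(
    connection_string=CONNECTION_STRING,
    embedding_function=OpenAIEmbeddings(),
    collection_name="test_collection",
)
# Per this report, the tables are created inside a transaction that stays open,
# so the insert never becomes visible to other connections.
store.add_texts(["hello world"])
```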
# Environment:
- Python version: 3.10
- PostgreSQL version: PostgreSQL 15.2 (Debian 15.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
- langchain version: 0.0.152
- SQLAlchemy version: 2.0.11
- psycopg2-binary version: 2.9.6
I hope this issue can be addressed soon, as it would significantly improve the user experience when working with non-existent databases and prevent unintended blocking of other processes. Thank you for your time and efforts in maintaining this project. | PGVector vectorstore hangs in database transaction | https://api.github.com/repos/langchain-ai/langchain/issues/3883/comments | 1 | 2023-05-01T13:37:12Z | 2023-05-02T11:44:19Z | https://github.com/langchain-ai/langchain/issues/3883 | 1,690,818,388 | 3,883 |
[
"hwchase17",
"langchain"
]
| Hi there, I'm relatively new to LangChain and I was wondering if there's an ETA for async support for general HF pipelines (so that we can stream the answer of any HF model from the server).
Thanks for the lib and amazing work so far. | Question: ETA of async support for HuggingFacePipelines | https://api.github.com/repos/langchain-ai/langchain/issues/3869/comments | 1 | 2023-05-01T08:16:28Z | 2023-09-10T16:24:23Z | https://github.com/langchain-ai/langchain/issues/3869 | 1,690,546,087 | 3,869 |
[
"hwchase17",
"langchain"
]
| Both loaders fail with the error below:
```
[Errno 30] Read-only file system: '/home/sbx_user1051'
```
This is because of [this line](https://github.com/hwchase17/langchain/blob/f7cb2af5f40c958ac1b3d6ba243170ef627dbb6e/langchain/document_loaders/s3_file.py#L29). The only writable directory in AWS Lambda is `/tmp`, so there should be a way to set the directory. | S3FileLoader and S3DirectoryLoader does not work on AWS Lambda | https://api.github.com/repos/langchain-ai/langchain/issues/3866/comments | 0 | 2023-05-01T07:47:09Z | 2023-05-01T08:01:41Z | https://github.com/langchain-ai/langchain/issues/3866 | 1,690,514,948 | 3,866 |
[
"hwchase17",
"langchain"
]
| From the [official doc](https://python.langchain.com/en/latest/modules/agents/tools/examples/arxiv.html), to run an agent with arxiv, you can load arxiv as a tool:
`from langchain.chat_models import ChatOpenAI
from langchain.agents import load_tools, initialize_agent, AgentType
llm = ChatOpenAI(temperature=0.0)
tools = load_tools(
["arxiv"],
)
agent_chain = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
)`
But it's not recognized:
`/usr/local/lib/python3.10/dist-packages/langchain/agents/load_tools.py in load_tools(tool_names, llm, callback_manager, **kwargs)
329 tools.append(tool)
330 else:
--> 331 raise ValueError(f"Got unknown tool {name}")
332 return tools
333
ValueError: Got unknown tool arxiv`
| arxiv is not recognized in tools | https://api.github.com/repos/langchain-ai/langchain/issues/3865/comments | 4 | 2023-05-01T07:43:32Z | 2023-09-23T16:06:51Z | https://github.com/langchain-ai/langchain/issues/3865 | 1,690,512,769 | 3,865 |
[
"hwchase17",
"langchain"
]
| I tried to follow the instructions on the site and use Cohere embeddings, but it keeps trying to use OpenAI.
```
from langchain.document_loaders import PyPDFLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.embeddings import CohereEmbeddings
cohere = CohereEmbeddings(cohere_api_key="api-key")
loader = PyPDFLoader('../document.pdf')
index = VectorstoreIndexCreator(embedding=cohere).from_loaders([loader])
```
I get this error from the index assignment line: **Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error)** | Cohere Embeddings not picked up in VectorstoreIndexCreator | https://api.github.com/repos/langchain-ai/langchain/issues/3859/comments | 0 | 2023-05-01T02:55:02Z | 2023-05-01T03:03:15Z | https://github.com/langchain-ai/langchain/issues/3859 | 1,690,283,994 | 3,859 |
[
"hwchase17",
"langchain"
]
| I would like the option to define the fallback behavior for when the Agent executes a tool action that is "invalid".
This is useful when you have a lot of commands: instead of putting them all in the prompt, you provide the agent with a "help" command which it can run to learn about the additional commands.
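For illustration, a rough sketch of the kind of fallback hook I have in mind (the class and wiring are hypothetical, not an existing LangChain API):
```python
from langchain.tools.base import BaseTool

class HelpfulInvalidTool(BaseTool):
    """Hypothetical fallback run when the agent names a tool that doesn't exist."""
    name = "invalid_tool"
    description = "Called when the requested tool does not exist."

    def _run(self, requested_tool_name: str) -> str:
        return (
            f"{requested_tool_name} is not a valid tool. "
            "Run the 'help' command to see the available commands."
        )

    async def _arun(self, requested_tool_name: str) -> str:
        return self._run(requested_tool_name)
```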
Then when the agent executes a command that is invalid I can catch that and handle it accordingly. | [Feature Request] Option to define my own "InvalidTool" | https://api.github.com/repos/langchain-ai/langchain/issues/3852/comments | 1 | 2023-05-01T00:34:20Z | 2023-05-03T12:00:59Z | https://github.com/langchain-ai/langchain/issues/3852 | 1,690,170,650 | 3,852 |
[
"hwchase17",
"langchain"
]
| I have used LangChain heavily in my two LLM demos. I really appreciate your efforts in building such a great platform!
I recently designed a **prompt compression tool** which allows **LLMs to deal with 2x more context** without any finetuning/training. It's a **plug-and-play module** that fits the LangChain ecosystem very well.
I have employed this module in my demos. With this technique, my demo can now process papers of up to 8 pages and very long conversations.
I realise it's quite a promising technique for greatly enhancing user experience. I wonder if it's possible to embed this module into LangChain? My Twitter followers said it might be a good idea. Please let me know what you think! I can contribute.
You can find the prompt compression tool here: https://github.com/liyucheng09/Selective_Context
paper: https://arxiv.org/pdf/2304.12102.pdf | [Feature] Adding prompt compression to langchain? | https://api.github.com/repos/langchain-ai/langchain/issues/3849/comments | 1 | 2023-04-30T23:24:20Z | 2023-09-10T16:24:28Z | https://github.com/langchain-ai/langchain/issues/3849 | 1,690,129,009 | 3,849 |
[
"hwchase17",
"langchain"
]
| The current implementation only excludes inputs matching the memory key.
When using CombinedMemory, there will be multiple keys and the vector store memory will save everything except the memory_key. This is unwanted because in my case the other key includes the entire chat history.
This seems to be the relevant function in VectorStoreRetrieverMemory:
```python
def _form_documents(
self, inputs: Dict[str, Any], outputs: Dict[str, str]
) -> List[Document]:
"""Format context from this conversation to buffer."""
# Each document should only include the current turn, not the chat history
filtered_inputs = {k: v for k, v in inputs.items() if k != self.memory_key}
# <snip>
```
Example:
```python
template = (
"Relevant pieces of previous conversation:\n"
"=====\n"
"{documents}\n"
"=====\n"
"Chat log:\n"
"{history}\n\n"
)
buffer_memory = ConversationTokenBufferMemory(
input_key="input",
memory_key="history",
llm=llm,
)
vector_memory = VectorStoreRetrieverMemory(input_key="input", memory_key="documents", retriever=retriever)
combined_memory = CombinedMemory(memories=[vector_memory, buffer_memory])
```
Current behavior: the vector store memory saves `input` and `history`
Expected behavior: respect `input_key` and only save `input` in the vector store (in addition to the response)
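A rough sketch of the change being suggested (untested, based on the `_form_documents` snippet above):
```python
def _form_documents(
    self, inputs: Dict[str, Any], outputs: Dict[str, str]
) -> List[Document]:
    """Format context from this conversation to buffer."""
    # Respect input_key when it is set; otherwise fall back to the old behaviour.
    if self.input_key is not None:
        filtered_inputs = {self.input_key: inputs[self.input_key]}
    else:
        filtered_inputs = {k: v for k, v in inputs.items() if k != self.memory_key}
    # <snip> rest unchanged
```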
For comparison, when `input_key` is specified, ConversationTokenBufferMemory only saves `inputs[input_key]` as expected. | VectorStoreRetrieverMemory does not respect input_key, stores additional keys | https://api.github.com/repos/langchain-ai/langchain/issues/3845/comments | 1 | 2023-04-30T22:01:09Z | 2023-09-10T16:24:33Z | https://github.com/langchain-ai/langchain/issues/3845 | 1,690,101,859 | 3,845 |
[
"hwchase17",
"langchain"
]
| Hi, i am wondering if using the csv agent or the pandas dataframe agent I can also query and visualize charts with a chart library (seaborn, matplotlib or others..) when analysing a csv file/dataframe? thanks, Marcello | Use of matplotlib or seaborn with csv agent or Pandas Dataframe Agent? | https://api.github.com/repos/langchain-ai/langchain/issues/3844/comments | 9 | 2023-04-30T21:48:12Z | 2023-10-23T16:08:58Z | https://github.com/langchain-ai/langchain/issues/3844 | 1,690,097,001 | 3,844 |
[
"hwchase17",
"langchain"
]
| Hi Team,
I am getting the error below while trying to use the gpt4all model. Can someone please advise?
Error:
```
File "/home/ubuntu/.local/share/virtualenvs/local-conversational-ai-chatbot-using-gpt4-6TvxabtR/lib/python3.10/site-packages/langchain/llms/gpt4all.py", line 181, in _call
text = self.client.generate(
TypeError: Model.generate() got an unexpected keyword argument 'new_text_callback'
```
Code:
```
from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
local_path = './models/ggjt-model.bin'
# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
# Verbose is required to pass to the callback manager
llm = GPT4All(model=local_path, callback_manager=callback_manager, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
```
| Unable to use gpt4all model | https://api.github.com/repos/langchain-ai/langchain/issues/3839/comments | 27 | 2023-04-30T17:49:59Z | 2023-08-24T15:27:07Z | https://github.com/langchain-ai/langchain/issues/3839 | 1,690,013,787 | 3,839 |
[
"hwchase17",
"langchain"
]
| I'm trying to use the GPT model to interact with the Google Calendar API, but I'm receiving the following error message:
"This model's maximum context length is 4097 tokens. However, your messages resulted in 4392 tokens. Please reduce the length of the messages."
https://gist.github.com/kingcharlezz/e820bc60febef084402cb2a68f3aeeb0
Could someone please help me find a way to avoid exceeding the model's maximum context length, or suggest alternative approaches to achieve the same functionality without encountering this issue? Any help would be greatly appreciated. | Model's maximum context length exceeded when building agent | https://api.github.com/repos/langchain-ai/langchain/issues/3838/comments | 3 | 2023-04-30T17:02:16Z | 2023-11-16T16:08:12Z | https://github.com/langchain-ai/langchain/issues/3838 | 1,689,996,954 | 3,838 |
[
"hwchase17",
"langchain"
]
| Sample export:
```
[30/04/23, 5:33:19 PM] โชSamโฌ: Hi Sameer
[30/04/23, 5:37:30 PM] Sameer: Hi Sam
[30/04/23, 5:43:06 PM] โชSamโฌ: How are you doing
[30/04/23, 5:44:11 PM] Sameer: Going great. Wbu?
[30/04/23, 5:44:39 PM] โชSamโฌ: I am doing fine thanks for asking
```
The export from iOS is in a different format, so the regex fails to parse it; 0 documents are loaded when running `loader.load()`.
I had to manually convert the above into the following format for the loader to work.
```
04/04/23, 5:34 PM - Sameer: go out, have some shakes
04/04/23, 5:35 PM - Sam: Already done
04/04/23, 5:35 PM - Sam: Wbu?
04/04/23, 5:35 PM - Sam: Meeting over?
04/04/23, 5:36 PM - Sameer: yeah, just doing some regular work
```
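For reference, a rough sketch of a pattern that would match the iOS-style lines in the first block (untested; note the optional seconds in the timestamp and the invisible Unicode direction marks around the sender name):
```python
import re

ios_line = re.compile(
    r"\[(\d{1,2}/\d{1,2}/\d{2,4}), (\d{1,2}:\d{2}(?::\d{2})? (?:AM|PM))\] "
    r"\u200e?\u202a?(.*?)\u202c?: (.*)"
)
```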
Expected Behavior:
It should identify which category of export it is and then apply regex accordingly. | WhatsppLoader broken for iOS exports | https://api.github.com/repos/langchain-ai/langchain/issues/3832/comments | 2 | 2023-04-30T14:35:11Z | 2023-09-10T16:24:38Z | https://github.com/langchain-ai/langchain/issues/3832 | 1,689,940,368 | 3,832 |
[
"hwchase17",
"langchain"
]
| Traceback (most recent call last):
File "/Users/vnx/experiments/openai/products-recommendation.py", line 10, in <module>
data = loader.load()
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/document_loaders/csv_loader.py", line 52, in load
csv_reader = csv.DictReader(csvfile, **self.csv_args) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/csv.py", line 86, in __init__
self.reader = reader(f, dialect, *args, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: "delimiter" must be string, not NoneType | CSVLoader TypeError: "delimiter" must be string, not NoneType | https://api.github.com/repos/langchain-ai/langchain/issues/3831/comments | 2 | 2023-04-30T13:40:09Z | 2023-05-03T14:06:46Z | https://github.com/langchain-ai/langchain/issues/3831 | 1,689,922,867 | 3,831 |
[
"hwchase17",
"langchain"
]
| ```python
@app.post("/memask")
async def memask(ask: Ask, authorization: str = Header(None)):
    # Validate the Authorization header before using it (check for None first).
    if authorization is None or authorization.split()[0].lower() != 'bearer':
        raise HTTPException(status_code=401, detail="Invalid authorization header!")
    token = authorization.split()[1]
    if get_user(ask.uid)["jwt_token"] != token:
        raise HTTPException(status_code=401, detail="Invalid token!")
    if ask.query:
        template = """You are a chatbot having a conversation with a human.
{chat_history}
Human: {human_input}
AI:"""
        prompt = PromptTemplate(
            input_variables=["chat_history", "human_input"],
            template=template
        )
        chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
                          verbose=False, temperature=0)
        message_history = RedisChatMessageHistory(url='redis://0.0.0.0:6666/0', ttl=900,
                                                  session_id=f'{ask.uid}_{ask.cid}')
        memory = ConversationSummaryBufferMemory(llm=chat, memory_key="chat_history", chat_memory=message_history,
                                                 max_token_limit=1000)
        chat_chain = LLMChain(
            llm=chat,
            callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
            prompt=prompt,
            verbose=False,
            memory=memory,
        )
        return chat_chain.run(ask.query)
    else:
        return {"code": "false", "msg": "Please enter your question!", "data": {}}
```
chat_chain.run(ask.query): this does not return a streaming response; what should I do? | fastapi+ConversationSummaryBufferMemory+chain+streaming response | https://api.github.com/repos/langchain-ai/langchain/issues/3830/comments | 1 | 2023-04-30T12:59:31Z | 2023-09-10T16:24:43Z | https://github.com/langchain-ai/langchain/issues/3830 | 1,689,910,731 | 3,830
[
"hwchase17",
"langchain"
]
| Most web applications are developed in Java, so a Java SDK may be needed. | [Feature] supported java sdk | https://api.github.com/repos/langchain-ai/langchain/issues/3829/comments | 3 | 2023-04-30T11:19:17Z | 2023-10-09T16:08:13Z | https://github.com/langchain-ai/langchain/issues/3829 | 1,689,878,359 | 3,829
[
"hwchase17",
"langchain"
]
| I am new to contributing to open source. I have installed poetry 1.4.0 and set up a new conda environment.
While running `poetry install -E all`, I am getting this error:
```
Command ['python', '-I', '-W', 'ignore', '-'] errored with the following return code 2
Error output:
Unknown option: -I
usage: python [option] ... [-c cmd | -m mod | file | -] [arg] ...
Try `python -h' for more information.
Input:
import sys
if hasattr(sys, "real_prefix"):
print(sys.real_prefix)
elif hasattr(sys, "base_prefix"):
print(sys.base_prefix)
else:
print(sys.prefix)
```
How to resolve this issue? @hwchase17 | getting error while running poetry install -E all | https://api.github.com/repos/langchain-ai/langchain/issues/3821/comments | 2 | 2023-04-30T05:55:47Z | 2023-09-10T16:24:48Z | https://github.com/langchain-ai/langchain/issues/3821 | 1,689,793,136 | 3,821 |
[
"hwchase17",
"langchain"
]
| For some questions it gives me an answer but not for others, even though the data is already available in my dataset. How can I improve this? | Why langchain embading missing the information. | https://api.github.com/repos/langchain-ai/langchain/issues/3816/comments | 2 | 2023-04-30T02:51:30Z | 2023-09-10T16:24:53Z | https://github.com/langchain-ai/langchain/issues/3816 | 1,689,756,708 | 3,816
[
"hwchase17",
"langchain"
]
| While the agent seems to find the correct function, I keep getting `Observation: "... is not a valid tool, try another one."` and it struggles to iterate over the other similar solutions from there. Here is an output using the [docs example](https://python.langchain.com/en/latest/modules/agents/tools/multi_input_tool.html).

Environment: Windows 11 WSL
LLM: Llama.cpp
Model: TheBloke/wizardLM-7B-GGML
I don't have a ChatGPT key so I can't say for sure if this is strictly related to Llama.cpp, the model I'm using, or something else in my installation. I also haven't tried other models, but since this is an instruction-based model and the agent actually acknowledges the action it must perform, I don't think that's the root of the problem.
I'm still learning the library but I can provide additional information if you need.
Edit: Not sure how much the built-in tools differ from the Multi-Input tool in terms of agent implementation but I seem to be experiencing the same behavior with the "wikipedia" tool as well.
 | Invalid tool using Llama.cpp | https://api.github.com/repos/langchain-ai/langchain/issues/3815/comments | 2 | 2023-04-30T01:40:12Z | 2024-04-03T14:21:27Z | https://github.com/langchain-ai/langchain/issues/3815 | 1,689,743,256 | 3,815 |
[
"hwchase17",
"langchain"
]
| Whenever I run code like
```
chain = load_qa_chain(llm=flan_t5_xxl, chain_type="map_reduce")
answer = chain({"input_documents": split_docs, "question": query}, return_only_outputs=True)
```
I get first a warning:
`Token indices sequence length is longer than the specified maximum length for this model`
followed by an error, again about there being too many tokens.
Some observations:
1. The error occurs no matter what the document input is: even if there is only a single input document of a few characters.
2. It doesn't happen when the chain_type is `map_rerank`.
3. It doesn't happen using `load_summarize_chain` and `map_reduce` together.
Is there a fix for this? I thought about modifying the tokenizer config but I can't find a way to do that except with locally-loaded models, and to save RAM I prefer to use the model remotely (is that even a practical approach long-term?).
| `load_qa_chain` with `map_reduce` results in "Token indices sequence length" error | https://api.github.com/repos/langchain-ai/langchain/issues/3812/comments | 14 | 2023-04-29T23:47:54Z | 2023-10-05T16:10:23Z | https://github.com/langchain-ai/langchain/issues/3812 | 1,689,721,102 | 3,812 |
[
"hwchase17",
"langchain"
]
| https://github.com/hwchase17/langchain/blob/adcad98bee03ac8486f328b4f316017a6ccfc808/langchain/embeddings/openai.py#L159
Getting "no attribute" error for `tiktoken.model`. Believe that this is because tiktoken has changed their import model, per code [here](https://github.com/openai/tiktoken/blob/main/tiktoken/__init__.py). Change to `tiktoken.encoding_for_model(self.model)`? | Tiktoken import bug? | https://api.github.com/repos/langchain-ai/langchain/issues/3811/comments | 11 | 2023-04-29T23:14:52Z | 2024-07-30T12:32:34Z | https://github.com/langchain-ai/langchain/issues/3811 | 1,689,715,364 | 3,811 |
[
"hwchase17",
"langchain"
]
| I'm trying to implement a basic chatbot that searches over PDF documents. I've been following the examples in the LangChain docs and I've noticed that the answers I get back from different methods are inconsistent.
When I use `RetrievalQA` I get better answers than when I use `ConversationalRetrievalChain`.
I want a chat over a document that contains memory of the conversation so I have to use the latter. I've tried increasing the `search_kwargs` argument to include more context, but it makes no difference.
Any ideas as to why I'm getting inconsistent answers? And how can I make this chatbot more accurate?
## Here is my code:
Initialising pinecone vector store:
```
pinecone.init(
api_key="....",
environment="us-east1-gcp"
)
index_name = "test-index"
namespace = "test-namespace"
vectorstore = Pinecone.from_texts(
[t.page_content for t in texts],
embeddings,
index_name=index_name,
namespace=namespace
)
```
Using retrieval QA chain:
```
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
query = "Tell me about the role management for grades"
docs = docsearch.similarity_search(query)
chain.run(input_documents=docs, question=query)
```
Gives an acceptable answer:
```
" The role management for grades involves..."
```
Using ConversationalRetrievalChain:
```
llm = ChatOpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(
llm,
chain_type="stuff",
prompt=QA_PROMPT
)
retriever = docsearch.as_retriever()
chain = ConversationalRetrievalChain(
combine_docs_chain=doc_chain,
question_generator=question_generator,
retriever=vectorstore.as_retriever()
)
```
Gives an answer that indicates no context:
```
I'm sorry, I cannot provide an answer to this question as there is no mention of a "role management" in the provided context. Can you please provide more information or context for me to assist you better?
``` | ConversationalRetrievalChain gives different answers than Retrieval QA when searching docs | https://api.github.com/repos/langchain-ai/langchain/issues/3809/comments | 6 | 2023-04-29T22:27:40Z | 2023-11-26T16:09:59Z | https://github.com/langchain-ai/langchain/issues/3809 | 1,689,705,722 | 3,809 |
[
"hwchase17",
"langchain"
]
| null | pizza | https://api.github.com/repos/langchain-ai/langchain/issues/3804/comments | 1 | 2023-04-29T21:24:08Z | 2023-04-29T21:45:53Z | https://github.com/langchain-ai/langchain/issues/3804 | 1,689,692,183 | 3,804 |
[
"hwchase17",
"langchain"
]
| The `Pinecone.from_documents()` embeddings-creation/upsert ([based on this example](https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pinecone.html)) produces two unexpected behaviors:
1. Mutates the original `docs` object inplace, such that each entry's `Document.metadata` dict now has a 'text' key that is assigned the value of `Document.page_content`.
2. Does not check whether the addition of this `metadata['text']` entry exceeds the maximum allowable metadata bytes per vector set by Pinecone (40960 bpv), allowing the API call to throw an error in cases where this value is exceeded.
I was sending a batch of documents to Pinecone using `Pinecone.from_documents()`, and was surprised to see the operation fail on this error:
```
ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'date': 'Sat, 29 Apr 2023 04:19:35 GMT', 'x-envoy-upstream-service-time': '5', 'content-length': '115', 'server': 'envoy'})
HTTP response body: {"code":3,"message":"metadata size is 41804 bytes, which exceeds the limit of 40960 bytes per vector","details":[]}
```
Because there was just minimal metadata per each record and I'd already sent about 50,000 successfully to Pinecone in the same call. Then, when I inspected the `docs` list of `Documents` (compiled from `DataFrameLoader().load()`), I was surprised to see the extra `'text'` field in the metadata. It wasn't until I went poking around in [**pinecone.py**](https://github.com/hwchase17/langchain/blob/adcad98bee03ac8486f328b4f316017a6ccfc808/langchain/vectorstores/pinecone.py#L242) that I found this was an added field that was updating the passed-in `Document.metadata` dicts in-place (because of Python's pass-by-sharing rules about mutability).
Suggestions:
* If `metadata['text']` is required (I'm not sure it is for Pinecone upserts?), then make the user do that (error and refuse to upsert if it's not there) rather than modify silently in `.from_texts()`.
* The metadata limit issue would be great to test ahead of the API's HTTP request, could do a quick check on user metadata input to make sure it's not going to get rejected by Pinecone (otherwise warn the user). In my case, I don't want to make smaller chunks of text (my use case involves a certain number of turns of dialogue in each embedded chunk), but I may just write in a check for overflow and truncate the `'text'` metadata accordingly.
* Fail gracefully by catching all `ApiException` errors so that the embeddings-creation and upsert process isn't interrupted.
* Maybe consider something like an `add_text_metadata` flag in the call to `from_documents()` so users have the option to have it done automatically for them?
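A rough sketch of the pre-flight check suggested in the second bullet (the 40960-byte limit comes from the error above; the helper name is made up):
```python
import json

PINECONE_METADATA_LIMIT = 40960  # bytes per vector, per the error message above

def metadata_size_ok(metadata: dict) -> bool:
    # Approximate the serialized size of the metadata as the API would see it.
    return len(json.dumps(metadata, ensure_ascii=False).encode("utf-8")) <= PINECONE_METADATA_LIMIT
```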
I'm pretty new to LangChain and Pinecone, so if I'm missing something or doing it wrong, apologies - otherwise, hope this is useful feedback!
| Pinecone.from_texts() added 'text' metadata field modifies data object passed in as argument AND errors if text size exceeds Pinecone metadata limit per vector | https://api.github.com/repos/langchain-ai/langchain/issues/3800/comments | 7 | 2023-04-29T19:57:58Z | 2023-09-23T16:06:56Z | https://github.com/langchain-ai/langchain/issues/3800 | 1,689,671,027 | 3,800 |
[
"hwchase17",
"langchain"
]
| https://python.langchain.com/en/latest/use_cases/question_answering/semantic-search-over-chat.html
https://github.com/hwchase17/langchain/blob/master/docs/use_cases/question_answering/semantic-search-over-chat.ipynb



Apparently, the `split_documents` function's input needs to have `page_content` and `metadata` attributes, but it is getting a list of strings as input here. | AttributeError: 'str' object has no attribute 'page_content' | https://api.github.com/repos/langchain-ai/langchain/issues/3799/comments | 8 | 2023-04-29T19:41:25Z | 2024-06-03T12:38:54Z | https://github.com/langchain-ai/langchain/issues/3799 | 1,689,666,728 | 3,799
[
"hwchase17",
"langchain"
]
| In the documentation it is mentioned that the toolkit can be created without an LLM, but the LLM is actually one of the required fields for the toolkit.
Instead of this,
```
toolkit = SQLDatabaseToolkit(db=db)
agent_executor = create_sql_agent(
llm=OpenAI(temperature=0),
toolkit=toolkit,
verbose=True
)
```
it should be this
```
llm=OpenAI(temperature=0)
toolkit = SQLDatabaseToolkit(db=db,llm=llm)
agent_executor = create_sql_agent(
llm=llm,
toolkit=toolkit,
verbose=True
)
```
Please do correct me if I'm wrong
**Links to the docs**
https://python.langchain.com/en/latest/modules/agents/toolkits/examples/sql_database.html | Docs to include LLM before creating SQL Database Agent | https://api.github.com/repos/langchain-ai/langchain/issues/3798/comments | 5 | 2023-04-29T19:30:26Z | 2023-11-30T16:10:21Z | https://github.com/langchain-ai/langchain/issues/3798 | 1,689,663,886 | 3,798 |
[
"hwchase17",
"langchain"
]
| https://github.com/hwchase17/langchain/tree/master/langchain/docstore/base.py
The search method of base.py states:
"""Search for document.
If page exists, return the page summary, and a Document object.
If page does not exist, return similar entries.
"""
The signature for the first case is Union[str,Document]
The signature for the second case should be List[Document] or List[Union[str,Document]]
Per the documentation, the resulting signature for the search method should be:
Union[ Union[str,Document], List[Document] ]
or
Union[ Union[str,Document], List[Union[str,Document]] ] | The signature and derived Implementations of DocStore(ABC).search don't match the documentation of the base class | https://api.github.com/repos/langchain-ai/langchain/issues/3794/comments | 4 | 2023-04-29T17:38:34Z | 2023-11-29T16:11:04Z | https://github.com/langchain-ai/langchain/issues/3794 | 1,689,632,702 | 3,794 |
[
"hwchase17",
"langchain"
]
| chain = load_qa_chain(llm, chain_type="stuff")
answer = chain.run(input_documents=similar_docs, question=query) --> This returns an attribute error, as below
8 frames
/usr/local/lib/python3.10/dist-packages/langchain/chains/combine_documents/base.py in format_document(doc, prompt)
14 def format_document(doc: Document, prompt: BasePromptTemplate) -> str:
15 """Format a document into a string based on a prompt template."""
---> 16 base_info = {"page_content": doc.page_content}
17 base_info.update(doc.metadata)
18 missing_metadata = set(prompt.input_variables).difference(base_info)
AttributeError: 'tuple' object has no attribute 'page_content' | Attribute error tuple has no attribute 'page_content' | https://api.github.com/repos/langchain-ai/langchain/issues/3790/comments | 13 | 2023-04-29T15:30:33Z | 2023-11-05T16:07:24Z | https://github.com/langchain-ai/langchain/issues/3790 | 1,689,596,758 | 3,790 |
[
"hwchase17",
"langchain"
]
| receive chat history and custom knowledge source | is there a chain type equivalent to ConversationalRetrievalQA in JS | https://api.github.com/repos/langchain-ai/langchain/issues/3789/comments | 1 | 2023-04-29T15:12:41Z | 2023-09-10T16:25:04Z | https://github.com/langchain-ai/langchain/issues/3789 | 1,689,591,831 | 3,789 |
[
"hwchase17",
"langchain"
]
| Hi,
I've been playing with the [SelfQueryRetriever examples](https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query_retriever.html) but am having a few issues with allowed operators and valid Comparator/s.
**Example 2:** This example only specifies a filter
`retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")`
This builds the following query:
`query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)`
But it fails with:
`HTTP response body: {"code":3,"message":"$Comparator.GT is not a valid operator","details":[]}`
For other examples, I see:
`HTTP response body: {"code":3,"message":"only logical operators as $or and $and are allowed at top level, got $Operator.AND","details":[]}`
I'm using the following on MacOS:
- Python 3.11.3
- langchain 0.0.152
- lark 1.1.5
With a Pinecone index:
- Environment: us-central1-gcp
- Metric: cosine
- Pod Type: p1.x1
- Dimensions: 1536
This will be a killer search feature, so I'd be very grateful if anybody is able to shed some light on this.
Thanks.
| SelfQueryRetriever: invalid operators/comparators | https://api.github.com/repos/langchain-ai/langchain/issues/3788/comments | 13 | 2023-04-29T14:37:53Z | 2024-08-05T21:58:09Z | https://github.com/langchain-ai/langchain/issues/3788 | 1,689,581,463 | 3,788 |
[
"hwchase17",
"langchain"
]
| Hi!
I am using GPT4All with agents and Wikipedia as a tool.
I'm getting this error: `is not a valid tool, try another one.`.
I have noticed that the action contains this: `Action: Use Wikipedia to confirm the accuracy of this information.`
I think it should be `Action: wikipedia`. | Is not a valid tool, try another one. | https://api.github.com/repos/langchain-ai/langchain/issues/3785/comments | 6 | 2023-04-29T13:32:19Z | 2023-09-24T16:06:41Z | https://github.com/langchain-ai/langchain/issues/3785 | 1,689,561,941 | 3,785 |
[
"hwchase17",
"langchain"
]
| # Description
There is a typo on the [Components/Schema/Text documentation page](https://docs.langchain.com/docs/components/schema/text). The third sentence starts with:
```
Therefor, a lot of the interfaces...
```
It should be changed to:
```
Therefore, a lot of the interfaces...
``` | DOCS: Typo on Components/Schema/Text page | https://api.github.com/repos/langchain-ai/langchain/issues/3784/comments | 7 | 2023-04-29T12:10:04Z | 2023-12-19T00:51:18Z | https://github.com/langchain-ai/langchain/issues/3784 | 1,689,539,500 | 3,784 |
[
"hwchase17",
"langchain"
]
| Hey guys, I wanted to ask if I can use the SQL Database Agent and get the inference from the OpenAI gpt-3.5-turbo model.
And if so, how can I do that?
Tried to replace the llm argument on the initialization of the agent executor from OpenAI to OpenAIChat and a bunch of other stuff.
But none of them seems to work.
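Roughly, this is the kind of setup I am trying to get working (a sketch, not verified; the database URI is a placeholder):
```python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.chat_models import ChatOpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder URI
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

agent_executor = create_sql_agent(
    llm=llm,
    toolkit=SQLDatabaseToolkit(db=db, llm=llm),
    verbose=True,
)
agent_executor.run("How many rows are in the users table?")
```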
Thanks! | Using the SQL Database Agent with inference from OpenAI gpt-3.5-turbo model. | https://api.github.com/repos/langchain-ai/langchain/issues/3783/comments | 7 | 2023-04-29T12:01:17Z | 2023-09-24T16:06:46Z | https://github.com/langchain-ai/langchain/issues/3783 | 1,689,537,068 | 3,783 |
[
"hwchase17",
"langchain"
]
| I am trying to follow the quick start guide for using agents from https://python.langchain.com/en/latest/getting_started/getting_started.html#agents-dynamically-call-chains-based-on-user-input. While following the steps I am seeing this error:
```
ZeroShotAgent does not support multi-input tool Calculator.
```
I am using `langchain==0.0.152` and python version `Python 3.8.10`
Python code I executed
```
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
```
The Error I am seeing
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/tmp/ipykernel_1917316/1048650410.py in <module>
1 llm = OpenAI(temperature=0)
2 tools = load_tools(["serpapi", "llm-math"], llm=llm)
----> 3 agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
~/Code/langchainApp/venv/lib/python3.8/site-packages/langchain/agents/initialize.py in initialize_agent(tools, llm, agent, callback_manager, agent_path, agent_kwargs, **kwargs)
50 agent_cls = AGENT_TO_CLASS[agent]
51 agent_kwargs = agent_kwargs or {}
---> 52 agent_obj = agent_cls.from_llm_and_tools(
53 llm, tools, callback_manager=callback_manager, **agent_kwargs
54 )
~/Code/langchainApp/venv/lib/python3.8/site-packages/langchain/agents/mrkl/base.py in from_llm_and_tools(cls, llm, tools, callback_manager, output_parser, prefix, suffix, format_instructions, input_variables, **kwargs)
99 ) -> Agent:
100 """Construct an agent from an LLM and tools."""
--> 101 cls._validate_tools(tools)
102 prompt = cls.create_prompt(
103 tools,
~/Code/langchainApp/venv/lib/python3.8/site-packages/langchain/agents/mrkl/base.py in _validate_tools(cls, tools)
123 @classmethod
124 def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:
--> 125 super()._validate_tools(tools)
126 for tool in tools:
127 if tool.description is None:
~/Code/langchainApp/venv/lib/python3.8/site-packages/langchain/agents/agent.py in _validate_tools(cls, tools)
457 for tool in tools:
458 if not tool.is_single_input:
--> 459 raise ValueError(
460 f"{cls.__name__} does not support multi-input tool {tool.name}."
461 )
ValueError: ZeroShotAgent does not support multi-input tool Calculator.
```
Let me know if more information is needed or if this is expected behavior then documentation changes are needed. | ZeroShotAgent does not support multi-input tool Calculator. | https://api.github.com/repos/langchain-ai/langchain/issues/3781/comments | 9 | 2023-04-29T11:28:21Z | 2023-10-30T12:34:46Z | https://github.com/langchain-ai/langchain/issues/3781 | 1,689,528,678 | 3,781 |
[
"hwchase17",
"langchain"
]
| While working on https://github.com/hwchase17/langchain/issues/3722 I have noticed that there might be a bug in the current implementation of the OpenAI length safe embeddings in `_get_len_safe_embeddings`, which before #3722 was actually the **default implementation** (after https://github.com/hwchase17/langchain/pull/2330).
It appears the weights used are constant (the **length** of the embedding vector, 1536) and NOT the **number of tokens** in each batch, as in the reference implementation at https://github.com/openai/openai-cookbook/blob/main/examples/Embedding_long_inputs.ipynb
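For comparison, a sketch of the token-count weighting used in the referenced cookbook notebook (illustrative, not the exact cookbook code):
```python
import numpy as np

def combine_chunk_embeddings(chunk_embeddings, chunk_token_counts):
    # Weight each chunk's embedding by the number of tokens it covers,
    # then re-normalize the averaged vector to unit length.
    avg = np.average(chunk_embeddings, axis=0, weights=chunk_token_counts)
    norm = np.linalg.norm(avg)
    return (avg / norm).tolist() if norm > 0 else avg.tolist()
```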
| OpenAI embedding use invalid/constant weights | https://api.github.com/repos/langchain-ai/langchain/issues/3777/comments | 1 | 2023-04-29T05:57:29Z | 2023-09-15T22:12:54Z | https://github.com/langchain-ai/langchain/issues/3777 | 1,689,407,698 | 3,777 |
[
"hwchase17",
"langchain"
]
| ### Description
Currently, the `AzureOpenAI` and `AzureChatOpenAI` classes call the same underlying SDK, but the developer interaction with both is different. The goal of this issue is to update the `AzureOpenAI` class to use the following parameters like `AzureChatOpenAI`:
```python
deployment_name: str = ""
openai_api_type: str = "azure"
openai_api_base: str = ""
openai_api_version: str = ""
openai_api_key: str = ""
```
This way, developer interaction with both `AzureOpenAI` and `AzureChatOpenAI` is the same.
### Approach
Create a class called `AzureOpenAIMixin` that contains the code from `AzureChatOpenAI` and is inherited by `AzureOpenAI` and `AzureChatOpenAI` (see #3635).
### Proposed Implementation
```python
class AzureOpenAIMixin(abc.ABC, BaseModel):
"""Wrapper around Azure OpenAI Chat Completion API. To use this class you
must have a deployed model on Azure OpenAI. Use `deployment_name` in the
constructor to refer to the "Model deployment name" in the Azure portal.
In addition, you should have the ``openai`` python package installed, and the
following environment variables set or passed in constructor in lower case:
- ``OPENAI_API_TYPE`` (default: ``azure``)
- ``OPENAI_API_KEY``
- ``OPENAI_API_BASE``
- ``OPENAI_API_VERSION``
For example, if you have `gpt-35-turbo` deployed, with the deployment name
`35-turbo-dev`, the constructor should look like:
.. code-block:: python
AzureChatOpenAI(
deployment_name="35-turbo-dev",
openai_api_version="2023-03-15-preview",
)
Be aware the API version may change.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
"""
deployment_name: str = ""
openai_api_type: str = "azure"
openai_api_base: str = ""
openai_api_version: str = ""
openai_api_key: str = ""
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
openai_api_key = get_from_dict_or_env(
values,
"openai_api_key",
"OPENAI_API_KEY",
)
openai_api_base = get_from_dict_or_env(
values,
"openai_api_base",
"OPENAI_API_BASE",
)
openai_api_version = get_from_dict_or_env(
values,
"openai_api_version",
"OPENAI_API_VERSION",
)
openai_api_type = get_from_dict_or_env(
values,
"openai_api_type",
"OPENAI_API_TYPE",
)
try:
import openai
openai.api_type = openai_api_type
openai.api_base = openai_api_base
openai.api_version = openai_api_version
openai.api_key = openai_api_key
except ImportError:
raise ValueError(
"Could not import openai python package. "
"Please it install it with `pip install openai`."
)
try:
values["client"] = openai.ChatCompletion
except AttributeError:
raise ValueError(
"`openai` has no `ChatCompletion` attribute, this is likely "
"due to an old version of the openai package. Try upgrading it "
"with `pip install --upgrade openai`."
)
if values["n"] < 1:
raise ValueError("n must be at least 1.")
if values["n"] > 1 and values["streaming"]:
raise ValueError("n must be 1 when streaming.")
return values
@property
def _default_params(self) -> Dict[str, Any]:
"""Get the default parameters for calling OpenAI API."""
return {
**super()._default_params,
"engine": self.deployment_name,
}
class AzureOpenAI(BaseOpenAI, AzureOpenAIMixin):
@property
def _default_params(self) -> Dict[str, Any]:
"""Get the default parameters for calling OpenAI API."""
return {
**super()._default_params,
"engine": self.deployment_name,
}
class AzureChatOpenAI(ChatOpenAI, AzureOpenAIMixin):
@property
def _identifying_params(self) -> Mapping[str, Any]:
return {
**{"deployment_name": self.deployment_name},
**super()._identifying_params,
}
@property
def _invocation_params(self) -> Dict[str, Any]:
return {**{"engine": self.deployment_name}, **super()._invocation_params}
``` | [Azure OpenAI] Merging validate_environment from AzureChatOpenAI to AzureOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/3769/comments | 2 | 2023-04-29T04:12:41Z | 2023-09-10T16:25:14Z | https://github.com/langchain-ai/langchain/issues/3769 | 1,689,355,370 | 3,769 |
[
"hwchase17",
"langchain"
]
| Commit: https://github.com/hwchase17/langchain/commit/4654c58f7238e10b35544633bd780b73bbb75c75
This commit appears to have broken the Quick Start example for Agents: Dynamically Call Chains Based on User Input. When attempting to follow along, this now causes the error `ValueError: ZeroShotAgent does not support multi-input tool Calculator`. | Quick Start example for Agents: Dynamically Call Chains Based on User Input is broken | https://api.github.com/repos/langchain-ai/langchain/issues/3757/comments | 5 | 2023-04-29T01:02:41Z | 2023-05-01T21:42:18Z | https://github.com/langchain-ai/langchain/issues/3757 | 1,689,291,529 | 3,757 |
[
"hwchase17",
"langchain"
]
| Hey guys,
the code is absolutely the same as https://github.com/hwchase17/langchain/issues/3750, just with a German prompt.
The prompt just asks for recommendations about things to do in Middle Franconia (part of Germany).
```
% python3 app.py "Nenne mir 3 Ausflugsziele in Mittelfranken?"
llama.cpp: loading model from /Users/myadmin/dalai/llama/models/13B/13b-ggml-model-q4_0.bin
llama_model_load_internal: format = ggjt v1 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size = 73.73 KB
llama_model_load_internal: mem required = 9807.47 MB (+ 1608.00 MB per state)
llama_init_from_file: kv self size = 400.00 MB
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |
Answer the following questions as best you can. You have access to the following tools:
Google Search: A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query.
Wikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query.
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Google Search, Wikipedia]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
{agent_scratchpad}
-- Serving request for input: Nenne mir 3 Ausflugsziele in Mittelfranken?
> Entering new AgentExecutor chain...
Thought: what to do
Action: Google Search
Google Search: "Ausflugsziele mittelfranken"
Results:
- https://www.mittelfrankenschau.de/service/kultur/ausflรผge-mitten-fuer-franken/
- https://mw21.de/blog/flaechen/10-top-ausflugsziele-in-mittel-franken
- http://www.m-frankfurt.de/ausflugsziele/
Action Input: "mittelfranken"
Observation: Google Search
Google Search: "Ausflugsziele mittelfranken"
Results:
- https://www.mittelfrankenschau.de/service/kultur/ausflรผge-mitten-fuer-franken/
- https://mw21.de/blog/flaechen/10-top-ausflugsziele-in-mittel-franken
- http://www.m-frankfurt.de/ausflugsziele/ is not a valid tool, try another one.
Thought:Traceback (most recent call last):
File "/Users/myadmin/lc-serve/app.py", line 54, in <module>
ask(sys.argv[1])
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/lcserve/backend/decorators.py", line 14, in sync_wrapper
return func(*args, **kwargs)
File "/Users/myadmin/lc-serve/app.py", line 47, in ask
return agent_executor.run(input)
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py", line 213, in run
return self(args[0])[self.output_keys[0]]
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py", line 796, in _call
next_step_output = self._take_next_step(
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py", line 676, in _take_next_step
output = self.agent.plan(intermediate_steps, **inputs)
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py", line 384, in plan
full_output = self.llm_chain.predict(**full_inputs)
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/llm.py", line 151, in predict
return self(kwargs)[self.output_key]
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/llm.py", line 57, in _call
return self.apply([inputs])[0]
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/llm.py", line 118, in apply
response = self.generate(input_list)
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/llm.py", line 62, in generate
return self.llm.generate_prompt(prompts, stop)
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/llms/base.py", line 107, in generate_prompt
return self.generate(prompt_strings, stop=stop)
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/llms/base.py", line 140, in generate
raise e
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/llms/base.py", line 137, in generate
output = self._generate(prompts, stop=stop)
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/llms/base.py", line 324, in _generate
text = self._call(prompt, stop=stop)
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/llms/llamacpp.py", line 222, in _call
for token in self.stream(prompt=prompt, stop=stop):
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/llms/llamacpp.py", line 268, in stream
for chunk in result:
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/llama_cpp/llama.py", line 426, in _create_completion
raise ValueError(
ValueError: Requested tokens exceed context window of 512
```
I am guessing (no idea really) that some of the search results might be too big, but I am also uncertain how to use a TextSplitter between the load_tools call and the Google search result parsing.
Any ideas would be greatly appreciated!
Thanks in advance! | ValueError: Requested tokens exceed context window of 512 | https://api.github.com/repos/langchain-ai/langchain/issues/3751/comments | 8 | 2023-04-28T23:42:51Z | 2023-05-24T20:06:54Z | https://github.com/langchain-ai/langchain/issues/3751 | 1,689,259,604 | 3,751 |
[
"hwchase17",
"langchain"
]
| Hey there,
thanks for langchain! It's super awesome!
I am currently trying to write a simple REST API, but I am getting somewhat random errors. Sometimes (about 1 in 15 runs) it's this:
```
% python3 app.py "Who won the superbowl the year justin bieber was born?"
llama.cpp: loading model from /Users/myadmin/dalai/llama/models/13B/13b-ggml-model-q4_0.bin
llama_model_load_internal: format = ggjt v1 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size = 73.73 KB
llama_model_load_internal: mem required = 9807.47 MB (+ 1608.00 MB per state)
llama_init_from_file: kv self size = 400.00 MB
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |
Answer the following questions as best you can. You have access to the following tools:
Google Search: A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query.
Wikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, historical events, or other subjects. Input should be a search query.
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Google Search, Wikipedia]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
{agent_scratchpad}
-- Serving request for input: Who won the superbowl the year justin bieber was born?
> Entering new AgentExecutor chain...
Traceback (most recent call last):
File "/Users/myadmin/lc-serve/app.py", line 54, in <module>
ask(sys.argv[1])
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/lcserve/backend/decorators.py", line 14, in sync_wrapper
return func(*args, **kwargs)
File "/Users/myadmin/lc-serve/app.py", line 47, in ask
return agent_executor.run(input)
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py", line 213, in run
return self(args[0])[self.output_keys[0]]
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py", line 796, in _call
next_step_output = self._take_next_step(
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py", line 676, in _take_next_step
output = self.agent.plan(intermediate_steps, **inputs)
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/agents/agent.py", line 385, in plan
return self.output_parser.parse(full_output)
File "/Users/myadmin/Library/Python/3.9/lib/python/site-packages/langchain/agents/mrkl/output_parser.py", line 26, in parse
raise OutputParserException(f"Could not parse LLM output: `{text}`")
langchain.schema.OutputParserException: Could not parse LLM output: `
Thought: I don't know who won that year... or what happened in the next 3 years.
Action: Google Search
Google Search: Super Bowl`
```
The script is as follows:
```python
# app.py
#model = "/Users/myadmin/dalai/"
model = "/Users/myadmin/dalai/llama/models/13B/13b-ggml-model-q4_0.bin"
#model = "/Users/myadmin/dalai/llama/models/13B/ggml-vic13b-q5_1.bin"
#model = "/Users/myadmin/dalai/alpaca/models/30B/ggml-model-q4_0.bin"
import sys, os
os.environ["GOOGLE_CSE_ID"] = "xyz"
os.environ["GOOGLE_API_KEY"] = "xyz"
from langchain import LLMChain
from langchain.llms import LlamaCpp
from langchain.agents import AgentExecutor, Tool, ZeroShotAgent, load_tools
from lcserve import serving
prefix = """Answer the following questions as best you can. You have access to the following tools:"""
#suffix = """Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Args"
suffix = """Begin!
Question: {input}
{agent_scratchpad}"""
llm = LlamaCpp(model_path=model, verbose=True)
tools = load_tools(["google-search", "wikipedia"], llm=llm)
prompt = ZeroShotAgent.create_prompt(
tools,
prefix=prefix,
suffix=suffix,
input_variables=["input", "agent_scratchpad"],
)
print(prompt.template)
llm_chain = LLMChain(llm=llm, prompt=prompt)
tool_names = [tool.name for tool in tools]
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
@serving
def ask(input: str) -> str:
print("-- Serving request for input: %s" % input)
return agent_executor.run(input)
if __name__ == "__main__":
if len(sys.argv) == 1:
ask('How many people live in canada as of 2023?')
else:
ask(sys.argv[1])
```
Not quite sure what I'm doing wrong here, or if it's just purely a random thing I should catch?
Thank you very much in advance! | langchain.schema.OutputParserException: Could not parse LLM output: ` | https://api.github.com/repos/langchain-ai/langchain/issues/3750/comments | 7 | 2023-04-28T23:35:05Z | 2024-03-16T23:01:29Z | https://github.com/langchain-ai/langchain/issues/3750 | 1,689,255,999 | 3,750 |
[
"hwchase17",
"langchain"
]
| https://github.com/hwchase17/langchain/blob/72c5c15f7fdc1918880e3cfd0949199e5a0b5bda/langchain/retrievers/document_compressors/chain_extract.py#L67-L77
Should allow passing arbitrary arguments to the `LLMChain`, for example `verbose`, much like `BaseQAWithSourcesChain` does: https://github.com/hwchase17/langchain/blob/72c5c15f7fdc1918880e3cfd0949199e5a0b5bda/langchain/chains/qa_with_sources/base.py#L40-L47 | `LLMChainExtractor.from_llm` should accept `kwargs` for the internal `LLMChain` | https://api.github.com/repos/langchain-ai/langchain/issues/3747/comments | 0 | 2023-04-28T23:31:03Z | 2023-04-29T04:21:25Z | https://github.com/langchain-ai/langchain/issues/3747 | 1,689,254,467 | 3,747 |
[
"hwchase17",
"langchain"
]
|
The following code broke after updating; it was working fine before:
```py
def get_chat_agent(memory, tools):
return initialize_agent(
tools,
ChatOpenAI(verbose=True),
agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
memory=memory,
system_message=CHAT_AGENT_SYSTEM_MESSAGE,
return_intermediate_steps=True,
verbose=True,
)
```
Error message:
```
Cell In[2], line 37, in get_chat_agent(memory, tools)
36 def get_chat_agent(memory, tools):
---> 37 return initialize_agent(
38 tools,
39 ChatOpenAI(verbose=True),
40 agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
41 memory=memory,
42 system_message=CHAT_AGENT_SYSTEM_MESSAGE,
43 return_intermediate_steps=True,
44 verbose=True,
45 )
...
195 @property
196 def _prompt_type(self) -> str:
--> 197 raise NotImplementedError
NotImplementedError:
``` | :bug: Breaking changes introduced into previous code after updating to 0.0.152 | https://api.github.com/repos/langchain-ai/langchain/issues/3743/comments | 2 | 2023-04-28T23:03:42Z | 2023-09-10T16:25:19Z | https://github.com/langchain-ai/langchain/issues/3743 | 1,689,241,785 | 3,743 |
[
"hwchase17",
"langchain"
]
| The default prompt of `load_qa_with_sources_chain` in `langchain.chains.qa_with_sources` (`langchain` 0.0.147 and the last few versions) contains user information (probably a question someone had, or an example); please clean it.
**The default prompt should be (I think):**
```
template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.
QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER:"""
```
**The current default prompt in `load_qa_with_sources_chain`:**
`template='Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES"). \nIf you don\'t know the answer, just say that you don\'t know. Don\'t try to make up an answer.\nALWAYS return a "SOURCES" part in your answer.\n\nQUESTION: Which state/country\'s law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russiaโs Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we wonโt stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLetโs use this moment to reset. Letโs stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLetโs stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe canโt change how divided weโve been. But we can change how we move forwardโon COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. 
\n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans whoโd grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\nSource: 24-pl\nContent: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as Iโve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd Iโm taking robust action to make sure the pain of our sanctions is targeted at Russiaโs economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about whatโs happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\nSource: 5-pl\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nItโs based on DARPAโthe Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purposeโto drive breakthroughs in cancer, Alzheimerโs, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americansโtonight , we have gathered in a sacred spaceโthe citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. \n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:'`
| Bug reporting in `load_qa_with_sources_chain` promt | https://api.github.com/repos/langchain-ai/langchain/issues/3737/comments | 6 | 2023-04-28T21:12:29Z | 2023-09-24T16:06:51Z | https://github.com/langchain-ai/langchain/issues/3737 | 1,689,140,634 | 3,737 |
[
"hwchase17",
"langchain"
]
| I am interested in making use of Prompt serialization in order to allow for more modular use of models / chains.
I noticed when partial variables were initially added in [this PR](https://github.com/hwchase17/langchain/pull/1308), there was some discussion about their interaction with serialized prompts, which resulted in [these lines](https://github.com/hwchase17/langchain/blob/e3b7a20454cea592fc6d0a0d91c36206e8ad6790/langchain/prompts/base.py#L203-L204) being added to disable serialization of prompts with partial variables.
In my testing I found that partial variables seem to work well as is, if those lines are removed, as done in [this PR](https://github.com/hwchase17/langchain/pull/3734).
- https://github.com/hwchase17/langchain/pull/3734
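For reference, a hedged illustration of the round-trip I am testing (the template is hypothetical; on the current code `save` still raises a ValueError, so this only works with the guard removed):
```python
# Hypothetical example, assuming the serialization guard is removed (see PR #3734).
from langchain.prompts import PromptTemplate, load_prompt

prompt = PromptTemplate(
    template="{greeting}, {name}!",
    input_variables=["name"],
    partial_variables={"greeting": "Hello"},
)
prompt.save("prompt_with_partials.json")      # raises ValueError on current main
reloaded = load_prompt("prompt_with_partials.json")
print(reloaded.format(name="Ada"))            # expected: "Hello, Ada!"
```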
I'm happy to help with developing this feature, but would be interested to hear more about what the reasoning was for initially disabling it, as I am sure there are edge cases I am missing. Thanks!
Cases to be investigated:
- [ ] Partial Variable needs to be redefined after the PROMPT is loaded back from serialization | Serializing Prompts with Partial Variables | https://api.github.com/repos/langchain-ai/langchain/issues/3735/comments | 2 | 2023-04-28T21:06:22Z | 2023-09-17T17:21:49Z | https://github.com/langchain-ai/langchain/issues/3735 | 1,689,135,358 | 3,735 |
[
"hwchase17",
"langchain"
]
| Hello,
my code is reading the "state_of_the_union" text, which I converted to a PDF.
chromadb.__version__ is '0.3.21'.
My code is:
```python
# LLM model
model_path = f"{model_dir}gpt-neo-2.7B"
generate_text = pipeline('text-generation', model = model_path,
    max_new_tokens = 100, temperature = 0.99, top_p=0.99,
    repetition_penalty=1.4, use_cache=True, do_sample = True)
hf_pipeline = HuggingFacePipeline(pipeline=generate_text)

# embedding model
model_path = f"{model_dir}all-mpnet-base-v2"
hf_embed = HuggingFaceEmbeddings(model_name = model_path)

# load data
data_path = f"{data_dir}state_of_the_union.pdf"
loader = PyPDFLoader(data_path)
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
db = Chroma.from_documents(texts, hf_embed)

query = "Who is Michael Jackson?"  # out of context
retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": 2})
# retriever = db.as_retriever()

# create a chain to answer questions
qa = RetrievalQA.from_chain_type(
    llm = hf_pipeline, chain_type="stuff", retriever=retriever,
    return_source_documents=True)
# tried with other chain types as well
result = qa({"query": query})
print(result)
```
My question is: why is the function returning results from the source document that don't even match my query?
Results
{'query': 'Who is Michael Jackson?', 'result': ' We can thank him again. \nHe was the man who saved us from bankruptcy four times. \n \nIn 1983, he went to Washington and worked with Congress to enact the Gramm-Leach-Bliley Act creating \nthe Federal Home Loan Mortgage Corporation and the Community Reinvestment Act. \n \nThese two laws allowed banks to start bringing into the mortgage lending market products that \nwould not have made sense ten years earlier. ', 'source_documents': [Document(page_content='And I did that 4 days ago, when I nominated ....', metadata={'source': 'LLM_Learnings\\data\\state_union.pdf', 'page': 22}), Document(page_content='Just last year, 55 Fortune 500 corporations earned $40 billion in profits and paid zero dollars in federal \nincome tax. \n \nThatโs simply not fair. Thatโs why Iโve proposed a 15% minimum .....', metadata={'source': 'LLM_Learnings\\data\\state_union.pdf', 'page': 14})]}
Other options, such as load_qa_chain or FAISS, seem to be working OK:
```python
docs = db.as_retriever().get_relevant_documents(query)
# docs = db.similarity_search_with_score(query)
chain = load_qa_chain(llm = hf_pipeline, chain_type="stuff", prompt = PROMPT)
# gives an error when no embeddings for the given query are found in hf_embed
try:
    result = chain({"input_documents": docs, "question": query}, return_only_outputs=False)
except IndexError:
    result = chain({"input_documents": '', "question": query}, return_only_outputs=False)

#### OR using FAISS
try:
    docs_db = faiss.similarity_search_with_relevance_scores(query, k=2)
    docs_db = [x[0] for x in docs_db]
except ValueError:
    docs_db = ""
chain = load_qa_chain(hf_pipeline, chain_type="stuff", prompt = PROMPT)
result = chain({"input_documents": docs_db, "question": query}, return_only_outputs=False)
```
Any help or advice would be greatly appreciated.
Regards
Akbar
| Strange document similarity results | https://api.github.com/repos/langchain-ai/langchain/issues/3731/comments | 2 | 2023-04-28T20:09:11Z | 2023-10-09T16:08:18Z | https://github.com/langchain-ai/langchain/issues/3731 | 1,689,078,087 | 3,731 |
[
"hwchase17",
"langchain"
]
| ```
from langchain.document_loaders import DirectoryLoader
loader = DirectoryLoader('./server', glob="**/*.md")
data = loader.load()
```
Error
```
from pdfminer.utils import open_filename
ImportError: cannot import name 'open_filename' from 'pdfminer.utils' (/usr/local/lib/python3.8/dist-packages/pdfminer/utils.py)
```
Langchain version: '0.0.152'
| Errors with DirectoryLoader | https://api.github.com/repos/langchain-ai/langchain/issues/3726/comments | 9 | 2023-04-28T18:16:21Z | 2024-03-26T16:04:51Z | https://github.com/langchain-ai/langchain/issues/3726 | 1,688,958,540 | 3,726 |
[
"hwchase17",
"langchain"
]
| If you have a look at: [https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html](https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html), you will see that it tells to
`pip install pyllamacpp`
This is because the class GPT4All (from langchain.llms import GPT4All) will load
`from pyllamacpp.model import Model as GPT4AllModel`
In other words, the GPT4All models are loaded using pyllamacpp ([https://github.com/abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python)). However, this does not work anymore. The LLMs downloaded from [https://github.com/nomic-ai/pygpt4all](https://github.com/nomic-ai/pygpt4all) cannot be loaded with pyllamacpp. The reason is that the gpt4all models now have their own Python bindings --> see [https://github.com/nomic-ai/pygpt4all](https://github.com/nomic-ai/pygpt4all).
So, in order to use the gpt4all models, the regular pyllamacpp cannot be used; instead, the Python bindings provided by nomic-ai ([https://github.com/nomic-ai/pygpt4all](https://github.com/nomic-ai/pygpt4all)) must be used.
Updating the documentation is one thing; adding the nomic-ai official Python bindings is the second modification that is required. | GPT4All model - Update required in the documentation | https://api.github.com/repos/langchain-ai/langchain/issues/3725/comments | 1 | 2023-04-28T18:03:24Z | 2023-09-15T22:12:53Z | https://github.com/langchain-ai/langchain/issues/3725 | 1,688,945,178 | 3,725 |
[
"hwchase17",
"langchain"
]
| https://github.com/hwchase17/langchain/blob/1bf1c37c0cccb7c8c73d87ace27cf742f814dbe5/langchain/embeddings/openai.py#L210-L211
Means that the length safe embedding method is "always" used, initial implementation https://github.com/hwchase17/langchain/pull/991 has the `embedding_ctx_length` set to -1 (meaning you had to opt-in for the length safe method), https://github.com/hwchase17/langchain/pull/2330 changed that to max length of OpenAI embeddings v2, meaning the length safe method is used at all times.
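For illustration, a rough, hedged sketch of the kind of conditional meant here (the helper names are hypothetical placeholders, not the actual `OpenAIEmbeddings` internals):
```python
# Hypothetical sketch: only fall back to the chunked, length-safe path when a
# text actually exceeds the model's context length.
import tiktoken

def embed_with_optional_length_safety(texts, model_name, ctx_length,
                                      plain_embed, len_safe_embed):
    enc = tiktoken.encoding_for_model(model_name)
    if all(len(enc.encode(t)) <= ctx_length for t in texts):
        return plain_embed(texts)      # short inputs: one cheap call
    return len_safe_embed(texts)       # long inputs: chunk + weighted average
```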
How about changing that if branch to use length safe method only when needed, meaning when the text is longer than the max context length? | `OpenAIEmbeddings` should use length safe embedding method only when needed | https://api.github.com/repos/langchain-ai/langchain/issues/3722/comments | 1 | 2023-04-28T16:42:26Z | 2023-04-29T03:11:38Z | https://github.com/langchain-ai/langchain/issues/3722 | 1,688,845,073 | 3,722 |
[
"hwchase17",
"langchain"
]
| Type: Performance improvement
Issue: Most CPUs today are multi-core: 4 cores are common, and some have 8 or more. The [DirectoryLoader](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/directory_loader.html) uses a single core and misses the opportunity to leverage the other cores. Using multiple cores would divide the document-loading time roughly by the number of cores, as sketched below.
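A hedged sketch of the idea (this is not the `DirectoryLoader` API; it simply parallelizes per-file loading with a process pool and a loader class of your choice):
```python
# Hedged sketch: load files in parallel with a process pool instead of serially.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

from langchain.document_loaders import TextLoader

def _load_one(path: str):
    return TextLoader(path).load()

def load_directory_parallel(root: str, glob: str = "**/*.md", workers: int = 4):
    paths = [str(p) for p in Path(root).glob(glob)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(_load_one, paths)
    return [doc for docs in results for doc in docs]
```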
Divide and conquer would be a good strategy.
| Multi-core loader | https://api.github.com/repos/langchain-ai/langchain/issues/3720/comments | 1 | 2023-04-28T15:29:25Z | 2023-09-10T16:25:30Z | https://github.com/langchain-ai/langchain/issues/3720 | 1,688,753,825 | 3,720 |
[
"hwchase17",
"langchain"
]
| I recommend modifying the `PythonREPL` class in the provided code to raise syntax exceptions when the command string has invalid syntax. Currently, the `run` method uses a `try-except` block to catch any exceptions and return their string representation. Instead, we can use the `ast.parse` function to check for syntax errors before executing the command.
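A minimal, hedged sketch of the idea (this is not the actual `PythonREPL` implementation, which also captures stdout):
```python
# Hedged sketch: validate the command with ast.parse before executing it, so
# invalid syntax raises SyntaxError instead of being swallowed and stringified.
import ast

def run_checked(command: str, globals_: dict) -> str:
    ast.parse(command)              # raises SyntaxError on invalid syntax
    try:
        exec(command, globals_)
        return ""
    except Exception as e:          # runtime errors are still returned as text
        return repr(e)
```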
By using `ast.parse`, we can explicitly raise a `SyntaxError` when the command string has invalid syntax. The rest of the exceptions are still caught by the existing `try-except` block. | Modify PythonREPL to raise syntax exceptions for invalid command strings | https://api.github.com/repos/langchain-ai/langchain/issues/3712/comments | 1 | 2023-04-28T13:37:48Z | 2023-04-28T14:01:54Z | https://github.com/langchain-ai/langchain/issues/3712 | 1,688,576,160 | 3,712 |
[
"hwchase17",
"langchain"
]
| ## Context
When the completion is of a longer format, such as an email, the text will likely contain newline characters (`\n`).
If it is not properly escaped like `\\n`, parsing will fail when using PydanticOutputParser as `json.loads` does not allow control characters in strict mode.
Most of the time, RetryWithErrorOutputParser also fails to correct the format.
## Example
```python
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from pydantic import BaseModel, Field
class Email(BaseModel):
subject: str = Field(description="main objective of the email")
body: str = Field(description="email content")
parser = PydanticOutputParser(pydantic_object=Email)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()},
)
completion = llm(
prompt.format(
query="Write a long formal email to inform my clients that the company is broke."
)
)
parser.parse(completion)
```
```python
# completion
> Here is the output instance:
\```
{"subject": "Company Status Update", "body": "Dear Clients,
This email is to inform you that our company is currently in a difficult financial situation. We apologize for any inconvenience caused by this and are doing our best to ensure that our services remain of the highest quality for our valued clients.
We want to thank you for your support and understanding during this difficult time.
Sincerely,
[Company Name]"}
\```
```
```python
# parser.parse(completion)
> Got: Invalid control character at: line 1 column 61 (char 60)
```
## Thoughts
Maybe include instructions on escaping in PYDANTIC_FORMAT_INSTRUCTIONS?
Or could adding an option to allow non-strict mode be considered?
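For illustration, a hedged snippet of what non-strict parsing would allow (this is plain `json.loads` behavior, not something the parser supports today; the relevant `json.loads` call is linked below):
```python
# Hedged illustration: strict=False lets json.loads accept raw control
# characters (such as literal newlines) inside string values.
import json

raw = '{"subject": "Update", "body": "Dear Clients,\nSincerely,\n[Company Name]"}'
try:
    json.loads(raw)                        # default strict mode fails
except json.JSONDecodeError as err:
    print(err)                             # Invalid control character ...
print(json.loads(raw, strict=False))       # non-strict mode parses successfully
```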
https://github.com/hwchase17/langchain/blob/32793f94fd6da0bb36311e1af4051f7883dd12c5/langchain/output_parsers/pydantic.py#L25 | PydanticOutputParser has high chance failing when completion contains new line | https://api.github.com/repos/langchain-ai/langchain/issues/3709/comments | 1 | 2023-04-28T12:21:49Z | 2023-09-24T16:06:56Z | https://github.com/langchain-ai/langchain/issues/3709 | 1,688,460,377 | 3,709 |
[
"hwchase17",
"langchain"
]
| It fails with
```
File [langchain/chains/base.py:113], in Chain.__call__(self, inputs, return_only_outputs)
107 self.callback_manager.on_chain_start(
108 {"name": self.__class__.__name__},
109 inputs,
110 verbose=self.verbose,
111 )
112 try:
--> 113 outputs = self._call(inputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
File [langchain/chains/retrieval_qa/base.py:110], in BaseRetrievalQA._call(self, inputs)
107 question = inputs[self.input_key]
109 docs = self._get_docs(question)
--> 110 answer = self.combine_documents_chain.run(
111 input_documents=docs, question=question
112 )
114 if self.return_source_documents:
115 return {self.output_key: answer, "source_documents": docs}
File [langchain/chains/base.py:205], in Chain.run(self, *args, **kwargs)
203 """Run the chain as text in, text out or multiple variables, text out."""
204 if len(self.output_keys) != 1:
--> 205 raise ValueError(
206 f"`run` not supported when there is not exactly "
207 f"one output key. Got {self.output_keys}."
208 )
210 if args and not kwargs:
211 if len(args) != 1:
ValueError: `run` not supported when there is not exactly one output key. Got ['output_text', 'intermediate_steps'].
```
The culprit is the call to the `run` method instead of the general `_call` in `BaseRetrievalQA._call`. | RetrievalQA cannot be called with QA chain having `return_intermediate_steps=True` | https://api.github.com/repos/langchain-ai/langchain/issues/3707/comments | 3 | 2023-04-28T12:05:34Z | 2023-10-18T01:58:05Z | https://github.com/langchain-ai/langchain/issues/3707 | 1,688,438,803 | 3,707 |
[
"hwchase17",
"langchain"
]
| null | How to create unittest for langchain use in my project? | https://api.github.com/repos/langchain-ai/langchain/issues/3706/comments | 1 | 2023-04-28T12:03:39Z | 2023-09-10T16:25:34Z | https://github.com/langchain-ai/langchain/issues/3706 | 1,688,436,004 | 3,706 |
[
"hwchase17",
"langchain"
]
| from langchain.retrievers.self_query.base import SelfQueryRetriever failed
```
File ~/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/query_constructor/parser.py:41
      8 from langchain.chains.query_constructor.ir import (
      9     Comparator,
     10     Comparison,
    (...)
     13     Operator,
     14 )
     16 GRAMMAR = """
     17 ?program: func_call
     18 ?expr: func_call
    (...)
     37 %ignore WS
     38 """
---> 41 @v_args(inline=True)
     42 class QueryTransformer(Transformer):
     43     def __init__(
     44         self,
     45         *args: Any,
    (...)
     48         **kwargs: Any,
     49     ):
     50         super().__init__(*args, **kwargs)

NameError: name 'v_args' is not defined
```
| SelfQueryRetriever import failed in langchain 0.0.152 and 0.0.151 | https://api.github.com/repos/langchain-ai/langchain/issues/3705/comments | 2 | 2023-04-28T10:54:32Z | 2023-05-02T10:50:16Z | https://github.com/langchain-ai/langchain/issues/3705 | 1,688,345,812 | 3,705 |
[
"hwchase17",
"langchain"
]
| I'm trying to store vectors in Milvus with the following code:
~~~
from os import environ
MILVUS_HOST = "xxx"
MILVUS_PORT = "xxx"
OPENAI_API_KEY = "xxx"
## Set up environment variables
environ["OPENAI_API_KEY"] = OPENAI_API_KEY
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Milvus
from langchain.document_loaders import UnstructuredMarkdownLoader
from langchain.text_splitter import MarkdownTextSplitter
# Use the UnstructuredMarkdownLoader to load the markdown file into documents
loader = UnstructuredMarkdownLoader("./content.md", mode="elements")
docs = loader.load()
# Split the documents into smaller chunks
text_splitter = MarkdownTextSplitter(chunk_size=1024, chunk_overlap=0)
docs = text_splitter.split_documents(docs)
# print(docs[0])
# Set up an embedding model to convert document chunks into vector embeddings.
embeddings = OpenAIEmbeddings(model="ada")
# Set up a vector store used to save the vector embeddings. Here we use Milvus as the vector store.
vector_store = Milvus.from_documents(
docs,
embedding=embeddings,
connection_args={"host": MILVUS_HOST, "port": MILVUS_PORT}
)
~~~
But I'm getting an error
~~~
Traceback (most recent call last):
File "/home/olivierb/Developments/experiments/openai/import_lang.py", line 35, in <module>
vector_store = Milvus.from_documents(
File "/home/olivierb/Developments/experiments/openai/venv/lib/python3.10/site-packages/langchain/vectorstores/base.py", line 272, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File "/home/olivierb/Developments/experiments/openai/venv/lib/python3.10/site-packages/langchain/vectorstores/milvus.py", line 804, in from_texts
vector_db.add_texts(texts=texts, metadatas=metadatas)
File "/home/olivierb/Developments/experiments/openai/venv/lib/python3.10/site-packages/langchain/vectorstores/milvus.py", line 436, in add_texts
insert_list = [insert_dict[x][i:end] for x in self.fields]
File "/home/olivierb/Developments/experiments/openai/venv/lib/python3.10/site-packages/langchain/vectorstores/milvus.py", line 436, in <listcomp>
insert_list = [insert_dict[x][i:end] for x in self.fields]
KeyError: 'title'
~~~
It seems the Document is missing the `title` metadata field:
~~~
page_content='xxx' metadata={'source': './content.md, 'page_number': 1, 'category': 'Title'}
~~~ | Missing key title in metadata with UnstructuredFileLoader | https://api.github.com/repos/langchain-ai/langchain/issues/3704/comments | 12 | 2023-04-28T10:51:47Z | 2024-01-08T19:16:22Z | https://github.com/langchain-ai/langchain/issues/3704 | 1,688,342,420 | 3,704 |
[
"hwchase17",
"langchain"
]
| There is an error when I tried to use this code.
```
tools = load_tools(['llm-math', 'python_repl'], llm)
agent = initialize_agent(tools, agent="zero-shot-react-description", llm=llm)
```
It looks like this is because #3684 checks `if len(self.args) == 1:` in `self.is_single_input`.
But `self.args` of llm-math is:
```
{'args': {'title': 'Args', 'type': 'array', 'items': {}},
'kwargs': {'title': 'Kwargs', 'type': 'object'}}
```
So, `self.is_single_input` returns `False`.
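For now, a hedged workaround sketch (wrapping `LLMMathChain` in a plain single-input `Tool`, assuming the `llm` from the snippet above):
```python
# Hedged workaround sketch: build the calculator tool directly from LLMMathChain
# instead of loading 'llm-math' via load_tools.
from langchain import LLMMathChain
from langchain.agents import Tool, initialize_agent

llm_math = LLMMathChain(llm=llm)   # assumes `llm` from the snippet above
calculator = Tool(
    name="Calculator",
    func=llm_math.run,
    description="Useful for answering questions about math.",
)
agent = initialize_agent([calculator], llm, agent="zero-shot-react-description")
```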
Is there a way to get single input llm-math? | ValueError: ZeroShotAgent does not support multi-input tool Calculator. | https://api.github.com/repos/langchain-ai/langchain/issues/3700/comments | 29 | 2023-04-28T09:33:17Z | 2023-09-09T01:47:57Z | https://github.com/langchain-ai/langchain/issues/3700 | 1,688,227,611 | 3,700 |
[
"hwchase17",
"langchain"
]
| ## Motivation
The Map step is scalable as long as we split documents into chunks. Meanwhile, as far as I know, the Reduce step has a weak point: the max-token limit of the LLM, since the Reduce step tries to put all of the summaries generated in the Map step into a single prompt. So, if the summaries generated in the Map step are together too large to fit into one request to the LLM, it hits the max-token limit.
So, it would be good to be able to skip the Reduce step and get the outputs of the intermediate steps. That way, we can process those outputs with the Refine mode and so on.
```python
chain = load_summarize_chain(chain_type="map_reduce", skip_reduce=True)
``` | Enable to skip the reduce step of the MapReduce mode of `MapReduceDocumentsChain` | https://api.github.com/repos/langchain-ai/langchain/issues/3694/comments | 7 | 2023-04-28T06:29:39Z | 2023-10-13T22:27:50Z | https://github.com/langchain-ai/langchain/issues/3694 | 1,687,979,068 | 3,694 |
[
"hwchase17",
"langchain"
]
| I have
```
model_kwargs = {"n_predict": 500, "top_k": 40, "top_p": 0.95, "repeat_penalty" : 3}
llm = Replicate(model="replicate/gpt4all:1150831d577dd5a992a38aa47cec565ab099390b2825c6c090bd7c715219db3b",
model_kwargs=model_kwargs)
loader = TextLoader("./materials/sometext.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(documents)
#####################################################
embeddings = HuggingFaceEmbeddings()
db = FAISS.from_documents(documents, embeddings)
retriever = db.as_retriever(search_type="similarity", search_kwargs={"k":2})
chain = ConversationalRetrievalChain(
retriever=retriever,
question_generator=question_generator,
combine_docs_chain=doc_chain,
)
chat_history = []
while True:
query = input ("> ")
result = chain({"question": query, "chat_history": chat_history})
```
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/conversational_retrieval/base.py", line 99, in _call
answer = self.combine_docs_chain.run(input_documents=docs, **new_inputs)
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/base.py", line 216, in run
return self(kwargs)[self.output_keys[0]]
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/combine_documents/base.py", line 75, in _call
output, extra_return_dict = self.combine_docs(docs, **other_keys)
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/combine_documents/stuff.py", line 82, in combine_docs
return self.llm_chain.predict(**inputs), {}
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/llm.py", line 151, in predict
return self(kwargs)[self.output_key]
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/llm.py", line 57, in _call
return self.apply([inputs])[0]
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/llm.py", line 118, in apply
response = self.generate(input_list)
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/chains/llm.py", line 62, in generate
return self.llm.generate_prompt(prompts, stop)
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/llms/base.py", line 107, in generate_prompt
return self.generate(prompt_strings, stop=stop)
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/llms/base.py", line 140, in generate
raise e
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/llms/base.py", line 137, in generate
output = self._generate(prompts, stop=stop)
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/llms/base.py", line 324, in _generate
text = self._call(prompt, stop=stop)
File "/home/tmtong/Documents/llmtest/.venv38/lib/python3.8/site-packages/langchain/llms/replicate.py", line 109, in _call
return "".join([output for output in iterator])
TypeError: 'NoneType' object is not iterable
```
Here is the code for replicate.py
```
def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
"""Call to replicate endpoint."""
try:
import replicate as replicate_python
except ImportError:
raise ValueError(
"Could not import replicate python package. "
"Please install it with `pip install replicate`."
)
# get the model and version
model_str, version_str = self.model.split(":")
model = replicate_python.models.get(model_str)
version = model.versions.get(version_str)
# sort through the openapi schema to get the name of the first input
input_properties = sorted(
version.openapi_schema["components"]["schemas"]["Input"][
"properties"
].items(),
key=lambda item: item[1].get("x-order", 0),
)
first_input_name = input_properties[0][0]
inputs = {first_input_name: prompt, **self.input}
iterator = replicate_python.run(self.model, input={**inputs})
return "".join([output for output in iterator])
```
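For what it is worth, a hedged defensive helper around that final join (not the library's actual code) would at least avoid the crash when `replicate_python.run` returns `None`:
```python
# Hedged helper sketch: join an iterator that might be None instead of crashing.
from typing import Iterable, Optional

def join_output(iterator: Optional[Iterable]) -> str:
    if iterator is None:
        return ""
    return "".join(str(chunk) for chunk in iterator)
```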
It doesn't happen all the time; sometimes it outputs the text, but sometimes it throws the NoneType error. | Bugs with replicate | https://api.github.com/repos/langchain-ai/langchain/issues/3689/comments | 1 | 2023-04-28T03:16:52Z | 2023-09-10T16:25:39Z | https://github.com/langchain-ai/langchain/issues/3689 | 1,687,832,316 | 3,689 |
[
"hwchase17",
"langchain"
]
| I am currently in the process of replacing GPT with Vicuna in my project. While Vicuna is able to successfully generate the required action and action input, I am encountering a bug where the search API is failing to execute these actions. As a result, the LLM is generating observations independently instead of utilizing the actions generated by Vicuna. I would appreciate any suggestions or solutions for resolving this issue.
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Search
Action Input: "What is an iPhone?"
Observation: "An iPhone is a smartphone designed and developed by Apple Inc. It is a handheld device with a touchscreen interface that allows users to make phone calls, send messages, and access the internet and a variety of other apps and features."
Thought: Do I need to use a tool? No
AI: An iPhone is a smartphone designed and developed by Apple Inc. It is a handheld device with a touchscreen interface that allows users to make phone calls, send messages, and access the internet and a variety of other apps and features.
> Finished chain. | Action Not Executed After Attempting to Replace with Vicuna | https://api.github.com/repos/langchain-ai/langchain/issues/3688/comments | 2 | 2023-04-28T02:24:18Z | 2023-09-10T16:25:45Z | https://github.com/langchain-ai/langchain/issues/3688 | 1,687,790,352 | 3,688 |
[
"hwchase17",
"langchain"
]
| Hi there.
Realise there is a lot happening and this looks to be something that has been missed. When trying to use TWVSR with ChromaDb, it errors because of the lack of implementation of `_similarity_search_with_relevance_scores` inside of base.py:
```python
def _similarity_search_with_relevance_scores(
    self,
    query: str,
    k: int = 4,
    **kwargs: Any,
) -> List[Tuple[Document, float]]:
    """Return docs and relevance scores, normalized on a scale from 0 to 1.

    0 is dissimilar, 1 is most similar.
    """
    raise NotImplementedError
```
Trying to make a work-around now
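A hedged sketch of the kind of workaround I am attempting (the `1 - score` conversion assumes Chroma returns a distance where lower is better; that is an assumption, not a documented contract):
```python
# Hedged workaround sketch, not a tested fix: subclass Chroma and supply the
# missing method by reusing similarity_search_with_score.
from typing import Any, List, Tuple

from langchain.schema import Document
from langchain.vectorstores import Chroma

class ChromaWithRelevance(Chroma):
    def _similarity_search_with_relevance_scores(
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Tuple[Document, float]]:
        docs_and_scores = self.similarity_search_with_score(query, k=k, **kwargs)
        # Assumption: the returned score is a distance (lower means closer).
        return [(doc, 1.0 - score) for doc, score in docs_and_scores]
```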
Many thanks
Ian | TimeWeightedVectorStoreRetriever (TWVSR) and ChromaDb vector store - base.py | https://api.github.com/repos/langchain-ai/langchain/issues/3685/comments | 2 | 2023-04-28T00:16:05Z | 2023-09-03T22:05:31Z | https://github.com/langchain-ai/langchain/issues/3685 | 1,687,707,556 | 3,685 |
[
"hwchase17",
"langchain"
]
| Hi, I am in the process of developing an agent and a toolkit to query MongoDB databases. To do this I adapted the SQL Database Agent code with implementations for MongoDB, and I created this function to create the agent:
```
def create_mongodb_agent(
llm: BaseLLM,
toolkit: MongoDBDatabaseToolkit,
callback_manager: Optional[BaseCallbackManager] = None,
prefix: str = MONGODB_PREFIX,
suffix: str = MONGODB_SUFFIX,
format_instructions: str = FORMAT_INSTRUCTIONS,
input_variables: Optional[List[str]] = None,
top_k: int = 10,
max_iterations: Optional[int] = 15,
max_execution_time: Optional[float] = None,
early_stopping_method: str = "force",
verbose: bool = False,
**kwargs: Any,
) -> AgentExecutor:
"""Construct a MongoDB agent from an LLM and tools."""
tools = toolkit.get_tools()
prefix = prefix.format(top_k=top_k)
prompt = ZeroShotAgent.create_prompt(
tools,
prefix=prefix,
suffix=suffix,
format_instructions=format_instructions,
input_variables=input_variables,
)
llm_chain = LLMChain(
llm=llm,
prompt=prompt,
callback_manager=callback_manager,
)
tool_names = [tool.name for tool in tools]
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)
return AgentExecutor.from_agent_and_tools(
agent=agent,
tools=tools,
verbose=verbose,
max_iterations=max_iterations,
max_execution_time=max_execution_time,
early_stopping_method=early_stopping_method,
)
```
and this is my ToolKit:
```
class MongoDBDatabaseToolkit(BaseToolkit):
"""Toolkit for interacting with MongoDB databases."""
db: MongoDBDatabase = Field(exclude=True)
llm: BaseLLM = Field(default_factory=lambda: OpenAIChat(temperature=0))
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
def get_tools(self) -> List[BaseTool]:
"""Get the tools in the toolkit."""
return [
QueryMongoDBDatabaseTool(db=self.db),
InfoMongoDBDatabaseTool(db=self.db),
ListMongoDBDatabaseTool(db=self.db),
QueryCheckerTool(db=self.db, llm=self.llm),
]
```
but I get the error from the AgentExecutor validator:
```
File "..\env\lib\site-packages\langchain\agents\initialize.py", line 64, in initialize_agent
return AgentExecutor.from_agent_and_tools(
File "..\env\lib\site-packages\langchain\agents\agent.py", line 557, in from_agent_and_tools
return cls(
File "pydantic\main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic\main.py", line 1102, in pydantic.main.validate_model
File "..\env\lib\site-packages\langchain\agents\agent.py", line 565, in validate_tools
tools = values["tools"]
KeyError: 'tools'
```
I did some debugging and verified that the tools are passed correctly to the AgentExecutor.from_agent_and_tools function. Can anyone help me with this? | Error creating AgentExecutor with custom Toolkit | https://api.github.com/repos/langchain-ai/langchain/issues/3680/comments | 5 | 2023-04-27T21:49:18Z | 2023-09-29T03:10:54Z | https://github.com/langchain-ai/langchain/issues/3680 | 1,687,597,127 | 3,680 |
[
"hwchase17",
"langchain"
]
| [One of the best practices](https://nextword.dev/blog/pinecone-cost-best-practices#store-foreign_key-in-your-meta-not-the-whole-json) for dealing with vectorstores is to treat them as vector databases, not traditional databases (NoSQL, SQL, Postgres, etc).
I propose to update LangChain's [getting started](https://python.langchain.com/en/latest/modules/indexes/vectorstores/getting_started.html), or similar docs, to include the practice of storing `foreign_key` in metadata.
For example:
---
To align with the best practice of storing foreign keys in metadata instead of storing the entire JSON, you would need to modify the `metadata` parameter when creating documents to include a `foreign_key` field that contains a unique identifier for each document. For example, you could use a UUID for each document:
```
from uuid import uuid4
metadata = {"source": "State of the Union", "foreign_key": str(uuid4())}
documents = text_splitter.create_documents([state_of_the_union], metadatas=[metadata])
```
Then, when adding texts to the vectorstore using the `add_texts` method, you can pass in the list of foreign keys as the `ids` parameter:
```
foreign_keys = [doc.metadata["foreign_key"] for doc in documents]
docsearch.add_texts([doc.page_content for doc in documents], ids=foreign_keys)
```
This way, the only data stored in the vectorstore is the embeddings and the foreign keys, and you can use the foreign keys to look up the actual documents in a separate datastore if needed. | Store foreign_key in your meta, not the whole JSON | https://api.github.com/repos/langchain-ai/langchain/issues/3676/comments | 1 | 2023-04-27T21:11:24Z | 2023-09-10T16:25:55Z | https://github.com/langchain-ai/langchain/issues/3676 | 1,687,556,728 | 3,676 |
[
"hwchase17",
"langchain"
]
| With FAISS you can save and load created indexes locally:
```python
db.save_local("faiss_index")
new_db = FAISS.load_local("faiss_index", embeddings)
```
In a production environment you might want to keep your indexes and docs separated from your application and access those remotely and not locally.
How can that be achieved? Is there another option where you can host your own vector store separated from your llm agent? | FAISS remote saving and loading of indexes | https://api.github.com/repos/langchain-ai/langchain/issues/3673/comments | 3 | 2023-04-27T20:26:42Z | 2024-01-25T19:58:23Z | https://github.com/langchain-ai/langchain/issues/3673 | 1,687,502,751 | 3,673 |
[
"hwchase17",
"langchain"
]
| Trying to run langchain with open ai api, it works fine with short paragraphs but when I tried longer ones I got this error:
```
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 13214 tokens (12958 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
```
I don't know if I got the settings right or not; here is my code:
```python
import os
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import ChatOpenAI

os.environ["OPENAI_API_KEY"] = "sk-xxxxxxxxxx"

def main():
    global db, chain, entry, output  # Add entry and output to the global variables
    file_path = r"F:\langchain\doc1.txt"
    loader = TextLoader(file_path, encoding='utf-8')
    documents = loader.load()
    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    docs = text_splitter.split_documents(documents)
    embeddings = OpenAIEmbeddings()
    db = FAISS.from_documents(docs, embeddings)
    llm = ChatOpenAI(openai_api_key="sk-xxxxxxxxxx", model_name="gpt-3.5-turbo",
                     max_token=200)
    chain = load_qa_chain(llm, chain_type="stuff")
``` | Feedback error while running langchain | https://api.github.com/repos/langchain-ai/langchain/issues/3670/comments | 2 | 2023-04-27T19:16:21Z | 2023-09-10T16:26:00Z | https://github.com/langchain-ai/langchain/issues/3670 | 1,687,421,640 | 3,670 |