issue_owner_repo (sequence, lengths 2-2) | issue_body (string, lengths 0-261k, ⌀) | issue_title (string, lengths 1-925) | issue_comments_url (string, lengths 56-81) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, length 20) | issue_updated_at (string, length 20) | issue_html_url (string, lengths 37-62) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.chat_models import ChatZhipuAI
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage

zhipuai_api_key = my_key  # placeholder; assumes my_key is defined elsewhere
chat = ChatZhipuAI(
    temperature=0.5,
    api_key=zhipuai_api_key,
    model="chatglm_turbo",
)
messages = [
    AIMessage(content="Hi."),
    SystemMessage(content="Your role is a poet."),
    HumanMessage(content="Write a short poem about AI in four lines."),
]
response = chat(messages)
print(response.content)  # Displays the AI-generated poem
```
### Error Message and Stack Trace (if applicable)
```
AttributeError: module 'zhipuai' has no attribute 'model_api'
```
### Description
I'm trying to run the code from the LangChain document "https://python.langchain.com/docs/integrations/chat/zhipuai", and I get this issue. Maybe zhipuai updated their package version while LangChain hasn't made the corresponding change.
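A quick way to confirm that suspicion (an assumption based on the error text: the zhipuai 2.x SDK appears to have dropped the module-level `model_api` that the wrapper calls):
```python
from importlib.metadata import version
import zhipuai

print(version("zhipuai"))
print(hasattr(zhipuai, "model_api"))  # False here reproduces the AttributeError above
```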
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22635
> Python Version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:42:31) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.30
> langchain: 0.1.11
> langchain_community: 0.0.25
> langsmith: 0.1.22
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.14 | langchain_community ChatZhipuAI API doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/18771/comments | 6 | 2024-03-08T02:03:46Z | 2024-03-11T11:57:19Z | https://github.com/langchain-ai/langchain/issues/18771 | 2,175,138,009 | 18,771 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
For the example sentence "天青色等烟雨,而我在等你。哈哈哈.....":
when I use RecursiveCharacterTextSplitter to split it by ["\n\n", "\n", "。"], the result shows that the "。" is appended to the beginning of the following sub-sentence, for example: [Document(page_content="。哈哈哈...")]
### Error Message and Stack Trace (if applicable)
_No response_
### Description
For the example sentence "天青色等烟雨,而我在等你。哈哈哈.....":
when I use RecursiveCharacterTextSplitter to split it by ["\n\n", "\n", "。"], the result shows that the "。" is appended to the beginning of the following sub-sentence, for example: [Document(page_content="。哈哈哈...")]
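A minimal sketch of the behavior (the chunk size is chosen only for illustration): with the default `keep_separator=True`, the splitter re-attaches each separator to the start of the following split, which matches the report above.
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

text = "天青色等烟雨,而我在等你。哈哈哈....."
splitter = RecursiveCharacterTextSplitter(
    separators=["\n\n", "\n", "。"],
    chunk_size=10,         # illustrative
    chunk_overlap=0,
    keep_separator=True,   # the default; "。" is prepended to the next chunk
)
for chunk in splitter.split_text(text):
    print(repr(chunk))
# keep_separator=False drops the "。" instead of moving it
```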
### System Info
langchain 0.1.0 | RecursiveCharacterTextSplitter for Chinese sentence | https://api.github.com/repos/langchain-ai/langchain/issues/18770/comments | 1 | 2024-03-08T01:55:22Z | 2024-06-16T16:08:54Z | https://github.com/langchain-ai/langchain/issues/18770 | 2,175,131,360 | 18,770 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
N/A
### Error Message and Stack Trace (if applicable)
_No response_
### Description
There are several places that use the following pattern to generate UUIDs for Chroma vector entries
(https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/chroma.py):

    ids = [str(uuid.uuid4()) for _ in uris]
    ids = [str(uuid.uuid4()) for _ in texts]
    ids = [str(uuid.uuid4()) for _ in texts]

However, it will create the same uuid for all entries in the output list.
Not sure whether this is intended or an issue.
Thanks
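For reference, a quick check of the pattern in isolation shows the comprehension re-evaluating `uuid.uuid4()` on every iteration, so the generated ids come out distinct:
```python
import uuid

texts = ["a", "b", "c"]
ids = [str(uuid.uuid4()) for _ in texts]
print(ids)                        # three different UUID strings
print(len(set(ids)) == len(ids))  # True: uuid4() runs once per element
```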
### System Info
N/A | Using same UUID for Chroma vector entries | https://api.github.com/repos/langchain-ai/langchain/issues/18767/comments | 1 | 2024-03-08T01:29:05Z | 2024-03-08T01:55:53Z | https://github.com/langchain-ai/langchain/issues/18767 | 2,175,110,395 | 18,767 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Reproducible issue. on_tool_end logs the string representation of the tool. All tool callbacks need to be updated to accommodate
```python
from langchain_core.tools import tool
from langchain_core.documents import Document
def foo(x):
    return {
        'x': 5
    }

@tool
def get_docs(x: int):
    """Hello"""
    return [Document(page_content='hello')]

chain = foo | get_docs

async for event in chain.astream_events({}, version='v1'):
    if event['event'] == 'on_tool_end':
        print(event)
``` | Update tool callbacks to send the actual response from the tool | https://api.github.com/repos/langchain-ai/langchain/issues/18760/comments | 12 | 2024-03-07T21:52:10Z | 2024-03-13T17:59:16Z | https://github.com/langchain-ai/langchain/issues/18760 | 2,174,866,952 | 18,760 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am using a `PGVector` vector store and running the following code to retrieve relevant docs:
```python
retriever = pg_vs.as_retriever(
    search_kwargs={
        "k": 10,
        "filter": {
            "year": {"gte": 2022}
        }
    }
)
```
When running `retriever.get_relevant_documents(query)` I get results that have a `year` field of less than 2022. I have tried different variations of the operator, such as `$gte` as well as other similar operators but they are all leading to the same result. The `year` column type is `int`.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to do metadata filtering using a "greater-than-or-equal-to" operator `gte` with a `PGVector` vector store, using a metadata field that is of type `int`. The filter does not work and the retriever returns results that do not adhere to the filter restriction.
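One thing worth ruling out (an assumption based on the JSONB-backed filter support in recent `langchain_community` releases, not confirmed for the versions below): the comparison operators are spelled with a `$` prefix and only apply when the store uses JSONB metadata (`use_jsonb=True`).
```python
# hedged sketch: the "$gte" spelling and the use_jsonb flag come from the newer
# JSONB-based PGVector filtering and may not exist in older releases
retriever = pg_vs.as_retriever(
    search_kwargs={
        "k": 10,
        "filter": {"year": {"$gte": 2022}},  # note the "$" prefix
    }
)
```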
### System Info
```
langchain==0.1.7
langchain_community==0.0.2.0
pgvector==0.2.5
```
| PGVector advanced filtering operators (e.g. `gte`) do not seem to work | https://api.github.com/repos/langchain-ai/langchain/issues/18758/comments | 1 | 2024-03-07T21:33:27Z | 2024-07-04T16:08:03Z | https://github.com/langchain-ai/langchain/issues/18758 | 2,174,838,484 | 18,758 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
%pip install –upgrade –quiet langchain langchain-openai.
fails. This happens on just about every page.
Also, there are incomplete pip installs, especially in the cookbook. What is there should be dirt simple to get running.
https://python.langchain.com/docs/expression_language/cookbook/multiple_chains
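For reference, the pasted command fails because it uses en-dashes (–) instead of ASCII double hyphens and carries a trailing period; the well-formed version is:
```
%pip install --upgrade --quiet langchain langchain-openai
```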
### Idea or request for content:
Also, Runnables are mentioned all over the place, but there are not really any formal docs on them. Same for chains.
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Minimal reproduction
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough, RunnableGenerator
from langchain_core.beta.runnables.context import Context
async def to_dict(input):
    async for chunk in input:
        yield {
            'foo': chunk
        }

# `model` is assumed to be some chat model Runnable; it is not defined in this repro
chain = Context.setter('input') | model | to_dict | Context.getter('input')

async for chunk in chain.astream('hello'):
    print(chunk)
``` | Potential bug in Context Runnable | https://api.github.com/repos/langchain-ai/langchain/issues/18741/comments | 2 | 2024-03-07T15:38:27Z | 2024-03-08T02:23:14Z | https://github.com/langchain-ai/langchain/issues/18741 | 2,174,152,369 | 18,741 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
When using QAGenerateChain to produce question-answer pairs, can each document only generate a single QA pair?
### Idea or request for content:
When using QAGenerateChain to produce question-answer pairs, can each document only generate a single QA pair? If I want a given document to produce as many QA pairs as I need, can that be achieved by modifying the code?
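One possible approach is sketched below (an assumption, not built-in behavior: QAGenerateChain's default prompt asks for a single pair, but the same LLM can be driven by a custom prompt that requests several):
```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# illustrative prompt; the output still needs to be parsed into individual pairs
prompt = PromptTemplate.from_template(
    "Given the following document, generate {n} question/answer pairs:\n\n{doc}"
)
chain = LLMChain(llm=llm, prompt=prompt)  # `llm` is whatever model you already use
result = chain.invoke({"n": 3, "doc": "..."})
```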
| Using QAGenerateChain to produce QA pairs, can each document only generate a single pair? | https://api.github.com/repos/langchain-ai/langchain/issues/18737/comments | 2 | 2024-03-07T14:49:48Z | 2024-03-10T14:38:09Z | https://github.com/langchain-ai/langchain/issues/18737 | 2,174,042,531 | 18,737 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
### Custom Retriever Code
```python
# Code from: https://redis.com/blog/build-ecommerce-chatbot-with-redis/
class UserRetriever(BaseRetriever):
    """
    ArgenxUserRetriever class extends BaseRetriever and is designed for retrieving relevant documents
    based on a user query using hybrid similarity search with a VectorStore.

    Attributes:
    - vectorstore (VectorStore): The VectorStore instance used for similarity search.
    - username (str): The username associated with the documents, used for personalized retrieval.

    Methods:
    - clean_metadata(self, doc): Cleans the metadata of a document, extracting relevant information for display.
    - get_relevant_documents(self, query): Retrieves relevant documents based on a user query using hybrid similarity search.

    Example:
    retriever = ArgenxRetriever(vectorstore=vector_store, username="john_doe")
    relevant_docs = retriever.get_relevant_documents("How does photosynthesis work?")
    for doc in relevant_docs:
        print(doc.metadata["Title"], doc.page_content)
    """

    vectorstore: VectorStore
    username: str

    def clean_metadata(self, doc):
        """
        Cleans the metadata of a document.

        Parameters:
        doc (object): The document object.

        Returns:
        dict: A dictionary containing the cleaned metadata.
        """
        metadata = doc.metadata
        return {
            "file_id": metadata["title"],
            "source": metadata["title"] + "_page=" + str(int(metadata["chunk_id"].split("_")[-1]) + 1),
            "page_number": str(int(metadata["chunk_id"].split("_")[-1]) + 1),
            "document_title": metadata["document_title_result"]
        }

    def get_relevant_documents(self, query):
        """
        Retrieves relevant documents based on a given query.

        Args:
        query (str): The query to search for relevant documents.

        Returns:
        list: A list of relevant documents.
        """
        docs = []
        is_match_filter = ""
        load_dotenv()
        admins = os.getenv('ADMINS', '')
        admins_list = admins.split(',')
        is_admin = self.username.split('@')[0] in admins_list
        os.environ["AZURESEARCH_FIELDS_ID"] = "chunk_id"
        os.environ["AZURESEARCH_FIELDS_CONTENT"] = "chunk"
        os.environ["AZURESEARCH_FIELDS_CONTENT_VECTOR"] = "vector"
        # os.environ["AZURESEARCH_FIELDS_TAG"] = "metadata"
        if not is_admin:
            is_match_filter = f"search.ismatch('{self.username.split('@')[0]}', 'usernames_result')"
        for doc in self.vectorstore.similarity_search(query, search_type="semantic_hybrid", k=NUMBER_OF_CHUNKS_TO_RETURN, filters=is_match_filter):
            cleaned_metadata = self.clean_metadata(doc)
            docs.append(Document(
                page_content=doc.page_content,
                metadata=cleaned_metadata))
        print("\n\n----------------DOCUMENTS RETRIEVED------------------\n\n", docs)
        return docs
```
### setup langchain chain,llm
```python
chat = AzureChatOpenAI(
    azure_endpoint=SHD_AZURE_OPENAI_ENDPOINT,
    openai_api_version="2023-03-15-preview",
    deployment_name=POL_OPENAI_EMBEDDING_DEPLOYMENT_NAME,
    openai_api_key=SHD_OPENAI_KEY,
    openai_api_type="Azure",
    model_name=POL_OPENAI_GPT_MODEL_NAME,
    streaming=True,
    callbacks=[ChainStreamHandler(g)],  # Set ChainStreamHandler as callback
    temperature=0)

# Define system and human message prompts
messages = [
    SystemMessagePromptTemplate.from_template(ANSWER_PROMPT),
    HumanMessagePromptTemplate.from_template("{question} Please answer in html format"),
]

# Set up embeddings, vector store, chat prompt, retriever, memory, and chain
embeddings = setup_embeddings()
vector_store = setup_vector_store(embeddings)
chat_prompt = ChatPromptTemplate.from_messages(messages)
retriever = UserRetriever(vectorstore=vector_store, username=username)
memory = setup_memory()
# memory.save_context(chat_history)

chain = ConversationalRetrievalChain.from_llm(
    chat,
    retriever=retriever,
    memory=memory,
    verbose=False,
    combine_docs_chain_kwargs={
        "prompt": chat_prompt,
        "document_prompt": PromptTemplate(
            template=DOCUMENT_PROMPT,
            input_variables=["page_content", "source"]
        )
    }
)
```
### My fields

### Error Message and Stack Trace (if applicable)
```
Exception has occurred: KeyError
'metadata'
```
The error is thrown in this line:
```python
for doc in self.vectorstore.similarity_search(query, search_type="semantic_hybrid", k=NUMBER_OF_CHUNKS_TO_RETURN, filters=is_match_filter):
```
When I dig deep in the langchain code, I found this code:
```python
docs = [
    (
        Document(
            page_content=result.pop(FIELDS_CONTENT),
            metadata={
                **(
                    json.loads(result[FIELDS_METADATA])
                    if FIELDS_METADATA in result
                    else {
                        k: v
                        for k, v in result.items()
                        if k != FIELDS_CONTENT_VECTOR
                    }
                ),
                **{
                    "captions": {
                        "text": result.get("@search.captions", [{}])[0].text,
                        "highlights": result.get("@search.captions", [{}])[
                            0
                        ].highlights,
                    }
                    if result.get("@search.captions")
                    else {},
                    "answers": semantic_answers_dict.get(
                        json.loads(result["metadata"]).get("key"),
                        "",
                    ),
                },
            },
        ),
```
As you can see in the last line, it's trying to read a `metadata` field on the search results, which we don't have, since our index is customized with our own fields.
I am blaming this line:
https://github.com/langchain-ai/langchain/blob/ced5e7bae790cd9ec4e5374f5d070d9f23d6457b/libs/community/langchain_community/vectorstores/azuresearch.py#L607
@Skar0, I am not sure if this is really a bug or whether I missed something in the documentation.
### Description
I am trying to use LangChain with Azure OpenAI and Azure AI Search as the vector store, together with a custom retriever. I don't have a `metadata` field in my index.
This was working in a previous project with azure-search-documents==11.4.b09,
but in a new project I am trying azure-search-documents==11.4.0.
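A possible workaround sketch (an assumption, untested: the hardcoded `json.loads(result["metadata"])` sits in the semantic-answer handling, so the plain hybrid search path may avoid it, at the cost of losing semantic answers):
```python
docs = self.vectorstore.similarity_search(
    query,
    search_type="hybrid",  # instead of "semantic_hybrid"
    k=NUMBER_OF_CHUNKS_TO_RETURN,
    filters=is_match_filter,
)
```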
### System Info
langchain==0.1.7
langchain-community==0.0.20
langchain-core==0.1.23
langchain-openai==0.0.6
langchainhub==0.1.14 | Azure AI Search, metadata field is required and hardcoded in langchain community | https://api.github.com/repos/langchain-ai/langchain/issues/18731/comments | 6 | 2024-03-07T11:19:46Z | 2024-07-01T16:05:24Z | https://github.com/langchain-ai/langchain/issues/18731 | 2,173,626,443 | 18,731 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os

# imports added for completeness; the original snippet omitted them
from langchain.text_splitter import CharacterTextSplitter

from langchain_core.documents import Document
from langchain_pinecone import PineconeVectorStore
from langchain_openai import AzureOpenAIEmbeddings, OpenAIEmbeddings

# os.environ["PINECONE_API_KEY"] = "xxxxxxxxxxxxx-xxxxx-4c9f-99a0-42b2e5922ba0"
# os.environ["PINECONE_INDEX_NAME"] = "zzzzzzzzz"

doc = Document("hello world")
raw_documents: list[Document] = [doc]
text_splitter = CharacterTextSplitter(
    chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(raw_documents)
embeddings = OpenAIEmbeddings()
docsearch = PineconeVectorStore.from_documents(
    docs, embeddings, index_name=os.environ["PINECONE_INDEX_NAME"])
```
throws an exception saying:
> Exception has occurred: ValidationError
1 validation error for OpenAIEmbeddings
__root__
If you are using Azure, please use the `AzureOpenAIEmbeddings` class. (type=value_error)
File "D:\src\lang-serve-fastapi-v3-proxy-ai\code\test2_index.py", line 48, in <module>
embeddings = OpenAIEmbeddings()
pydantic.error_wrappers.ValidationError: 1 validation error for OpenAIEmbeddings
__root__
If you are using Azure, please use the `AzureOpenAIEmbeddings` class. (type=value_error)]
at:

### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to load documents into a Pinecone vector database with `PineconeVectorStore.from_documents` and OpenAI embeddings.
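A hedged guess at the cause (based on the validator's behavior in `langchain_openai`, worth verifying): `OpenAIEmbeddings` raises this error when Azure-style settings such as `OPENAI_API_TYPE=azure` are present in the environment, even if you never pass them explicitly.
```python
import os

# check for lingering Azure-flavoured OpenAI settings; unsetting them
# (or switching to AzureOpenAIEmbeddings) should clear the validation error
print({k: v for k, v in os.environ.items() if k.startswith(("OPENAI_", "AZURE_OPENAI"))})
```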
### System Info
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-openai==0.0.8
langchain-pinecone==0.0.3
langchain-text-splitters==0.0.1
langchainhub==0.1.14
langserve==0.0.41
langsmith==0.1.22 | using OpenAIEmbeddings popup exception say please use AzureOpenAIEmbeddings | https://api.github.com/repos/langchain-ai/langchain/issues/18727/comments | 2 | 2024-03-07T08:50:48Z | 2024-03-07T11:45:10Z | https://github.com/langchain-ai/langchain/issues/18727 | 2,173,316,212 | 18,727 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
my vLLM api_server is:
```bash
python -m vllm.entrypoints.openai.api_server \
--model=/usr/local/models/Qwen/Qwen1.5-7B-Chat \
--trust-remote-code \
--served-model-name qwmiic \
--host 127.0.0.1 \
--port 9999 \
--dtype=half
```
This is my demo
```python
# imports added for completeness; the original snippet omitted them
from langchain_community.tools import ArxivQueryRun
from langchain_core.utils.function_calling import convert_to_openai_function
from langchain_openai import ChatOpenAI

TOOLS = [ArxivQueryRun()]
functions = [convert_to_openai_function(t) for t in TOOLS]
print(functions)

inference_server_url = "http://127.0.0.1:9999/v1"
llm = ChatOpenAI(
    model="qwmiic",
    openai_api_key="EMPTY",
    openai_api_base=inference_server_url,
    max_tokens=512,
    temperature=1,
)
llm.invoke("What's the paper 1605.08386 about?", functions=functions)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Please see my demo. After verification, the functions generated in the `functions` variable have a standardized format based on the OpenAI schema, and if Arxiv is invoked directly, as in `arxiv = ArxivQueryRun(); arxiv.run("1605.08386")`, it returns the correct article content. **However, when combined with the LLM or a chain, even if the binding is correct, such as `llm.bind(functions=functions)` or directly `llm.invoke("What's the paper 1605.08386 about?", functions=functions)` as above, it never triggers function or tool calls.**
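A hedged observation (an assumption about vLLM 0.3.0, worth verifying against its docs): the OpenAI-compatible server in that release does not implement the `functions`/`tools` part of the API, so the parameters are silently ignored server-side and the model never sees them. One workaround is to describe the tool in the prompt instead:
```python
# illustrative prompt-based tool use; the `arxiv: <id>` convention here is
# made up for the sketch, not a vLLM or Qwen API
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You can call one tool: arxiv(<id>). "
               "Reply with exactly `arxiv: <id>` when you need a paper."),
    ("human", "{input}"),
])
chain = prompt | llm
print(chain.invoke({"input": "What's the paper 1605.08386 about?"}).content)
```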
### System Info
My Python version is 3.10.
My platform is Linux (CentOS 8).
aiohttp==3.9.3
aioprometheus==23.12.0
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.2.0
arxiv==2.1.0
async-timeout==4.0.3
attrs==23.2.0
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
dataclasses-json==0.6.4
distro==1.9.0
exceptiongroup==1.2.0
fastapi==0.109.2
feedparser==6.0.10
filelock==3.13.1
frozenlist==1.4.1
fsspec==2024.2.0
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.4
httptools==0.6.1
httpx==0.27.0
huggingface-hub==0.20.3
idna==3.6
Jinja2==3.1.3
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
langchain==0.1.5
langchain-community==0.0.24
langchain-core==0.1.28
langchain-openai==0.0.8
langchainhub==0.1.14
langsmith==0.1.10
MarkupSafe==2.1.5
marshmallow==3.20.2
mpmath==1.3.0
msgpack==1.0.7
multidict==6.0.5
mypy-extensions==1.0.0
networkx==3.2.1
ninja==1.11.1.1
numpy==1.26.3
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.18.1
nvidia-nvjitlink-cu12==12.3.101
nvidia-nvtx-cu12==12.1.105
openai==1.13.3
orjson==3.9.15
packaging==23.2
protobuf==4.25.2
psutil==5.9.8
pydantic==2.6.0
pydantic_core==2.16.1
pynvml==11.5.0
python-dotenv==1.0.1
PyYAML==6.0.1
quantile-python==1.1
ray==2.9.1
referencing==0.33.0
regex==2023.12.25
requests==2.31.0
rpds-py==0.17.1
safetensors==0.4.2
sentencepiece==0.1.99
sgmllib3k==1.0.0
sniffio==1.3.0
SQLAlchemy==2.0.25
starlette==0.36.3
sympy==1.12
tenacity==8.2.3
tiktoken==0.6.0
tokenizer==3.4.3
tokenizers==0.15.1
torch==2.1.2
tqdm==4.66.1
transformers==4.37.2
triton==2.1.0
types-requests==2.31.0.20240218
typing-inspect==0.9.0
typing_extensions==4.9.0
urllib3==2.2.0
uvicorn==0.27.0.post1
uvloop==0.19.0
vllm==0.3.0
watchfiles==0.21.0
websockets==12.0
xformers==0.0.23.post1
yarl==1.9.4
| functions call or tools call can not be trigged by LLM using vLLM Chat(Qwen1.5) ? | https://api.github.com/repos/langchain-ai/langchain/issues/18724/comments | 8 | 2024-03-07T08:19:33Z | 2024-07-23T16:07:56Z | https://github.com/langchain-ai/langchain/issues/18724 | 2,173,259,610 | 18,724 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from __future__ import annotations

import math
from typing import List, cast

import numpy as np

from langchain_core.embeddings import Embeddings


# create a `UserDefinedEmbeddings` class that expects each document's text
# to contain an embedding vector in Python format
class UserDefinedEmbeddings(Embeddings):
    def __init__(
        self,
        normalize: bool = False
    ) -> None:
        """langchain Embedding class that allows vectors to be manually
        specified via each document's text.

        Args:
            normalize: If True, normalize all vectors to a unit length of 1.0"""
        super().__init__()
        self.normalize = normalize

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        # evaluate texts into Python objects
        vectors = [eval(text) for text in texts]

        # verify we have vectors of consistent dimension
        if not all(isinstance(vector, list) for vector in vectors):
            raise ValueError('All vectors must be lists')
        if any(len(vector) != len(vectors[0]) for vector in vectors):
            raise ValueError('All vectors must have the same dimension')
        if not all((-1 <= value <= 1) for vector in vectors for value in vector):
            raise ValueError('All vectors must be between -1.0 and +1.0')

        def _normalize_vector(vector):
            """normalize the input vector to a unit length of 1.0"""
            magnitude = math.sqrt(np.sum(np.square(vector)))
            normalized_vector = [element / magnitude for element in vector]
            return normalized_vector

        if self.normalize:
            vectors = [_normalize_vector(vector) for vector in vectors]

        # return them
        return cast(List[List[float]], vectors)

    def embed_query(self, text: str) -> List[float]:
        return self.embed_documents([text])[0]


# create a UserDefinedEmbeddings embedder
userdefined_embeddings = UserDefinedEmbeddings(normalize=True)

# create "documents" that will have hardcoded 2-dimension embedding vectors
text_documents = [
    '[ 1, 0]',
    '[ 1, 1]',
    '[ 0, 1]',
    '[-1, 1]',
    '[-1, 0]',
    '[-1,-1]',
    '[ 0,-1]',
    '[ 1,-1]'
]

# compare those "document" vectors against this 2-dimensional "query" vector
query = '[ 1, 0]'

import langchain_community.vectorstores

# print the default relevance scores for FAISS
faiss_vectorstore = langchain_community.vectorstores.FAISS.from_texts(text_documents, userdefined_embeddings)
results = faiss_vectorstore.similarity_search_with_relevance_scores(query, k=20)
print("FAISS relevance scores:")
for result in results:
    print(f" Relevance of {query} to {result[0].page_content} is {result[1]:.4f}")
print("")

# print the default relevance scores for Chroma DB
chroma_vectorstore = langchain_community.vectorstores.Chroma.from_texts(text_documents, userdefined_embeddings)
results = chroma_vectorstore.similarity_search_with_relevance_scores(query, k=20)
print("Chroma DB relevance scores:")
for result in results:
    print(f" Relevance of {query} to {result[0].page_content} is {result[1]:.4f}")
```
### Error Message and Stack Trace (if applicable)
```
FAISS relevance scores:
Relevance of [ 1, 0] to [ 1, 0] is 1.0000
Relevance of [ 1, 0] to [ 1, 1] is 0.5858
Relevance of [ 1, 0] to [ 1,-1] is 0.5858
Relevance of [ 1, 0] to [ 0, 1] is -0.4142
Relevance of [ 1, 0] to [ 0,-1] is -0.4142
Relevance of [ 1, 0] to [-1, 1] is -1.4142
Relevance of [ 1, 0] to [-1,-1] is -1.4142
Relevance of [ 1, 0] to [-1, 0] is -1.8284
Chroma DB relevance scores:
Relevance of [ 1, 0] to [ 1, 0] is 1.0000
Relevance of [ 1, 0] to [ 1, 1] is 0.5858
Relevance of [ 1, 0] to [ 1,-1] is 0.5858
Relevance of [ 1, 0] to [ 0, 1] is -0.4142
Relevance of [ 1, 0] to [ 0,-1] is -0.4142
Relevance of [ 1, 0] to [-1, 1] is -1.4142
Relevance of [ 1, 0] to [-1,-1] is -1.4142
Relevance of [ 1, 0] to [-1, 0] is -1.8284
```
    /.../langchain_core/vectorstores.py:331: UserWarning: Relevance scores must be between 0 and 1, got [(Document(page_content='[ 1, 0]'), 1.0), (Document(page_content='[ 1, 1]'), 0.5857864626594824), (Document(page_content='[ 1,-1]'), 0.5857864626594824), (Document(page_content='[ 0, 1]'), -0.4142135623730949), (Document(page_content='[ 0,-1]'), -0.4142135623730949), (Document(page_content='[-1, 1]'), -1.414213629552521), (Document(page_content='[-1,-1]'), -1.414213629552521), (Document(page_content='[-1, 0]'), -1.8284271247461898)]
      warnings.warn(
### Description
In the Langchain FAISS and Chroma DB classes, the `DistanceStrategy` is `EUCLIDEAN_DISTANCE`. However, they actually return the *square* of the distance, as described here:
https://github.com/facebookresearch/faiss/wiki/MetricType-and-distances
As a result, relevance scores are computed incorrectly and `similarity_search_with_relevance_scores()` can return negative values unless a user-defined `relevance_score_fn` is given. This is not a desirable out-of-the-box experience.
Perhaps a `EUCLIDEAN_DISTANCE_SQUARED` distance strategy is also needed, such as:
```python
def relevance_score_fn_squared_distance(self, distance: float) -> float:
    """
    Remap a distance-squared value in the range of [0, 4] to a relevance score
    in the range of [1, 0]
    """
    return 1.0 - math.sqrt(distance) / 2
```
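In the meantime, the existing escape hatch can be used (a sketch; it assumes `relevance_score_fn` is forwarded by `from_texts` to the constructor, which appears to be the case but should be verified):
```python
import math

from langchain_community.vectorstores import FAISS

vs = FAISS.from_texts(
    text_documents,
    userdefined_embeddings,
    relevance_score_fn=lambda d: 1.0 - math.sqrt(d) / 2,  # d is the *squared* L2 distance
)
results = vs.similarity_search_with_relevance_scores(query, k=20)  # now within [0, 1]
```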
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Wed Aug 10 16:21:17 UTC 2022
> Python Version: 3.9.2 (default, Apr 30 2021, 04:38:51)
[GCC 8.2.0]
Package Information
-------------------
> langchain_core: 0.1.30
> langchain: 0.1.11
> langchain_community: 0.0.26
> langsmith: 0.1.22
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | FAISS and Chroma DB use a `DistanceStrategy` of "distance" instead of "distance-squared", which results in negative relevance scores | https://api.github.com/repos/langchain-ai/langchain/issues/18709/comments | 1 | 2024-03-07T01:07:52Z | 2024-08-02T16:07:18Z | https://github.com/langchain-ai/langchain/issues/18709 | 2,172,738,864 | 18,709 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Pubmed Tool breaks on the documented example.
https://python.langchain.com/docs/integrations/tools/pubmed
py", line 240, in invoke
return self.run(
callback_manager = CallbackManager.configure(
old_debug = langchain.debug
module 'langchain' has no attribute 'debug'
### Idea or request for content:
_No response_ | DOC: PubMed Tool example breaks | https://api.github.com/repos/langchain-ai/langchain/issues/18704/comments | 0 | 2024-03-06T22:28:06Z | 2024-03-07T01:58:29Z | https://github.com/langchain-ai/langchain/issues/18704 | 2,172,553,258 | 18,704 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
;
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I have a schema that contains regular tables and metadata tables, i.e. helper tables that are connected to the regular ones.
I want to exclude all of the metadata tables and not load them into the SQLAlchemy metadata.
The names of all of them follow a pattern by design, so it would be nice to exclude them by regular expression.
**Note:**
1. I think ignore_tables should accept both an optional list of plain strings and an optional list of regexes:
`Optional[List[str | re.Pattern]]`
2. There may be a use case for doing the same for include_tables. It is worth considering.
3. Using wildcard characters (as in SQL LIKE) could be considered in addition to regexes.
**Workaround** is obvious: load all tables from the schema into SQLDatabase, iterate over db.get_usable_table_names() and filter out the metadata tables, then create a new SQLDatabase with include_tables set to the tables that survived the filtering (see the sketch below). This wastes time, memory, and I/O.
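A sketch of that workaround (the regex pattern is illustrative, not from the original report):
```python
import re

from langchain_community.utilities import SQLDatabase

full_db = SQLDatabase.from_uri(uri)  # `uri` is your connection string
keep = [
    t for t in full_db.get_usable_table_names()
    if not re.match(r".*_meta$", t)  # hypothetical naming pattern for metadata tables
]
db = SQLDatabase.from_uri(uri, include_tables=keep)
```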
### System Info
bash-4.2# python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.10.13 (main, Dec 4 2023, 13:30:46) [GCC 7.3.1 20180712 (Red Hat 7.3.1-17)]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.10
> langchain_experimental: 0.0.50
> langchain_openai: 0.0.2.post1
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | In SQLDatabase change ignore_tables to recieve regex | https://api.github.com/repos/langchain-ai/langchain/issues/18688/comments | 1 | 2024-03-06T18:27:54Z | 2024-06-14T16:08:47Z | https://github.com/langchain-ai/langchain/issues/18688 | 2,172,162,318 | 18,688 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from __future__ import annotations

import typing as t

import streamlit as st
from langchain.callbacks.base import BaseCallbackHandler
from langchain_community.chat_message_histories import StreamlitChatMessageHistory
from langchain_core.messages import AIMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import AzureChatOpenAI


class StreamHandler(BaseCallbackHandler):
    def __init__(
        self,
        container,
        initial_text: str = "",
    ) -> None:
        self.container = container
        self.text = initial_text

    def on_llm_start(
        self,
        serialized: t.Dict[str, t.Any],
        prompts: t.List[str],
        **kwargs: t.Any,
    ) -> t.Any:
        formatted_prompts = "\n".join(prompts)
        print(f"Prompt:\n{formatted_prompts}")

    def on_llm_new_token(
        self,
        token: str,
        **kwargs,
    ) -> None:
        self.text += token
        self.container.markdown(self.text + "▌")

    def on_llm_end(
        self,
        response,
        **kwargs,
    ) -> None:
        self.container.markdown(self.text)
        st.toast("Ready!", icon="🥞")


memory = StreamlitChatMessageHistory(key="langchain_messages")
if len(memory.messages) == 0:
    memory.clear()
    memory.add_message(AIMessage("How can I help you?"))

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", ""),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{input}"),
    ],
)

llm = AzureChatOpenAI(
    azure_endpoint="https://foo.openai.azure.com/",
    deployment_name="GPT-4-NEW",
    openai_api_version="2024-02-15-preview",
    openai_api_key="bar",
    streaming=True,
)

chain = prompt | llm
chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: memory,
    input_messages_key="input",
    history_messages_key="history",
)

for msg in memory.messages:
    st.chat_message(msg.type).write(msg.content)

if user_input := st.chat_input():
    st.chat_message("human").write(user_input)
    with st.chat_message("ai"):
        stream_handler = StreamHandler(st.empty())
        chain_with_history.invoke(
            {"input": user_input},
            config={
                "callbacks": [stream_handler],
                "configurable": {"session_id": "any"},
            },
        )
```
### Error Message and Stack Trace (if applicable)
```
2024-03-07 01:23:00.591 Thread 'ThreadPoolExecutor-1_0': missing ScriptRunContext
2024-03-07 01:23:00.592 Uncaught app exception
Traceback (most recent call last):
File "/Users/qux/.venv/lib/python3.10/site-packages/streamlit/runtime/state/session_state.py", line 398, in __getitem__
return self._getitem(widget_id, key)
File "/Users/qux/.venv/lib/python3.10/site-packages/streamlit/runtime/state/session_state.py", line 443, in _getitem
raise KeyError
KeyError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/qux/.venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "/Users/qux/demo.py", line 88, in <module>
chain_with_history.invoke(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4069, in invoke
return self.bound.invoke(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4069, in invoke
return self.bound.invoke(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2075, in invoke
input = step.invoke(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4069, in invoke
return self.bound.invoke(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/passthrough.py", line 419, in invoke
return self._call_with_config(self._invoke, input, config, **kwargs)
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1262, in _call_with_config
context.run(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/passthrough.py", line 406, in _invoke
**self.mapper.invoke(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2712, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2712, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
File "/Users/jenrey/.pyenv/versions/3.10.12/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/Users/jenrey/.pyenv/versions/3.10.12/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/Users/jenrey/.pyenv/versions/3.10.12/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4069, in invoke
return self.bound.invoke(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3523, in invoke
return self._call_with_config(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1262, in _call_with_config
context.run(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3397, in _invoke
output = call_func_with_variable_args(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/history.py", line 409, in _enter_history
return hist.messages.copy()
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_community/chat_message_histories/streamlit.py", line 32, in messages
return st.session_state[self._key]
File "/Users/qux/.venv/lib/python3.10/site-packages/streamlit/runtime/state/session_state_proxy.py", line 90, in __getitem__
return get_session_state()[key]
File "/Users/qux/.venv/lib/python3.10/site-packages/streamlit/runtime/state/safe_session_state.py", line 91, in __getitem__
return self._state[key]
File "/Users/qux/.venv/lib/python3.10/site-packages/streamlit/runtime/state/session_state.py", line 400, in __getitem__
raise KeyError(_missing_key_error_message(key))
KeyError: 'st.session_state has no key "langchain_messages". Did you forget to initialize it? More info: https://docs.streamlit.io/library/advanced-features/session-state#initialization'
```
### Description
* I use `streamlit` for front-end page rendering, and I want to use `StreamlitChatMessageHistory` to store the chat history.
* I installed `langchain==0.1.9` in February and ran the above code without any problems; everything went smoothly. When I reinstalled `langchain==0.1.9` in March, problems arose!
### System Info
* macOS 12.7.3
* Python 3.10.13
* langchain==0.1.11
* langchain-openai==0.0.6
* streamlit==1.31.1
# Supplement the test results (March)
* In order to make the testing more rigorous and reduce the workload of project developers, I have done extra tests.
* I tested the same code with `langchain==0.1.0` and `streamlit==1.30.0`, and the error is as follows:
```
2024-03-07 02:05:17.076 Uncaught app exception
Traceback (most recent call last):
File "/Users/bar/.venv/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "/Users/bar/demo.py", line 53, in <module>
memory.add_message(AIMessage("How can I help you?"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Serializable.__init__() takes 1 positional argument but 2 were given
```
To solve this non-critical error, I replaced the code of the line that reported the error, as shown below.
```python
memory.add_ai_message("Please describe your requirements")
```
At this point, everything is fine!
* As a supplement, I also tested `langchain==0.1.1`, and it still runs smoothly.
### Everything is normal.
- `langchain==0.1.7`
- `streamlit==1.31.1`
- `Python 3.10.12`
The code on the first floor runs smoothly and achieves the final expected effect. There have been no problems at all.
### An error occurred
* `langchain==0.1.8` and `langchain==0.1.9` and `langchain==0.1.10`
* `streamlit==1.31.1`
* `Python 3.10.12`
The program has encountered an error, the error message is as previously shown.
## Conjectures
* Based on the multiple tests above, I suspect that starting from `langchain==0.1.8` the source code of langchain was updated, leading to this problem.
* From this, one would infer that the problem starts at `langchain==0.1.8`; however, it's not that simple. I will give the final conclusion at the end.
# Final conclusion (2024.03.08)
```yaml
name : langchain
version : 0.1.7
dependencies:
- langchain-community >=0.0.20,<0.1
- langchain-core >=0.1.22,<0.2
name : langchain
version : 0.1.9
dependencies:
- langchain-community >=0.0.21,<0.1
- langchain-core >=0.1.26,<0.2
name : langchain
version : 0.1.10
dependencies:
- langchain-community >=0.0.25,<0.1
- langchain-core >=0.1.28,<0.2
name : langchain
version : 0.1.11
dependencies:
- langchain-community >=0.0.25,<0.1
- langchain-core >=0.1.29,<0.2
```
* As shown above, this is about the dependency of `langchain`.
* After repeated testing, I found that the apparent cause of the problem is `langchain-core`: in February, when I used `langchain==0.1.9`, the latest `langchain-core` at the time (`0.1.27`) was installed; `langchain-core==0.1.28` was released in March, so from March onwards reinstalling `langchain==0.1.9` pulled in a changed `langchain-core`, and this error started occurring.
* Then I went to look at the `langchain-core` code; the changes were small, and I found nothing wrong. At this moment, my mind was already in a storm...
* I began to shift my focus towards `langchain-community`, and in the end I found:
```toml
langchain-community = "0.0.25"
langchain-openai = "0.0.6"
langchain = "0.1.10"
```
```toml
langchain-community = "0.0.25"
langchain-openai = "0.0.6"
langchain = "0.1.9"
```
In these two environments, the program will report an error.
In the following environment, the program will not report an error:
```toml
langchain-community = "0.0.24"
langchain-openai = "0.0.6"
langchain = "0.1.9"
```
## Conclusion
* Conjecture 1: An error occurred in the test code when `langchain-core==0.1.28`. Please check if `langchain-core==0.1.28` has affected the `StreamlitChatMessageHistory` class.
* Conjecture 2 (Most likely): `langchain-community==0.0.25` was released on Mar 2, 2024, so when I reinstalled `langchain==0.1.9` in March, the latest `langchain-community==0.0.25` at that time would be installed, and some changes in `langchain-community==0.0.25` affected the normal execution of the program.
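One hedged reading of the traceback (an assumption to verify): the history getter is executed inside a `ThreadPoolExecutor` (the `RunnableParallel` frames above), and Streamlit's `st.session_state` is only reachable from a thread that carries a `ScriptRunContext`, which would explain the "missing ScriptRunContext" warning. A minimal sketch of that suspected failure mode, to be run inside a Streamlit app:
```python
# touching st.session_state from a worker thread has no ScriptRunContext,
# mirroring what RunnableParallel does when it resolves the chat history
from concurrent.futures import ThreadPoolExecutor

import streamlit as st

def read_history():
    return st.session_state["langchain_messages"]  # fails off the script thread

with ThreadPoolExecutor() as pool:
    pool.submit(read_history).result()  # reproduces the warning plus KeyError
```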
Thank you
| ‼️`langchain-community = "0.0.25"` Milestones: missing ScriptRunContext | https://api.github.com/repos/langchain-ai/langchain/issues/18684/comments | 7 | 2024-03-06T17:47:08Z | 2024-03-10T06:16:24Z | https://github.com/langchain-ai/langchain/issues/18684 | 2,172,093,703 | 18,684 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
!pip -q install langchain openai tiktoken pypdf InstructorEmbedding faiss-cpu
!pip install sentence-transformers==2.2.2

from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders import DirectoryLoader
from InstructorEmbedding import INSTRUCTOR
from langchain.embeddings import HuggingFaceInstructEmbeddings

from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
root_dir = "/content/gdrive/MyDrive"

path = "/content/gdrive/MyDrive/Documents/inek"
loader = DirectoryLoader(path, glob="./*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200)
texts = text_splitter.split_documents(documents)
len(texts)
texts[3]

import pickle
import faiss
from langchain.vectorstores import FAISS
from langchain.embeddings import HuggingFaceInstructEmbeddings

instructor_embeddings = HuggingFaceInstructEmbeddings(model_name="asafaya/kanarya-750m")

db_instructEmbedd = FAISS.from_documents(texts, instructor_embeddings)
retriever = db_instructEmbedd.as_retriever(search_kwargs={"k": 3})
docs = retriever.get_relevant_documents("Ketosiz hastalığı nedir kimlerde görülür?")

api_key = "sk-ac2d3761ef1e42e2aa18c4fc9cb381ec"
base_url = "https://api.deepseek.com/v1"
client = OpenAI(api_key=api_key, base_url=base_url)

qa_chain_deepseek = RetrievalQA.from_chain_type(
    llm=client,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True
)

query = "Ketosiz hastalığı nedir kimlerde görülür?"
llm_response = qa_chain_deepseek(query)
print(llm_response["result"])
```
### Error Message and Stack Trace (if applicable)
NotFoundError Traceback (most recent call last)
[<ipython-input-26-792341640681>](https://localhost:8080/#) in <cell line: 16>()
14
15 query = "Ketosiz hastalığı nedir kimlerde görülür?"
---> 16 llm_response = qa_chain_deepseek(query)
17 print(llm_response["result"])
30 frames
[/usr/local/lib/python3.10/dist-packages/openai/_base_client.py](https://localhost:8080/#) in _request(self, cast_to, options, remaining_retries, stream, stream_cls)
978
979 log.debug("Re-raising status error")
--> 980 raise self._make_status_error_from_response(err.response) from None
981
982 return self._process_response(
NotFoundError: Error code: 404 - {'detail': 'Not Found'}
### Description
* I'm getting NotFoundError: Error code: 404 - {'detail': 'Not Found'}.
* Please help me understand why I'm getting this error.
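A hedged guess at the 404 (an assumption, not verified): the `OpenAI` LLM class calls the legacy `/v1/completions` endpoint, which the DeepSeek API may not serve; a chat-model wrapper targets `/v1/chat/completions` instead.
```python
# sketch only; "deepseek-chat" is assumed to be the served model name
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    api_key=api_key,
    base_url="https://api.deepseek.com/v1",
    model="deepseek-chat",
)
qa_chain_deepseek = RetrievalQA.from_chain_type(
    llm=llm, chain_type="stuff", retriever=retriever, return_source_documents=True
)
```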
### System Info
* | NotFoundError: Error code: 404 - {'detail': 'Not Found'} | https://api.github.com/repos/langchain-ai/langchain/issues/18652/comments | 2 | 2024-03-06T10:48:25Z | 2024-06-20T12:12:06Z | https://github.com/langchain-ai/langchain/issues/18652 | 2,171,198,924 | 18,652 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
When attempting to use the ConversationSummaryBufferMemory.save_context method in the LangChain library, I encountered a NotImplementedError. This error occurs specifically when calling get_num_tokens_from_messages() for the model gpt-35-turbo-16k. I am trying to integrate the AzureChatOpenAI model with a custom memory buffer to manage conversation states but faced this issue during implementation.
To reproduce the error
```
from langchain.chat_models.azure_openai import AzureChatOpenAI
from langchain.memory import ConversationSummaryBufferMemory
llm = AzureChatOpenAI(
openai_api_version=constant.OPENAI_API_VERSION,
azure_endpoint=constant.OPENAI_API_BASE,
api_key=constant.OPENAI_API_KEY,
model=constant.OPENAI_GPT_MODEL_NAME,
azure_deployment=constant.OPENAI_GPT_DEPLOYMENT_NAME,
temperature=0,
)
memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=10)
memory.save_context({"input": "hi"}, {"output": "whats up"})
```
### Error Message and Stack Trace (if applicable)
NotImplementedError: get_num_tokens_from_messages() is not presently implemented for model gpt-35-turbo-16k.
See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens.
### Description
Initialize AzureChatOpenAI with the model gpt-35-turbo-16k.
Create an instance of ConversationSummaryBufferMemory with a small max_token_limit (e.g., 10 tokens).
Attempt to save a context using memory.save_context({"input": "hi"}, {"output": "whats up"}).
### System Info
- langchain==0.1.11
- Python version: 3.11
- Operating System: Windows
| NotImplementedError in ConversationSummaryBufferMemory.save_context with get_num_tokens_from_messages | https://api.github.com/repos/langchain-ai/langchain/issues/18650/comments | 2 | 2024-03-06T10:36:38Z | 2024-06-27T16:07:49Z | https://github.com/langchain-ai/langchain/issues/18650 | 2,171,176,379 | 18,650 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
    from langchain_community.chat_models import ChatAnthropic

    memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True, output_key="answer")
    callback = StreamHandler(st.empty())
    llm = ChatAnthropic(temperature=0.5, max_tokens=1000, model_name="claude-3-opus-20240229", streaming=True, callbacks=[callback])
    qa = ConversationalRetrievalChain.from_llm(llm=llm, retriever=vector_store.as_retriever(), memory=memory, verbose=True)
    result = qa.invoke({"question": prompt, "chat_history": [(message["role"], message["content"]) for message in st.session_state.messages]})
    full_response = result["answer"]
### Error Message and Stack Trace (if applicable)
anthropic error 400 "claude-3-opus-20240229" is not supported on this API. Please use the Messages API instead.
### Description
Once I switch `model_name` from claude-2 to claude-3-opus-20240229, I get the following error:
anthropic error 400 "claude-3-opus-20240229" is not supported on this API. Please use the Messages API instead.
I tried updating langchain to the latest version, but it did not resolve the issue.
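A hedged fix sketch (the assumption, matching the error text: claude-3 models require Anthropic's Messages API, which the community `ChatAnthropic` legacy-completions wrapper does not use, while the dedicated `langchain-anthropic` package does):
```python
# pip install -U langchain-anthropic
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-3-opus-20240229",
    temperature=0.5,
    max_tokens=1000,
)
```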
### System Info
langchain==0.1.6
langchain-community==0.0.19
langchain-core==0.1.23
langchain-experimental==0.0.50
langchain-openai==0.0.5
langchainhub==0.1.13
anthropic==0.10.0
Windows11 | anthropic error 400 "claude-3-opus-20240229" is not supported on this API. Please use the Messages API instead. | https://api.github.com/repos/langchain-ai/langchain/issues/18646/comments | 7 | 2024-03-06T10:16:19Z | 2024-03-08T21:16:21Z | https://github.com/langchain-ai/langchain/issues/18646 | 2,171,135,553 | 18,646 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
    from langchain_community.llms import HuggingFaceEndpoint

    llm = HuggingFaceEndpoint(
        endpoint_url="http://0.0.0.0:8080/",
        max_new_tokens=512,
        top_k=10,
        top_p=0.95,
        typical_p=0.95,
        temperature=0.01,
        repetition_penalty=1.03,
        huggingfacehub_api_token="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"
    )

    from langchain.schema import (
        HumanMessage,
        SystemMessage,
    )
    from langchain_community.chat_models.huggingface import ChatHuggingFace

    messages = [
        SystemMessage(content="You're a helpful assistant"),
        HumanMessage(
            content="What happens when an unstoppable force meets an immovable object?"
        ),
    ]

    chat_model = ChatHuggingFace(llm=llm)
### Error Message and Stack Trace (if applicable)
{
"name": "ValueError",
"message": "Failed to resolve model_id:Could not find model id for inference server: http://0.0.0.0:8080/Make sure that your Hugging Face token has access to the endpoint.",
"stack": "---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[25], line 14
5 from langchain_community.chat_models.huggingface import ChatHuggingFace
7 messages = [
8 SystemMessage(content=\"You're a helpful assistant\"),
9 HumanMessage(
10 content=\"What happens when an unstoppable force meets an immovable object?\"
11 ),
12 ]
---> 14 chat_model = ChatHuggingFace(llm=llm)
File ~/miniconda3/envs/api_mapping/lib/python3.9/site-packages/langchain_community/chat_models/huggingface.py:55, in ChatHuggingFace.__init__(self, **kwargs)
51 super().__init__(**kwargs)
53 from transformers import AutoTokenizer
---> 55 self._resolve_model_id()
57 self.tokenizer = (
58 AutoTokenizer.from_pretrained(self.model_id)
59 if self.tokenizer is None
60 else self.tokenizer
61 )
File ~/miniconda3/envs/api_mapping/lib/python3.9/site-packages/langchain_community/chat_models/huggingface.py:155, in ChatHuggingFace._resolve_model_id(self)
152 self.model_id = endpoint.repository
154 if not self.model_id:
--> 155 raise ValueError(
156 \"Failed to resolve model_id:\"
157 f\"Could not find model id for inference server: {endpoint_url}\"
158 \"Make sure that your Hugging Face token has access to the endpoint.\"
159 )
ValueError: Failed to resolve model_id:Could not find model id for inference server: http://0.0.0.0:8080/Make sure that your Hugging Face token has access to the endpoint."
}
### Description
I tried to create a ChatHuggingFace model from a Hugging Face Text Generation Inference server (serving my local model) and got this error:

    ValueError: Failed to resolve model_id: Could not find model id for inference server: http://0.0.0.0:8080/ Make sure that your Hugging Face token has access to the endpoint.
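A hedged workaround sketch (whether an explicit `model_id` bypasses `_resolve_model_id` depends on the `langchain-community` version, so treat this as an assumption to verify; the model id below is illustrative for whatever your TGI server is serving):
```python
from transformers import AutoTokenizer

chat_model = ChatHuggingFace(
    llm=llm,
    model_id="Qwen/Qwen1.5-7B-Chat",  # illustrative: the repo id of the served model
    tokenizer=AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B-Chat"),
)
```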
### System Info
absl-py==1.4.0
accelerate==0.26.1
aiofiles==23.2.1
aiohttp @ file:///home/conda/feedstock_root/build_artifacts/aiohttp_1689804989543/work
aiosignal @ file:///home/conda/feedstock_root/build_artifacts/aiosignal_1667935791922/work
altair==5.0.1
annotated-types==0.5.0
antlr4-python3-runtime==4.9.3
anyio==3.7.1
appdirs==1.4.4
asttokens @ file:///home/conda/feedstock_root/build_artifacts/asttokens_1670263926556/work
async-timeout @ file:///home/conda/feedstock_root/build_artifacts/async-timeout_1691763562544/work
attrs @ file:///home/conda/feedstock_root/build_artifacts/attrs_1683424013410/work
auto-gptq==0.6.0
autoawq==0.2.2
autoawq_kernels==0.0.6
backcall @ file:///home/conda/feedstock_root/build_artifacts/backcall_1592338393461/work
backports.functools-lru-cache @ file:///home/conda/feedstock_root/build_artifacts/backports.functools_lru_cache_1687772187254/work
beautifulsoup4==4.12.2
bigjson==1.0.9
bitsandbytes==0.42.0
black==23.7.0
brotlipy @ file:///home/conda/feedstock_root/build_artifacts/brotlipy_1666764672617/work
cachetools @ file:///home/conda/feedstock_root/build_artifacts/cachetools_1633010882559/work
certifi @ file:///home/conda/feedstock_root/build_artifacts/certifi_1700303426725/work/certifi
cffi @ file:///home/conda/feedstock_root/build_artifacts/cffi_1671179360775/work
charset-normalizer @ file:///home/conda/feedstock_root/build_artifacts/charset-normalizer_1688813409104/work
cleanlab==2.5.0
click==8.1.7
cloudpickle==3.0.0
cmake==3.27.2
colorama @ file:///home/conda/feedstock_root/build_artifacts/colorama_1666700638685/work
coloredlogs==15.0.1
comm @ file:///home/conda/feedstock_root/build_artifacts/comm_1691044910542/work
contourpy==1.1.0
cryptography @ file:///home/conda/feedstock_root/build_artifacts/cryptography-split_1695163786734/work
cycler @ file:///home/conda/feedstock_root/build_artifacts/cycler_1635519461629/work
Cython==0.29.37
dataclasses-json==0.6.3
datasets==2.14.4
DateTime==5.4
debugpy @ file:///home/conda/feedstock_root/build_artifacts/debugpy_1691021247994/work
decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1641555617451/work
dict==2020.12.3
dill==0.3.7
distro==1.8.0
docker-pycreds==0.4.0
docstring-parser==0.15
einops==0.7.0
et-xmlfile==1.1.0
evaluate==0.4.0
exceptiongroup==1.1.3
executing @ file:///home/conda/feedstock_root/build_artifacts/executing_1667317341051/work
fastapi==0.101.1
ffmpy==0.3.1
filelock==3.12.2
fire==0.5.0
fonttools @ file:///home/conda/feedstock_root/build_artifacts/fonttools_1692542611950/work
frozenlist @ file:///home/conda/feedstock_root/build_artifacts/frozenlist_1695377824562/work
fsspec==2023.6.0
future==0.18.3
fvcore==0.1.5.post20221221
gdown==4.7.1
gekko==1.0.6
gitdb==4.0.10
GitPython==3.1.32
google-api-core @ file:///home/conda/feedstock_root/build_artifacts/google-api-core-split_1653881570487/work
google-api-python-client @ file:///home/conda/feedstock_root/build_artifacts/google-api-python-client_1695664297279/work
google-auth==2.23.3
google-auth-httplib2 @ file:///home/conda/feedstock_root/build_artifacts/google-auth-httplib2_1694516804909/work
google-auth-oauthlib==1.1.0
googleapis-common-protos @ file:///home/conda/feedstock_root/build_artifacts/googleapis-common-protos-feedstock_1690830130005/work
gradio==3.40.1
gradio_client==0.4.0
greenlet==3.0.1
grpcio==1.59.0
h11==0.14.0
hdbscan==0.8.33
htmlmin==0.1.12
httpcore==0.17.3
httplib2 @ file:///home/conda/feedstock_root/build_artifacts/httplib2_1679483503307/work
httpx==0.24.1
huggingface-hub==0.20.3
humanfriendly==10.0
hydra-core==1.3.2
idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1663625384323/work
ImageHash @ file:///home/conda/feedstock_root/build_artifacts/imagehash_1664371213222/work
importlib-metadata @ file:///home/conda/feedstock_root/build_artifacts/importlib-metadata_1688754491823/work
importlib-resources==6.0.1
iopath==0.1.9
ipykernel @ file:///home/conda/feedstock_root/build_artifacts/ipykernel_1693880262622/work
ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1685727741709/work
ipywidgets @ file:///home/conda/feedstock_root/build_artifacts/ipywidgets_1694607144474/work
itables @ file:///home/conda/feedstock_root/build_artifacts/itables_1692399918721/work
jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1690896916983/work
Jinja2 @ file:///home/conda/feedstock_root/build_artifacts/jinja2_1654302431367/work
jiwer==3.0.3
joblib @ file:///home/conda/feedstock_root/build_artifacts/joblib_1691577114857/work
json-lines==0.5.0
jsonlines==4.0.0
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.19.0
jsonschema-specifications==2023.7.1
jupyter_client @ file:///home/conda/feedstock_root/build_artifacts/jupyter_client_1687700988094/work
jupyter_core @ file:///home/conda/feedstock_root/build_artifacts/jupyter_core_1686775603087/work
jupyterlab-widgets @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_widgets_1694598704522/work
kiwisolver==1.4.4
langchain==0.1.11
langchain-community==0.0.25
langchain-core==0.1.29
langchain-text-splitters==0.0.1
langsmith==0.1.22
linkify-it-py==2.0.2
lit==16.0.6
llvmlite==0.41.1
loralib==0.1.1
Markdown==3.5
markdown-it-py==2.2.0
MarkupSafe @ file:///home/conda/feedstock_root/build_artifacts/markupsafe_1685769048265/work
marshmallow==3.20.1
matplotlib @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-suite_1661440538658/work
matplotlib-inline @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-inline_1660814786464/work
mdit-py-plugins==0.3.3
mdurl==0.1.2
mock==5.1.0
mpmath==1.3.0
msal==1.26.0
multidict @ file:///home/conda/feedstock_root/build_artifacts/multidict_1672339396340/work
multimethod @ file:///home/conda/feedstock_root/build_artifacts/multimethod_1603129052241/work
multiprocess==0.70.15
munkres==1.1.4
mypy-extensions==1.0.0
nb-conda-kernels @ file:///home/conda/feedstock_root/build_artifacts/nb_conda_kernels_1667060622050/work
neo4j==5.16.0
nest-asyncio @ file:///home/conda/feedstock_root/build_artifacts/nest-asyncio_1664684991461/work
networkx @ file:///home/conda/feedstock_root/build_artifacts/networkx_1680692919326/work
nltk==3.8.1
nose==1.3.7
numba==0.58.1
numpy @ file:///home/conda/feedstock_root/build_artifacts/numpy_1668919081525/work
nvidia-cublas-cu11==11.10.3.66
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu11==11.7.101
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu11==8.5.0.96
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu11==10.9.0.58
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu11==10.2.10.91
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu11==11.4.0.1
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu11==11.7.4.91
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu11==2.14.3
nvidia-nccl-cu12==2.18.1
nvidia-nvjitlink-cu12==12.3.101
nvidia-nvtx-cu11==11.7.91
nvidia-nvtx-cu12==12.1.105
oauth2client==4.1.3
oauthlib==3.2.2
omegaconf==2.3.0
openai==0.28.0
opencv-python==4.8.1.78
openpyxl==3.1.2
optimum==1.17.1
optimum-intel==1.15.2
orjson==3.9.15
packaging==23.2
pandas==1.5.3
pandas-profiling @ file:///home/conda/feedstock_root/build_artifacts/pandas-profiling_1674670576924/work
parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1638334955874/work
pathspec==0.11.2
pathtools==0.1.2
patsy @ file:///home/conda/feedstock_root/build_artifacts/patsy_1665356157073/work
peft==0.8.2
pexpect @ file:///home/conda/feedstock_root/build_artifacts/pexpect_1667297516076/work
phik @ file:///home/conda/feedstock_root/build_artifacts/phik_1670564192669/work
pickleshare @ file:///home/conda/feedstock_root/build_artifacts/pickleshare_1602536217715/work
Pillow==10.0.0
platformdirs @ file:///home/conda/feedstock_root/build_artifacts/platformdirs_1690813113769/work
portalocker==2.8.2
prompt-toolkit @ file:///home/conda/feedstock_root/build_artifacts/prompt-toolkit_1688565951714/work
promptlayer==0.4.0
protobuf==3.20.3
psutil @ file:///home/conda/feedstock_root/build_artifacts/psutil_1681775019467/work
ptyprocess @ file:///home/conda/feedstock_root/build_artifacts/ptyprocess_1609419310487/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
pure-eval @ file:///home/conda/feedstock_root/build_artifacts/pure_eval_1642875951954/work
py-vncorenlp==0.1.4
pyArango==2.0.2
pyarrow==12.0.1
pyasn1 @ file:///home/conda/feedstock_root/build_artifacts/pyasn1_1694615621498/work
pyasn1-modules @ file:///home/conda/feedstock_root/build_artifacts/pyasn1-modules_1695107857548/work
pycocotools==2.0.7
pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1636257122734/work
pydantic @ file:///home/conda/feedstock_root/build_artifacts/pydantic_1690476225427/work
pydantic_core==2.6.1
PyDrive==1.3.1
pydub==0.25.1
Pygments @ file:///home/conda/feedstock_root/build_artifacts/pygments_1691408637400/work
pyjnius==1.6.0
PyJWT==2.8.0
pynndescent==0.5.11
pyOpenSSL @ file:///home/conda/feedstock_root/build_artifacts/pyopenssl_1685514481738/work
pyparsing==3.0.9
PySocks @ file:///home/conda/feedstock_root/build_artifacts/pysocks_1661604839144/work
python-arango==7.9.0
python-crfsuite==0.9.9
python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/python-dateutil_1626286286081/work
python-multipart==0.0.6
pytz==2023.3
pyu2f @ file:///home/conda/feedstock_root/build_artifacts/pyu2f_1604248910016/work
PyWavelets @ file:///home/conda/feedstock_root/build_artifacts/pywavelets_1673082327051/work
PyYAML @ file:///home/conda/feedstock_root/build_artifacts/pyyaml_1692737146376/work
pyzmq @ file:///home/conda/feedstock_root/build_artifacts/pyzmq_1691667452339/work
rapidfuzz==3.5.2
referencing==0.30.2
regex==2023.8.8
requests @ file:///home/conda/feedstock_root/build_artifacts/requests_1680286922386/work
requests-oauthlib==1.3.1
requests-toolbelt==1.0.0
responses==0.18.0
rich==13.7.0
rouge==1.0.1
rouge-score==0.1.2
rpds-py==0.9.2
rsa @ file:///home/conda/feedstock_root/build_artifacts/rsa_1658328885051/work
safetensors==0.4.2
scikit-learn==1.3.0
scipy==1.11.2
seaborn @ file:///home/conda/feedstock_root/build_artifacts/seaborn-split_1672497695270/work
semantic-version==2.10.0
sentence-transformers==2.2.2
sentencepiece==0.1.99
sentry-sdk==1.29.2
seqeval==1.2.2
setproctitle==1.3.2
shtab==1.6.4
simplejson==3.19.2
six @ file:///home/conda/feedstock_root/build_artifacts/six_1620240208055/work
skorch==0.15.0
smmap==5.0.0
sniffio==1.3.0
soupsieve==2.5
SQLAlchemy==2.0.23
stack-data @ file:///home/conda/feedstock_root/build_artifacts/stack_data_1669632077133/work
starlette==0.27.0
statsmodels @ file:///croot/statsmodels_1676643798791/work
sympy==1.12
tabulate==0.9.0
tangled-up-in-unicode @ file:///home/conda/feedstock_root/build_artifacts/tangled-up-in-unicode_1632832610704/work
tenacity==8.2.3
tensorboard==2.15.0
tensorboard-data-server==0.7.2
termcolor==2.3.0
text-generation==0.6.1
threadpoolctl==3.2.0
tiktoken==0.5.2
tokenize-rt==5.2.0
tokenizers==0.15.2
tomli==2.0.1
toolz==0.12.0
torch==2.1.2
torchinfo==1.8.0
torchvision==0.16.2
tornado @ file:///home/conda/feedstock_root/build_artifacts/tornado_1692311754787/work
tqdm @ file:///home/conda/feedstock_root/build_artifacts/tqdm_1662214488106/work
traitlets @ file:///home/conda/feedstock_root/build_artifacts/traitlets_1675110562325/work
transformers==4.37.0
trash-cli==0.23.2.13.2
triton==2.1.0
trl==0.7.4
typeguard @ file:///home/conda/feedstock_root/build_artifacts/typeguard_1658932097418/work
typing==3.7.4.3
typing-inspect==0.9.0
typing_extensions==4.10.0
tyro==0.5.17
tzdata==2023.3
uc-micro-py==1.0.2
umap-learn==0.5.5
underthesea==6.7.0
underthesea_core==1.0.4
unicodedata2 @ file:///home/conda/feedstock_root/build_artifacts/unicodedata2_1667239485250/work
uritemplate @ file:///home/conda/feedstock_root/build_artifacts/uritemplate_1634152692041/work
urllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1678635778344/work
uvicorn==0.23.2
values==2020.12.3
visions @ file:///home/conda/feedstock_root/build_artifacts/visions_1638743854326/work
wandb==0.15.12
wcwidth @ file:///home/conda/feedstock_root/build_artifacts/wcwidth_1673864653149/work
websockets==11.0.3
Werkzeug==3.0.1
widgetsnbextension @ file:///home/conda/feedstock_root/build_artifacts/widgetsnbextension_1694598693908/work
xxhash==3.3.0
yacs==0.1.8
yarl @ file:///home/conda/feedstock_root/build_artifacts/yarl_1685191803031/work
zipp @ file:///home/conda/feedstock_root/build_artifacts/zipp_1689374466814/work
zope.interface==6.1
zstandard==0.22.0 | Failed to resolve model_id:Could not find model id for inference server | https://api.github.com/repos/langchain-ai/langchain/issues/18639/comments | 2 | 2024-03-06T09:35:43Z | 2024-08-04T16:06:20Z | https://github.com/langchain-ai/langchain/issues/18639 | 2,171,052,417 | 18,639 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain_openai import AzureChatOpenAI

api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
api_key = os.getenv("AZURE_OPENAI_API_KEY")
# NOTE: the trailing comma below makes `api_version` a one-element tuple, not a string
api_version = os.getenv("OPENAI_API_VERSION"),
model_name = 'gpt-4'  # replace with the model deployment name

llm = AzureChatOpenAI(
    api_key=api_key,
    azure_endpoint=api_base,
    azure_deployment=model_name,
    openai_api_version=api_version,
    temperature=0
)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
  File "d:\Develop\CodeProjects\chatgpt4v-demo\langchainApp.py", line 11, in <module>
    llm = AzureChatOpenAI(
  File "D:\Develop\anaconda3\lib\site-packages\langchain_core\load\serializable.py", line 120, in __init__
    super().__init__(**kwargs)
  File "pydantic\main.py", line 339, in pydantic.main.BaseModel.__init__
  File "pydantic\main.py", line 1100, in pydantic.main.validate_model
  File "D:\Develop\anaconda3\lib\site-packages\langchain_openai\chat_models\azure.py", line 125, in validate_environment
    values["openai_api_version"] = values["openai_api_version"] or os.getenv(
KeyError: 'openai_api_version'
```
### Description
I want to instantiate `AzureChatOpenAI`, but construction fails with `KeyError: 'openai_api_version'` inside `validate_environment`.
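For reference, a construction that sidesteps both pitfalls (a sketch; the version string is illustrative, not a confirmed fix): pass the API version as a plain string with no trailing comma, and export `OPENAI_API_VERSION` so the validator can also read it from the environment:
```python
import os
from langchain_openai import AzureChatOpenAI

os.environ.setdefault("OPENAI_API_VERSION", "2023-05-15")  # illustrative version string
llm = AzureChatOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    azure_deployment="gpt-4",
    openai_api_version="2023-05-15",  # plain string, no trailing comma
    temperature=0,
)
```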
### System Info
libs:
langchain 0.1.11
langchain-community 0.0.25
langchain-core 0.1.29
langchain-openai 0.0.8
langchain-text-splitters 0.0.1
openai 1.13.3
platform:
Windows x64
python version: 3.10.13 | KeyError: 'openai_api_version' | https://api.github.com/repos/langchain-ai/langchain/issues/18632/comments | 2 | 2024-03-06T07:15:12Z | 2024-03-07T01:47:34Z | https://github.com/langchain-ai/langchain/issues/18632 | 2,170,808,965 | 18,632 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
retriever = elastic_search_db.as_retriever(
    search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.5}
)
docs = retriever.get_relevant_documents(query)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The `similarity_score_threshold` search type is not working. When I change `search_type` to `similarity`, retrieval works, but the returned documents are then not filtered by the score threshold.
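To see what the threshold is being compared against, the raw relevance scores can be inspected directly (a minimal sketch, assuming `elastic_search_db` is the same vector store instance as above):
```python
docs_and_scores = elastic_search_db.similarity_search_with_relevance_scores(query, k=4)
for doc, score in docs_and_scores:
    print(round(score, 3), doc.page_content[:80])
```
If the scores printed here all fall below 0.5, the threshold retriever returning nothing would be expected behavior rather than a bug.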
### System Info
langchain version: latest
python 3.10 | ElasticSearch retreiver issue based on score_threshold | https://api.github.com/repos/langchain-ai/langchain/issues/18623/comments | 3 | 2024-03-06T05:33:12Z | 2024-06-19T16:07:58Z | https://github.com/langchain-ai/langchain/issues/18623 | 2,170,673,569 | 18,623 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain import hub
from langchain.agents import (
AgentExecutor,
create_openai_tools_agent,
)
from langchain.tools import tool
from langchain_community.callbacks import get_openai_callback
from langchain_openai import AzureChatOpenAI
llm = AzureChatOpenAI(
api_key="...",
azure_deployment="...",
azure_endpoint="...",
api_version="2024-02-15-preview",
streaming=False,
)
@tool
def multiply(a: int, b: int) -> int:
"""Multiply two numbers."""
return a * b
prompt = hub.pull("hwchase17/openai-tools-agent")
tools = [multiply]
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
)
with get_openai_callback() as cb:
resp = agent_executor.invoke({"input": "What's 3 multiplied by 4?"})
print(cb)
assert cb.total_tokens > 0
```
### Error Message and Stack Trace (if applicable)
The assertion in the minimum working example fails as the token count is zero.
### Description
I am trying to track token usage with an Azure OpenAI tools agent, but the token count always returns zero.
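To isolate where the usage is dropped, a bare model call inside the same callback context can be checked first (a minimal sanity check; a non-zero count here would point at the agent/tool path rather than the model integration):
```python
with get_openai_callback() as cb:
    llm.invoke("Say hi.")
print(cb.total_tokens)  # expected to be > 0 for a plain chat completion
```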
### System Info
```
langchain==0.1.11
langchain-community==0.0.25
langchain-core==0.1.29
langchain-openai==0.0.6
langchain-text-splitters==0.0.1
langchainhub==0.1.15
langchain-openai==0.0.6
openai==1.13.3
``` | Token count is not reported for openai tools agent | https://api.github.com/repos/langchain-ai/langchain/issues/18617/comments | 2 | 2024-03-06T00:53:47Z | 2024-03-07T16:54:05Z | https://github.com/langchain-ai/langchain/issues/18617 | 2,170,419,330 | 18,617 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
from langchain_together.llms import Together
from langchain_together.embeddings import TogetherEmbeddings as LangchainTogetherEmbeddings
```
### Error Message and Stack Trace (if applicable)
File "/Users/aabor/projects/rag/llm_select.py", line 7, in <module>
from langchain_together.llms import Together
File "/Users/aabor/projects/rag/venv/lib/python3.11/site-packages/langchain_together/__init__.py", line 1, in <module>
from langchain_together.embeddings import TogetherEmbeddings
File "/Users/aabor/projects/rag/venv/lib/python3.11/site-packages/langchain_together/embeddings.py", line 10, in <module>
class TogetherEmbeddings(BaseModel, Embeddings):
File "/Users/aabor/projects/rag/venv/lib/python3.11/site-packages/langchain_together/embeddings.py", line 23, in TogetherEmbeddings
_client: together.Together
^^^^^^^^^^^^^^^^^
AttributeError: module 'together' has no attribute 'Together'
### Description
The example code from the LangChain documentation raises an error.
I just want to import classes from the `langchain_together` package.
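One plausible cause (an assumption, not confirmed) is a version mismatch between `langchain-together` and the `together` SDK, since the failing import expects a `Together` client class on the module. A quick check:
```python
import importlib.metadata

import together

print(importlib.metadata.version("together"))  # installed SDK version
print(hasattr(together, "Together"))  # False here reproduces the AttributeError above
```
If the second line prints `False`, upgrading the SDK (`pip install --upgrade together`) may resolve the mismatch.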
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:31:00 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6020
> Python Version: 3.11.6 (v3.11.6:8b6ee5ba3b, Oct 2 2023, 11:18:21) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.11
> langchain_community: 0.0.24
> langsmith: 0.1.21
> langchain_text_splitters: 0.0.1
> langchain_together: 0.0.2.post1
| Integration with TogetherAI does not work properly, unable to import | https://api.github.com/repos/langchain-ai/langchain/issues/18612/comments | 5 | 2024-03-05T22:36:34Z | 2024-03-06T16:27:53Z | https://github.com/langchain-ai/langchain/issues/18612 | 2,170,257,595 | 18,612 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
_search_kwargs: dict[str, Any] = {"k": k}
_search_kwargs["post_filter_pipeline"] = [{"$project": {"vector": 0}}]
_search_kwargs["pre_filter"] = {'$and': [{'datetime': {'$gte': datetime.datetime(2023, 1, 1, 0, 0)}}]}
vector_collection = VectorMongoCollection()
db: MongoDBAtlasVectorSearch = vector_collection.create_vectorstore()
retriever = db.as_retriever(search_kwargs=_search_kwargs)
```
### Error Message and Stack Trace (if applicable)
pymongo.errors.OperationFailure: Operand type is not supported for $vectorSearch: date, full error: {'ok': 0.0, 'errmsg': 'Operand type is not supported for $vectorSearch: date', 'code': 7828301, 'codeName': 'Location7828301', '$clusterTime': {'clusterTime': Timestamp(1709671160, 1), 'signature': {'hash': b'\x17\x86\xe5\xbe\xd10\x81\xf3\x0e\xe5\xc2\xfc\x9e\xe6\xdf\xe4l\x9c\xb4F', 'keyId': 7314028249355386881}}, 'operationTime': Timestamp(1709671160, 1)}
### Description
I have a MongoDB collection with an Atlas Vector Search index that includes a `datetime` field (both in the collection and in the index) so that I can pre-filter for specific datetime ranges. Currently, any pre-filter I create with a date fails. This was not the case before the switch to `$vectorSearch` within MongoDB (prior to GA), when it was necessary to use a Mongo Atlas Search index with `$knnBeta`.
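As a possible interim workaround until date operands are supported, the timestamp could be stored as a number and pre-filtered numerically (a sketch under that assumption; `datetime_epoch` is a hypothetical field name that would have to exist in both the collection and the index):
```python
import datetime

cutoff = datetime.datetime(2023, 1, 1).timestamp()
_search_kwargs = {
    "k": 4,
    "pre_filter": {"$and": [{"datetime_epoch": {"$gte": cutoff}}]},  # numeric operand
    "post_filter_pipeline": [{"$project": {"vector": 0}}],
}
retriever = db.as_retriever(search_kwargs=_search_kwargs)
```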
### System Info
```
❯ python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:20 PDT 2023; root:xnu-8796.121.3~7/RELEASE_ARM64_T6000
> Python Version: 3.12.2 (main, Feb 22 2024, 15:15:24) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.14
> langchain_mongodb: 0.1.0
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
``` | MongoDB Atlas Vector Search Does Not Support Date Pre Filters | https://api.github.com/repos/langchain-ai/langchain/issues/18604/comments | 1 | 2024-03-05T20:56:30Z | 2024-05-29T13:11:01Z | https://github.com/langchain-ai/langchain/issues/18604 | 2,170,130,825 | 18,604 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from typing import TypedDict, Annotated, List, Union
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.messages import BaseMessage
import operator
from langchain.tools import BaseTool, StructuredTool, Tool, tool
from langgraph.prebuilt.tool_executor import ToolExecutor
import random
@tool("lower_case", return_direct=True)
def to_lower_case(input:str) -> str:
"""Returns the input as all lower case."""
return input.lower()
@tool("random_number", return_direct=True)
def random_number_maker(input:str) -> str:
"""Returns a random number between 0-100."""
return random.randint(0, 100)
tools = [to_lower_case,random_number_maker]
tool_executor = ToolExecutor(tools)
from langchain import hub
from langchain_community.llms import Bedrock
from langchain.chat_models import BedrockChat
from langchain_experimental.llms.anthropic_functions import AnthropicFunctions
from langchain_core.utils.function_calling import format_tool_to_openai_function
llm = BedrockChat(model_id="anthropic.claude-v2:1", model_kwargs={"temperature": 0.1})
base_model = AnthropicFunctions(llm=llm)
functions = [format_tool_to_openai_function(t) for t in tools]
model = base_model.bind(functions=functions)
from typing import TypedDict, Annotated, Sequence
import operator
from langchain_core.messages import BaseMessage
class AgentState(TypedDict):
messages: Annotated[Sequence[BaseMessage], operator.add]
from langchain_core.agents import AgentFinish
from langgraph.prebuilt import ToolInvocation
import json
from langchain_core.messages import FunctionMessage
# Define the function that determines whether to continue or not
def should_continue(state):
messages = state['messages']
last_message = messages[-1]
# If there is no function call, then we finish
if "function_call" not in last_message.additional_kwargs:
return "end"
# Otherwise if there is, we continue
else:
return "continue"
# Define the function that calls the model
def call_model(state):
messages = state['messages']
response = model.invoke(messages)
# We return a list, because this will get added to the existing list
return {"messages": [response]}
# Define the function to execute tools
def call_tool(state):
messages = state['messages']
# Based on the continue condition
# we know the last message involves a function call
last_message = messages[-1]
# We construct an ToolInvocation from the function_call
action = ToolInvocation(
tool=last_message.additional_kwargs["function_call"]["name"],
tool_input=json.loads(last_message.additional_kwargs["function_call"]["arguments"]),
)
print(f"The agent action is {action}")
# We call the tool_executor and get back a response
response = tool_executor.invoke(action)
print(f"The tool result is: {response}")
# We use the response to create a FunctionMessage
function_message = FunctionMessage(content=str(response), name=action.tool)
# We return a list, because this will get added to the existing list
print("**********")
return {"messages": [function_message]}
from langgraph.graph import StateGraph, END
# Define a new graph
workflow = StateGraph(AgentState)
# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", call_tool)
# Set the entrypoint as `agent` where we start
workflow.set_entry_point("agent")
# We now add a conditional edge
workflow.add_conditional_edges(
# First, we define the start node. We use `agent`.
# This means these are the edges taken after the `agent` node is called.
"agent",
# Next, we pass in the function that will determine which node is called next.
should_continue,
# Finally we pass in a mapping.
# The keys are strings, and the values are other nodes.
# END is a special node marking that the graph should finish.
# What will happen is we will call `should_continue`, and then the output of that
# will be matched against the keys in this mapping.
# Based on which one it matches, that node will then be called.
{
# If `tools`, then we call the tool node.
"continue": "action",
# Otherwise we finish.
"end": END
}
)
# We now add a normal edge from `tools` to `agent`.
# This means that after `tools` is called, `agent` node is called next.
workflow.add_edge('action', 'agent')
# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile()
from langchain_core.messages import HumanMessage, SystemMessage
# inputs = {"input": "give me a random number and then write in words and make it lower case", "chat_history": []}
system_message = SystemMessage(content="you are a helpful assistant")
user_01 = HumanMessage(content="give me a random number and then write in words and make it lower case")
# user_01 = HumanMessage(content="plear write 'Merlion' in lower case")
# user_01 = HumanMessage(content="what is a Merlion?")
inputs = {"messages": [system_message,user_01]}
app.invoke(inputs)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[8], line 11
6 # user_01 = HumanMessage(content="plear write 'Merlion' in lower case")
7 # user_01 = HumanMessage(content="what is a Merlion?")
9 inputs = {"messages": [system_message,user_01]}
---> 11 app.invoke(inputs)
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langgraph/pregel/__init__.py:579, in Pregel.invoke(self, input, config, output_keys, input_keys, **kwargs)
569 def invoke(
570 self,
571 input: Union[dict[str, Any], Any],
(...)
576 **kwargs: Any,
577 ) -> Union[dict[str, Any], Any]:
578 latest: Union[dict[str, Any], Any] = None
--> 579 for chunk in self.stream(
580 input,
581 config,
582 output_keys=output_keys if output_keys is not None else self.output,
583 input_keys=input_keys,
584 **kwargs,
585 ):
586 latest = chunk
587 return latest
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langgraph/pregel/__init__.py:615, in Pregel.transform(self, input, config, output_keys, input_keys, **kwargs)
606 def transform(
607 self,
608 input: Iterator[Union[dict[str, Any], Any]],
(...)
613 **kwargs: Any,
614 ) -> Iterator[Union[dict[str, Any], Any]]:
--> 615 for chunk in self._transform_stream_with_config(
616 input,
617 self._transform,
618 config,
619 output_keys=output_keys,
620 input_keys=input_keys,
621 **kwargs,
622 ):
623 yield chunk
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py:1513, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
1511 try:
1512 while True:
-> 1513 chunk: Output = context.run(next, iterator) # type: ignore
1514 yield chunk
1515 if final_output_supported:
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langgraph/pregel/__init__.py:355, in Pregel._transform(self, input, run_manager, config, input_keys, output_keys, interrupt)
348 done, inflight = concurrent.futures.wait(
349 futures,
350 return_when=concurrent.futures.FIRST_EXCEPTION,
351 timeout=self.step_timeout,
352 )
354 # interrupt on failure or timeout
--> 355 _interrupt_or_proceed(done, inflight, step)
357 # apply writes to channels
358 _apply_writes(
359 checkpoint, channels, pending_writes, config, step + 1
360 )
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langgraph/pregel/__init__.py:698, in _interrupt_or_proceed(done, inflight, step)
696 inflight.pop().cancel()
697 # raise the exception
--> 698 raise exc
699 # TODO this is where retry of an entire step would happen
701 if inflight:
702 # if we got here means we timed out
File /opt/conda/envs/langchain/lib/python3.10/concurrent/futures/thread.py:58, in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py:4069, in RunnableBindingBase.invoke(self, input, config, **kwargs)
4063 def invoke(
4064 self,
4065 input: Input,
4066 config: Optional[RunnableConfig] = None,
4067 **kwargs: Optional[Any],
4068 ) -> Output:
-> 4069 return self.bound.invoke(
4070 input,
4071 self._merge_configs(config),
4072 **{**self.kwargs, **kwargs},
4073 )
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py:2075, in RunnableSequence.invoke(self, input, config)
2073 try:
2074 for i, step in enumerate(self.steps):
-> 2075 input = step.invoke(
2076 input,
2077 # mark each step as a child run
2078 patch_config(
2079 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
2080 ),
2081 )
2082 # finish the root run
2083 except BaseException as e:
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py:3523, in RunnableLambda.invoke(self, input, config, **kwargs)
3521 """Invoke this runnable synchronously."""
3522 if hasattr(self, "func"):
-> 3523 return self._call_with_config(
3524 self._invoke,
3525 input,
3526 self._config(config, self.func),
3527 **kwargs,
3528 )
3529 else:
3530 raise TypeError(
3531 "Cannot invoke a coroutine function synchronously."
3532 "Use `ainvoke` instead."
3533 )
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py:1262, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
1258 context = copy_context()
1259 context.run(var_child_runnable_config.set, child_config)
1260 output = cast(
1261 Output,
-> 1262 context.run(
1263 call_func_with_variable_args,
1264 func, # type: ignore[arg-type]
1265 input, # type: ignore[arg-type]
1266 config,
1267 run_manager,
1268 **kwargs,
1269 ),
1270 )
1271 except BaseException as e:
1272 run_manager.on_chain_error(e)
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/config.py:326, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
324 if run_manager is not None and accepts_run_manager(func):
325 kwargs["run_manager"] = run_manager
--> 326 return func(input, **kwargs)
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py:3397, in RunnableLambda._invoke(self, input, run_manager, config, **kwargs)
3395 output = chunk
3396 else:
-> 3397 output = call_func_with_variable_args(
3398 self.func, input, config, run_manager, **kwargs
3399 )
3400 # If the output is a runnable, invoke it
3401 if isinstance(output, Runnable):
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/config.py:326, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
324 if run_manager is not None and accepts_run_manager(func):
325 kwargs["run_manager"] = run_manager
--> 326 return func(input, **kwargs)
Cell In[6], line 28, in call_model(state)
26 def call_model(state):
27 messages = state['messages']
---> 28 response = model.invoke(messages)
29 # We return a list, because this will get added to the existing list
30 return {"messages": [response]}
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py:4069, in RunnableBindingBase.invoke(self, input, config, **kwargs)
4063 def invoke(
4064 self,
4065 input: Input,
4066 config: Optional[RunnableConfig] = None,
4067 **kwargs: Optional[Any],
4068 ) -> Output:
-> 4069 return self.bound.invoke(
4070 input,
4071 self._merge_configs(config),
4072 **{**self.kwargs, **kwargs},
4073 )
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:166, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
155 def invoke(
156 self,
157 input: LanguageModelInput,
(...)
161 **kwargs: Any,
162 ) -> BaseMessage:
163 config = ensure_config(config)
164 return cast(
165 ChatGeneration,
--> 166 self.generate_prompt(
167 [self._convert_input(input)],
168 stop=stop,
169 callbacks=config.get("callbacks"),
170 tags=config.get("tags"),
171 metadata=config.get("metadata"),
172 run_name=config.get("run_name"),
173 **kwargs,
174 ).generations[0][0],
175 ).message
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:544, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
536 def generate_prompt(
537 self,
538 prompts: List[PromptValue],
(...)
541 **kwargs: Any,
542 ) -> LLMResult:
543 prompt_messages = [p.to_messages() for p in prompts]
--> 544 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:408, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
406 if run_managers:
407 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 408 raise e
409 flattened_outputs = [
410 LLMResult(generations=[res.generations], llm_output=res.llm_output)
411 for res in results
412 ]
413 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:398, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
395 for i, m in enumerate(messages):
396 try:
397 results.append(
--> 398 self._generate_with_cache(
399 m,
400 stop=stop,
401 run_manager=run_managers[i] if run_managers else None,
402 **kwargs,
403 )
404 )
405 except BaseException as e:
406 if run_managers:
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:577, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
573 raise ValueError(
574 "Asked to cache, but no cache found at `langchain.cache`."
575 )
576 if new_arg_supported:
--> 577 return self._generate(
578 messages, stop=stop, run_manager=run_manager, **kwargs
579 )
580 else:
581 return self._generate(messages, stop=stop, **kwargs)
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_experimental/llms/anthropic_functions.py:180, in AnthropicFunctions._generate(self, messages, stop, run_manager, **kwargs)
176 if "function_call" in kwargs:
177 raise ValueError(
178 "if `function_call` provided, `functions` must also be"
179 )
--> 180 response = self.model.predict_messages(
181 messages, stop=stop, callbacks=run_manager, **kwargs
182 )
183 completion = cast(str, response.content)
184 if forced:
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:145, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
143 warned = True
144 emit_warning()
--> 145 return wrapped(*args, **kwargs)
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:747, in BaseChatModel.predict_messages(self, messages, stop, **kwargs)
745 else:
746 _stop = list(stop)
--> 747 return self(messages, stop=_stop, **kwargs)
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:145, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
143 warned = True
144 emit_warning()
--> 145 return wrapped(*args, **kwargs)
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:691, in BaseChatModel.__call__(self, messages, stop, callbacks, **kwargs)
683 @deprecated("0.1.7", alternative="invoke", removal="0.2.0")
684 def __call__(
685 self,
(...)
689 **kwargs: Any,
690 ) -> BaseMessage:
--> 691 generation = self.generate(
692 [messages], stop=stop, callbacks=callbacks, **kwargs
693 ).generations[0][0]
694 if isinstance(generation, ChatGeneration):
695 return generation.message
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:408, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
406 if run_managers:
407 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 408 raise e
409 flattened_outputs = [
410 LLMResult(generations=[res.generations], llm_output=res.llm_output)
411 for res in results
412 ]
413 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:398, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
395 for i, m in enumerate(messages):
396 try:
397 results.append(
--> 398 self._generate_with_cache(
399 m,
400 stop=stop,
401 run_manager=run_managers[i] if run_managers else None,
402 **kwargs,
403 )
404 )
405 except BaseException as e:
406 if run_managers:
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:577, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
573 raise ValueError(
574 "Asked to cache, but no cache found at `langchain.cache`."
575 )
576 if new_arg_supported:
--> 577 return self._generate(
578 messages, stop=stop, run_manager=run_manager, **kwargs
579 )
580 else:
581 return self._generate(messages, stop=stop, **kwargs)
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/bedrock.py:112, in BedrockChat._generate(self, messages, stop, run_manager, **kwargs)
110 else:
111 provider = self._get_provider()
--> 112 prompt = ChatPromptAdapter.convert_messages_to_prompt(
113 provider=provider, messages=messages
114 )
116 params: Dict[str, Any] = {**kwargs}
117 if stop:
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/bedrock.py:32, in ChatPromptAdapter.convert_messages_to_prompt(cls, provider, messages)
27 @classmethod
28 def convert_messages_to_prompt(
29 cls, provider: str, messages: List[BaseMessage]
30 ) -> str:
31 if provider == "anthropic":
---> 32 prompt = convert_messages_to_prompt_anthropic(messages=messages)
33 elif provider == "meta":
34 prompt = convert_messages_to_prompt_llama(messages=messages)
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/anthropic.py:64, in convert_messages_to_prompt_anthropic(messages, human_prompt, ai_prompt)
61 if not isinstance(messages[-1], AIMessage):
62 messages.append(AIMessage(content=""))
---> 64 text = "".join(
65 _convert_one_message_to_text(message, human_prompt, ai_prompt)
66 for message in messages
67 )
69 # trim off the trailing ' ' that might come from the "Assistant: "
70 return text.rstrip()
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/anthropic.py:65, in <genexpr>(.0)
61 if not isinstance(messages[-1], AIMessage):
62 messages.append(AIMessage(content=""))
64 text = "".join(
---> 65 _convert_one_message_to_text(message, human_prompt, ai_prompt)
66 for message in messages
67 )
69 # trim off the trailing ' ' that might come from the "Assistant: "
70 return text.rstrip()
File /opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/anthropic.py:41, in _convert_one_message_to_text(message, human_prompt, ai_prompt)
39 message_text = content
40 else:
---> 41 raise ValueError(f"Got unknown type {message}")
42 return message_text
ValueError: Got unknown type content='67' name='random_number'
```
### Description
I am trying to use AWS Bedrock models (such as `anthropic.claude-v2:1`) with LangGraph but encountered the error above. It seems that the `_convert_one_message_to_text` function for Anthropic models cannot consume a `FunctionMessage`.
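A possible interim workaround (a sketch, not a confirmed fix) is to convert each `FunctionMessage` into a plain `HumanMessage` before calling the Bedrock model, so the Anthropic prompt converter only ever sees message types it knows:
```python
from langchain_core.messages import FunctionMessage, HumanMessage

def _sanitize_for_bedrock(messages):
    return [
        HumanMessage(content=f"Tool `{m.name}` returned: {m.content}")
        if isinstance(m, FunctionMessage)
        else m
        for m in messages
    ]

def call_model(state):
    response = model.invoke(_sanitize_for_bedrock(state["messages"]))
    return {"messages": [response]}
```
This loses the dedicated function-message role, but it keeps the tool result in the prompt until the converter handles `FunctionMessage` natively.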
### System Info
latest langchain v0.1.11 | ValueError: Got unknown type content when using AWS Bedrock for LangGraph | https://api.github.com/repos/langchain-ai/langchain/issues/18598/comments | 3 | 2024-03-05T19:22:23Z | 2024-06-23T16:09:19Z | https://github.com/langchain-ai/langchain/issues/18598 | 2,169,985,942 | 18,598 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import Optional
import uuid

import gradio as gr
import redis
from langchain.document_loaders import DirectoryLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import BM25Retriever, EnsembleRetriever
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Qdrant
from langchain_community.chat_message_histories import RedisChatMessageHistory
from langchain_community.document_loaders import TextLoader
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnablePassthrough
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

# `rag_chain` and `get_message_history` are defined elsewhere in the application
with_message_history = RunnableWithMessageHistory(
    rag_chain,
    get_message_history,
    input_messages_key="question",
    history_messages_key="chat_history"
)

question = "question?"
session_id = "foobar"
res = str(with_message_history.invoke(
    {"question": question},
    config={"configurable": {"session_id": session_id}}
))
print(res)
```
### Error Message and Stack Trace (if applicable)
openai.BadRequestError: Error code: 400 - {'error': {'message': "'$.messages[90].content' is invalid.
### Description
I am implementing RAG over custom documents to answer questions, and I have added contextual question answering based on the provided history. The call fails with the 400 error above, which points at one of the stored history messages (`messages[90]`).
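As a first diagnostic step, one could drop history entries whose content is empty or not a plain string before they reach the model (a sketch; `clean_history` is a hypothetical helper, not part of the chain above):
```python
from langchain_core.messages import BaseMessage

def clean_history(messages: list[BaseMessage]) -> list[BaseMessage]:
    # keep only messages whose content is a non-empty string
    return [m for m in messages if isinstance(m.content, str) and m.content.strip()]
```
If the error disappears after filtering, the invalid entry is coming from the stored Redis history rather than from the current question.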
### System Info
langchain==0.1.6
langchain-community==0.0.19
langchain-core==0.1.22
langchain-openai==0.0.6 | openai BadRequestError | https://api.github.com/repos/langchain-ai/langchain/issues/18583/comments | 0 | 2024-03-05T16:00:46Z | 2024-03-05T16:19:45Z | https://github.com/langchain-ai/langchain/issues/18583 | 2,169,597,436 | 18,583 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os

from langchain_openai import AzureChatOpenAI

model = AzureChatOpenAI(
    openai_api_version="2023-05-15",
    api_key=os.getenv("AZURE_OPENAI_KEY"),
    azure_deployment="gpt-4",  # gpt-4-turbo deployment
    model_kwargs={"seed": 42, "logprobs": True},
)
model.invoke("What is the meaning of life?")
```
### Error Message and Stack Trace (if applicable)
```python
---------------------------------------------------------------------------
BadRequestError Traceback (most recent call last)
Cell In[90], line 1
----> 1 model.invoke("What is the meaning of life?")
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:166, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
155 def invoke(
156 self,
157 input: LanguageModelInput,
(...)
161 **kwargs: Any,
162 ) -> BaseMessage:
163 config = ensure_config(config)
164 return cast(
165 ChatGeneration,
--> 166 self.generate_prompt(
167 [self._convert_input(input)],
168 stop=stop,
169 callbacks=config.get("callbacks"),
170 tags=config.get("tags"),
171 metadata=config.get("metadata"),
172 run_name=config.get("run_name"),
173 **kwargs,
174 ).generations[0][0],
175 ).message
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:544, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
536 def generate_prompt(
537 self,
538 prompts: List[PromptValue],
(...)
541 **kwargs: Any,
542 ) -> LLMResult:
543 prompt_messages = [p.to_messages() for p in prompts]
--> 544 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:408, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
406 if run_managers:
407 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 408 raise e
409 flattened_outputs = [
410 LLMResult(generations=[res.generations], llm_output=res.llm_output)
411 for res in results
412 ]
413 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:398, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
395 for i, m in enumerate(messages):
396 try:
397 results.append(
--> 398 self._generate_with_cache(
399 m,
400 stop=stop,
401 run_manager=run_managers[i] if run_managers else None,
402 **kwargs,
403 )
404 )
405 except BaseException as e:
406 if run_managers:
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:577, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
573 raise ValueError(
574 "Asked to cache, but no cache found at `langchain.cache`."
575 )
576 if new_arg_supported:
--> 577 return self._generate(
578 messages, stop=stop, run_manager=run_manager, **kwargs
579 )
580 else:
581 return self._generate(messages, stop=stop, **kwargs)
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:451, in ChatOpenAI._generate(self, messages, stop, run_manager, stream, **kwargs)
445 message_dicts, params = self._create_message_dicts(messages, stop)
446 params = {
447 **params,
448 **({"stream": stream} if stream is not None else {}),
449 **kwargs,
450 }
--> 451 response = self.client.create(messages=message_dicts, **params)
452 return self._create_chat_result(response)
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/openai/_utils/_utils.py:271, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
269 msg = f"Missing required argument: {quote(missing[0])}"
270 raise TypeError(msg)
--> 271 return func(*args, **kwargs)
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/openai/resources/chat/completions.py:659, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
608 @required_args(["messages", "model"], ["messages", "model", "stream"])
609 def create(
610 self,
(...)
657 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
658 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
--> 659 return self._post(
660 "/chat/completions",
661 body=maybe_transform(
662 {
663 "messages": messages,
664 "model": model,
665 "frequency_penalty": frequency_penalty,
666 "function_call": function_call,
667 "functions": functions,
668 "logit_bias": logit_bias,
669 "logprobs": logprobs,
670 "max_tokens": max_tokens,
671 "n": n,
672 "presence_penalty": presence_penalty,
673 "response_format": response_format,
674 "seed": seed,
675 "stop": stop,
676 "stream": stream,
677 "temperature": temperature,
678 "tool_choice": tool_choice,
679 "tools": tools,
680 "top_logprobs": top_logprobs,
681 "top_p": top_p,
682 "user": user,
683 },
684 completion_create_params.CompletionCreateParams,
685 ),
686 options=make_request_options(
687 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
688 ),
689 cast_to=ChatCompletion,
690 stream=stream or False,
691 stream_cls=Stream[ChatCompletionChunk],
692 )
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/openai/_base_client.py:1180, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1166 def post(
1167 self,
1168 path: str,
(...)
1175 stream_cls: type[_StreamT] | None = None,
1176 ) -> ResponseT | _StreamT:
1177 opts = FinalRequestOptions.construct(
1178 method="post", url=path, json_data=body, files=to_httpx_files(files), **options
1179 )
-> 1180 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/openai/_base_client.py:869, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
860 def request(
861 self,
862 cast_to: Type[ResponseT],
(...)
867 stream_cls: type[_StreamT] | None = None,
868 ) -> ResponseT | _StreamT:
--> 869 return self._request(
870 cast_to=cast_to,
871 options=options,
872 stream=stream,
873 stream_cls=stream_cls,
874 remaining_retries=remaining_retries,
875 )
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/openai/_base_client.py:960, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
957 err.response.read()
959 log.debug("Re-raising status error")
--> 960 raise self._make_status_error_from_response(err.response) from None
962 return self._process_response(
963 cast_to=cast_to,
964 options=options,
(...)
967 stream_cls=stream_cls,
968 )
BadRequestError: Error code: 400 - {'error': {'message': "This model does not support the 'logprobs' parameter.", 'type': 'invalid_request_error', 'param': 'logprobs', 'code': None}}
```
### Description
I'm trying to use `AzureChatOpenAI` as an alternative to `ChatOpenAI` while setting the `logprobs` parameter; the model used is `gpt-4-turbo`.
This seems to be related to this issue: https://github.com/openai/openai-python/issues/1080. When adding `logprobs` to the `model_kwargs` I get the following error:
```python
BadRequestError: Error code: 400 - {'error': {'message': "This model does not support the 'logprobs' parameter.", 'type': 'invalid_request_error', 'param': 'logprobs', 'code': None}}
```
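One hypothesis (not verified here) is that `logprobs` is only accepted by newer Azure OpenAI API versions, so bumping `openai_api_version` may be the first thing to try; the version string below is illustrative and should be checked against the Azure OpenAI documentation:
```python
import os

from langchain_openai import AzureChatOpenAI

model = AzureChatOpenAI(
    openai_api_version="2024-02-15-preview",  # hypothetical newer version
    api_key=os.getenv("AZURE_OPENAI_KEY"),
    azure_deployment="gpt-4",
    model_kwargs={"seed": 42, "logprobs": True},
)
```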
### System Info
```python
"pip freeze | grep langchain"
langchain==0.1.9
langchain-community==0.0.21
langchain-core==0.1.26
langchain-openai==0.0.5
````
platform: mac
python version: 3.11 | missing logprobs parameter support for AzureChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/18582/comments | 1 | 2024-03-05T15:38:56Z | 2024-06-27T16:07:44Z | https://github.com/langchain-ai/langchain/issues/18582 | 2,169,549,536 | 18,582 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
vectordb = Milvus.from_documents(
documents=texts,
embedding=instructor_embeddings,
collection_name=f"collection_{collection_name.replace('-', '_')}",
connection_args=self.get_milvus_connection_params(),
index_params=self.config.milvus_configuration.get("index_params"),
search_params=self.config.milvus_configuration.get("search_params"),
timeout=300
)
```
### Error Message and Stack Trace (if applicable)
```
vectordb = Milvus.from_documents(
^^^^^^^^^^^^^^^^^^^^^^
File ".../venv/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 528, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../venv/lib/python3.11/site-packages/langchain_community/vectorstores/milvus.py", line 987, in from_texts
vector_db.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File ".../venv/lib/python3.11/site-packages/langchain_community/vectorstores/milvus.py", line 593, in add_texts
res = self.col.insert(insert_list, timeout=timeout, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: pymilvus.orm.collection.Collection.insert() got multiple values for keyword argument 'timeout'
```
### Description
When passing a `timeout` to the Milvus class via `from_documents`, it collides with the `timeout` keyword that `add_texts` already passes explicitly to `Collection.insert` here:
https://github.com/langchain-ai/langchain/blob/7248e98b9edba60a34e8b0018e7b5c1ee1bbdc76/libs/community/langchain_community/vectorstores/milvus.py#L593
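The failure mode can be reproduced without Milvus at all — it's plain Python kwargs forwarding (a minimal sketch):
```python
# Stand-in for Collection.insert, which takes `timeout` explicitly.
def insert(data, timeout=None, **kwargs):
    return timeout

# from_documents(..., timeout=300) leaves `timeout` inside **kwargs, and
# add_texts then also passes timeout= explicitly to the same call:
kwargs = {"timeout": 300}
insert(["row"], timeout=None, **kwargs)  # TypeError: got multiple values for 'timeout'
```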
### System Info
langchain 0.1.9
langchain-community 0.0.24
langchain-core 0.1.27
langsmith 0.1.9
| Milvus got multiple values for keyword argument 'timeout' | https://api.github.com/repos/langchain-ai/langchain/issues/18580/comments | 0 | 2024-03-05T15:17:09Z | 2024-03-19T03:44:26Z | https://github.com/langchain-ai/langchain/issues/18580 | 2,169,499,293 | 18,580 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Page: https://python.langchain.com/docs/integrations/text_embedding/text_embeddings_inference
The first URL (in **Hugging Face Text Embeddings Inference (TEI)**) should point to this page: https://huggingface.co/docs/text-embeddings-inference/index
### Idea or request for content:
_No response_ | DOC: URL pointing to HF TGI instead of TEI | https://api.github.com/repos/langchain-ai/langchain/issues/18576/comments | 0 | 2024-03-05T10:57:14Z | 2024-03-08T03:38:41Z | https://github.com/langchain-ai/langchain/issues/18576 | 2,168,902,097 | 18,576 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.tracers import ConsoleCallbackHandler

# retrieval_chain and ChatSSEResponse are defined elsewhere in our app.
async def stream_answer(question: str):
    async for token in retrieval_chain().astream(
        {
            "question": question,
            "history": [],
        },
        config={"callbacks": [ConsoleCallbackHandler()]},
    ):  # type: ignore
        yield ChatSSEResponse(type="streaming", value=token).model_dump_json()
```
### Error Message and Stack Trace (if applicable)
<img width="1779" alt="image" src="https://github.com/langchain-ai/langchain/assets/79570011/4b231a27-5328-41b1-8c2a-92758eb85cc3">
### Description
With the given code, we are using `ConsoleCallbackHandler` from `langchain_core.tracers` to print all the inputs, outputs, and API calls that happen behind the scenes.
With this setup, however, the inputs are logged as empty strings. The inputs are logged correctly when we invoke the chain with the normal `invoke` method instead of streaming.
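For contrast, a sketch of the non-streaming call that does log the inputs (using `retrieval_chain` and `question` from the example above):
```python
# Same chain and callback, but invoked synchronously; here the inputs show up.
result = retrieval_chain().invoke(
    {"question": question, "history": []},
    config={"callbacks": [ConsoleCallbackHandler()]},
)
```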
### System Info
python version - 3.11
langchain_core version - 0.1.23
langchain version - 0.1.3 | Input is empty string in ConsoleCallBackHandler outputs | https://api.github.com/repos/langchain-ai/langchain/issues/18567/comments | 2 | 2024-03-05T09:09:30Z | 2024-07-18T16:23:20Z | https://github.com/langchain-ai/langchain/issues/18567 | 2,168,678,480 | 18,567 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain.agents import AgentExecutor, create_react_agent
from langchain.tools import tool
from langchain.llms.bedrock import Bedrock
import boto3
from langchain_core.prompts import PromptTemplate
from langchain import hub
react_prompt_template="""
Answer the following questions as best you can. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
Thought:{agent_scratchpad}
"""
# prompt = hub.pull("hwchase17/react")
prompt = PromptTemplate(
input_variables=["input"],
template=react_prompt_template
)
@tool
def say_hi(name: str) -> str:
"""Say hi to the world"""
return f"hi {name}"
def specify_bedrock_titan_llm():
bedrock_client = boto3.client(
service_name="bedrock-runtime",
region_name="us-east-1",
)
bedrock_llm = Bedrock(
model_id="amazon.titan-text-express-v1",
client=bedrock_client,
model_kwargs={'temperature': 0}
)
return bedrock_llm
if __name__ == '__main__':
llm = specify_bedrock_titan_llm()
agent = create_react_agent(llm, [say_hi], prompt)
agent_executor = AgentExecutor(agent=agent, tools=[say_hi], verbose=True, handle_parsing_errors=True)
result = agent_executor.invoke({"input": "call say_hi function and return the result"})
print(result)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_community/llms/bedrock.py", line 543, in _prepare_input_and_invoke_stream
response = self.client.invoke_model_with_response_stream(**request_options)
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/botocore/client.py", line 553, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/botocore/client.py", line 1009, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the InvokeModelWithResponseStream operation: Malformed input request: string [
Observation] does not match pattern ^(\|+|User:)$, please reformat your input and try again.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/aqiao/Learning/bedrock/langchain-agent/demo2.py", line 58, in <module>
result = agent_executor.invoke({"input": "call say_hi function and return the result"})
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1391, in _call
next_step_output = self._take_next_step(
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in _take_next_step
[
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in <listcomp>
[
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1125, in _iter_next_step
output = self.agent.plan(
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 387, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2446, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2433, in transform
yield from self._transform_stream_with_config(
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1513, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2397, in _transform
for output in final_pipeline:
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1051, in transform
for chunk in input:
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4173, in transform
yield from self.bound.transform(
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1061, in transform
yield from self.stream(final, config, **kwargs)
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 452, in stream
raise e
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 436, in stream
for chunk in self._stream(
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_community/llms/bedrock.py", line 546, in _prepare_input_and_invoke_stream
raise ValueError(f"Error raised by bedrock service: {e}")
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModelWithResponseStream operation: Malformed input request: string [
Observation] does not match pattern ^(\|+|User:)$, please reformat your input and try again.
### Description
I'm using langchain (0.1.10) to interact with AWS Titan Text G1, following the official langchain demo.
Here is the `PromptTemplate`:
```
react_prompt_template="""
Answer the following questions as best you can. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
Thought:{agent_scratchpad}
"""
```
Here is the tool `say_hi` definition
```
@tool
def say_hi(name: str) -> str:
"""Say hi to the world"""
return f"hi {name}"
```
When running the code, it raised below exception
```
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModelWithResponseStream operation: Malformed input request: string [
Observation] does not match pattern ^(\|+|User:)$, please reformat your input and try again.
```
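The pattern in the error suggests Titan only accepts stop sequences matching `^(\|+|User:)$`, while `create_react_agent` binds `stop=["\nObservation"]` by default. A possible workaround sketch, assuming your langchain version exposes the `stop_sequence` keyword:
```python
# Disable the default "\nObservation" stop binding that Titan rejects.
# Trade-off: the model may now hallucinate its own Observation lines.
agent = create_react_agent(llm, [say_hi], prompt, stop_sequence=False)
```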
### System Info
langchain 0.1.10
aws Titan Text G1
langchain agent | Malformed input request: string [ Observation] does not match pattern ^(\|+|User:)$ | https://api.github.com/repos/langchain-ai/langchain/issues/18565/comments | 5 | 2024-03-05T08:23:52Z | 2024-05-16T11:35:32Z | https://github.com/langchain-ai/langchain/issues/18565 | 2,168,596,027 | 18,565 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain_community.document_loaders import TextLoader

loader = TextLoader("state_of_the_union.txt")
loader.load()
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[17], line 2
1 import os
----> 2 from langchain_community.document_loaders import TextLoader
3 loader = TextLoader("state_of_the_union.txt")
4 loader.load()
File ~\AppData\Roaming\Python\Python310\site-packages\langchain_community\document_loaders\__init__.py:190
188 from langchain_community.document_loaders.snowflake_loader import SnowflakeLoader
189 from langchain_community.document_loaders.spreedly import SpreedlyLoader
--> 190 from langchain_community.document_loaders.sql_database import SQLDatabaseLoader
191 from langchain_community.document_loaders.srt import SRTLoader
192 from langchain_community.document_loaders.stripe import StripeLoader
File ~\AppData\Roaming\Python\Python310\site-packages\langchain_community\document_loaders\sql_database.py:10
6 from langchain_community.document_loaders.base import BaseLoader
7 from langchain_community.utilities.sql_database import SQLDatabase
---> 10 class SQLDatabaseLoader(BaseLoader):
11 """
12 Load documents by querying database tables supported by SQLAlchemy.
13
(...)
17 Each document represents one row of the result.
18 """
20 def __init__(
21 self,
22 query: Union[str, sa.Select],
(...)
30 include_query_into_metadata: bool = False,
31 ):
File ~\AppData\Roaming\Python\Python310\site-packages\langchain_community\document_loaders\sql_database.py:22, in SQLDatabaseLoader()
10 class SQLDatabaseLoader(BaseLoader):
11 """
12 Load documents by querying database tables supported by SQLAlchemy.
13
(...)
17 Each document represents one row of the result.
18 """
20 def __init__(
21 self,
---> 22 query: Union[str, sa.Select],
23 db: SQLDatabase,
24 *,
25 parameters: Optional[Dict[str, Any]] = None,
26 page_content_mapper: Optional[Callable[..., str]] = None,
27 metadata_mapper: Optional[Callable[..., Dict[str, Any]]] = None,
28 source_columns: Optional[Sequence[str]] = None,
29 include_rownum_into_metadata: bool = False,
30 include_query_into_metadata: bool = False,
31 ):
32 """
33 Args:
34 query: The query to execute.
(...)
49 expression into the metadata dictionary. Default: False.
50 """
51 self.query = query
---------------------------------------------------------------------------
AttributeError: module 'sqlalchemy' has no attribute 'Select'
### Description
I am new to LangChain and I am stuck on an issue. My end goal is to read the contents of a file and create a vectorstore of my data which I can query later. I'm encountering an error that originates in the `SQLDatabaseLoader` module of the langchain_community package, even though I'm only importing `TextLoader`.
The traceback references an attribute `Select` on the `sqlalchemy` module that it cannot find. I'm not sure why this error occurs, especially since I haven't directly used `sqlalchemy.Select` in my code. I've tried versions 1.3, 1.4, and 2.0 of SQLAlchemy, but I still encounter the same error.
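For what it's worth, `sa.Select` only seems to be exported at the top level from SQLAlchemy 2.0 onward, so a quick sanity check of what the interpreter actually imports may help (a sketch):
```python
import sqlalchemy as sa

print(sa.__version__)         # should be >= 2.0 for langchain_community 0.0.25
print(hasattr(sa, "Select"))  # False here would explain the AttributeError
print(sa.__file__)            # confirms which install is being picked up
```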
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.1 (tags/v3.11.1:a7a450f, Dec 6 2022, 19:58:39) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.29
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.18
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| AttributeError: module 'sqlalchemy' has no attribute 'Select' when using SQLDatabaseLoader in langchain_community | https://api.github.com/repos/langchain-ai/langchain/issues/18552/comments | 6 | 2024-03-05T03:07:49Z | 2024-03-15T23:00:20Z | https://github.com/langchain-ai/langchain/issues/18552 | 2,168,197,280 | 18,552 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import OpenAIModerationChain
moderation_chain = OpenAIModerationChain()
await moderation_chain.ainvoke("This is okay")  # run inside an async context (e.g. a notebook cell)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
* We can't use the `OpenAIModerationChain` within our async chain.
* This PR also fixes https://github.com/langchain-ai/langchain/issues/13685
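Until this lands, a workaround sketch we use is pushing the sync chain onto a thread so it doesn't block the event loop:
```python
import asyncio

from langchain.chains import OpenAIModerationChain

moderation_chain = OpenAIModerationChain()

async def amoderate(text: str) -> dict:
    # Runs the blocking sync implementation off the event loop.
    return await asyncio.to_thread(moderation_chain.invoke, text)
```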
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:54:05 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6031
> Python Version: 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.9
> langchain_community: 0.0.24
> langsmith: 0.1.5
> langchain_experimental: 0.0.53
> langchain_openai: 0.0.8
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | `OpenAIModerationChain` doesn't have an asynchronous implementation | https://api.github.com/repos/langchain-ai/langchain/issues/18533/comments | 0 | 2024-03-04T22:16:33Z | 2024-06-10T16:08:13Z | https://github.com/langchain-ai/langchain/issues/18533 | 2,167,884,379 | 18,533 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Refactored document processing and text extraction
import io
from typing import List

from langchain_community.document_loaders import AmazonTextractPDFLoader


class DocumentProcessor:
    def __init__(self):
        pass

    def convert_to_text(self, s3_url: str) -> List[str]:
        """Downloads the PDF from S3, processes it, and returns the text as a list of strings."""
        print(f"Processing S3 URL: {s3_url}")
        try:
            loader = AmazonTextractPDFLoader(s3_url)
            pages = loader.load_and_split()
            text = [page.page_content for page in pages if page.page_content is not None]
            return text
        except Exception as e:
            print(f"Error processing {s3_url}: {e}")
            return []
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[16], line 10
5 get_ipython().system('pip install SQLAlchemy==1.3')
6 # from sqlalchemy import select
7 # import sqlalchemy as sa
8 # sa.Select
---> 10 from langchain_community.document_loaders import AmazonTextractPDFLoader
12 class DocumentProcessor:
13 def __init__(self):
File /opt/conda/lib/python3.10/site-packages/langchain_community/document_loaders/__init__.py:190
188 from langchain_community.document_loaders.snowflake_loader import SnowflakeLoader
189 from langchain_community.document_loaders.spreedly import SpreedlyLoader
--> 190 from langchain_community.document_loaders.sql_database import SQLDatabaseLoader
191 from langchain_community.document_loaders.srt import SRTLoader
192 from langchain_community.document_loaders.stripe import StripeLoader
File /opt/conda/lib/python3.10/site-packages/langchain_community/document_loaders/sql_database.py:10
6 from langchain_community.document_loaders.base import BaseLoader
7 from langchain_community.utilities.sql_database import SQLDatabase
---> 10 class SQLDatabaseLoader(BaseLoader):
11 """
12 Load documents by querying database tables supported by SQLAlchemy.
13
(...)
17 Each document represents one row of the result.
18 """
20 def __init__(
21 self,
22 query: Union[str, sa.Select],
(...)
30 include_query_into_metadata: bool = False,
31 ):
File /opt/conda/lib/python3.10/site-packages/langchain_community/document_loaders/sql_database.py:22, in SQLDatabaseLoader()
10 class SQLDatabaseLoader(BaseLoader):
11 """
12 Load documents by querying database tables supported by SQLAlchemy.
13
(...)
17 Each document represents one row of the result.
18 """
20 def __init__(
21 self,
---> 22 query: Union[str, sa.Select],
23 db: SQLDatabase,
24 *,
25 parameters: Optional[Dict[str, Any]] = None,
26 page_content_mapper: Optional[Callable[..., str]] = None,
27 metadata_mapper: Optional[Callable[..., Dict[str, Any]]] = None,
28 source_columns: Optional[Sequence[str]] = None,
29 include_rownum_into_metadata: bool = False,
30 include_query_into_metadata: bool = False,
31 ):
32 """
33 Args:
34 query: The query to execute.
(...)
49 expression into the metadata dictionary. Default: False.
50 """
51 self.query = query
AttributeError: module 'sqlalchemy' has no attribute 'Select'
### Description
I'm trying to use the latest version of langchain_community so that I can use Amazon Textract to get text from PDFs. It was working last week but now I am seeing an error from sqlalchemy in the /opt/conda/lib/python3.10/site-packages/langchain_community/document_loaders/sql_database.py file. I tried downgrading sqlalchemy to 1.3 and saw the same error.
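One thing worth noting in the trace: the cell runs `!pip install SQLAlchemy==1.3` inside a live kernel, which neither reloads an already-imported sqlalchemy nor helps here, since 1.3 (and, to my knowledge, 1.4) doesn't export `Select` at the top level. A sketch of what I'd verify after upgrading to 2.x and restarting the kernel:
```python
import sqlalchemy

# Expect a 2.x version here after the restart; older top-level
# namespaces lack `Select` and reproduce the AttributeError above.
print(sqlalchemy.__version__)
```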
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Jan 12 09:58:17 UTC 2024
> Python Version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.16
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | AttributeError: module 'sqlalchemy' has no attribute 'Select' when importing AmazonTextractPDFLoader | https://api.github.com/repos/langchain-ai/langchain/issues/18528/comments | 8 | 2024-03-04T21:38:14Z | 2024-05-22T10:57:13Z | https://github.com/langchain-ai/langchain/issues/18528 | 2,167,816,337 | 18,528 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.chat_models import BedrockChat
from langchain_core.messages import HumanMessage, SystemMessage
chat = BedrockChat(model_id="anthropic.claude-3-sonnet-20240229-v1:0", model_kwargs={"temperature": 0.1}, verbose=True)
messages = [
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="I love programming.")
]
chat.invoke(messages)
```
### Error Message and Stack Trace (if applicable)
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: "claude-3-sonnet-20240229" is not supported on this API. Please use the Messages API instead.
### Description
Currently, the body that is prepared for model invocation uses Completions API instead of Messages API, even though you create an instance of BedrockChat. This can be seen from the source code here:
```python
input_body = LLMInputOutputAdapter.prepare_input(provider, prompt, params)
```
```python
def prepare_input(
cls, provider: str, prompt: str, model_kwargs: Dict[str, Any]
) -> Dict[str, Any]:
input_body = {**model_kwargs}
if provider == "anthropic":
input_body["prompt"] = _human_assistant_format(prompt) # here the Completions API is used instead of Messages API
elif provider in ("ai21", "cohere", "meta"):
input_body["prompt"] = prompt
elif provider == "amazon":
input_body = dict()
input_body["inputText"] = prompt
input_body["textGenerationConfig"] = {**model_kwargs}
else:
input_body["inputText"] = prompt
if provider == "anthropic" and "max_tokens_to_sample" not in input_body:
input_body["max_tokens_to_sample"] = 256
return input_body
```
Unwinding the call stack, ultimately this function is called, which simply combines all the chat messages into a single string:
```python
def convert_messages_to_prompt_anthropic(
messages: List[BaseMessage],
*,
human_prompt: str = "\n\nHuman:",
ai_prompt: str = "\n\nAssistant:",
) -> str:
"""Format a list of messages into a full prompt for the Anthropic model
Args:
messages (List[BaseMessage]): List of BaseMessage to combine.
human_prompt (str, optional): Human prompt tag. Defaults to "\n\nHuman:".
ai_prompt (str, optional): AI prompt tag. Defaults to "\n\nAssistant:".
Returns:
str: Combined string with necessary human_prompt and ai_prompt tags.
"""
messages = messages.copy() # don't mutate the original list
if not isinstance(messages[-1], AIMessage):
messages.append(AIMessage(content=""))
text = "".join(
_convert_one_message_to_text(message, human_prompt, ai_prompt)
for message in messages
)
# trim off the trailing ' ' that might come from the "Assistant: "
return text.rstrip()
```
The new Claude v3 family of models will only support Messages API, therefore none of them will work with the current version of langchain.
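As a stopgap while this is unsupported, a direct boto3 sketch against the Messages API works for Claude 3 (body shape per the Bedrock/Anthropic docs; the values here are just an example):
```python
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "system": "You are a helpful assistant that translates English to French.",
    "messages": [{"role": "user", "content": "I love programming."}],
}
response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```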
### System Info
langchain==0.1.10
langchain-community==0.0.25
langchain-core==0.1.28
langchain-text-splitters==0.0.1
platform: AL2
python 3.12.0 | BedrockChat is not using Messages API for Anthropic v3 models | https://api.github.com/repos/langchain-ai/langchain/issues/18514/comments | 10 | 2024-03-04T18:38:08Z | 2024-03-15T23:23:51Z | https://github.com/langchain-ai/langchain/issues/18514 | 2,167,488,404 | 18,514 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import boto3
from langchain_community.llms import Bedrock
bedrock = boto3.client('bedrock-runtime' , 'us-east-1')
MODEL_KWARGS = {
"anthropic.claude-3-sonnet-20240229-v1:0": {
"temperature": 0,
"top_k": 250,
"top_p": 1,
"max_tokens_to_sample": 2**10
}}
model_id = 'anthropic.claude-3-sonnet-20240229-v1:0'
llm = Bedrock(model_id=model_id, model_kwargs=MODEL_KWARGS[model_id])
llm('tell me a joke')
```
### Error Message and Stack Trace (if applicable)
```
.venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.7 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
Traceback (most recent call last):
File ".venv/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 444, in _prepare_input_and_invoke
response = self.client.invoke_model(**request_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/botocore/client.py", line 553, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/botocore/client.py", line 1009, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: "claude-3-sonnet-20240229" is not supported on this API. Please use the Messages API instead.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 991, in __call__
self.generate(
File ".venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 741, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 605, in _generate_helper
raise e
File ".venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 592, in _generate_helper
self._generate(
File ".venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 1177, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File ".venv/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 718, in _call
return self._prepare_input_and_invoke(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 451, in _prepare_input_and_invoke
raise ValueError(f"Error raised by bedrock service: {e}")
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: "claude-3-sonnet-20240229" is not supported on this API. Please use the Messages API instead.
```
### Description
Claude 3 is obviously brand new, but initial testing with the existing capabilities indicates a change in how these models need to be invoked.
I'd expect these new models to work with existing langchain capabilities as drop-in improvements.
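Concretely, the error points at the Messages API, whose request body differs from the completions shape the `Bedrock` class builds (a sketch of what Claude 3 appears to expect — note `max_tokens` replaces `max_tokens_to_sample`):
```python
# Messages-style body for Claude 3 on Bedrock; the completions-style
# {"prompt": ..., "max_tokens_to_sample": ...} body is rejected outright.
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "tell me a joke"}],
}
```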
### System Info
System Information
------------------
> OS: Linux
> OS Version: #50~20.04.1-Ubuntu SMP Wed Sep 6 17:29:11 UTC 2023
> Python Version: 3.11.4 (main, Aug 9 2023, 21:54:01) [GCC 10.2.1 20210110]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.14
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Bedrock integration with Claude3 fails with ValidationException "claude-3-sonnet-20240229" is not supported on this API. Please use the Messages API instead. | https://api.github.com/repos/langchain-ai/langchain/issues/18513/comments | 8 | 2024-03-04T18:27:57Z | 2024-03-06T23:46:19Z | https://github.com/langchain-ai/langchain/issues/18513 | 2,167,471,081 | 18,513 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from dataclasses import dataclass
from atexit import register as atexit_register
from sqlalchemy.event import listen
from sqlalchemy.engine import Engine
from sqlalchemy import create_engine
from sqlalchemy.sql.elements import quoted_name
from langchain.sql_database import SQLDatabase
@dataclass
class dao:
engine: Engine
DB_SCHEMA: str
def _set_search_path(dbapi_connection, connection_record, **kw):
existing_autocommit = dbapi_connection.autocommit
dbapi_connection.autocommit = True
cursor = dbapi_connection.cursor()
try:
#https://www.postgresql.org/docs/current/ddl-schemas.html#DDL-SCHEMAS-PATH
cursor.execute(f"SET search_path TO {dao.DB_SCHEMA},public;")
#cursor.execute("SET ROLE TO apps_engineer;")
finally:
cursor.close()
dbapi_connection.autocommit = existing_autocommit
def daoInitConfig():
print("daoInitConfig()")
DB_USER = ...
DB_PASSWORD = ...
DB_HOST = ...
DB_PORT = ...
DB_NAME = ...
db_schema = ...
url = f'postgresql+pg8000://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}'
import pg8000 as _pg800
_pg800.paramstyle = 'named'
import pg8000.legacy as _legacy
_legacy.paramstyle = 'named'
import pg8000.dbapi as _dbapi
_dbapi.paramstyle = 'named'
from sqlalchemy.dialects.postgresql.pg8000 import PGDialect_pg8000
PGDialect_pg8000.default_paramstyle = 'named'
engine = create_engine(
url)
atexit_register(engine.dispose)
# Use the listen event to call _set_search_path when a connection is created
#listen(engine, 'connect', _set_search_path)
dao.engine = engine
dao.DB_SCHEMA = quoted_name(db_schema, None)
def main():
daoInitConfig()
db = SQLDatabase(
engine=dao.engine,
schema=dao.DB_SCHEMA,
sample_rows_in_table_info=0
)
result = db.run("select 1")
assert "1" == result
if __name__ == '__main__':
main()
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/var/lang/lib/python3.10/site-packages/pg8000/legacy.py", line 254, in execute
self._context = self._c.execute_unnamed(
File "/var/lang/lib/python3.10/site-packages/pg8000/core.py", line 688, in execute_unnamed
self.handle_messages(context)
File "/var/lang/lib/python3.10/site-packages/pg8000/core.py", line 827, in handle_messages
raise context.error
pg8000.exceptions.DatabaseError: {'S': 'ERROR', 'V': 'ERROR', 'C': '42601', 'M': 'syntax error at or near "%"', 'P': '20', 'F': 'scan.l', 'L': '1236', 'R': 'scanner_yyerror'}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
File "/var/lang/lib/python3.10/site-packages/pg8000/legacy.py", line 281, in execute
raise cls(msg)
pg8000.dbapi.ProgrammingError: {'S': 'ERROR', 'V': 'ERROR', 'C': '42601', 'M': 'syntax error at or near "%"', 'P': '20', 'F': 'scan.l', 'L': '1236', 'R': 'scanner_yyerror'}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/var/lang/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3234, in begin
yield conn
File "/var/lang/lib/python3.10/site-packages/langchain_community/utilities/sql_database.py", line 438, in _execute
connection.exec_driver_sql(
File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1778, in exec_driver_sql
ret = self._execute_context(
File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1848, in _execute_context
return self._exec_single_context(
File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1988, in _exec_single_context
self._handle_dbapi_exception(
File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2343, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
File "/var/lang/lib/python3.10/site-packages/pg8000/legacy.py", line 281, in execute
raise cls(msg)
sqlalchemy.exc.ProgrammingError: (pg8000.dbapi.ProgrammingError) {'S': 'ERROR', 'V': 'ERROR', 'C': '42601', 'M': 'syntax error at or near "%"', 'P': '20', 'F': 'scan.l', 'L': '1236', 'R': 'scanner_yyerror'}
[SQL: SET search_path TO %s]
[parameters: ('rag_ask_george_qa',)]
(Background on this error at: https://sqlalche.me/e/20/f405)
Process finished with exit code 1
### Description
I'm using SQLAlchemy's engine as the main facade to connect to my PostgreSQL DB, with the pg8000 driver and the **named** paramstyle.
So, when _db.run()_ is called, it calls _db._execute()_.
The first thing it does is **break** SQLAlchemy's engine facade and work at the driver level. The code as written assumes the engine uses the **format** paramstyle.
So, **at the very least the code should be fixed to take different paramstyles into account**. I'm talking about _"SET search_path TO %s"_.
Personally, I think breaking the engine interface is inherently wrong; SQLDatabase shouldn't operate at such a granular level.
What **I would prefer to see is a flag that disables this driver-level call entirely; then in application code I can install a listener that sets `search_path` on every "connect" event.**
In the code above, I just need to change
```python
# Use the listen event to call _set_search_path when a connection is created
#listen(engine, 'connect', _set_search_path)
```
to
```python
# Use the listen event to call _set_search_path when a connection is created
listen(engine, 'connect', _set_search_path)
```
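If a driver-agnostic fix is wanted inside SQLDatabase itself, one sketch is to route through PostgreSQL's `set_config()` (a regular SELECT, so it accepts bind parameters under any paramstyle, unlike the `SET` utility statement). Using `engine` and `dao` from the example above:
```python
from sqlalchemy import text

# Equivalent to SET search_path, but parameterizable on every driver.
with engine.begin() as conn:
    conn.execute(
        text("SELECT set_config('search_path', :path, false)"),
        {"path": f"{dao.DB_SCHEMA},public"},
    )
```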
### System Info
bash-4.2# python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.10.13 (main, Dec 4 2023, 13:30:46) [GCC 7.3.1 20180712 (Red Hat 7.3.1-17)]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.10
> langchain_experimental: 0.0.50
> langchain_openai: 0.0.2.post1
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | SQLDatabase doesn't work with named paramstyle | https://api.github.com/repos/langchain-ai/langchain/issues/18512/comments | 1 | 2024-03-04T17:24:00Z | 2024-06-12T16:08:36Z | https://github.com/langchain-ai/langchain/issues/18512 | 2,167,359,938 | 18,512 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Hi Team,
I am trying to use `AzureChatOpenAI` as the LLM inside a ReAct agent. Here is my code:
```
import os

from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain.tools import StructuredTool
from langchain_openai import AzureChatOpenAI


def search_function(query: str) -> str:
    """use to search results corresponding to given query"""
    return "Hello World"

search = StructuredTool.from_function(
    func=search_function,
    name="Search",
    description="useful for when you need to answer questions about current events",
    # coroutine= ... <- you can specify an async method if desired as well
)
tools = [search]

prompt = hub.pull("hwchase17/react")

appkey = "APP_KEY"  ## Please assume I am using the correct key here
llm = AzureChatOpenAI(
    openai_api_version=os.getenv('OPENAI_API_VERSION'),
    azure_deployment="gpt-35-turbo",
    model_kwargs=dict(
        user=f'{{"appkey": "{appkey}"}}'
    )
)

agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "hi"})
agent_executor.invoke({"input": "hi"})
```
Below is the AssertionError that I am getting.
### Error Message and Stack Trace (if applicable)
```
> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
Cell In[17], line 1
----> 1 agent_executor.invoke({"input": "hi"})

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/chains/base.py:162, in Chain.invoke(self, input, config, **kwargs)
    160 except BaseException as e:
    161     run_manager.on_chain_error(e)
--> 162     raise e
    163 run_manager.on_chain_end(outputs)
    164 final_outputs: Dict[str, Any] = self.prep_outputs(
    165     inputs, outputs, return_only_outputs
    166 )

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
    149 run_manager = callback_manager.on_chain_start(
    150     dumpd(self),
    151     inputs,
    152     name=run_name,
    153 )
    154 try:
    155     outputs = (
--> 156         self._call(inputs, run_manager=run_manager)
    157         if new_arg_supported
    158         else self._call(inputs)
    159     )
    160 except BaseException as e:
    161     run_manager.on_chain_error(e)

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/agents/agent.py:1391, in AgentExecutor._call(self, inputs, run_manager)
   1389 # We now enter the agent loop (until it returns something).
   1390 while self._should_continue(iterations, time_elapsed):
-> 1391     next_step_output = self._take_next_step(
   1392         name_to_tool_map,
   1393         color_mapping,
   1394         inputs,
   1395         intermediate_steps,
   1396         run_manager=run_manager,
   1397     )
   1398     if isinstance(next_step_output, AgentFinish):
   1399         return self._return(
   1400             next_step_output, intermediate_steps, run_manager=run_manager
   1401         )

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/agents/agent.py:1097, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1088 def _take_next_step(
   1089     self,
   1090     name_to_tool_map: Dict[str, BaseTool],
   (...)
   1094     run_manager: Optional[CallbackManagerForChainRun] = None,
   1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
   1096     return self._consume_next_step(
-> 1097         [
   1098             a
   1099             for a in self._iter_next_step(
   1100                 name_to_tool_map,
   1101                 color_mapping,
   1102                 inputs,
   1103                 intermediate_steps,
   1104                 run_manager,
   1105             )
   1106         ]
   1107     )

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/agents/agent.py:1097, in <listcomp>(.0)
   (same lines 1088-1107 of _take_next_step as shown above)

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/agents/agent.py:1125, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1122 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
   1124 # Call the LLM to see what to do.
-> 1125 output = self.agent.plan(
   1126     intermediate_steps,
   1127     callbacks=run_manager.get_child() if run_manager else None,
   1128     **inputs,
   1129 )
   1130 except OutputParserException as e:
   1131     if isinstance(self.handle_parsing_errors, bool):

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/agents/agent.py:387, in RunnableAgent.plan(self, intermediate_steps, callbacks, **kwargs)
    381 # Use streaming to make sure that the underlying LLM is invoked in a streaming
    382 # fashion to make it possible to get access to the individual LLM tokens
    383 # when using stream_log with the Agent Executor.
[384](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/agents/agent.py:384) # Because the response from the plan is not a generator, we need to
[385](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/agents/agent.py:385) # accumulate the output into final output and return that.
[386](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/agents/agent.py:386) final_output: Any = None
--> [387](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/agents/agent.py:387) for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
[388](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/agents/agent.py:388) if final_output is None:
[389](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/agents/agent.py:389) final_output = chunk
File [~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2424](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2424), in RunnableSequence.stream(self, input, config, **kwargs)
[2418](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2418) def stream(
[2419](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2419) self,
[2420](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2420) input: Input,
[2421](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2421) config: Optional[RunnableConfig] = None,
[2422](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2422) **kwargs: Optional[Any],
[2423](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2423) ) -> Iterator[Output]:
-> [2424](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2424) yield from self.transform(iter([input]), config, **kwargs)
File [~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2411](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2411), in RunnableSequence.transform(self, input, config, **kwargs)
[2405](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2405) def transform(
[2406](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2406) self,
[2407](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2407) input: Iterator[Input],
[2408](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2408) config: Optional[RunnableConfig] = None,
[2409](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2409) **kwargs: Optional[Any],
[2410](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2410) ) -> Iterator[Output]:
-> [2411](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2411) yield from self._transform_stream_with_config(
[2412](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2412) input,
[2413](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2413) self._transform,
[2414](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2414) patch_config(config, run_name=(config or {}).get("run_name") or self.name),
[2415](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2415) **kwargs,
[2416](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2416) )
File [~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1497](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1497), in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
[1495](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1495) try:
[1496](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1496) while True:
-> [1497](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1497) chunk: Output = context.run(next, iterator) # type: ignore
[1498](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1498) yield chunk
[1499](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1499) if final_output_supported:
File [~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2375](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2375), in RunnableSequence._transform(self, input, run_manager, config)
[2366](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2366) for step in steps:
[2367](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2367) final_pipeline = step.transform(
[2368](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2368) final_pipeline,
[2369](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2369) patch_config(
(...)
[2372](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2372) ),
[2373](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2373) )
-> [2375](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2375) for output in final_pipeline:
[2376](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2376) yield output
File [~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1035](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1035), in Runnable.transform(self, input, config, **kwargs)
[1032](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1032) final: Input
[1033](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1033) got_first_val = False
-> [1035](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1035) for chunk in input:
[1036](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1036) if not got_first_val:
[1037](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1037) final = chunk
File [~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4168](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4168), in RunnableBindingBase.transform(self, input, config, **kwargs)
[4162](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4162) def transform(
[4163](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4163) self,
[4164](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4164) input: Iterator[Input],
[4165](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4165) config: Optional[RunnableConfig] = None,
[4166](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4166) **kwargs: Any,
[4167](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4167) ) -> Iterator[Output]:
-> [4168](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4168) yield from self.bound.transform(
[4169](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4169) input,
[4170](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4170) self._merge_configs(config),
[4171](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4171) **{**self.kwargs, **kwargs},
[4172](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4172) )
File [~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1045](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1045), in Runnable.transform(self, input, config, **kwargs)
[1042](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1042) final = final + chunk # type: ignore[operator]
[1044](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1044) if got_first_val:
-> [1045](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1045) yield from self.stream(final, config, **kwargs)
File [~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:250](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:250), in BaseChatModel.stream(self, input, config, stop, **kwargs)
[243](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:243) except BaseException as e:
[244](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:244) run_manager.on_llm_error(
[245](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:245) e,
[246](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:246) response=LLMResult(
[247](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:247) generations=[[generation]] if generation else []
[248](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:248) ),
[249](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:249) )
--> [250](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:250) raise e
[251](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:251) else:
[252](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:252) run_manager.on_llm_end(LLMResult(generations=[[generation]]))
File [~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:242](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:242), in BaseChatModel.stream(self, input, config, stop, **kwargs)
[240](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:240) else:
[241](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:241) generation += chunk
--> [242](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:242) assert generation is not None
[243](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:243) except BaseException as e:
[244](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:244) run_manager.on_llm_error(
[245](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:245) e,
[246](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:246) response=LLMResult(
[247](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:247) generations=[[generation]] if generation else []
[248](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:248) ),
[249](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:249) )
AssertionError:
```
### Description
Hi Team,
I am trying to use AzureChatOpenAI as the LLM inside a ReAct agent. The AzureChatOpenAI model works fine when I run it on its own, but when I use it as the agent's LLM it throws the AssertionError above (and the bare AssertionError is not descriptive).
Please help me figure out where and what I am missing so I can run this code.
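For reference, a minimal sketch of the kind of setup the traceback points at; the deployment name, API version, and empty tool list are placeholders (not taken from the original report), and whether it reproduces will depend on the Azure deployment:

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="my-deployment",         # placeholder
    openai_api_version="2023-07-01-preview",  # placeholder
)
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, tools=[], prompt=prompt)
executor = AgentExecutor(agent=agent, tools=[], verbose=True)
# invoking reportedly trips the bare `assert generation is not None` above
executor.invoke({"input": "hello"})
```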
### System Info
langchain==0.1.5
langchain-community==0.0.18
langchain-core==0.1.19
langchain-openai==0.0.6
langchainhub==0.1.14 | Non Descriptive Assertion Error while using AzureChatOpenAI model with Agents | https://api.github.com/repos/langchain-ai/langchain/issues/18500/comments | 5 | 2024-03-04T11:39:52Z | 2024-08-09T16:41:40Z | https://github.com/langchain-ai/langchain/issues/18500 | 2,166,636,716 | 18,500 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
db = SQLDatabase.from_databricks(catalog=catalog, schema=database, include_tables=table_names)
llm = ChatDatabricks(
    endpoint="databricks-mixtral-8x7b-instruct",
    model_kwargs={
        "temperature": 0.5,
        "max_tokens": 1000,
        "top_p": 0.7,
        "error_behaviour": "truncate_input_end",
        "num_return_sequences": 1,
    },
)
agent = create_sql_query_chain(llm=llm, prompt=prompt, db=db)
response = agent.invoke(
    {
        "question": q,
        "top_k": 3,
        "table_info": table_info,
        "catalog": catalog,
        "database": database,
        "dialect": db.dialect,
    }
)
print(response)
```
### Error Message and Stack Trace (if applicable)
It's not throwing any error, but it comes up with column names and tables that don't even exist.
### Description
I am using the above code for an NLP-to-SQL project. I am using the Databricks-hosted Mixtral 8x7B Instruct model, passing in the catalog, the database, and a selected list of table names, then passing the NLP question to the model. The model comes up with column/table names that are not part of my table list, and on top of that it appends special characters to column names. A hedged prompt sketch follows the notes below.
Note-
1. I want only the Databricks SQL statement as the output, not the query results, which is why I didn't go the SQL-agent route.
2. Also, since this is Databricks, the tables have no primary or foreign keys — so what does the LangChain DB session assume about relationships?
3. I am using a few-shot template and even passing the table names along with each table's column names and data types, yet the generated SQL still shows incorrect tables/columns.
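As promised above, a hedged sketch of pinning the prompt to the live schema — the template wording and variable names are assumptions, not the reporter's actual few-shot prompt:

```python
from langchain_core.prompts import PromptTemplate

# Pull the real DDL from the live connection so the model only ever sees
# tables and columns that actually exist.
table_info = db.get_table_info()

template = """You are a Databricks SQL expert. Use ONLY these tables and columns:
{table_info}

Write a syntactically correct {dialect} query answering the question below,
returning at most {top_k} rows. Output the SQL statement only.

Question: {question}
SQLQuery:"""
prompt = PromptTemplate.from_template(template)
```

On note 2: as far as I can tell, `SQLDatabase` only reflects the DDL, so a schema without primary/foreign keys gives the chain no join hints at all — any expected joins have to be spelled out in the few-shot examples.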
### System Info
langchain, databricks-sql-connector, and mlflow==2.9.0 are the libraries I am using in my code.
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI
from langchain_experimental.agents import create_csv_agent

_model_name = "Qwen-1_5-72B-Chat-Int4"
_openai_api_key = "EMPTY"
_openai_api_base = "http://localhost:2783/v1"
llm = ChatOpenAI(
    model_name=_model_name,
    openai_api_key=_openai_api_key,
    openai_api_base=_openai_api_base,
)

df = "./24022710201.csv"
agent = create_csv_agent(llm, df, verbose=True)
agent.invoke({"input": "What is the median value of the date"})
```
### Error Message and Stack Trace (if applicable)
```
Entering new AgentExecutor chain...
Thought: We can use the .shape attribute of the dataframe to get the number of rows. The .shape attribute returns a tuple where the first element is the number of rows.
Action: python_repl_ast
Action Input: df.shape[0]
ObservNameError: name 'Observ' is not definedThere seems to be an error in the output, but the command is correct. Let's try again.
Action: python_repl_ast
Action Input: df.shape[0]
ObservNameError: name 'Observ' is not definedIt appears the output is still not correct. I'll try a different approach by counting the rows directly.
Action: python_repl_ast
Action Input: len(df)
ObservNameError: name 'Observ' is not definedThe output is still not showing correctly. It seems there's an issue with the response system. However, the commands to retrieve the number of rows (df.shape[0] or len(df)) are correct. I'll have to assume that if these commands were executed in a normal Python environment, they would return the correct result.
Final Answer: The correct command to find the number of rows in the dataframe is `df.shape[0]` or `len(df)`. If executed, these commands would return the count of rows.
```
### Description
An inexplicable "Observ" fragment keeps appearing in the observation slot, and it does not look like text the model itself replied with. My guess is a string-concatenation problem around the prompt that create_react_agent builds — for example, the "Observation:" stop sequence being cut mid-word.
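One hedged workaround to try while the root cause is investigated: have the executor route malformed completions (like the dangling "Observ" fragment) back to the model instead of executing them. `agent_executor_kwargs` is forwarded to the underlying pandas agent in langchain_experimental; whether this masks the problem in your setup is an assumption:

```python
# feed parsing errors back to the LLM rather than into the Python REPL
agent = create_csv_agent(
    llm,
    df,
    verbose=True,
    agent_executor_kwargs={"handle_parsing_errors": True},
)
```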
### System Info
langchain 0.1.10
langchain-community 0.0.25
langchain-core 0.1.28
langchain-experimental 0.0.51
langchain-openai 0.0.8
langchain-text-splitters 0.0.1 | NameError: name 'Observ' is not defined in create_csv_agent or create_pandas_dataframe_agent | https://api.github.com/repos/langchain-ai/langchain/issues/18484/comments | 6 | 2024-03-04T07:12:29Z | 2024-07-28T00:11:57Z | https://github.com/langchain-ai/langchain/issues/18484 | 2,166,101,137 | 18,484 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Similarity search with filter NOT working:
redis.add_texts(texts, metadatas, embeddings)
redis.add_documents(documents=docs)

# Similarity search with filter working:
Redis.from_texts_return_keys(
    [doc.page_content.lower() for doc in chunks],
    embedding,
    metadatas=[doc.metadata for doc in chunks],
    index_name=index_name,
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Hello,
I'm implementing similarity search with the Redis vector store. When I embed my data with add_texts or add_documents, the filters don't work, even though I've verified that all the metadata was written to Redis correctly. When I switch to from_texts_return_keys, everything works as expected. Methods that are supposed to produce the same result should behave consistently.
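A hedged guess at the difference, with a sketch of one way to line the two paths up: `from_texts_return_keys` infers a metadata schema when it creates the index, while a bare store handle does not, so the filter fields never get indexed. Declaring the schema explicitly should make `add_texts`/`add_documents` write into filterable fields — the field names below are placeholders for whatever keys the metadata actually uses:

```python
from langchain_community.vectorstores.redis import Redis

redis = Redis(
    redis_url="redis://localhost:6379",
    index_name=index_name,
    embedding=embedding,
    # declare metadata fields up front so they are indexed as filterable
    index_schema={"tag": [{"name": "source"}], "text": [{"name": "title"}]},
)
redis.add_documents(documents=docs)
```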
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:28:58 PST 2023; root:xnu-10002.81.5~7/RELEASE_X86_64
> Python Version: 3.11.6 (main, Nov 2 2023, 04:51:19) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.1.22
> langchain: 0.1.1
> langchain_community: 0.0.13
> langsmith: 0.0.87
> langchain_google_genai: 0.0.5
> langchain_google_vertexai: 0.0.1.post1
> langchain_openai: 0.0.5
> langchainhub: 0.1.13 | Redis add_texts or add_documents doesn't work with filters | https://api.github.com/repos/langchain-ai/langchain/issues/18482/comments | 0 | 2024-03-04T06:43:33Z | 2024-06-10T16:07:58Z | https://github.com/langchain-ai/langchain/issues/18482 | 2,166,061,511 | 18,482 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
redis = Redis.from_existing_index(
    embedding=embedding,
    redis_url=os.environ["REDIS_URL"],
    index_name=index_name,
    schema="redis_schema.yaml",
    startup_nodes=[ClusterNode(host="localhost", port=6379)],
)
```
### Error Message and Stack Trace (if applicable)

### Description
I'm trying to use a Redis Cluster as the vector DB. Connecting to the cluster with redis_url alone works fine, but as soon as I add startup_nodes it breaks.
I checked the source code; the **kwargs should work, since they are forwarded to the correct client.
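For comparison, a hedged sketch of what those kwargs are expected to produce under the hood, built directly with redis-py (host/port are placeholders):

```python
from redis.cluster import ClusterNode, RedisCluster

# roughly what the helper should hand back when startup_nodes is forwarded
client = RedisCluster(startup_nodes=[ClusterNode("localhost", 6379)])
client.ping()
```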
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:28:58 PST 2023; root:xnu-10002.81.5~7/RELEASE_X86_64
> Python Version: 3.11.6 (main, Nov 2 2023, 04:51:19) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.1.22
> langchain: 0.1.1
> langchain_community: 0.0.13
> langsmith: 0.0.87
> langchain_google_genai: 0.0.5
> langchain_google_vertexai: 0.0.1.post1
> langchain_openai: 0.0.5
> langchainhub: 0.1.13
| Redis Client with startup_nodes doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/18481/comments | 0 | 2024-03-04T06:32:47Z | 2024-06-10T16:07:53Z | https://github.com/langchain-ai/langchain/issues/18481 | 2,166,047,341 | 18,481 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code

### Error Message and Stack Trace (if applicable)
_No response_
### Description
Where is the "evaluation" package?
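The screenshot isn't visible here, so this is an assumption about the question — but if it is about the import path, the evaluators on langchain 0.1.x live under `langchain.evaluation`:

```python
from langchain.evaluation import load_evaluator

# "exact_match" needs no LLM, so it makes a cheap smoke test
evaluator = load_evaluator("exact_match")
```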
### System Info
langchain 0.1.10
python 3.8 | where is "evaluation" package | https://api.github.com/repos/langchain-ai/langchain/issues/18480/comments | 1 | 2024-03-04T06:30:07Z | 2024-03-05T00:24:05Z | https://github.com/langchain-ai/langchain/issues/18480 | 2,166,043,988 | 18,480 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import runhouse as rh
from langchain_community.llms import SelfHostedHuggingFaceLLM

gpu = rh.cluster(
    name="gpu-cluster",
    ips=cluster_ip,
    server_connection="paramiko",
    ssh_creds={"ssh_user": ssh_username, "ssh_private_key": ssh_key},
)
llm = SelfHostedHuggingFaceLLM(
    model_id="gpt2", hardware=gpu, model_reqs=["pip:./", "transformers", "torch"]
)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "rhtest.py", line 44, in <module>
llm = SelfHostedHuggingFaceLLM(
File "/opt/miniconda/envs/lctest1/lib/python3.8/site-packages/langchain_community/llms/self_hosted_hugging_face.py", line 190, in __init__
super().__init__(load_fn_kwargs=load_fn_kwargs, **kwargs)
File "/opt/miniconda/envs/lctest1/lib/python3.8/site-packages/langchain_community/llms/self_hosted.py", line 162, in __init__
remote_load_fn = rh.function(fn=self.model_load_fn).to(
TypeError: to() got an unexpected keyword argument 'reqs'
### Description
I'm trying to use SelfHostedHuggingFaceLLM to host an LLM on a remote cluster. It seems that the .to() method of rh.function() no longer accepts the keyword argument 'reqs' (https://www.run.house/docs/_modules/runhouse/resources/functions/function#Function.to), but self_hosted.py still passes it, which is what raises the TypeError above.
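A hedged sketch of what the newer Runhouse API appears to expect — the requirements move into an env object instead of a `reqs` kwarg. Parameter names should be double-checked against the installed runhouse version; `model_load_fn` and `gpu` stand in for the values self_hosted.py already has:

```python
import runhouse as rh

# assumed replacement for `.to(hardware, reqs=[...])` in self_hosted.py
env = rh.env(reqs=["pip:./", "transformers", "torch"])
remote_load_fn = rh.function(fn=model_load_fn).to(gpu, env=env)
```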
### System Info
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.27
langchain-openai==0.0.8
running on a remote cluster in a container with Rocky Linux 8
python version 3.8.18
| Langchain community self_hosted.py outdated Runhouse integration | https://api.github.com/repos/langchain-ai/langchain/issues/18479/comments | 0 | 2024-03-04T06:16:22Z | 2024-03-05T00:01:57Z | https://github.com/langchain-ai/langchain/issues/18479 | 2,166,022,860 | 18,479 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
_NO_CODE_EXAMPLE_
### Error Message and Stack Trace (if applicable)
```
Directory ../text-splitters does not exist
```
### Description
After forking and cloning the repo on my machine, I tried to open it using docker and specifically in VS Code with the option to "Reopen in Container". While building, the final command of [dev.Dockerfile](https://github.com/langchain-ai/langchain/blob/b051bba1a9f3f2c6020d7c8dbcc792d14b3cbe17/libs/langchain/dev.Dockerfile#L50) resulted in the following error:
```
Directory ../text-splitters does not exist
```
After investigating, I found that this [commit](https://github.com/langchain-ai/langchain/commit/5efb5c099f6ced0b752306c4cb1c45370c2a6920) created a new package called `text-splitters`, but it was never added to the [dev.Dockerfile](https://github.com/langchain-ai/langchain/blob/b051bba1a9f3f2c6020d7c8dbcc792d14b3cbe17/libs/langchain/dev.Dockerfile), so the final install step cannot resolve the relative dependency path inside the image.
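Presumably the fix is to copy the new package alongside the others before the install step; a hedged sketch of the missing line, where the exact paths are an assumption and should mirror the neighbouring COPY statements in that Dockerfile:

```Dockerfile
# copy the text-splitters package like the other sibling libs
COPY libs/text-splitters/ ../text-splitters/
```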
### System Info
_NO_SYSTEM_INFO_ | Dockerfile issues when trying to build the repo using .devcontainer | https://api.github.com/repos/langchain-ai/langchain/issues/18465/comments | 0 | 2024-03-03T23:20:14Z | 2024-06-09T16:07:47Z | https://github.com/langchain-ai/langchain/issues/18465 | 2,165,618,028 | 18,465 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.llms import Bedrock
from langchain.prompts import PromptTemplate
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.chains.llm import LLMChain
from langchain.schema import Document
import langchain
langchain.verbose = True
llm = Bedrock(
model_id="mistral.mixtral-8x7b-instruct-v0:1",
model_kwargs={"max_tokens": 250},
)
prompt_template = """Summarize the following text in no more than 3 to 4 sentences:
{text}
CONCISE SUMMARY:"""
prompt = PromptTemplate.from_template(prompt_template)
llm_chain = LLMChain(llm=llm, prompt=prompt)
stuff_chain = StuffDocumentsChain(llm_chain=llm_chain, document_variable_name="text")
doc = Document(
page_content="""Today, we’re excited to announce the availability of two
high-performing Mistral AI models, Mistral 7B and Mixtral 8x7B,
on Amazon Bedrock."""
)
results = stuff_chain.run([doc])
print(results)
```
### Error Message and Stack Trace (if applicable)
/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
Traceback (most recent call last):
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 444, in _prepare_input_and_invoke
response = self.client.invoke_model(**request_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/botocore/client.py", line 553, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/botocore/client.py", line 1009, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: #: required key [prompt] not found#: extraneous key [inputText] is not permitted, please reformat your input and try again.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/test2.py", line 20, in <module>
results = stuff_chain.run([doc])
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 545, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 378, in __call__
return self.invoke(
^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/combine_documents/base.py", line 137, in _call
output, extra_return_dict = self.combine_docs(
^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/combine_documents/stuff.py", line 244, in combine_docs
return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/llm.py", line 293, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 378, in __call__
return self.invoke(
^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/llm.py", line 103, in _call
response = self.generate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/llm.py", line 115, in generate
return self.llm.generate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 568, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 741, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 605, in _generate_helper
raise e
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 592, in _generate_helper
self._generate(
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 1177, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 718, in _call
return self._prepare_input_and_invoke(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 451, in _prepare_input_and_invoke
raise ValueError(f"Error raised by bedrock service: {e}")
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: #: required key [prompt] not found#: extraneous key [inputText] is not permitted, please reformat your input and try again.
### Description
* I'm trying to use langchain to use Mistral through AWS Bedrock. It just came out so understandably it's not yet supported.
I found the following quick patch to bedrock.py worked for me, hopefully this is helpful:
```diff
--- venv/lib/python3.11/site-packages/langchain_community/llms/bedrock.py.orig 2024-03-03 12:44:35
+++ venv/lib/python3.11/site-packages/langchain_community/llms/bedrock.py 2024-03-03 12:44:58
@@ -104,6 +104,9 @@
input_body = dict()
input_body["inputText"] = prompt
input_body["textGenerationConfig"] = {**model_kwargs}
+ elif provider == "mistral":
+ input_body = dict()
+ input_body["prompt"] = prompt
else:
input_body["inputText"] = prompt
@@ -126,6 +129,8 @@
text = response_body.get("generations")[0].get("text")
elif provider == "meta":
text = response_body.get("generation")
+ elif provider == "mistral":
+ text = response_body.get("outputs")[0].get("text")
else:
text = response_body.get("results")[0].get("outputText")
```
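For context, with the patch applied the request/response bodies for the `mistral` provider come out roughly like this — a sketch based on the Bedrock Mistral docs at the time of writing, not the exact wire format:

```python
# what _prepare_input_and_invoke now sends / receives for provider "mistral"
request_body = {"prompt": "<s>[INST] Summarize ... [/INST]", "max_tokens": 250}
response_body = {"outputs": [{"text": " ...", "stop_reason": "stop"}]}
```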
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6000
> Python Version: 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.13
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Support Mistral through Bedrock | https://api.github.com/repos/langchain-ai/langchain/issues/18461/comments | 3 | 2024-03-03T21:22:37Z | 2024-04-02T17:44:27Z | https://github.com/langchain-ai/langchain/issues/18461 | 2,165,564,935 | 18,461 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
;
### Error Message and Stack Trace (if applicable)
Root cause:
AttributeError: 'CallbackManager' object has no attribute 'ignore_chain'
```
Traceback (most recent call last):
File "/var/lang/lib/python3.10/site-packages/langchain_core/callbacks/manager.py", line 255, in handle_event
if ignore_condition_name is None or not getattr(
AttributeError: 'CallbackManager' object has no attribute 'ignore_chain'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lang/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2075, in invoke
input = step.invoke(
File "/var/lang/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4069, in invoke
return self.bound.invoke(
File "/var/lang/lib/python3.10/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/var/lang/lib/python3.10/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/var/lang/lib/python3.10/site-packages/langchain/agents/agent.py", line 1391, in _call
next_step_output = self._take_next_step(
File "/var/lang/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in _take_next_step
[
File "/var/lang/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in <listcomp>
[
File "/var/lang/lib/python3.10/site-packages/langchain/agents/agent.py", line 1182, in _iter_next_step
yield self._perform_agent_action(
File "/var/lang/lib/python3.10/site-packages/langchain/agents/agent.py", line 1204, in _perform_agent_action
observation = tool.run(
File "/var/lang/lib/python3.10/site-packages/langchain_core/tools.py", line 419, in run
raise e
File "/var/lang/lib/python3.10/site-packages/langchain_core/tools.py", line 376, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/var/lang/lib/python3.10/site-packages/langchain_community/tools/vectorstore/tool.py", line 57, in _run
return chain.invoke(
File "/var/lang/lib/python3.10/site-packages/langchain/chains/base.py", line 145, in invoke
run_manager = callback_manager.on_chain_start(
File "/var/lang/lib/python3.10/site-packages/langchain_core/callbacks/manager.py", line 1296, in on_chain_start
handle_event(
File "/var/lang/lib/python3.10/site-packages/langchain_core/callbacks/manager.py", line 285, in handle_event
if handler.raise_error:
AttributeError: 'CallbackManager' object has no attribute 'raise_error'
```
### Description
Patch https://github.com/langchain-ai/langchain/pull/16949 introduced a breaking change.
See my comment there: https://github.com/langchain-ai/langchain/pull/16949#discussion_r1510374350
### System Info
bash-4.2# python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.10.13 (main, Dec 4 2023, 13:30:46) [GCC 7.3.1 20180712 (Red Hat 7.3.1-17)]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.10
> langchain_experimental: 0.0.50
> langchain_openai: 0.0.2.post1
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Regression in VectorStoreQATool and VectorStoreQAWithSourcesTool | https://api.github.com/repos/langchain-ai/langchain/issues/18460/comments | 0 | 2024-03-03T20:01:59Z | 2024-03-05T23:57:00Z | https://github.com/langchain-ai/langchain/issues/18460 | 2,165,532,798 | 18,460 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
https://github.com/stewones/langchain-bun-binary
### Error Message and Stack Trace (if applicable)
```sh
134254 | var init_base5 = __esm(() => {
134255 | init_base4();
134256 | });
134257 |
134258 | // node_modules/@langchain/core/dist/prompts/string.js
134259 | class BaseStringPromptTemplate extends BasePromptTemplate {
^
ReferenceError: Cannot access uninitialized variable.
at /$bunfs/root/chatness:134259:40
```
### Description
I'm trying to generate a binary from an app that relies on langchain.
Here's the minimal reproduction:
https://github.com/stewones/langchain-bun-binary
Steps to recreate:
1. `bun install`
2. `bun build --compile --sourcemap ./index.ts --outfile app`
3. `./app`
### System Info
platform: mac
Python: 3.9.6
langchain: 0.1.25 | Possible circular dependencies breaking from building binaries with Bun | https://api.github.com/repos/langchain-ai/langchain/issues/18458/comments | 1 | 2024-03-03T16:13:39Z | 2024-03-03T16:17:38Z | https://github.com/langchain-ai/langchain/issues/18458 | 2,165,434,009 | 18,458 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import List

from langchain.output_parsers.openai_tools import JsonOutputKeyToolsParser

parser = JsonOutputKeyToolsParser(key_name="Joke")
chain = prompt | model | parser
chain.invoke({"input": "tell me a joke"})
```

This returns a list, which is correct:

```
[{'setup': "Why couldn't the bicycle stand up by itself?",
  'punchline': 'Because it was two tired!'}]
```

But when we add `return_single=True`, we still get a list, which is unexpected. The same is true when we call a tool in the chain (code from the docs).
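For completeness, the `multiply` tool in those docs is defined like this — reconstructed from the docs page, since the definition isn't shown in this report:

```python
from langchain_core.tools import tool

@tool
def multiply(first_int: int, second_int: int) -> int:
    """Multiply two integers together."""
    return first_int * second_int
```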
```python
from operator import itemgetter

# Note: the `.map()` at the end of `multiply` allows us to pass in a list of
# `multiply` arguments instead of a single one.
chain = (
    model_with_tools
    | JsonOutputKeyToolsParser(key_name="multiply", return_single=True)
    | multiply
)
chain.invoke("What's four times 23")
```

This fails at pydantic validation, because the parser pipes a list into `multiply`. We have to pull the first item out of the list ourselves, like below:

```python
from operator import itemgetter

# We need an extra step in the chain to unwrap the single-element list.
def extract_arguments_from_list(input_list):
    if isinstance(input_list, list) and len(input_list) == 1:
        return input_list[0]
    else:
        raise ValueError("Expected a list with a single dictionary")

chain = (
    model_with_tools
    | JsonOutputKeyToolsParser(key_name="multiply", return_single=True)
    | extract_arguments_from_list  # the new step in the chain
    | multiply
)
chain.invoke("What's four times 23")
```

Then it works.
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
Cell In[71], line 10
      3 # Note: the `.map()` at the end of `multiply` allows us to pass in a list of `multiply` arguments instead of a single one.
      4 chain = (
      5     model_with_tools
      6     | JsonOutputKeyToolsParser(key_name="multiply", return_single=True)
      7     # | extract_arguments_from_list  # This is the new step in the chain
      8     | multiply
      9 )
---> 10 chain.invoke("What's four times 23")

File ~/miniconda3/envs/LLMV2/lib/python3.10/site-packages/langchain_core/runnables/base.py:2075, in RunnableSequence.invoke(self, input, config)
   2073 try:
   2074     for i, step in enumerate(self.steps):
-> 2075         input = step.invoke(
   2076             input,
   2077             # mark each step as a child run
   2078             patch_config(
   2079                 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
   2080             ),
   2081         )
   2082     # finish the root run
   2083 except BaseException as e:

File ~/miniconda3/envs/LLMV2/lib/python3.10/site-packages/langchain_core/tools.py:240, in BaseTool.invoke(self, input, config, **kwargs)
    233 def invoke(
    234     self,
    235     input: Union[str, Dict],
    236     config: Optional[RunnableConfig] = None,
    237     **kwargs: Any,
    238 ) -> Any:
    239     config = ensure_config(config)
--> 240     return self.run(
    241         input,
    242         callbacks=config.get("callbacks"),
    243         tags=config.get("tags"),
    244         metadata=config.get("metadata"),
    245         run_name=config.get("run_name"),
    246         **kwargs,
    247     )

File ~/miniconda3/envs/LLMV2/lib/python3.10/site-packages/langchain_core/tools.py:382, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
    380 except ValidationError as e:
    381     if not self.handle_validation_error:
--> 382         raise e
    383     elif isinstance(self.handle_validation_error, bool):
    384         observation = "Tool input validation error"

File ~/miniconda3/envs/LLMV2/lib/python3.10/site-packages/langchain_core/tools.py:373, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
    360 run_manager = callback_manager.on_tool_start(
    361     {"name": self.name, "description": self.description},
    362     tool_input if isinstance(tool_input, str) else str(tool_input),
    (...)
    370     **kwargs,
    371 )
    372 try:
--> 373     parsed_input = self._parse_input(tool_input)
    374     tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
    375     observation = (
    376         self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
    377         if new_arg_supported
    378         else self._run(*tool_args, **tool_kwargs)
    379     )

File ~/miniconda3/envs/LLMV2/lib/python3.10/site-packages/langchain_core/tools.py:280, in BaseTool._parse_input(self, tool_input)
    278 else:
    279     if input_args is not None:
--> 280         result = input_args.parse_obj(tool_input)
    281         return {
    282             k: getattr(result, k)
    283             for k, v in result.dict().items()
    284             if k in tool_input
    285         }
    286 return tool_input

File ~/miniconda3/envs/LLMV2/lib/python3.10/site-packages/pydantic/main.py:526, in pydantic.main.BaseModel.parse_obj()

File ~/miniconda3/envs/LLMV2/lib/python3.10/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()

ValidationError: 2 validation errors for multiplySchema
first_int
  value is not a valid integer (type=type_error.integer)
second_int
  field required (type=value_error.missing)
```
### Description
(Same as the Example Code section above: `return_single=True` on `JsonOutputKeyToolsParser` still returns a list, so a downstream tool fails Pydantic validation until the single element is extracted manually.)
### System Info
Name: langchain
Version: 0.1.10
Name: langchain-core
Version: 0.1.28
Name: langgraph
Version: 0.0.26
Name: langchain-community
Version: 0.0.25
Name: langchain-experimental
Version: 0.0.53
Name: langchain-openai
Version: 0.0.8
Name: langserve
Version: 0.0.46
Name: langchain-cli
Version: 0.0.21
Name: langsmith
Version: 0.1.13
Name: openai
Version: 1.13.3
Name: httpx
Version: 0.25.2
Name: pydantic
Version: 1.10.14
python=3.10.13
| JsonOutputKeyToolsParser return_single=True doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/18455/comments | 3 | 2024-03-03T14:42:07Z | 2024-07-11T16:06:51Z | https://github.com/langchain-ai/langchain/issues/18455 | 2,165,390,145 | 18,455 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import boto3
from langchain.chains import LLMChain
from langchain_community.llms import Bedrock

def get_langchain_llm_model(llm_model_id, params, region):
    '''
    params keys should be in [temperature, max_tokens, top_p, top_k, stop]
    '''
    llm = None
    parameters = {
        key: value
        for key, value in params.items()
        if key in ['temperature', 'max_tokens', 'top_p', 'top_k', 'stop']
    }
    if llm_model_id in bedrock_llms:
        boto3_bedrock = boto3.client(
            service_name="bedrock-runtime",
            region_name=region,
        )
        llm = Bedrock(model_id=llm_model_id, client=boto3_bedrock, streaming=False, model_kwargs=parameters)
    return llm

INVOKE_MODEL_ID = 'mistral.mistral-7b-instruct-v0:2'
llm4 = get_langchain_llm_model(INVOKE_MODEL_ID, params, REGION)
llmchain = LLMChain(llm=llm4, verbose=False, prompt=prompt_templ)
answer = llmchain.run({})
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
As Mistral was only supported on Bedrock as of Mar. 1, I guess LangChain has not yet been adapted for it (presumably the Bedrock adapter has no `mistral` provider branch yet). Looking forward to an upgrade for this.
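For what it's worth, invoking the model directly through boto3 works, which suggests the gap is only in the LangChain adapter. A sketch — the request/response shape here is my reading of the Bedrock Mistral documentation, not something the wrapper currently produces:

```python
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
body = json.dumps({
    "prompt": "<s>[INST] Say hello. [/INST]",
    "max_tokens": 200,
    "temperature": 0.5,
})
response = client.invoke_model(modelId="mistral.mistral-7b-instruct-v0:2", body=body)
print(json.loads(response["body"].read())["outputs"][0]["text"])
```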
### System Info
Linux, python3.10 | Mistral is supported by Bedrock, but its LangChain wrapper throws an exception | https://api.github.com/repos/langchain-ai/langchain/issues/18451/comments | 1 | 2024-03-03T13:25:07Z | 2024-06-14T16:08:39Z | https://github.com/langchain-ai/langchain/issues/18451 | 2,165,358,057 | 18,451
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.chat_models import QianfanChatEndpoint

qianfan_model = QianfanChatEndpoint(model='ERNIE-Bot', qianfan_ak=......,
                                    qianfan_sk=......)
all_output = ""
for chunk in qianfan_model.stream(input_prompt):
    all_output += chunk.content
    print(chunk.content, end="", flush=True)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
  File "E:\JMIT\JMIT3\code\ans.py", line 247, in <module>
    print(retrieve_ans_chat_stream('我想離婚,孩子該怎麼判?', history=[('你好', '你好呀')]))
  File "E:\JMIT\JMIT3\code\ans.py", line 236, in retrieve_ans_chat_stream
    for chunk in qianfan_model.stream(input_prompt):
  File "E:\envs\JMIT\Lib\site-packages\langchain_core\language_models\chat_models.py", line 250, in stream
    raise e
  File "E:\envs\JMIT\Lib\site-packages\langchain_core\language_models\chat_models.py", line 241, in stream
    generation += chunk
  File "E:\envs\JMIT\Lib\site-packages\langchain_core\outputs\chat_generation.py", line 57, in __add__
    generation_info = merge_dicts(
  File "E:\envs\JMIT\Lib\site-packages\langchain_core\utils\_merge.py", line 38, in merge_dicts
    raise TypeError(
TypeError: Additional kwargs key created already exists in left dict and value has unsupported type <class 'int'>.
```
### Description
This bug occurs with langchain_core > 0.1.12; downgrading to langchain_core==0.1.12 avoids it.
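Until this is fixed upstream, a temporary monkeypatch is possible because `chat_generation.py` resolves `merge_dicts` through its module globals at call time. This is an untested sketch, and the last-write-wins fallback is my assumption about how repeated scalar keys such as `sentence_id` should be merged:

```python
import langchain_core.outputs.chat_generation as chat_generation

_original_merge = chat_generation.merge_dicts

def _tolerant_merge(left: dict, right: dict) -> dict:
    try:
        return _original_merge(left, right)
    except TypeError:
        merged = dict(left)
        merged.update(right)  # keep the newest value for scalar keys like sentence_id
        return merged

chat_generation.merge_dicts = _tolerant_merge
```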
### System Info
langchain==0.1.10
langchain-community==0.0.25
langchain-core==0.1.28
langchain-text-splitters==0.0.1
windows
Python 3.11.7 | TypeError when using Qianfan chat streaming: Additional kwargs key sentence_id already exists in left dict and value has unsupported type <class 'int'>. | https://api.github.com/repos/langchain-ai/langchain/issues/18441/comments | 4 | 2024-03-03T08:34:01Z | 2024-07-25T16:08:13Z | https://github.com/langchain-ai/langchain/issues/18441 | 2,165,252,328 | 18,441
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```python
from langchain_community.utilities import SQLDatabase
from langchain.agents import create_sql_agent
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain.agents.agent_types import AgentType
from langchain_groq import ChatGroq
llm = ChatGroq(temperature=0.5, groq_api_key="<api-key>", model_name="mixtral-8x7b-32768")
db = SQLDatabase.from_uri("sqlite:///Chinook.db")
agent_executor_kwargs = {
"handle_parsing_errors": True,
}
agent_executor = create_sql_agent(
llm=llm,
toolkit=SQLDatabaseToolkit(db=db, llm=llm),
# db=db,
verbose=True,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
agent_executor_kwargs=agent_executor_kwargs,
)
agent_executor.invoke(
{"input": "List the total sales per country. Which country's customers spent the most?"}
)
```
### Error Message and Stack Trace (if applicable)
```
> Entering new SQL Agent Executor chain...
I need to query the database to get the total sales per country. After that, I can identify the country whose customers spent the most.
Action: sql\_db\_query
Action Input: SELECT country, SUM(sales) FROM orders GROUP BY country ORDER BY SUM(sales) DESC;
sql\_db\_query is not a valid tool, try one of [sql_db_query, sql_db_schema, sql_db_list_tables, sql_db_query_checker].I apologize for the confusion. You are correct that the action should be "sql\_db\_query". However, it seems that I made a mistake in the input for the action. I will try again.
Action: sql\_db\_query
Action Input: SELECT country, SUM(sales) FROM orders GROUP BY country ORDER BY SUM(sales) DESC;
sql\_db\_query is not a valid tool, try one of [sql_db_query, sql_db_schema, sql_db_list_tables, sql_db_query_checker].I apologize for the confusion. It seems I need to use the `sql_db_query_checker` tool first to ensure that my query is correct before executing it with `sql_db_query`.
Action: sql\_db\_query\_checker
Action Input: SELECT country, SUM(sales) FROM orders GROUP BY country ORDER BY SUM(sales) DESC;
sql\_db\_query\_checker is not a valid tool, try one of [sql_db_query, sql_db_schema, sql_db_list_tables, sql_db_query_checker].I apologize for the confusion. It seems I need to check the table schema first to ensure that my query is correct. I will use `sql_db_schema` tool to get the schema of the `orders` table.
Action: sql\_db\_schema
Action Input: orders
sql\_db\_schema is not a valid tool, try one of [sql_db_query, sql_db_schema, sql_db_list_tables, sql_db_query_checker].I apologize for the confusion. It seems I made a mistake in the input for the action. The correct tool to use is `sql_db_list_tables` to get the list of tables in the database.
Action: sql\_db\_list\_tables
Action Input:
sql\_db\_list\_tables is not a valid tool, try one of [sql_db_query, sql_db_schema, sql_db_list_tables, sql_db_query_checker].I apologize for the confusion. I realize now that I made a mistake in my previous inputs. The correct tool to use is `sql_db_list_tables` to get the list of tables in the database. I will try again.
...
Action Input:
sql\_db\_list\_tables is not a valid tool, try one of [sql_db_query, sql_db_schema, sql_db_list_tables, sql_db_query_checker].Invalid Format: Missing 'Action:' after 'Thought:Invalid Format: Missing 'Action:' after 'Thought:Invalid Format: Missing 'Action:' after 'Thought:Invalid Format: Missing 'Action:' after 'Thought:Invalid Format: Missing 'Action:' after 'Thought:Invalid Format: Missing 'Action:' after 'Thought:Invalid Format: Missing 'Action:' after 'Thought:Invalid Format: Missing 'Action:' after 'Thought:Invalid Format: Missing 'Action:' after 'Thought:
> Finished chain.
```
### Description
When the SQL Database agent is used with LLMs that aren't from OpenAI, it gets stuck in this loop, apparently because the model emits backslash-escaped underscores in tool names (e.g. `sql\_db\_query`), which never match the registered tools.
The example uses Groq with the Mixtral model, but I have also reproduced this with Ollama and llama.cpp across a number of models, including Mixtral, Mistral, Llama, Phi, and CodeLlama.
I do not experience this bug with gpt-3.5-turbo or gpt-4.
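A possible workaround is to normalize the escaped tool name before dispatch with a thin wrapper around the default ReAct output parser. This is only a sketch — `create_sql_agent` does not expose a parser hook, so you would have to assemble the ReAct agent manually to plug it in:

```python
from langchain.agents.mrkl.output_parser import MRKLOutputParser
from langchain_core.agents import AgentAction

class UnescapingOutputParser(MRKLOutputParser):
    """Strip LaTeX-style escaped underscores (sql\\_db\\_query) from tool names."""

    def parse(self, text: str):
        result = super().parse(text)
        if isinstance(result, AgentAction):
            return AgentAction(
                tool=result.tool.replace("\\_", "_"),
                tool_input=result.tool_input,
                log=result.log,
            )
        return result
```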
### System Info
langchain: 0.1.9
python: 3.11.3
OS: Windows | SQL Database Agent - 'Is not a valid tool' error | https://api.github.com/repos/langchain-ai/langchain/issues/18439/comments | 10 | 2024-03-03T06:49:47Z | 2024-07-28T15:56:48Z | https://github.com/langchain-ai/langchain/issues/18439 | 2,165,207,712 | 18,439 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
ChatPromptTemplate(
input_variables=["agent_scratchpad", "input", "tools_string"],
partial_variables={"chat_history": ""},
messages=[
SystemMessagePromptTemplate(
prompt=PromptTemplate(
input_variables=[
"chat_history",
"tools_string",
],
template="""You are a research assistant AI that has been equipped with the following function(s) to help you answer a the question by the human. Your goal is to answer the user's question to the best of your ability, using the function(s) to gather more information if necessary to better answer the question. The result of a function call will be added to the conversation history as an observation.
In this environment you have access to a set of tools you can use to answer the user's question. In order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags.
You will then get back a response in the form <observation></observation>
For example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:
<tool>search</tool><tool_input>weather in SF</tool_input>
<observation>64 degrees</observation>
Here are the only tool(s) available:
<tools>
{tools_string}
</tools>
Note that the function parameters have been listed in the order that they should be passed into the function.
Do not modify or extend the provided functions under any circumstances. For example, calling get_table_schema() with additional parameters would be considered modifying the function which is not allowed. Please use the functions only as defined.
DO NOT use any functions that I have not equipped you with.
Remember, your goal is to answer the user's question to the best of your ability, using only the function(s) provided to gather more information if necessary to better answer the question. Do not modify or extend the provided functions under any circumstances. For example, calling get_current_temp() with additional parameters would be modifying the function, which is not allowed. Please use the functions only as defined. Be careful to only use the <tool> tag when calling a tool. You should use <tool_used> when describing a tool after it's been called. The result of a function call will be added to the conversation history as an observation. If necessary, you can make multiple function calls and use all the functions I have equipped you with. Always return your final answer within <final_answer></final_answer> tags.
This is the history of your conversation so far:
{chat_history}
""".replace(
" ", ""
),
)
),
HumanMessagePromptTemplate(
prompt=PromptTemplate(input_variables=["input"], template="Human: {input} ")
),
AIMessagePromptTemplate(
prompt=PromptTemplate(
input_variables=["agent_scratchpad"],
template="Assistant: <scratchpad> I understand I cannot use functions that have not been provided to me to answer this question. {agent_scratchpad}",
)
),
],
)
```
```python
from langchain_core.tools import BaseTool


def convert_tools_anthropic(tool: BaseTool) -> str:
    """Format tool into the Anthropic function API."""
    if len(tool.args) > 0:
        parameters: str = "".join(
            [
                f"""\n<parameter>\n<name>\n{name}\n</name>\n<type>\n{arg["type"]}\n</type>\n<description>\n{arg["description"]}\n</description>\n</parameter>\n"""
                for name, arg in tool.args.items()
            ]
        )
    else:
        parameters = "\n"
    new_tool = (
        f"""<tool_description>\n<tool_name>\n{tool.name}\n</tool_name>\n<description>\n{tool.description}\n</description>\n<parameters>{parameters}</parameters>"""
    ).strip(" ")
    return new_tool


def convert_steps_anthropic(intermediate_steps):
    log = ""
    for action, observation in intermediate_steps:
        log += (
            f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"
            f"</tool_input><observation>{observation}</observation>"
        )
    return log
```
```python
agent: Runnable = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: convert_steps_anthropic(
            intermediate_steps=x["intermediate_steps"]
        ),
    }
    | prompt.partial(tools_string=tools_string)
    | model.bind(stop=["</tool_input>", "</final_answer>"])
    | XMLAgentOutputParser()
)
```
### Error Message and Stack Trace (if applicable)
```
File ".venv/lib/python3.11/site-packages/langchain/agents/output_parsers/xml.py", in XMLAgentOutputParser.parse
     31  def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
     32      if "</tool>" in text:
---> 33          tool, tool_input = text.split("</tool>")
     34          _tool = tool.split("<tool>")[1]
     35          _tool_input = tool_input.split("<tool_input>")[1]

ValueError: too many values to unpack (expected 2)
```
### Description
I am using Anthropic Claude 2.1 on AWS Bedrock, with the XML agent and `XMLAgentOutputParser` the same way as shown in the [cookbook](https://python.langchain.com/docs/expression_language/cookbook/agent). The built-in parser expects tool calls to be written as:
```
<tool>search</tool><tool_input>weather in SF</tool_input>
```
However, Anthropic recently put out a [detailed guide](https://docs.google.com/spreadsheets/d/1sUrBWO0u1-ZuQ8m5gt3-1N5PLR6r__UsRsB7WeySDQA/) about function calling on Claude 2.1. It seems Claude is trained to call functions as:
```
<function_calls>
<invoke>
<tool_name>$TOOL_NAME</tool_name>
<parameters>
<$PARAMETER_NAME>$PARAMETER_VALUE</$PARAMETER_NAME>
...
</parameters>
</invoke>
</function_calls>
```
As you can see, I used the prompt to tell the model to emit the LangChain-style tags, and it does return usable instructions that the agent can run. However, Claude insists on also using `<tool>` tags to *describe* tools it has already called in its scratchpad. When that happens, the parser tries to execute those descriptions of previous steps as if they were new function calls. Is there a simpler way to override the tags used by `XMLAgentOutputParser`? It should probably be updated to call tools the way Anthropic now documents.
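In the meantime, a small subclass that splits only on the first `</tool>` avoids the unpacking error when the scratchpad mentions `<tool>` again later. This is a sketch that keeps the existing tag convention:

```python
from typing import Union

from langchain.agents.output_parsers import XMLAgentOutputParser
from langchain_core.agents import AgentAction, AgentFinish

class TolerantXMLAgentOutputParser(XMLAgentOutputParser):
    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        if "</tool>" in text:
            # maxsplit=1 tolerates additional </tool> mentions in the scratchpad.
            tool, tool_input = text.split("</tool>", 1)
            _tool = tool.split("<tool>")[1]
            _tool_input = tool_input.split("<tool_input>")[1]
            if "</tool_input>" in _tool_input:
                _tool_input = _tool_input.split("</tool_input>")[0]
            return AgentAction(tool=_tool, tool_input=_tool_input, log=text)
        return super().parse(text)
```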
### System Info
AWS Bedrock
Antrhopic Claude 2.1
Langchain 0.1.9 | XMLParser looks for different tags than what Claude 2.1 prefers. | https://api.github.com/repos/langchain-ai/langchain/issues/18434/comments | 0 | 2024-03-02T22:39:48Z | 2024-06-08T16:14:35Z | https://github.com/langchain-ai/langchain/issues/18434 | 2,165,008,206 | 18,434 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents.agent_types import AgentType
from langchain_experimental.agents import create_pandas_dataframe_agent
from langchain_openai import ChatOpenAI

agent_df_base_company = create_pandas_dataframe_agent(
    ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0125"),
    df,
    verbose=True,
    # agent_type=AgentType.OPENAI_FUNCTIONS,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
agent_df_base_company.run('which company started production in 2023 ?')
```
### Error Message and Stack Trace (if applicable)
> Entering new AgentExecutor chain...
Thought: We need to extract the year of production from the unstructured text in the dataframe.
Action: Use regular expressions to extract the year of production from the unstructured text.
Action Input:
```python
import re
# Sample unstructured text
text = "year of production was 2023"
# Regular expression to extract the year
year_pattern = r'\b\d{4}\b'
# Extracting the year
years = re.findall(year_pattern, text)
years
```
Use regular expressions to extract the year of production from the unstructured text. is not a valid tool, try one of [python_repl_ast].
I need to use the Python shell tool to extract the year of production from the unstructured text in the dataframe.
Action: Use the Python shell tool to extract the year of production from the unstructured text.
Action Input:
```python
import re
# Sample unstructured text
text = "year of production was 2023"
# Regular expression to extract the year
year_pattern = r'\b\d{4}\b'
# Extracting the year
years = re.findall(year_pattern, text)
years
```
Use the Python shell tool to extract the year of production from the unstructured text. is not a valid tool, try one of [python_repl_ast].
I need to extract the year of production from the unstructured text in the dataframe using the Python shell tool.
Action: Use the Python shell tool to extract the year of production from the unstructured text.
Action Input:
```python
import re
# Sample unstructured text
text = "year of production was 2023"
# Regular expression to extract the year
year_pattern = r'\b\d{4}\b'
# Extracting the year
years = re.findall(year_pattern, text)
years
```
Use the Python shell tool to extract the year of production from the unstructured text. is not a valid tool, try one of [python_repl_ast].
I need to extract the year of production from the unstructured text in the dataframe using the Python shell tool.
Action: Use the Python shell tool to extract the year of production from the unstructured text.
Action Input:
```python
import re
# Sample unstructured text
text = "year of production was 2023"
# Regular expression to extract the year
year_pattern = r'\b\d{4}\b'
# Extracting the year
years = re.findall(year_pattern, text)
years
```
Use the Python shell tool to extract the year of production from the unstructured text. is not a valid tool, try one of [python_repl_ast].
I need to extract the year of production from the unstructured text in the dataframe using the Python shell tool.
Final Answer: Use the Python shell tool to extract the year of production from the unstructured text.
> Finished chain.
Use the Python shell tool to extract the year of production from the unstructured text.
### Description
There are two errors here:

1. The first is specific to `create_pandas_dataframe_agent`. I had a dataframe and followed the instructions to perform some simple queries. I tried all three agent types:
   - `AgentType.ZERO_SHOT_REACT_DESCRIPTION`
   - `AgentType.OPENAI_FUNCTIONS`
   - `"openai-tools"`

   While the agent sometimes returns results, at other times it stops at the code level and outputs: "Use the Python shell tool to extract the year of production from the unstructured text." The Python REPL was available as a tool, as I saw: `tools = [PythonAstREPLTool(locals=df_locals)] + list(extra_tools)`

2. On line 246 of `pandas/base.py`, `include_df_in_prompt is not None` should simply be `include_df_in_prompt`, since the parameter is specified as True or False (a sketch of the change is below).
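For point 2, a sketch of the suggested one-line change; the surrounding logic here is illustrative, not the actual source of `pandas/base.py`:

```python
include_df_in_prompt = False  # caller explicitly opted out of injecting df.head()

# Current check: a False value still passes, because False is not None.
if include_df_in_prompt is not None:
    print("df head is injected into the prompt even though the flag is False")

# Suggested check: honor the boolean directly.
if include_df_in_prompt:
    print("df head is injected into the prompt")
```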
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sat Nov 18 15:31:17 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.13
> langchain_experimental: 0.0.53
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | pandas agent didn't invoke tool but returned code instead, and line 246 include_df_in_prompt | https://api.github.com/repos/langchain-ai/langchain/issues/18432/comments | 2 | 2024-03-02T21:51:21Z | 2024-06-06T14:33:17Z | https://github.com/langchain-ai/langchain/issues/18432 | 2,164,991,186 | 18,432 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
When attempting to connect the Google AI model with LangChain using a provided API key, the following error is encountered:
```python
ValidationError Traceback (most recent call last)
Cell In[52], line 2
1 from langchain_google_genai import GoogleGenerativeAI
----> 2 llm = GoogleGenerativeAI(model="gemini-pro", google_api_key=api_key)
3 print(
4 llm.invoke(
5 "What are some of the pros and cons of Python as a programming language?"
6 )
7 )
File /opt/conda/lib/python3.10/site-packages/langchain_core/load/serializable.py:120, in Serializable.__init__(self, **kwargs)
119 def __init__(self, **kwargs: Any) -> None:
--> 120 super().__init__(**kwargs)
121 self._lc_kwargs = kwargs
File /opt/conda/lib/python3.10/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for GoogleGenerativeAI
__root__
'NoneType' object does not support item assignment (type=type_error)
```
This is my code:
```python
from getpass import getpass
api_key = getpass()
from langchain_google_genai import GoogleGenerativeAI
llm = GoogleGenerativeAI(model="gemini-pro", google_api_key=api_key)
print(
llm.invoke(
"What are some of the pros and cons of Python as a programming language?"
)
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to use a Google LLM model through LangChain, but it raises the validation error above. If I use the same API key with the [default Google AI SDK methods](https://ai.google.dev/tutorials/python_quickstart), the model generates content without problems.
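As a sanity check that the key itself works outside LangChain, the raw SDK call below is a sketch that assumes the `google-generativeai` package is installed:

```python
import google.generativeai as genai

genai.configure(api_key=api_key)  # api_key as collected above via getpass
model = genai.GenerativeModel("gemini-pro")
print(model.generate_content("Say hello.").text)
```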
### System Info
I am using Kaggle with a T-100 GPU.
| Validation Error Encountered When Connecting Google AI Model with LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/18425/comments | 1 | 2024-03-02T15:47:41Z | 2024-08-05T16:08:16Z | https://github.com/langchain-ai/langchain/issues/18425 | 2,164,863,841 | 18,425 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import RetrievalQA
from langchain.chains.loading import load_chain
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Create a chat LLM model
llm = ChatOpenAI()

# Construct vector DB in any way
chunks = ...
faiss_database = FAISS.from_documents(chunks, OpenAIEmbeddings())

# Create QA chain
qa = RetrievalQA.from_llm(llm=llm, retriever=faiss_database.as_retriever())

# Save QA chain
qa.save("./langchain_qa.json")

# Load QA chain
chain = load_chain("./langchain_qa.json", retriever=faiss_database.as_retriever())  # Raises the error
```
### Error Message and Stack Trace (if applicable)
Exception:
```
ValueError: Loading openai-chat LLM not supported
```
Full stack trace:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File <command-1363443796728892>, line 2
1 from langchain.chains.loading import load_chain
----> 2 chain = load_chain("./langchain_qa.json", retriever=faiss_database.as_retriever())
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain/chains/loading.py:596, in load_chain(path, **kwargs)
594 return hub_result
595 else:
--> 596 return _load_chain_from_file(path, **kwargs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain/chains/loading.py:623, in _load_chain_from_file(file, **kwargs)
620 config["memory"] = kwargs.pop("memory")
622 # Load the chain from the config now.
--> 623 return load_chain_from_config(config, **kwargs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain/chains/loading.py:586, in load_chain_from_config(config, **kwargs)
583 raise ValueError(f"Loading {config_type} chain not supported")
585 chain_loader = type_to_loader_dict[config_type]
--> 586 return chain_loader(config, **kwargs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain/chains/loading.py:419, in _load_retrieval_qa(config, **kwargs)
417 if "combine_documents_chain" in config:
418 combine_documents_chain_config = config.pop("combine_documents_chain")
--> 419 combine_documents_chain = load_chain_from_config(combine_documents_chain_config)
420 elif "combine_documents_chain_path" in config:
421 combine_documents_chain = load_chain(config.pop("combine_documents_chain_path"))
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain/chains/loading.py:586, in load_chain_from_config(config, **kwargs)
583 raise ValueError(f"Loading {config_type} chain not supported")
585 chain_loader = type_to_loader_dict[config_type]
--> 586 return chain_loader(config, **kwargs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain/chains/loading.py:79, in _load_stuff_documents_chain(config, **kwargs)
77 if "llm_chain" in config:
78 llm_chain_config = config.pop("llm_chain")
---> 79 llm_chain = load_chain_from_config(llm_chain_config)
80 elif "llm_chain_path" in config:
81 llm_chain = load_chain(config.pop("llm_chain_path"))
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain/chains/loading.py:586, in load_chain_from_config(config, **kwargs)
583 raise ValueError(f"Loading {config_type} chain not supported")
585 chain_loader = type_to_loader_dict[config_type]
--> 586 return chain_loader(config, **kwargs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain/chains/loading.py:40, in _load_llm_chain(config, **kwargs)
38 if "llm" in config:
39 llm_config = config.pop("llm")
---> 40 llm = load_llm_from_config(llm_config)
41 elif "llm_path" in config:
42 llm = load_llm(config.pop("llm_path"))
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain_community/llms/loading.py:21, in load_llm_from_config(config)
18 type_to_cls_dict = get_type_to_cls_dict()
20 if config_type not in type_to_cls_dict:
---> 21 raise ValueError(f"Loading {config_type} LLM not supported")
23 llm_cls = type_to_cls_dict[config_type]()
24 return llm_cls(**config)
ValueError: Loading openai-chat LLM not supported
```
### Description
We have logic that deserializes chains (RetrievalQA) saved as JSON using [load_chain()](https://github.com/langchain-ai/langchain/blob/f96dd57501131840b713ed7c2e86cbf1ddc2761f/libs/langchain/langchain/chains/loading.py#L589). This method has worked correctly for chains that include legacy LLM models like [OpenAI](https://python.langchain.com/docs/integrations/llms/openai).
However, this method doesn't work for the new Chat LLM models like [ChatOpenAI](https://python.langchain.com/docs/integrations/chat/openai), [AzureChatOpenAI](https://python.langchain.com/docs/integrations/chat/azure_chat_openai), raising an error like `ValueError: Loading openai-chat LLM not supported`. This is because those models are not defined in the class mapping [here](https://github.com/ibiscp/langchain/blob/master/langchain/llms/__init__.py#L54-L76).
Alternatively, we've found that a new set of serde methods are added ([ref](https://github.com/langchain-ai/langchain/pull/8164)) under `langchain.loads` and `langchain.dumps`. This works loading a single model like `loads(dumps(ChatOpenAI())`. However, [load_chain()](https://github.com/langchain-ai/langchain/blob/f96dd57501131840b713ed7c2e86cbf1ddc2761f/libs/langchain/langchain/chains/loading.py#L589) doesn't use these new serde methods, so still doesn't handle chains that contain the new chat models.
It seems there are a few possible paths forward to solve this issue:
* Can we add `openai-chat` and other chat models to [the class mapping](https://github.com/ibiscp/langchain/blob/master/langchain/llms/__init__.py#L54-L76)? I saw that [a PR proposing this](https://github.com/langchain-ai/langchain/pull/1715) was closed for a deprecation reason, but the new ChatOpenAI class actually serializes with the same model type, `openai-chat`:
```
"llm": {
"model_name": "gpt-3.5-turbo",
"model": "gpt-3.5-turbo",
"stream": false,
"n": 1,
"temperature": 0.1,
"max_tokens": 200,
"_type": "openai-chat"
},
```
* Alternatively, is there any plan to migrate [load_chain()](https://github.com/langchain-ai/langchain/blob/f96dd57501131840b713ed7c2e86cbf1ddc2761f/libs/langchain/langchain/chains/loading.py#L589) to use the new serde methods, to support new Chat LLM models?
* If neither of these is planned in the near future, could you suggest a workaround so we can save/load chains that include the new chat LLM models like ChatOpenAI? (A sketch of one attempt follows below.)
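For context, here is the workaround I tried with the newer serde API. It round-trips `ChatOpenAI` itself, but I am not certain the full `RetrievalQA` chain (in particular the retriever) serializes, so treat this as a sketch:

```python
from langchain.load import dumps, loads
from langchain_openai import ChatOpenAI

# Serialize the chat model to a JSON string, then restore it.
serialized = dumps(ChatOpenAI(model_name="gpt-3.5-turbo"))
restored = loads(serialized, secrets_map={"OPENAI_API_KEY": "sk-..."})
```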
This has been a blocker for us in migrating from the old LLM models to the new chat models. Since the old OpenAI class doesn't work with openai >= 1.0, this effectively prevents us from upgrading the OpenAI SDK at all.
Thank you in advance for your support!
### System Info
$ python -m langchain_core.sys_info
```
System Information
------------------
> OS: Linux
> OS Version: #58~20.04.1-Ubuntu SMP Mon Jan 22 17:15:01 UTC 2024
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.13
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | load_chain() method doesn't work with new chat models (ChatOpenAI, AzureChatOpenAI) | https://api.github.com/repos/langchain-ai/langchain/issues/18420/comments | 0 | 2024-03-02T11:31:03Z | 2024-06-08T16:14:30Z | https://github.com/langchain-ai/langchain/issues/18420 | 2,164,769,403 | 18,420 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
See the Colab link: https://colab.research.google.com/drive/1vNVBmE15pAPJjOukfIB4Gs7ilygbLb5k?usp=sharing
```python
!pip install -q langchain
import langchain
print(f'Langchain Version: {langchain.__version__}')
import sys
print(f'Python Version: {sys.version}')
from langchain_text_splitters import RecursiveCharacterTextSplitter
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-2-9070a936598a> in <cell line: 7>()
      5 import sys
      6 print(f'Python Version: {sys.version}')
----> 7 from langchain_text_splitters import RecursiveCharacterTextSplitter

ModuleNotFoundError: No module named 'langchain_text_splitters'
```
### Description
I was following the Quickstart Guide and noticed that `from langchain_text_splitters import RecursiveCharacterTextSplitter` raises the error above. I made sure all dependencies are installed, and tried uninstalling and reinstalling LangChain.
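Presumably the fix is to install the new standalone package explicitly, since the splitters were moved out of the core `langchain` distribution (a sketch, in the same Colab cell style as above):

```python
!pip install -qU langchain-text-splitters
from langchain_text_splitters import RecursiveCharacterTextSplitter
```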
### System Info
langchain==0.1.9
langchain-community==0.0.25
langchain-core==0.1.28
Langchain Version: 0.1.9
Python Version: 3.10.12 | ModuleNotFoundError: No module named 'langchain_text_splitters' | https://api.github.com/repos/langchain-ai/langchain/issues/18409/comments | 4 | 2024-03-02T01:34:02Z | 2024-03-05T17:34:44Z | https://github.com/langchain-ai/langchain/issues/18409 | 2,164,473,587 | 18,409 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
``` python
import os
from langchain.agents import AgentType, initialize_agent
from langchain_community.agent_toolkits.github.toolkit import GitHubToolkit
from langchain_community.utilities.github import GitHubAPIWrapper
from langchain_openai import ChatOpenAI
# Set your environment variables using os.environ
os.environ["GITHUB_APP_ID"] = ""
os.environ["GITHUB_APP_PRIVATE_KEY"] = "pemfile"
os.environ["GITHUB_REPOSITORY"] = ""
os.environ["GITHUB_BRANCH"] = ""
os.environ["GITHUB_BASE_BRANCH"] = "main"
# This example also requires an OpenAI API key
os.environ["OPENAI_API_KEY"] = "####"
llm = ChatOpenAI(temperature=0, model="gpt-4-1106-preview")
github = GitHubAPIWrapper()
toolkit = GitHubToolkit.from_github_api_wrapper(github)
tools = toolkit.get_tools()
# STRUCTURED_CHAT includes args_schema for each tool, helps tool args parsing errors.
agent = initialize_agent(
tools,
llm,
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
)
print("Available tools:")
for tool in tools:
print("\t" + tool.name)
agent.run(
"You have the software engineering capabilities of a Google Principle engineer. You are tasked with completing issues on a github repository. Please look at the existing issues and complete them."
)
```
### Error Message and Stack Trace (if applicable)
```
> Entering new AgentExecutor chain...
Action:
```json
{
"action": "Get Issues",
"action_input": {}
}
```
Observation: Found 1 issues:
[{'title': 'Create a new html file', 'number': 1, 'opened_by': 'ajhous44'}]
Thought:To proceed with the task of completing the issue found in the repository, I need to first understand the details of the issue. I will fetch the title, body, and comment thread of the specific issue with number 1.
Action:
```json
{
"action": "Get Issue",
"action_input": {
"issue_number": 1
}
}
```Traceback (most recent call last):
Traceback (most recent call last):
File "c:\Users\[username]\Documents\Git\[project_name]\[script_name].py", line 34, in <module>
agent.run(
File "C:\Users\[username]\Documents\Git\[project_name]\.venv\Lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\[username]\Documents\Git\[project_name]\.venv\Lib\site-packages\langchain\chains\base.py", line 545, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\[username]\Documents\Git\[project_name]\.venv\Lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\[username]\Documents\Git\[project_name]\.venv\Lib\site-packages\langchain\chains\base.py", line 378, in __call__
return self.invoke(
^^^^^^^^^^^^
File "C:\Users\[username]\Documents\Git\[project_name]\.venv\Lib\site-packages\langchain\chains\base.py", line 163, in invoke
raise e
File "C:\Users\[username]\Documents\Git\[project_name]\.venv\Lib\site-packages\langchain\chains\base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "C:\Users\[username]\Documents\Git\[project_name]\.venv\Lib\site-packages\langchain\agents\agent.py", line 1391, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\[username]\Documents\Git\[project_name]\.venv\Lib\site-packages\langchain_core\tools.py", line 376, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
TypeError: GitHubAction._run() got an unexpected keyword argument 'issue_number'
```
### Description
When attempting to run a simple agent example integrating LangChain with GitHub, following the documentation provided at https://python.langchain.com/docs/integrations/toolkits/github, I encountered a TypeError during the execution of the agent.run() method. The error message indicates that the GitHubAction._run() method received an unexpected keyword argument issue_number. This issue arises despite closely following the example provided, which suggests there might be a discrepancy between the documented example and the current implementation or expectations of the GitHubAction._run() method. The environment is set up as instructed, with environment variables for GitHub integration and the OpenAI API key configured. The error occurs in the final step, where the agent is expected to process tasks related to GitHub issue management through natural language commands. This suggests a possible mismatch in the expected arguments or an oversight in the documentation.
### System Info
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.28
langchain-openai==0.0.8 | GitHub agent throwing TypeError: GitHubAction._run() Unexpected Keyword Argument 'issue_number' | https://api.github.com/repos/langchain-ai/langchain/issues/18406/comments | 9 | 2024-03-01T23:08:14Z | 2024-07-24T16:07:42Z | https://github.com/langchain-ai/langchain/issues/18406 | 2,164,329,350 | 18,406 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python for section, config in config_sections.items():
chunk_size = 500
splitter = RecursiveJsonSplitter(min_chunk_size=chunk_size, max_chunk_size=chunk_size)
my_chunks = splitter.split_json(json_data=config)```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The RecursiveJsonSplitter class's _json_split method uses a mutable default argument (chunks = [{}]) which leads to unintended behavior. Specifically, chunks from previous invocations of the method are being included in the results of subsequent invocations. This is because mutable default arguments in Python are initialized once at function definition time, not each time the function is called, causing the default value to be shared across all invocations.
### System Info
System Information
------------------
> Python Version: 3.11.6 | packaged by conda-forge | (main, Oct 3 2023, 10:37:07) [Clang 15.0.7 ]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.9
> langchain_community: 0.0.24
> langsmith: 0.1.10
> langchain_openai: 0.0.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | RecursiveJsonSplitter Retains State Across Invocations Due to Mutable Default Argument | https://api.github.com/repos/langchain-ai/langchain/issues/18398/comments | 2 | 2024-03-01T18:21:10Z | 2024-07-21T16:06:00Z | https://github.com/langchain-ai/langchain/issues/18398 | 2,163,952,205 | 18,398 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
class Agent:
def __init__(self, connection_id: str, chat_id: str, user_id: str):
self.chat_id = chat_id
self.user_id = user_id
self.connection_id = connection_id
self._llm = AzureChatOpenAI(
azure_endpoint=OPENAI_AZURE_ENDPOINT,
openai_api_version=OPENAI_API_VERSION,
azure_deployment=OPENAI_AZURE_DEPLOYMENT,
openai_api_key=OPENAI_AZURE_API_KEY,
openai_api_type=OPENAI_API_TYPE,
temperature=0.4,
streaming=True,
)
self._tools = [
CodeInterpreter(),
]
message_history = DynamoDBChatMessageHistory(
table_name=DYNAMODB_HISTORY_TABLE_NAME,
session_id=self.chat_id,
key={
"PK": self.chat_id,
"SK": self.user_id,
},
)
self.memory = ConversationSummaryBufferMemory(
memory_key="chat_memory",
llm=self._llm,
max_token_limit=2000,
return_messages=True,
chat_memory=message_history
)
prompt = ChatPromptTemplate.from_messages(
[
("system", AGENT_SYSTEM_MESSAGE),
MessagesPlaceholder("chat_memory", optional=True),
("human", "{input}"),
MessagesPlaceholder("agent_scratchpad"),
]
)
agent = cast(
BaseMultiActionAgent,
create_openai_tools_agent(self._llm, self._tools, prompt),
)
self.agent_executor = AgentExecutor(
agent=agent, tools=self._tools, verbose=True, memory=self.memory,
).with_config({"run_name": "Agent"})
async def arun(self, query: str, callback: AgentCallbackHandler):
output = None
try:
async for event in self.agent_executor.astream_events(
{"input": query}, version="v1"
):
await callback.handle_event(event)
except Exception as e:
logger.error(traceback.format_exc())
logger.error("astream_events error %s", e)
```
### Error Message and Stack Trace (if applicable)
`
[ERROR] 2024-02-29T14:51:07.594Z functions::handle_message:118 Traceback (most recent call last):
File "<placeholder>\logic\functions.py", line 112, in handle_message
await agent.arun(message, callback)
File "<placeholder>\logic\agent.py", line 107, in arun
async for event in self.agent_executor.astream_events(
File "<placeholder>\langchain_core\runnables\base.py", line 4157, in astream_events
async for item in self.bound.astream_events(
File "<placeholder>\langchain_core\runnables\base.py", line 889, in astream_events
async for log in _astream_log_implementation( # type: ignore[misc]
File "<placeholder>\langchain_core\tracers\log_stream.py", line 612, in _astream_log_implementation
await task
File "<placeholder>\langchain_core\tracers\log_stream.py", line 566, in consume_astream
async for chunk in runnable.astream(input, config, **kwargs):
File "<placeholder>\langchain\agents\agent.py", line 1551, in astream
async for step in iterator:
File "<placeholder>\langchain\agents\agent_iterator.py", line 265, in __aiter__
output = await self._aprocess_next_step_output(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<placeholder>\langchain\agents\agent_iterator.py", line 328, in _aprocess_next_step_output
return await self._areturn(next_step_output, run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<placeholder>\langchain\agents\agent_iterator.py", line 392, in _areturn
return self.make_final_outputs(returned_output, run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<placeholder>\langchain\agents\agent_iterator.py", line 142, in make_final_outputs
self.agent_executor.prep_outputs(
File "<placeholder>\langchain\chains\base.py", line 440, in prep_outputs
self.memory.save_context(inputs, outputs)
File "<placeholder>\langchain\memory\summary_buffer.py", line 59, in save_context super().save_context(inputs, outputs)
File "<placeholder>\langchain\memory\chat_memory.py", line 38, in save_context
input_str, output_str = self._get_input_output(inputs, outputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<placeholder>\langchain\memory\chat_memory.py", line 30, in _get_input_output
raise ValueError(f"One output key expected, got {outputs.keys()}")
ValueError: One output key expected, got dict_keys(['output', 'messages'])
`
### Description
When using agentexecutor with memory, an error occurs in the save_context method of chat_memory.py. The error is raised because the expected number of output keys is one, but the actual number of output keys is more than one.
### System Info
Python version: 3.11
Libraries used:
openai==1.10.0
langchain==0.1.6
langchain-community==0.0.19
langchain-openai==0.0.5
langchainhub==0.1.14
pyshorteners==1.0.1
tiktoken==0.5.2
python-multipart==0.0.6
httpx==0.25.0
azure-core==1.29.4
azure-identity==1.14.1
azure-search-documents==11.4.0b8
requests==2.31.0
requests-aws4auth==1.2.3
| Error in agentexecutor with memory: One output key expected, got dict_keys(['output', 'messages']) | https://api.github.com/repos/langchain-ai/langchain/issues/18388/comments | 8 | 2024-03-01T14:52:04Z | 2024-06-21T16:37:53Z | https://github.com/langchain-ai/langchain/issues/18388 | 2,163,574,770 | 18,388 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [ ] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from dotenv import load_dotenv
import streamlit as st
from PyPDF2 import PdfReader
from langchain.text_splitter import CharacterTextSplitter
#from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.llms.huggingface_pipeline import HuggingFacePipeline
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
#from langchain.llms import huggingface_hub
def main():
load_dotenv()
st.set_page_config(page_title="Ask your PDF")
st.header("Ask your PDF 💬")
# upload file
pdf = st.file_uploader("Upload your PDF", type="pdf")
# extract the text
if pdf is not None:
pdf_reader = PdfReader(pdf)
text = ""
for page in pdf_reader.pages:
text += page.extract_text()
# split into chunks
text_splitter = CharacterTextSplitter(
separator="\n",
chunk_size=1000,
chunk_overlap=200,
length_function=len
)
chunks = text_splitter.split_text(text)
# create embeddings
# embeddings = OpenAIEmbeddings()
embeddings = HuggingFaceEmbeddings(model_name="bert-base-uncased")
knowledge_base = FAISS.from_texts(chunks, embeddings)
# show user input
user_question = st.text_input("Ask a question about your PDF:")
if user_question:
docs = knowledge_base.similarity_search(user_question)
#st.write(docs)
pipeline = HuggingFacePipeline()
chain = load_qa_chain(pipeline, chain_type="stuff")
response = chain.run(input_documents=docs, question=user_question)
st.write(response)
if __name__ == '__main__':
main()
### Error Message and Stack Trace (if applicable)
<img width="765" alt="image" src="https://github.com/langchain-ai/langchain/assets/31651898/3cd49b60-ed55-4e21-ac51-a5cf136111ba">
### Description
The error mainly comming from these lines
"
pipeline = HuggingFacePipeline()
chain = load_qa_chain(pipeline, chain_type="stuff")
response = chain.run(input_documents=docs, question=user_question)**
"
### System Info
langchain== latest version
PyPDF2==3.0.1
python-dotenv==1.0.0
streamlit==1.18.1
faiss-cpu==1.7.4
altair<5
| TypeError: 'NoneType' object is not callable | https://api.github.com/repos/langchain-ai/langchain/issues/18384/comments | 0 | 2024-03-01T12:50:01Z | 2024-06-08T16:14:26Z | https://github.com/langchain-ai/langchain/issues/18384 | 2,163,340,434 | 18,384 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
```
### Error Message and Stack Trace (if applicable)
python openai.py
Traceback (most recent call last):
File "/home/fyx/Codes/langchain/openai.py", line 1, in <module>
from langchain_openai import ChatOpenAI
File "/home/fyx/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain_openai/__init__.py", line 1, in <module>
from langchain_openai.chat_models import (
File "/home/fyx/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain_openai/chat_models/__init__.py", line 1, in <module>
from langchain_openai.chat_models.azure import AzureChatOpenAI
File "/home/fyx/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain_openai/chat_models/azure.py", line 8, in <module>
import openai
File "/home/fyx/Codes/langchain/openai.py", line 1, in <module>
from langchain_openai import ChatOpenAI
ImportError: cannot import name 'ChatOpenAI' from partially initialized module 'langchain_openai' (most likely due to a circular import) (/home/fyx/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain_openai/__init__.py)
### Description
I am trying to learn LangChain, but even this first snippet fails with the error above.
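For what it's worth, the traceback itself points at the likely cause: the script being run is `/home/fyx/Codes/langchain/openai.py`, and `langchain_openai` internally does `import openai`, which Python resolves to that script instead of the OpenAI SDK, re-entering it and triggering the circular import. A small diagnostic sketch to confirm the shadowing (run it from a file that is not named `openai.py`):
```python
# Diagnostic sketch: if this prints your own script's path rather than
# .../site-packages/openai/__init__.py, a local openai.py is shadowing the SDK.
import openai
print(openai.__file__)
```
Renaming the script (and deleting any stale `__pycache__` directories) should make `from langchain_openai import ChatOpenAI` importable again.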
### System Info
System Information
------------------
> OS: Linux
> OS Version: #107~20.04.1-Ubuntu SMP Fri Feb 9 14:20:11 UTC 2024
> Python Version: 3.9.18 (main, Sep 11 2023, 13:41:44)
[GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.9
> langchain_community: 0.0.24
> langsmith: 0.1.10
> langchain_cli: 0.0.21
> langchain_experimental: 0.0.53
> langchain_openai: 0.0.8
> langserve: 0.0.45
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | mportError: cannot import name 'ChatOpenAI' from partially initialized module 'langchain_openai' (most likely due to a circular import) | https://api.github.com/repos/langchain-ai/langchain/issues/18380/comments | 1 | 2024-03-01T11:00:32Z | 2024-06-08T16:14:20Z | https://github.com/langchain-ai/langchain/issues/18380 | 2,163,155,280 | 18,380 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import logging

import chromadb
from chromadb.config import Settings
from langchain_community.document_loaders import PlaywrightURLLoader
from langchain.vectorstores.utils import filter_complex_metadata
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter, TokenTextSplitter
from langchain_openai import ChatOpenAI, OpenAIEmbeddings  # assumed imports, omitted in the original report

# openai_model, db_directory and CHUNK_SIZE_TOKENS are assumed config values
# defined elsewhere in the application (omitted in the original report).

openai_api_key = "{OpenAI API Key}"
embeddings = OpenAIEmbeddings(api_key=openai_api_key)
llm = ChatOpenAI(model_name=openai_model, temperature=0, api_key=openai_api_key)
def connect_chromadb(collection_name: str = "langchain"):
try:
chroma_settings = chromadb.PersistentClient(path=db_directory, settings=Settings(anonymized_telemetry=False))
logging.info("Connecting vectordb...")
#vectordb = Chroma(collection_name = collection_name, embedding_function=embeddings, persist_directory=db_directory, client_settings=chroma_settings)
vectordb = Chroma(collection_name = collection_name, embedding_function=embeddings, client=chroma_settings)
logging.info("Vectordb connected successfully.")
return vectordb
except Exception as e:
logging.error("An error occured connecting to db: " + str(e))
vectordb = connect_chromadb()
urls = ["https://example.com"]
loader = PlaywrightURLLoader(urls=urls, remove_selectors=["header", "footer"], continue_on_failure=True, headless=True)
documents = loader.load()
documents = filter_complex_metadata(documents)
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(separators=["\n\n", "\n", "\t"],chunk_size=CHUNK_SIZE_TOKENS,chunk_overlap=200)
texts = text_splitter.split_documents(documents)
vectordb.add_documents(documents=texts, embedding=embeddings, persist_directory = db_directory)
```
### Error Message and Stack Trace (if applicable)

### Description
I am using the following Dockerfile, and it installs PlaywrightURLLoader successfully, but at runtime the loader gets stuck on `loader.load()`: no error is raised, it simply stops working. When I use PlaywrightURLLoader locally, it works fine (see the sketch after the Dockerfile).
```DockerFile
# syntax=docker/dockerfile:1
FROM tiangolo/uvicorn-gunicorn:python3.11
ENV shm_size 2gb
WORKDIR /code
COPY requirements.txt .
COPY my_api.py .
COPY my_lib.py .
COPY my_templates.py .
COPY config.cfg .
COPY version.txt .
RUN pip install --no-cache-dir --upgrade -r requirements.txt
#COPY . .
RUN python -m nltk.downloader punkt
RUN apt-get update && apt-get install -y \
fonts-liberation \
libasound2 \
libatk-bridge2.0-0 \
libatk1.0-0 \
libatspi2.0-0 \
libcups2 \
libdbus-1-3 \
libdrm2 \
libgbm1 \
libgtk-3-0 \
# libgtk-4-1 \
libnspr4 \
libnss3 \
libwayland-client0 \
libxcomposite1 \
libxdamage1 \
libxfixes3 \
libxkbcommon0 \
libxrandr2 \
xdg-utils \
libu2f-udev \
libvulkan1
# Chrome instalation
RUN curl -LO https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN apt-get install -y ./google-chrome-stable_current_amd64.deb
RUN rm google-chrome-stable_current_amd64.deb
# Check chrome version
RUN echo "Chrome: " && google-chrome --version
RUN apt-get install -y tesseract-ocr
RUN apt-get install poppler-utils -y
RUN playwright install
RUN mkdir -p /home/DE_ChromaDB
RUN mkdir -p /home/uploaded
RUN mkdir -p /home/chathistory
EXPOSE 3100
CMD ["gunicorn", "my_app:app", "--timeout", "600", "--bind", "0.0.0.0:3100", "--workers","4", "--worker-class", "uvicorn.workers.UvicornWorker"]
```
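One possible culprit worth ruling out, sketched below on the assumption that the hang comes from the event loop: `loader.load()` drives Playwright's synchronous API, which cannot run inside the already-running asyncio loop of a uvicorn worker, whereas `aload()` uses the async API. It may also be worth running `playwright install --with-deps` in the image so the browser's system libraries are definitely present.
```python
from langchain_community.document_loaders import PlaywrightURLLoader

# Sketch: use the async loader inside the async FastAPI/uvicorn worker.
loader = PlaywrightURLLoader(
    urls=["https://example.com"],
    remove_selectors=["header", "footer"],
    continue_on_failure=True,
    headless=True,
)

async def fetch_documents():
    # aload() drives Playwright's async API instead of the sync one
    return await loader.aload()
```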
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22621
> Python Version: 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:27:59) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.26
> langchain: 0.1.9
> langchain_community: 0.0.24
> langsmith: 0.1.8
> langchain_experimental: 0.0.50
> langchain_openai: 0.0.7
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | PlaywrightUrlLoader Works Fine in Local But Not With Docker | https://api.github.com/repos/langchain-ai/langchain/issues/18379/comments | 0 | 2024-03-01T10:40:19Z | 2024-06-08T16:14:16Z | https://github.com/langchain-ai/langchain/issues/18379 | 2,163,119,164 | 18,379 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain_google_genai import GoogleGenerativeAI  # assumed import, omitted in the original report

llm = GoogleGenerativeAI(model="models/text-bison-001", google_api_key=api_key1, temperature=0, verbose=True)
retriever = vectordb2.as_retriever()
# print(retriever)

# assumed memory setup (its definition was omitted in the original report)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    memory=memory,
    chain_type="stuff",
    response_if_no_docs_found="None",
)

question = "give me the number of projects did siva worked?"
context = {"a": "give bye at last"}
result = qa({"question": question})
print(result['answer'])
print("\n")
result['chat_history']
```
### Error Message and Stack Trace (if applicable)
ValueError                                Traceback (most recent call last)
Cell In[61], line 13
     11 question = "give me the number of projects did siva worked?"
     12 context={"a":"give bye at last"}
---> 13 result = qa.invoke({"question": question, "chat_history": []})
     14 print(result['answer'])
     15 print("\n")

File c:\Users\siva.kotagiri\OneDrive - Nimble Accounting\Desktop\Kore.ai\search-assist\for images\env\lib\site-packages\langchain\chains\base.py:163, in Chain.invoke(self, input, config, **kwargs)
    161 except BaseException as e:
    162     run_manager.on_chain_error(e)
--> 163     raise e
    164 run_manager.on_chain_end(outputs)

File ...\env\lib\site-packages\langchain\chains\base.py:158, in Chain.invoke(self, input, config, **kwargs)
    151 self._validate_inputs(inputs)
    152 outputs = (
    153     self._call(inputs, run_manager=run_manager)
    154     if new_arg_supported
    155     else self._call(inputs)
    156 )
--> 158 final_outputs: Dict[str, Any] = self.prep_outputs(
    159     inputs, outputs, return_only_outputs
...
File ...\env\lib\site-packages\langchain\memory\utils.py:19, in get_prompt_input_key(inputs, memory_variables)
     18 if len(prompt_input_keys) != 1:
---> 19     raise ValueError(f"One input key expected got {prompt_input_keys}")
     20 return prompt_input_keys[0]

ValueError: One input key expected got ['chat_history', 'question']
### Description
The above code works fine in Colab but not in VS Code; a possible fix is sketched below.
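A sketch of a likely fix, assuming the same chain setup as above: when the chain is invoked with both `question` and `chat_history` while a memory is attached, the memory cannot tell which key is the input, hence `One input key expected got ['chat_history', 'question']`. Pinning `input_key` on the memory (or passing only `question`) should resolve it:
```python
from langchain.memory import ConversationBufferMemory

# Sketch, not a confirmed fix: tell the memory which key is the input so it
# no longer sees both "question" and "chat_history" as candidates.
memory = ConversationBufferMemory(
    memory_key="chat_history",
    input_key="question",
    return_messages=True,
)

result = qa.invoke({"question": question})
print(result["answer"])
```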
### System Info
windows | Not executing properly | https://api.github.com/repos/langchain-ai/langchain/issues/18376/comments | 0 | 2024-03-01T10:26:35Z | 2024-06-08T16:14:10Z | https://github.com/langchain-ai/langchain/issues/18376 | 2,163,094,790 | 18,376 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
**Error**
```python
values["spark_app_url"] = get_from_dict_or_env(
values,
"spark_app_url",
"IFLYTEK_SPARK_APP_URL",
"wss://spark-api.xf-yun.com/v3.1/chat",
)
values["spark_llm_domain"] = get_from_dict_or_env(
values,
"spark_llm_domain",
"IFLYTEK_SPARK_LLM_DOMAIN",
"generalv3",
)
# put extra params into model_kwargs
values["model_kwargs"]["temperature"] = values["temperature"] or cls.temperature
values["model_kwargs"]["top_k"] = values["top_k"] or cls.top_k
values["client"] = _SparkLLMClient(
app_id=values["spark_app_id"],
api_key=values["spark_api_key"],
api_secret=values["spark_api_secret"],
api_url=values["spark_api_url"],
spark_domain=values["spark_llm_domain"],
model_kwargs=values["model_kwargs"],
)
return values
```
**Suggested Code**
```python
values["spark_api_url"] = get_from_dict_or_env(
values,
"spark_api_url",
"IFLYTEK_SPARK_API_URL",
"wss://spark-api.xf-yun.com/v3.1/chat",
)
```
File Path: https://github.com/langchain-ai/langchain/blob/v0.1.9/libs/community/langchain_community/chat_models/sparkllm.py
**Lines 193 to 198**
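For reference, a sketch of how the V3.5 endpoint could then be selected through environment variables once the lookup key matches the suggested fix; the URL and the `generalv3.5` domain value below are assumptions based on iFlytek's published naming and should be checked against their docs.
```python
import os

# Sketch: with the fixed `spark_api_url` lookup, V3.5 can be selected via
# env vars. Both values below are assumptions, not confirmed by this repo.
os.environ["IFLYTEK_SPARK_API_URL"] = "wss://spark-api.xf-yun.com/v3.5/chat"
os.environ["IFLYTEK_SPARK_LLM_DOMAIN"] = "generalv3.5"
```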
### Error Message and Stack Trace (if applicable)
Exception Error
### Description
Currently, when using the ChatModel provided by `sparkllm`, the default version is V3.1. However, the official version V3.5 is already available. When trying to use the V3.5 version and specifying the new version information through environment variables, it was found that the `IFLYTEK_SPARK_APP_URL` setting was not working. After tracing the code, it was discovered that the error was due to a misspelled key for parameter retrieval in the code. Errors occur in all Release versions. I hope this can be fixed as soon as possible. The specific error code is as follows:
``` python
values["spark_app_url"] = get_from_dict_or_env(
values,
"spark_app_url",
"IFLYTEK_SPARK_APP_URL",
"wss://spark-api.xf-yun.com/v3.1/chat",
)
values["spark_llm_domain"] = get_from_dict_or_env(
values,
"spark_llm_domain",
"IFLYTEK_SPARK_LLM_DOMAIN",
"generalv3",
)
# put extra params into model_kwargs
values["model_kwargs"]["temperature"] = values["temperature"] or cls.temperature
values["model_kwargs"]["top_k"] = values["top_k"] or cls.top_k
values["client"] = _SparkLLMClient(
app_id=values["spark_app_id"],
api_key=values["spark_api_key"],
api_secret=values["spark_api_secret"],
api_url=values["spark_api_url"],
spark_domain=values["spark_llm_domain"],
model_kwargs=values["model_kwargs"],
)
return values
```
File Path: https://github.com/langchain-ai/langchain/blob/v0.1.9/libs/community/langchain_community/chat_models/sparkllm.py
The `spark_api_url` key does not exist in `values`, because the value was stored under `spark_app_url`.
### System Info
langchain: 0.1.8
Python: 3.12
OS: Windows 11
langchain_core: 0.1.24
langchain: 0.1.8
langchain_community: 0.0.21
langsmith: 0.1.3 | LangChain Community ChatModel for sparkllm Bug | https://api.github.com/repos/langchain-ai/langchain/issues/18370/comments | 0 | 2024-03-01T07:13:07Z | 2024-03-01T18:49:31Z | https://github.com/langchain-ai/langchain/issues/18370 | 2,162,760,580 | 18,370 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
vector_search = MongoDBAtlasVectorSearch.from_connection_string(
    uri,
    DB_NAME + "." + COLLECTION_NAME,
    embeddings,
    index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,
    relevance_score_fn='cosine',
)

qa_retriever = vector_search.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={'score_threshold': 0.5},
)
```
### Error Message and Stack Trace (if applicable)
UserWarning: No relevant docs were retrieved using the relevance score threshold 0.5
warnings.warn()
### Description
I'm trying to use MongoDBAtlasVectorSearch with a `similarity_score_threshold` retriever, but it always returns an empty list. Documents are only returned when the threshold is set to 0.0. A diagnostic sketch follows below.
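A small diagnostic sketch that may help narrow this down (same store as above): print the raw scores the store returns, since `similarity_score_threshold` compares the threshold against the store's relevance scores, and a score scale that never reaches 0.5 would explain the empty results. The query string is just a placeholder.
```python
# Diagnostic sketch: inspect the raw (document, score) pairs directly.
docs_and_scores = vector_search.similarity_search_with_score("sample query", k=4)
for doc, score in docs_and_scores:
    print(score, doc.page_content[:80])
```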
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.22
> langchain: 0.1.4
> langchain_community: 0.0.19
> langsmith: 0.0.87
> langchain_cli: 0.0.21
> langchain_openai: 0.0.5
> langserve: 0.0.41 | similarity_score_threshold isn't working for MongoDB Atlas Vector Search | https://api.github.com/repos/langchain-ai/langchain/issues/18365/comments | 3 | 2024-03-01T04:37:23Z | 2024-08-05T16:08:11Z | https://github.com/langchain-ai/langchain/issues/18365 | 2,162,589,243 | 18,365 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import os
import math
import faiss
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.vectorstores import FAISS
from langchain.memory import VectorStoreRetrieverMemory
from langchain.docstore import InMemoryDocstore
from langchain.chains import LLMChain
from langchain.agents import AgentExecutor, Tool, ZeroShotAgent, initialize_agent, agent_types
from langchain.callbacks import get_openai_callback
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, MessagesPlaceholder, HumanMessagePromptTemplate
# use OpenAI embeddings
embedding_size = 1536  # dimensionality of OpenAIEmbeddings
index = faiss.IndexFlatL2(embedding_size)
embedding_fn = OpenAIEmbeddings()
# create the llm
llm = ChatOpenAI(model="gpt-4-0125-preview")
from langchain.memory import ConversationBufferMemory, CombinedMemory, ConversationSummaryMemory
conv_memory = ConversationBufferMemory(
memory_key="chat_history",
input_key="input"
)
# create the agent
from tools.MyTimer import MyTimer
from tools.QueryTime import QueryTime
from tools.Weather import Weather
from tools.Calculator import Calculator
from tools.CheckSensor import CheckSensor
from tools.Switch import Switch
from tools.Knowledge import Knowledge
from tools.Say import Say
from tools.QueryTimerDB import QueryTimerDB
from tools.DeleteTimer import DeleteTimer
from tools.GetSwitchLog import GetSwitchLog
from tools.getOnRunLinkage import getOnRunLinkage
from tools.TimeCalc import TimeCalc
from tools.SetChatStatus import SetChatStatus
my_timer = MyTimer()
query_time_tool = QueryTime()
weather_tool = Weather()
calculator_tool = Calculator()
check_sensor_tool = CheckSensor()
switch_tool = Switch()
knowledge_tool = Knowledge()
say_tool = Say()
query_timer_db_tool = QueryTimerDB()
delete_timer_tool = DeleteTimer()
get_switch_log = GetSwitchLog()
get_on_run_linkage = getOnRunLinkage()
time_calc_tool = TimeCalc()
set_chat_status_tool = SetChatStatus()
tools = [
Tool(
name=my_timer.name,
func=my_timer.run,
description=my_timer.description
),
# Tool(  # time-query tool disabled
# name=query_time_tool.name,
# func=query_time_tool.run,
# description=query_time_tool.description
# ),
Tool(
name=weather_tool.name,
func=weather_tool.run,
description=weather_tool.description
),
Tool(
name=calculator_tool.name,
func=calculator_tool.run,
description=calculator_tool.description
),
Tool(
name=check_sensor_tool.name,
func=check_sensor_tool.run,
description=check_sensor_tool.description
),
Tool(
name=switch_tool.name,
func=switch_tool.run,
description=switch_tool.description
),
Tool(
name=knowledge_tool.name,
func=knowledge_tool.run,
description=knowledge_tool.description
),
Tool(
name=say_tool.name,
func=say_tool.run,
description=say_tool.description
),
Tool(
name=query_timer_db_tool.name,
func=query_timer_db_tool.run,
description=query_timer_db_tool.description
),
Tool(
name=delete_timer_tool.name,
func=delete_timer_tool.run,
description=delete_timer_tool.description
),
Tool(
name=get_switch_log.name,
func=get_switch_log.run,
description=get_switch_log.description
),
Tool(
name=get_on_run_linkage.name,
func=get_on_run_linkage.run,
description=get_on_run_linkage.description
),
Tool(
name=time_calc_tool.name,
func=time_calc_tool.run,
description=time_calc_tool.description
),
Tool(
name=set_chat_status_tool.name,
func=set_chat_status_tool.run,
description=set_chat_status_tool.description
),
]
agent = initialize_agent(agent_types=agent_types.AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
tools=tools, llm=llm, verbose=True, handle_parsing_errors=True, max_history=5, memory=conv_memory)
agent.run("my name is guozebin.")
agent.run("your?")
```
### Error Message and Stack Trace (if applicable)
agent.run("my name is guozebin.")
'Hello, Guozebin! How can I assist you today?'
agent.run("your?")
"Your question seems to be incomplete. Could you please provide more details or clarify what you're asking?"
### Description
I tested the Chat ReAct Agent with langchain v0.0.336 and v0.0.339rc, but I couldn't get a continuous conversation: as shown above, the agent forgets my name on the very next turn. My sample code is above; please advise. Two possible culprits are sketched below.
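Two things may be worth checking here; the sketch below is a guess, not a confirmed fix. First, `initialize_agent` takes the agent type via the `agent` keyword, so the misspelled `agent_types=` keyword likely falls into `**kwargs` and the default zero-shot agent (which ignores chat history) gets built instead. Second, the chat-style agent expects the history as message objects, so `return_messages=True` is usually required on the memory.
```python
from langchain.memory import ConversationBufferMemory

# Sketch: chat-style agents need the history as message objects.
conv_memory = ConversationBufferMemory(
    memory_key="chat_history",
    input_key="input",
    return_messages=True,
)

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=agent_types.AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,  # kwarg is `agent`, not `agent_types`
    verbose=True,
    handle_parsing_errors=True,
    memory=conv_memory,
)
```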
### System Info
langchain 0.0.339rc
windows 10
python 3.10 | Troubleshooting Continuous Conversation Issues in Langchain Chat ReAct Agent Versions v0.0.336 and v0.0.339rc: A Request for Guidance | https://api.github.com/repos/langchain-ai/langchain/issues/18364/comments | 0 | 2024-03-01T03:31:40Z | 2024-06-08T16:14:05Z | https://github.com/langchain-ai/langchain/issues/18364 | 2,162,527,514 | 18,364 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I would expect to be able to pass a template to QuerySQLCheckerTool when constructing it for my own custom toolkit, as such:
```python
query_sql_checker_tool = QuerySQLCheckerTool(
db=sqldb,
llm=model,
description=new_description,
template=new_prompt_template
)
```
however, this is not the case.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I would like to rewrite the template this tool uses for its Query checker for my certain use case. I would like to do this by composing it into the existing tool from the library, and not have to create my own custom tool based on the given tool for this purpose.
It appears template is already able to be passed as an argument, but it is not being used. This is especially confusing as there is no other use of this argument but it is hardcoded to the same value no matter what we pass for this argument when constructing this tool from the library for our own custom toolkits.
### System Info
This is a code issue as you can see:
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/tools/sql_database/tool.py#L99
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/tools/sql_database/tool.py#L116
| SQL query checker tool doesn't respect prompt template passed when creating default LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/18351/comments | 1 | 2024-03-01T00:59:30Z | 2024-06-08T16:14:00Z | https://github.com/langchain-ai/langchain/issues/18351 | 2,162,360,131 | 18,351 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
@classmethod
async def run_message(cls, text, thread_id):
"""
Put a message in our conversation thread
:param thread_id: an int containing our thread identifier
:param text: message
"""
# Error in 'await'
msg = await OpenAIAssistantRunnable(assistant_id=settings.ASSISTANT_ID, as_agent=True).ainvoke({
"content": text,
"thread_id": thread_id
})
return msg.return_values["output"]
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\django\core\handlers\exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\django\core\handlers\base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\asgiref\sync.py", line 277, in __call__
return call_result.result()
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Users\RSSpe\AppData\Local\Programs\Python\Python311\Lib\site-packages\asgiref\sync.py", line 353, in main_wrap
result = await self.awaitable(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\django\views\decorators\csrf.py", line 60, in _view_wrapper
return await view_func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\adrf\views.py", line 77, in async_dispatch
response = self.handle_exception(exc)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\rest_framework\views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\rest_framework\views.py", line 480, in raise_uncaught_exception
raise exc
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\adrf\views.py", line 70, in async_dispatch
response = await handler(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\adrf\decorators.py", line 50, in handler
return await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\Desktop\chatbot-django-capacitacion-ventas\apps\chatbot\views\openai_views.py", line 86, in post_message
text = await openairep.post_user_message(text, thread_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\Desktop\chatbot-django-capacitacion-ventas\apps\chatbot\repositories\openai_repository.py", line 18, in post_user_message
response = await OpenAISingleton.add_message(text, thread)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\Desktop\chatbot-django-capacitacion-ventas\apps\chatbot\desing_patterns\creational_patterns\singleton\openai_singleton.py", line 134, in add_message
msg = await OpenAIAssistantRunnable(assistant_id=settings.ASSISTANT_ID, as_agent=True).ainvoke({
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\openai_assistant\base.py", line 418, in ainvoke
raise e
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\openai_assistant\base.py", line 408, in ainvoke
run = await self._create_run(input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: object Run can't be used in 'await' expression
```
### Description
A few days ago I was able to `await` OpenAIAssistantRunnable for async use, but now I can't. If I call it without an `await` expression, I get a coroutine back instead. A possible stopgap is sketched below.
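A possible stopgap, sketched under the assumption that `ainvoke` is broken upstream (it awaits the synchronous `Run` returned by `_create_run`): push the synchronous `invoke` onto a worker thread so the event loop stays unblocked.
```python
import asyncio

# Workaround sketch: run the sync invoke() off the event loop until the
# ainvoke() bug is resolved upstream.
msg = await asyncio.to_thread(
    OpenAIAssistantRunnable(assistant_id=settings.ASSISTANT_ID, as_agent=True).invoke,
    {"content": text, "thread_id": thread_id},
)
```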
### System Info
```
langchain==0.1.3
langchain-community==0.0.15
langchain-core==0.1.15
langchain-openai==0.0.3
langcodes==3.3.0
langsmith==0.0.83
openai==1.9.0
Django==5.0.2
djangorestframework==3.14.0
```
Python version: 3.11.6 | TypeError: object Run can't be used in 'await' expression | https://api.github.com/repos/langchain-ai/langchain/issues/18337/comments | 6 | 2024-02-29T18:44:53Z | 2024-08-09T16:07:12Z | https://github.com/langchain-ai/langchain/issues/18337 | 2,161,882,320 | 18,337 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.document_loaders import (
    UnstructuredExcelLoader,
    PyPDFLoader,
    UnstructuredWordDocumentLoader,
    UnstructuredPowerPointLoader,
    UnstructuredFileLoader,
    AmazonTextractPDFLoader
)

file = "General Drug Interactions (EU,NO) 1.docx"
loader = UnstructuredWordDocumentLoader(file)
pages = loader.load()
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_13540\3705926774.py in <module>
     10 loader = UnstructuredWordDocumentLoader(file)
---> 11 pages = loader.load()

~\Anaconda3\lib\site-packages\langchain_community\document_loaders\unstructured.py in load(self)
     86     """Load file."""
---> 87     elements = self._get_elements()

~\Anaconda3\lib\site-packages\langchain_community\document_loaders\word_document.py in _get_elements(self)
    122 from unstructured.partition.docx import partition_docx
--> 124 return partition_docx(filename=self.file_path, **self.unstructured_kwargs)

~\Anaconda3\lib\site-packages\unstructured\partition\docx.py in partition_docx(...)
--> 231 return list(elements)

~\Anaconda3\lib\site-packages\unstructured\partition\lang.py in apply_lang_metadata(elements, languages, detect_language_per_element)
--> 314 elements = list(elements)

~\Anaconda3\lib\site-packages\unstructured\partition\docx.py in _iter_document_elements(self)
--> 310 if not self._document.sections:

~\Anaconda3\lib\site-packages\unstructured\partition\docx.py in _document(self)
--> 336 return docx.Document(filename)

~\Anaconda3\lib\site-packages\docx\api.py in Document(docx)
---> 23 document_part = Package.open(docx).main_document_part

~\Anaconda3\lib\site-packages\docx\opc\package.py in open(cls, pkg_file)
--> 116 pkg_reader = PackageReader.from_file(pkg_file)

~\Anaconda3\lib\site-packages\docx\opc\pkgreader.py in from_file(pkg_file)
---> 25 sparts = PackageReader._load_serialized_parts(
     26     phys_reader, pkg_srels, content_types
     27 )

~\Anaconda3\lib\site-packages\docx\opc\pkgreader.py in _walk_phys_parts(phys_reader, srels, visited_partnames)
---> 81 blob = phys_reader.blob_for(partname)

~\Anaconda3\lib\site-packages\docx\opc\phys_pkg.py in blob_for(self, pack_uri)
---> 83 return self._zipf.read(pack_uri.membername)

~\Anaconda3\lib\zipfile.py in getinfo(self, name)
   1437 if info is None:
-> 1438     raise KeyError(
   1439         'There is no item named %r in the archive' % name)

KeyError: "There is no item named 'word/#_top' in the archive"
### Description
Hello Team,
I am trying to use the latest LangChain version to load a docx document; the error I am getting is attached above. To be clear, the file I am using is perfectly fine and not corrupted. A diagnostic sketch follows below.
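A hedged guess at the root cause: the missing part name `word/#_top` looks like a hyperlink relationship whose target is the `#_top` bookmark anchor but which is marked as internal, so `python-docx` tries to read it as a package member. A stdlib-only sketch to check whether the file contains such a relationship (filename as above):
```python
import zipfile

# Diagnostic sketch: look for the '#_top' anchor among the document's
# relationship targets; python-docx chokes if it is marked as internal.
with zipfile.ZipFile("General Drug Interactions (EU,NO) 1.docx") as z:
    rels = z.read("word/_rels/document.xml.rels").decode("utf-8")
    for chunk in rels.split("/>"):
        if "#_top" in chunk:
            print(chunk.strip())
```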
### System Info
Langchain latest=0.1.8

| Error while loading docx file using UnstructuredWordDocumentLoader | https://api.github.com/repos/langchain-ai/langchain/issues/18329/comments | 0 | 2024-02-29T16:34:14Z | 2024-06-08T16:13:55Z | https://github.com/langchain-ai/langchain/issues/18329 | 2,161,656,887 | 18,329 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import torch
from langchain.sql_database import SQLDatabase
from langchain.prompts import PromptTemplate # ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate,
from langchain.chains import LLMChain, create_sql_query_chain # SequentialChain,
from langchain import HuggingFacePipeline
from langchain_experimental.sql import SQLDatabaseChain
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, pipeline
from urllib.parse import quote_plus
db = SQLDatabase.from_uri(new_con, include_tables=[...])
model = AutoModelForCausalLM.from_pretrained(
r'model_path',
torch_dtype=torch.float32,
trust_remote_code=True,
device_map="auto",
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained(r'tokenizer_path')
generation_config = GenerationConfig.from_pretrained(r'generationconfig_path')
pipeline = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
return_full_text=True,
generation_config=generation_config
)
llm = HuggingFacePipeline(pipeline=pipeline)
system_message = """
prompt instructions
question: {query}
"""
prompt_template = PromptTemplate(
template=system_message, input_variables=["query"]
)
llm_chain = LLMChain(
llm = llm,
prompt = prompt_template
)
# chain using prompt and llm_chain
db_chain_1 = SQLDatabaseChain.from_llm(llm_chain, db, verbose=True, prompt = prompt_template, use_query_checker=False, input_key = 'query')
# using only the llm and no prompt
db_chain_2 = SQLDatabaseChain.from_llm(llm, db, verbose=True)
# another chain test
db_chain_3 = create_sql_query_chain(llm, db)
question = "give me the top ...."
```
Now, I've tried these chains and got different errors:
```
db_chain_1.invoke(question) # ValueError: Missing some input keys: {'query'}
db_chain_2.invoke(question) # TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]
db_chain_3.invoke(question) # AssertionError: The input to RunnablePassthrough.assign() must be a dict.
```
Note that with ```langchain==0.0.350``` I was able to run ```db_chain_2.run(question)``` (no prompt) and using the prompt running with ```db_chain_2.run(system_message.format(question=question))```
Package versions:
```
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.27
langchain-experimental==0.0.52
langserve==0.0.43
langsmith==0.1.10
```
The full error message below refers to the following chain invocation: ```db_chain_2.invoke(question)``` (the `TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]` case).
Nonetheless, I would like to run the chain with the prompt, i.e. the variant that raises ```ValueError: Missing some input keys: {'query'}```; a possible fix is sketched below.
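A hedged sketch of a possible fix for the `Missing some input keys: {'query'}` variant: pass the LLM itself (not an `LLMChain`) to `SQLDatabaseChain.from_llm`, and give the custom prompt the variables the chain fills in itself. The template text below is illustrative only.
```python
from langchain.prompts import PromptTemplate
from langchain_experimental.sql import SQLDatabaseChain

# Sketch: SQLDatabaseChain supplies {input}, {table_info}, {dialect} and
# {top_k} to the prompt itself, so a custom prompt must declare them.
sql_prompt = PromptTemplate(
    input_variables=["input", "table_info", "dialect", "top_k"],
    template=(
        "Given an input question, write a syntactically correct {dialect} "
        "query. Return at most {top_k} rows.\n"
        "Only use these tables:\n{table_info}\n"
        "Question: {input}\nSQLQuery:"
    ),
)

db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=sql_prompt, verbose=True)
db_chain.invoke({"query": question})  # "query" is the chain's default input key
```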
### Error Message and Stack Trace (if applicable)
> Entering new SQLDatabaseChain chain...
give me the top....
SQLQuery:Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "mypath\.venv\lib\site-packages\langchain\chains\base.py", line 163, in invoke
raise e
File "mypath\.venv\lib\site-packages\langchain\chains\base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "mypath\.venv\lib\site-packages\langchain_experimental\sql\base.py", line 201, in _call
raise exc
File "mypath\.venv\lib\site-packages\langchain_experimental\sql\base.py", line 132, in _call
sql_cmd = self.llm_chain.predict(
File "mypath\.venv\lib\site-packages\langchain\chains\llm.py", line 293, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
File "mypath\.venv\lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "mypath\.venv\lib\site-packages\langchain\chains\base.py", line 378, in __call__
return self.invoke(
File "mypath\.venv\lib\site-packages\langchain\chains\base.py", line 163, in invoke
raise e
File "mypath\.venv\lib\site-packages\langchain\chains\base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "mypath\.venv\lib\site-packages\langchain\chains\llm.py", line 103, in _call
response = self.generate([inputs], run_manager=run_manager)
File "mypath\.venv\lib\site-packages\langchain\chains\llm.py", line 115, in generate
return self.llm.generate_prompt(
File "mypath\.venv\lib\site-packages\langchain_core\language_models\llms.py", line 568, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "mypath\.venv\lib\site-packages\langchain_core\language_models\llms.py", line 741, in generate
output = self._generate_helper(
File "mypath\.venv\lib\site-packages\langchain_core\language_models\llms.py", line 605, in _generate_helper
raise e
File "mypath\.venv\lib\site-packages\langchain_core\language_models\llms.py", line 592, in _generate_helper
self._generate(
File "mypath\.venv\lib\site-packages\langchain_community\llms\huggingface_pipeline.py", line 202, in _generate
responses = self.pipeline(
File "mypath\.venv\lib\site-packages\transformers\pipelines\text_generation.py", line 241, in __call__
return super().__call__(text_inputs, **kwargs)
File "mypath\.venv\lib\site-packages\transformers\pipelines\base.py", line 1148, in __call__
preprocess_params, forward_params, postprocess_params = self._sanitize_parameters(**kwargs)
File "mypath\.venv\lib\site-packages\transformers\pipelines\text_generation.py", line 171, in _sanitize_parameters
stop_sequence_ids = self.tokenizer.encode(stop_sequence, add_special_tokens=False)
File "mypath\.venv\lib\site-packages\transformers\tokenization_utils_base.py", line 2600, in encode
File "mypath\.venv\lib\site-packages\transformers\tokenization_utils_base.py", line 3008, in encode_plus
return self._encode_plus(
File "mypath\.venv\lib\site-packages\transformers\tokenization_utils_fast.py", line 576, in _encode_plus
batched_output = self._batch_encode_plus(
File "mypath\.venv\lib\site-packages\transformers\tokenization_utils_fast.py", line 504, in _batch_encode_plus
encodings = self._tokenizer.encode_batch(
TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]
### Description
I was using lower versions and the code worked with the run method.
```
langchain==0.0.350
langchain-community==0.0.3
langchain-core==0.1.0
langchain-experimental==0.0.47
langcodes==3.3.0
langserve==0.0.43
langsmith==0.0.70
```
Upgrading the package versions to
```
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.27
langchain-experimental==0.0.52
langserve==0.0.43
langsmith==0.1.10
```
Changing `run` to `invoke` then raises the errors above.
### System Info
### System info
OS: Windows
Python Version: 3.9.7
### Pip freeze
accelerate==0.27.2
aiohttp==3.9.3
aiosignal==1.3.1
anyio==4.3.0
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
asgiref==3.7.2
async-timeout==4.0.3
attrs==23.2.0
backoff==2.2.1
bcrypt==4.1.2
bitsandbytes==0.42.0
build==1.0.3
cachetools==5.3.3
certifi==2024.2.2
cffi==1.16.0
charset-normalizer==3.3.2
chroma-hnswlib==0.7.3
chromadb==0.4.24
click==8.1.7
colorama==0.4.6
coloredlogs==15.0.1
dataclasses-json==0.6.4
Deprecated==1.2.14
exceptiongroup==1.2.0
fastapi==0.110.0
filelock==3.13.1
flatbuffers==23.5.26
frozenlist==1.4.1
fsspec==2024.2.0
google-auth==2.28.1
googleapis-common-protos==1.62.0
gpt4all==2.2.1.post1
greenlet==3.0.3
grpcio==1.62.0
h11==0.14.0
httpcore==1.0.4
httptools==0.6.1
httpx==0.27.0
httpx-sse==0.4.0
huggingface-hub==0.21.1
humanfriendly==10.0
idna==3.6
importlib-metadata==6.11.0
importlib_resources==6.1.2
Jinja2==3.1.3
joblib==1.3.2
jsonpatch==1.33
jsonpointer==2.4
kubernetes==29.0.0
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.27
langchain-experimental==0.0.52
langserve==0.0.43
langsmith==0.1.10
MarkupSafe==2.1.5
marshmallow==3.21.0
minio==7.2.4
mmh3==4.1.0
monotonic==1.6
mpmath==1.3.0
multidict==6.0.5
mypy-extensions==1.0.0
networkx==3.2.1
numpy==1.26.4
oauthlib==3.2.2
onnxruntime==1.17.1
opentelemetry-api==1.23.0
opentelemetry-exporter-otlp-proto-common==1.23.0
opentelemetry-exporter-otlp-proto-grpc==1.23.0
opentelemetry-instrumentation==0.44b0
opentelemetry-instrumentation-asgi==0.44b0
opentelemetry-instrumentation-fastapi==0.44b0
opentelemetry-proto==1.23.0
opentelemetry-sdk==1.23.0
opentelemetry-semantic-conventions==0.44b0
opentelemetry-util-http==0.44b0
orjson==3.9.15
overrides==7.7.0
packaging==23.2
pillow==10.2.0
posthog==3.4.2
protobuf==4.25.3
psutil==5.9.8
pulsar-client==3.4.0
pyasn1==0.5.1
pyasn1-modules==0.3.0
pycparser==2.21
pycryptodome==3.20.0
pydantic==1.10.13
pyodbc==5.1.0
pypdf==4.0.2
PyPika==0.48.9
pyproject_hooks==1.0.0
pyreadline3==3.4.1
python-dateutil==2.8.2
python-dotenv==1.0.1
PyYAML==6.0.1
regex==2023.12.25
requests==2.31.0
requests-oauthlib==1.3.1
rsa==4.9
safetensors==0.4.2
scikit-learn==1.4.1.post1
scipy==1.12.0
sentence-transformers==2.4.0
six==1.16.0
sniffio==1.3.1
SQLAlchemy==2.0.27
sse-starlette==1.8.2
starlette==0.36.3
sympy==1.12
tenacity==8.2.3
threadpoolctl==3.3.0
tokenizers==0.15.2
tomli==2.0.1
torch==2.2.1
tqdm==4.66.2
transformers==4.38.1
typer==0.9.0
typing-inspect==0.9.0
typing_extensions==4.10.0
urllib3==2.2.1
uvicorn==0.27.1
watchfiles==0.21.0
websocket-client==1.7.0
websockets==12.0
wrapt==1.16.0
yarl==1.9.4
zipp==3.17.0
| SQLDatabaseChain invoke ValueError: Missing some input keys: {'query'} | https://api.github.com/repos/langchain-ai/langchain/issues/18328/comments | 2 | 2024-02-29T14:58:44Z | 2024-06-08T16:13:50Z | https://github.com/langchain-ai/langchain/issues/18328 | 2,161,460,487 | 18,328 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.vectorstores import Neo4jVector
from langchain.graphs import Neo4jGraph
from langchain.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings
from langchain.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
llm=ChatOllama(temperature=0, base_url=ollama_base_url, model="llama2:latest", streaming=True, top_k=10, top_p=0.3, num_ctx=3072)
neo4j_graph = Neo4jGraph(url=NEO4J_URI, username=NEO4J_USERNAME, password=NEO4J_PASSWORD)
embeddings = OllamaEmbeddings(base_url=ollama_base_url, model="llama2:latest")
template = """Answer the question based only on the following context:
Always do a case-insensitive and fuzzy search for any search.
Do not include any explanations or apologies in your responses.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
kg = Neo4jVector.from_existing_index(
embedding=embeddings,
url=NEO4J_URI,
username=NEO4J_USERNAME,
password=NEO4J_PASSWORD,
database="neo4j",
index_name="person_eg",
search_type='vector'
)
retriever = kg.as_retriever()
question= "In which department is Erik Valle working?"
rag_chain = (
{"context": retriever, "question": RunnablePassthrough()}
| prompt
| llm
| StrOutputParser()
)
rag_chain.invoke(question)
```
### Error Message and Stack Trace (if applicable)
```bash
Cell In[2], line 50
41 question= "In which department is Erik Valle working?"
43 rag_chain = (
44 {"context": retriever, "question": RunnablePassthrough()}
45 | prompt
46 | llm
47 | StrOutputParser()
48 )
---> 50 rag_chain.invoke(question)
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/runnables/base.py:2053](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/runnables/base.py#line=2052), in RunnableSequence.invoke(self, input, config)
2051 try:
2052 for i, step in enumerate(self.steps):
-> 2053 input = step.invoke(
2054 input,
2055 # mark each step as a child run
2056 patch_config(
2057 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
2058 ),
2059 )
2060 # finish the root run
2061 except BaseException as e:
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/runnables/base.py:2692](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/runnables/base.py#line=2691), in RunnableParallel.invoke(self, input, config)
2679 with get_executor_for_config(config) as executor:
2680 futures = [
2681 executor.submit(
2682 step.invoke,
(...)
2690 for key, step in steps.items()
2691 ]
-> 2692 output = {key: future.result() for key, future in zip(steps, futures)}
2693 # finish the root run
2694 except BaseException as e:
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/runnables/base.py:2692](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/runnables/base.py#line=2691), in <dictcomp>(.0)
2679 with get_executor_for_config(config) as executor:
2680 futures = [
2681 executor.submit(
2682 step.invoke,
(...)
2690 for key, step in steps.items()
2691 ]
-> 2692 output = {key: future.result() for key, future in zip(steps, futures)}
2693 # finish the root run
2694 except BaseException as e:
File [~/.pyenv/versions/3.11.7/lib/python3.11/concurrent/futures/_base.py:456](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/concurrent/futures/_base.py#line=455), in Future.result(self, timeout)
454 raise CancelledError()
455 elif self._state == FINISHED:
--> 456 return self.__get_result()
457 else:
458 raise TimeoutError()
File [~/.pyenv/versions/3.11.7/lib/python3.11/concurrent/futures/_base.py:401](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/concurrent/futures/_base.py#line=400), in Future.__get_result(self)
399 if self._exception:
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
404 self = None
File [~/.pyenv/versions/3.11.7/lib/python3.11/concurrent/futures/thread.py:58](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/concurrent/futures/thread.py#line=57), in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/retrievers.py:121](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/retrievers.py#line=120), in BaseRetriever.invoke(self, input, config, **kwargs)
117 def invoke(
118 self, input: str, config: Optional[RunnableConfig] = None, **kwargs: Any
119 ) -> List[Document]:
120 config = ensure_config(config)
--> 121 return self.get_relevant_documents(
122 input,
123 callbacks=config.get("callbacks"),
124 tags=config.get("tags"),
125 metadata=config.get("metadata"),
126 run_name=config.get("run_name"),
127 **kwargs,
128 )
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/retrievers.py:224](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/retrievers.py#line=223), in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
222 except Exception as e:
223 run_manager.on_retriever_error(e)
--> 224 raise e
225 else:
226 run_manager.on_retriever_end(
227 result,
228 )
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/retrievers.py:217](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/retrievers.py#line=216), in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
215 _kwargs = kwargs if self._expects_other_args else {}
216 if self._new_arg_supported:
--> 217 result = self._get_relevant_documents(
218 query, run_manager=run_manager, **_kwargs
219 )
220 else:
221 result = self._get_relevant_documents(query, **_kwargs)
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/vectorstores.py:654](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/vectorstores.py#line=653), in VectorStoreRetriever._get_relevant_documents(self, query, run_manager)
650 def _get_relevant_documents(
651 self, query: str, *, run_manager: CallbackManagerForRetrieverRun
652 ) -> List[Document]:
653 if self.search_type == "similarity":
--> 654 docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
655 elif self.search_type == "similarity_score_threshold":
656 docs_and_similarities = (
657 self.vectorstore.similarity_search_with_relevance_scores(
658 query, **self.search_kwargs
659 )
660 )
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/vectorstores/neo4j_vector.py:564](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/vectorstores/neo4j_vector.py#line=563), in Neo4jVector.similarity_search(self, query, k, **kwargs)
554 """Run similarity search with Neo4jVector.
555
556 Args:
(...)
561 List of Documents most similar to the query.
562 """
563 embedding = self.embedding.embed_query(text=query)
--> 564 return self.similarity_search_by_vector(
565 embedding=embedding,
566 k=k,
567 query=query,
568 )
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/vectorstores/neo4j_vector.py:659](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/vectorstores/neo4j_vector.py#line=658), in Neo4jVector.similarity_search_by_vector(self, embedding, k, **kwargs)
644 def similarity_search_by_vector(
645 self,
646 embedding: List[float],
647 k: int = 4,
648 **kwargs: Any,
649 ) -> List[Document]:
650 """Return docs most similar to embedding vector.
651
652 Args:
(...)
657 List of Documents most similar to the query vector.
658 """
--> 659 docs_and_scores = self.similarity_search_with_score_by_vector(
660 embedding=embedding, k=k, **kwargs
661 )
662 return [doc for doc, _ in docs_and_scores]
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/vectorstores/neo4j_vector.py:630](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/vectorstores/neo4j_vector.py#line=629), in Neo4jVector.similarity_search_with_score_by_vector(self, embedding, k, **kwargs)
620 parameters = {
621 "index": self.index_name,
622 "k": k,
(...)
625 "query": remove_lucene_chars(kwargs["query"]),
626 }
628 results = self.query(read_query, params=parameters)
--> 630 docs = [
631 (
632 Document(
633 page_content=result["text"],
634 metadata={
635 k: v for k, v in result["metadata"].items() if v is not None
636 },
637 ),
638 result["score"],
639 )
640 for result in results
641 ]
642 return docs
File ~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/vectorstores/neo4j_vector.py:632, in <listcomp>(.0)
620 parameters = {
621 "index": self.index_name,
622 "k": k,
(...)
625 "query": remove_lucene_chars(kwargs["query"]),
626 }
628 results = self.query(read_query, params=parameters)
630 docs = [
631 (
--> 632 Document(
633 page_content=result["text"],
634 metadata={
635 k: v for k, v in result["metadata"].items() if v is not None
636 },
637 ),
638 result["score"],
639 )
640 for result in results
641 ]
642 return docs
File ~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/documents/base.py:22, in Document.__init__(self, page_content, **kwargs)
20 def __init__(self, page_content: str, **kwargs: Any) -> None:
21 """Pass page_content in as positional or named arg."""
---> 22 super().__init__(page_content=page_content, **kwargs)
File ~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/load/serializable.py:107, in Serializable.__init__(self, **kwargs)
106 def __init__(self, **kwargs: Any) -> None:
--> 107 super().__init__(**kwargs)
108 self._lc_kwargs = kwargs
File ~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for Document
page_content
none is not an allowed value (type=type_error.none.not_allowed)
```
### Description
We use embeddings stored in a Neo4j database to answer questions via `Neo4jVector.from_existing_index`. The question is a plain string processed by a retriever, `k`, built from `Neo4jVector.from_existing_index`; there we specify the `langchain_community.embeddings` method (e.g., `OllamaEmbeddings`, `AzureOpenAIEmbeddings`, `SentenceTransformerEmbeddings`), the Neo4j server, and an index we created beforehand. We can successfully see the node label and the embedding node property via `k.node_label` and `k.embedding_node_property`, respectively. The issue arises whenever we pass embeddings, no matter which embedding model (e.g., LLAMA2, text-embedding-ada-002, or SentenceTransformer) or LLM we use. If we instead use an index pointing to a set of empty embeddings, the LLM replies with something false; the answer has nothing to do with the database, since the embeddings are null. We have already tried using `itemgetter` as shorthand to extract data from the map when combining it with `RunnableParallel`, and it produces the same error:
```python
from operator import itemgetter

rag_chain = (
    {
        "context": itemgetter("question") | retriever,
        "question": itemgetter("question"),
    }
    | prompt
    | llm
    | StrOutputParser()
)
```
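For reference, here is a minimal sketch of how the retriever described above is presumably constructed. The index name, credentials and embedding model below are placeholders, not values from the original setup:
```python
# Hypothetical reconstruction of the retriever setup; all names are placeholders.
from langchain_community.embeddings import SentenceTransformerEmbeddings
from langchain_community.vectorstores import Neo4jVector

store = Neo4jVector.from_existing_index(
    embedding=SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2"),
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",
    index_name="my_index",        # index created beforehand
    text_node_property="text",    # must point at a non-null node property,
                                  # otherwise page_content=None triggers the
                                  # ValidationError shown above
)
retriever = store.as_retriever()
```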
Do you have any idea how to solve this issue?
### System Info
**"pip freeze | grep langchain"**
langchain==0.1.7
langchain-cli==0.0.21
langchain-community==0.0.20
langchain-core==0.1.23
langchain-openai==0.0.6
-e /home/insightlab/langchain-samples/my-app/packages/neo4j-advanced-rag
**platform**: linux (Ubuntu 22.04)
**python version**: Python 3.11.7 | Cannot retrieve input from a vector store using an existing index. | https://api.github.com/repos/langchain-ai/langchain/issues/18327/comments | 2 | 2024-02-29T14:46:19Z | 2024-03-08T11:42:23Z | https://github.com/langchain-ai/langchain/issues/18327 | 2,161,434,054 | 18,327 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import json
import urllib.request
from langchain_community.agent_toolkits.openapi.spec import reduce_openapi_spec
urllib.request.urlretrieve("https://api.snyk.io/rest/openapi/2024-02-21", "openapi_spec.json")
with open("openapi_spec.json", encoding='utf-8') as f:
raw_openapi_spec = json.load(f)
reduced_spec = reduce_openapi_spec(raw_openapi_spec)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/Users/matt/Dev/langchain_test/./test_case.py", line 14, in <module>
reduced_spec = reduce_openapi_spec(raw_openapi_spec)
File "/Users/matt/Dev/langchain_test/venv/lib/python3.10/site-packages/langchain_community/agent_toolkits/openapi/spec.py", line 45, in reduce_openapi_spec
endpoints = [
File "/Users/matt/Dev/langchain_test/venv/lib/python3.10/site-packages/langchain_community/agent_toolkits/openapi/spec.py", line 46, in <listcomp>
(name, description, dereference_refs(docs, full_schema=spec))
File "/Users/matt/Dev/langchain_test/venv/lib/python3.10/site-packages/langchain_core/utils/json_schema.py", line 74, in dereference_refs
else _infer_skip_keys(schema_obj, full_schema)
File "/Users/matt/Dev/langchain_test/venv/lib/python3.10/site-packages/langchain_core/utils/json_schema.py", line 55, in _infer_skip_keys
keys += _infer_skip_keys(v, full_schema)
File "/Users/matt/Dev/langchain_test/venv/lib/python3.10/site-packages/langchain_core/utils/json_schema.py", line 55, in _infer_skip_keys
keys += _infer_skip_keys(v, full_schema)
File "/Users/matt/Dev/langchain_test/venv/lib/python3.10/site-packages/langchain_core/utils/json_schema.py", line 51, in _infer_skip_keys
ref = _retrieve_ref(v, full_schema)
File "/Users/matt/Dev/langchain_test/venv/lib/python3.10/site-packages/langchain_core/utils/json_schema.py", line 18, in _retrieve_ref
out = out[int(component)]
KeyError: 400
```
### Description
When using a JSON OpenAPI spec file, `reduce_openapi_spec` fails with a `KeyError` on numeric string keys (such as the `400` response code above). This is due to https://github.com/langchain-ai/langchain/pull/14745, which casts all digit-only path components to ints. The fix is probably to do
```python
if component.isdigit() and isinstance(out[component], int):
```
but I wasn't sure how you'd want to fix it.
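For illustration, a hypothetical sketch of what such a guard could look like (modeled on `_retrieve_ref` as seen in the traceback; this is not the actual upstream fix):
```python
def _retrieve_ref(path: str, schema: dict):
    """Hypothetical variant: only treat a digit-only path component as a list
    index when the current node actually is a list, so dict keys such as the
    "400" response code keep working."""
    components = path.split("/")
    if components[0] != "#":
        raise ValueError("ref paths are expected to be URI fragments")
    out = schema
    for component in components[1:]:
        if component.isdigit() and isinstance(out, list):
            out = out[int(component)]
        else:
            out = out[component]
    return out
```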
### System Info
pip freeze | grep langchain
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.27
| Key error when using dereference_refs from langchain_community.agent_toolkits.openapi.spec when using JSON OpenAPI spec file | https://api.github.com/repos/langchain-ai/langchain/issues/18325/comments | 0 | 2024-02-29T14:32:09Z | 2024-06-08T16:13:46Z | https://github.com/langchain-ai/langchain/issues/18325 | 2,161,405,297 | 18,325 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel


class TestModel(BaseModel):
    test_attribute: str


if __name__ == '__main__':
    pydantic_output_parser = PydanticOutputParser(pydantic_object=TestModel)
    print(pydantic_output_parser)
```
### Error Message and Stack Trace (if applicable)
```
pydantic.v1.error_wrappers.ValidationError: 1 validation error for PydanticOutputParser
pydantic_object
subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel)
```
### Description
Hello,
Starting from version 0.1.6, passing existing pydantic v2 `BaseModel` subclasses triggers an error. This problem is directly associated with a particular [commit](https://github.com/langchain-ai/langchain/commit/852973d6169fee3e80f3b361453dd14980dd8797#diff-f86ea1cb10fcab7e1a505fab4aad6b8e4ad3fc33128f7d0c74474166c66bb608):
In the file `pydantic.py` (now relocated to core), the `pydantic_object` attribute of the `PydanticOutputParser` class was changed from a generic type variable `T = TypeVar("T", bound=BaseModel)` to `Type[BaseModel]`. Consequently, the validator used in the constructor switched from `any_class_validator` to `make_class_validator`, so all child `BaseModel` classes are now checked for being subclasses of pydantic v1 `BaseModel`.
It worked on 0.1.5.
I am aware that [Example 2 in the documentation](https://python.langchain.com/docs/guides/pydantic_compatibility) advises against using pydantic v2 BaseModels.
However, **I find it unfortunate that upgrading to version 0.1.6 breaks compatibility with pydantic v2.**
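Until this is addressed, a minimal workaround sketch, assuming the schema can be defined against the v1 compatibility layer that pydantic 2.x ships:
```python
# Hypothetical workaround: use pydantic's bundled v1 shim, which is what the
# 0.1.6 validator checks subclasses against.
from langchain.output_parsers import PydanticOutputParser
from pydantic.v1 import BaseModel


class TestModel(BaseModel):
    test_attribute: str


pydantic_output_parser = PydanticOutputParser(pydantic_object=TestModel)
print(pydantic_output_parser)
```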
Thank you.
### System Info
python 3.10
langchain >= 0.1.6
mac Sonoma 14.1.2 | Breaking pydantic v2 compatibility in output parsers from 0.1.6 | https://api.github.com/repos/langchain-ai/langchain/issues/18322/comments | 1 | 2024-02-29T13:26:59Z | 2024-04-26T14:41:02Z | https://github.com/langchain-ai/langchain/issues/18322 | 2,161,257,560 | 18,322 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.prompts import PromptTemplate
from langchain_community.llms.huggingface_endpoint import HuggingFaceEndpoint
from langchain.chains import LLMChain

llm = HuggingFaceEndpoint(
    repo_id="google/flan-t5-large",
    temperature=0,
    max_new_tokens=250,
    huggingfacehub_api_token=HUGGINGFACE_TOKEN,
)

prompt_tpl = PromptTemplate(
    template="What is the good name for a company that makes {product}",
    input_variables=["product"],
)

chain = LLMChain(llm=llm, prompt=prompt_tpl)
print(chain.invoke("colorful socks"))
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/Users/michaelchu/Documents/agent/agent.py", line 20, in <module>
print(chain.invoke("colorful socks"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain/chains/llm.py", line 103, in _call
response = self.generate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain/chains/llm.py", line 115, in generate
return self.llm.generate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_core/language_models/llms.py", line 568, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_core/language_models/llms.py", line 741, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_core/language_models/llms.py", line 605, in _generate_helper
raise e
File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_core/language_models/llms.py", line 592, in _generate_helper
self._generate(
File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_core/language_models/llms.py", line 1177, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_community/llms/huggingface_endpoint.py", line 256, in _call
response = self.client.post(
^^^^^^^^^^^^^^^^^
File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/huggingface_hub/inference/_client.py", line 242, in post
hf_raise_for_status(response)
File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/huggingface_hub/utils/_errors.py", line 358, in hf_raise_for_status
raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: AxsbrX3A4JxXuBdYC7fv-)
Bad request:
The following `model_kwargs` are not used by the model: ['return_full_text', 'stop', 'watermark', 'stop_sequences'] (note: typos in the generate arguments will also show up in this list)
```
### Description
Hi, folks. I'm just trying to run a simple `LLMChain` and getting a Bad Request due to the `model_kwargs` check on the Hugging Face side. Several issues have been raised about the same problem, but it hasn't been fixed in the latest release of langchain. Please take a look, thanks!
**_Previously raised issue_**: https://github.com/langchain-ai/langchain/issues/10848
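One way to isolate the problem is to call the Inference API directly, without the extra generation kwargs that `HuggingFaceEndpoint` injects. A hypothetical sketch (same model and token as above):
```python
# Hypothetical isolation test: if this call succeeds, the Bad Request above is
# caused by the injected kwargs, not by the model or the token.
from huggingface_hub import InferenceClient

client = InferenceClient(model="google/flan-t5-large", token=HUGGINGFACE_TOKEN)
print(client.text_generation(
    "What is a good name for a company that makes colorful socks?",
    max_new_tokens=250,
))
```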
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:18 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6000
> Python Version: 3.12.1 (main, Feb 14 2024, 09:50:51) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.27
> langchain: 0.1.9
> langchain_community: 0.0.24
> langsmith: 0.1.10
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Bad request: The following `model_kwargs` are not used by the model: ['return_full_text', 'stop', 'watermark', 'stop_sequences'] (note: typos in the generate arguments will also show up in this list) | https://api.github.com/repos/langchain-ai/langchain/issues/18321/comments | 14 | 2024-02-29T13:19:43Z | 2024-07-16T13:24:54Z | https://github.com/langchain-ai/langchain/issues/18321 | 2,161,242,032 | 18,321 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from pprint import pprint
from langchain_core.utils.function_calling import convert_to_openai_function
from pydantic import v1
class BiblioExtraction(v1.BaseModel):
    title: str
    authors: list[str]
f = convert_to_openai_function(BiblioExtraction)
pprint(f)
```
### Error Message and Stack Trace (if applicable)
```python
{'description': '',
'name': 'BiblioExtraction',
'parameters': {'properties': {'authors': {'items': {'type': 'string'},
'type': 'array'}},
'required': ['title', 'authors'],
'type': 'object'}}
```
### Description
I am trying to use `langchain.chains.openai_functions.create_structured_output_runnable` to extract citation information from text, and I found that I cannot extract the `title` field.
After investigating, I discovered that langchain [introduced an undocumented behavior](https://github.com/langchain-ai/langchain/commit/ef42d9d559bf8e9c7de85f20fe9a67cc78c3030a#diff-5244d0e3a3878e2e86fbdac70ff585a1f956939c46ef65e53c77fd896bc03bd6R53) in version 0.1.4 which deletes the `title` field from the schema.
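A possible workaround sketch until this is fixed: hand-write the function schema instead of deriving it from the pydantic model, so the `title` property survives (the description string below is made up):
```python
# Hypothetical hand-written schema equivalent to BiblioExtraction.
f = {
    "name": "BiblioExtraction",
    "description": "Extract bibliographic information.",  # placeholder text
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "authors": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["title", "authors"],
    },
}
```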
### System Info
```
❯ python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.10.13 (main, Nov 16 2023, 15:58:41) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.27
> langchain: 0.1.9
> langchain_community: 0.0.24
> langsmith: 0.1.5
> langchain_openai: 0.0.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | convert_to_openai_function will drop `title` field in `output_schema` if it's a Pydantic model. | https://api.github.com/repos/langchain-ai/langchain/issues/18319/comments | 3 | 2024-02-29T11:45:13Z | 2024-07-04T16:07:58Z | https://github.com/langchain-ai/langchain/issues/18319 | 2,161,064,475 | 18,319 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I searched the LangChain documentation with the integrated search.
- [X] I added a very descriptive title to this issue.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
Here is a simple version of the code I'm trying to use.
```
from langchain.chains import LLMChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.chat_message_histories import RedisChatMessageHistory
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(model_path=MODEL_PATH, n_ctx=2048, n_batch=512, max_tokens=-1)
history = RedisChatMessageHistory(session_id=session_id, url="redis://localhost:6379")
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=2048, chat_memory=history)
conversation = LLMChain(llm=llm, prompt=prompt, memory=memory)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
While developing a chatbot, I used `ConversationTokenBufferMemory` with `RedisChatMessageHistory` to support multiple users. The idea is to store and retrieve history as needed and load it into memory to pass into the chain. But when the history reaches `max_token_limit`, instead of forgetting the leftmost (oldest) messages to keep the context window bounded, the app crashes because the context length exceeds the limit.
What's wrong? `ConversationTokenBufferMemory` is supposed to automatically keep track of the total tokens used in history and keep only the latest `n` tokens to avoid crashes, right?
### It might be a bug or a lack of support in the `ConversationTokenBufferMemory` class.
Expecting a solution that might sound something like this:
> If `chat_memory` is an instance of `RedisChatMessageHistory`, then it should retrieve the entire history from Redis as it does today, but trim the history to the last `max_token_limit` tokens before passing it as history to the chain. This would come in handy when Langchain is used in production and has to juggle multiple users.
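For illustration, a hypothetical pruning helper along those lines (names are made up; it simply drops the oldest messages until the history fits):
```python
def prune_history(history, llm, max_token_limit=2048):
    """Hypothetical sketch: trim a RedisChatMessageHistory to the last
    max_token_limit tokens before handing it to the chain."""
    messages = list(history.messages)
    while messages and llm.get_num_tokens_from_messages(messages) > max_token_limit:
        messages.pop(0)  # forget the oldest message first
    return messages
```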
### System Info
Platform : Windows
Python : 3.10.9
Langchain : Latest while posting this. | ConversationTokenBufferMemory with RedisChatMessageHistory doesn't work, as it crashes due to exceeding the context limit. | https://api.github.com/repos/langchain-ai/langchain/issues/18303/comments | 6 | 2024-02-29T06:50:29Z | 2024-03-01T06:34:17Z | https://github.com/langchain-ai/langchain/issues/18303 | 2,160,530,801 | 18,303 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The documentation page for BigQueryVectorSearch is broken: https://python.langchain.com/docs/integrations/vectorstores/bigquery_vector_search.
Is there anything I can do to help fix it?
### Idea or request for content:
_No response_ | DOC: documentation page for `BigQueryVectorSearch` is broken | https://api.github.com/repos/langchain-ai/langchain/issues/18296/comments | 2 | 2024-02-29T01:22:55Z | 2024-06-08T16:13:40Z | https://github.com/langchain-ai/langchain/issues/18296 | 2,160,164,321 | 18,296 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
class CustomSQLTool(BaseTool):
    name = "SQL_TOOL"
    description = "useful when you need to answer questions residing on spark tables"
    llm = AzureChatOpenAI()
    k = 30

    def _run(
        self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool."""
        spark_sql = SparkSQL(schema='schema_name', include_tables=['table_name'])
        toolkit = SparkSQLToolkit(db=spark_sql, llm=self.llm)
        spark_sql_agent_executor = create_spark_sql_agent(
            llm=chatllm,  # chatllm is the Azure OpenAI GPT model reference
            toolkit=toolkit,
            verbose=True,
            top_k=self.k,
            agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
            early_stopping_method="generate",
            agent_executor_kwargs={"handle_parsing_errors": True},
        )
        return spark_sql_agent_executor.run(query)

    async def _arun(
        self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("CustomSQLTool does not support async")
```
### Error Message and Stack Trace (if applicable)
```
[68f78b9qrn] 2024-02-27T20:13:01.+0000 ERROR src.mlflowserving.scoring_server Encountered an unexpected error while evaluating the model. Verify that the input is compatible with the model for inference. Error ''CustomSQLTool' object has no attribute 'is_single_input''
[68f78b9qrn] Traceback (most recent call last):
[68f78b9qrn] File "/opt/conda/envs/mlflow-env/lib/python3.9/site-packages/src/mlflowserving/scoring_server/__init__.py", line 457, in transformation
[68f78b9qrn] raw_predictions = model.predict(data, params=params)
[68f78b9qrn] File "/opt/conda/envs/mlflow-env/lib/python3.9/site-packages/mlflow/pyfunc/__init__.py", line 492, in predict
[68f78b9qrn] return _predict()
[68f78b9qrn] File "/opt/conda/envs/mlflow-env/lib/python3.9/site-packages/mlflow/pyfunc/__init__.py", line 478, in _predict
[68f78b9qrn] return self._predict_fn(data, params=params)
[68f78b9qrn] File "/opt/conda/envs/mlflow-env/lib/python3.9/site-packages/mlflow/pyfunc/model.py", line 473, in predict
[68f78b9qrn] return self.python_model.predict(self.context, self._convert_input(model_input))
[68f78b9qrn] File "<command-1070239644302950>", line 10, in predict
[68f78b9qrn] File "/opt/conda/envs/mlflow-env/lib/python3.9/site-packages/langchain/agents/conversational/base.py", line 109, in from_llm_and_tools
[68f78b9qrn] cls._validate_tools(tools)
[68f78b9qrn] File "/opt/conda/envs/mlflow-env/lib/python3.9/site-packages/langchain/agents/conversational/base.py", line 91, in _validate_tools
[68f78b9qrn] validate_tools_single_input(cls.__name__, tools)
[68f78b9qrn] File "/opt/conda/envs/mlflow-env/lib/python3.9/site-packages/langchain/agents/utils.py", line 9, in validate_tools_single_input
[68f78b9qrn] if not tool.is_single_input:
[68f78b9qrn] AttributeError: 'CustomSQLTool' object has no attribute 'is_single_input'
[68f78b9qrn]
```
### Description
I am using langchain==0.0.330 (also tried 0.0.347) and creating a custom SQL tool class to query Spark tables in my Azure Databricks environment.
My custom SQL class is a child class of `BaseTool`.
I register the whole thing as an MLflow custom pyfunc model.
Registering the model in the model registry succeeds, and loading and querying the registered model works fine.
When I deploy it as a model serving endpoint, it fails, stating that my class `CustomSQLTool` does not have the attribute `is_single_input`.
As far as I understand, this attribute is provided by the `BaseTool` base class and used during tool validation, so I should not need to override it.
Both model serving and the model registry accept the same input string wrapped in a pandas DataFrame.
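One hypothetical mitigation, assuming the failure comes from a langchain version skew between the registry and the serving environment, is to pin the exact versions when logging the model:
```python
# Hypothetical sketch: pin dependencies so the unpickled CustomSQLTool is
# rebuilt against the same BaseTool definition it was created with.
import mlflow

mlflow.pyfunc.log_model(
    artifact_path="model",
    python_model=my_pyfunc_wrapper,  # placeholder for the pyfunc model used above
    pip_requirements=["langchain==0.0.330", "openai"],
)
```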
### System Info
langchain 0.0.330 and python 3.9 on Databricks runtime 12.2 ML. | Custom Agent Class fails with object has no attribute 'is_single_input' | https://api.github.com/repos/langchain-ai/langchain/issues/18292/comments | 1 | 2024-02-29T00:21:13Z | 2024-06-08T16:13:35Z | https://github.com/langchain-ai/langchain/issues/18292 | 2,160,102,070 | 18,292 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Using this code gives the first type of exception "You must provide an embedding function to compute embeddings."
``` python
import chromadb
from chromadb.utils import embedding_functions
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import ChatOpenAI
from langchain_community.vectorstores import Chroma

metadata_field_info = [
    AttributeInfo(
        name="date",
        description="Date description",
        type="integer",
    ),
]
document_content_description = "Content description"
llm = ChatOpenAI(temperature=0, api_key=key)  # key holds the OpenAI API key


class ChromaDbInstance:
    def __init__(self) -> None:
        self._client = chromadb.HttpClient(
            host=f"http://localhost:{CHROMA_DB_PORT}",
        )
        self._collection_texts = self._client.get_or_create_collection(
            name=CHROMA_MAIN_COLLECTION_NAME
        )
        self._retriever = SelfQueryRetriever.from_llm(
            llm,
            Chroma(
                client=self._client,
                collection_name=CHROMA_MAIN_COLLECTION_NAME,
            ),
            document_content_description,
            metadata_field_info,
        )

    @property
    def count(self):
        return self._collection_texts.count()

    def add_text_to_db(self, text):
        try:
            self._collection_texts.add(
                documents=[text], ids=[str(self.count + 1)]
            )
        except Exception as e:
            print("-- Error adding new text to db --", e)

    def query_db(self, query):
        return self._retriever.invoke(query)
```
If I pass an `embedding_function` to the Chroma initialization, I get another error: `AttributeError: 'ONNXMiniLM_L6_V2' object has no attribute 'embed_query'`
``` python
Chroma(
    client=self._client,
    collection_name=CHROMA_MAIN_COLLECTION_NAME,
    embedding_function=embedding_functions.DefaultEmbeddingFunction(),
),
```
### Error Message and Stack Trace (if applicable)
Initial case when not providing any embedding_function to `langchain_community.vectorstores.Chroma`:
```
Traceback (most recent call last):
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 758, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 778, in app
await route.handle(scope, receive, send)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 299, in handle
await self.app(scope, receive, send)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 79, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 74, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 278, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/server.py", line 20, in echo_endpoint
return {"response": db.query_db(req.prompt)}
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/modules/vector_db.py", line 72, in query_db
return self._retriever.invoke(query)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/retrievers.py", line 141, in invoke
return self.get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/retrievers.py", line 244, in get_relevant_documents
raise e
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/retrievers.py", line 237, in get_relevant_documents
result = self._get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py", line 186, in _get_relevant_documents
docs = self._get_docs_with_query(new_query, search_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py", line 160, in _get_docs_with_query
docs = self.vectorstore.search(query, self.search_type, **search_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 159, in search
return self.similarity_search(query, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py", line 348, in similarity_search
docs_and_scores = self.similarity_search_with_score(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py", line 429, in similarity_search_with_score
results = self.__query_collection(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/utils/utils.py", line 35, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py", line 155, in __query_collection
return self._collection.query(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/chromadb/api/models/Collection.py", line 327, in query
valid_query_embeddings = self._embed(input=valid_query_texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/chromadb/api/models/Collection.py", line 629, in _embed
raise ValueError(
ValueError: You must provide an embedding function to compute embeddings.https://docs.trychroma.com/embeddings
```
Case when providing the default embedding_function to the Chroma initializer
```
Traceback (most recent call last):
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 758, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 778, in app
await route.handle(scope, receive, send)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 299, in handle
await self.app(scope, receive, send)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 79, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/starlette/routing.py", line 74, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 278, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/server.py", line 20, in echo_endpoint
return {"response": db.query_db(req.prompt)}
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/modules/vector_db.py", line 75, in query_db
return self._retriever.invoke(query)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/retrievers.py", line 141, in invoke
return self.get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/retrievers.py", line 244, in get_relevant_documents
raise e
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/retrievers.py", line 237, in get_relevant_documents
result = self._get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py", line 186, in _get_relevant_documents
docs = self._get_docs_with_query(new_query, search_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py", line 160, in _get_docs_with_query
docs = self.vectorstore.search(query, self.search_type, **search_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 159, in search
return self.similarity_search(query, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py", line 348, in similarity_search
docs_and_scores = self.similarity_search_with_score(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/heithvald/Documents/development/projects/project21/python/.venv/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py", line 437, in similarity_search_with_score
query_embedding = self._embedding_function.embed_query(query)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'ONNXMiniLM_L6_V2' object has no attribute 'embed_query'
```
### Description
* I'm trying to use a SelfQueryRetriever with Chroma vector store.
* I expect it to work without passing the `embedding_function` arg, or when I pass it explicitly `embedding_function=embedding_functions.DefaultEmbeddingFunction()` to the Chroma constructor
* Instead, I get errors when calling `retriever.invoke(text)`
I've debugged it, and the problem is most likely on this line: https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/chroma.py#L128
If nothing is passed as `embedding_function`, the store initializes normally and simply queries the chroma collection, and inside the collection the chromadb library itself invokes the embedding function with the right call signature (`return self._embedding_function(input=input)`). At least this works for the default embedding function provided by chromadb. Please fix it.
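In the meantime, a possible workaround sketch is a small adapter that gives a chromadb embedding function the `embed_documents`/`embed_query` interface the langchain wrapper expects (class name is made up):
```python
# Hypothetical adapter around chromadb's default embedding function.
from chromadb.utils import embedding_functions


class ChromaEFAdapter:
    def __init__(self, ef):
        self._ef = ef

    def embed_documents(self, texts):
        return self._ef(texts)

    def embed_query(self, text):
        return self._ef([text])[0]


embedding = ChromaEFAdapter(embedding_functions.DefaultEmbeddingFunction())
# then: Chroma(client=..., collection_name=..., embedding_function=embedding)
```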
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:24 PDT 2023; root:xnu-10002.41.9~6/RELEASE_ARM64_T6000
> Python Version: 3.11.8 (v3.11.8:db85d51d3e, Feb 6 2024, 18:02:37) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.1.27
> langchain: 0.1.9
> langchain_community: 0.0.24
> langsmith: 0.1.10
> langchain_openai: 0.0.8
> chromadb: 0.4.24 | Trying to use Chroma vectorstore with default embedding_function results in an error | https://api.github.com/repos/langchain-ai/langchain/issues/18291/comments | 1 | 2024-02-29T00:16:15Z | 2024-07-15T07:48:57Z | https://github.com/langchain-ai/langchain/issues/18291 | 2,160,097,448 | 18,291 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from dotenv import load_dotenv
from langchain_community.vectorstores.faiss import FAISS
from azure import azure_embeddings
load_dotenv()
if __name__ == '__main__':
    db = FAISS.load_local("faiss_index", azure_embeddings)
    retriever = db.as_retriever()
```
The index was previously saved with:
```python
db = FAISS.from_documents(documents, azure_embeddings)
# save to disk
db.save_local("faiss_index")
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "<REDACTED>/app.py", line 29, in <module>
db = FAISS.load_local("faiss_index", azure_embeddings)
File "<REDACTED>/site-packages/langchain_community/vectorstores/faiss.py", line 1110, in load_local
index = faiss.read_index(
AttributeError: module 'faiss' has no attribute 'read_index'
```
### Description
Cannot load the index for my RAG app. It was working fine for a few days.
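A quick diagnostic that may narrow this down (hypothetical; it just checks which `faiss` module is actually being imported):
```python
# A local file named faiss.py, or a broken faiss-cpu install, would shadow the
# real package and explain the missing read_index attribute.
import faiss

print(faiss.__file__)
print(hasattr(faiss, "read_index"))
```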
### System Info
```bash
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.26
langchain-openai==0.0.6
langchainhub==0.1.14
faiss-cpu==1.7.4
```
python 3.9.18
macOS | Cannot import faiss index from a file | https://api.github.com/repos/langchain-ai/langchain/issues/18285/comments | 2 | 2024-02-28T21:12:41Z | 2024-06-19T16:07:38Z | https://github.com/langchain-ai/langchain/issues/18285 | 2,159,876,248 | 18,285 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

model = ChatOpenAI()
sql_response = (
    RunnablePassthrough.assign(schema=get_schema)
    | prompt
    | model.bind(stop=["\nSQLResult:"])
    | StrOutputParser()
)
```
How can I add `ConversationBufferMemory` to this chain?
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to add memory to this chain, but I'm unable to do it.
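For what it's worth, a hypothetical sketch of one way to thread `ConversationBufferMemory` into this LCEL chain: it loads the stored history into the prompt inputs on each call (the prompt would need a matching `{history}` variable, which is an assumption here):
```python
from langchain.memory import ConversationBufferMemory
from langchain_core.runnables import RunnablePassthrough

memory = ConversationBufferMemory(return_messages=True)

sql_response = (
    RunnablePassthrough.assign(
        schema=get_schema,  # as in the snippet above
        history=lambda _: memory.load_memory_variables({})["history"],
    )
    | prompt
    | model.bind(stop=["\nSQLResult:"])
    | StrOutputParser()
)
# after each call: memory.save_context({"input": question}, {"output": answer})
```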
### System Info
No particular system info | How to add memory | https://api.github.com/repos/langchain-ai/langchain/issues/18256/comments | 0 | 2024-02-28T10:56:59Z | 2024-02-28T11:06:44Z | https://github.com/langchain-ai/langchain/issues/18256 | 2,158,680,482 | 18,256 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad.openai_tools import format_to_openai_tool_messages
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

prompt_template = ChatPromptTemplate.from_messages(
    [
        ("system", prompt),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

llm = ChatOpenAI(model=llm)
llm_with_tools = llm.bind_tools(tools)

input = {
    "input": lambda x: x["input"],
    "agent_scratchpad": lambda x: format_to_openai_tool_messages(
        x["intermediate_steps"]
    ),
}

agent = (
    input
    | prompt_template
    | llm_with_tools
    | OpenAIToolsAgentOutputParser()
)

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# excerpted from a method; `self.agent_executor` normalized to the local name
for chunk in agent_executor.stream({"input": input}):
    if "output" in chunk:
        print(chunk["output"])  # originally `return chunk["output"]`
        break
```
### Error Message and Stack Trace (if applicable)
```
Error in LangChainTracer.on_chain_error callback: TracerException('No indexed run ID da9518a3-b6ce-4525-a985-10b3ad41fef9.')
```
### Description
I am trying to trace my LangChain runs by using LangSmith, but I get the following error at the end of the flow:
`Error in LangChainTracer.on_chain_error callback: TracerException('No indexed run ID da9518a3-b6ce-4525-a985-10b3ad41fef9.')`
The environment variables are set up via a `.env` file.
The run is however logged in LangSmith and I can see it, but the error still appears.
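For reference, the tracing setup presumably looks something like this (values redacted; these are the standard LangSmith environment variables, loaded before any langchain import):
```python
# Hypothetical .env contents, expressed as environment variables.
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<redacted>"
os.environ["LANGCHAIN_PROJECT"] = "<project-name>"
```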
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 22.6.0: Fri Sep 15 13:41:28 PDT 2023; root:xnu-8796.141.3.700.8~1/RELEASE_ARM64_T6000
> Python Version: 3.9.6 (default, Dec 7 2023, 05:42:47)
[Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.26
> langchain: 0.1.9
> langchain_community: 0.0.22
> langsmith: 0.1.5
> langchain_openai: 0.0.7
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Issue: Error in LangChainTracer.on_chain_error callback: TracerException('No indexed run ID da9518a3-b6ce-4525-a985-10b3ad41fef9.') | https://api.github.com/repos/langchain-ai/langchain/issues/18254/comments | 2 | 2024-02-28T10:38:58Z | 2024-07-17T12:31:43Z | https://github.com/langchain-ai/langchain/issues/18254 | 2,158,645,069 | 18,254 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python3
import os
from langchain_core.prompts import PromptTemplate, ChatPromptTemplate
from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings
from langchain_core.output_parsers import StrOutputParser
os.environ["NVIDIA_API_KEY"] = "nvapi-*"
prompt_template = "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Please ensure that your responses are positive in nature."
llm = ChatNVIDIA(model="mixtral_8x7b", max_tokens=32)
pt = PromptTemplate.from_template(prompt_template)
chain = pt | llm | StrOutputParser()
resp = chain.stream({"context_str": "", "query_str": "What is nvlink"})
count = sum(1 for _ in (print(chunk) for chunk in resp))
print("Token count:", count)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am using `ChatNVIDIA` to build a chain. I have set `max_tokens` to 32 and am trying to generate a response, but the number of tokens generated is more than that; in the example above it's around 78, even though the `max_tokens` limit is set.
Note that when I simply create a `ChatNVIDIA` object and invoke it directly, it works as expected:
```python3
llm = ChatNVIDIA(model="mixtral_8x7b", max_tokens=32)
llm.invoke("Hi")
```
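As a sanity check, it may be worth counting model tokens rather than stream chunks, since one streamed chunk is not guaranteed to equal one token. A hypothetical sketch:
```python
# get_num_tokens uses the model's (or a default) tokenizer, so this measures
# actual tokens instead of chunk events.
resp = chain.invoke({"context_str": "", "query_str": "What is nvlink"})
print("Token count:", llm.get_num_tokens(resp))
```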
### System Info
Version details are added below
```
$ pip3 freeze | grep langchain
langchain==0.0.352
langchain-community==0.0.7
langchain-core==0.1.3
langchain-nvidia-ai-endpoints==0.0.1
langchain-nvidia-trt==0.0.1rc0
```
I am using ubuntu 22.04 | max_token limit is not followed when using chain | https://api.github.com/repos/langchain-ai/langchain/issues/18248/comments | 2 | 2024-02-28T06:32:56Z | 2024-04-20T13:57:47Z | https://github.com/langchain-ai/langchain/issues/18248 | 2,158,220,922 | 18,248 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
[Document correction](https://python.langchain.com/docs/expression_language/streaming)
The tip

> An LCEL chain constructed using using non-streaming components, will still be able to stream in a lot of cases, with streaming of partial output starting after the last non-streaming step in the chain.

repeats the word "using".
### Idea or request for content:
_No response_ | DOC: https://python.langchain.com/docs/expression_language/streaming Document correction | https://api.github.com/repos/langchain-ai/langchain/issues/18247/comments | 1 | 2024-02-28T06:10:33Z | 2024-06-08T16:13:27Z | https://github.com/langchain-ai/langchain/issues/18247 | 2,158,194,261 | 18,247 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain_community.utilities.stackexchange import StackExchangeAPIWrapper
# can also replicate with: from langchain_community.tools.stackexchange.tool import StackExchangeTool
api = StackExchangeAPIWrapper()
api.run("window function mysql")
```
### Error Message and Stack Trace (if applicable)
```
File c:\ProgramData\Anaconda3\envs\llm\lib\site-packages\langchain_community\utilities\stackexchange.py:55, in <listcomp>(.0)
     50 for question in questions:
     51     res_text = f"Question: {question['title']}\n{question['excerpt']}"
     52     relevant_answers = [
     53         answer
     54         for answer in answers
---> 55         if answer["question_id"] == question["question_id"]
     56     ]
     57     accepted_answers = [
     58         answer for answer in relevant_answers if answer["is_accepted"]
     59     ]
     60     if relevant_answers:
KeyError: 'question_id'
```
### Description
Currently the [stackexchange API wrapper](https://api.python.langchain.com/en/latest/_modules/langchain_community/utilities/stackexchange.html#StackExchangeAPIWrapper) may raise `KeyError`, because some answers do not have a `question_id` (which is an optional field according to [the official API docs](https://api.stackexchange.com/docs/types/search-excerpt)).
My quick one-liner fix:
In the `StackExchangeAPIWrapper` class source code at line 55,
replace `if answer['question_id'] == question['question_id']` with `if answer.get('question_id', '') == question.get('question_id', '')`.
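For illustration, the patched comprehension could look like this (a hypothetical variant that also skips answers where the id is missing entirely):
```python
relevant_answers = [
    answer
    for answer in answers
    if answer.get("question_id") is not None
    and answer.get("question_id") == question.get("question_id")
]
```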
### System Info
langchain==0.1.5
langchain-community==0.0.17
stackapi==0.3.0 | StackExchangeTool generates keyerror: 'question_id' | https://api.github.com/repos/langchain-ai/langchain/issues/18242/comments | 1 | 2024-02-28T02:25:59Z | 2024-08-02T15:29:58Z | https://github.com/langchain-ai/langchain/issues/18242 | 2,157,950,437 | 18,242 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
<img width="1362" alt="image" src="https://github.com/langchain-ai/langchain/assets/40649016/19e08b39-50bb-4f24-bacd-5d4a2d550b0d">
I found an issue with the 'Next' navigation button on the [LangSmith](https://python.langchain.com/docs/langsmith/) doc page.
The expected target is https://python.langchain.com/docs/langsmith/walkthrough
### Idea or request for content:
It should be easy to fix. | DOC: Wrong 'Next' Navigation for LangSmith | https://api.github.com/repos/langchain-ai/langchain/issues/18241/comments | 0 | 2024-02-28T02:15:41Z | 2024-06-08T16:13:16Z | https://github.com/langchain-ai/langchain/issues/18241 | 2,157,942,271 | 18,241 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Missing explanation of how to use LanceDB with a filter via the langchain API: https://python.langchain.com/docs/integrations/vectorstores/lancedb
Based on the documentation, I am trying to use Langchain with LanceDB as the vector database. Here is how I instantiate the database:
```python
db = lancedb.connect("./data/lancedb")
table = db.create_table("my_docs", data=[
    {"vector": embeddings.embed_query(chunks[0].page_content), "text": chunks[0].page_content, "id": "1", "file": "bb"}
], mode="overwrite")
```
I then load more documents with different `file` metadata:
`vectordb = LanceDB.from_documents(chunks[1:], embeddings, connection=table)`
Then another batch with also a different `file` metadata value
`vectordb = LanceDB.from_documents(chunks_ma, embeddings, connection=table)`
I can see they were loaded successfully and my vector db has the correct number of documents:
`print(len(db['my_docs']))`
`11`
Now I want to create a retriever that can pre-filter the data based on the `file` value. I tried this:
```python
retriever = vectordb.as_retriever(search_kwargs={"k": 6, 'filter': {'file': 'bb'}})
retrieved_docs = retriever.invoke("My query regarding something")
```
But when I check the output of the query invocation, it's still giving me documents with the wrong `file` metadata values:
`print(retrieved_docs[0].metadata['file'])`
`'cc'`
But it was supposed to query only the documents in the database matching the `file` value.
Is there something I am doing wrong, or what is the correct approach to filter values before running a retrieval query against the LanceDB vector DB using the Langchain API? I think these guidelines are missing from the documentation and would greatly help.
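In case it helps others, one workaround I can imagine is dropping down to the native lancedb API and applying an SQL-style filter there (a hypothetical sketch using the `table` created above):
```python
# Hypothetical: vector search with a metadata filter, bypassing the wrapper.
hits = (
    table.search(embeddings.embed_query("My query regarding something"))
    .where("file = 'bb'")
    .limit(6)
    .to_list()
)
for hit in hits:
    print(hit["file"], hit["text"][:80])
```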
### Idea or request for content:
_No response_ | DOC: Missing explanation of lancedb usage with filtering | https://api.github.com/repos/langchain-ai/langchain/issues/18235/comments | 1 | 2024-02-28T01:00:59Z | 2024-06-05T13:54:07Z | https://github.com/langchain-ai/langchain/issues/18235 | 2,157,877,691 | 18,235 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
import json
llm = ChatOpenAI()
prompt = ChatPromptTemplate.from_messages([
("system", "You are a helpful assistant."),
("user", "{input}")
])
output_parser = StrOutputParser()
chain = prompt | llm | output_parser
json.dumps(chain.to_json())
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-35-d8b5c0c45d51>](https://localhost:8080/#) in <cell line: 1>()
----> 1 json.dumps(chain.to_json())
3 frames
[/usr/lib/python3.10/json/encoder.py](https://localhost:8080/#) in default(self, o)
177
178 """
--> 179 raise TypeError(f'Object of type {o.__class__.__name__} '
180 f'is not JSON serializable')
181
TypeError: Object of type ChatPromptTemplate is not JSON serializable
```
### Description
I'm trying to serialize a very simple chain to JSON, but the library complains that `ChatPromptTemplate` is not serializable. Removing that node from the chain, I then get the error that `ChatOpenAI` is not JSON serializable.
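For completeness, LangChain's own serializer does produce JSON for this chain (a sketch; `dumps`/`loads` live in `langchain_core.load` in the versions below), so the gap seems to be that `to_json()` returns objects the stdlib encoder can't handle:

```python
from langchain_core.load import dumps, loads

serialized = dumps(chain)     # JSON string via LangChain's custom encoder
restored = loads(serialized)  # round-trips (needs OPENAI_API_KEY in the env for ChatOpenAI)
```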
### System Info
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.27
langchain-openai==0.0.8 | Can't Serialize Simple Chain | https://api.github.com/repos/langchain-ai/langchain/issues/18232/comments | 1 | 2024-02-27T21:54:32Z | 2024-02-28T22:49:01Z | https://github.com/langchain-ai/langchain/issues/18232 | 2,157,673,208 | 18,232 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_openai import AzureChatOpenAI
```

or, in pdb, simply run `import langchain_openai`.
### Error Message and Stack Trace (if applicable)
ImportError: cannot import name 'PydanticOutputParser' from 'langchain_core.output_parsers' (/mnt/.venv/lib/python3.8/site-packages/langchain_core/output_parsers/__init__.py)
### Description
Simply importing `langchain_openai`, based on the recommendation from https://python.langchain.com/docs/integrations/chat/azure_chat_openai,
breaks the code.
I can see that `langchain_openai` is trying to import `PydanticOutputParser` from `langchain_core.output_parsers`, but no such export exists.
I uninstalled everything and reinstalled all the packages.
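For what it's worth, a quick sanity check (plain introspection, nothing langchain-specific) confirms which `langchain_core` is actually being imported and whether it lacks the symbol:

```python
import langchain_core
import langchain_core.output_parsers as op

print(langchain_core.__version__)
print(langchain_core.__file__)  # confirm which site-packages is in use
print([name for name in dir(op) if "Pydantic" in name])  # empty in my venv
```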
### System Info
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.27
langchain-openai==0.0.8 | Can not import langchain and langchain_openai | https://api.github.com/repos/langchain-ai/langchain/issues/18228/comments | 5 | 2024-02-27T20:51:40Z | 2024-07-15T16:06:38Z | https://github.com/langchain-ai/langchain/issues/18228 | 2,157,577,357 | 18,228 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
bedrock_client = boto3_session.client("bedrock-runtime")
model = "anthropic.claude-v2:1"
model_kwargs = {  # Anthropic Claude parameters
"max_tokens_to_sample": 4096,
"temperature": 0.2,
"top_p": 1,
"top_k": 250,
"stop_sequences": [],
}
llm = Bedrock(
model_id=model,
client = bedrock_client,
model_kwargs=model_kwargs
)
prompt = ChatPromptTemplate.from_template("Tell me about {topic}. Explain your reasoning.")
output_parser = StrOutputParser()
joke_chain = (
prompt
| llm
| output_parser
)
```
### Error Message and Stack Trace (if applicable)
raise ValueError("Streaming must be set to True for async operations. ")
ValueError: Streaming must be set to True for async operations.
### Description
I have the chain above wrapped in a LangServe endpoint.
The **stream** endpoint works correctly. The **invoke** endpoint fails with a 500 Internal Server Error, with the error shown above in the logs.
But `invoke` is supposed to be a synchronous operation.
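A workaround that unblocks me (based on my assumption that LangServe serves `/invoke` through the async code path, which for `Bedrock` requires streaming) is to enable streaming on the LLM itself:

```python
llm = Bedrock(
    model_id=model,
    client=bedrock_client,
    model_kwargs=model_kwargs,
    streaming=True,  # satisfies Bedrock's "Streaming must be set to True" check on async calls
)
```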
### System Info
pip freeze
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.27
langserve==0.0.43
platform AWS Linux
python version 3.11.6 | Bedrock llm fails with invoke endpoint in LangServe | https://api.github.com/repos/langchain-ai/langchain/issues/18224/comments | 2 | 2024-02-27T19:49:47Z | 2024-06-19T16:07:33Z | https://github.com/langchain-ai/langchain/issues/18224 | 2,157,486,258 | 18,224 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
This is the code that reproduces the issue.
```
tools = [
# a series of tools
]
llm = ChatOpenAI(model='gpt-3.5-turbo-0125')
prompt = ChatPromptTemplate.from_messages([
("system", system_prompt),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad")
])
memory = ConversationBufferMemory(
return_messages=True,
memory_key="chat_history",
output_key='output',
)
# with this, get_openai_callback works (but returns a deprecation warning)
agent_good = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
# with this, get_openai_callback does NOT work properly (it prints zero used tokens)
agent_bad = create_openai_functions_agent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(
agent=agent_bad, # if I use agent_good, the cost is calculated properly
tools=tools,
verbose=False,
memory=memory,
return_intermediate_steps=True
)
with get_openai_callback() as cb:
result = agent_executor.invoke({'input': 'Hi'})
print(result)
print(cb)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
If I use `OpenAIFunctionsAgent` to create an agent, I get a deprecation warning telling me to use `create_openai_functions_agent` instead.
But as of langchain 0.1.9, if I use `create_openai_functions_agent` to create an agent, the OpenAI callback stops tracking costs properly (it returns 0 used tokens and $0.00 cost).
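One workaround that restores the counts for me (my assumption about the cause: the LCEL-based agent streams the LLM output, and streamed OpenAI responses don't include token-usage metadata for the callback to read) is to disable streaming in the executor, if your langchain version exposes `stream_runnable`:

```python
agent_executor = AgentExecutor(
    agent=agent_bad,
    tools=tools,
    verbose=False,
    memory=memory,
    return_intermediate_steps=True,
    stream_runnable=False,  # force non-streaming LLM calls so token usage is reported
)
```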
### System Info
```
python==3.10
tiktoken==0.6.0
openai==1.12.0
langchain==0.1.9
langchain-core==0.1.27
langchain-community==0.0.24
langchain-openai==0.0.8
pydantic==2.6.2
chromadb==0.4.23
``` | get_openai_callback doesn't work with create_openai_functions_agent (it returns 0 used tokens / dollars) | https://api.github.com/repos/langchain-ai/langchain/issues/18212/comments | 8 | 2024-02-27T18:09:25Z | 2024-04-03T04:13:34Z | https://github.com/langchain-ai/langchain/issues/18212 | 2,157,306,347 | 18,212 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
catalog = "samples"
database = "nyctaxi"
examples = [
{"input": "longest trip along with pickup zip and drop zip",
"query": "select trip_distance, pickup_zip, dropoff_zip from samples.nyctaxi.trips order by trip_distance desc limit 1"},
{
"input": "what is the total amount earned for trip on 2016-01-01",
"query": "SELECT SUM(fare_amount) FROM samples.nyctaxi.trips WHERE tpep_pickup_datetime >= '2016-01-01 00:00:00+00:00' AND tpep_pickup_datetime < '2016-01-02 00:00:00+00:00",
},
{
"input": "pick up date wise total amount earned and its pickup zip code and drop zip code",
"query": """SELECT tpep_pickup_datetime, SUM(fare_amount) as total_fare_amount, pickup_zip, dropoff_zip
FROM samples.nyctaxi.trips
GROUP BY tpep_pickup_datetime, pickup_zip, dropoff_zip""",
},
{
"input": "What was the average trip distance for each day during the month of January 2016?",
"query": """SELECT
date_trunc('day', tpep_pickup_datetime) as pickup_day,
avg(trip_distance) as avg_distance
FROM
samples.nyctaxi.trips
WHERE
tpep_pickup_datetime >= '2016-01-01' AND tpep_pickup_datetime < '2016-02-01'
GROUP BY
pickup_day
ORDER BY
pickup_day"""
    },
    {
        "input": "list all the trips with pickup zip as 10003",
        "query": "select * from samples.nyctaxi.trips where pickup_zip = 10003",
    },
]
prefix_string="""You are a DATABRICKS SQL expert. Given an input question, create a syntactically correct databricks SQL query to run
in which the query the table name should be prefixed database name is {database} and catalog is {catalog} ,make sure to add corrrect columns or column names of the tables with out any special characters then look at the results of the query
. Unless otherwise specificed, do not return more than {top_k} rows.\n\nHere is the relevant table info: {table_info}\n\nBelow are a number of examples of questions and their corresponding SQL queries."""
example_prompt = PromptTemplate.from_template("User input: {input}\nSQL query: {query}")
prompt = FewShotPromptTemplate(
examples=examples,
example_prompt=example_prompt,
prefix=prefix_string,
suffix="User input: {input}\nSQL query: ",
input_variables=["input", "top_k", "table_info","catalog","database"],
)
# Initialize the SQL database connection
db = SQLDatabase.from_databricks(catalog=catalog, schema=database)
tables_list= ",".join(db.get_usable_table_names())
# prompt.format(input="how many artists are there?", top_k=3, table_info=tables_list,catalog=catalog,database=database)
# Create a language model instance that interacts with the Databricks SQL database
llm = ChatDatabricks(endpoint="databricks-mixtral-8x7b-instruct", max_tokens=200)
agent = create_sql_query_chain(llm=llm, prompt=prompt, db=db)
q="What was the average trip distance for each month and year "
response=agent.invoke({"question":q ,"top_k":3,"table_info":tables_list,"catalog":catalog,"database":database})
print(response)
```
### Error Message and Stack Trace (if applicable)
When using the few-shot template for a Databricks SQL query, I've noticed that sometimes there are no errors, but other times there are. The code above and the error below show what I'm seeing; I've attempted a number of solutions, but I wasn't able to resolve it.
ERROR:
KeyError: "Input to FewShotPromptTemplate is missing variables {'database', 'catalog'}. Expected: ['catalog', 'database', 'input', 'table_info', 'top_k'] Received: ['input', 'top_k', 'table_info']"
### Description
The same `KeyError` shown above is raised intermittently, even though the prompt and all of its variables look correct:
KeyError: "Input to FewShotPromptTemplate is missing variables {'database', 'catalog'}. Expected: ['catalog', 'database', 'input', 'table_info', 'top_k'] Received: ['input', 'top_k', 'table_info']"
### System Info
langchain==0.0.344,mlflow==2.9.0,databricks-sql-connector and databricks DBR is 14.3 | Am working on Langchain with databricks ,tried few shot way of prompting to make sure queries are accurate, but facing the below error though prompt is correct | https://api.github.com/repos/langchain-ai/langchain/issues/18210/comments | 7 | 2024-02-27T17:22:28Z | 2024-08-06T09:39:48Z | https://github.com/langchain-ai/langchain/issues/18210 | 2,157,228,027 | 18,210 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Code example:
```
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
llm = HuggingFacePipeline.from_model_id(
model_id = "model/google/flan-t5-large",
task = "text2text-generation",
pipeline_kwargs={"max_new_tokens": 100}
)
from langchain.prompts import PromptTemplate
template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. Keep the answer as concise as possible.
{context}
Question: {question}
Helpful Answer:"""
QA_CHAIN_PROMPT = PromptTemplate.from_template(template)
from langchain.chains import RetrievalQA
qa_chain = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=db.as_retriever(),
chain_type_kwargs={"prompt": QA_CHAIN_PROMPT}
)
result = qa_chain({"query": question})
print(result["result"])
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/home/rigazilla/git/infinispan-vector/rag-hf/main.py", line 65, in <module>
result = qa_chain ({ "query" : question })
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 378, in __call__
return self.invoke(
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/retrieval_qa/base.py", line 144, in _call
answer = self.combine_documents_chain.run(
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 550, in run
return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 378, in __call__
return self.invoke(
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/combine_documents/base.py", line 137, in _call
output, extra_return_dict = self.combine_docs(
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/combine_documents/stuff.py", line 244, in combine_docs
return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/llm.py", line 293, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 378, in __call__
return self.invoke(
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/llm.py", line 103, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain/chains/llm.py", line 115, in generate
return self.llm.generate_prompt(
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_core/language_models/llms.py", line 568, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_core/language_models/llms.py", line 741, in generate
output = self._generate_helper(
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_core/language_models/llms.py", line 605, in _generate_helper
raise e
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_core/language_models/llms.py", line 592, in _generate_helper
self._generate(
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/langchain_community/llms/huggingface_pipeline.py", line 202, in _generate
responses = self.pipeline(
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py", line 167, in __call__
result = super().__call__(*args, **kwargs)
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1177, in __call__
outputs = list(final_iterator)
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 124, in __next__
item = next(self.iterator)
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 125, in __next__
processed = self.infer(item, **self.params)
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1102, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py", line 191, in _forward
output_ids = self.model.generate(**model_inputs, **generate_kwargs)
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib64/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/transformers/generation/utils.py", line 1350, in generate
self._validate_model_kwargs(model_kwargs.copy())
File "/home/rigazilla/git/infinispan-vector/rag-hf/.venv/lib/python3.9/site-packages/transformers/generation/utils.py", line 1167, in _validate_model_kwargs
raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['return_full_text'] (note: typos in the generate arguments will also show up in this list)
Process finished with exit code 1
```
### Description
I'm trying to run the code above, but an exception is raised.
The same code works with:
langchain==0.1.7
langchain-community==0.0.20
langchain-core==0.1.23
Maybe this line?
https://github.com/langchain-ai/langchain/blob/0d294760e742e0707a71afc7aad22e4d00b54ae5/libs/community/langchain_community/llms/huggingface_pipeline.py#L205
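For now, pinning back to the versions above (e.g. `pip install "langchain==0.1.7" "langchain-community==0.0.20" "langchain-core==0.1.23"`) avoids the error on my machine.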
### System Info
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.27
| Huggingface_pipeline passes unused 'return_full_text' argument | https://api.github.com/repos/langchain-ai/langchain/issues/18198/comments | 6 | 2024-02-27T16:24:56Z | 2024-06-12T16:08:22Z | https://github.com/langchain-ai/langchain/issues/18198 | 2,157,040,025 | 18,198 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I am having a very hard time figuring out how to properly propagate callbacks/callback handlers for Agents/AgentExecutors/Tools so they appear correctly on langsmith. https://python.langchain.com/docs/modules/agents/how_to/streaming has a small section on how to propagate callbacks, however:
1. It only mentions tools defined with `@tool`. How does it work for tools that subclass `BaseTool`?
2. How does it work/relates to "run managers"? Stepping through the library's code I found that the way that tools are called depends on whether they contain a `run_manager` parameter (sidenote: this is *very* confusing and should be made explicit and visible, people do *not* expect caller behaviour to change depending on the existence of a parameter. At least I didn't). But in these docs you use `callbacks`, not `run_manager`? Which should be used? How does it impact everything else?
### Idea or request for content:
_No response_ | DOC: expand section on how to propagate callbacks in Agents/Streaming (probably make a page entirely dedicated to this?) | https://api.github.com/repos/langchain-ai/langchain/issues/18191/comments | 1 | 2024-02-27T12:38:36Z | 2024-06-08T16:13:05Z | https://github.com/langchain-ai/langchain/issues/18191 | 2,156,485,260 | 18,191 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
https://python.langchain.com/docs/modules/agents/how_to/structured_tools is pretty puzzling - it just shows some code but there is no introduction/explanation as to when/why one should care about this?
### Idea or request for content:
_No response_ | DOC: Fix "Structured Tools" page? | https://api.github.com/repos/langchain-ai/langchain/issues/18190/comments | 1 | 2024-02-27T12:28:53Z | 2024-06-21T16:37:07Z | https://github.com/langchain-ai/langchain/issues/18190 | 2,156,466,485 | 18,190 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
This is my code:
```python
prompt = """
This prompt takes two variables like this:
variable1:
{variable_1}
variable2:
{variable_2}
"""
model_choice="gpt-4-0125-preview"
temperature=0.0
# Initialize the model and prompt configuration
prompt = ChatPromptTemplate.from_template(prompt)
output_parser = StrOutputParser()
model = ChatOpenAI(model=model_choice, temperature=temperature)
setup = RunnableParallel({"variable1":RunnablePassthrough(),"variable2":RunnablePassthrough()})
llm_output = (
setup
| prompt
| model
| output_parser
)
llm_output.invoke({"variable_1":"testing var1","variable_2":"testing var2"})
```
When the code is executed, the LangSmith trace shows the RunnableSequence input as:

```json
{
  "variable_1": "testing var1",
  "variable_2": "testing var2"
}
```
But the ChatOpenAI call shows the prompt sent as:
```text
This prompt takes two variables like this:
variable1:
{'variable_1': 'testing var1', 'variable_2': 'testing var2'}
variable2:
{'variable_1': 'testing var1', 'variable_2': 'testing var2'}
```
What am I missing? Shouldn't it be
```text
This prompt takes two variables like this:
variable1:
'testing var1'
variable2:
'testing var2'
```


### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am using LangChain to fill two variables in a prompt and produce an output. While the two variables are passed as a well-formed dict, the prompt that is ultimately sent to OpenAI substitutes the entire input dict into each placeholder.
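Two things stand out in my own snippet, so this may be wiring rather than a library bug: `RunnablePassthrough()` forwards the *entire* input dict to each branch, and the keys in my `RunnableParallel` (`variable1`/`variable2`) don't match the prompt placeholders (`variable_1`/`variable_2`). A sketch of the wiring I believe is intended:

```python
from operator import itemgetter

setup = RunnableParallel(
    {
        "variable_1": itemgetter("variable_1"),  # pick out just this key
        "variable_2": itemgetter("variable_2"),
    }
)

llm_output = setup | prompt | model | output_parser
llm_output.invoke({"variable_1": "testing var1", "variable_2": "testing var2"})
```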
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:59 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6030
> Python Version: 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:38:07) [Clang 16.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.22
> langchain: 0.1.1
> langchain_community: 0.0.19
> langsmith: 0.0.87
> langchain_openai: 0.0.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | LECL Chain doesn't process multiple inputs properly. | https://api.github.com/repos/langchain-ai/langchain/issues/18188/comments | 1 | 2024-02-27T12:14:03Z | 2024-02-27T13:36:19Z | https://github.com/langchain-ai/langchain/issues/18188 | 2,156,437,408 | 18,188 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain_community.chat_models import ChatOllama
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import Ollama
from langchain_community.utilities import SQLDatabase
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
# Define chat models
llama2_chat = ChatOllama(model="llama2:13b-chat") # Change model if required
llama2_code = ChatOllama(model="codellama:7b-instruct")
# Set model (choose one of the following options)
llm = llama2_chat # Option 1
# llm = Ollama(model="llama2:13b-chat", callback_manager=CallbackManager([StreamingStdOutCallbackHandler()])) # Option 2
# Connect to database
db = SQLDatabase.from_uri("mysql+pymysql://database_user:password@localhost/databasName")
# Define functions for schema retrieval and query execution
def get_schema(_):
return db.get_table_info()
def run_query(query):
return db.run(query)
# Create prompt templates
template1 = """
Based on the table schema below, write a SQL query that would answer the user's question:
{schema}
Question: {question}
SQL Query:
"""
prompt = ChatPromptTemplate.from_messages(
[
("system", "Given an input question, convert it to a SQL query. No pre-amble."),
("human", template1),
]
)
template2 = """
Based on the table schema below, question, sql query, and sql response, write a natural language response:
{schema}
Question: {question}
SQL Query: {query}
SQL Response: {response}
"""
prompt_response = ChatPromptTemplate.from_messages(
[
(
"system",
"Given an input question and SQL response, convert it to a natural language answer. No pre-amble.",
),
("human", template2),
]
)
# Construct chains for query generation and response
sql_response = (
RunnablePassthrough.assign(schema=get_schema)
| prompt
| llm.bind(stop=["\nSQLResult:"])
| StrOutputParser()
)
full_chain = (
RunnablePassthrough.assign(query=sql_response)
| RunnablePassthrough.assign(
schema=get_schema,
response=lambda x: db.run(x["query"]),
)
| prompt_response
| llm
)
# Invoke the full chain and print the final response
full_chain.invoke({"question": "how many total records?"})
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I expect this to return only
"There are a total of 15 records"
but instead I am getting responses like "Select count(*) from table, this will get the total number of record in the table, The total is 15 record in the table".
How can I modify the templates (or anything else) so that the actual queries are not returned or explained, and only the final answer in natural language is returned?
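One approach that has worked for me in similar setups (the prompt wording below is my own, so treat it as a sketch): make the second system message explicitly forbid echoing the SQL, and keep only the content of the final message.

```python
prompt_response = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "Given an input question and SQL response, answer in one short "
            "natural-language sentence. Do not repeat, quote, or explain the "
            "SQL query. No pre-amble.",
        ),
        ("human", template2),
    ]
)

# keep only the text of the final answer
answer = full_chain.invoke({"question": "how many total records?"})
print(answer.content)
```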
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #102~20.04.1-Ubuntu SMP Mon Jan 15 13:09:14 UTC 2024
> Python Version: 3.8.10 (default, Nov 22 2023, 10:22:35)
[GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.1.24
> langchain: 0.1.8
> langchain_community: 0.0.21
> langsmith: 0.1.3
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | sql chat convert natural language to sql then sql to natural language | https://api.github.com/repos/langchain-ai/langchain/issues/18185/comments | 1 | 2024-02-27T11:56:27Z | 2024-03-11T15:36:26Z | https://github.com/langchain-ai/langchain/issues/18185 | 2,156,401,290 | 18,185 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
requests = Requests(headers=headers, verify=False)
my_toolkit = NLAToolkit.from_llm_and_spec(
    llm,
    spec=OpenAPISpec.from_file("DLM_Lite_Gateway_openapi.json"),
    requests=requests,
    max_text_length=1800,  # If you want to truncate the response text
)
```

In `tool.py`:

```python
if TYPE_CHECKING:
    from langchain.chains.api.openapi.chain import OpenAPIEndpointChain
```
### Error Message and Stack Trace (if applicable)
File "C:\Users\suchaudn\OneDrive - Legrand France\PYTHON\langChain\natural_language.py", line 54, in <module>
my_toolkit = NLAToolkit.from_llm_and_spec(
File "C:\Users\suchaudn\OneDrive - Legrand France\PYTHON\langChain\venv\lib\site-packages\langchain_community\agent_toolkits\nla\toolkit.py", line 74, in from_llm_and_spec
http_operation_tools = cls._get_http_operation_tools(
File "C:\Users\suchaudn\OneDrive - Legrand France\PYTHON\langChain\venv\lib\site-packages\langchain_community\agent_toolkits\nla\toolkit.py", line 52, in _get_http_operation_tools
endpoint_tool = NLATool.from_llm_and_method(
File "C:\Users\suchaudn\OneDrive - Legrand France\PYTHON\langChain\venv\lib\site-packages\langchain_community\agent_toolkits\nla\tool.py", line 50, in from_llm_and_method
chain = OpenAPIEndpointChain.from_api_operation(
NameError: name 'OpenAPIEndpointChain' is not defined
```
### Description
I'm starting from the example:
https://python.langchain.com/docs/integrations/toolkits/openapi_nla
but I get a `NameError` because the `OpenAPIEndpointChain` class is only imported under `TYPE_CHECKING` in `tool.py`, so the name is never defined at runtime.
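As a temporary workaround (a hack, not a proper fix), injecting the class into the module's namespace before building the toolkit avoids the `NameError` for me:

```python
from langchain.chains.api.openapi.chain import OpenAPIEndpointChain
import langchain_community.agent_toolkits.nla.tool as nla_tool

# make the name that tool.py only imports under TYPE_CHECKING available at runtime
nla_tool.OpenAPIEndpointChain = OpenAPIEndpointChain
```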
### System Info
langchain==0.1.8
langchain-community==0.0.21
langchain-core==0.1.25
langchain-openai==0.0.6
langchainplus-sdk==0.0.21
langsmith==0.1.3
| OpenAPIEndpointChain not imported in tool.py | https://api.github.com/repos/langchain-ai/langchain/issues/18179/comments | 1 | 2024-02-27T10:23:44Z | 2024-06-08T16:13:01Z | https://github.com/langchain-ai/langchain/issues/18179 | 2,156,221,535 | 18,179 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Classes in the `BaseTracer` hierarchy take an `example_id` parameter. The docstring/API docs give only the very vague description "The example ID associated with the runs." What is this supposed to be (for)?
### Idea or request for content:
_No response_ | DOC: What are Tracers `example_id` attributes meant for? | https://api.github.com/repos/langchain-ai/langchain/issues/18177/comments | 2 | 2024-02-27T08:51:57Z | 2024-06-12T06:40:29Z | https://github.com/langchain-ai/langchain/issues/18177 | 2,156,022,582 | 18,177 |