issue_owner_repo (sequence, length 2) | issue_body (string, 0-261k chars, nullable) | issue_title (string, 1-925 chars) | issue_comments_url (string, 56-81 chars) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37-62 chars) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k)
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
```python
from langchain_core.load import dumpd, load
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
prompt = ChatPromptTemplate.from_messages([("system", "foo"), MessagesPlaceholder("bar"), ("human", "baz")])
load(dumpd(MessagesPlaceholder("bar"))) # works
load(dumpd(prompt)) # doesn't work
```
raises
```python
...
File ~/langchain/libs/core/langchain_core/load/load.py:190, in load.<locals>._load(obj)
187 if isinstance(obj, dict):
188 # Need to revive leaf nodes before reviving this node
189 loaded_obj = {k: _load(v) for k, v in obj.items()}
--> 190 return reviver(loaded_obj)
191 if isinstance(obj, list):
192 return [_load(o) for o in obj]
File ~/langchain/libs/core/langchain_core/load/load.py:78, in Reviver.__call__(self, value)
71 raise KeyError(f'Missing key "{key}" in load(secrets_map)')
73 if (
74 value.get("lc", None) == 1
75 and value.get("type", None) == "not_implemented"
76 and value.get("id", None) is not None
77 ):
---> 78 raise NotImplementedError(
79 "Trying to load an object that doesn't implement "
80 f"serialization: {value}"
81 )
83 if (
84 value.get("lc", None) == 1
85 and value.get("type", None) == "constructor"
86 and value.get("id", None) is not None
87 ):
88 [*namespace, name] = value["id"]
NotImplementedError: Trying to load an object that doesn't implement serialization: {'lc': 1, 'type': 'not_implemented', 'id': ['typing', 'List'], 'repr': 'typing.List[typing.Union[langchain_core.messages.ai.AIMessage, langchain_core.messages.human.HumanMessage, langchain_core.messages.chat.ChatMessage, langchain_core.messages.system.SystemMessage, langchain_core.messages.function.FunctionMessage, langchain_core.messages.tool.ToolMessage]]'}
``` | ChatPromptTemplate with MessagesPlaceholder ser/des broken | https://api.github.com/repos/langchain-ai/langchain/issues/24513/comments | 0 | 2024-07-22T18:45:23Z | 2024-07-22T18:47:57Z | https://github.com/langchain-ai/langchain/issues/24513 | 2,423,546,409 | 24,513 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
RunnableWithMessageHistory(
    AgentExecutor(agent=create_tool_calling_agent(llm_with_tools, self.tools, system_prompt))
).invoke(input_prompt, config={"configurable": {"session_id": session_id}})
```
### Error Message and Stack Trace (if applicable)
Invoking: describe with {'extension': 'fallback'}
### Description
We are using a set of tools, and the tool-calling agent's system prompt instructs the model to invoke only tools from the given list. One of the tools is named 'fallback'; for specific questions the model is supposed to call this fallback tool with the following format:
`Invoking: fallback with {'question': 'please answer the following question'}`
Instead, the model produces the following call and fails to respond. Does anyone know why this is happening?
`Invoking: describe with {'extension': 'fallback'}`
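For reference, a minimal sketch of how such a fallback tool could be declared so its schema and description are unambiguous to the model. The decorator-based declaration and the docstring wording are assumptions; the actual tool definitions are not shown in this report.
```python
from langchain_core.tools import tool

@tool
def fallback(question: str) -> str:
    """Use this tool for any question that none of the other tools can answer."""
    # placeholder behaviour; the real implementation is not part of this report
    return f"please answer the following question: {question}"
```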
### System Info
Vertex AI
Python: 3.10.12 | Tool calling agent invokes undefined tool: 'describe' | https://api.github.com/repos/langchain-ai/langchain/issues/24512/comments | 0 | 2024-07-22T18:16:04Z | 2024-07-22T18:18:40Z | https://github.com/langchain-ai/langchain/issues/24512 | 2,423,481,807 | 24,512 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am trying to use Langchain to query Azure SQL using Azure OpenAI. The code is based on the samples provided in GitHub - [Langchain to query Azure SQL using Azure OpenAI](https://github.com/Azure-Samples/SQL-AI-samples/blob/main/AzureSQLDatabase/LangChain/dbOpenAI.ipynb). I have already tested connectivity with Azure SQL using Langchain & it works. I also tested connectivity with Azure OpenAI using Langchain & it works as well. I am using the API version as 2023-08-01-preview as per the comment that "Azure OpenAI on your own data is only supported by the 2023-08-01-preview API version." Referred this [link](https://github.com/Azure-Samples/openai/blob/main/Basic_Samples/Chat/chat_with_your_own_data.ipynb).
After I create the SQL agent and execute the invoke method, it fails, returning an internal server error with return code 500.
```python
import os
from sqlalchemy.engine.url import URL
from langchain_community.utilities import SQLDatabase
from langchain_openai.chat_models import AzureChatOpenAI
from langchain.agents.agent_types import AgentType
from langchain_community.agent_toolkits.sql.base import create_sql_agent, SQLDatabaseToolkit
from azure.identity import EnvironmentCredential, get_bearer_token_provider
from langchain.prompts.chat import ChatPromptTemplate
# Set up SQLAlchemy connection
db_config = {
'drivername': 'mssql+pyodbc',
'username': os.getenv("SQL_SERVER_USERNAME") + '@' + os.getenv("SQL_SERVER"),
'password': os.getenv("SQL_SERVER_PASSWORD"),
'host': os.getenv("SQL_SERVER_ENDPOINT"),
'port': 1433,
'database': os.getenv("SQL_SERVER_DATABASE"),
'query': {'driver': 'ODBC Driver 18 for SQL Server'}
}
db_url = URL.create(**db_config)
db = SQLDatabase.from_uri(db_url)
# Authenticate using the Service Principal
token_provider = get_bearer_token_provider(
EnvironmentCredential(),
"https://cognitiveservices.azure.com/.default"
)
# Set up Azure OpenAI
llm = AzureChatOpenAI(deployment_name="my-deployment-name-gpt-35-turbo-1106", azure_ad_token_provider = token_provider, temperature=0, max_tokens=4000)
final_prompt = ChatPromptTemplate.from_messages(
[
("system",
"""
You are a helpful AI assistant expert in querying SQL Database to find answers to user's question about SQL tables.
"""
),
("user", "{question}\n ai: "),
]
)
# Set up SQL toolkit for LangChain Agent
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
toolkit.get_tools()
# Initialize and run the Agent
agent_executor = create_sql_agent(
llm=llm,
toolkit=toolkit,
verbose=True,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
streaming=True,
agent_executor_kwargs={'handle_parsing_errors':True},
)
agent_executor.invoke(final_prompt.format(
question="count the rows in the titanic table."))
```
### Error Message and Stack Trace (if applicable)
```
Entering new SQL Agent Executor chain...
Traceback (most recent call last):
File "test.py", line 62, in <module>
agent_executor.invoke(final_prompt.format(
File "/home/user/.local/lib/python3.8/site-packages/langchain/chains/base.py", line 166, in invoke
raise e
File "/home/user/.local/lib/python3.8/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 1636, in _call
next_step_output = self._take_next_step(
File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 1342, in _take_next_step
[
File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 1342, in
[
File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 1370, in _iter_next_step
output = self.agent.plan(
File "/home/user/.local/lib/python3.8/site-packages/langchain/agents/agent.py", line 463, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 3251, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 3238, in transform
yield from self._transform_stream_with_config(
File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 2052, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 3200, in _transform
for output in final_pipeline:
File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 1270, in transform
for ichunk in input:
File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 5262, in transform
yield from self.bound.transform(
File "/home/user/.local/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 1288, in transform
yield from self.stream(final, config, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 360, in stream
raise e
File "/home/user/.local/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 340, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "/home/user/.local/lib/python3.8/site-packages/langchain_openai/chat_models/base.py", line 489, in _stream
with self.client.create(**payload) as response:
File "/home/user/.local/lib/python3.8/site-packages/openai/_utils/_utils.py", line 277, in wrapper
return func(*args, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/openai/resources/chat/completions.py", line 643, in create
return self._post(
File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1266, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 942, in request
return self._request(
File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1031, in _request
return self._retry_request(
File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1079, in _retry_request
return self._request(
File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1031, in _request
return self._retry_request(
File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1079, in _retry_request
return self._request(
File "/home/user/.local/lib/python3.8/site-packages/openai/_base_client.py", line 1046, in _request
raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 500 - {'statusCode': 500, 'message': 'Internal server error', 'activityId': 'xxx-yyy-zzz'}
```
### Description
* I am trying to use Langchain to query Azure SQL using Azure OpenAI
* The code is based on the samples provided in GitHub - [Langchain to query Azure SQL using Azure OpenAI](https://github.com/Azure-Samples/SQL-AI-samples/blob/main/AzureSQLDatabase/LangChain/dbOpenAI.ipynb)
* Expected result is the code to return response with Action, Observation & Thought in an iterative manner
* Actual result is an error: Internal server error, 500. The complete error log can be seen in the error section above; a hedged invocation sketch is below.
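One hedged guess, not a verified fix: `create_sql_agent` expects the plain question as its `input`, so formatting the whole chat template into the invoke string may not be needed and inflates the first message sent to the model.
```python
# sketch, assuming the agent only needs the raw question as "input"
agent_executor.invoke({"input": "count the rows in the titanic table."})
```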
### System Info
## Langchain version
langchain==0.2.10
langchain-community==0.2.9
langchain-core==0.2.22
langchain-openai==0.1.16
langchain-text-splitters==0.2.2
## Platform
Windows 11
## Python version
Python 3.8.10 | Langchain SQL agent withAzure SQL & Azure OpenAI fails on invoke method returning Internal server error 500 | https://api.github.com/repos/langchain-ai/langchain/issues/24504/comments | 5 | 2024-07-22T16:39:45Z | 2024-08-10T12:38:06Z | https://github.com/langchain-ai/langchain/issues/24504 | 2,423,304,468 | 24,504 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am trying to do a simple text summarization task and return the result in JSON format using the local Llama-3 8B Instruct model (GGUF version), running on CPU only. The code is as follows:
```
from langchain.chains import LLMChain
from langchain_community.llms import LlamaCpp
from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler
from langchain_core.prompts import PromptTemplate
# Create the prompt
template = """
Read the article and return the "release date of Llama-3" in JSON format.
If the information is not mentioned, please do not return any answer.
Article: {text}
Answer:
"""
# Text for summarization (from https://en.wikipedia.org/wiki/Llama_(language_model))
text = """
Llama (acronym for Large Language Model Meta AI, and formerly stylized as LLaMA) is a family of autoregressive large language models (LLMs) released by Meta AI starting in February 2023. The latest version is Llama 3, released in April 2024.
Model weights for the first version of Llama were made available to the research community under a non-commercial license, and access was granted on a case-by-case basis. Unauthorized copies of the model were shared via BitTorrent. In response, Meta AI issued DMCA takedown requests against repositories sharing the link on GitHub. Subsequent versions of Llama were made accessible outside academia and released under licenses that permitted some commercial use. Llama models are trained at different parameter sizes, typically ranging between 7B and 70B. Originally, Llama was only available as a foundation model. Starting with Llama 2, Meta AI started releasing instruction fine-tuned versions alongside foundation models.
Alongside the release of Llama 3, Meta added virtual assistant features to Facebook and WhatsApp in select regions, and a standalone website. Both services use a Llama 3 model.
"""
# Set up and run Local Llama-3 model
prompt = PromptTemplate(template=template, input_variables=["text"])
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(model_path="model/llama/Meta-Llama-3-8B-Instruct.Q6_K.gguf",
n_ctx=2048, callback_manager=callback_manager, verbose=True)
chain = prompt | llm
chain.invoke(text)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
With this code, the model runs successfully and the output is as expected:
```
{
"release_date": "April 2024"
}
```
However, if I input more text (adding more paragraphs from the webpage (https://en.wikipedia.org/wiki/Llama_(language_model))), the output is poor and the model keeps generating text like this:
```
The release notes for LLaMA model can be found on the official website, Meta AI. Release notes are typically available after you read the answer.
LLaMA. If you cannot
it as is in. Read More
LLaMA is a "Release. Release note the "Read the article.
# Release note the "read in. Read more and more, Read the Release on "read a "a
Release in "Read the "Release
.
.
.
```
May I know if there is a solution for feeding a long text into the local Llama-3 model for summarization?
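For reference, below is a minimal sketch of one common approach (chunked summarization). The splitter choice, chunk sizes, and the reuse of `chain` and `text` from the snippet above are assumptions, and it is untested with this model.
```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# split the long article into pieces that fit the model's context window
splitter = RecursiveCharacterTextSplitter(chunk_size=1500, chunk_overlap=100)
chunks = splitter.split_text(text)

# answer the question per chunk, then ask the same question over the partial answers
partial_answers = [chain.invoke(chunk) for chunk in chunks]
final_answer = chain.invoke("\n".join(partial_answers))
print(final_answer)
```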
### System Info
langchain==0.2.10
langchain_community==0.2.9
langchain_core==0.2.22
Python version 3.10.12 | Strange output when summarizing long text using local Llama-3 model with LlamaCpp | https://api.github.com/repos/langchain-ai/langchain/issues/24490/comments | 1 | 2024-07-22T06:38:45Z | 2024-07-24T10:07:15Z | https://github.com/langchain-ai/langchain/issues/24490 | 2,422,068,301 | 24,490 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I log a LangChain agent using the `mlflow.pyfunc.PythonModel` wrapper. The context loading is defined as below (individual configurations omitted):
```python
class agentWrapper(mlflow.pyfunc.PythonModel):
# SETUP OMITTED
def _getHistory(self, session_id):
return SQLChatMessageHistory(session_id=session_id, connection_string="sqlite:///sqlite.db")
def load_context(self, context):
# 1. Configure prompt templates
self._prompt = self._build_prompt()
# 2. Configure LLM client
self._open_ai_llm = self._configureLLM(context)
# 3. Configure agent tools
self._tools = self._configure_tools(context)
# 4. Assemble the AI agent
agent = create_tool_calling_agent(
self._open_ai_llm,
self._tools,
self._prompt
)
agent_executor = AgentExecutor(
agent=agent,
tools=self._tools,
verbose=True,
max_iterations=10)
self._agent_with_chat_history = RunnableWithMessageHistory(
agent_executor,
self._getHistory,
input_messages_key="input",
history_messages_key="chat_history",
)
def predict(self, context, model_input, params):
session_id = uuid.uuid4()
if params.get('session_id'):
session_id = params['session_id']
agent_config = {
"configurable": {
"session_id": str(session_id)
}
}
raw_result = self._agent_with_chat_history.invoke({
"input" : model_input["user_query"]
}, agent_config)
unserialisable_keys = ['context', 'chat_history', 'input']
serialisable_result = {x: str(raw_result[x]) for x in raw_result if x not in unserialisable_keys}
# set return value
return serialisable_result["output"]
```
### Error Message and Stack Trace (if applicable)
```python
Error in RootListenersTracer.on_chain_end callback: ValueError('Expected str, BaseMessage, List[BaseMessage], or Tuple[BaseMessage]. Got 0 Summarise conversation history\nName: user_query, dtype: object.')
```
and then
```python
[chain:RunnableWithMessageHistory > chain:RunnableBranch] [328ms] Exiting Chain run with output:
{
"input": {
"lc": 1,
"type": "not_implemented",
"id": [
"pandas",
"core",
"series",
"Series"
],
"repr": "0 Summarise conversation history\nName: user_query, dtype: object"
},
"chat_history": [],
"output": "I'm sorry, but I don't have access to the conversation history."
}
```
### Description
When I log the agent with MlFlow and download it to the same (and two other) environments, _**the history is not being retrieved**_. I've tried SQL and `FileChatMessageHistory`, and the behaviour was the same.
I've tried moving the block with the `RunnableWithMessageHistory` initialisation to the predict function, and it didn't make any difference.
```python
_agent_with_chat_history = RunnableWithMessageHistory(
self._agent_executor,
self._getHistory,
input_messages_key="input",
history_messages_key="chat_history",
)
```
The `sqlite:///sqlite.db` file was created after I pulled the agent from the MlFlow and initialised it locally. The agent doesn't write to the history,
HOWEVER: The wrapper **works** when I test it locally via the `PythonModelContext` loader:
```python
wrapper = aiAgentWrapper()
ctx = PythonModelContext({
"embedding_model": embedding_model_tmp_path,
"vector_db": vector_db_tmp_path,
},{
'openai_deployment_name':config["open_ai_deployment_name"],
'openai_model_temperature':config["open_ai_model_temperature"],
'openai_api_version': os.environ["OPENAI_API_VERSION"]
})
input_example = {"user_query": "Summarise our conversation "}
agent_params = {
"session_id": sessionId
}
wrapper.load_context(ctx)
wrapper.predict({}, input_example, agent_params ) # <--- THIS WORKS FINE AND HISTORY IS RETRIEVED
```
```python
model_version = mlflow.pyfunc.load_model(
model.model_uri
)
input_example = {"user_query": "Summarise our conversation "}
agent_params = {
"session_id": sessionId
}
model_version.predict(input_example, params=agent_params ) # <-- this DOESNT retrieve the history
```
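A hedged guess based on the `pandas ... Series` repr in the trace: `mlflow.pyfunc.load_model` hands the wrapper a DataFrame, so `model_input["user_query"]` arrives as a Series rather than a string, which could make the invocation differ between the two paths. A small coercion helper (names are illustrative, not part of the logged model) would keep `predict` consistent:
```python
import pandas as pd

def _coerce_user_query(model_input) -> str:
    """Return the query as a plain string whether predict got a dict or a DataFrame."""
    value = model_input["user_query"]
    if isinstance(value, pd.Series):
        value = value.iloc[0]
    return str(value)
```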
### System Info
Reproduced in those environments:
- Databricks / Linux / DBR 14.3 ML LTS / python=3.10.12
- Azure ML Online Endpoint / Linux / mcr.microsoft.com/azureml/mlflow-ubuntu20.04-py38-cpu-inference:20240522.v1 / Python 3.8
- Local machine / Windows 11 / Local VENV / Python=3.10.12
Env requirements (logged with the MlFlow):
```
azure-ai-ml==1.13.0
azureml-mlflow==1.54.0
python-dotenv==1.0.1
mlflow==2.10.0 (tried with 2.14, and the result was the same)
cloudpickle==2.0.0
huggingface-hub==0.22.2
faiss-cpu==1.8.0
pandas==1.5.3
langchain==0.2.1
langchain_community==0.2.1
langchain_experimental==0.0.59
langchain_openai==0.1.8
langchain-text-splitters==0.2.0
mlflow==2.10.0
pypdf==4.2.0
sentence-transformers==2.7.0
typing-extensions==4.9.0
datasets==2.20.0
``` | RunnableWithMessageHistory doesn't work after packaging with MlFlow | https://api.github.com/repos/langchain-ai/langchain/issues/24487/comments | 0 | 2024-07-22T00:46:49Z | 2024-07-22T00:50:45Z | https://github.com/langchain-ai/langchain/issues/24487 | 2,421,689,657 | 24,487 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/custom_tools/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
https://python.langchain.com/v0.2/docs/how_to/custom_tools/ Using any of the tool code on this page leads to a TypeError. For example, using the code from https://python.langchain.com/v0.2/docs/how_to/custom_tools/#tool-decorator gives `TypeError: args_schema must be a subclass of pydantic BaseModel. Got: <class 'pydantic.v1.main.multiplySchema'>`. The same happens for the rest of the functions defined in the documentation.
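For reference, a hedged sketch of one workaround: define the args schema explicitly instead of letting the decorator infer it. Whether the schema should come from `pydantic` (v2) or `langchain_core.pydantic_v1` depends on the installed versions; the v2 import below is an assumption based on the wording of the error, not a confirmed fix.
```python
from pydantic import BaseModel, Field
from langchain_core.tools import tool

class MultiplyInput(BaseModel):
    a: int = Field(description="first integer")
    b: int = Field(description="second integer")

@tool(args_schema=MultiplyInput)
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b
```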
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/how_to/custom_tools/> | https://api.github.com/repos/langchain-ai/langchain/issues/24475/comments | 5 | 2024-07-20T19:26:25Z | 2024-07-22T14:21:21Z | https://github.com/langchain-ai/langchain/issues/24475 | 2,421,026,298 | 24,475 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
When I try to execute a custom pandas dataframe agent (https://python.langchain.com/v0.2/docs/integrations/toolkits/pandas/)
I encounter this error:
```
"name": "BadRequestError",
"message": "Error code: 400 - {'error': {'message': \"Invalid 'messages[0].content': string too long. Expected a string with maximum length 1048576, but got a string with length 1316712 instead.\", 'type': 'invalid_request_error', 'param': 'messages[0].content', 'code': 'string_above_max_length'}}"
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm expecting to run the agent.
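A hedged sketch of one way to shrink the first message, assuming the agent is built with `create_pandas_dataframe_agent` and that the oversized string comes from the dataframe preview embedded in the prompt; `llm` and `df` are placeholders for the objects used in the actual code.
```python
from langchain_experimental.agents import create_pandas_dataframe_agent

agent = create_pandas_dataframe_agent(
    llm,
    df,
    number_of_head_rows=2,  # smaller dataframe preview embedded in the prompt
    verbose=True,
)
```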
### System Info
System Information
------------------
> OS: Linux
> OS Version: #116-Ubuntu SMP Wed Apr 17 09:17:56 UTC 2024
> Python Version: 3.10.13 (main, Sep 11 2023, 13:21:10) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.2.22
> langchain: 0.1.20
> langchain_community: 0.0.38
> langsmith: 0.1.92
> langchain_chroma: 0.1.0
> langchain_experimental: 0.0.55
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.17
> langchain_qdrant: 0.1.1
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
> langgraph: 0.1.9
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | Chat with pandas df string length BadRequestError | https://api.github.com/repos/langchain-ai/langchain/issues/24473/comments | 0 | 2024-07-20T18:34:40Z | 2024-07-20T18:38:25Z | https://github.com/langchain-ai/langchain/issues/24473 | 2,421,009,811 | 24,473 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am trying to use 'gpt-4o-mini' in ChatOpenAI, code like below:
```
from langchain_openai import ChatOpenAI
OPENAI_MODEL_4oMini = "gpt-4o-mini"
chatmodel = ChatOpenAI(model=OPENAI_MODEL_4oMini, temperature=0, max_tokens=500)
```
### Error Message and Stack Trace (if applicable)
The API call succeeded, but when I review the OpenAI response:
`response_metadata={'token_usage': …, 'model_name': 'gpt-3.5-turbo-0125', }`
### Description
The OpenAI result shows the model_name as 'gpt-3.5-turbo-0125', but I passed 'gpt-4o-mini'. Why does it use GPT-3.5?
I know that if there is no 'model' parameter in ChatOpenAI it defaults to gpt-3.5-turbo, but I did pass a model. If the input model is unknown, LangChain should throw an exception instead of using a different model, which may lead to different response results.
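A small sanity check (sketch; it assumes the same `chatmodel` object from the snippet above) to separate a client-side default from what the API actually reports:
```python
print(chatmodel.model_name)  # should echo "gpt-4o-mini" if the kwarg was applied
resp = chatmodel.invoke("ping")
print(resp.response_metadata.get("model_name"))  # model name reported by the API
```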
### System Info
MacOS, langchain version: 0.2.10 | Use gpt-4o-mini ChatOpenAI, but gpt-3.5-turbo-0125 used | https://api.github.com/repos/langchain-ai/langchain/issues/24461/comments | 4 | 2024-07-20T04:18:23Z | 2024-07-24T14:17:44Z | https://github.com/langchain-ai/langchain/issues/24461 | 2,420,548,548 | 24,461 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_aws import ChatBedrock
from langchain_mistralai.chat_models import ChatMistralAI
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain.text_splitter import TokenTextSplitter
from langchain_community.document_loaders import UnstructuredURLLoader

urls = ["https://aws.amazon.com/message/061323/"]
loader = UnstructuredURLLoader(urls=urls)
raw_data = loader.load()
text_splitter = TokenTextSplitter(chunk_size=256, chunk_overlap=24)
documents = text_splitter.split_documents(raw_data)

llm = ChatBedrock(
    model_id="mistral.mistral-large-2402-v1:0",
    model_kwargs={"temperature": 0.0},
)
llm_transformer = LLMGraphTransformer(llm=llm)
graph_documents = llm_transformer.convert_to_graph_documents(documents)
graph_documents[0]
# Output (example of not working):
# GraphDocument(nodes=[], relationships=[], source=Document(metadata={'source': 'https://aws.amazon.com/......

llm2 = ChatMistralAI(model='mistral-large-latest')
llm_transformer2 = LLMGraphTransformer(llm=llm2)
graph_documents2 = llm_transformer2.convert_to_graph_documents(documents)
graph_documents2[0]
# Output (example of working):
# GraphDocument(nodes=[Node(id='Aws Lambda', type='Service'), Node(id='Northern Virginia (Us-East-1)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to build a GraphRAG application using LangChain. I get the desired output (graph documents) when using LLMGraphTransformer with an LLM object created with ChatMistralAI. But if I use an LLM object created with ChatBedrock, I do not get the desired output. The code itself does not fail, but it does not recognize entities (nodes) or relations. This means I can't use the output to create a graph database. Being able to process the data via Bedrock is an absolute must for me to proceed.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Tue May 21 16:52:24 UTC 2024
> Python Version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.2.19
> langchain: 0.2.8
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_aws: 0.1.11
> langchain_experimental: 0.0.62
> langchain_mistralai: 0.1.10
> langchain_openai: 0.1.16
> langchain_text_splitters: 0.2.2
> python -m langchain_core.sys_info
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | ChatBedrock not creating graph documents with LLMGraphTransformer | https://api.github.com/repos/langchain-ai/langchain/issues/24444/comments | 0 | 2024-07-19T14:24:28Z | 2024-07-19T15:18:03Z | https://github.com/langchain-ai/langchain/issues/24444 | 2,419,054,048 | 24,444 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import json
from typing import Any, Dict, List

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

async def inference_openai(self, user_prompt: Dict[str, str], chat_history: List[Dict[str, Any]] = []):
    jolt_prompt = ChatPromptTemplate.from_messages(
        [
            ("system", system),
            MessagesPlaceholder("chat_history"),
            ("user", prompt),
        ]
    )
    model_kwargs = {"top_p": 1.0, "presence_penalty": 0.0}
    question_answer_chain = jolt_prompt | ChatOpenAI(
        model="gpt-4o",
        max_tokens=2048,
        temperature=1.0,
        model_kwargs=model_kwargs,
    )
    ai_msg = await question_answer_chain.ainvoke({"input": str(user_prompt), "chat_history": chat_history})
    ai_msg = json.loads(ai_msg.content.replace("```json", "").replace("```", ""))
    return ai_msg
```
### Error Message and Stack Trace (if applicable)
```
Issues with no direct upgrade or patch:
✗ Server-Side Request Forgery (SSRF) [Medium Severity][https://security.snyk.io/vuln/SNYK-PYTHON-LANGCHAIN-7217837] in [email protected]
introduced by [email protected] and 1 other path(s)
No upgrade or patch available
```
### Description
During Snyk scanning it raised an SSRF finding.
<img width="1004" alt="vulnerability" src="https://github.com/user-attachments/assets/033f6100-88b0-4f4e-b43a-8be73796ab2f">
### System Info
macOS Sonoma 14.5 | Server-Side Request Forgery (SSRF) | https://api.github.com/repos/langchain-ai/langchain/issues/24442/comments | 2 | 2024-07-19T14:13:11Z | 2024-07-19T19:27:16Z | https://github.com/langchain-ai/langchain/issues/24442 | 2,419,025,178 | 24,442 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Code for example.py
from langchain.output_parsers import RetryOutputParser
from langchain_core.output_parsers.pydantic import PydanticOutputParser
from langchain_core.pydantic_v1 import BaseModel
from langchain_openai import ChatOpenAI
from langchain_core.runnables import RunnableLambda, RunnableParallel
from langchain_core.exceptions import OutputParserException
from langchain_core.prompts import (
PromptTemplate,
)
class CustomParser(PydanticOutputParser):
def parse(self, output: str) -> dict:
raise OutputParserException("Failed to parse")
@property
def _type(self) -> str:
return "custom_parser_throw_exception"
class TestModel(BaseModel):
a: int
b: str
parser = CustomParser(pydantic_object=TestModel)
model = ChatOpenAI(temperature=0)
retry_parser = RetryOutputParser.from_llm(parser=parser, llm=model.with_structured_output(TestModel), max_retries=3)
def parse_with_prompt(args):
completion = args['completion']
if (type(completion) is TestModel):
args = args.copy()
del args['completion']
completion = completion.json(ensure_ascii=False)
args['completion'] = completion
return retry_parser.parse_with_prompt(**args)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()},
)
completion_chain = prompt | model.with_structured_output(TestModel, include_raw=False)
main_chain = RunnableParallel(
completion=completion_chain, prompt_value=prompt
) | RunnableLambda(parse_with_prompt)
print(main_chain.invoke({"query": "who is leo di caprios gf?"}))
```
I created a custom parser inheriting from `PydanticOutputParser` to force it to throw an `OutputParserException`. The code wraps it with the `RetryOutputParser`.
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "C:\Projects\ENV\Lib\site-packages\langchain\output_parsers\retry.py", line 90, in parse_with_prompt
return self.parser.parse(completion)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\src\example.py", line 18, in parse
raise OutputParserException("Failed to parse")
langchain_core.exceptions.OutputParserException: Failed to parse
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Projects\src\example.py", line 59, in <module>
print(main_chain.invoke({"query": "who is leo di caprios gf?"}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\base.py", line 2824, in invoke
input = step.invoke(input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\base.py", line 4387, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\base.py", line 1734, in _call_with_config
context.run(
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\config.py", line 379, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\base.py", line 4243, in _invoke
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\config.py", line 379, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\src\example.py", line 44, in parse_with_prompt
return retry_parser.parse_with_prompt(**args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain\output_parsers\retry.py", line 103, in parse_with_prompt
completion = self.retry_chain.invoke(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\base.py", line 2822, in invoke
input = step.invoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\prompts\base.py", line 179, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\base.py", line 1734, in _call_with_config
context.run(
File "C:\Projects\ENV\Lib\site-packages\langchain_core\runnables\config.py", line 379, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\prompts\base.py", line 153, in _format_prompt_with_error_handling
_inner_input = self._validate_input(inner_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Projects\ENV\Lib\site-packages\langchain_core\prompts\base.py", line 145, in _validate_input
raise KeyError(
KeyError: "Input to PromptTemplate is missing variables {'completion'}. Expected: ['completion', 'prompt'] Received: ['prompt', 'input']"
```
### Description
The `RetryOutputParser` is apparently a bit buggy, and it's already demanding some custom workarounds to work appropriately with Pydantic data (cf. [this issue](https://github.com/langchain-ai/langchain/issues/19145), from which I adapted the workaround code).
However, the bug I'm flagging here is a wrongly named prompt variable in the code.
What I expect: Since the parser throws the exception, I expect that the Retry Parser calls the LLM again with the prompt and the error message to perform the retry.
What is happening: It throws an error `KeyError: "Input to PromptTemplate is missing variables {'completion'}. Expected: ['completion', 'prompt'] Received: ['prompt', 'input']"`
Looking at the source code for the `RetryOutputParser` it's possible to see that indeed it's passing the completion value labeled with input.
```python
class RetryOutputParser(BaseOutputParser[T]):
#[...]
def parse_with_prompt(self, completion: str, prompt_value: PromptValue) -> T:
"""Parse the output of an LLM call using a wrapped parser.
Args:
completion: The chain completion to parse.
prompt_value: The prompt to use to parse the completion.
Returns:
The parsed completion.
"""
retries = 0
while retries <= self.max_retries:
try:
return self.parser.parse(completion)
except OutputParserException as e:
if retries == self.max_retries:
raise e
else:
retries += 1
if self.legacy and hasattr(self.retry_chain, "run"):
completion = self.retry_chain.run(
prompt=prompt_value.to_string(),
completion=completion,
error=repr(e),
)
else:
completion = self.retry_chain.invoke(
dict(
prompt=prompt_value.to_string(),
input=completion, # <<<<<--------- WRONG NAME
)
)
raise OutputParserException("Failed to parse")
#[...]
```
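For completeness, a hedged sketch of a local workaround (an assumption, not an official fix): subclass `RetryOutputParser` so the completion is passed under the `completion` key that the retry prompt expects.
```python
from langchain.output_parsers import RetryOutputParser
from langchain_core.exceptions import OutputParserException
from langchain_core.prompt_values import PromptValue


class PatchedRetryOutputParser(RetryOutputParser):
    def parse_with_prompt(self, completion: str, prompt_value: PromptValue):
        retries = 0
        while retries <= self.max_retries:
            try:
                return self.parser.parse(completion)
            except OutputParserException as exc:
                if retries == self.max_retries:
                    raise exc
                retries += 1
                completion = self.retry_chain.invoke(
                    # pass the value as "completion", not "input"
                    dict(prompt=prompt_value.to_string(), completion=completion)
                )
        raise OutputParserException("Failed to parse")
```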
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22621
> Python Version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.21
> langchain: 0.2.9
> langchain_community: 0.2.5
> langsmith: 0.1.90
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.1
> langchainhub: 0.1.20 | Wrong prompt variable name in the RetryOutputParser class. "innput" should be replaced by "completion" | https://api.github.com/repos/langchain-ai/langchain/issues/24440/comments | 3 | 2024-07-19T13:31:14Z | 2024-07-19T16:00:30Z | https://github.com/langchain-ai/langchain/issues/24440 | 2,418,933,473 | 24,440 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
#! /usr/bin/env python3
from langchain_community.document_loaders import PyPDFLoader
from pypdf.errors import EmptyFileError, PdfReadError, PdfStreamError
import sys
def TestOneInput(fname):
try:
loader = PyPDFLoader(fname)
loader.load_and_split()
except (EmptyFileError, PdfReadError, PdfStreamError):
pass
if __name__ == "__main__":
if len(sys.argv) < 2:
exit(1)
TestOneInput(sys.argv[1])
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/fuzz/./reproducer.py", line 19, in <module>
TestOneInput(sys.argv[1])
File "/fuzz/./reproducer.py", line 12, in TestOneInput
loader.load_and_split()
File "/usr/local/lib/python3.9/dist-packages/langchain_core/document_loaders/base.py", line 63, in load_and_split
docs = self.load()
File "/usr/local/lib/python3.9/dist-packages/langchain_core/document_loaders/base.py", line 29, in load
return list(self.lazy_load())
File "/usr/local/lib/python3.9/dist-packages/langchain_community/document_loaders/pdf.py", line 193, in lazy_load
yield from self.parser.parse(blob)
File "/usr/local/lib/python3.9/dist-packages/langchain_core/document_loaders/base.py", line 125, in parse
return list(self.lazy_parse(blob))
File "/usr/local/lib/python3.9/dist-packages/langchain_community/document_loaders/parsers/pdf.py", line 102, in lazy_parse
yield from [
File "/usr/local/lib/python3.9/dist-packages/langchain_community/document_loaders/parsers/pdf.py", line 102, in <listcomp>
yield from [
File "/usr/local/lib/python3.9/dist-packages/pypdf/_page.py", line 2277, in __iter__
for i in range(len(self)):
File "/usr/local/lib/python3.9/dist-packages/pypdf/_page.py", line 2208, in __len__
return self.length_function()
File "/usr/local/lib/python3.9/dist-packages/pypdf/_doc_common.py", line 353, in get_num_pages
self._flatten()
File "/usr/local/lib/python3.9/dist-packages/pypdf/_doc_common.py", line 1122, in _flatten
self._flatten(obj, inherit, **addt)
File "/usr/local/lib/python3.9/dist-packages/pypdf/_doc_common.py", line 1122, in _flatten
self._flatten(obj, inherit, **addt)
File "/usr/local/lib/python3.9/dist-packages/pypdf/_doc_common.py", line 1122, in _flatten
self._flatten(obj, inherit, **addt)
[Previous line repeated 980 more times]
File "/usr/local/lib/python3.9/dist-packages/pypdf/_doc_common.py", line 1119, in _flatten
obj = page.get_object()
File "/usr/local/lib/python3.9/dist-packages/pypdf/generic/_base.py", line 284, in get_object
return self.pdf.get_object(self)
File "/usr/local/lib/python3.9/dist-packages/pypdf/_reader.py", line 351, in get_object
retval = self.cache_get_indirect_object(
File "/usr/local/lib/python3.9/dist-packages/pypdf/_reader.py", line 512, in cache_get_indirect_object
return self.resolved_objects.get((generation, idnum))
RecursionError: maximum recursion depth exceeded in comparison
```
### Description
Hi!
I've been fuzzing PyPDFLoader with [sydr-fuzz](https://github.com/ispras/oss-sydr-fuzz) and found a few errors that occur when using the load_and_split method. One of them is shown here. Maybe issue #22892 is similar. The question is: should the user handle errors from the pypdf library, or is this a bug in langchain/pypdf?
### PoC:
[crash-b26d05712a29b241ac6f9dc7fff57428ba2d1a04.pdf](https://github.com/user-attachments/files/16311638/crash-b26d05712a29b241ac6f9dc7fff57428ba2d1a04.pdf)
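A hedged sketch of how the harness could tolerate this case in the meantime; treating `RecursionError` like the other pypdf parse failures is an assumption about acceptable behaviour, not a statement that this is the right fix.
```python
from langchain_community.document_loaders import PyPDFLoader
from pypdf.errors import EmptyFileError, PdfReadError, PdfStreamError

def try_load(fname: str) -> None:
    try:
        PyPDFLoader(fname).load_and_split()
    except (EmptyFileError, PdfReadError, PdfStreamError, RecursionError):
        pass  # swallow malformed-page-tree recursion like any other parse failure
```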
### System Info
System Information
------------------
> OS: Linux
> OS Version: #62~20.04.1-Ubuntu SMP Tue Nov 22 21:24:20 UTC 2022
> Python Version: 3.9.5 (default, Nov 23 2021, 15:27:38)
[GCC 9.3.0]
Package Information
-------------------
> langchain_core: 0.2.11
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.83
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
commit 27aa4d38bf93f3eef7c46f65cc0d0ef3681137eb
pypdf==4.2.0 | Using PyPDFLoader causes a crash | https://api.github.com/repos/langchain-ai/langchain/issues/24439/comments | 5 | 2024-07-19T12:27:13Z | 2024-07-22T00:20:50Z | https://github.com/langchain-ai/langchain/issues/24439 | 2,418,769,393 | 24,439 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
prompt = PromptTemplate(
    template="""<|begin_of_text|><|start_header_id|>system<|end_header_id|> You are an assistant for question-answering tasks.
    Use the following pieces of retrieved context to answer the question and give response from the context given to you as truthfully as you can.
    Do not add anything from you and If you don't know the answer, just say that you don't know.
    <|eot_id|>
    <|start_header_id|>user<|end_header_id|>
    Question: {question}
    Context: {context}
    Chat History: {chat_history}
    Answer: <|eot_id|><|start_header_id|>assistant<|end_header_id|>""",
    input_variables=["question", "context", "chat_history"],
)

global memory
memory = ConversationBufferWindowMemory(
    k=4, memory_key="chat_history", return_messages=True, output_key="answer"
)

# LLM served through the Hugging Face Inference API
llm = HuggingFaceHub(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",
    huggingfacehub_api_token=api_key,
    model_kwargs={"temperature": 0.1, "max_length": 300, "max_new_tokens": 300},
)

compressor = CohereRerank()
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever3
)

global chain_with_memory
# Create the custom chain
chain_with_memory = ConversationalRetrievalChain.from_llm(
    llm=llm,
    memory=memory,
    retriever=compression_retriever,
    combine_docs_chain_kwargs={"prompt": prompt},
    return_source_documents=True,
)
```
### Error Message and Stack Trace (if applicable)
llm_reponse before guardrails {'question': 'how many F grade a student can have in bachelor', 'chat_history': [], 'answer': "<|begin_of_text|><|start_header_id|>system<|end_header_id|> You are an assistant for question-answering tasks.\n Use the following pieces of retrieved context to answer the question and give response from the context given to you as truthfully as you can.\n Do not add anything from you and If you don't know the answer, just say that you don't know.\n <|eot_id|>\n <|start_header_id|>user<|end_header_id|>\n Question: how many F grade a student can have in bachelor\n Context:
### Description
I am building a RAG pipeline and it was working fine in my local environment, but when I deploy it on a server the prompt template is appended at the start of my LLM response. When I compare my local and server environments, the only difference is that the server runs langchain 0.2.9 (with langchain-community) while my local setup runs langchain 0.2.6. Has anyone faced the same issue or found a solution?
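A hedged guess at a mitigation (whether the endpoint honours it is an assumption): the Hugging Face text-generation API echoes the prompt unless `return_full_text` is disabled, which would explain the template showing up at the start of the answer in one environment and not the other. This reuses `HuggingFaceHub` and `api_key` from the snippet above.
```python
llm = HuggingFaceHub(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",
    huggingfacehub_api_token=api_key,
    model_kwargs={
        "temperature": 0.1,
        "max_new_tokens": 300,
        "return_full_text": False,  # ask the endpoint not to echo the prompt
    },
)
```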
### System Info
langchain==0.2.9
langchain-cohere==0.1.9
langchain-community==0.2.7
langchain-core==0.2.21
langchain-experimental==0.0.62
langchain-text-splitters==0.2.2 | complete prompt is appended at the start of my response generated by llama3 | https://api.github.com/repos/langchain-ai/langchain/issues/24437/comments | 2 | 2024-07-19T11:04:58Z | 2024-08-08T18:13:53Z | https://github.com/langchain-ai/langchain/issues/24437 | 2,418,635,380 | 24,437 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
async def get_response(collection_name, user_input):
rag_chain, retriever = await get_rag_chain(embeddings=EMBEDDINGS_MODEL, collection_name=collection_name)
response = await rag_chain.ainvoke(user_input)
response = response.content
return response
async def process_user_question(update: Update, context: CallbackContext) -> int:
user_input = update.message.text
user_id = update.effective_user.id
if user_input == "Назад":
return await show_main_menu(update, context)
await update.message.reply_text("Зачекайте, будь ласка, аналізую чинне законодавство..."
"Підготовка відповіді може тривати кілька хвилин")
collection_name = context.user_data.get('collection_name', 'default_collection')
print(collection_name)
response = await get_response(collection_name=collection_name, user_input=user_input)
log_conversation(user_id=user_id, user_input=user_input, response=response)
await update.message.reply_text(
response + "\n\nВи можете задати ще одне питання або вибрати 'Назад', щоб повернутися до головного меню.",
reply_markup=ReplyKeyboardMarkup([["Назад"]], one_time_keyboard=False))
return USER_QUESTION
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
This code works, but it is not asynchronous.
The single step that takes a long time in every execution is this one:
`response = await get_response(collection_name=collection_name, user_input=user_input)`
It blocks the system for all other users, so `ainvoke` must not be working as expected.
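A hedged sketch of one way to confirm or work around this, assuming some step of the chain only implements the sync code path and therefore does blocking work on the event loop even under `ainvoke`. It reuses `get_rag_chain` and `EMBEDDINGS_MODEL` from the code above.
```python
import asyncio

async def get_response_nonblocking(collection_name, user_input):
    rag_chain, retriever = await get_rag_chain(
        embeddings=EMBEDDINGS_MODEL, collection_name=collection_name
    )
    # push the potentially blocking call onto a worker thread
    response = await asyncio.to_thread(rag_chain.invoke, user_input)
    return response.content
```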
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.12.2 (tags/v3.12.2:6abddd9, Feb 6 2024, 21:26:36) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.1
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.49
> langchain_google_genai: 1.0.5
> langchain_google_vertexai: 1.0.4
> langchain_openai: 0.1.7
> langchain_text_splitters: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| ainvoke is not asynchronous | https://api.github.com/repos/langchain-ai/langchain/issues/24433/comments | 8 | 2024-07-19T09:09:50Z | 2024-07-27T19:08:25Z | https://github.com/langchain-ai/langchain/issues/24433 | 2,418,430,359 | 24,433 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_huggingface import ChatHuggingFace
from langchain_huggingface import HuggingFacePipeline
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain.pydantic_v1 import BaseModel, Field
class Calculator(BaseModel):
"""Multiply two integers together."""
a: int = Field(..., description="First integer")
b: int = Field(..., description="Second integer")
tools = [Calculator]
llm = HuggingFacePipeline.from_model_id(
model_id="microsoft/Phi-3-mini-4k-instruct",
task="text-generation",
device_map="auto",
pipeline_kwargs={
"max_new_tokens": 1024,
"do_sample": False,
"repetition_penalty": 1.03,
}
)
chat_model = ChatHuggingFace(llm=llm)
print(chat_model.invoke("How much is 3 multiplied by 12?"))
```
### Error Message and Stack Trace (if applicable)
Here is the output:
```
content='<|user|>\nHow much is 3 multiplied by 12?<|end|>\n<|assistant|>\n To find the product of 3 and 12, you simply multiply the two numbers together:\n\n3 × 12 = 36\n\nSo, 3 multiplied by 12 equals 36.' id='run-9270dbaa-9edd-4ca4-bb33-3dec0de34957-0'
```
### Description
Hello, according to the [documentation](https://python.langchain.com/v0.2/docs/integrations/chat/) ChatHuggingFace supports tool-calling. However, when I run the example from the documentation, it returns the LLM output rather than a function call.
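For reference, a sketch of the tool-binding step the documentation implies; whether `ChatHuggingFace` actually emits a tool call for this model is exactly what is in question here, so the expected output is an assumption.
```python
llm_with_tools = chat_model.bind_tools(tools)
ai_msg = llm_with_tools.invoke("How much is 3 multiplied by 12?")
print(ai_msg.tool_calls)  # expected to contain a Calculator call if tool-calling works
```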
### System Info
langchain==0.2.9
langchain-community==0.2.7
langchain-core==0.2.21
langchain-huggingface==0.0.3
langchain-text-splitters==0.2.2
Ubuntu 22.04.3 LTS
Python 3.10.12 | Huggingface tool-calling is not working | https://api.github.com/repos/langchain-ai/langchain/issues/24430/comments | 1 | 2024-07-19T07:49:51Z | 2024-07-19T20:06:00Z | https://github.com/langchain-ai/langchain/issues/24430 | 2,418,291,232 | 24,430 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
my code is proprietary
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/home/username/.pycharm_helpers/pydev/pydevd.py", line 1551, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/username/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/username/code/ai/myproject/examples/llm_rule_translation_and_creation.py", line 20, in <module>
sigma_agent_executor = create_sigma_agent(sigma_vectorstore=sigma_llm.sigmadb)
File "/home/username/code/ai/myproject/myproject/llm/toolkits/base.py", line 63, in create_sigma_agent
llm_with_tools = agent_llm.bind(functions=[convert_to_openai_function(t) for t in tools])
File "/home/username/code/ai/myproject/myproject/llm/toolkits/base.py", line 63, in <listcomp>
llm_with_tools = agent_llm.bind(functions=[convert_to_openai_function(t) for t in tools])
File "/home/username/.cache/pypoetry/virtualenvs/myproject-ItWCGl7B-py3.10/lib/python3.10/site-packages/langchain_core/utils/function_calling.py", line 278, in convert_to_openai_function
return cast(Dict, format_tool_to_openai_function(function))
File "/home/username/.cache/pypoetry/virtualenvs/myproject-ItWCGl7B-py3.10/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 168, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/username/.cache/pypoetry/virtualenvs/myproject-ItWCGl7B-py3.10/lib/python3.10/site-packages/langchain_core/utils/function_calling.py", line 199, in format_tool_to_openai_function
if tool.tool_call_schema:
File "/home/username/.cache/pypoetry/virtualenvs/myproject-ItWCGl7B-py3.10/lib/python3.10/site-packages/langchain_core/tools.py", line 428, in tool_call_schema
return _create_subset_model(
File "/home/username/.cache/pypoetry/virtualenvs/myproject-ItWCGl7B-py3.10/lib/python3.10/site-packages/langchain_core/tools.py", line 129, in _create_subset_model
if field.required and not field.allow_none
AttributeError: 'FieldInfo' object has no attribute 'required'. Did you mean: 'is_required'?
```
### Description
I started seeing an AttributeError after upgrading to Pydantic v2.0 while using the latest version of LangChain. The error message is:
```
AttributeError: 'FieldInfo' object has no attribute 'required'. Did you mean: 'is_required'?
```
This issue seems related to the recent Pydantic upgrade. See the trace for more information. Downgrading Pydantic resolves the issue.
### System Info
LangChain Version: Latest
Pydantic Version: 2.20.1
Python Version: 3.10.12
Operating System: Windows with WSL Ubuntu
poetry | It Seems There's a Compatibility Issue with Pydantic v2.0: FieldInfo object has no attribute 'required' | https://api.github.com/repos/langchain-ai/langchain/issues/24427/comments | 7 | 2024-07-19T04:48:04Z | 2024-08-01T17:00:04Z | https://github.com/langchain-ai/langchain/issues/24427 | 2,417,897,185 | 24,427 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_community.document_loaders import UnstructuredMarkdownLoader
from langchain_core.documents import Document
loader = UnstructuredMarkdownLoader("./a.md")
documents = loader.load()
```
### Error Message and Stack Trace (if applicable)
C:\src\myproj>python testExample1.py
C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\llms\__init__.py:549: LangChainDeprecationWarning: Importing LLMs from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:
`from langchain_community.llms import OpenAI`.
To install langchain-community run `pip install -U langchain-community`.
warnings.warn(
C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\_api\deprecation.py:139: LangChainDeprecationWarning: The class `AzureOpenAI` was deprecated in LangChain 0.0.10 and will be removed in 0.3.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run `pip install -U langchain-openai` and import as `from langchain_openai import AzureOpenAI`.
warn_deprecated(
Traceback (most recent call last):
File "C:\src\myproj\testExample1.py", line 56, in <module>
documents += loader.load()
^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\document_loaders\base.py", line 30, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_community\document_loaders\unstructured.py", line 89, in lazy_load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_community\document_loaders\email.py", line 68, in _get_elements
return partition_email(filename=self.file_path, **self.unstructured_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\documents\elements.py", line 593, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\file_utils\filetype.py", line 626, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\file_utils\filetype.py", line 582, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\chunking\dispatch.py", line 74, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\partition\email.py", line 427, in partition_email
elements = partition_html(
^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\documents\elements.py", line 593, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\file_utils\filetype.py", line 626, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\file_utils\filetype.py", line 582, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\chunking\dispatch.py", line 74, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\partition\html\partition.py", line 107, in partition_html
document.elements,
^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\utils.py", line 187, in __get__
value = self._fget(obj)
^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\documents\html.py", line 76, in elements
return list(iter_elements())
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\documents\html.py", line 71, in iter_elements
for e in self._iter_elements(self._main):
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\documents\html.py", line 145, in _iter_elements
yield from self._process_text_tag(tag_elem)
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\documents\html.py", line 450, in _process_text_tag
element = self._parse_tag(tag_elem, include_tail_text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\documents\html.py", line 409, in _parse_tag
ElementCls = self._classify_text(text, tag_elem.tag)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\documents\html.py", line 94, in _classify_text
if tag not in HEADING_TAGS and is_possible_narrative_text(text):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\partition\text_type.py", line 80, in is_possible_narrative_text
if exceeds_cap_ratio(text, threshold=cap_threshold):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\partition\text_type.py", line 276, in exceeds_cap_ratio
if sentence_count(text, 3) > 1:
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\partition\text_type.py", line 225, in sentence_count
sentences = sent_tokenize(text)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\nlp\tokenize.py", line 136, in sent_tokenize
_download_nltk_packages_if_not_present()
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\nlp\tokenize.py", line 130, in _download_nltk_packages_if_not_present
download_nltk_packages()
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\site-packages\unstructured\nlp\tokenize.py", line 88, in download_nltk_packages
urllib.request.urlretrieve(NLTK_DATA_URL, tgz_file)
File "C:\Users\feisong\AppData\Local\Programs\Python\Python312\Lib\urllib\request.py", line 250, in urlretrieve
tfp = open(filename, 'wb')
^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\feisong\\AppData\\Local\\Temp\\tmpildcyt_d'
### Description
I am trying to use LangChain to load `.md` and `.eml` files with `UnstructuredMarkdownLoader` and `UnstructuredEmailLoader`, but both raise the exception above.
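A possible workaround (not verified on my side, just based on the trace): pre-download the NLTK packages that unstructured needs, so its `download_nltk_packages()` code path, which writes to the temp directory, is never reached.

```python
import nltk

# Fetch the tokenizer/tagger data once, into the default NLTK data directory.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

from langchain_community.document_loaders import UnstructuredMarkdownLoader

documents = UnstructuredMarkdownLoader("./a.md").load()
```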
### System Info
langchain==0.2.8
langchain-cli==0.0.25
langchain-community==0.2.7
langchain-core==0.2.19
langchain-openai==0.1.16
langchain-text-splitters==0.2.2
Windows
Python 3.10.2 | Several unstructed loader throwing PermissionError: [Errno 13] ( unstructuredMarkdownloader , unstructruedEmailLoader .. ) | https://api.github.com/repos/langchain-ai/langchain/issues/24413/comments | 0 | 2024-07-18T19:43:34Z | 2024-07-18T20:01:42Z | https://github.com/langchain-ai/langchain/issues/24413 | 2,417,245,683 | 24,413 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.vectorstores import Neo4jVector
from langchain_huggingface import HuggingFaceEmbeddings
embeddings = HuggingFaceEmbeddings(
model_name="sentence-transformers/all-mpnet-base-v2"
)
self.existing_graph_parts = Neo4jVector.from_existing_graph(
embedding=embeddings,
url=uri,
username=username,
password=password,
node_label="part",
text_node_properties=["name"],
embedding_node_property="embedding",
)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "D:\graph_rag.py", line 133, in <module>
graph_rag = GraphRag()
^^^^^^^^^^
File "D:\graph_rag.py", line 44, in __init__
self.existing_graph_parts = Neo4jVector.from_existing_graph(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\syh\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_community\vectorstores\neo4j_vector.py", line 1431, in from_existing_graph
text_embeddings = embedding.embed_documents([el["text"] for el in data])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\syh\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_huggingface\embeddings\huggingface.py", line 87, in embed_documents
embeddings = self.client.encode(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\syh\AppData\Local\Programs\Python\Python312\Lib\site-packages\sentence_transformers\SentenceTransformer.py", line 565, in encode
if all_embeddings[0].dtype == torch.bfloat16:
~~~~~~~~~~~~~~^^^
IndexError: list index out of range
```
### Description
Sorry for my poor English!
The first time I run the code, it works well.
But when I rerun it, it fails with the error above.
I think it fails because every node already has its embedding, so when the library code below runs:
file: langchain_community\vectorstores\neo4j_vector.py
from line 1421
```python
while True:
fetch_query = (
f"MATCH (n:`{node_label}`) "
f"WHERE n.{embedding_node_property} IS null "
"AND any(k in $props WHERE n[k] IS NOT null) "
f"RETURN elementId(n) AS id, reduce(str='',"
"k IN $props | str + '\\n' + k + ':' + coalesce(n[k], '')) AS text "
"LIMIT 1000"
)
data = store.query(fetch_query, params={"props": text_node_properties})
text_embeddings = embedding.embed_documents([el["text"] for el in data])
```
This code fetches the nodes that don't have `embedding_node_property` yet. Since all nodes in my Neo4j database already have embeddings, `data` is an empty list.
Then, in the following code, index 0 is out of range for that empty list.
file: sentence_transformers\SentenceTransformer.py
from line 563
```python
elif convert_to_numpy:
if not isinstance(all_embeddings, np.ndarray):
if all_embeddings[0].dtype == torch.bfloat16:
all_embeddings = np.asarray([emb.float().numpy() for emb in all_embeddings])
else:
all_embeddings = np.asarray([emb.numpy() for emb in all_embeddings])
```
That's where the error happened.
I already got an answer from the bot, but I still think this is a bug that needs to be fixed! A sketch of the guard I have in mind is below.
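This is only an assumption about the fix, not a tested patch: stop once the query returns no nodes that still need an embedding, instead of calling `embed_documents` on an empty list.

```python
while True:
    data = store.query(fetch_query, params={"props": text_node_properties})
    if not data:
        # every matching node already has embedding_node_property set
        break
    text_embeddings = embedding.embed_documents([el["text"] for el in data])
    ...
```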
Thanks!
### System Info
langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.10
langchain-huggingface==0.0.3
langchain-openai==0.1.10
langchain-text-splitters==0.2.2
windows 11
python3.12 | Neo4jVector doesn't work well with HuggingFaceEmbeddings when reusing the graph | https://api.github.com/repos/langchain-ai/langchain/issues/24401/comments | 7 | 2024-07-18T14:32:34Z | 2024-08-10T22:56:05Z | https://github.com/langchain-ai/langchain/issues/24401 | 2,416,594,786 | 24,401 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_experimental.sql import SQLDatabaseChain
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI, OpenAI
OPENAI_API_KEY = "XXXXXX"
llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)
sql_uri = "sqlite:///phealth.db"
db = SQLDatabase.from_uri(sql_uri, include_tables=['nutrition','exercise','medication'],sample_rows_in_table_info=2)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
def retrieve_from_db(query: str) -> str:
db_context = db_chain(query)
db_context = db_context['result'].strip()
return db_context
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
From SQLDatabaseChain's output I can see the following generated query and results:
```
> Entering new SQLDatabaseChain chain...
what medications i've taken today? for user 1
SQLQuery:SELECT name, dosage, dosage_unit, administration, reason, prescription, frequency, indications, interactions
FROM medication
WHERE user_id = 1 AND date(create_time) = date('now')
LIMIT 5;
SQLResult:
Answer:Ibuprofeno, 200 mg, Oral, Pain relief, fever reduction, 0, Every 4 to 6 hours, Headache, dental pain, menstrual cramps, muscle aches, or arthritis, May interact with blood thinners, blood pressure medications, and other NSAIDs
Aspirina, 325 mg, Oral, Pain relief, fever reduction, blood thinning, 0, Every 4 to 6 hours, Headache, muscle pain, arthritis, prevention of heart attacks or stroke, May interact with blood thinners, NSAIDs, and certain antidepressants
> Finished chain.
Ibuprofeno, 200 mg, Oral, Pain relief, fever reduction, 0, Every 4 to 6 hours, Headache, dental pain, menstrual cramps, muscle aches, or arthritis, May interact with blood thinners, blood pressure medications, and other NSAIDs
Aspirina, 325 mg, Oral, Pain relief, fever reduction, blood thinning, 0, Every 4 to 6 hours, Headache, muscle pain, arthritis, prevention of heart attacks or stroke, May interact with blood thinners, NSAIDs, and certain antidepressants
```
But when running the code directly on the database (SQLite) I get no results, which is correct since no records should match:
```
sqlite> SELECT name, dosage, dosage_unit, administration, reason, prescription, frequency, indications, interactions FROM medication WHERE user_id = 1 AND date(create_time) = date('now') LIMIT 5;
sqlite>
sqlite> SELECT name, date(create_time), date('now') from medication ;
Ibuprofeno|2024-07-17|2024-07-18
Aspirina|2024-07-17|2024-07-18
Normorytmin|2024-07-17|2024-07-18
Corvis|2024-07-17|2024-07-18
Duodart|2024-07-17|2024-07-18
Normorytmin|2024-07-17|2024-07-18
Corvis|2024-07-17|2024-07-18
Normorytmin|2024-07-17|2024-07-18
Corvis|2024-07-17|2024-07-18
Duodart|2024-07-17|2024-07-18
Normorytmin|2024-07-17|2024-07-18
Corvis|2024-07-17|2024-07-18
Duodart|2024-07-17|2024-07-18
Rosuvast|2024-07-17|2024-07-18
```
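To see exactly what SQL the chain ran and what raw result it got back before answering, this sketch (reusing the `llm` and `db` objects from above) should expose the intermediate steps:

```python
db_chain = SQLDatabaseChain.from_llm(
    llm, db, verbose=True, return_intermediate_steps=True
)
result = db_chain("what medications i've taken today? for user 1")
for step in result["intermediate_steps"]:
    print(step)  # includes the generated SQL and the raw SQLResult string
```

The verbose output above already suggests the SQLResult the model saw was empty while the final answer still lists rows; the intermediate steps make that easier to confirm.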
### System Info
langchain==0.2.7
langchain-cli==0.0.25
langchain-community==0.2.6
langchain-core==0.2.21
langchain-experimental==0.0.62
langchain-openai==0.1.17
langchain-text-splitters==0.2.2
| SQLDatabaseChain generated query returns incorrect result, and different from when the query is executed directly on the db | https://api.github.com/repos/langchain-ai/langchain/issues/24399/comments | 0 | 2024-07-18T14:16:13Z | 2024-07-18T14:18:58Z | https://github.com/langchain-ai/langchain/issues/24399 | 2,416,520,776 | 24,399 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/few_shot_examples_chat/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I have rebuilt the [example](https://python.langchain.com/v0.2/docs/how_to/few_shot_examples_chat/) in the documentation.
Unfortunately, I get a ValidationError. I am not the only one with this problem, so I assume that something is wrong in the documentation or Langchain.
```python
from langchain_community.chat_models import ChatOllama

model = ChatOllama(model="llama3")

from langchain_core.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate

examples = [{"input": "2 🦜 2", "output": "4"}, {"input": "2 🦜 3", "output": "5"}]

example_prompt = ChatPromptTemplate.from_messages([("human", "{input}"), ("ai", "{output}")])

few_shot_prompt = FewShotChatMessagePromptTemplate(example_prompt=example_prompt, examples=examples)

print(few_shot_prompt.invoke({}).to_messages())
```
--------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
Cell In[4], line 8
      1 # This is a prompt template used to format each individual example.
      2 example_prompt = ChatPromptTemplate.from_messages(
      3     [
      4         ("human", "{input}"),
      5         ("ai", "{output}"),
      6     ]
      7 )
----> 8 few_shot_prompt = FewShotChatMessagePromptTemplate(
      9     example_prompt=example_prompt,
     10     examples=examples,
     11 )
     13 print(few_shot_prompt.invoke({"Hallo"}).to_messages())

File c:\Users\\AppData\Local\miniconda3\envs\langchain\Lib\site-packages\pydantic\v1\main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
    339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
    340 if validation_error:
--> 341     raise validation_error
    342 try:
    343     object_setattr(__pydantic_self__, '__dict__', values)

**ValidationError: 1 validation error for FewShotChatMessagePromptTemplate
input_variables
  field required (type=value_error.missing)**
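One thing that may get past the validator while the root cause is investigated (an assumption on my side; this looks like a Pydantic version mismatch rather than a documentation error): pass the reportedly missing field explicitly.

```python
few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=example_prompt,
    examples=examples,
    input_variables=[],  # the examples themselves need no outside variables
)
```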
### Idea or request for content:
_No response_ | DOC: Missing input variables for FewShotChatMessagePromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/24398/comments | 3 | 2024-07-18T13:35:20Z | 2024-07-21T18:06:15Z | https://github.com/langchain-ai/langchain/issues/24398 | 2,416,403,639 | 24,398 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_experimental.llms.ollama_functions import OllamaFunctions
@tool
def add(a: int, b: int) -> int:
"""Adds a and b."""
return a + b
@tool
def multiply(a: int, b: int) -> int:
"""Multiplies a and b."""
return a * b
tools = [add, multiply]
llm_with_tools = OllamaFunctions(model="llama3:70b", format="json").bind_tools(tools=tools)
query = "What is 3 * 12?"
messages = [HumanMessage(query)]
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)
for tool_call in ai_msg.tool_calls:
selected_tool = {"add": add, "multiply": multiply}[tool_call["name"].lower()]
tool_msg = selected_tool.invoke(tool_call)
messages.append(tool_msg)
# passing messages with (Human, AI, Tool) back to model
ai_msg = llm_with_tools.invoke(messages)
messages.append(ai_msg)
print(messages)
```
### Error Message and Stack Trace (if applicable)
```
[
HumanMessage(content='What is 3 * 12?'),
AIMessage(content='', id='run-cb967bbf-778f-49b8-80d7-a54ce8b605c1-0', tool_calls=[{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_739326217a574817bef06eea64439d48', 'type': 'tool_call'}]),
ToolMessage(content='36', tool_call_id='call_739326217a574817bef06eea64439d48'),
AIMessage(content='', id='run-5e04e8b2-1120-44af-bb9b-13595dd221b5-0', tool_calls=[{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_dcbfe846caf74b4fb4ebba1d3c660ebc', 'type': 'tool_call'}])
]
```
### Description
* When using the experimental `OllamaFunctions`, passing Tool output back as described in [the documentation](https://python.langchain.com/v0.2/docs/how_to/tool_results_pass_to_model/) does not work
* The model ignores / does not pick up the tool-related messages and thus just regenerates the first tool call
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1-NixOS SMP PREEMPT_DYNAMIC Thu Jun 27 11:49:15 UTC 2024
> Python Version: 3.11.9 (main, Apr 2 2024, 08:25:04) [GCC 13.2.0]
Package Information
-------------------
> langchain_core: 0.2.18
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_experimental: 0.0.62
> langchain_openai: 0.1.16
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Passing tool output back to model doesn't work for OllamaFunctions | https://api.github.com/repos/langchain-ai/langchain/issues/24396/comments | 1 | 2024-07-18T12:25:14Z | 2024-07-19T16:34:27Z | https://github.com/langchain-ai/langchain/issues/24396 | 2,416,223,973 | 24,396 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/text_embedding/baidu_qianfan_endpoint/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Unable to call the API.
![image](https://github.com/user-attachments/assets/fa9e528f-0761-4070-80e9-78fa32396451)
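For reference, a minimal sketch of the documented usage I am comparing against (the keys are placeholders and the default Embedding-V1 model is assumed; my actual code is in the screenshot above):

```python
import os

from langchain_community.embeddings import QianfanEmbeddingsEndpoint

os.environ["QIANFAN_AK"] = "your_ak"
os.environ["QIANFAN_SK"] = "your_sk"

embeddings = QianfanEmbeddingsEndpoint()
print(embeddings.embed_query("hello"))
```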
### Idea or request for content:
Calling the Qianfan API directly works, but calling it through the LangChain interface raises an error. | DOC: <Issue related to /v0.2/docs/integrations/text_embedding/baidu_qianfan_endpoint/> | https://api.github.com/repos/langchain-ai/langchain/issues/24392/comments | 1 | 2024-07-18T10:31:24Z | 2024-07-21T08:33:52Z | https://github.com/langchain-ai/langchain/issues/24392 | 2,416,007,593 | 24,392
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.llms.moonshot import Moonshot
import os

kimi_llm = Moonshot(model="moonshot-v1-8k")
output = kimi_llm.invoke("hello")
print(output)
```
### Error Message and Stack Trace (if applicable)
AttributeError: 'Moonshot' object has no attribute '_client'
### Description
Moonshot in langchain-community 0.2.7 has a problem: `AttributeError: 'Moonshot' object has no attribute '_client'`. When I go back to 0.2.6 it works fine.
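An interim workaround sketch while staying on 0.2.7 (assumption: the Moonshot endpoint remains OpenAI-compatible at https://api.moonshot.cn/v1, as their docs describe), going through `ChatOpenAI` instead of the broken `Moonshot` class:

```python
from langchain_openai import ChatOpenAI

kimi_llm = ChatOpenAI(
    model="moonshot-v1-8k",
    base_url="https://api.moonshot.cn/v1",
    api_key="MOONSHOT_API_KEY",  # placeholder
)
print(kimi_llm.invoke("hello").content)
```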
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Tue Jun 4 14:43:51 UTC 2024
> Python Version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.2.20
> langchain: 0.2.8
> langchain_community: 0.2.7
> langsmith: 0.1.88
> langchain-moonshot-chat: Installed. No version info available.
> langchain-prompt-chain: Installed. No version info available.
> langchain-prompt-template: Installed. No version info available.
> langchain-test: Installed. No version info available.
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
-------------------------------------------------- | Moonshot 0.2.7 has problem : AttributeError: 'Moonshot' object has no attribute '_client', When I back to 0.2.6 is OK! | https://api.github.com/repos/langchain-ai/langchain/issues/24390/comments | 3 | 2024-07-18T09:23:36Z | 2024-07-30T09:17:25Z | https://github.com/langchain-ai/langchain/issues/24390 | 2,415,836,333 | 24,390 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.embeddings import SparkLLMTextEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import UnstructuredMarkdownLoader
import os
os.environ['TMPDIR'] = './.caches'
os.environ['TEMP'] = './.caches'
markdown_path = "./llms/doc1.md"
loader = UnstructuredMarkdownLoader(markdown_path)
documnets = loader.load()
print(loader)
```
### Error Message and Stack Trace (if applicable)
```bash
(LangChain) F:\PythonProject\LangChain>python ./llms/SparkLLMTextEmbeddings.py
Traceback (most recent call last):
File "F:\PythonProject\LangChain\llms\SparkLLMTextEmbeddings.py", line 21, in <module>
documnets = loader.load()
^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\langchain_core\document_loaders\base.py", line 30, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\langchain_community\document_loaders\unstructured.py", line 89, in lazy_load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\langchain_community\document_loaders\markdown.py", line 45, in _get_elements
return partition_md(filename=self.file_path, **self.unstructured_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\documents\elements.py", line 593, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\file_utils\filetype.py", line 626, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\file_utils\filetype.py", line 582, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\chunking\dispatch.py", line 74, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\partition\md.py", line 110, in partition_md
return partition_html(
^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\documents\elements.py", line 593, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\file_utils\filetype.py", line 626, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\file_utils\filetype.py", line 582, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\chunking\dispatch.py", line 74, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\partition\html\partition.py", line 107, in partition_html
document.elements,
^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\utils.py", line 187, in __get__
value = self._fget(obj)
^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\documents\html.py", line 76, in elements
return list(iter_elements())
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\documents\html.py", line 71, in iter_elements
for e in self._iter_elements(self._main):
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\documents\html.py", line 145, in _iter_elements
yield from self._process_text_tag(tag_elem)
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\documents\html.py", line 450, in _process_text_tag
element = self._parse_tag(tag_elem, include_tail_text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\documents\html.py", line 409, in _parse_tag
ElementCls = self._classify_text(text, tag_elem.tag)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\documents\html.py", line 94, in _classify_text
if tag not in HEADING_TAGS and is_possible_narrative_text(text):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\partition\text_type.py", line 80, in is_possible_narrative_text
if exceeds_cap_ratio(text, threshold=cap_threshold):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\partition\text_type.py", line 276, in exceeds_cap_ratio
if sentence_count(text, 3) > 1:
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\partition\text_type.py", line 225, in sentence_count
sentences = sent_tokenize(text)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\nlp\tokenize.py", line 136, in sent_tokenize
_download_nltk_packages_if_not_present()
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\nlp\tokenize.py", line 130, in _download_nltk_packages_if_not_present
download_nltk_packages()
File "C:\Users\asus\.conda\envs\LangChain\Lib\site-packages\unstructured\nlp\tokenize.py", line 88, in download_nltk_packages
urllib.request.urlretrieve(NLTK_DATA_URL, tgz_file)
File "C:\Users\asus\.conda\envs\LangChain\Lib\urllib\request.py", line 250, in urlretrieve
tfp = open(filename, 'wb')
^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied: 'F:\\PythonProject\\LangChain\\.caches\\tmp27mpsjp4'
```
### Description
I don't know where I went wrong
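Two things I plan to try (assumptions on my side, not verified): pre-downloading the NLTK data so nothing has to be fetched at load time, and making sure the temp directory is an absolute path that actually exists before the loader runs. A sketch of the second:

```python
import os

cache_dir = os.path.abspath("./.caches")
os.makedirs(cache_dir, exist_ok=True)

# tempfile on Windows consults TMP as well as TEMP/TMPDIR
os.environ["TMPDIR"] = cache_dir
os.environ["TEMP"] = cache_dir
os.environ["TMP"] = cache_dir
```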
### System Info
platform windows
Python 3.12.4 | UnstructuredMarkdownLoader PermissionError: [Errno 13] Permission denied | https://api.github.com/repos/langchain-ai/langchain/issues/24388/comments | 4 | 2024-07-18T08:24:24Z | 2024-07-22T17:17:38Z | https://github.com/langchain-ai/langchain/issues/24388 | 2,415,715,932 | 24,388 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
import requests
import yaml
os.environ["OPENAI_API_KEY"] = "sk-REDACTED"
from langchain_community.agent_toolkits.openapi import planner
from langchain_openai.chat_models import ChatOpenAI
from langchain_community.agent_toolkits.openapi.spec import reduce_openapi_spec
from langchain.requests import RequestsWrapper
from requests.packages.urllib3.exceptions import InsecureRequestWarning
# Ignore SSL warnings
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
with open("/home/ehkim/git/testprj/code_snippet/swagger.yaml") as f:
data = yaml.load(f, Loader=yaml.FullLoader)
swagger_api_spec = reduce_openapi_spec(data)
def construct_superset_aut_headers(url=None):
import requests
url = "https://superset.mydomain.com/api/v1/security/login"
payload = {
"username": "myusername",
"password": "mypassword",
"provider": "db",
"refresh": True
}
headers = {
"Content-Type": "application/json"
}
response = requests.post(url, json=payload, headers=headers, verify=False)
data = response.json()
return {"Authorization": f"Bearer {data['access_token']}"}
from langchain.globals import set_debug
set_debug(True)
llm = ChatOpenAI(model='gpt-4o')
swagger_requests_wrapper = RequestsWrapper(headers=construct_superset_aut_headers(), verify=False)
superset_agent = planner.create_openapi_agent(
swagger_api_spec,
swagger_requests_wrapper,
llm,
allowed_operations=["GET", "POST", "PUT", "DELETE", "PATCH"],
allow_dangerous_requests=True,
agent_executor_kwargs={'handle_parsing_errors':True},
handle_parsing_errors=True
)
superset_agent.run(
"""
1. Get the dataset using the following information. (tool: requests_post, API: /api/v1/dataset/get_or_create/, database_id: 1, table_name: raw_esg_volume, response : {{'result' : {{'table_id': (dataset_id)}}}})
2. Retrieve the dataset information obtained in step 1. (tool: requests_get, API: /api/v1/dataset/dataset/{{dataset_id}}/, params: None)
3. Create a chart referencing the dataset obtained in step 2. The chart should plot the trend of total, online_news, and (total - online_news) values as a line chart. (tool: requests_post, API: /api/v1/chart/, database_id: 1)
4. Return the URL of the created chart. https://superset.mydomain.com/explore/?slice_id={{chart_id}}
When specifying the action, only write the tool name without any additional explanation.
"""
)
```
In this file, the OpenAPI spec comes from https://superset.demo.datahubproject.io/api/v1/_openapi (saved as swagger.json).
It's in JSON format, so I converted it to YAML with the code below.
```python
import json
import yaml
# read file
with open('swagger.json', 'r') as json_file:
json_data = json.load(json_file)
# write file
with open('swagger.yaml', 'w') as yaml_file:
yaml.dump(json_data, yaml_file, default_flow_style=False)
```
### Error Message and Stack Trace (if applicable)
There's no exception because of handle_parsing_error=True but failure to solve user's request.
The below is agent log.
```
[chain/start] [chain:AgentExecutor] Entering Chain run with input:
{
"input": "\n 1. Get the dataset using the following information. (tool: requests_post, API: /api/v1/dataset/get_or_create/, database_id: 1, (syncopation) "
}
[chain/start] [chain:AgentExecutor > chain:LLMChain] Entering Chain run with input:
{
"input": "\n 1. Get the dataset using the following information. (tool: requests_post, API: /api/v1/dataset/get_or_create/, database_id: 1, (syncopation) ",
"agent_scratchpad": "",
"stop": [
"\nObservation:",
"\n\tObservation:"
]
}
...
(syncopation)
(api_planner log)
(syncopation)
(api_controller log)
...
[chain/end] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > chain:LLMChain] [2.73s] Exiting Chain run with output:
{
"text": "Action: The action to take is to make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1.\nAction Input: \n```json\n{\n \"url\": \"https://superset.mydomain.com/api/v1/dataset/get_or_create/\",\n \"data\": {\n \"database_id\": 1,\n \"table_name\": \"raw_esg_volume\"\n },\n \"output_instructions\": \"Extract the table_id from the response\"\n}\n```"
}
[tool/start] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > tool:invalid_tool] Entering Tool run with input:
"{'requested_tool_name': "The action to take is to make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1.", 'available_tool_names': ['requests_get', 'requests_post', 'requests_put']}"
[tool/end] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > tool:invalid_tool] [0ms] Exiting Tool run with output:
"The action to take is to make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1. is not a valid tool, try one of [requests_get, requests_post, requests_put]."
[chain/start] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > chain:LLMChain] Entering Chain run with input:
{
"input": "1. POST /api/v1/dataset/get_or_create/ with params {'database_id': 1, 'table_name': 'raw_esg_volume'}",
"agent_scratchpad": "Action: The action to take is to make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1.\nAction Input: \n```json\n{\n \"url\": \"https://superset.mydomain.com/api/v1/dataset/get_or_create/\",\n \"data\": {\n \"database_id\": 1,\n \"table_name\": \"raw_esg_volume\"\n },\n \"output_instructions\": \"Extract the table_id from the response\"\n}\n```\nObservation: The action to take is to make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1. is not a valid tool, try one of [requests_get, requests_post, requests_put].\nThought:",
"stop": [
"\nObservation:",
"\n\tObservation:"
]
}
[llm/start] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > chain:LLMChain > llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"Human: You are an agent that gets a sequence of API calls and given their documentation, should execute them and return the final response.\nIf you cannot complete them and run into issues, you should explain the issue. If you're unable to resolve an API call, you can retry the API call. When interacting with API objects, you should extract ids for inputs to other API calls but ids and names for outputs returned to the User.\n\n\nHere is documentation on the API:\nBase url: https://superset.mydomain.com/\nEndpoints:\n== Docs for POST /api/v1/dataset/get_or_create/ == \nrequestBody:\n content:\n application/json:\n schema:\n properties:\n always_filter_main_dttm:\n default: false\n type: boolean\n database:\n type: integer\n external_url:\n nullable: true\n type: string\n is_managed_externally:\n nullable: true\n type: boolean\n normalize_columns:\n default: false\n type: boolean\n owners:\n items:\n type: integer\n type: array\n schema:\n maxLength: 250\n minLength: 0\n nullable: true\n type: string\n sql:\n nullable: true\n type: string\n table_name:\n maxLength: 250\n minLength: 1\n type: string\n required:\n - database\n - table_name\n type: object\n description: Dataset schema\n required: true\n\n== Docs for POST /api/v1/dataset/get_or_create/ == \nrequestBody:\n content:\n application/json:\n schema:\n properties:\n always_filter_main_dttm:\n default: false\n type: boolean\n database_id:\n description: ID of database table belongs to\n type: integer\n normalize_columns:\n default: false\n type: boolean\n schema:\n description: The schema the table belongs to\n maxLength: 250\n minLength: 0\n nullable: true\n type: string\n table_name:\n description: Name of table\n type: string\n template_params:\n description: Template params for the table\n type: string\n required:\n - database_id\n - table_name\n type: object\n required: true\nresponses:\n content:\n application/json:\n schema:\n properties:\n result:\n properties:\n table_id:\n type: integer\n type: object\n type: object\n description: The ID of the table\n\n\n\n\nHere are tools to execute requests against the API: requests_get: Use this to GET content from a website.\nInput to the tool should be a json string with 3 keys: \"url\", \"params\" and \"output_instructions\".\nThe value of \"url\" should be a string. \nThe value of \"params\" should be a dict of the needed and available parameters from the OpenAPI spec related to the endpoint. 
\nIf parameters are not needed, or not available, leave it empty.\nThe value of \"output_instructions\" should be instructions on what information to extract from the response, \nfor example the id(s) for a resource(s) that the GET request fetches.\n\nrequests_post: Use this when you want to POST to a website.\nInput to the tool should be a json string with 3 keys: \"url\", \"data\", and \"output_instructions\".\nThe value of \"url\" should be a string.\nThe value of \"data\" should be a dictionary of key-value pairs you want to POST to the url.\nThe value of \"output_instructions\" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the POST request creates.\nAlways use double quotes for strings in the json string.\nrequests_put: Use this when you want to PUT to a website.\nInput to the tool should be a json string with 3 keys: \"url\", \"data\", and \"output_instructions\".\nThe value of \"url\" should be a string.\nThe value of \"data\" should be a dictionary of key-value pairs you want to PUT to the url.\nThe value of \"output_instructions\" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the PUT request creates.\nAlways use double quotes for strings in the json string.\n\n\nStarting below, you should follow this format:\n\nPlan: the plan of API calls to execute\nThought: you should always think about what to do\nAction: the action to take, should be one of the tools [requests_get, requests_post, requests_put]\nAction Input: the input to the action\nObservation: the output of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I am finished executing the plan (or, I cannot finish executing the plan without knowing some other information.)\nFinal Answer: the final output from executing the plan or missing information I'd need to re-plan correctly.\n\n\nBegin!\n\nPlan: 1. POST /api/v1/dataset/get_or_create/ with params {'database_id': 1, 'table_name': 'raw_esg_volume'}\nThought:\nAction: The action to take is to make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1.\nAction Input: \n```json\n{\n \"url\": \"https://superset.mydomain.com/api/v1/dataset/get_or_create/\",\n \"data\": {\n \"database_id\": 1,\n \"table_name\": \"raw_esg_volume\"\n },\n \"output_instructions\": \"Extract the table_id from the response\"\n}\n```\nObservation: The action to take is to make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1. is not a valid tool, try one of [requests_get, requests_post, requests_put].\nThought:"
]
}
[llm/end] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > chain:LLMChain > llm:ChatOpenAI] [4.12s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "It looks like there was an error in the previous action. I will correct the action to use the appropriate tool, which is `requests_post`.\n\nPlan: 1. POST /api/v1/dataset/get_or_create/ with params {'database_id': 1, 'table_name': 'raw_esg_volume'}\nThought: Make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1.\nAction: Execute the corrected action using the `requests_post` tool.\nAction Input:\n```json\n{\n \"url\": \"https://superset.mydomain.com/api/v1/dataset/get_or_create/\",\n \"data\": {\n \"database_id\": 1,\n \"table_name\": \"raw_esg_volume\"\n },\n \"output_instructions\": \"Extract the table_id from the response\"\n}\n```",
"generation_info": {
"finish_reason": "stop",
"logprobs": null
},
"type": "ChatGeneration",
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"AIMessage"
],
"kwargs": {
"content": "It looks like there was an error in the previous action. I will correct the action to use the appropriate tool, which is `requests_post`.\n\nPlan: 1. POST /api/v1/dataset/get_or_create/ with params {'database_id': 1, 'table_name': 'raw_esg_volume'}\nThought: Make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1.\nAction: Execute the corrected action using the `requests_post` tool.\nAction Input:\n```json\n{\n \"url\": \"https://superset.mydomain.com/api/v1/dataset/get_or_create/\",\n \"data\": {\n \"database_id\": 1,\n \"table_name\": \"raw_esg_volume\"\n },\n \"output_instructions\": \"Extract the table_id from the response\"\n}\n```",
"response_metadata": {
"token_usage": {
"completion_tokens": 196,
"prompt_tokens": 1296,
"total_tokens": 1492
},
"model_name": "gpt-4o",
"system_fingerprint": "fp_c4e5b6fa31",
"finish_reason": "stop",
"logprobs": null
},
"type": "ai",
"id": "run-b38b50e3-b4d1-44ef-996a-76b132d46f79-0",
"usage_metadata": {
"input_tokens": 1296,
"output_tokens": 196,
"total_tokens": 1492
},
"tool_calls": [],
"invalid_tool_calls": []
}
}
}
]
],
"llm_output": {
"token_usage": {
"completion_tokens": 196,
"prompt_tokens": 1296,
"total_tokens": 1492
},
"model_name": "gpt-4o",
"system_fingerprint": "fp_c4e5b6fa31"
},
"run": null
}
[chain/end] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > chain:LLMChain] [4.12s] Exiting Chain run with output:
{
"text": "It looks like there was an error in the previous action. I will correct the action to use the appropriate tool, which is `requests_post`.\n\nPlan: 1. POST /api/v1/dataset/get_or_create/ with params {'database_id': 1, 'table_name': 'raw_esg_volume'}\nThought: Make a POST request to the `/api/v1/dataset/get_or_create/` endpoint to create or get the dataset for the table 'raw_esg_volume' in the database with ID 1.\nAction: Execute the corrected action using the `requests_post` tool.\nAction Input:\n```json\n{\n \"url\": \"https://superset.mydomain.com/api/v1/dataset/get_or_create/\",\n \"data\": {\n \"database_id\": 1,\n \"table_name\": \"raw_esg_volume\"\n },\n \"output_instructions\": \"Extract the table_id from the response\"\n}\n```"
}
[tool/start] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > tool:invalid_tool] Entering Tool run with input:
"{'requested_tool_name': 'Execute the corrected action using the `requests_post` tool.', 'available_tool_names': ['requests_get', 'requests_post', 'requests_put']}"
[tool/end] [chain:AgentExecutor > tool:api_controller > chain:AgentExecutor > tool:invalid_tool] [0ms] Exiting Tool run with output:
"Execute the corrected action using the `requests_post` tool. is not a valid tool, try one of [requests_get, requests_post, requests_put]."
...
(loop)
...
```
### Description
I expected two things.
First, that all five operations passed to `planner.create_openapi_agent` via `allowed_operations` would be available to the `api_controller`; second, that when executing an API call the model would put only the tool name in the Action field.
However, as the logs above show, neither worked as expected.
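A small isolation sketch of the second problem (assumption: the api_controller sub-agent uses the classic "Action:/Action Input:" MRKL parser): when the model writes a sentence on the Action line, the parser returns that whole sentence as the tool name, which is why the `invalid_tool` fallback fires instead of a parsing error.

```python
from langchain.agents.mrkl.output_parser import MRKLOutputParser

step = (
    "Action: Execute the corrected action using the `requests_post` tool.\n"
    'Action Input: {"url": "https://example.com"}'
)
print(MRKLOutputParser().parse(step).tool)
# -> "Execute the corrected action using the `requests_post` tool."
```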
### System Info
platform : linux
python : 3.11
$ pip freeze | grep langchain
langchain==0.2.4
langchain-community==0.2.4
langchain-core==0.2.6
langchain-openai==0.1.8
langchain-text-splitters==0.2.1 | LangChain Agent Fails to Recognize Tool Names with Descriptions and Incomplete Operation Addition | https://api.github.com/repos/langchain-ai/langchain/issues/24382/comments | 0 | 2024-07-18T05:01:28Z | 2024-07-18T06:20:53Z | https://github.com/langchain-ai/langchain/issues/24382 | 2,415,226,049 | 24,382 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
import dotenv
from langchain_openai import OpenAIEmbeddings
dotenv.load_dotenv()
embeddings = OpenAIEmbeddings(
model='text-embedding-3-large',
dimensions=1024, # assign dimensions to 1024
openai_api_base=os.getenv('OPENAI_API_BASE')
)
text = 'This is a test document.'
vector = embeddings.embed_documents([text])
print(f'the dimension of vector is {len(vector[0])}')
```
Output:
the dimension of vector is 3072
### Error Message and Stack Trace (if applicable)
_No response_
### Description
- I'm using OpenAIEmbeddings to embed my document.
- I set `model` to `text-embedding-3-large` and `dimensions` to 1024.
- However, the actual dimension of the returned vector is still 3072 (the default for text-embedding-3-large).
- It seems that the `dimensions` param is not being applied; a raw-client cross-check is sketched below.
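A quick cross-check against the raw OpenAI client (this assumes the same `OPENAI_API_BASE` proxy is used and that it forwards the `dimensions` field):

```python
import os

from openai import OpenAI

client = OpenAI(base_url=os.getenv("OPENAI_API_BASE"))
resp = client.embeddings.create(
    model="text-embedding-3-large",
    input="This is a test document.",
    dimensions=1024,
)
print(len(resp.data[0].embedding))  # 1024 expected if the endpoint honours it
```

If this also returns 3072, the proxy endpoint is dropping `dimensions`; if it returns 1024, the problem is on the LangChain side.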
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:29 PDT 2022; root:xnu-8020.121.3~4/RELEASE_ARM64_T8101
> Python Version: 3.12.3 (v3.12.3:f6650f9ad7, Apr 9 2024, 08:18:47) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.20
> langchain: 0.2.8
> langchain_community: 0.2.7
> langsmith: 0.1.82
> langchain_experimental: 0.0.62
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| [Embedding] The dimensions parameter of OpenAIEmbeddings is not working | https://api.github.com/repos/langchain-ai/langchain/issues/24378/comments | 4 | 2024-07-18T02:11:48Z | 2024-07-19T01:10:57Z | https://github.com/langchain-ai/langchain/issues/24378 | 2,415,056,529 | 24,378 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.messages import AIMessageChunk
chunks = [
AIMessageChunk(content="Hello", response_metadata={'prompt_token_count': 12, 'generation_token_count': 1, 'stop_reason': None}, id='1'),
AIMessageChunk(content="!", response_metadata={'prompt_token_count': None, 'generation_token_count': 2, 'stop_reason': None}, id='1')
]
response = AIMessageChunk("")
for chunk in chunks:
response += chunk
```
### Error Message and Stack Trace (if applicable)
TypeError: Additional kwargs key generation_token_count already exists in left dict and value has unsupported type <class 'int'>.
### Description
Chunk addition is failing in streaming use cases that generate `AIMessageChunk`s. The root cause seems to be a failure in the [merge_dicts](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/utils/_merge.py#L6) function.
```python
from langchain_aws import ChatBedrock
chat = ChatBedrock(
model_id="meta.llama3-8b-instruct-v1:0",
streaming=True
)
response = AIMessageChunk("")
for chunk in chat.stream("Tell me a joke"):  # placeholder prompt; any input reproduces it
response += chunk
```
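As an interim workaround (assumption: only the concatenated text is needed downstream), accumulating the content strings avoids the metadata merge entirely, reusing the `chat` object from the snippet above:

```python
text = ""
for chunk in chat.stream("Tell me a joke"):
    text += chunk.content
print(text)
```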
### System Info
langchain-core = 0.2.21
### Related Issues
https://github.com/langchain-ai/langchain/issues/23891
https://github.com/langchain-ai/langchain-aws/issues/107 | AIMessageChunk merge is failing | https://api.github.com/repos/langchain-ai/langchain/issues/24377/comments | 2 | 2024-07-18T01:49:43Z | 2024-07-18T23:32:31Z | https://github.com/langchain-ai/langchain/issues/24377 | 2,415,009,829 | 24,377 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Description
When starting langserve with the code below and accessing it via `RemoteRunnable`, I encounter a `KeyError: "Input to ChatPromptTemplate is missing variables {'language'}. Expected: ['history', 'input', 'language'] Received: ['input', 'history']"`.
### Example Code
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_community.chat_message_histories import ChatMessageHistory
from langserve import add_routes
from fastapi import FastAPI
import uvicorn
app = FastAPI()
store = {}
def get_session_history(session_id: str) -> BaseChatMessageHistory:
if session_id not in store:
store[session_id] = ChatMessageHistory()
return store[session_id]
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You're an assistant who speaks in {language}. Respond in 20 words or fewer.",
),
MessagesPlaceholder(variable_name="history"),
("human", "{input}"),
]
)
model = ChatOpenAI(model="gpt-3.5-turbo-0125")
runnable = prompt | model
runnable_with_history = RunnableWithMessageHistory(
runnable,
get_session_history,
input_messages_key="input",
history_messages_key="history",
)
chain = runnable_with_history
add_routes(app, chain, path="/test")
uvicorn.run(app, host="0.0.0.0", port=8000)
```
### Code for RemoteRunnable:
```python
from langserve import RemoteRunnable
rr = RemoteRunnable("http://localhost:8000/test/")
rr.invoke(
{"language": "Italian", "input": "what's my name?"},
config={"configurable": {"session_id": "1"}},
)
```
This issue also occurs in the LangServe Playground where the input box for `language` does not appear. When sending the message as-is, it results in `KeyError: "Input to ChatPromptTemplate is missing variables {'language'}. Expected: ['history', 'input', 'language'] Received: ['input', 'history']"`.
### Error Message and Stack Trace (if applicable)
INFO: 127.0.0.1:57555 - "POST /test/invoke HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
await route.handle(scope, receive, send)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
await self.app(scope, receive, send)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/starlette/routing.py", line 72, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/fastapi/routing.py", line 278, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langserve/server.py", line 530, in invoke
return await api_handler.invoke(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langserve/api_handler.py", line 833, in invoke
output = await invoke_coro
^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5018, in ainvoke
return await self.bound.ainvoke(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5018, in ainvoke
return await self.bound.ainvoke(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2862, in ainvoke
input = await step.ainvoke(input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/runnables/branch.py", line 277, in ainvoke
output = await runnable.ainvoke(
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5018, in ainvoke
return await self.bound.ainvoke(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2860, in ainvoke
input = await step.ainvoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/prompts/base.py", line 203, in ainvoke
return await self._acall_with_config(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1784, in _acall_with_config
output: Output = await asyncio.create_task(coro, context=context) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/prompts/base.py", line 159, in _aformat_prompt_with_error_handling
_inner_input = self._validate_input(inner_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/var/pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/prompts/base.py", line 145, in _validate_input
raise KeyError(
KeyError: "Input to ChatPromptTemplate is missing variables {'language'}. Expected: ['history', 'input', 'language'] Received: ['input', 'history']"
### Conditions Under Which the Issue Does Not Occur
#### 1. Without Using LangServe:
Running the server-side code (excluding `uvicorn.run()`) in an IPython shell with the following command does **NOT** trigger the issue:
```python
chain.invoke(
{"language": "Italian", "input": "what's my name?"},
config={"configurable": {"session_id": "1"}},
)
```
#### 2. Without Using RunnableWithMessageHistory:
Modifying the server-side code as shown below and running it in the playground does **NOT** trigger the issue:
```python
# Before:
chain = runnable_with_history
# After:
chain = runnable
```
### Conclusion
The issue seems to arise from the combination of `RunnableWithMessageHistory` and LangServe.
Any assistance or guidance on resolving this issue would be greatly appreciated.
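In case it helps, a workaround I am experimenting with (not verified as the intended fix) is to declare the endpoint's input schema explicitly with `with_types`, so LangServe does not rely on the schema inferred through `RunnableWithMessageHistory`. Sketch only, following the pydantic-v1 style used in the LangServe examples; the class name is just an example:
```python
from langchain_core.pydantic_v1 import BaseModel

class InputChat(BaseModel):
    """Explicit input schema for the /test endpoint (hypothetical name)."""
    input: str
    language: str

# Attach the explicit input type so the playground / invoke validation no longer
# depends on the schema inferred through RunnableWithMessageHistory.
chain = runnable_with_history.with_types(input_type=InputChat)
add_routes(app, chain, path="/test")
```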
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 21.6.0: Thu Jun 8 23:57:12 PDT 2023; root:xnu-8020.240.18.701.6~1/RELEASE_X86_64
> Python Version: 3.11.6 (main, Oct 16 2023, 15:57:36) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.2.19
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_anthropic: 0.1.20
> langchain_chroma: 0.1.1
> langchain_cli: 0.0.25
> langchain_experimental: 0.0.61
> langchain_openai: 0.1.16
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langchainplus_sdk: 0.0.20
> langgraph: 0.1.8
> langserve: 0.2.2
> pydantic: 2.8.2 | KeyError with RunnableWithMessageHistory and LangServe: Missing Variable | https://api.github.com/repos/langchain-ai/langchain/issues/24370/comments | 0 | 2024-07-17T22:24:33Z | 2024-07-17T22:31:00Z | https://github.com/langchain-ai/langchain/issues/24370 | 2,414,721,623 | 24,370 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from graphs.reference_graph.prompts.code_review_prompt import code_review_prompt
from graphs.reference_graph.thinkers.hallucination_grader import hallucination_grader
from langchain.agents import AgentType, initialize_agent
from langchain_community.agent_toolkits.github.toolkit import GitHubToolkit
from langchain_community.utilities.github import GitHubAPIWrapper
llm = ChatOllama(model="deepseek-coder-v2", temperature=1)
github = GitHubAPIWrapper()
toolkit = GitHubToolkit.from_github_api_wrapper(github)
tools = toolkit.get_tools()
agent = initialize_agent(
tools,
llm,
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
)
prompt_chain = code_review_prompt | agent | StrOutputParser()
```
### Error Message and Stack Trace (if applicable)
```
/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Traceback (most recent call last):
File "/Users/gvalenc/git/gvalenc/code-connoisseur/app/main.py", line 58, in <module>
for output in workflowApp.stream(inputs):
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/langgraph/pregel/__init__.py", line 986, in stream
_panic_or_proceed(done, inflight, step)
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/langgraph/pregel/__init__.py", line 1540, in _panic_or_proceed
raise exc
File "/opt/homebrew/Cellar/[email protected]/3.12.4/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/langgraph/pregel/retry.py", line 72, in run_with_retry
task.proc.invoke(task.input, task.config)
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2822, in invoke
input = step.invoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/langgraph/utils.py", line 95, in invoke
ret = context.run(self.func, input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/app/graphs/reference_graph/thinkers/code_reviewer.py", line 86, in generate_code_review_node
generate_chain = getGeneratePromptChain()
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/app/graphs/reference_graph/thinkers/code_reviewer.py", line 51, in getGeneratePromptChain
github = GitHubAPIWrapper()
^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/pydantic/v1/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/pydantic/v1/main.py", line 1100, in validate_model
values = validator(cls_, values)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/langchain_community/utilities/github.py", line 90, in validate_environment
installation = installation[0]
~~~~~~~~~~~~^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/github/PaginatedList.py", line 76, in __getitem__
self.__fetchToIndex(index)
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/github/PaginatedList.py", line 92, in __fetchToIndex
self._grow()
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/github/PaginatedList.py", line 95, in _grow
newElements = self._fetchNextPage()
^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/github/PaginatedList.py", line 244, in _fetchNextPage
headers, data = self.__requester.requestJsonAndCheck(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/github/Requester.py", line 548, in requestJsonAndCheck
return self.__check(*self.requestJson(verb, url, parameters, headers, input, self.__customConnection(url)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gvalenc/git/gvalenc/code-connoisseur/.venv/lib/python3.12/site-packages/github/Requester.py", line 609, in __check
raise self.createException(status, responseHeaders, data)
github.GithubException.GithubException: 500
```
### Description
This is a follow-up issue from this discussion: https://github.com/langchain-ai/langchain/discussions/24351
I created an app in GitHub Enterprise (GHE) and set up my env variables where I'm running my LangChain app.
```
export GITHUB_APP_ID="<app-id>"
export GITHUB_APP_PRIVATE_KEY="<path to .pem file>"
export GITHUB_REPOSITORY="<ghe-repo-url>"
```
After some debugging with Dosu and looking at the [source code for GitHubAPIWrapper](https://api.python.langchain.com/en/latest/_modules/langchain_community/utilities/github.html#GitHubAPIWrapper), it seems that the wrapper is not taking in the API URL for the GitHub Enterprise instance. Looking at the exception headers, it continues to try to hit github.com instead of my GHE instance. I can't seem to get it to do otherwise.
`_GithubException__headers: {'date': 'Wed, 17 Jul 2024 18:40:35 GMT', 'vary': 'Accept-Encoding, Accept, X-Requested-With', 'transfer-encoding': 'chunked', 'x-github-request-id': 'CED3:109409:A57DE0:1348F7E:66981022', 'server': 'github.com'}`
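As a sanity check, PyGithub itself can be pointed at a GHE instance through a `base_url`; this is only a sketch of what I believe the wrapper would need to pass through (the URL, app id, and key path below are placeholders):
```python
from github import Auth, GithubIntegration

# Hypothetical GHE API endpoint -- replace with your instance.
GHE_API_URL = "https://github.mycompany.com/api/v3"

with open("/path/to/private-key.pem") as f:
    private_key = f.read()

auth = Auth.AppAuth("<app-id>", private_key)
# If GitHubAPIWrapper passed base_url through like this, the installation
# lookup would hit the GHE instance instead of github.com.
gi = GithubIntegration(auth=auth, base_url=GHE_API_URL)
print([installation.id for installation in gi.get_installations()])
```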
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
> Python Version: 3.12.4 (main, Jun 6 2024, 18:26:44) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.19
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.81
> langchain_huggingface: 0.0.3
> langchain_ibm: 0.1.7
> langchain_text_splitters: 0.2.2
> langgraph: 0.1.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve
``` | 500 error when using GitHubAPIWrapper with GitHub Enterprise | https://api.github.com/repos/langchain-ai/langchain/issues/24367/comments | 0 | 2024-07-17T21:27:01Z | 2024-07-17T21:29:36Z | https://github.com/langchain-ai/langchain/issues/24367 | 2,414,630,610 | 24,367 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```# import
from langchain_community.embeddings import OllamaEmbeddings
from sentence_transformers.util import cos_sim
import numpy as np
from numpy.testing import assert_almost_equal
# definitions
ollama_emb = OllamaEmbeddings(model='mxbai-embed-large')
# test on ollama
query = 'Represent this sentence for searching relevant passages: A man is eating a piece of bread'
docs = [
query,
"A man is eating food.",
"A man is eating pasta.",
"The girl is carrying a baby.",
"A man is riding a horse.",
]
r_1 = ollama_emb.embed_documents(docs)
# Calculate cosine similarity
similarities = cos_sim(r_1[0], r_1[1:])
print(similarities.numpy()[0])
print("to be compared to :\n [0.7920, 0.6369, 0.1651, 0.3621]")
try :
assert_almost_equal( similarities.numpy()[0], np.array([0.7920, 0.6369, 0.1651, 0.3621]),decimal=2)
print("TEST 1 : OLLAMA PASSED.")
except AssertionError:
print("TEST 1 : OLLAMA FAILED.")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The test above does not pass: the cosine similarities do not match the expected values for mxbai-embed-large.
It works with Ollama directly but not with Ollama under LangChain.
Also, it works well with Llamafile under LangChain.
The issue seems to be the same as the one reported here: [https://github.com/ollama/ollama/issues/4207](https://github.com/ollama/ollama/issues/4207)
Why is it not fixed in LangChain?
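For reference, this is the comparison I ran (sketch; it assumes an Ollama server on the default local port). If the two similarities differ, the discrepancy is introduced by the wrapper rather than by Ollama itself:
```python
import requests
from langchain_community.embeddings import OllamaEmbeddings
from sentence_transformers.util import cos_sim

texts = ["A man is eating a piece of bread", "A man is eating food."]

# Direct calls to the Ollama REST API (default local endpoint).
direct = [
    requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "mxbai-embed-large", "prompt": t},
    ).json()["embedding"]
    for t in texts
]

# The same texts through LangChain's wrapper.
wrapped = OllamaEmbeddings(model="mxbai-embed-large").embed_documents(texts)

print("direct :", cos_sim(direct[0], direct[1]).item())
print("wrapped:", cos_sim(wrapped[0], wrapped[1]).item())
```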
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:13:18 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6030
> Python Version: 3.10.4 (main, Mar 31 2022, 03:37:37) [Clang 12.0.0 ]
Package Information
-------------------
> langchain_core: 0.2.20
> langchain: 0.2.8
> langchain_community: 0.2.7
> langsmith: 0.1.88
> langchain_chroma: 0.1.1
> langchain_text_splitters: 0.2.2
ollama : 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | mxbai-embed-large embedding not consistent with original paper | https://api.github.com/repos/langchain-ai/langchain/issues/24357/comments | 1 | 2024-07-17T17:30:05Z | 2024-07-24T07:45:45Z | https://github.com/langchain-ai/langchain/issues/24357 | 2,414,158,892 | 24,357 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
    def _create_index(self) -> None:
        """Create an index on the collection"""
from pymilvus import Collection, MilvusException
if isinstance(self.col, Collection) and self._get_index() is None:
try:
# If no index params, use a default HNSW based one
if self.index_params is None:
self.index_params = {
"metric_type": "L2",
"index_type": "HNSW",
"params": {"M": 8, "efConstruction": 64},
}
try:
self.col.create_index(
self._vector_field,
index_params=self.index_params,
using=self.alias,
)
# If default did not work, most likely on Zilliz Cloud
except MilvusException:
# Use AUTOINDEX based index
self.index_params = {
"metric_type": "L2",
"index_type": "AUTOINDEX",
"params": {},
}
self.col.create_index(
self._vector_field,
index_params=self.index_params,
using=self.alias,
)
logger.debug(
"Successfully created an index on collection: %s",
self.collection_name,
)
except MilvusException as e:
logger.error(
"Failed to create an index on collection: %s", self.collection_name
)
                raise e
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
We are trying to use the langchain-milvus library to create a Milvus collection with metadata fields. The latest version of Milvus supports scalar indexes on other columns as well, and we need scalar indexes for better performance when filtering data.
Currently, langchain-milvus only supports creating an index on the VECTOR field.
We can use metadata_schema logic to support indexing on Scalar fields.
https://github.com/langchain-ai/langchain/pull/23219
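Until this is supported natively, the workaround we are considering is to create the scalar index directly with pymilvus on the collection that langchain-milvus builds. Sketch only — `source` is just an example metadata field, and `INVERTED` assumes a Milvus version with scalar index support:
```python
# vector_store is an existing langchain_milvus.Milvus instance;
# vector_store.col is the underlying pymilvus Collection (see the code above).
collection = vector_store.col

collection.release()  # may be required before adding a new index
collection.create_index(
    field_name="source",                      # hypothetical metadata field
    index_name="idx_source",
    index_params={"index_type": "INVERTED"},  # scalar index type
)
collection.load()
```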
### System Info
[langchain-core==0.2.20](https://github.com/langchain-ai/langchain/releases/tag/langchain-core%3D%3D0.2.20)
[langchain-community==0.2.7](https://github.com/langchain-ai/langchain/releases/tag/langchain-community%3D%3D0.2.7) | Support scalar field indexing for milvus collection creation | https://api.github.com/repos/langchain-ai/langchain/issues/24343/comments | 5 | 2024-07-17T12:15:08Z | 2024-07-18T08:39:51Z | https://github.com/langchain-ai/langchain/issues/24343 | 2,413,462,272 | 24,343 |
[
"hwchase17",
"langchain"
] | ### URL
_No response_
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
_No response_ | how to specify a seed when calling the chatopenai model to ensure the stability of the output results. | https://api.github.com/repos/langchain-ai/langchain/issues/24336/comments | 0 | 2024-07-17T08:58:00Z | 2024-07-17T09:00:36Z | https://github.com/langchain-ai/langchain/issues/24336 | 2,413,053,515 | 24,336 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
import requests
import yaml
os.environ["OPENAI_API_KEY"] = "sk-REDACTED"
from langchain_community.agent_toolkits.openapi import planner
from langchain_openai.chat_models import ChatOpenAI
from langchain_community.agent_toolkits.openapi.spec import reduce_openapi_spec
from langchain.requests import RequestsWrapper
from requests.packages.urllib3.exceptions import InsecureRequestWarning
# Ignore SSL warnings
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
with open("/home/ehkim/git/testprj/code_snippet/swagger.yaml") as f:
data = yaml.load(f, Loader=yaml.FullLoader)
swagger_api_spec = reduce_openapi_spec(data)
def construct_superset_aut_headers(url=None):
import requests
url = "https://superset.mydomain.com/api/v1/security/login"
payload = {
"username": "myusername",
"password": "mypassword",
"provider": "db",
"refresh": True
}
headers = {
"Content-Type": "application/json"
}
response = requests.post(url, json=payload, headers=headers, verify=False)
data = response.json()
return {"Authorization": f"Bearer {data['access_token']}"}
from langchain.globals import set_debug
set_debug(True)
llm = ChatOpenAI(model='gpt-4o')
swagger_requests_wrapper = RequestsWrapper(headers=construct_superset_aut_headers(), verify=False)
superset_agent = planner.create_openapi_agent(
swagger_api_spec,
swagger_requests_wrapper,
llm, allow_dangerous_requests=True,
handle_parsing_errors=True)
superset_agent.run(
"""
1. Get the dataset using the following information. (tool: requests_post, API: /api/v1/dataset/get_or_create/, database_id: 1, table_name: raw_esg_volume, response : {{'result' : {{'table_id': (dataset_id)}}}})
2. Retrieve the dataset information obtained in step 1. (tool: requests_get, API: /api/v1/dataset/dataset/{{dataset_id}}/, params: None)
3. Create a chart referencing the dataset obtained in step 2. The chart should plot the trend of total, online_news, and (total - online_news) values as a line chart. (tool: requests_post, API: /api/v1/chart/, database_id: 1)
4. Return the URL of the created chart. https://superset.mydomain.com/explore/?slice_id={{chart_id}}
When specifying the action, only write the tool name without any additional explanation.
"""
)
```
In this file, I used a swagger.yaml file obtained from https://superset.demo.datahubproject.io/api/v1/_openapi.
That endpoint serves JSON, so I converted it with the code below.
```python
import json
import yaml
# read file
with open('swagger.json', 'r') as json_file:
json_data = json.load(json_file)
# write file
with open('swagger.yaml', 'w') as yaml_file:
yaml.dump(json_data, yaml_file, default_flow_style=False)
```
### Error Message and Stack Trace (if applicable)
(myenv) ehkim@ehkim-400TEA-400SEA:~/git/testprj/code_snippet$ python openapi-agent.py
/home/ehkim/anaconda3/envs/myenv/lib/python3.12/site-packages/langchain/_api/module_import.py:92: LangChainDeprecationWarning: Importing RequestsWrapper from langchain is deprecated. Please replace deprecated imports:
>> from langchain import RequestsWrapper
with new imports of:
>> from langchain_community.utilities import RequestsWrapper
You can use the langchain cli to **automatically** upgrade many imports. Please see documentation here <https://python.langchain.com/v0.2/docs/versions/v0_2/>
warn_deprecated(
Traceback (most recent call last):
File "/home/ehkim/git/testprj/code_snippet/openapi-agent.py", line 23, in <module>
swagger_api_spec = reduce_openapi_spec(data)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ehkim/anaconda3/envs/myenv/lib/python3.12/site-packages/langchain_community/agent_toolkits/openapi/spec.py", line 53, in reduce_openapi_spec
(name, description, dereference_refs(docs, full_schema=spec))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ehkim/anaconda3/envs/myenv/lib/python3.12/site-packages/langchain_core/utils/json_schema.py", line 108, in dereference_refs
else _infer_skip_keys(schema_obj, full_schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ehkim/anaconda3/envs/myenv/lib/python3.12/site-packages/langchain_core/utils/json_schema.py", line 80, in _infer_skip_keys
keys += _infer_skip_keys(v, full_schema, processed_refs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ehkim/anaconda3/envs/myenv/lib/python3.12/site-packages/langchain_core/utils/json_schema.py", line 80, in _infer_skip_keys
keys += _infer_skip_keys(v, full_schema, processed_refs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ehkim/anaconda3/envs/myenv/lib/python3.12/site-packages/langchain_core/utils/json_schema.py", line 76, in _infer_skip_keys
ref = _retrieve_ref(v, full_schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ehkim/anaconda3/envs/myenv/lib/python3.12/site-packages/langchain_core/utils/json_schema.py", line 17, in _retrieve_ref
out = out[int(component)]
~~~^^^^^^^^^^^^^^^^
KeyError: 400
### Description
I'm trying to use the langchain library to execute the OpenAPI Agent and fully interpret an OpenAPI specification using the reduce_openapi_spec function in my script.
I expect the agent to execute normally without any errors.
Instead, it raises a KeyError: 400.
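I have not confirmed the root cause, but since the traceback ends in `out = out[int(component)]` with `KeyError: 400`, my guess is that a `$ref` whose last component is a status code (e.g. `#/components/responses/400`) is being treated as a list index. As a workaround I am experimenting with stripping error responses before calling `reduce_openapi_spec` (sketch, path layout assumed from the Superset spec):
```python
# Workaround sketch (unverified): keep only 2xx responses so that $refs pointing
# at numeric keys such as "#/components/responses/400" never need dereferencing.
for path_item in data.get("paths", {}).values():
    for operation in path_item.values():
        if isinstance(operation, dict) and "responses" in operation:
            operation["responses"] = {
                code: resp
                for code, resp in operation["responses"].items()
                if str(code).startswith("2")
            }

swagger_api_spec = reduce_openapi_spec(data)
```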
### System Info
(myenv) ehkim@ehkim-400TEA-400SEA:~/git/testprj/code_snippet$ pip freeze | grep langchain
langchain==0.2.8
langchain-cli==0.0.21
langchain-community==0.2.7
langchain-core==0.2.20
langchain-experimental==0.0.37
langchain-google-vertexai==0.0.3
langchain-openai==0.1.16
langchain-robocorp==0.0.3
langchain-text-splitters==0.2.2
langchainhub==0.1.15 | 'KeyError: 400' occurs when using langchain_community.agent_toolkits.openapi.spec.reduce_openapi_spec. | https://api.github.com/repos/langchain-ai/langchain/issues/24335/comments | 0 | 2024-07-17T08:54:57Z | 2024-07-17T08:57:34Z | https://github.com/langchain-ai/langchain/issues/24335 | 2,413,047,320 | 24,335 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangGraph/LangChain rather than my code.
- [X] I am sure this is better as an issue [rather than a GitHub discussion](https://github.com/langchain-ai/langgraph/discussions/new/choose), since this is a LangGraph bug and not a design question.
### Example Code
```python
from langchain_openai import ChatOpenAI
from langchain_core.runnables import ConfigurableField
from langchain_core.pydantic_v1 import BaseModel
import os
os.environ["OPENAI_API_KEY"] = "..."
class Add(BaseModel):
"""Add two numbers"""
a: int
b: int
configurable_temperature = ConfigurableField(
id="temperature",
name="Temperature",
description="The temperature of the model",
)
tools = [Add]
model = ChatOpenAI(temperature=0).configurable_fields(
temperature=configurable_temperature
)
print("Model without Tools")
print("Config Specs - ", model.config_specs)
print("Config Schema Json - ", model.config_schema(include=["temperature"]).schema_json())
print("\n\nModel with Tools")
model_with_tools = model.bind_tools(tools)
print("Config Specs - ", model_with_tools.config_specs)
print("Config Schema Json - ", model_with_tools.config_schema(include=["temperature"]).schema_json())
```
### Error Message and Stack Trace (if applicable)
```shell
Model without Tools
Config Specs - [ConfigurableFieldSpec(id='temperature', annotation=<class 'float'>, name='Temperature', description='The temperature of the model', default=0.0, is_shared=False, dependencies=None)]
Config Schema Json - {"title": "RunnableConfigurableFieldsConfig", "type": "object", "properties": {"configurable": {"$ref": "#/definitions/Configurable"}}, "definitions": {"Configurable": {"title": "Configurable", "type": "object", "properties": {"temperature": {"title": "Temperature", "description": "The temperature of the model", "default": 0.0, "type": "number"}}}}}
Model with Tools
Config Specs - []
Config Schema Json - {"title": "ChatOpenAIConfig", "type": "object", "properties": {}}
```
### Description
When using the model with tools, the configurable fields are not exposed or used internally.
Am I doing something wrong? Please suggest the correct approach for setting configurable fields while using model with tool_calling.
### System Info
System Information
------------------
> OS: Linux
> OS Version: langchain-ai/langgraph#1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.2.18
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_chroma: 0.1.2
> langchain_cli: 0.0.25
> langchain_openai: 0.1.16
> langchain_text_splitters: 0.2.2
> langgraph: 0.1.8
> langserve: 0.2.2 | Configurable Fields Not available after bind_tools called on Runnable | https://api.github.com/repos/langchain-ai/langchain/issues/24341/comments | 3 | 2024-07-17T06:27:07Z | 2024-08-08T18:18:13Z | https://github.com/langchain-ai/langchain/issues/24341 | 2,413,346,088 | 24,341 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import TypedDict
from langchain_core.runnables import RunnableParallel, RunnableLambda
class Foo(TypedDict):
foo: str
class InputData(Foo):
bar: str
def forward_foo(input_data: InputData):
return input_data["foo"]
def transform_input(input_data: InputData):
foo = input_data["foo"]
bar = input_data["bar"]
return {
"transformed": foo + bar
}
foo_runnable = RunnableLambda(forward_foo)
other_runnable = RunnableLambda(transform_input)
parallel = RunnableParallel(
foo=foo_runnable,
other=other_runnable,
)
repr(parallel.input_schema.validate({ "foo": "Y", "bar": "Z" }))
# 'RunnableParallel<foo,other>Input()'
# If we remove the type annotations on forward_foo and transform_input
# args, validate() gives us the right result:
# "RunnableParallel<foo,other>Input(foo='Y', bar='Z')"
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When `TypedDict` subclasses are used to annotate the arguments of a `RunnableParallel`'s children, the `RunnableParallel` schema isn't correctly inferred from the children's schemas.
The `RunnableParallel` schema is empty, i.e. `parallel.input_schema.schema()` outputs:
```
{'title': 'RunnableParallel<foo,other>Input',
'type': 'object',
'properties': {}}
```
and `parallel.input_schema.validate()` returns an empty dict for any input.
This is problematic when exposing the `RunnableParallel` chain using Langserve, because Langserve passes the endpoint input through `schema.validate()`, which essentially clears any input as it returns an empty `dict`
The only workarounds we have found so far are either:
* remove type annotations on the `RunnableParallel` children functions
* pipe a `RunnablePassthrough` before the `RunnableParallel`: `parallel = RunnablePassthrough() | RunnableParallel()` (see the sketch below)
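For completeness, the second workaround applied to the example above looks like this; with the passthrough in front, `input_schema.validate()` keeps the fields again:
```python
from langchain_core.runnables import RunnablePassthrough

parallel_fixed = RunnablePassthrough() | RunnableParallel(
    foo=foo_runnable,
    other=other_runnable,
)

# With the passthrough in front, the input fields are no longer dropped.
print(repr(parallel_fixed.input_schema.validate({"foo": "Y", "bar": "Z"})))
```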
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
> Python Version: 3.10.13 (main, Aug 24 2023, 12:59:26) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.2.20
> langchain: 0.2.8
> langchain_community: 0.2.7
> langsmith: 0.1.88
> langchain_anthropic: 0.1.20
> langchain_cli: 0.0.25
> langchain_openai: 0.1.16
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langserve: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | RunnableParallel input schema is empty if children runnable input schemas use TypedDict's | https://api.github.com/repos/langchain-ai/langchain/issues/24326/comments | 1 | 2024-07-17T00:30:23Z | 2024-07-17T19:07:28Z | https://github.com/langchain-ai/langchain/issues/24326 | 2,412,292,456 | 24,326 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Description and Example Code
Langchain seemingly computes token usage and cost for both OpenAI and AzureOpenAI models using `OpenAICallbackHandler`. However, that relies on the fact that both the APIs retrieve the "complete" name of the called model, which is not the case in Azure OpenAI.
In my subscription I have 3 deployments of gpt-3.5-turbo corresponding to `gpt-35-turbo-0613`, `gpt-35-turbo-0312`, `gpt-35-turbo-1106` and 2 deployments of gpt-4 corresponding to `gpt-4-1106-preview` and `gpt-4-0613`. However, when calling them for inference, the model is called, respectively `gpt-35-turbo` and `gpt-4` regardless of the version. Langchain can't compute the correct cost then, despite no warning is thrown. This dictionary [here](https://github.com/langchain-ai/langchain/blob/47ed7f766a5de1ee6e876be822536cd51ccb4777/libs/community/langchain_community/callbacks/openai_info.py#L10-L116) also contains entries that would never be used because of the above, e.g. [this one](https://github.com/langchain-ai/langchain/blob/47ed7f766a5de1ee6e876be822536cd51ccb4777/libs/community/langchain_community/callbacks/openai_info.py#L68).
```python
from langchain_openai import AzureChatOpenAI
llm1 = AzureChatOpenAI(
api_version="2023-08-01-preview",
azure_endpoint="https://YOUR_ENDPOINT.openai.azure.com/",
api_key="YOUR_KEY",
azure_deployment="gpt-35-turbo-0613",
temperature=0,
)
llm2 = AzureChatOpenAI(
api_version="2023-08-01-preview",
azure_endpoint="https://YOUR_ENDPOINT.openai.azure.com/",
api_key="YOUR_KEY",
azure_deployment="gpt-35-turbo-0312",
temperature=0,
)
messages = [
(
"system",
"You are a helpful assistant that translates English to French. Translate the user sentence.",
),
("human", "I love programming."),
]
llm1.invoke(messages).response_metadata['model_name'] # gpt-35-turbo
llm2.invoke(messages).response_metadata['model_name'] # gpt-35-turbo
```
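If I remember correctly, `AzureChatOpenAI` has a `model_version` field meant for exactly this case: Azure does not return the version, so you set it yourself and it gets appended to the reported model name. Sketch only, not re-verified against the current pricing table:
```python
llm1 = AzureChatOpenAI(
    api_version="2023-08-01-preview",
    azure_endpoint="https://YOUR_ENDPOINT.openai.azure.com/",
    api_key="YOUR_KEY",
    azure_deployment="gpt-35-turbo-0613",
    model_version="0613",  # appended to the returned model name by the wrapper
    temperature=0,
)
# expected: "gpt-35-turbo-0613" instead of the bare "gpt-35-turbo"
llm1.invoke(messages).response_metadata["model_name"]
```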
### System Info
Not applicable here. | OpenAI callback is deceiving when used with Azure OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/24324/comments | 1 | 2024-07-16T22:53:49Z | 2024-07-21T08:48:15Z | https://github.com/langchain-ai/langchain/issues/24324 | 2,412,171,445 | 24,324 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/vectorstores/google_cloud_sql_pg/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The code examples generate this error:
```console
File "main.py", line 18
engine = await PostgresEngine.afrom_instance(project_id=config.google_project_id, region=config.region, instance=config.cloud_sql_connection_name, database=config.db_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: 'await' outside function
```
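The `await` in the snippet only works in an async context (for example a notebook with top-level await). Run as a plain script it needs to be wrapped, for example (sketch, `config` being the settings object from `main.py`):
```python
import asyncio
from langchain_google_cloud_sql_pg import PostgresEngine

async def main() -> None:
    engine = await PostgresEngine.afrom_instance(
        project_id=config.google_project_id,
        region=config.region,
        instance=config.cloud_sql_connection_name,
        database=config.db_name,
    )
    # ... continue with the vector store setup using `engine` ...

asyncio.run(main())
```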
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/integrations/vectorstores/google_cloud_sql_pg/> SyntaxError: 'await' outside function | https://api.github.com/repos/langchain-ai/langchain/issues/24319/comments | 1 | 2024-07-16T18:58:16Z | 2024-07-16T21:06:16Z | https://github.com/langchain-ai/langchain/issues/24319 | 2,411,848,751 | 24,319 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/retrievers/pinecone_hybrid_search/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
PineconeApiException Traceback (most recent call last)
Cell In[26], [line 1](vscode-notebook-cell:?execution_count=26&line=1)
----> [1](vscode-notebook-cell:?execution_count=26&line=1) result = retriever.invoke("foo")
File d:\Datascience_workspace_2023\genai-bootcamp-llmapps\venv\lib\site-packages\langchain_core\retrievers.py:222, in BaseRetriever.invoke(self, input, config, **kwargs)
[220](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:220) except Exception as e:
[221](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:221) run_manager.on_retriever_error(e)
--> [222](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:222) raise e
[223](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:223) else:
[224](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:224) run_manager.on_retriever_end(
[225](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:225) result,
[226](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:226) )
File d:\Datascience_workspace_2023\genai-bootcamp-llmapps\venv\lib\site-packages\langchain_core\retrievers.py:215, in BaseRetriever.invoke(self, input, config, **kwargs)
[213](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:213) _kwargs = kwargs if self._expects_other_args else {}
[214](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:214) if self._new_arg_supported:
--> [215](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:215) result = self._get_relevant_documents(
[216](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:216) input, run_manager=run_manager, **_kwargs
[217](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:217) )
[218](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:218) else:
[219](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_core/retrievers.py:219) result = self._get_relevant_documents(input, **_kwargs)
File d:\Datascience_workspace_2023\genai-bootcamp-llmapps\venv\lib\site-packages\langchain_community\retrievers\pinecone_hybrid_search.py:167, in PineconeHybridSearchRetriever._get_relevant_documents(self, query, run_manager, **kwargs)
[165](file:///D:/Datascience_workspace_2023/genai-bootcamp-llmapps/venv/lib/site-packages/langchain_community/retrievers/pinecone_hybrid_search.py:165) sparse_vec["values"] = [float(s1) for s1 in sparse_vec["values"]]
...
PineconeApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Date': 'Tue, 16 Jul 2024 17:25:55 GMT', 'Content-Type': 'application/json', 'Content-Length': '103', 'Connection': 'keep-alive', 'x-pinecone-request-latency-ms': '1', 'x-pinecone-request-id': '3784258799918705851', 'x-envoy-upstream-service-time': '2', 'server': 'envoy'})
HTTP response body: {"code":3,"message":"Vector dimension 384 does not match the dimension of the index 1536","details":[]}
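The final error says the index has dimension 1536 while the embedding model emits 384-dimensional vectors, so the index-creation cell and the chosen embedding model are out of sync. A sketch of creating the index with a matching dimension (index name and serverless spec assumed from the docs page; `embeddings` is the embedding model used in the notebook):
```python
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="...")

# Derive the dimension from the embedding model actually used,
# instead of hard-coding 1536 (OpenAI) while using a 384-dim model.
dim = len(embeddings.embed_query("dimension probe"))

pc.create_index(
    name="langchain-pinecone-hybrid-search",
    dimension=dim,
    metric="dotproduct",  # hybrid search requires the dotproduct metric
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),
)
```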
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/integrations/retrievers/pinecone_hybrid_search/> | https://api.github.com/repos/langchain-ai/langchain/issues/24317/comments | 0 | 2024-07-16T17:28:39Z | 2024-07-16T17:31:19Z | https://github.com/langchain-ai/langchain/issues/24317 | 2,411,686,931 | 24,317 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
+-----------+
| __start__ |
+-----------+
*
*
*
+--------+
| 数据分析专家 |
+--------+....
.. ...
.. ...
. ....
+---------+ ..
| 网络优化工程师 | .
+---------+ .
.. .. .
.. .. .
. .. .
+--------+ . ..
| 网络运营经理 | . ....
+--------+.... . ...
... . ...
.... . ....
.. . ..
+---------+
| __end__ |
+---------+
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
`draw_ascii` miscalculates box widths when node names contain Chinese (full-width) characters, so the ASCII graph rendered by LangGraph comes out misaligned, as shown above.
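For context on why the boxes drift: full-width CJK characters occupy two terminal columns, while a plain `len()` counts them as one. A standard-library sketch of the display-width calculation that would be needed:
```python
import unicodedata

def display_width(text: str) -> int:
    """Number of terminal columns the string occupies."""
    return sum(2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1 for ch in text)

label = "数据分析专家"
# 6 vs 12 -> boxes sized with len() come out too narrow for CJK labels
print(len(label), display_width(label))
```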
### System Info
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.40
langchain-experimental==0.0.56
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
langchainhub==0.1.15
macOS 14.3.1
Python 3.11.4 | graph_ascii multi-byte width calculation problem | https://api.github.com/repos/langchain-ai/langchain/issues/24308/comments | 0 | 2024-07-16T14:52:28Z | 2024-07-16T14:55:04Z | https://github.com/langchain-ai/langchain/issues/24308 | 2,411,371,903 | 24,308 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/text_embedding/nemo/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Could you add a link to the NeMo model so I can download the model weights? I did not find the weights for this NeMo model on Hugging Face.
When I execute with 'NV-Embed-QA-003', it gives a connection error.
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/integrations/text_embedding/nemo/> | https://api.github.com/repos/langchain-ai/langchain/issues/24305/comments | 0 | 2024-07-16T11:50:19Z | 2024-07-16T11:52:50Z | https://github.com/langchain-ai/langchain/issues/24305 | 2,410,942,439 | 24,305 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import AgentType, initialize_agent
```
### Error Message and Stack Trace (if applicable)
tests/langchain/test_langchain_model_export.py:19: in <module>
from langchain.agents import AgentType, initialize_agent
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/langchain/agents/__init__.py:36: in <module>
from langchain_core.tools import Tool, tool
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/langchain_core/tools.py:48: in <module>
from typing_extensions import Annotated, cast, get_args, get_origin
E ImportError: cannot import name 'cast' from 'typing_extensions' (/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/typing_extensions.py)
### Description
Langchain should pin typing_extensions>=4.7.0 (instead of 4.2.0) in the current dev version, otherwise we'll get `cannot import name 'cast' from 'typing_extensions' ` error
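The failing import boils down to `cast` not being re-exported by older `typing_extensions`; besides bumping the pin, a guard of this kind would sidestep it (sketch, not a patch I have tested; `typing.cast` is the same object):
```python
try:
    from typing_extensions import cast  # re-exported by newer typing_extensions
except ImportError:
    from typing import cast  # fallback for older typing_extensions
```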
### System Info
Using langchain master branch. typing_extensions==4.5.0 fails | cannot import name 'cast' from 'typing_extensions' | https://api.github.com/repos/langchain-ai/langchain/issues/24287/comments | 1 | 2024-07-16T01:14:16Z | 2024-07-17T01:21:23Z | https://github.com/langchain-ai/langchain/issues/24287 | 2,409,950,747 | 24,287 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The code is:
```python
from langchain.chat_models import AzureChatOpenAI
from config import *
chat_model = AzureChatOpenAI(
openai_api_type=OPENAI_API_TYPE,
openai_api_version=OPENAI_API_VERSION,
openai_api_key=OPENAI_API_KEY,
azure_deployment=AZURE_DEPLOYMENT,
openai_api_base=OPENAI_API_BASE
)
messages = [
(
"system",
"You are a helpful assistant that translates English to French. Translate the user sentence.",
),
("human", "I love programming."),
]
chat_model.invoke(messages)
```
Error
```sh
Traceback (most recent call last):
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
yield
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_transports/default.py", line 233, in handle_request
resp = self._pool.handle_request(req)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
raise exc from None
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
response = connection.handle_request(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
raise exc
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
stream = self._connect(request)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 122, in _connect
stream = self._network_backend.connect_tcp(**kwargs)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_backends/sync.py", line 205, in connect_tcp
with map_exceptions(exc_map):
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ConnectError: [Errno -3] Temporary failure in name resolution
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 978, in _request
response = self._client.send(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_client.py", line 914, in send
response = self._send_handling_auth(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_client.py", line 942, in _send_handling_auth
response = self._send_handling_redirects(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
response = self._send_single_request(request)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_client.py", line 1015, in _send_single_request
response = transport.handle_request(request)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_transports/default.py", line 232, in handle_request
with map_httpcore_exceptions():
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectError: [Errno -3] Temporary failure in name resolution
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/mnt/c/D/Python-dev3/rpa-infra/response_time/execution-eproc/Guardrails/Simple Bot/config/github.py", line 19, in <module>
chat_model.invoke(messages)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 158, in invoke
self.generate_prompt(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 560, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 421, in generate
raise e
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 411, in generate
self._generate_with_cache(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 632, in _generate_with_cache
result = self._generate(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_community/chat_models/openai.py", line 441, in _generate
response = self.completion_with_retry(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_community/chat_models/openai.py", line 356, in completion_with_retry
return self.client.create(**kwargs)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_utils/_utils.py", line 277, in wrapper
return func(*args, **kwargs)
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 643, in create
return self._post(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1266, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 942, in request
return self._request(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1002, in _request
return self._retry_request(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1079, in _retry_request
return self._request(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1002, in _request
return self._retry_request(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1079, in _retry_request
return self._request(
File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1012, in _request
raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am using LangChain to make a simple API call with AzureChatOpenAI, and it fails with the connection error above.
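For reference, `[Errno -3] Temporary failure in name resolution` is a DNS lookup failing for the endpoint host, so the endpoint and network setup are worth checking. Below is the equivalent call with the non-deprecated import, in case that matters (placeholders only):
```python
from langchain_openai import AzureChatOpenAI

# Sketch using the non-deprecated import; endpoint/key/deployment are placeholders.
chat_model = AzureChatOpenAI(
    azure_endpoint="https://<resource>.openai.azure.com/",
    azure_deployment="<deployment-name>",
    api_version="<api-version>",
    api_key="<key>",
)
chat_model.invoke(messages)
```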
### System Info
langchain==0.1.20
langchain-community==0.0.38
langchain-core==0.1.52
langchain-text-splitters==0.0.2 | ConnectError: [Errno -3] Temporary failure in name resolution | https://api.github.com/repos/langchain-ai/langchain/issues/24276/comments | 2 | 2024-07-15T17:14:23Z | 2024-07-31T08:17:27Z | https://github.com/langchain-ai/langchain/issues/24276 | 2,409,234,877 | 24,276 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am getting the following error:
raise ValueError(
ValueError: OpenAIChat currently only supports single prompt, got .
```
llm = AzureOpenAI(
azure_endpoint="https://.....openai.azure.com/",
deployment_name="....",
model_name="...",
openai_api_version="....",
)
def summarize(pdf):
loader = PyPDFLoader(pdf)
docs = loader.load_and_split()
chain = load_summarize_chain(llm=llm, chain_type="map_reduce", verbose=False)
summary = chain.run(docs)
print(summary)
print("\n")
return summary
```
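I suspect this is related to driving a chat deployment through the completion-style `AzureOpenAI` class. For reference, the chat wrapper variant would look like this (sketch only, assuming the deployment is a chat model such as gpt-35-turbo; I have not verified that it fixes the error):
```python
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_endpoint="https://.....openai.azure.com/",
    azure_deployment="....",
    api_version="....",
)
# load_summarize_chain accepts a chat model as well
chain = load_summarize_chain(llm=llm, chain_type="map_reduce", verbose=False)
```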
### System Info
. | raise ValueError( ValueError: OpenAIChat currently only supports single prompt, got | https://api.github.com/repos/langchain-ai/langchain/issues/24268/comments | 1 | 2024-07-15T14:07:28Z | 2024-07-22T15:50:08Z | https://github.com/langchain-ai/langchain/issues/24268 | 2,408,838,622 | 24,268 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
agent = create_structured_chat_agent(llm, agent_tools, prompt)
agent_executor = AgentExecutor(
agent=agent,
tools=agent_tools,
verbose=os.getenv("ENV", "dev") == "dev",
handle_parsing_errors='Check you output. make sure double quotes inside of "action_input" are properly escaped with a backslash. Otherwise, the JSON will not parse correctly.',
callbacks=agent_callback_manager,
return_intermediate_steps=True,
)
```
### Error Message and Stack Trace (if applicable)
An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor.
### Description
I already searched the existing issues and stumbled upon [this](https://github.com/langchain-ai/langchain/issues/14580) and [this](https://github.com/langchain-ai/langchain/issues/14947), but neither of them addresses the problem properly. I'm using an agent with a JSON output parser. The agent constructs JSON at each step like:
```json
{ "action": "Final Answer", "action_input": "Final answer"}
```
It uses [REACT](https://smith.langchain.com/hub/hwchase17/react) to construct each step's output. The problem is that whenever there is a double quote (") inside "action_input", the agent raises an ``OutputParserException``. I think this is somewhat expected, in the sense that the $JSON_BLOB will not be valid JSON anyway; the proper way is to escape the double quotes inside "action_input". I specifically told the agent to escape double quotes inside "action_input" in the initial prompt, but apparently the agent doesn't respect it. Besides, we can't rely on the agent to always escape double quotes. I think the better approach is to refactor the ``parse_json_markdown`` function. I did a bit of debugging: this method calls ``_parse_json``, and I think that method should handle escaping double quotes inside "action_input" before trying to parse it.
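To make the suggestion concrete, this is roughly the kind of repair step I mean before `json.loads` — illustrative only, the real parser would need something more robust than a single regex:
```python
import json
import re

def parse_with_escaped_action_input(blob: str) -> dict:
    """Illustrative repair: escape bare double quotes inside "action_input"
    before handing the blob to json.loads."""
    match = re.search(r'("action_input"\s*:\s*")(.*)("\s*}\s*$)', blob, re.DOTALL)
    if match:
        head, value, tail = match.groups()
        # escape any quote in the value that is not already escaped
        fixed = re.sub(r'(?<!\\)"', r'\\"', value)
        blob = blob[: match.start()] + head + fixed + tail
    return json.loads(blob)

broken = '{ "action": "Final Answer", "action_input": "He said "hello" to me"}'
print(parse_with_escaped_action_input(broken))
```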
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:16:51 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T8103
> Python Version: 3.12.0 (main, Oct 2 2023, 12:03:24) [Clang 15.0.0 (clang-1500.0.40.1)]
Package Information
-------------------
> langchain_core: 0.2.18
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_aws: 0.1.11
> langchain_experimental: 0.0.62
> langchain_openai: 0.1.16
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | An output parsing error occurred. | https://api.github.com/repos/langchain-ai/langchain/issues/24266/comments | 0 | 2024-07-15T13:51:30Z | 2024-07-16T15:11:05Z | https://github.com/langchain-ai/langchain/issues/24266 | 2,408,802,870 | 24,266 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.tools.tavily_search import TavilySearchResults, TavilyAnswer
tool_with_raw = TavilySearchResults(include_raw_content=True, max_results=1)
tool_with_raw_and_answer = TavilySearchResults(include_raw_content=True, include_answer=True, max_results=1)
tool_without_raw = TavilySearchResults(include_raw_content=False, max_results=1)
r1=tool_with_raw.invoke({'query': 'how to cook a steak?'})
print(r1)
r2=tool_without_raw.invoke({'query': 'how to cook a steak?'})
print(r2)
r3=tool_with_raw_and_answer.invoke({'query': 'how to cook a steak?'})
print(r3)
```
```python
[
{
'url': 'https://www.onceuponachef.com/recipes/how-to-cook-steak-on-the-stovetop.html',
'content': 'Pan-Seared Steaks\nPan-searing is the best way to cook a steak, and it’s also the easiest!\nIngredients\nInstructions\nPair
with\nNutrition Information\nPowered by\nThis website is written and produced for informational purposes only. When I do this again I will do for 5
minutes but will turn off the heat on my cast Iron frying pan 2 minutes before and add butter and rosemary and garlic to get the steak more to our
liking.\n I got a ribeye steak, heated the pan to the top heat and did everything like you mentioned, but after three minutes the steak was burned,
on the other side the same happened. After doing some more research, I find you have to bring the steak to room temperature before you cook it and
yiu have to snip the fat around the edges to keep it from curling. 22 Quick and Easy Recipes in 30 Minutes (or less) + 5 Chef Secrets To Make You A
Better Cook!\nFind a Recipe\nHow To Cook Steak On The Stovetop\nThis post may contain affiliate links.'
}
]
>>> r2=tool_without_raw.invoke({'query': 'how to cook a steak?'})
>>> print(r2)
[
{
'url': 'https://www.onceuponachef.com/recipes/how-to-cook-steak-on-the-stovetop.html',
'content': 'Pan-Seared Steaks\nPan-searing is the best way to cook a steak, and it’s also the easiest!\nIngredients\nInstructions\nPair
with\nNutrition Information\nPowered by\nThis website is written and produced for informational purposes only. When I do this again I will do for 5
minutes but will turn off the heat on my cast Iron frying pan 2 minutes before and add butter and rosemary and garlic to get the steak more to our
liking.\n I got a ribeye steak, heated the pan to the top heat and did everything like you mentioned, but after three minutes the steak was burned,
on the other side the same happened. After doing some more research, I find you have to bring the steak to room temperature before you cook it and
yiu have to snip the fat around the edges to keep it from curling. 22 Quick and Easy Recipes in 30 Minutes (or less) + 5 Chef Secrets To Make You A
Better Cook!\nFind a Recipe\nHow To Cook Steak On The Stovetop\nThis post may contain affiliate links.'
}
]
>>> r3=tool_with_raw_and_answer.invoke({'query': 'how to cook a steak?'})
>>> print(r3)
[
{
'url': 'https://www.onceuponachef.com/recipes/how-to-cook-steak-on-the-stovetop.html',
'content': 'Pan-Seared Steaks\nPan-searing is the best way to cook a steak, and it’s also the easiest!\nIngredients\nInstructions\nPair
with\nNutrition Information\nPowered by\nThis website is written and produced for informational purposes only. When I do this again I will do for 5
minutes but will turn off the heat on my cast Iron frying pan 2 minutes before and add butter and rosemary and garlic to get the steak more to our
liking.\n I got a ribeye steak, heated the pan to the top heat and did everything like you mentioned, but after three minutes the steak was burned,
on the other side the same happened. After doing some more research, I find you have to bring the steak to room temperature before you cook it and
yiu have to snip the fat around the edges to keep it from curling. 22 Quick and Easy Recipes in 30 Minutes (or less) + 5 Chef Secrets To Make You A
Better Cook!\nFind a Recipe\nHow To Cook Steak On The Stovetop\nThis post may contain affiliate links.'
}
]
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Hello,
I cannot get all the information requested via the constructor parameters.
It seems that only max_results is respected. I understand that there are two classes (TavilySearchResults, TavilyAnswer), but if we can instantiate TavilySearchResults with the API options, why keep the two classes?
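As a sanity check I would call the Tavily API directly to confirm the flags themselves work; the snippet below is only a sketch and assumes tavily-python's TavilyClient.search accepts these keyword arguments:

```python
from tavily import TavilyClient

client = TavilyClient(api_key="tvly-...")  # placeholder key
raw = client.search(
    query="how to cook a steak?",
    max_results=1,
    include_raw_content=True,
    include_answer=True,
)
# If the flags are honored, the response should contain an 'answer' field and
# each result should carry 'raw_content'.
print(raw.keys())
```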
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:28:58 PST 2023; root:xnu-10002.81.5~7/RELEASE_X86_64
> Python Version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 21:00:12) [Clang 16.0.6 ]
Package Information
-------------------
> langchain_core: 0.2.11
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.83
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
> langgraph: 0.1.5
> langserve: 0.2.2 | TavilySearch parameters don't change the output. | https://api.github.com/repos/langchain-ai/langchain/issues/24265/comments | 6 | 2024-07-15T13:41:05Z | 2024-07-18T00:13:03Z | https://github.com/langchain-ai/langchain/issues/24265 | 2,408,779,708 | 24,265 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
n/a
### Error Message and Stack Trace (if applicable)
n/a
### Description
The AzureOpenAIEmbeddings and AzureChatOpenAI classes accept an azure_ad_token parameter instead of an api_key.
However, AzureSearch in the langchain-community library does not support it. I was able to hack around this by copy-pasting AzureSearch from langchain-community and making some modifications:
BearerTokenCredential.py
```python
from azure.core.credentials import TokenCredential
from azure.core.credentials import AccessToken
import time
class BearerTokenCredential(TokenCredential):
def __init__(self, token):
self._token = token
def get_token(self, *scopes, **kwargs):
# The AccessToken expects the token and its expiry time in seconds.
# Here we set the expiry to an hour from now.
expiry = int(time.time()) + 3600
return AccessToken(self._token, expiry)
```
In AzureSearch.py
```
def _get_search_client(
endpoint: str,
key: str,
azure_ad_access_token: Optional[str],
index_name: str,
semantic_configuration_name: Optional[str] = None,
fields: Optional[List[SearchField]] = None,
vector_search: Optional[VectorSearch] = None,
semantic_configurations: Optional[
Union[SemanticConfiguration, List[SemanticConfiguration]]
] = None,
scoring_profiles: Optional[List[ScoringProfile]] = None,
default_scoring_profile: Optional[str] = None,
default_fields: Optional[List[SearchField]] = None,
user_agent: Optional[str] = "langchain",
cors_options: Optional[CorsOptions] = None,
async_: bool = False,
) -> Union[SearchClient, AsyncSearchClient]:
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import ResourceNotFoundError
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
from azure.search.documents import SearchClient
from azure.search.documents.aio import SearchClient as AsyncSearchClient
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
ExhaustiveKnnAlgorithmConfiguration,
ExhaustiveKnnParameters,
HnswAlgorithmConfiguration,
HnswParameters,
SearchIndex,
SemanticConfiguration,
SemanticField,
SemanticPrioritizedFields,
SemanticSearch,
VectorSearch,
VectorSearchAlgorithmKind,
VectorSearchAlgorithmMetric,
VectorSearchProfile,
)
default_fields = default_fields or []
if key is None:
if azure_ad_access_token:
credential = BearerTokenCredential(azure_ad_access_token)
else:
credential = DefaultAzureCredential()
elif key.upper() == "INTERACTIVE":
credential = InteractiveBrowserCredential()
credential.get_token("https://search.azure.com/.default")
else:
credential = AzureKeyCredential(key)
index_client: SearchIndexClient = SearchIndexClient(
endpoint=endpoint, credential=credential, user_agent=user_agent
)
```
Would it be possible to include this in the next version of langchain-community?
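For context, this is roughly how I would expect to use it once the parameter exists (the azure_ad_access_token argument is my proposed addition, not something AzureSearch supports today, and the patched-module import is hypothetical):

```python
from azure.identity import DefaultAzureCredential
from langchain_openai import AzureOpenAIEmbeddings

# Hypothetical: AzureSearch patched with the azure_ad_access_token parameter shown above.
from my_patched_azure_search import AzureSearch

token = DefaultAzureCredential().get_token("https://search.azure.com/.default").token

vector_store = AzureSearch(
    azure_search_endpoint="https://<service>.search.windows.net",
    azure_search_key=None,                # no API key available in our setup
    azure_ad_access_token=token,          # proposed parameter
    index_name="my-index",
    embedding_function=AzureOpenAIEmbeddings(azure_deployment="text-embedding-3-small").embed_query,
)
```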
### System Info
n/a | AzureSearch vector store does not support access token authentication. FIX Suggested | https://api.github.com/repos/langchain-ai/langchain/issues/24263/comments | 2 | 2024-07-15T11:54:45Z | 2024-07-17T08:17:36Z | https://github.com/langchain-ai/langchain/issues/24263 | 2,408,547,672 | 24,263 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain.prompts.prompt import PromptTemplate
from langchain.chains import GraphCypherQAChain
from langchain_openai import ChatOpenAI  # needed for the ChatOpenAI call below
CYPHER_QA_TEMPLATE = """
You're an AI cook formulating Cypher statements to navigate through a recipe database.
Schema: {schema}
Examples: {examples}
Question: {question}
"""
CYPHER_GENERATION_PROMPT = PromptTemplate(
input_variables=["schema","examples","question"],
template = CYPHER_QA_TEMPLATE)
model = ChatOpenAI(temperature=0, model_name = "gpt-4-0125-preview")
chain = GraphCypherQAChain.from_llm(graph=graph, llm=model, verbose=True, validate_cypher = True, cypher_prompt = CYPHER_GENERATION_PROMPT)
res = chain.invoke({"schema": graph.schema,"examples" : examples,"question":question})
```
### Error Message and Stack Trace (if applicable)
```
> Entering new GraphCypherQAChain chain...
Traceback (most recent call last):
File "/Users/<path_to_my_project>/src/text2cypher_langchain.py", line 129, in <module>
res = chain.invoke({"schema": graph.schema,"examples" : examples,"question":question})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/<path_to_my_project>/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 166, in invoke
raise e
File "/Users/<path_to_my_project>/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 154, in invoke
self._validate_inputs(inputs)
File "/Users/<path_to_my_project>/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 284, in _validate_inputs
raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {'query'}
```
### Description
I'm getting a missing key error when passing custom arguments in `PromptTemplate` and `GraphCypherQAChain`.
This seems similar to #19560 now closed.
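In case it is useful for triage, the workaround I am currently trying (my own guess, not verified across versions) is to pre-fill the extra prompt variable with .partial() and to invoke the chain with its default input key "query", since the chain appears to supply {schema} and {question} itself:

```python
CYPHER_GENERATION_PROMPT = PromptTemplate(
    input_variables=["schema", "examples", "question"],
    template=CYPHER_QA_TEMPLATE,
).partial(examples=examples)  # pre-fill the custom variable

chain = GraphCypherQAChain.from_llm(
    graph=graph,
    llm=model,
    verbose=True,
    validate_cypher=True,
    cypher_prompt=CYPHER_GENERATION_PROMPT,
)
res = chain.invoke({"query": question})  # "query" is the chain's expected input key
```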
### System Info
- langchain==0.2.7
- MacOS 13.6.7 (Ventura)
- python 3.11.4 | Missing key error - Using PromptTemplate and GraphCypherQAChain. | https://api.github.com/repos/langchain-ai/langchain/issues/24260/comments | 8 | 2024-07-15T10:00:01Z | 2024-07-17T19:56:51Z | https://github.com/langchain-ai/langchain/issues/24260 | 2,408,338,395 | 24,260 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
async def run_chatbot(vectorstore, session_id, uid, chatbot_data):
try:
openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
raise ValueError("Missing OpenAI API key in environment variables")
print(chatbot_data['activeModel'])
model = ChatOpenAI(
temperature=0.5,
model_name=chatbot_data['activeModel'],
openai_api_key=openai_api_key,
)
firestore_config = {
"collection_name": "chathistory",
"session_id": session_id,
"user_id": uid,
}
chat_history = FirestoreChatMessageHistory(**firestore_config)
memory = ConversationBufferWindowMemory(
chat_history=chat_history,
memory_key="chat_history",
input_key="question",
output_key="text",
)
# retrieval qa chain
qa = RetrievalQA.from_chain_type(
llm=model,
chain_type="stuff",
retriever=vectorstore.as_retriever()
)
qa_tool = Tool(
name='Knowledge Base',
func=qa.run,
description=(
'use this tool when answering general knowledge queries to get '
'more information about the topic'
)
)
tools = [qa_tool]
agent = initialize_agent(
agent='chat-conversational-react-description',
tools=tools,
llm=model,
verbose=True,
max_iterations=3,
early_stopping_method='generate',
memory=memory,
)
### Error Message and Stack Trace (if applicable)
variable chat_history should be a list of base messages, got
### Description
I don't know what the problem is.
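If it helps with triage: my (unverified) guess is that the error comes from the agent's chat prompt expecting chat_history to be a list of message objects while the memory hands it a plain string. This is what I plan to try next — note chat_memory= and return_messages=True:

```python
memory = ConversationBufferWindowMemory(
    chat_memory=chat_history,   # the FirestoreChatMessageHistory instance
    memory_key="chat_history",
    input_key="question",
    output_key="text",
    return_messages=True,       # give the agent BaseMessage objects, not a flattened string
)
```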
### System Info
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types @ file:///private/var/folders/nz/j6p8yfhx1mv_0grj5xl4650h0000gp/T/abs_1fa2djihwb/croot/annotated-types_1709542925772/work
anyio @ file:///private/var/folders/k1/30mswbxs7r1g6zwn8y4fyt500000gp/T/abs_a17a7759g2/croot/anyio_1706220182417/work
asgiref==3.8.1
async-timeout==4.0.3
attrs==23.2.0
bidict==0.23.1
blinker==1.7.0
CacheControl==0.14.0
cachetools==5.3.3
certifi==2024.2.2
cffi==1.16.0
charset-normalizer==3.3.2
click==8.1.7
cryptography==42.0.5
dataclasses-json==0.6.4
distro @ file:///private/var/folders/nz/j6p8yfhx1mv_0grj5xl4650h0000gp/T/abs_ddkyz0575y/croot/distro_1714488254309/work
exceptiongroup @ file:///private/var/folders/nz/j6p8yfhx1mv_0grj5xl4650h0000gp/T/abs_b2258scr33/croot/exceptiongroup_1706031391815/work
firebase-admin==6.5.0
Flask==3.0.3
Flask-Cors==4.0.0
Flask-SocketIO==5.3.6
frozenlist==1.4.1
google-api-core==2.18.0
google-api-python-client==2.127.0
google-auth==2.29.0
google-auth-httplib2==0.2.0
google-cloud-core==2.4.1
google-cloud-firestore==2.16.0
google-cloud-storage==2.16.0
google-crc32c==1.5.0
google-resumable-media==2.7.0
googleapis-common-protos==1.63.0
grpcio==1.62.2
grpcio-status==1.62.2
h11 @ file:///private/var/folders/k1/30mswbxs7r1g6zwn8y4fyt500000gp/T/abs_110bmw2coo/croot/h11_1706652289620/work
httpcore @ file:///private/var/folders/k1/30mswbxs7r1g6zwn8y4fyt500000gp/T/abs_fcxiho9nv7/croot/httpcore_1706728465004/work
httplib2==0.22.0
httpx @ file:///private/var/folders/k1/30mswbxs7r1g6zwn8y4fyt500000gp/T/abs_727e6zfsxn/croot/httpx_1706887102687/work
idna @ file:///private/var/folders/k1/30mswbxs7r1g6zwn8y4fyt500000gp/T/abs_a12xpo84t2/croot/idna_1714398852854/work
itsdangerous==2.2.0
Jinja2==3.1.3
jsonpatch==1.33
jsonpointer==2.4
langchain==0.1.16
langchain-community==0.0.34
langchain-core==0.2.17
langchain-google-firestore==0.2.1
langchain-openai==0.1.16
langchain-pinecone==0.1.1
langchain-text-splitters==0.0.1
langsmith==0.1.85
MarkupSafe==2.1.5
marshmallow==3.21.1
more-itertools==10.2.0
msgpack==1.0.8
multidict==6.0.5
mypy-extensions==1.0.0
numpy==1.26.4
openai==1.35.13
orjson==3.10.1
packaging==23.2
pinecone-client==3.2.2
proto-plus==1.23.0
protobuf==4.25.3
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycparser==2.22
pydantic @ file:///private/var/folders/k1/30mswbxs7r1g6zwn8y4fyt500000gp/T/abs_0ai8cvgm2c/croot/pydantic_1709577986211/work
pydantic_core @ file:///private/var/folders/k1/30mswbxs7r1g6zwn8y4fyt500000gp/T/abs_06smitnu98/croot/pydantic-core_1709573985903/work
PyJWT==2.8.0
pyparsing==3.1.2
python-dotenv==1.0.1
python-engineio==4.9.0
python-socketio==5.11.2
PyYAML==6.0.1
regex==2024.5.15
requests==2.31.0
rsa==4.9
simple-websocket==1.0.0
sniffio @ file:///private/var/folders/nz/j6p8yfhx1mv_0grj5xl4650h0000gp/T/abs_1573pknjrg/croot/sniffio_1705431298885/work
SQLAlchemy==2.0.29
tenacity==8.2.3
tiktoken==0.7.0
tqdm==4.66.2
typing-inspect==0.9.0
typing_extensions @ file:///private/var/folders/nz/j6p8yfhx1mv_0grj5xl4650h0000gp/T/abs_93dg13ilv4/croot/typing_extensions_1715268840722/work
uritemplate==4.1.1
urllib3==2.2.1
Werkzeug==3.0.2
wsproto==1.2.0
yarl==1.9.4
| variable chat_history should be a list of base messages, got | https://api.github.com/repos/langchain-ai/langchain/issues/24257/comments | 2 | 2024-07-15T08:55:17Z | 2024-07-17T08:52:45Z | https://github.com/langchain-ai/langchain/issues/24257 | 2,408,209,723 | 24,257 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
this is my code:
`from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField
from langchain_core.tools import tool
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain.globals import set_verbose
from langchain.globals import set_debug
set_debug(True)
@tool
def multiply(x: float, y: float) -> float:
"""Multiply 'x' times 'y'."""
return x * y
@tool
def exponentiate(x: float, y: float) -> float:
"""Raise 'x' to the 'y'."""
return x**y
@tool
def add(x: float, y: float) -> float:
"""Add 'x' and 'y'."""
return x + y
prompt = ChatPromptTemplate.from_messages([
("system", "you're a helpful assistant"),
("human", "{input}"),
("placeholder", "{agent_scratchpad}"),
])
tools = [multiply, exponentiate, add]
llm = ChatOpenAI(model="command-r", base_url="http://localhost:11434/v1")
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241", })`
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I see in the debug log that the tools are not used with any of the following local models: command-r, qwen2, llama3, nexusraven.
For regular OpenAI it worked.
Can't I use create_tool_calling_agent with Ollama models?
In https://blog.langchain.dev/tool-calling-with-langchain/ it is suggested that it should work with every model.
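The workaround I am experimenting with (not sure it is the intended path) is to use OllamaFunctions from langchain_experimental instead of pointing ChatOpenAI at Ollama's OpenAI-compatible endpoint, on the assumption that the latter simply does not expose tool calling:

```python
from langchain_experimental.llms.ollama_functions import OllamaFunctions

# Assumption: OllamaFunctions talks to the native Ollama API (http://localhost:11434 by default)
# and implements bind_tools, which create_tool_calling_agent needs.
llm = OllamaFunctions(model="llama3", format="json")
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```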
### System Info
system : Apple M3 Pro
libraries :
langchain==0.2.7
langchain-aws==0.1.10
langchain-cohere==0.1.9
langchain-community==0.2.7
langchain-core==0.2.18
langchain-experimental==0.0.62
langchain-openai==0.1.15
langchain-text-splitters==0.2.2
langchainhub==0.1.20
| issue using tools with ollama local models | https://api.github.com/repos/langchain-ai/langchain/issues/24255/comments | 0 | 2024-07-15T07:46:02Z | 2024-07-15T07:48:45Z | https://github.com/langchain-ai/langchain/issues/24255 | 2,408,084,105 | 24,255 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am trying to use LLMGraphTransformer
Despite upgrading everything still facing this issue
`from langchain.transformers import LLMGraphTransformer
ModuleNotFoundError Traceback (most recent call last)
Cell In[6], line 2
1 from langchain.llms import OpenAI
----> 2 from langchain.transformers import LLMGraphTransformer
3 import getpass
4 import os
ModuleNotFoundError: No module named 'langchain.transformers'`
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to use LLMGraphTransformer
Despite upgrading everything still facing this issue
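For anyone else landing here: as far as I can tell the class lives in langchain_experimental, not langchain, so the import below is what I believe should work (treat the exact path as my best guess):

```python
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_openai import ChatOpenAI

llm_transformer = LLMGraphTransformer(llm=ChatOpenAI(temperature=0, model="gpt-4o"))
```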
`from langchain.transformers import LLMGraphTransformer
ModuleNotFoundError Traceback (most recent call last)
Cell In[6], line 2
1 from langchain.llms import OpenAI
----> 2 from langchain.transformers import LLMGraphTransformer
3 import getpass
4 import os
ModuleNotFoundError: No module named 'langchain.transformers'`
### System Info
I am trying to use LLMGraphTransformer
Despite upgrading everything still facing this issue
`from langchain.transformers import LLMGraphTransformer
ModuleNotFoundError Traceback (most recent call last)
Cell In[6], line 2
1 from langchain.llms import OpenAI
----> 2 from langchain.transformers import LLMGraphTransformer
3 import getpass
4 import os
ModuleNotFoundError: No module named 'langchain.transformers'` | No module named 'langchain.transformers' | https://api.github.com/repos/langchain-ai/langchain/issues/24251/comments | 3 | 2024-07-15T06:24:37Z | 2024-07-15T21:11:35Z | https://github.com/langchain-ai/langchain/issues/24251 | 2,407,953,032 | 24,251 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os

from langchain_openai import AzureOpenAIEmbeddings
os.environ["AZURE_OPENAI_API_KEY"] = get_auth_token()
os.environ["OPENAI_API_KEY"] = get_auth_token()
os.environ["AZURE_OPENAI_ENDPOINT"] = 'https://workspace.openai.azure.com/'
os.environ["OPENAI_ENDPOINT"] = 'https://workspace.openai.azure.com/'
os.environ['OPENAI_API_TYPE'] = "azure"
os.environ['OPENAI_API_VERSION']='2023-07-01-preview'
embeddings = AzureOpenAIEmbeddings(
model='text-embedding-3-small',
chunk_size=1
)
embeddings.embed_documents(['text'])
```
### Error Message and Stack Trace (if applicable)
```text
---------------------------------------------------------------------------
SSLEOFError Traceback (most recent call last)
File /anaconda/envs/nlp_min/lib/python3.10/site-packages/urllib3/connectionpool.py:670, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
[669](https://vscode-remote+amlext-002b2f737562736372697074696f6e732f32316164386262372d633338382d343161352d613931612d6362336539323161356439612f7265736f7572636547726f7570732f676932756f6b79757439356e6c6c392d636f6d6d6f6e2f70726f7669646572732f4d6963726f736f66742e4d616368696e654c6561726e696e6753657276696365732f776f726b7370616365732f676932756f6b79757439356e6c6c392d616d6c2f636f6d70757465732f6465762d41313030.vscode-resource.vscode-cdn.net/anaconda/envs/nlp_min/lib/python3.10/site-packages/urllib3/connectionpool.py:669) # Make the request on the httplib connection object.
......
[706](https://vscode-remote+amlext-002b2f737562736372697074696f6e732f32316164386262372d633338382d343161352d613931612d6362336539323161356439612f7265736f7572636547726f7570732f676932756f6b79757439356e6c6c392d636f6d6d6f6e2f70726f7669646572732f4d6963726f736f66742e4d616368696e654c6561726e696e6753657276696365732f776f726b7370616365732f676932756f6b79757439356e6c6c392d616d6c2f636f6d70757465732f6465762d41313030.vscode-resource.vscode-cdn.net/anaconda/envs/nlp_min/lib/python3.10/site-packages/requests/sessions.py:706) elapsed = preferred_clock() - start
File /anaconda/envs/nlp_min/lib/python3.10/site-packages/requests/adapters.py:517, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
[513](https://vscode-remote+amlext-002b2f737562736372697074696f6e732f32316164386262372d633338382d343161352d613931612d6362336539323161356439612f7265736f7572636547726f7570732f676932756f6b79757439356e6c6c392d636f6d6d6f6e2f70726f7669646572732f4d6963726f736f66742e4d616368696e654c6561726e696e6753657276696365732f776f726b7370616365732f676932756f6b79757439356e6c6c392d616d6c2f636f6d70757465732f6465762d41313030.vscode-resource.vscode-cdn.net/anaconda/envs/nlp_min/lib/python3.10/site-packages/requests/adapters.py:513) raise ProxyError(e, request=request)
[515](https://vscode-remote+amlext-002b2f737562736372697074696f6e732f32316164386262372d633338382d343161352d613931612d6362336539323161356439612f7265736f7572636547726f7570732f676932756f6b79757439356e6c6c392d636f6d6d6f6e2f70726f7669646572732f4d6963726f736f66742e4d616368696e654c6561726e696e6753657276696365732f776f726b7370616365732f676932756f6b79757439356e6c6c392d616d6c2f636f6d70757465732f6465762d41313030.vscode-resource.vscode-cdn.net/anaconda/envs/nlp_min/lib/python3.10/site-packages/requests/adapters.py:515) if isinstance(e.reason, _SSLError):
[516](https://vscode-remote+amlext-002b2f737562736372697074696f6e732f32316164386262372d633338382d343161352d613931612d6362336539323161356439612f7265736f7572636547726f7570732f676932756f6b79757439356e6c6c392d636f6d6d6f6e2f70726f7669646572732f4d6963726f736f66742e4d616368696e654c6561726e696e6753657276696365732f776f726b7370616365732f676932756f6b79757439356e6c6c392d616d6c2f636f6d70757465732f6465762d41313030.vscode-resource.vscode-cdn.net/anaconda/envs/nlp_min/lib/python3.10/site-packages/requests/adapters.py:516) # This branch is for urllib3 v1.22 and later.
--> [517](https://vscode-remote+amlext-002b2f737562736372697074696f6e732f32316164386262372d633338382d343161352d613931612d6362336539323161356439612f7265736f7572636547726f7570732f676932756f6b79757439356e6c6c392d636f6d6d6f6e2f70726f7669646572732f4d6963726f736f66742e4d616368696e654c6561726e696e6753657276696365732f776f726b7370616365732f676932756f6b79757439356e6c6c392d616d6c2f636f6d70757465732f6465762d41313030.vscode-resource.vscode-cdn.net/anaconda/envs/nlp_min/lib/python3.10/site-packages/requests/adapters.py:517) raise SSLError(e, request=request)
[519](https://vscode-remote+amlext-002b2f737562736372697074696f6e732f32316164386262372d633338382d343161352d613931612d6362336539323161356439612f7265736f7572636547726f7570732f676932756f6b79757439356e6c6c392d636f6d6d6f6e2f70726f7669646572732f4d6963726f736f66742e4d616368696e654c6561726e696e6753657276696365732f776f726b7370616365732f676932756f6b79757439356e6c6c392d616d6c2f636f6d70757465732f6465762d41313030.vscode-resource.vscode-cdn.net/anaconda/envs/nlp_min/lib/python3.10/site-packages/requests/adapters.py:519) raise ConnectionError(e, request=request)
[521](https://vscode-remote+amlext-002b2f737562736372697074696f6e732f32316164386262372d633338382d343161352d613931612d6362336539323161356439612f7265736f7572636547726f7570732f676932756f6b79757439356e6c6c392d636f6d6d6f6e2f70726f7669646572732f4d6963726f736f66742e4d616368696e654c6561726e696e6753657276696365732f776f726b7370616365732f676932756f6b79757439356e6c6c392d616d6c2f636f6d70757465732f6465762d41313030.vscode-resource.vscode-cdn.net/anaconda/envs/nlp_min/lib/python3.10/site-packages/requests/adapters.py:521) except ClosedPoolError as e:
SSLError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))
```
### Description
I tried this code snippet along with many variations; none worked. The issue is that, under the hood, the function tries to access openaipublic.blob.core.windows.net, which is not allowed from our environment. Why is it trying to access an external link when all it needs to do is connect to our Azure OpenAI endpoint?
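From a bit of digging (treat this as a hypothesis, not a confirmed root cause): the external call appears to come from tiktoken, which the embeddings class uses for token counting/chunking and which downloads its cl100k_base encoding file on first use. Two things I would try in a locked-down network:

```python
import os

# Option 1: point tiktoken at a pre-populated offline cache so no download is needed.
os.environ["TIKTOKEN_CACHE_DIR"] = "/path/to/offline/tiktoken_cache"

# Option 2: skip the tiktoken-based context-length check entirely.
embeddings = AzureOpenAIEmbeddings(
    model="text-embedding-3-small",
    chunk_size=1,
    check_embedding_ctx_length=False,  # avoids tokenization and sends raw strings
)
```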
### System Info
langchain==0.2.7
langchain-chroma==0.1.2
langchain-community==0.0.8
langchain-core==0.2.18
langchain-openai==0.1.16
langchain-text-splitters==0.2.2 | LangChain AzureOpenAIEmbeddings is not working due to model trying to access microsoft | https://api.github.com/repos/langchain-ai/langchain/issues/24248/comments | 1 | 2024-07-15T03:20:17Z | 2024-07-17T12:52:46Z | https://github.com/langchain-ai/langchain/issues/24248 | 2,407,782,564 | 24,248 |
[
"hwchase17",
"langchain"
] | ### URL
_No response_
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
_No response_ | DOC: <Please wri知识库交叉融合,在项目使用中,我有一些公用知识库和私有知识库,我想在回答的时候将私有知识库和公用的知识库结合起来,这怎么实现?后期可以更新吗te a comprehensive title after the 'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/24246/comments | 0 | 2024-07-15T02:31:34Z | 2024-07-15T02:31:34Z | https://github.com/langchain-ai/langchain/issues/24246 | 2,407,748,868 | 24,246 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
langchain pinecone store .from_documents and .add_documents don't support id_prefix.
### Error Message and Stack Trace (if applicable)
how to insert id_prefix when upserting using langchain pinecone.from_documents??? or what is the alternative? because id_prefix is very important when we want to delete specific vectors #24235
### Description
How can I insert an id_prefix when upserting with LangChain's Pinecone from_documents? Or what is the alternative? An id_prefix is very important when we want to delete specific vectors (see #24235).
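A possible workaround while this is not supported natively — only a sketch, and it assumes from_documents/add_documents accept an ids kwarg and that the index is serverless so it supports listing by prefix; docs and embedding are placeholders:

```python
from pinecone import Pinecone
from langchain_pinecone import PineconeVectorStore

prefix = "report-2024"  # hypothetical per-file prefix
ids = [f"{prefix}#chunk{i}" for i in range(len(docs))]

vectorstore = PineconeVectorStore.from_documents(
    docs, embedding, index_name="my-index", ids=ids
)

# Later, delete every vector belonging to that file.
index = Pinecone(api_key="...").Index("my-index")
for id_batch in index.list(prefix=f"{prefix}#"):
    index.delete(ids=id_batch)
```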
### System Info
.. | how to insert id_prefix when upserting using langchain pinecone.from_documents??? or what is the alternative? because id_prefix is very important when we want to delete specific vectors #24235 | https://api.github.com/repos/langchain-ai/langchain/issues/24239/comments | 0 | 2024-07-14T12:29:11Z | 2024-07-14T12:31:37Z | https://github.com/langchain-ai/langchain/issues/24239 | 2,407,414,207 | 24,239 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
.
### Error Message and Stack Trace (if applicable)
how to insert id_prefix when upserting using langchain pinecone.from_documents??? or what is the alternative? because id_prefix is very important when search for specific files vectors and then deleting those vectores.
### Description
how to insert id_prefix when upserting using langchain pinecone.from_documents??? or what is the alternative? because id_prefix is very important when search for specific files vectors and then deleting those vectores.
### System Info
.. | how to insert id_prefix when upserting using langchain pinecone.from_documents??? or what is the alternative? because id_prefix is very important when we want to delete specific vectors | https://api.github.com/repos/langchain-ai/langchain/issues/24235/comments | 0 | 2024-07-14T09:49:14Z | 2024-07-14T12:28:04Z | https://github.com/langchain-ai/langchain/issues/24235 | 2,407,354,608 | 24,235 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_experimental.llms.ollama_functions import OllamaFunctions


# Data model
class GradeDocuments(BaseModel):
    """Binary score for relevance check on retrieved documents."""

    binary_score: str = Field(description="Documents are relevant to the question, 'yes' or 'no'")


llm = OllamaFunctions(model="gemma:2b", format="json", temperature=0)
structured_llm_documents_grader = llm.with_structured_output(GradeDocuments)
chain = grade_prompt | structured_llm_documents_grader
chain.invoke({"question": question, "document": document.page_content})
```
### Error Message and Stack Trace (if applicable)
```raised the following error:
<class 'TypeError'>: Object of type ModelMetaclass is not JSON serializable
ile "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4978, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 265, in invoke
self.generate_prompt(
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 698, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 555, in generate
raise e
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 545, in generate
self._generate_with_cache(
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 758, in _generate_with_cache
for chunk in self._stream(messages, stop=stop, **kwargs):
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 344, in _stream
for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 189, in _create_chat_stream
yield from self._create_stream(
^^^^^^^^^^^^^^^^^^^^
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/langchain_community/llms/ollama.py", line 232, in _create_stream
response = requests.post(
^^^^^^^^^^^^^^
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/requests/api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/requests/sessions.py", line 575, in request
prep = self.prepare_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/requests/sessions.py", line 484, in prepare_request
p.prepare(
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/requests/models.py", line 370, in prepare
self.prepare_body(data, files, json)
File "/Users/dman/python/chatbot_project/.venv/lib/python3.11/site-packages/requests/models.py", line 510, in prepare_body
body = complexjson.dumps(json, allow_nan=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/[email protected]/3.11.9/Frameworks/Python.fr
### Description
I am trying to use OllamaFunctions with with_structured_output, following an example in the docs. However, I am seeing `Object of type ModelMetaclass is not JSON serializable`.
### System Info
(server-py3.11) denniswong@macbook-pro server % pip freeze | grep langchain
langchain==0.2.7
langchain-cohere==0.1.9
langchain-community==0.2.7
langchain-core==0.2.18
langchain-experimental==0.0.62
langchain-openai==0.1.16
langchain-text-splitters==0.2.2
mac
python version 3.11.9 | OllamaFunction returns Object of type ModelMetaclass is not JSON serializable following example on documentation | https://api.github.com/repos/langchain-ai/langchain/issues/24234/comments | 0 | 2024-07-14T06:19:23Z | 2024-07-14T06:21:55Z | https://github.com/langchain-ai/langchain/issues/24234 | 2,407,287,144 | 24,234 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
There is an issue at https://chat.langchain.com/
### Description
When using IME to input Japanese prompts in the Chat LangChain (https://chat.langchain.com/), pressing the Enter key to confirm Japanese character conversion results in the prompt being prematurely sent. This issue likely affects other languages using IME as well. (The same type of issue as https://github.com/langchain-ai/langchain/issues/24231, but the solution is slightly different)
### Steps to Reproduce:
Use IME to input a Japanese prompt.
Press the Enter key to confirm character conversion.
### Expected Behavior:
The input should be correctly converted to Japanese.
The prompt should not be sent.
### Actual Behavior:
The prompt is sent prematurely while still being composed.
### Proposed Solution:
In my local environment, running the following code in the Chrome console resolves the issue. I suggest incorporating a similar solution into the Chat LangChain:
``` javascript
(function() {
'use strict';
var parent_element = document.querySelector("body");
var isComposing = false;
// Start of Japanese input
parent_element.addEventListener('compositionstart', function(){
if (event.target.tagName === 'TEXTAREA') {
isComposing = true;
}
});
// End of Japanese input
parent_element.addEventListener('compositionend', function(){
if (event.target.tagName === 'TEXTAREA') {
isComposing = false;
}
});
// Modified handleIMEEnter function
function handleIMEEnter(event) {
if (event.target.tagName === 'TEXTAREA') {
if (event.code == "Enter" && isComposing) {
event.stopPropagation();
}
}
}
// Register handleIMEEnter function as a keydown event listener
parent_element.addEventListener('keydown', handleIMEEnter);
})();
```
### Additional Notes:
The difference with [IME Input Handling Issue in LangChain Chat Playground](https://github.com/langchain-ai/langchain/issues/24231) is that in Chat LangChain, a new TextArea is dynamically added for each prompt submission. Therefore, it is necessary to ensure that events are fired from the newly added TextArea as well. Specifically, this is achieved by capturing and handling events that bubble up to the body element.
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 21.6.0: Thu Jun 8 23:57:12 PDT 2023; root:xnu-8020.240.18.701.6~1/RELEASE_X86_64
> Browser: Google Chrome Version 126.0.6478.127 (Official Build) (x86_64) | IME Input Handling Issue in Chat LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/24233/comments | 2 | 2024-07-13T20:14:44Z | 2024-07-15T17:06:57Z | https://github.com/langchain-ai/langchain/issues/24233 | 2,407,150,660 | 24,233 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Description:
When using IME to input Japanese prompts in the LangChain Chat Playground, pressing the Enter key to confirm Japanese character conversion results in the prompt being prematurely sent. This issue likely affects other languages using IME as well.
### Example Code
```python
from fastapi import FastAPI
from langserve import add_routes
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI
app = FastAPI()
_prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"Response to a user input in Japanese",
),
MessagesPlaceholder("chat_history"),
("human", "{text}"),
]
)
_model = ChatOpenAI(model='gpt-4o')
chain = _prompt | _model
add_routes(app,
chain,
path="/japanese-speak",
playground_type="chat",
)
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)
```
### Steps to Reproduce:
Use IME to input a Japanese prompt.
Press the Enter key to confirm character conversion.
### Expected Behavior:
The input should be correctly converted to Japanese.
The prompt should not be sent.
### Actual Behavior:
The prompt is sent prematurely while still being composed.
### Proposed Solution:
In my local environment, running the following code in the Chrome console resolves the issue. I suggest incorporating a similar solution into the Chat Playground:
```javascript
(function() {
'use strict';
var input_element = document.querySelector("textarea");
var isComposing = false;
// Start of Japanese input
input_element.addEventListener('compositionstart', function(){
isComposing = true;
});
// End of Japanese input
input_element.addEventListener('compositionend', function(){
isComposing = false;
});
// Modified handleIMEEnter function
function handleIMEEnter(event) {
if (event.code == "Enter" && isComposing) {
event.stopPropagation();
}
}
// Register handleIMEEnter function as a keydown event listener
input_element.addEventListener('keydown', handleIMEEnter, { capture: true });
})();
```
### Additional Notes:
The `{ capture: true }` option in the `addEventListener` call ensures that the `handleIMEEnter` function is called before the prompt submission event, preventing the prompt from being sent prematurely. In the implementation within the Chat Playground, setting the order of event listeners appropriately should eliminate the need for `{ capture: true }`.
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 21.6.0: Thu Jun 8 23:57:12 PDT 2023; root:xnu-8020.240.18.701.6~1/RELEASE_X86_64
> Python Version: 3.11.6 (main, Oct 16 2023, 15:57:36) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.2.17
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_anthropic: 0.1.20
> langchain_chroma: 0.1.1
> langchain_cli: 0.0.25
> langchain_experimental: 0.0.61
> langchain_openai: 0.1.16
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langchainplus_sdk: 0.0.20
> langgraph: 0.1.8
> langserve: 0.2.2
| IME Input Handling Issue in LangChain Chat Playground | https://api.github.com/repos/langchain-ai/langchain/issues/24231/comments | 0 | 2024-07-13T19:06:02Z | 2024-07-13T19:39:53Z | https://github.com/langchain-ai/langchain/issues/24231 | 2,407,101,635 | 24,231 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [ ] I am sure that this is a bug in LangChain rather than my code.
- [ ] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
PromptTemplate,
SystemMessagePromptTemplate,
)
from langchain_openai import ChatOpenAI
from langchain_community.vectorstores import Redis
from chatbot_api import config
_INDEX_NAME = "Postmarket"
rds = Redis.from_existing_index(
embedding=config.OPEN_AI_EMBEDDINGS,
index_name=_INDEX_NAME,
schema=config.INDEX_SCHEMA,
redis_url=config.REDIS_URL,
)
_template = """Your job is to use information on the documents
to answer questions about postmarket operations. Use the following
context to answer questions. Be as detailed as possible, but don't
make up any information that's not from the context. If you don't
know an answer, say you don't know. If you refer to a document, cite
your reference.
{context}
"""
system_prompt = SystemMessagePromptTemplate(
prompt=PromptTemplate(input_variables=['context'], template=_template)
)
human_prompt = HumanMessagePromptTemplate(
prompt=PromptTemplate(input_variables=['question'], template="{question}")
)
messages = [system_prompt, human_prompt]
postmarket_prompt = ChatPromptTemplate(input_variables=['context', 'question'], messages=messages)
postmarket_chain = RetrievalQAWithSourcesChain.from_chain_type(
llm=ChatOpenAI(model=config.QA_MODEL, temperature=config.TEMPERATURE),
chain_type="stuff",
retriever=rds.as_retriever(search_type="similarity", search_kwargs={"k": 8}),
return_source_documents=True,
# chain_type_kwargs={"prompt": postmarket_prompt}, # this also doesn't work throwing ValueError -> document_variable_name summaries was not found in llm_chain input_variables: ['context', 'question']
verbose=True,
)
postmarket_chain.combine_documents_chain.llm_chain.prompt = postmarket_prompt
```
Then the `postmarket_chain` is used by the tool i defined in my langchain agent as `func=postmarket_chain.invoke`
### Error Message and Stack Trace (if applicable)
```
[chain/start] [chain:AgentExecutor > tool:Premarket > chain:RetrievalQAWithSourcesChain] Entering Chain run with input:
{
"question": "What are the procedures for submitting an application for a new medical device?",
"history": []
}
[chain/start] [chain:AgentExecutor > tool:Premarket > chain:RetrievalQAWithSourcesChain > chain:StuffDocumentsChain] Entering Chain run with input:
[inputs]
[chain/start] [chain:AgentExecutor > tool:Premarket > chain:RetrievalQAWithSourcesChain > chain:StuffDocumentsChain > chain:LLMChain] Entering Chain run with input:
{
"question": "What are the procedures for submitting an application for a new medical device?",
"summaries": "Content: Page 12D. Promotional Literature\nAny (I'm cutting the rest but this text is fetched from my vectorstore, I can confirm)"
}
[llm/start] [chain:AgentExecutor > tool:Premarket > chain:RetrievalQAWithSourcesChain > chain:StuffDocumentsChain > chain:LLMChain > llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: Your job is to use information on documents\nto answer questions about premarket operations. Use the following\ncontext to answer questions. Be as detailed as possible, but don't\nmake up any information that's not from the context. If you don't\nknow an answer, say you don't know. If you refer to a document, cite\nyour reference.\n{context}\n\nHuman: What are the procedures for submitting an application for a new medical device?"
]
}
[llm/end] [chain:AgentExecutor > tool:Premarket > chain:RetrievalQAWithSourcesChain > chain:StuffDocumentsChain > chain:LLMChain > llm:ChatOpenAI] [5.16s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "I don't have the specific documents or guidelines available in the provided context to detail the procedures for submitting a 510(k) notification for a new medical device. Typically, this process involves preparing and submitting a premarket notification to the FDA to demonstrate that the new device is substantially equivalent to a legally marketed device (predicate device) not subject to premarket approval (PMA). The submission includes information about the device, its intended use, and comparative analyses, among other data. For detailed steps and requirements, it is best to refer directly to the relevant FDA guidelines or documents.",
"generation_info": {
"finish_reason": "stop",
"logprobs": null
},
"type": "ChatGeneration",
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"AIMessage"
],
```
### Description
I have a multimodel RAG system that generates answers using the texts parsed from hundreds of PDFs that are retrieved from my Redis vectorstore. And I have several chains (RetrievalQAWithSourcesChain) to find relevant contextual texts from vectorstore and append them in my chatbot llm calls. I'm having problems in correctly adding context to the system prompt. Below code throws ValueError: Missing some input keys: {'context'} .
The RetrievalQAWithSourcesChain is supposed to use the Redis retriever and append the extracted texts to the {context} I believe, but seems like it can't or there's something else i can't see.
It surprisinly works when I use double brackets around 'context' in the prompt -> {{context}}. However, when I examine the logs of the intermediate steps of langchain trying to use the agent's tools to generate an answer, my understanding is that the context is not even passed and the llm model just uses its own knowledge to give answers without using any contextual info that's supposed to be passed from vectorstore. Here are some logs below. Notice how some text data returned from vectorstore is included in summaries but then when StuffDocumentsChain passed that to llm:ChatOpenAI you see that it's not injected into the system prompt (scroll right to see), the context field still remains as {context} (it dropped the outer brackets)
Am I right in my assumption of the context is not being passed to the knowledge window correctly? How can I fix this? All the examples I see from other projects use one bracket around context when they include it in the system prompt. However I could only make the code work with double brackets and that seems like it's not injecting the context at all...
Can this be due to the index schema I used when creating the vectorstore? The schema, for reference:
```
text:
- name: content
- name: source
numeric:
- name: start_index
- name: page
vector:
- name: content_vector
algorithm: HNSW
datatype: FLOAT32
dims: 384
distance_metric: COSINE
```
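One thing I noticed while writing this up (an unverified guess on my part): the StuffDocumentsChain used by RetrievalQAWithSourcesChain seems to expect the document variable to be called `summaries`, not `context`, which would explain the `document_variable_name summaries was not found` error I saw with chain_type_kwargs. If that's right, something like this should let a custom prompt be injected properly:

```python
_template = """Your job is to use information on the documents
to answer questions about postmarket operations. Be as detailed as possible,
but don't make up any information that's not from the context. If you don't
know an answer, say you don't know.

{summaries}
"""

postmarket_chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=ChatOpenAI(model=config.QA_MODEL, temperature=config.TEMPERATURE),
    chain_type="stuff",
    retriever=rds.as_retriever(search_type="similarity", search_kwargs={"k": 8}),
    return_source_documents=True,
    chain_type_kwargs={
        "prompt": ChatPromptTemplate.from_messages(
            [("system", _template), ("human", "{question}")]
        ),
    },
)
```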
### System Info
langchain==0.2.7
langchain-community==0.2.7
langchain-core==0.2.16
langchain-openai==0.1.15
langchain-text-splitters==0.2.2
langchainhub==0.1.20
Python 3.12.4
OS: MacOS Sonoma 14.4.1 | Langchain RetrievalQAWithSourcesChain throwing ValueError: Missing some input keys: {'context'} | https://api.github.com/repos/langchain-ai/langchain/issues/24229/comments | 4 | 2024-07-13T14:46:32Z | 2024-08-03T23:07:59Z | https://github.com/langchain-ai/langchain/issues/24229 | 2,406,966,926 | 24,229 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Collab link : https://colab.research.google.com/drive/1BCat5tBZRcxUhjQ3vGJD3Zu1eiqYIAWz?usp=sharing
Code :
```
!pip install -qU langchain langchain-community langchain-core
!pip install -qU langchain-google-genai
!pip install -qU langchain-text-splitters tiktoken
!pip install -qU faiss-gpu
```
```python
import os
import getpass
os.environ["GOOGLE_API_KEY"] = getpass.getpass("Google API Key:")
import re
import requests
from langchain_community.document_loaders import BSHTMLLoader
# Download the content
response = requests.get("https://en.wikipedia.org/wiki/Car")
# Write it to a file
with open("car.html", "w", encoding="utf-8") as f:
f.write(response.text)
# Load it with an HTML parser
loader = BSHTMLLoader("car.html")
document = loader.load()[0]
# Clean up code
# Replace consecutive new lines with a single new line
document.page_content = re.sub("\n\n+", "\n", document.page_content)
from typing import List, Optional
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.pydantic_v1 import BaseModel, Field
class KeyDevelopment(BaseModel):
"""Information about a development in the history of cars."""
year: int = Field(
..., description="The year when there was an important historic development."
)
description: str = Field(
..., description="What happened in this year? What was the development?"
)
evidence: str = Field(
...,
description="Repeat in verbatim the sentence(s) from which the year and description information were extracted",
)
class ExtractionData(BaseModel):
"""Extracted information about key developments in the history of cars."""
key_developments: List[KeyDevelopment]
# Define a custom prompt to provide instructions and any additional context.
# 1) You can add examples into the prompt template to improve extraction quality
# 2) Introduce additional parameters to take context into account (e.g., include metadata
# about the document from which the text was extracted.)
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You are an expert at identifying key historic development in text. "
"Only extract important historic developments. Extract nothing if no important information can be found in the text.",
),
("human", "{text}"),
]
)
from langchain_google_genai import ChatGoogleGenerativeAI
llm = ChatGoogleGenerativeAI(model="gemini-pro")
extractor = prompt | llm.with_structured_output(
schema=ExtractionData,
include_raw=False,
)
from langchain_text_splitters import TokenTextSplitter
text_splitter = TokenTextSplitter(
# Controls the size of each chunk
chunk_size=2000,
# Controls overlap between chunks
chunk_overlap=20,
)
texts = text_splitter.split_text(document.page_content)
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_core.runnables import RunnableLambda
from langchain_google_genai import GoogleGenerativeAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter
texts = text_splitter.split_text(document.page_content)
vectorstore = FAISS.from_texts(texts, embedding=GoogleGenerativeAIEmbeddings(model="models/embedding-001"))
retriever = vectorstore.as_retriever(
search_kwargs={"k": 1}
) # Only extract from first document
rag_extractor = {
"text": retriever | (lambda docs: docs[0].page_content) # fetch content of top doc
} | extractor
results = rag_extractor.invoke("Key developments associated with cars")
```
### Error Message and Stack Trace (if applicable)
InvalidArgument Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/langchain_google_genai/chat_models.py](https://localhost:8080/#) in _chat_with_retry(**kwargs)
177 try:
--> 178 return generation_method(**kwargs)
179 # Do not retry for these errors.
25 frames
[/usr/local/lib/python3.10/dist-packages/google/ai/generativelanguage_v1beta/services/generative_service/client.py](https://localhost:8080/#) in generate_content(self, request, model, contents, retry, timeout, metadata)
826 # Send the request.
--> 827 response = rpc(
828 request,
[/usr/local/lib/python3.10/dist-packages/google/api_core/gapic_v1/method.py](https://localhost:8080/#) in __call__(self, timeout, retry, compression, *args, **kwargs)
130
--> 131 return wrapped_func(*args, **kwargs)
132
[/usr/local/lib/python3.10/dist-packages/google/api_core/retry/retry_unary.py](https://localhost:8080/#) in retry_wrapped_func(*args, **kwargs)
292 )
--> 293 return retry_target(
294 target,
[/usr/local/lib/python3.10/dist-packages/google/api_core/retry/retry_unary.py](https://localhost:8080/#) in retry_target(target, predicate, sleep_generator, timeout, on_error, exception_factory, **kwargs)
152 # defer to shared logic for handling errors
--> 153 _retry_error_helper(
154 exc,
[/usr/local/lib/python3.10/dist-packages/google/api_core/retry/retry_base.py](https://localhost:8080/#) in _retry_error_helper(exc, deadline, next_sleep, error_list, predicate_fn, on_error_fn, exc_factory_fn, original_timeout)
211 )
--> 212 raise final_exc from source_exc
213 if on_error_fn is not None:
[/usr/local/lib/python3.10/dist-packages/google/api_core/retry/retry_unary.py](https://localhost:8080/#) in retry_target(target, predicate, sleep_generator, timeout, on_error, exception_factory, **kwargs)
143 try:
--> 144 result = target()
145 if inspect.isawaitable(result):
[/usr/local/lib/python3.10/dist-packages/google/api_core/timeout.py](https://localhost:8080/#) in func_with_timeout(*args, **kwargs)
119
--> 120 return func(*args, **kwargs)
121
[/usr/local/lib/python3.10/dist-packages/google/api_core/grpc_helpers.py](https://localhost:8080/#) in error_remapped_callable(*args, **kwargs)
80 except grpc.RpcError as exc:
---> 81 raise exceptions.from_grpc_error(exc) from exc
82
InvalidArgument: 400 * GenerateContentRequest.tools[0].function_declarations[0].parameters.properties[key_developments].items: missing field.
The above exception was the direct cause of the following exception:
ChatGoogleGenerativeAIError Traceback (most recent call last)
[<ipython-input-18-49ad0989f74d>](https://localhost:8080/#) in <cell line: 1>()
----> 1 results = rag_extractor.invoke("Key developments associated with cars")
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
2794 input = step.invoke(input, config, **kwargs)
2795 else:
-> 2796 input = step.invoke(input, config)
2797 # finish the root run
2798 except BaseException as e:
/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py in invoke(self, input, config, **kwargs)
4976 **kwargs: Optional[Any],
4977 ) -> Output:
-> 4978 return self.bound.invoke(
4979 input,
4980 self._merge_configs(config),
/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in invoke(self, input, config, stop, **kwargs)
263 return cast(
264 ChatGeneration,
--> 265 self.generate_prompt(
266 [self._convert_input(input)],
267 stop=stop,
/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in generate_prompt(self, prompts, stop, callbacks, **kwargs)
696 ) -> LLMResult:
697 prompt_messages = [p.to_messages() for p in prompts]
--> 698 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
699
700 async def agenerate_prompt(
/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
553 if run_managers:
554 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 555 raise e
556 flattened_outputs = [
557 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item]
/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
543 try:
544 results.append(
--> 545 self._generate_with_cache(
546 m,
547 stop=stop,
/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in _generate_with_cache(self, messages, stop, run_manager, **kwargs)
768 else:
769 if inspect.signature(self._generate).parameters.get("run_manager"):
--> 770 result = self._generate(
771 messages, stop=stop, run_manager=run_manager, **kwargs
772 )
/usr/local/lib/python3.10/dist-packages/langchain_google_genai/chat_models.py in _generate(self, messages, stop, run_manager, tools, functions, safety_settings, tool_config, generation_config, **kwargs)
765 generation_config=generation_config,
766 )
--> 767 response: GenerateContentResponse = _chat_with_retry(
768 request=request,
769 **kwargs,
/usr/local/lib/python3.10/dist-packages/langchain_google_genai/chat_models.py in _chat_with_retry(generation_method, **kwargs)
194 raise e
195
--> 196 return _chat_with_retry(**kwargs)
197
198
/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in wrapped_f(*args, **kw)
334 copy = self.copy()
335 wrapped_f.statistics = copy.statistics # type: ignore[attr-defined]
--> 336 return copy(f, *args, **kw)
337
338 def retry_with(*args: t.Any, **kwargs: t.Any) -> WrappedFn:
/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in __call__(self, fn, *args, **kwargs)
473 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
474 while True:
--> 475 do = self.iter(retry_state=retry_state)
476 if isinstance(do, DoAttempt):
477 try:
/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in iter(self, retry_state)
374 result = None
375 for action in self.iter_state.actions:
--> 376 result = action(retry_state)
377 return result
378
/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in <lambda>(rs)
396 def _post_retry_check_actions(self, retry_state: "RetryCallState") -> None:
397 if not (self.iter_state.is_explicit_retry or self.iter_state.retry_run_result):
--> 398 self._add_action_func(lambda rs: rs.outcome.result())
399 return
400
/usr/lib/python3.10/concurrent/futures/_base.py in result(self, timeout)
449 raise CancelledError()
450 elif self._state == FINISHED:
--> 451 return self.__get_result()
452
453 self._condition.wait(timeout)
/usr/lib/python3.10/concurrent/futures/_base.py in __get_result(self)
401 if self._exception:
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in __call__(self, fn, *args, **kwargs)
476 if isinstance(do, DoAttempt):
477 try:
--> 478 result = fn(*args, **kwargs)
479 except BaseException: # noqa: B902
480 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
/usr/local/lib/python3.10/dist-packages/langchain_google_genai/chat_models.py in _chat_with_retry(**kwargs)
188
189 except google.api_core.exceptions.InvalidArgument as e:
--> 190 raise ChatGoogleGenerativeAIError(
191 f"Invalid argument provided to Gemini: {e}"
192 ) from e
ChatGoogleGenerativeAIError: Invalid argument provided to Gemini: 400 * GenerateContentRequest.tools[0].function_declarations[0].parameters.properties[key_developments].items: missing field.
### Description
Hi !
Since yesterday, I have been trying to follow this official guide in the v0.2 documentation: https://python.langchain.com/v0.2/docs/how_to/extraction_long_text/
However, it doesn't work well with Chat Google Generative AI.
The Colab link is here, if you want to try: https://colab.research.google.com/drive/1BCat5tBZRcxUhjQ3vGJD3Zu1eiqYIAWz?usp=sharing
I have followed the guide step by step, but it keeps raising an error about a missing field in the request.
For information, Chat Google Generative AI has Structured Output: https://python.langchain.com/v0.2/docs/integrations/chat/google_generative_ai/
Also, it's not about my location (I have already used Chat Google Generative AI successfully for other things).
I have tried different things with the schema, and I came to the conclusion that I can't use a schema that nests another schema (or a List of one), like:
```python
class ExtractionData(BaseModel):
"""Extracted information about key developments in the history of cars."""
key_developments: List[KeyDevelopment]
```
However I can use without problem this scheme :
```python
class KeyDevelopment(BaseModel):
"""Information about a development in the history of cars."""
year: int = Field(
..., description="The year when there was an important historic development."
)
description: str = Field(
..., description="What happened in this year? What was the development?"
)
evidence: str = Field(
...,
description="Repeat in verbatim the sentence(s) from which the year and description information were extracted",
)
```
(but responses with scheme tend to have very bad result with Chat Google, like it's 90% time non-sense)
Sorry for my english which is not really perfect and thank you for reading me !
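In the meantime, a workaround I might try (an untested sketch of mine, not from the guide): skip the nested ExtractionData wrapper, extract with the flat KeyDevelopment schema chunk by chunk, and accumulate the results in plain Python, accepting that each chunk then yields a single development:
```python
# Untested sketch: `prompt`, `llm` and `texts` are assumed to be the objects defined
# earlier in the guide; "text" is assumed to be the prompt's input variable.
flat_extractor = prompt | llm.with_structured_output(schema=KeyDevelopment)

key_developments = [flat_extractor.invoke({"text": chunk}) for chunk in texts]
```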
- ToyHugs
### System Info
https://colab.research.google.com/drive/1BCat5tBZRcxUhjQ3vGJD3Zu1eiqYIAWz?usp=sharing | [Google Generative AI] Structured Output doesn't work with advanced schema | https://api.github.com/repos/langchain-ai/langchain/issues/24225/comments | 1 | 2024-07-13T11:54:26Z | 2024-07-22T13:53:13Z | https://github.com/langchain-ai/langchain/issues/24225 | 2,406,868,969 | 24,225 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.document_loaders import NotionDBLoader
loader = NotionDBLoader(database_id='your_database_id', integration_token='your_integration_token')
documents = loader.load()
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/lulu/dev/python/deeple_io/poet/main.py", line 133, in <module>
app = asyncio.run(main())
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/Users/lulu/dev/python/deeple_io/poet/main.py", line 40, in main
documents = loader.load()
File "/Users/lulu/dev/python/deeple_io/poet/.venv/lib/python3.9/site-packages/langchain_community/document_loaders/notiondb.py", line 67, in load
return list(self.load_page(page_summary) for page_summary in page_summaries)
File "/Users/lulu/dev/python/deeple_io/poet/.venv/lib/python3.9/site-packages/langchain_community/document_loaders/notiondb.py", line 67, in <genexpr>
return list(self.load_page(page_summary) for page_summary in page_summaries)
File "/Users/lulu/dev/python/deeple_io/poet/.venv/lib/python3.9/site-packages/langchain_community/document_loaders/notiondb.py", line 137, in load_page
[item["name"] for item in prop_data["people"]]
File "/Users/lulu/dev/python/deeple_io/poet/.venv/lib/python3.9/site-packages/langchain_community/document_loaders/notiondb.py", line 137, in <listcomp>
[item["name"] for item in prop_data["people"]]
KeyError: 'name'
### Description
## **Problem Description:**
When attempting to load documents from NotionDB using the LangChain library, a `KeyError: 'name'` occurs.
## **Steps to Reproduce:**
1. Install the LangChain library.
2. Run the following code.
3. Observe the error.
## **Expected Behavior:**
The documents should be loaded correctly from NotionDB.
## **Actual Behavior:**
A `KeyError: 'name'` occurs.
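A workaround I'd consider (a hedged sketch, not the upstream fix): skip people entries that have no `name` key, since the Notion API can apparently omit it when the integration token lacks user-information capabilities. The helper below only mimics the failing line from `notiondb.py`:
```python
def extract_people(prop_data: dict) -> list:
    """Return people names, skipping entries that lack a 'name' key."""
    return [item["name"] for item in prop_data.get("people", []) if "name" in item]

# A payload shaped like the one that crashes the loader: one entry has no "name".
print(extract_people({"people": [{"id": "abc"}, {"id": "def", "name": "Lu"}]}))
# -> ['Lu']
```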
### System Info
langchain==0.2.7
langchain-community==0.2.7
langchain-core==0.2.17
langchain-openai==0.1.16
langchain-text-splitters==0.2.2
| Issue: Document loader for Notion DB doesn't supports KeyError: 'name' | https://api.github.com/repos/langchain-ai/langchain/issues/24223/comments | 0 | 2024-07-13T09:12:21Z | 2024-08-01T13:55:41Z | https://github.com/langchain-ai/langchain/issues/24223 | 2,406,813,253 | 24,223 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```Python
import os
from typing import List
import dotenv
from langchain.output_parsers import OutputFixingParser
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
dotenv.load_dotenv()
class Actor(BaseModel):
name: str = Field(description="name of an actor")
film_names: List[str] = Field(description="list of names of films they starred in")
actor_query = "Generate the filmography for a random actor."
parser = PydanticOutputParser(pydantic_object=Actor)
misformatted = "{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}"
new_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI(openai_api_base=os.getenv('OPENAI_API_BASE')))
print(new_parser.parse(misformatted))
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/zhangshenao/Desktop/LLM/happy-langchain/6-输出解析/2.使用OutputFixingParser自动修复解析器.py", line 39, in <module>
print(new_parser.parse(misformatted))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain/output_parsers/fix.py", line 74, in parse
completion = self.retry_chain.invoke(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2497, in invoke
input = step.invoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/prompts/base.py", line 179, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1593, in _call_with_config
context.run(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 380, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/prompts/base.py", line 153, in _format_prompt_with_error_handling
_inner_input = self._validate_input(inner_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/prompts/base.py", line 145, in _validate_input
raise KeyError(
KeyError: "Input to PromptTemplate is missing variables {'completion'}. Expected: ['completion', 'error', 'instructions'] Received: ['instructions', 'input', 'error']"
### Description
* I am using the OutputFixingParser component according to the official documentation, but an exception occurs (a possible workaround sketch is included below)
* The official documentation link is: https://python.langchain.com/v0.2/docs/how_to/output_parser_fixing/
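The traceback suggests the default fixing prompt expects a `completion` variable while `fix.py` passes `input`. As a stopgap (an untested sketch of mine, not from the docs), passing a custom prompt whose variables match what `fix.py` actually supplies seems worth trying:
```python
from langchain_core.prompts import PromptTemplate

fixing_prompt = PromptTemplate.from_template(
    "Instructions:\n{instructions}\n\n"
    "Completion that failed to parse:\n{input}\n\n"
    "Error:\n{error}\n\n"
    "Fix the completion so it satisfies the instructions. Return only the fixed completion."
)

new_parser = OutputFixingParser.from_llm(
    parser=parser,  # the PydanticOutputParser from the snippet above
    llm=ChatOpenAI(openai_api_base=os.getenv("OPENAI_API_BASE")),
    prompt=fixing_prompt,
)
print(new_parser.parse(misformatted))
```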
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:29 PDT 2022; root:xnu-8020.121.3~4/RELEASE_ARM64_T8101
> Python Version: 3.12.3 (v3.12.3:f6650f9ad7, Apr 9 2024, 08:18:47) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.12
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.82
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | [OutputFixingParser] I am using the OutputFixingParser component according to the official documentation, but an exception has occurred | https://api.github.com/repos/langchain-ai/langchain/issues/24219/comments | 2 | 2024-07-13T02:55:33Z | 2024-07-18T02:14:36Z | https://github.com/langchain-ai/langchain/issues/24219 | 2,406,650,388 | 24,219 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I simply tried to run the sample code in the [Agents section](https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/#agents) and it raised the following exception:
openai/_base_client.py", line 1046, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_OTNdB9zMnNa1V7U8G5Omt7Jr", 'type': 'invalid_request_error', 'param': 'messages.[2].role', 'code': None}}
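For context, my understanding (an illustration of mine, not code from the tutorial) is that the API requires every assistant `tool_call` to be answered by a tool message carrying the matching `tool_call_id` before the next model call, e.g.:
```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

history = [
    HumanMessage("What is Task Decomposition?"),
    AIMessage(
        content="",
        tool_calls=[{
            "name": "blog_post_retriever",  # tool name assumed from the tutorial
            "args": {"query": "Task Decomposition"},
            "id": "call_OTNdB9zMnNa1V7U8G5Omt7Jr",
        }],
    ),
    # Without this ToolMessage, the next model call fails with the 400 above.
    ToolMessage(content="...retrieved context...", tool_call_id="call_OTNdB9zMnNa1V7U8G5Omt7Jr"),
]
```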
I am using the following versions of langchain, langgraph, and openai:
langchain==0.2.7
langchain-community==0.2.7
langchain-core==0.2.16
langchain-openai==0.1.14
langchain-text-splitters==0.2.2
langgraph==0.1.8
langsmith==0.1.84
openai==1.35.13
### Idea or request for content:
_No response_ | DOC: openai.BadRequestError Raised when Running the "Agents" Sample Code | https://api.github.com/repos/langchain-ai/langchain/issues/24196/comments | 1 | 2024-07-12T19:03:34Z | 2024-07-15T05:39:22Z | https://github.com/langchain-ai/langchain/issues/24196 | 2,406,179,110 | 24,196 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The function `_await_for_run` inside [openai_assistant/base.py](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/openai_assistant/base.py) has a `sleep` invocation; the file only imports `sleep` from `time`, which is blocking.
Awaiting `asyncio.sleep` instead would be the correct solution, avoiding blocking calls inside an async function.
In particular this code:
```python
async def _await_for_run(self, run_id: str, thread_id: str) -> Any:
in_progress = True
while in_progress:
run = await self.async_client.beta.threads.runs.retrieve(
run_id, thread_id=thread_id
)
in_progress = run.status in ("in_progress", "queued")
if in_progress:
sleep(self.check_every_ms / 1000)
return run
```
should become:
```python
async def _await_for_run(self, run_id: str, thread_id: str) -> Any:
in_progress = True
while in_progress:
run = await self.async_client.beta.threads.runs.retrieve(
run_id, thread_id=thread_id
)
in_progress = run.status in ("in_progress", "queued")
if in_progress:
            await asyncio.sleep(self.check_every_ms / 1000)
return run
```
In addition, `asyncio` should be imported somewhere in that file.
I may open a pull request to fix this, but I would only be able to do so at the beginning of next week.
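To make the difference concrete, a minimal standalone script (nothing langchain-specific, just an illustration of why the blocking call matters):
```python
import asyncio
import time

async def poll_blocking():
    time.sleep(2)           # blocks the whole event loop

async def poll_async():
    await asyncio.sleep(2)  # yields control to other coroutines

async def heartbeat(label: str):
    for _ in range(4):
        print(label, "heartbeat", round(time.monotonic(), 2))
        await asyncio.sleep(0.5)

async def main():
    await asyncio.gather(heartbeat("blocking"), poll_blocking())  # heartbeats stall for ~2s
    await asyncio.gather(heartbeat("async"), poll_async())        # heartbeats stay regular

asyncio.run(main())
```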
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to create a fastAPI endpoint to serve langchain completions and I noticed that increasing check_every_ms would block completely my endpoint for the specified ms instead of asyncrhonously awaiting the specified time.
Considering the high response time of some openai models it is not an unlickely situation increasing that number to avoid useless excess traffic every second.
I include system info below but this issue is present also in the current langchain repo.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #41~22.04.2-Ubuntu SMP PREEMPT_DYNAMIC Mon Jun 3 11:32:55 UTC 2
> Python Version: 3.11.6 (main, Jul 5 2024, 16:48:21) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.81
> langchain_cli: 0.0.25
> langchain_text_splitters: 0.2.1
> langserve: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | Openai Assistant async _await_for_run method is not really async | https://api.github.com/repos/langchain-ai/langchain/issues/24194/comments | 4 | 2024-07-12T18:05:47Z | 2024-07-17T19:06:05Z | https://github.com/langchain-ai/langchain/issues/24194 | 2,406,104,929 | 24,194 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```typescript
const loader = new PDFLoader("./sample-docs/layout-parser-paper-fast.pdf");
const docs = await loader.load();
const textSplitter = new RecursiveCharacterTextSplitter({
chunkSize: 1000,
chunkOverlap: 200,
});
const splits = await textSplitter.splitDocuments(docs);
const model = new ChatOllama({
model: 'mistral',
temperature: 0,
baseUrl: 'http://localhost:11433',
useMMap: true,
});
const embeddings = new OllamaEmbeddings({model:"mxbai-embed-large", baseUrl: 'http://localhost:11434', onFailedAttempt: e => {throw e}, requestOptions: {
useMMap: false,
}});
const vectorstore = await ElasticVectorSearch.fromDocuments(
splits,
embeddings,
clientArgs,
);
const retriever = vectorstore.asRetriever();
const prompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");
const ragChainFromDocs = RunnableSequence.from([
{
context: retriever.pipe(formatDocumentsAsString),
question: new RunnablePassthrough(),
},
prompt,
model,
new StringOutputParser(),
]);
const stream = await ragChainFromDocs.stream(
messages.map(message =>
message.role == 'user'
? new HumanMessage(message.content)
: new AIMessage(message.content),
),
)
```
### Error Message and Stack Trace (if applicable)
DEBUG [update_slots] slot released | n_cache_tokens=211 n_ctx=512 n_past=211 n_system_tokens=0 slot_id=0 task_id=217 tid="139849943545728" timestamp=1720804402 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/embedding" remote_addr="127.0.0.1" remote_port=36972 status=200 tid="139849797445184" timestamp=1720804402
[GIN] 2024/07/12 - 14:13:22 | 200 | 1.418235476s | 127.0.0.1 | POST "/api/embeddings"
time=2024-07-12T14:13:22.804-03:00 level=DEBUG source=sched.go:348 msg="context for request finished"
time=2024-07-12T14:13:22.804-03:00 level=DEBUG source=sched.go:281 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d duration=5m0s
time=2024-07-12T14:13:22.804-03:00 level=DEBUG source=sched.go:299 msg="after processing request finished event" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d refCount=0
time=2024-07-12T14:13:22.808-03:00 level=DEBUG source=sched.go:507 msg="evaluating already loaded" model=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=220 tid="139849943545728" timestamp=1720804402
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=221 tid="139849943545728" timestamp=1720804402
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=222 tid="139849943545728" timestamp=1720804402
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=222 tid="139849943545728" timestamp=1720804402
DEBUG [update_slots] slot released | n_cache_tokens=189 n_ctx=512 n_past=189 n_system_tokens=0 slot_id=0 task_id=222 tid="139849943545728" timestamp=1720804404 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/embedding" remote_addr="127.0.0.1" remote_port=36976 status=200 tid="139849789052480" timestamp=1720804404
[GIN] 2024/07/12 - 14:13:24 | 200 | 1.277078941s | 127.0.0.1 | POST "/api/embeddings"
time=2024-07-12T14:13:24.084-03:00 level=DEBUG source=sched.go:348 msg="context for request finished"
time=2024-07-12T14:13:24.084-03:00 level=DEBUG source=sched.go:281 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d duration=5m0s
time=2024-07-12T14:13:24.084-03:00 level=DEBUG source=sched.go:299 msg="after processing request finished event" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d refCount=0
time=2024-07-12T14:13:24.087-03:00 level=DEBUG source=sched.go:507 msg="evaluating already loaded" model=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=225 tid="139849943545728" timestamp=1720804404
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=226 tid="139849943545728" timestamp=1720804404
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=227 tid="139849943545728" timestamp=1720804404
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=227 tid="139849943545728" timestamp=1720804404
DEBUG [update_slots] slot released | n_cache_tokens=165 n_ctx=512 n_past=165 n_system_tokens=0 slot_id=0 task_id=227 tid="139849943545728" timestamp=1720804405 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/embedding" remote_addr="127.0.0.1" remote_port=36976 status=200 tid="139849789052480" timestamp=1720804405
[GIN] 2024/07/12 - 14:13:25 | 200 | 1.116597159s | 127.0.0.1 | POST "/api/embeddings"
time=2024-07-12T14:13:25.203-03:00 level=DEBUG source=sched.go:348 msg="context for request finished"
time=2024-07-12T14:13:25.203-03:00 level=DEBUG source=sched.go:281 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d duration=5m0s
time=2024-07-12T14:13:25.203-03:00 level=DEBUG source=sched.go:299 msg="after processing request finished event" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d refCount=0
time=2024-07-12T14:13:25.206-03:00 level=DEBUG source=sched.go:507 msg="evaluating already loaded" model=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=230 tid="139849943545728" timestamp=1720804405
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=231 tid="139849943545728" timestamp=1720804405
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=232 tid="139849943545728" timestamp=1720804405
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=232 tid="139849943545728" timestamp=1720804405
DEBUG [update_slots] slot released | n_cache_tokens=202 n_ctx=512 n_past=202 n_system_tokens=0 slot_id=0 task_id=232 tid="139849943545728" timestamp=1720804406 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/embedding" remote_addr="127.0.0.1" remote_port=36982 status=200 tid="139849780659776" timestamp=1720804406
[GIN] 2024/07/12 - 14:13:26 | 200 | 1.398312778s | 127.0.0.1 | POST "/api/embeddings"
time=2024-07-12T14:13:26.604-03:00 level=DEBUG source=sched.go:348 msg="context for request finished"
time=2024-07-12T14:13:26.604-03:00 level=DEBUG source=sched.go:281 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d duration=5m0s
time=2024-07-12T14:13:26.604-03:00 level=DEBUG source=sched.go:299 msg="after processing request finished event" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d refCount=0
time=2024-07-12T14:13:26.607-03:00 level=DEBUG source=sched.go:507 msg="evaluating already loaded" model=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=235 tid="139849943545728" timestamp=1720804406
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=236 tid="139849943545728" timestamp=1720804406
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=237 tid="139849943545728" timestamp=1720804406
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=237 tid="139849943545728" timestamp=1720804406
DEBUG [update_slots] slot released | n_cache_tokens=187 n_ctx=512 n_past=187 n_system_tokens=0 slot_id=0 task_id=237 tid="139849943545728" timestamp=1720804407 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/embedding" remote_addr="127.0.0.1" remote_port=33576 status=200 tid="139849935148608" timestamp=1720804407
[GIN] 2024/07/12 - 14:13:27 | 200 | 1.235134467s | 127.0.0.1 | POST "/api/embeddings"
time=2024-07-12T14:13:27.842-03:00 level=DEBUG source=sched.go:348 msg="context for request finished"
time=2024-07-12T14:13:27.842-03:00 level=DEBUG source=sched.go:281 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d duration=5m0s
time=2024-07-12T14:13:27.842-03:00 level=DEBUG source=sched.go:299 msg="after processing request finished event" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d refCount=0
time=2024-07-12T14:13:27.846-03:00 level=DEBUG source=sched.go:507 msg="evaluating already loaded" model=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=240 tid="139849943545728" timestamp=1720804407
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=241 tid="139849943545728" timestamp=1720804407
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=242 tid="139849943545728" timestamp=1720804407
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=242 tid="139849943545728" timestamp=1720804407
DEBUG [update_slots] slot released | n_cache_tokens=205 n_ctx=512 n_past=205 n_system_tokens=0 slot_id=0 task_id=242 tid="139849943545728" timestamp=1720804409 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/embedding" remote_addr="127.0.0.1" remote_port=33576 status=200 tid="139849935148608" timestamp=1720804409
[GIN] 2024/07/12 - 14:13:29 | 200 | 1.439000676s | 127.0.0.1 | POST "/api/embeddings"
time=2024-07-12T14:13:29.284-03:00 level=DEBUG source=sched.go:348 msg="context for request finished"
time=2024-07-12T14:13:29.284-03:00 level=DEBUG source=sched.go:281 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d duration=5m0s
time=2024-07-12T14:13:29.284-03:00 level=DEBUG source=sched.go:299 msg="after processing request finished event" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d refCount=0
time=2024-07-12T14:13:29.287-03:00 level=DEBUG source=sched.go:507 msg="evaluating already loaded" model=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=245 tid="139849943545728" timestamp=1720804409
DEBUG [process_single_task] slot data | n_idle_slots=1 n_processing_slots=0 task_id=246 tid="139849943545728" timestamp=1720804409
DEBUG [launch_slot_with_data] slot is processing task | slot_id=0 task_id=247 tid="139849943545728" timestamp=1720804409
DEBUG [update_slots] kv cache rm [p0, end) | p0=0 slot_id=0 task_id=247 tid="139849943545728" timestamp=1720804409
DEBUG [update_slots] slot released | n_cache_tokens=202 n_ctx=512 n_past=202 n_system_tokens=0 slot_id=0 task_id=247 tid="139849943545728" timestamp=1720804410 truncated=false
DEBUG [log_server_request] request | method="POST" params={} path="/embedding" remote_addr="127.0.0.1" remote_port=33590 status=200 tid="139849918363200" timestamp=1720804410
[GIN] 2024/07/12 - 14:13:30 | 200 | 1.358210814s | 127.0.0.1 | POST "/api/embeddings"
time=2024-07-12T14:13:30.645-03:00 level=DEBUG source=sched.go:348 msg="context for request finished"
time=2024-07-12T14:13:30.645-03:00 level=DEBUG source=sched.go:281 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d duration=5m0s
time=2024-07-12T14:13:30.645-03:00 level=DEBUG source=sched.go:299 msg="after processing request finished event" modelPath=/home/rafheros/.ollama/models/blobs/sha256-819c2adf5ce6df2b6bd2ae4ca90d2a69f060afeb438d0c171db57daa02e39c3d refCount=0
[GIN] 2024/07/12 - 14:13:33 | 400 | 65.664µs | 127.0.0.1 | POST "/api/embeddings"
### Description
I'm trying to embed split PDF documents into a vector store, but OllamaEmbeddings returns 400 Bad Request on its final request. That's strange behaviour: counting the requests, there is always one extra final request that returns this status, even though all the others return 200.
### System Info
langchain v0.2.18
npm 20
wsl
next.js | OllamaEmbeddings returns error 400 Bad Request when embedding documents | https://api.github.com/repos/langchain-ai/langchain/issues/24190/comments | 1 | 2024-07-12T17:31:07Z | 2024-07-30T11:24:42Z | https://github.com/langchain-ai/langchain/issues/24190 | 2,406,041,713 | 24,190 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/structured_output/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
From:
https://python.langchain.com/v0.2/docs/how_to/structured_output/#the-with_structured_output-method
the code below:
json_schema = {
"title": "joke",
"description": "Joke to tell user.",
"type": "object",
"properties": {
"setup": {
"type": "string",
"description": "The setup of the joke",
},
"punchline": {
"type": "string",
"description": "The punchline to the joke",
},
"rating": {
"type": "integer",
"description": "How funny the joke is, from 1 to 10",
},
},
"required": ["setup", "punchline"],
}
structured_llm = llm.with_structured_output(json_schema)
structured_llm.invoke("Tell me a joke about cats")
This returns JSON-like output with single quotes, which causes issues for further processing. I'm using OpenAI's API, but I don't think the model is the issue: when prompted without the with_structured_output() method, it returns JSON with double quotes, though wrapped in preceding text and a ```json fence. Is there a way to get JSON with double quotes, without preceding text and without the ```json fence?
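For what it's worth, my guess is that with a dict schema the chain returns a Python dict, and printing a dict shows the Python repr (single quotes); serializing it explicitly gives standard JSON. A small sketch, reusing the snippet above:
```python
import json

result = structured_llm.invoke("Tell me a joke about cats")  # a Python dict
print(result)              # repr with single quotes, e.g. {'setup': ...}
print(json.dumps(result))  # valid JSON with double quotes, no ```json fence
```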
### Idea or request for content:
Can we get with_structured_output method to return in JSON format with double quotes without any preceding text and ```json? | DOC: <Issue related to /v0.2/docs/how_to/structured_output/> | https://api.github.com/repos/langchain-ai/langchain/issues/24183/comments | 0 | 2024-07-12T13:53:49Z | 2024-07-12T13:56:28Z | https://github.com/langchain-ai/langchain/issues/24183 | 2,405,669,156 | 24,183 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_experimental.llms.ollama_functions import OllamaFunctions
class Schema(BaseModel): pass
prompt = ChatPromptTemplate.from_messages([("human", [{"image_url": "data:image/jpeg;base64,{image_url}"}])])
model = OllamaFunctions()
structured_llm = prompt | model.with_structured_output(schema=Schema)
structured_llm.invoke(dict(image_url=''))
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/Users/xyz/workspace/xyz/extraction/scratch_6.py", line 14, in <module>
structured_llm.invoke(dict(image_url=''))
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2576, in invoke
input = step.invoke(input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4657, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 265, in invoke
self.generate_prompt(
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 698, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 555, in generate
raise e
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 545, in generate
self._generate_with_cache(
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 770, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_experimental/llms/ollama_functions.py", line 363, in _generate
response_message = super()._generate(
^^^^^^^^^^^^^^^^^^
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_community/chat_models/ollama.py", line 286, in _generate
final_chunk = self._chat_stream_with_aggregation(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_community/chat_models/ollama.py", line 217, in _chat_stream_with_aggregation
for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_community/chat_models/ollama.py", line 187, in _create_chat_stream
"messages": self._convert_messages_to_ollama_messages(messages),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xyz/Library/Caches/pypoetry/virtualenvs/extraction-HCVuYDLA-py3.12/lib/python3.12/site-packages/langchain_experimental/llms/ollama_functions.py", line 315, in _convert_messages_to_ollama_messages
raise ValueError(
ValueError: Only string image_url content parts are supported.
```
### Description
I'm using langchain to extract structured output from base64 encoded image using multimodal models running on ollama.
When running the example code, we get an error as `OllamaFunctions` does not support the provided message format.
If we replace the ollama `model` with an Azure GPT-4o model instead, we do not receive the error. i.e.
```python
model = AzureChatOpenAI(api_key='sk-1234',
openai_api_version="2023-12-01-preview",
azure_endpoint="https://language.openai.azure.com/")
structured_llm = prompt | model.with_structured_output(schema=Schema)
structured_llm.invoke(dict(image_url=''))
```
works as expected.
The prompt message is eventually [converted](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/prompts/chat.py#L538) into a `ImagePromptTemplate`. Which in turn is constructing the unsupported dict structure.
It appears that [`ChatOllama._convert_messages_to_ollama_messages`](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/ollama.py#L142) is trying to cope with the different formats. While the overwritten [`OllamaFunction._convert_messages_to_ollama_messages`](https://github.com/langchain-ai/langchain/blob/master/libs/experimental/langchain_experimental/llms/ollama_functions.py#L306) does not.
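A possible workaround (an untested sketch of mine): bypass `ChatPromptTemplate`/`ImagePromptTemplate` and build the `HumanMessage` content parts by hand, keeping `image_url` as a plain string, which is the only shape the error message says is supported; `my_base64_string` is a placeholder:
```python
from langchain_core.messages import HumanMessage

def make_image_message(image_b64: str) -> HumanMessage:
    # image_url stays a plain string instead of the {"url": ...} dict the template builds
    return HumanMessage(
        content=[{"type": "image_url", "image_url": f"data:image/jpeg;base64,{image_b64}"}]
    )

# structured_llm = model.with_structured_output(schema=Schema)
# structured_llm.invoke([make_image_message(my_base64_string)])
```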
### System Info
```
$ python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:09:52 PDT 2024; root:xnu-10063.121.3~5/RELEASE_X86_64
> Python Version: 3.12.4 (main, Jun 6 2024, 18:26:44) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.16
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_experimental: 0.0.62
> langchain_openai: 0.1.15
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | OllamaFunctions incompatible with ImagePromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/24174/comments | 0 | 2024-07-12T08:00:09Z | 2024-07-12T08:02:48Z | https://github.com/langchain-ai/langchain/issues/24174 | 2,404,991,555 | 24,174 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Imports assumed from the installed packages listed below; `examples`,
# `llm_embeddings`, and `logger` are defined elsewhere in my application.
from langchain_chroma import Chroma
from langchain_core.example_selectors import SemanticSimilarityExampleSelector

def create_selector():
try:
vectorstore = Chroma()
vectorstore.delete_collection()
selector = SemanticSimilarityExampleSelector.from_examples(
examples,
llm_embeddings,
vectorstore,
k=1,
input_keys=["input"],
)
return selector
except Exception as e:
logger.error(e)
return None
```
### Error Message and Stack Trace (if applicable)
'Collection' object has no attribute 'model_fields'
### Description
I'm trying to use Chroma vectorstore in Langchain, and receive the error above. Error appeared when calling `Chroma()` function.
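Not sure it matters, but my first guess is a version mismatch between chromadb and the packages below; a quick diagnostic I'd run (it just prints the installed versions):
```python
from importlib.metadata import version

for pkg in ("chromadb", "langchain-chroma", "langchain-community", "pydantic"):
    print(pkg, version(pkg))
```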
### System Info
OS: Ubuntu
OS Version: Ubuntu 22.04
Python Version: 3.10.12
### Packages
langchain==0.2.5
langchain-chroma==0.1.2
langchain_community==0.2.5
langchain-openai==0.1.8 | AttributeError: 'Collection' object has no attribute 'model_fields' | https://api.github.com/repos/langchain-ai/langchain/issues/24163/comments | 19 | 2024-07-12T02:41:35Z | 2024-08-02T07:25:40Z | https://github.com/langchain-ai/langchain/issues/24163 | 2,404,531,016 | 24,163 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
NA
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm a beginner to open source projects and submitted my first pull request (https://github.com/langchain-ai/langchain/pull/23628) two weeks ago. Initially, it reported some linting errors, but I fixed them, and the pull request was approved. However, it has been stuck at this stage for more than two weeks. I tried updating the branch and rerunning the workflows, but the same issue persists. Could you please advise on what might be the problem here?
Thank you!
![image](https://github.com/user-attachments/assets/417f63c0-f1b2-43c9-a677-ef8634338154)
### System Info
NA | unable to merge approved pull request | https://api.github.com/repos/langchain-ai/langchain/issues/24154/comments | 1 | 2024-07-11T22:33:58Z | 2024-07-12T15:02:26Z | https://github.com/langchain-ai/langchain/issues/24154 | 2,404,272,511 | 24,154 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/text_embedding/google_generative_ai/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The documentation states that I can use GoogleGenerativeAIEmbeddings from langchain-google-genai, but I got an error that I cannot import it from the library.
Link to the documentation page:
https://python.langchain.com/v0.2/docs/integrations/text_embedding/google_generative_ai/
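For reference, the import as the page shows it; if this raises an ImportError, my assumption is that the langchain-google-genai package is missing or outdated:
```python
# pip install -U langchain-google-genai   (assumed prerequisite)
from langchain_google_genai import GoogleGenerativeAIEmbeddings

embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
```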
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/integrations/text_embedding/google_generative_ai/> | https://api.github.com/repos/langchain-ai/langchain/issues/24148/comments | 1 | 2024-07-11T21:07:41Z | 2024-07-13T09:46:47Z | https://github.com/langchain-ai/langchain/issues/24148 | 2,404,128,451 | 24,148 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_aws import ChatBedrock
from pydantic import BaseModel, Field
class Joke(BaseModel):
setup: str = Field(description="The setup of the joke")
punchline: str = Field(description="The punchline to the joke")
chat = ChatBedrock(
model_id="anthropic.claude-3-haiku-20240307-v1:0",
model_kwargs={"temperature": 0.1},
region_name="my-region-name",
credentials_profile_name="my-profile-name",
streaming=True,
).bind_tools([Joke])
chat.invoke("tell me a joke")
```
### Error Message and Stack Trace (if applicable)
```shell
---------------------------------------------------------------------------
RecursionError Traceback (most recent call last)
Cell In[22], line 1
----> 1 chain.invoke("tell me a joke")
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py:4653, in RunnableBindingBase.invoke(self, input, config, **kwargs)
   4647 def invoke(
   4648     self,
   4649     input: Input,
   4650     config: Optional[RunnableConfig] = None,
   4651     **kwargs: Optional[Any],
   4652 ) -> Output:
-> 4653     return self.bound.invoke(
   4654         input,
   4655         self._merge_configs(config),
   4656         **{**self.kwargs, **kwargs},
   4657     )
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:265, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    254 def invoke(
    255     self,
    256     input: LanguageModelInput,
(...)
    260     **kwargs: Any,
    261 ) -> BaseMessage:
    262     config = ensure_config(config)
    263     return cast(
    264         ChatGeneration,
--> 265         self.generate_prompt(
    266             [self._convert_input(input)],
    267             stop=stop,
    268             callbacks=config.get("callbacks"),
    269             tags=config.get("tags"),
    270             metadata=config.get("metadata"),
    271             run_name=config.get("run_name"),
    272             run_id=config.pop("run_id", None),
    273             **kwargs,
    274         ).generations[0][0],
    275     ).message
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:698, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    690 def generate_prompt(
    691     self,
    692     prompts: List[PromptValue],
(...)
    695     **kwargs: Any,
    696 ) -> LLMResult:
    697     prompt_messages = [p.to_messages() for p in prompts]
--> 698     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:555, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    553 if run_managers:
    554     run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 555 raise e
    556 flattened_outputs = [
    557     LLMResult(generations=[res.generations], llm_output=res.llm_output)  # type: ignore[list-item]
    558     for res in results
    559 ]
    560 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:545, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    542 for i, m in enumerate(messages):
    543     try:
    544         results.append(
--> [545](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:545) self._generate_with_cache(
[546](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:546) m,
[547](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:547) stop=stop,
[548](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:548) run_manager=run_managers[i] if run_managers else None,
[549](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:549) **kwargs,
[550](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:550) )
[551](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:551) )
[552](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:552) except BaseException as e:
[553](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:553) if run_managers:
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:770, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
[768](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:768) else:
[769](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:769) if inspect.signature(self._generate).parameters.get("run_manager"):
--> [770](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:770) result = self._generate(
[771](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:771) messages, stop=stop, run_manager=run_manager, **kwargs
[772](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:772) )
[773](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:773) else:
[774](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py:774) result = self._generate(messages, stop=stop, **kwargs)
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:521, in ChatBedrock._generate(self, messages, stop, run_manager, **kwargs)
[519](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:519) if self.streaming:
[520](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:520) response_metadata: List[Dict[str, Any]] = []
--> [521](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:521) for chunk in self._stream(messages, stop, run_manager, **kwargs):
[522](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:522) completion += chunk.text
[523](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:523) response_metadata.append(chunk.message.response_metadata)
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:442, in ChatBedrock._stream(self, messages, stop, run_manager, **kwargs)
[440](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:440) if "claude-3" in self._get_model():
[441](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:441) if _tools_in_params({**kwargs}):
--> [442](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:442) result = self._generate(
[443](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:443) messages, stop=stop, run_manager=run_manager, **kwargs
[444](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:444) )
[445](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:445) message = result.generations[0].message
[446](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:446) if isinstance(message, AIMessage) and message.tool_calls is not None:
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:521, in ChatBedrock._generate(self, messages, stop, run_manager, **kwargs)
[519](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:519) if self.streaming:
[520](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:520) response_metadata: List[Dict[str, Any]] = []
--> [521](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:521) for chunk in self._stream(messages, stop, run_manager, **kwargs):
[522](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:522) completion += chunk.text
[523](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:523) response_metadata.append(chunk.message.response_metadata)
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:442, in ChatBedrock._stream(self, messages, stop, run_manager, **kwargs)
[440](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:440) if "claude-3" in self._get_model():
[441](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:441) if _tools_in_params({**kwargs}):
--> [442](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:442) result = self._generate(
[443](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:443) messages, stop=stop, run_manager=run_manager, **kwargs
[444](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:444) )
[445](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:445) message = result.generations[0].message
[446](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:446) if isinstance(message, AIMessage) and message.tool_calls is not None:
[... skipping similar frames: ChatBedrock._generate at line 521 (734 times), ChatBedrock._stream at line 442 (734 times)]
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:521, in ChatBedrock._generate(self, messages, stop, run_manager, **kwargs)
[519](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:519) if self.streaming:
[520](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:520) response_metadata: List[Dict[str, Any]] = []
--> [521](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:521) for chunk in self._stream(messages, stop, run_manager, **kwargs):
[522](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:522) completion += chunk.text
[523](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:523) response_metadata.append(chunk.message.response_metadata)
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:442, in ChatBedrock._stream(self, messages, stop, run_manager, **kwargs)
[440](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:440) if "claude-3" in self._get_model():
[441](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:441) if _tools_in_params({**kwargs}):
--> [442](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:442) result = self._generate(
[443](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:443) messages, stop=stop, run_manager=run_manager, **kwargs
[444](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:444) )
[445](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:445) message = result.generations[0].message
[446](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:446) if isinstance(message, AIMessage) and message.tool_calls is not None:
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:517, in ChatBedrock._generate(self, messages, stop, run_manager, **kwargs)
[514](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:514) llm_output: Dict[str, Any] = {}
[515](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:515) tool_calls: List[Dict[str, Any]] = []
[516](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:516) provider_stop_reason_code = self.provider_stop_reason_key_map.get(
--> [517](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:517) self._get_provider(), "stop_reason"
[518](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:518) )
[519](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:519) if self.streaming:
[520](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/chat_models/bedrock.py:520) response_metadata: List[Dict[str, Any]] = []
File ~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:585, in BedrockBase._get_provider(self)
[583](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:583) if self.provider:
[584](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:584) return self.provider
--> [585](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:585) if self.model_id.startswith("arn"):
[586](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:586) raise ValueError(
[587](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:587) "Model provider should be supplied when passing a model ARN as "
[588](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:588) "model_id"
[589](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:589) )
[591](https://file+.vscode-resource.vscode-cdn.net/Users/tommasodelorenzo/Documents/clients/kontrata/juztina/ai-core/app/~/Documents/clients/kontrata/juztina/ai-core/.venv/lib/python3.12/site-packages/langchain_aws/llms/bedrock.py:591) return self.model_id.split(".")[0]
RecursionError: maximum recursion depth exceeded while calling a Python object```
### Description
- I am trying to stream with tool calling (recently added by Anthropic).
- Setting `streaming = False` works.
- Setting `streaming = True` I get recursion error.
- The same setting works using `ChatAnthropic` class.
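
From the stack trace above, the recursion looks like a mutual call loop between `ChatBedrock._generate` and `ChatBedrock._stream` when streaming is enabled and tools are bound. A paraphrased sketch (line numbers taken from the traceback, not the exact library source):

```python
# Paraphrased from the traceback above (not the exact langchain_aws source):
#
#   ChatBedrock._generate (bedrock.py:519-521)
#       if self.streaming:
#           for chunk in self._stream(messages, stop, run_manager, **kwargs): ...
#
#   ChatBedrock._stream (bedrock.py:440-442)
#       if "claude-3" in self._get_model():
#           if _tools_in_params({**kwargs}):
#               result = self._generate(messages, stop=stop, run_manager=run_manager, **kwargs)
#
# With streaming=True and tools bound, each method immediately calls the other,
# which matches the RecursionError above.
```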
### System Info
Python 3.12.1
langchain-anthropic==0.1.19
langchain-aws==0.1.10
langchain-core==0.2.13
langchain-openai==0.1.15
langchain-qdrant==0.1.1 | `RecursionError ` in `ChatBedrock` with Anthropic model, tool calling and streaming | https://api.github.com/repos/langchain-ai/langchain/issues/24136/comments | 2 | 2024-07-11T17:44:05Z | 2024-07-25T08:58:48Z | https://github.com/langchain-ai/langchain/issues/24136 | 2,403,723,399 | 24,136 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.llms import QianfanLLMEndpoint

QianfanLLMEndpoint(qianfan_ak="xxx", qianfan_sk="xxx")
```
### Error Message and Stack Trace (if applicable)
```
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for QianfanLLMEndpoint
qianfan_ak
  str type expected (type=type_error.str)
qianfan_sk
  str type expected (type=type_error.str)
```
### Description
The pydantic validation of `qianfan_ak` / `qianfan_sk` fails because the value ends up being a `SecretStr` while the field expects a plain `str` (SecretStr != str).
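
A quick diagnostic sketch to confirm where the type mismatch comes from (field introspection only, no credentials needed; based on the assumption suggested by the "str type expected" error above):

```python
from langchain_community.llms import QianfanLLMEndpoint

# Print how the credential fields are declared, to check whether the declared
# type and the default/converted value disagree.
for name in ("qianfan_ak", "qianfan_sk"):
    field = QianfanLLMEndpoint.__fields__[name]
    print(name, field.outer_type_, repr(field.default))
```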
### System Info
langchain==0.2.7
langchain-community==0.2.7
langchain-core==0.2.13
langchain-mongodb==0.1.6
langchain-openai==0.1.15
langchain-text-splitters==0.2.2
python 3.11
mac m3 | QianfanLLMEndpoint ak/sk SecretStr ERROR | https://api.github.com/repos/langchain-ai/langchain/issues/24126/comments | 4 | 2024-07-11T15:30:30Z | 2024-07-26T01:45:07Z | https://github.com/langchain-ai/langchain/issues/24126 | 2,403,489,721 | 24,126 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables import ConfigurableField, RunnableConfig
from pydantic.v1 import SecretStr
client = ChatAnthropic(
base_url=os.environ['CHAT_ANTHROPIC_BASE_URL'],
api_key=SecretStr(os.environ['CHAT_ANTHROPIC_API_KEY']),
model_name='claude-3-opus-20240229',
).configurable_fields(
model_kwargs=ConfigurableField(
id="model_kwargs",
name="Model Kwargs",
description="Keyword arguments to pass through to the chat client (e.g. user)",
),
)
configurable = {
"model_kwargs": {"metadata": {"user_id": "testuserid"}}
}
response = client.invoke("Write me a short story", config=RunnableConfig(configurable=configurable))
print(response)
```
### Error Message and Stack Trace (if applicable)
Exception: `ValidatorError`
```
Traceback (most recent call last):
File "main.py", line 32, in <module>
response = client.invoke("Write me a short story", config=RunnableConfig(configurable=configurable))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/langchain_core/runnables/configurable.py", line 115, in invoke
runnable, config = self.prepare(config)
^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/langchain_core/runnables/configurable.py", line 104, in prepare
runnable, config = runnable._prepare(merge_configs(runnable.config, config))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/langchain_core/runnables/configurable.py", line 415, in _prepare
self.default.__class__(**{**init_params, **configurable}),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ChatAnthropic
__root__
Found metadata supplied twice. (type=value_error)
```
### Description
- I'm trying to set up a reusable chat model where I can pass in a user on each invocation
- Anthropic expects this via a `metadata` object on the `messages.create(...)` call, as described in the Anthropic API documentation
- Since it is an extra argument to the `create()` call, I believe I should be able to pass it via `model_kwargs`
- But it seems to clash with something else (I'm guessing the `metadata` field of `BaseLanguageModel`)
Is there a way around this so that I can pass the `metadata` kwarg to the `create()` call as expected? At a glance since it's nested under `model_kwargs` it shouldn't clash with other params. Are they being flattened and if so, why?
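
For reference, this is the shape of the underlying Anthropic SDK call I am trying to reach through `model_kwargs` (plain SDK sketch, not LangChain code; the `user_id` value is just an example):

```python
import anthropic

client = anthropic.Anthropic()
client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=256,
    messages=[{"role": "user", "content": "Write me a short story"}],
    metadata={"user_id": "testuserid"},  # per-request user metadata
)
```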
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
> Python Version: 3.12.3 (main, Jul 2 2024, 11:16:56) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.13
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_anthropic: 0.1.19
> langchain_openai: 0.1.15
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| ChatAnthropic - Found metadata supplied twice | https://api.github.com/repos/langchain-ai/langchain/issues/24121/comments | 2 | 2024-07-11T14:03:02Z | 2024-07-12T12:54:48Z | https://github.com/langchain-ai/langchain/issues/24121 | 2,403,275,400 | 24,121 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import asyncio
import litellm
from langchain_community.chat_models.litellm_router import ChatLiteLLMRouter
from langchain_core.messages import HumanMessage
from langchain_core.prompt_values import ChatPromptValue
from litellm import Router
from litellm.integrations.custom_logger import CustomLogger
def get_llm_router() -> Router:
"""
Return a new instance of Router, ensure to pass the following parameters so responses are cached:
* redis_host
* redis_port
* redis_password
* cache_kwargs
* cache_responses
* caching_groups
"""
raise NotImplementedError('Create your own router')
class MyLogger(CustomLogger):
async def async_log_success_event(self, kwargs, response_obj: "ModelResponse", start_time, end_time):
print(f"[MyLogger::async_log_success_event] response id: '{response_obj.id}'; cache_hit: '{kwargs.get('cache_hit', '')}'.\n\n")
my_logger = MyLogger()
litellm.callbacks = [my_logger]
async def chat():
llm = ChatLiteLLMRouter(router=get_llm_router())
msg1 = ""
msg1_count = 0
async for msg in llm.astream(
input=ChatPromptValue(messages=[HumanMessage("What's the first planet in solar system?")])):
msg1 += msg.content
if msg.content:
msg1_count += 1
print(f"msg1 (count={msg1_count}): {msg1}\n\n")
msg2 = ""
msg2_count = 0
async for msg in llm.astream(input=ChatPromptValue(messages=[HumanMessage("What's the first planet in solar system?")])):
msg2 += msg.content
if msg.content:
msg2_count += 1
print(f"msg2 (count={msg2_count}): {msg2}\n\n")
await asyncio.sleep(5)
if __name__ == "__main__":
asyncio.run(chat())
```
### Error Message and Stack Trace (if applicable)
This is the output generated running the shared code:
```
Intialized router with Routing strategy: latency-based-routing
Routing fallbacks: None
Routing context window fallbacks: None
Router Redis Caching=<litellm.caching.RedisCache object at 0x12370da10>
msg1 (count=20): The first planet in the solar system, starting from the one closest to the Sun, is Mercury.
[MyLogger::async_log_success_event] response id: 'chatcmpl-9jnacYSdnczh2zWMKi3l813lNXVtE'; cache_hit: 'None'.
msg2 (count=1): The first planet in the solar system, starting from the one closest to the Sun, is Mercury.
[MyLogger::async_log_success_event] response id: 'chatcmpl-9jnacYSdnczh2zWMKi3l813lNXVtE'; cache_hit: 'None'.
```
Notice the two lines starting with `[MyLogger::async_log_success_event]` saying `cache_hit: 'None'`. The second one is expected to be `True`, since the second call to `astream` produced a single chunk containing the entire message, i.e. it came from the cache.
### Description
I'm trying to cache LLM responses using LiteLLM router cache settings and get notified when a response was obtained from cache instead of LLM. For that purpose I've implemented a custom logger as shown in [LiteLLM docs](https://litellm.vercel.app/docs/observability/custom_callback#cache-hits).
The issue is that when I call the `astream` API, as shown in the code snippet above, the `cache_hit` flag is `None`, even when the response is returned from the cache.
When I call the `ainvoke` API (`await llm.ainvoke(...)`) the `cache_hit` flag is passed as `True` to my custom logger as expected after the second call to `ainvoke`.
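
For comparison, here is roughly the `ainvoke` variant that does report the cache hit (sketch reusing the same `get_llm_router` stub as above):

```python
async def chat_invoke():
    llm = ChatLiteLLMRouter(router=get_llm_router())
    prompt = ChatPromptValue(messages=[HumanMessage("What's the first planet in solar system?")])
    await llm.ainvoke(input=prompt)
    await llm.ainvoke(input=prompt)  # MyLogger receives cache_hit=True for this call
    await asyncio.sleep(5)
```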
### System Info
```bash
$ poetry run python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:54:10 PST 2023; root:xnu-10002.61.3~2/RELEASE_X86_64
> Python Version: 3.11.7 (main, Dec 15 2023, 12:09:04) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.2.13
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_openai: 0.1.15
> langchain_text_splitters: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | LiteLLM - cache_hit flag reported as None when using the async stream API | https://api.github.com/repos/langchain-ai/langchain/issues/24120/comments | 2 | 2024-07-11T14:00:20Z | 2024-07-17T13:11:33Z | https://github.com/langchain-ai/langchain/issues/24120 | 2,403,269,285 | 24,120 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import logging
logging.basicConfig(level=logging.DEBUG)
from langchain_core.documents import Document
from langchain_core.embeddings import FakeEmbeddings
from langchain_milvus import Milvus
milvus_vector_store = Milvus(
index_params = {
"metric_type": "COSINE",
"index_type": "HNSW",
"params": {"M": 8, "efConstruction": 64},
},
embedding_function=FakeEmbeddings(size=100),
collection_name="test_langchain",
connection_args={...}
)
docs = [
Document(page_content='First document content.'),
Document(page_content='Second document content.'),
]
ids = [
"id_1",
"id_2"
]
result = milvus_vector_store.add_documents(ids=ids, documents=docs)
print(result)
```
### Error Message and Stack Trace (if applicable)
```bash
DEBUG:langchain_milvus.vectorstores.milvus:No documents to upsert.
Traceback (most recent call last):
File "/Users/amiigas/langchain_milvus_test.py", line 32, in <module>
result = milvus_vector_store.add_documents(ids=ids, documents=docs)
File "/Users/amiigas/.venv/lib/python3.10/site-packages/langchain_core/vectorstores/base.py", line 463, in add_documents
return self.upsert(documents_, **kwargs)["succeeded"]
TypeError: 'NoneType' object is not subscriptable
```
### Description
* Trying to add documents to Milvus vector store using `add_documents()` method.
* Expecting to successfully add documents and see returned ids `["id_1", "id_2"]` as a method result
* Instead method returns `None` which then throws `TypeError` after trying to access `succeeded` key
`VectorStore.add_documents()` passes `documents_` to `Milvus.upsert()` as a positional argument, which leaves `documents=None`, so `upsert()` returns `None` early.
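
A sketch of where I believe the mismatch is (paraphrased from the installed packages; exact signatures may differ between versions):

```python
# langchain_core/vectorstores/base.py (paraphrased)
#   def add_documents(self, documents, **kwargs):
#       ...
#       return self.upsert(documents_, **kwargs)["succeeded"]   # passed positionally
#
# langchain_milvus/vectorstores/milvus.py (paraphrased)
#   def upsert(self, ids=None, documents=None, **kwargs):
#       if documents is None or len(documents) == 0:
#           logger.debug("No documents to upsert.")
#           return None            # <- add_documents then fails on None["succeeded"]
#
# The positional `documents_` binds to `ids`, so `documents` stays None,
# which also matches the "No documents to upsert." debug line above.
```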
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:13:18 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6030
> Python Version: 3.10.11 (v3.10.11:7d4cc5aa85, Apr 4 2023, 19:05:19) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.13
> langchain: 0.2.7
> langsmith: 0.1.84
> langchain_milvus: 0.1.1
> langchain_text_splitters: 0.2.2 | Milvus fails on `add_documents()` with new `VectorStore.upsert()` method | https://api.github.com/repos/langchain-ai/langchain/issues/24116/comments | 7 | 2024-07-11T09:46:30Z | 2024-07-17T20:08:05Z | https://github.com/langchain-ai/langchain/issues/24116 | 2,402,749,899 | 24,116 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
#1st key code
loader = UnstructuredWordDocumentLoader(self.file_path, mode="paged", strategy="fast", infer_table_structure=EXTRACT_TABLES)
#2ed key code
def create_documents(
self, texts: List[str], metadatas: Optional[List[dict]] = None
) -> List[Document]:
"""Create documents from a list of texts."""
_metadatas = metadatas or [{}] * len(texts)
documents = []
for i, text in enumerate(texts):
index = 0
previous_chunk_len = 0
for chunk in self.split_text(text):
metadata = copy.deepcopy(_metadatas[i])
if self._add_start_index:
offset = index + previous_chunk_len - self._chunk_overlap
index = text.find(chunk, max(0, offset))
metadata["start_index"] = index
previous_chunk_len = len(chunk)
new_doc = Document(page_content=chunk, metadata=metadata)
documents.append(new_doc)
return documents
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Using `from langchain_community.document_loaders import UnstructuredWordDocumentLoader` to parse a Word file containing a large table produces `Document` objects. If `infer_table_structure=True` (the default), each document's metadata contains a `text_as_html` entry, which can be a very large object.
Then, when a `TextSplitter` splits the documents into chunks, every chunk document deep-copies the metadata once (see `create_documents` above).
If there are many chunks, memory usage grows sharply and the process is eventually terminated with an OOM.
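
A possible mitigation sketch (assumes the table HTML is not needed downstream; `loader` is the one from the example code and `text_splitter` stands for whatever splitter instance is used):

```python
docs = loader.load()
# Drop the heavy `text_as_html` metadata before splitting so each chunk's
# deepcopy stays small.
for doc in docs:
    doc.metadata.pop("text_as_html", None)
chunks = text_splitter.split_documents(docs)
```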
### System Info
langchain==0.2.3
langchain-community==0.2.4
langchain-core==0.2.5
langchain-text-splitters==0.2.1 | Parse msword file then split chunks cause oom | https://api.github.com/repos/langchain-ai/langchain/issues/24115/comments | 3 | 2024-07-11T09:44:23Z | 2024-07-11T13:33:05Z | https://github.com/langchain-ai/langchain/issues/24115 | 2,402,745,688 | 24,115 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/concepts/#tools
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Document says:
To use an existing pre-built tool, see [here](https://python.langchain.com/v0.2/docs/concepts/docs/integrations/tools/) for a list of pre-built tools.
But the linked page does not exist.
### Idea or request for content:
_No response_ | DOC: Page Not Found | https://api.github.com/repos/langchain-ai/langchain/issues/24112/comments | 3 | 2024-07-11T08:13:33Z | 2024-07-16T09:35:29Z | https://github.com/langchain-ai/langchain/issues/24112 | 2,402,553,645 | 24,112 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```python
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
verbose=True,
max_iterations=1,
early_stopping_method="generate",
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am using create_react_agent() to create an agent. When max_iterations is reached, I want the LLM to produce a final answer based on the information gathered so far. But the agent created by create_react_agent() does not seem to support "early_stopping_method='generate'". The executor wraps it as a BaseSingleActionAgent, whose return_stopped_response() does not handle "early_stopping_method='generate'":
```python
def return_stopped_response(
self,
early_stopping_method: str,
intermediate_steps: List[Tuple[AgentAction, str]],
**kwargs: Any,
) -> AgentFinish:
"""Return response when agent has been stopped due to max iterations."""
if early_stopping_method == "force":
# `force` just returns a constant string
return AgentFinish(
{"output": "Agent stopped due to iteration limit or time limit."}, ""
)
else:
raise ValueError(
f"Got unsupported early_stopping_method `{early_stopping_method}`"
)
```
Can I make it use the legacy `Agent` class (a `BaseSingleActionAgent` subclass whose `return_stopped_response()` does support `"generate"`) when using `create_react_agent()`?
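As a workaround I am considering letting the executor stop with `"force"` and generating the final answer myself from the intermediate steps (sketch; `question` and the prompt wording are placeholders):
```python
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    max_iterations=1,
    early_stopping_method="force",
    return_intermediate_steps=True,
)
result = agent_executor.invoke({"input": question})
if result["output"].startswith("Agent stopped"):
    # Rebuild the scratchpad from the steps taken so far and ask the LLM directly.
    scratchpad = "\n".join(
        f"{action.log}\nObservation: {observation}"
        for action, observation in result["intermediate_steps"]
    )
    final_answer = llm.invoke(
        f"Question: {question}\n\n{scratchpad}\n\n"
        "Give your best final answer based on the information above."
    )
```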
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19043
> Python Version: 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.13
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_google_alloydb_pg: 0.2.2
> langchain_google_community: 1.0.6
> langchain_google_vertexai: 1.0.6
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langgraph: 0.1.7
> langserve: 0.2.2 | agent created by create_react_agent() does not support early_stopping_method='generate' | https://api.github.com/repos/langchain-ai/langchain/issues/24111/comments | 2 | 2024-07-11T07:54:20Z | 2024-07-16T12:28:45Z | https://github.com/langchain-ai/langchain/issues/24111 | 2,402,516,924 | 24,111 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def chat_to_sql_validator(input_prompt, chat_history, database, model_type):
print(f"Database used: {database.dialect}")
print(f"Usable table: {database.get_usable_table_names()[0]}\n\n")
if model_type == "gpt-4o":
model = ChatGPT()
elif model_type == "gemini-pro":
model = Gemini()
toolkit = SQLDatabaseToolkit(db = database,
llm = model,
agent_type = "tool-calling",
verbose = False)
snippet_data = toolkit.get_context()["table_info"]
current_date = date.today().strftime("%Y-%m-%d")
examples = [
{
"input": "Example 1",
"output": "Example 1"
},
{
"input": "Example 2",
"output": "Example 2"
},
{
"input": "Example 3",
"output": "Example 3"
}
]
system = """
You are a {dialect} expert. Given a human chat history and question, your task is to create a syntactically correct {dialect} query to run.
Unless the user specifies in the question a specific number of examples to obtain, query for at most {top_k} results using the LIMIT clause as per {dialect}.
You can order the results to return the most informative data in the database.
Never query for all columns from a table. You must query only the columns that are needed to answer the question.
Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.
Pay attention to use date('now') function to get the current date, if the question involves "today".
Only use the following tables:
{table_info}
There is some rules that defined by human to generate syntactically correct {dialect} query:
1. Text Search:
1.1. For partial matches in the following columns, use the LIKE operator:
1.1.1. Branch Name
1.1.2. Store Name
1.1.3. Product Name
1.1.4. Principal Name
1.1.5. Product Type
1.1.6. Product Brand Name
1.1.7. Product Division Name
1.1.8. Product Department Name
1.1.9. Product Category Name
1.2. Ensure case-insensitive search using the UPPER function:
UPPER(column_name) LIKE UPPER('%search_term%')
1.3. Replace spaces in the search term with '%' for wildcard matching.
2. Counting Distinct Values:
2.1. Use the COUNT DISTINCT function to calculate the total number of unique values in the following columns:
2.1.1. Branch Code or Branch Name
2.1.2. Store Code or Store Name
2.1.3. Product Code or Product Name
2.1.4. Principal Code or Principal Name
2.1.5. Product Type
2.1.6. Product Brand Name
2.1.7. Product Division Name
2.1.8. Product Department Name
2.1.9. Product Category Name
3. Summing Values:
3.1. Transactions and Sales Gross must use the SUM function
3.2. Quantity Purchase Order and Amount Purchase Order must use the SUM function
4. Data Aggregation:
4.1. Perform appropriate data aggregation based on the user's question. This could include:
4.1.1. SUM: To calculate the total value.
4.1.2. AVG: To calculate the average value.
4.1.3. MIN, MAX: To find the minimum or maximum value.
5. Sorting:
If the result is a list, sort the values according to the user's question.
Specify the column and sorting order (ASC for ascending, DESC for descending).
6. Data Structure Awareness:
Understand that 'Branch' and 'Store' are not equivalent entities within the data.
This means that queries should be structured to differentiate between these entities when necessary.
Within the rule by human write a draft query. Then double check the {dialect} query for common mistakes, including:
- Always make column data from datetime or date cast into string
- Not using GROUP BY for aggregating data
- Using NOT IN with NULL values
- Using UNION when UNION ALL should have been used
- Using BETWEEN for exclusive ranges
- Data type mismatch in predicates
- Properly quoting identifiers
- Using the correct number of arguments for functions
- Casting to the correct data type
- Using the proper columns for joins
    You must use this format:
First draft: <<FIRST_DRAFT_QUERY>>
Final answer: <<FINAL_ANSWER_QUERY>>
Here are history question from human, to help you understand the context:
{chat_history}
Here is the snippet of the data to help you understand more about the table:
{snippet_data}
Here is the current date if asked about today, the format date is in YYYY-MM-DD:
{current_date}
Your query answer must align with the question from Human. If the question asking 10 then show 10 rows.
"""
try:
example_prompt = ChatPromptTemplate.from_messages(
[
("human", "{input}"),
("ai", "{output}"),
]
)
few_shot_prompt = FewShotChatMessagePromptTemplate(example_prompt = example_prompt, examples = examples)
prompt = ChatPromptTemplate.from_messages([("system", system), few_shot_prompt, ("human", "{input}")]).partial(dialect=database.dialect, chat_history=chat_history, snippet_data=snippet_data, current_date=current_date)
chain = create_sql_query_chain(model, database, prompt=prompt, k=10)
output_query = chain.invoke({"question": input_prompt})
except Exception as Error:
output_query = ""
prompt = ""
output_prompt = {
"output_prompt" : prompt,
"output_script" : output_query,
}
return output_prompt
```
### Error Message and Stack Trace (if applicable)
The error message was:
```
File "/workspace/main.py", line 359, in chat_to_sql_validator
few_shot_prompt = FewShotChatMessagePromptTemplate(example_prompt=ChatPromptTemplate.from_messages([("human", "{input}"), ("ai", "{output}")]), examples = examples)
File "/layers/google.python.pip/pip/lib/python3.8/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for FewShotChatMessagePromptTemplate
input_variables
field required (type=value_error.missing)"
```
### Description
* I'm trying to generate a SQL script with my defined rules and examples, using a few-shot prompt and a chat prompt
* I expect the result to be a dictionary containing the generated SQL query script
* Instead, I get the following error:
```
ValidationError: 1 validation error for FewShotChatMessagePromptTemplate
input_variables
field required (type=value_error.missing)
```
* I've tested it in my notebook and it works, but when I try to run it in a Cloud Function I get this error (a possible workaround is sketched below)
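
One thing that seems to avoid the error (sketch; based on the assumption that the Cloud Function environment resolves an older langchain-core that still requires `input_variables` to be passed explicitly):

```python
few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=example_prompt,
    examples=examples,
    input_variables=[],  # the examples are fixed strings, so no variables are needed
)
```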
### System Info
functions-framework==3.*
google-cloud-aiplatform
google-cloud-storage
google-cloud-bigquery-storage
google-api-python-client
google-auth
langchain
langchain-openai
langchain-community
langchain-google-vertexai
langchain-openai
tiktoken
nest-asyncio
bs4
faiss-cpu
langchain_experimental
tabulate
pandas-gbq
sqlalchemy
sqlalchemy-bigquery
flask
I've also tried different versions of langchain, and it still didn't work.
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import base64

from langchain_community.llms import VLLMOpenAI
from langchain_core.messages import HumanMessage

# api_base points at the vLLM OpenAI-compatible server (defined elsewhere)
llm = VLLMOpenAI(
    openai_api_key="EMPTY",
    openai_api_base=api_base,
    model_name="microsoft/Phi-3-vision-128k-instruct",
    model_kwargs={"stop": ["."]},
)

image_path = "invoice_data_images/Screenshot 2024-05-02 160946_page_1.png"
with open(image_path, "rb") as image_file:
    image_base64 = base64.b64encode(image_file.read()).decode("utf-8")

prompt_1 = "Give me the invoice date from the given image."

messages = [
    HumanMessage(
        content=[
            {"type": "text", "text": prompt_1},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_base64}"}},
        ]
    )
]

response = llm.invoke(messages)
print(response)
```
### Error Message
```json
{
    "name": "BadRequestError",
    "message": "Error code: 400 - {'object': 'error', 'message': 'This model's maximum context length is 8192 tokens. However, you requested 254457 tokens (254201 in the messages, 256 in the completion). Please reduce the length of the messages or completion.', 'type': 'BadRequestError', 'param': None, 'code': 400}"
}
```
### Description
I hosted VLLM on an EC2 instance to extract text data from images using the Phi-3 Vision model. The model is hosted with the following command:
python3 -m vllm.entrypoints.openai.api_server --port 8000 --model microsoft/Phi-3-vision-128k-instruct --trust-remote-code --dtype=half --max_model_len=8192
When running the code, I encounter a BadRequestError due to exceeding the maximum context length. The error message indicates that the total number of tokens requested is 254457, which far exceeds the model's limit of 8192 tokens.
The base64-encoded image is being considered part of the prompt, significantly increasing the token count and leading to the context length issue.
Even if the model's context length were 128k tokens, including a base64-encoded image in the prompt would always exceed the model's limit.
# Expected Behavior
The model should process the image without considering the base64 string as part of the prompt token count, thereby avoiding the context length issue.
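
One direction I am considering (sketch, unverified): `VLLMOpenAI` is a completions-style LLM, so the multimodal message list may simply be stringified and tokenised as text, base64 included. Pointing a chat model at the same vLLM server's OpenAI-compatible chat endpoint might behave differently:

```python
# Sketch (unverified): use the chat completions endpoint of the same vLLM server
# instead of the completions-style VLLMOpenAI wrapper.
from langchain_openai import ChatOpenAI

chat = ChatOpenAI(
    api_key="EMPTY",
    base_url=api_base,  # same vLLM server as above
    model="microsoft/Phi-3-vision-128k-instruct",
)
response = chat.invoke(messages)  # `messages` as built above
print(response.content)
```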
### System Info
langchain==0.2.7
langchain-community==0.2.7
langchain-core==0.2.12
langchain-text-splitters==0.2.2 | Maximum Context Length Exceeded Due to Base64-Encoded Image in Prompt | https://api.github.com/repos/langchain-ai/langchain/issues/24107/comments | 0 | 2024-07-11T07:07:57Z | 2024-07-11T07:12:35Z | https://github.com/langchain-ai/langchain/issues/24107 | 2,402,430,115 | 24,107 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Using RequestsWrapper to make a POST call from an agent:
from langchain_community.utilities import RequestsWrapper
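
A minimal async repro sketch outside the agent (URL and payload are made up):

```python
import asyncio
from langchain_community.utilities import RequestsWrapper

async def main() -> None:
    wrapper = RequestsWrapper()
    # The async path appears to forward a `verify` kwarg to aiohttp, which rejects it.
    await wrapper.apost("https://example.com/api", data={"key": "value"})

asyncio.run(main())
```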
Error -
` ClientSession._request() got an unexpected keyword argument 'verify'`
### Error Message and Stack Trace (if applicable)
Getting a `ClientSession._request() got an unexpected keyword argument 'verify'` error while using the RequestsWrapper to do a POST request.
### Description
I am using `RequestWrapper` from `langchain_community.utilities` to do a `POST` call to my endpoint.
Below is the error thrown by the agent.
Error -
`ClientSession._request() got an unexpected keyword argument 'verify'`
I guess the aiohttp session library got updated and the `verify` argument was removed from the `_request` method.
Using langchain version 0.2.7
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.12
> langchain: 0.2.7
> langchain_community: 0.2.6
> langsmith: 0.1.83
> langchain_aws: 0.1.9
> langchain_google_community: 1.0.6
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
> langgraph: 0.0.55
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | Facing Error in ClientSession._request() While Doing POST Call using RequestWrapper | https://api.github.com/repos/langchain-ai/langchain/issues/24106/comments | 0 | 2024-07-11T06:10:37Z | 2024-07-11T06:13:18Z | https://github.com/langchain-ai/langchain/issues/24106 | 2,402,333,054 | 24,106 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/output_parser_fixing/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
With langchain 0.2.6:

```python
from langchain.output_parsers import OutputFixingParser

fixing_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI())
```

raises an error:

```
KeyError: "Input to PromptTemplate is missing variables {'completion'}. Expected: ['completion', 'error', 'instructions'] Received: ['instructions', 'input', 'error']"
```
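
A possible workaround sketch (assuming the variables listed as "Received" in the error — `instructions`, `input`, `error` — are what the parser actually passes at runtime) is to supply a fixing prompt that uses those variable names:

```python
from langchain_core.prompts import PromptTemplate
from langchain.output_parsers import OutputFixingParser
from langchain_openai import ChatOpenAI

fix_prompt = PromptTemplate.from_template(
    "Instructions:\n{instructions}\n\n"
    "Completion:\n{input}\n\n"
    "Error:\n{error}\n\n"
    "Please fix the completion so that it satisfies the instructions."
)
fixing_parser = OutputFixingParser.from_llm(
    parser=parser, llm=ChatOpenAI(), prompt=fix_prompt
)
```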
### Idea or request for content:
_No response_ | OutputFixingParser,KeyError | https://api.github.com/repos/langchain-ai/langchain/issues/24105/comments | 0 | 2024-07-11T02:46:57Z | 2024-07-11T02:49:45Z | https://github.com/langchain-ai/langchain/issues/24105 | 2,402,110,694 | 24,105 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/text_embedding/baidu_qianfan_endpoint/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I ran the sample code and got an error. How can I solve it?
Operating environment:
python 3.10
langchain-community 0.2.7
qianfan 0.4.0.1
Run the code:
```python
import os

from langchain_community.embeddings import QianfanEmbeddingsEndpoint

os.environ["QIANFAN_AK"] = "xxx"
os.environ["QIANFAN_SK"] = "xxx"

embed = QianfanEmbeddingsEndpoint()
res = embed.embed_documents(["hi", "world"])
print(res)
```
Error message:
`Traceback (most recent call last):
File "/llm-example/langchain/test.py", line 9, in <module>
embed = QianfanEmbeddingsEndpoint()
File "/llm-example/langchain/.venv/lib/python3.10/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for QianfanEmbeddingsEndpoint
qianfan_ak
str type expected (type=type_error.str)
qianfan_sk
str type expected (type=type_error.str)
`
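A possible workaround sketch (assumption: passing the credentials directly to the constructor bypasses the failing environment-variable handling; the "xxx" values are placeholders for the real keys):
```python
from langchain_community.embeddings import QianfanEmbeddingsEndpoint

# Pass qianfan_ak / qianfan_sk explicitly instead of relying on the
# QIANFAN_AK / QIANFAN_SK environment variables.
embed = QianfanEmbeddingsEndpoint(qianfan_ak="xxx", qianfan_sk="xxx")
res = embed.embed_documents(["hi", "world"])
print(res)
```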
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/integrations/text_embedding/baidu_qianfan_endpoint/> | https://api.github.com/repos/langchain-ai/langchain/issues/24104/comments | 2 | 2024-07-11T01:45:20Z | 2024-07-19T13:44:24Z | https://github.com/langchain-ai/langchain/issues/24104 | 2,402,037,774 | 24,104 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/platforms/aws/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/integrations/platforms/aws/> | https://api.github.com/repos/langchain-ai/langchain/issues/24094/comments | 0 | 2024-07-10T22:44:26Z | 2024-07-10T22:46:56Z | https://github.com/langchain-ai/langchain/issues/24094 | 2,401,863,069 | 24,094 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.tools.wikidata.tool import WikidataAPIWrapper, WikidataQueryRun
wikidata_tool = WikidataQueryRun(api_wrapper=WikidataAPIWrapper())
wikidata_tool.name, wikidata_tool.description
print(wikidata_tool.run("Alan Turing"))
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[16], line 1
----> 1 print(wikidata_tool.run("Alan Touring"))
File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_core\tools.py:452, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, **kwargs)
450 except (Exception, KeyboardInterrupt) as e:
451 run_manager.on_tool_error(e)
--> 452 raise e
453 else:
454 run_manager.on_tool_end(observation, color=color, name=self.name, **kwargs)
File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_core\tools.py:409, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, **kwargs)
406 parsed_input = self._parse_input(tool_input)
407 tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
408 observation = (
--> 409 context.run(
410 self._run, *tool_args, run_manager=run_manager, **tool_kwargs
411 )
412 if new_arg_supported
413 else context.run(self._run, *tool_args, **tool_kwargs)
414 )
415 except ValidationError as e:
416 if not self.handle_validation_error:
File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_community\tools\wikidata\tool.py:30, in WikidataQueryRun._run(self, query, run_manager)
24 def _run(
25 self,
26 query: str,
27 run_manager: Optional[CallbackManagerForToolRun] = None,
28 ) -> str:
29 """Use the Wikidata tool."""
---> 30 return self.api_wrapper.run(query)
File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_community\utilities\wikidata.py:177, in WikidataAPIWrapper.run(self, query)
175 docs = []
176 for item in items[: self.top_k_results]:
--> 177 if doc := self._item_to_document(item):
178 docs.append(f"Result {item}:\n{doc.page_content}")
179 if not docs:
File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_community\utilities\wikidata.py:149, in WikidataAPIWrapper._item_to_document(self, qid)
147 for prop, values in resp.statements.items():
148 if values:
--> 149 doc_lines.append(f"{prop.label}: {', '.join(values)}")
151 return Document(
152 page_content=("\n".join(doc_lines))[: self.doc_content_chars_max],
153 meta={"title": qid, "source": f"https://www.wikidata.org/wiki/{qid}"},
154 )
TypeError: sequence item 0: expected str instance, FluentValue found
### Description
I'm trying to use various tools from the 'langchain' library.
Call to wikidata_tool.run("string") resulted in the above error.
I identified a bug in the code below (using the latest version of the build):
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/wikidata.py
Line 149: `doc_lines.append(f"{prop.label}: {', '.join(values)}")`
> Source of the error: here `values` is a list of `FluentValue` objects, not strings.
So I commented out the line above and made the change below; the code worked fine after that.
`doc_lines.append(f"{prop.label}: {', '.join(map(str, values))}")`
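A small standalone illustration of why the original line fails and why mapping `str()` over the values fixes it (`FluentValue` here is a stand-in for the real type returned by the Wikidata client):
```python
class FluentValue:  # stand-in for the actual FluentValue type
    def __init__(self, v: str) -> None:
        self.v = v

    def __str__(self) -> str:
        return self.v

values = [FluentValue("mathematician"), FluentValue("computer scientist")]
# ", ".join(values)                 # TypeError: expected str instance, FluentValue found
print(", ".join(map(str, values)))  # "mathematician, computer scientist"
```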
### System Info
### PIP freeze | grep langchain output
langchain==0.2.7
langchain-anthropic==0.1.19
langchain-cohere==0.1.9
langchain-community==0.2.7
langchain-core==0.2.12
langchain-experimental==0.0.62
langchain-google-genai==1.0.4
langchain-openai==0.1.14
langchain-text-splitters==0.2.2
langchainhub==0.1.20
### **python -m langchain_core.sys_info**
System Information
------------------
**> OS: Windows**
**> OS Version: 10.0.19045**
**> Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]**
Package Information
-------------------
> langchain_core: 0.2.12
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.84
> langchain_anthropic: 0.1.19
> langchain_cohere: 0.1.9
> langchain_experimental: 0.0.62
> langchain_google_genai: 1.0.4
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langgraph: 0.0.51
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve
[dependency graph.txt](https://github.com/user-attachments/files/16169561/dependency.graph.txt)
| TypeError: sequence item 0: expected str instance, FluentValue found - Error while invoking WikidataQueryRun | https://api.github.com/repos/langchain-ai/langchain/issues/24093/comments | 2 | 2024-07-10T22:41:55Z | 2024-07-10T22:55:43Z | https://github.com/langchain-ai/langchain/issues/24093 | 2,401,860,745 | 24,093 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.tools.wikidata.tool import WikidataAPIWrapper, WikidataQueryRun
wikidata_tool = WikidataQueryRun(api_wrapper=WikidataAPIWrapper())
wikidata_tool.name, wikidata_tool.description
print(wikidata_tool.run("Alan Turing"))
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[16], line 1
----> 1 print(wikidata_tool.run("Alan Touring"))
File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_core\tools.py:452, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, **kwargs)
450 except (Exception, KeyboardInterrupt) as e:
451 run_manager.on_tool_error(e)
--> 452 raise e
453 else:
454 run_manager.on_tool_end(observation, color=color, name=self.name, **kwargs)
File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_core\tools.py:409, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, config, **kwargs)
406 parsed_input = self._parse_input(tool_input)
407 tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
408 observation = (
--> 409 context.run(
410 self._run, *tool_args, run_manager=run_manager, **tool_kwargs
411 )
412 if new_arg_supported
413 else context.run(self._run, *tool_args, **tool_kwargs)
414 )
415 except ValidationError as e:
416 if not self.handle_validation_error:
File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_community\tools\wikidata\tool.py:30, in WikidataQueryRun._run(self, query, run_manager)
24 def _run(
25 self,
26 query: str,
27 run_manager: Optional[CallbackManagerForToolRun] = None,
28 ) -> str:
29 """Use the Wikidata tool."""
---> 30 return self.api_wrapper.run(query)
File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_community\utilities\wikidata.py:177, in WikidataAPIWrapper.run(self, query)
175 docs = []
176 for item in items[: self.top_k_results]:
--> 177 if doc := self._item_to_document(item):
178 docs.append(f"Result {item}:\n{doc.page_content}")
179 if not docs:
File ~\.virtualenvs\udemy-llm-agents-juqR4cf2\lib\site-packages\langchain_community\utilities\wikidata.py:149, in WikidataAPIWrapper._item_to_document(self, qid)
147 for prop, values in resp.statements.items():
148 if values:
--> 149 doc_lines.append(f"{prop.label}: {', '.join(values)}")
151 return Document(
152 page_content=("\n".join(doc_lines))[: self.doc_content_chars_max],
153 meta={"title": qid, "source": f"https://www.wikidata.org/wiki/{qid}"},
154 )
TypeError: sequence item 0: expected str instance, FluentValue found
### Description
I'm trying to use various tools from the 'langchain' library.
Call to wikidata_tool.run("string") resulted in the above error.
I identified a bug in the code below (using the latest version of the build):
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/wikidata.py
Line 149: `doc_lines.append(f"{prop.label}: {', '.join(values)}")`
> Source of the error: here `values` is a list of `FluentValue` objects, not strings.
So I commented out the line above and made the change below; the code worked fine after that.
`doc_lines.append(f"{prop.label}: {', '.join(map(str, values))}")`
### System Info
### PIP freeze | grep langchain output
langchain==0.2.7
langchain-anthropic==0.1.19
langchain-cohere==0.1.9
langchain-community==0.2.7
langchain-core==0.2.12
langchain-experimental==0.0.62
langchain-google-genai==1.0.4
langchain-openai==0.1.14
langchain-text-splitters==0.2.2
langchainhub==0.1.20
### **python -m langchain_core.sys_info**
System Information
------------------
**> OS: Windows**
**> OS Version: 10.0.19045**
**[dependency graph.txt](https://github.com/user-attachments/files/16169560/dependency.graph.txt)**
Package Information
-------------------
> langchain_core: 0.2.12
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.84
> langchain_anthropic: 0.1.19
> langchain_cohere: 0.1.9
> langchain_experimental: 0.0.62
> langchain_google_genai: 1.0.4
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langgraph: 0.0.51
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve
[dependency graph.txt](https://github.com/user-attachments/files/16169561/dependency.graph.txt)
| TypeError: sequence item 0: expected str instance, FluentValue found - Error while invoking WikidataQueryRun | https://api.github.com/repos/langchain-ai/langchain/issues/24092/comments | 0 | 2024-07-10T22:38:53Z | 2024-07-10T22:41:34Z | https://github.com/langchain-ai/langchain/issues/24092 | 2,401,857,853 | 24,092 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Type hints for `@tool` decorator are incorrect, which causes downstream type-checking errors. For example:
```python
from langchain_core.tools import tool, StructuredTool
@tool
def multiply(a: int, b: int) -> int:
"""multiply two ints"""
return a * b
tool_: StructuredTool = multiply
```
leads to the following mypy errors:
```pycon
error: Incompatible types in assignment (expression has type "Callable[..., Any]", variable has type "StructuredTool") [assignment]
``` | core: Fix @tool typing | https://api.github.com/repos/langchain-ai/langchain/issues/24089/comments | 0 | 2024-07-10T22:09:47Z | 2024-07-10T22:12:25Z | https://github.com/langchain-ai/langchain/issues/24089 | 2,401,823,001 | 24,089 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
My network access list is set to 0.0.0.0. The ingestion code looks fine, but when I try to add documents to the vector store I get the error below. I am also unable to fetch the documents in my MongoDB; it shows **connect ETIMEDOUT 13.68.114.169:27017**.
# Add documents to vector store
insert_ids = vectorstore.add_documents(ingestion_docs)
print(f"Inserted Document IDs: {insert_ids}")
**the following error occurs:** Error inserting documents: ac-vtawjby-shard-00-02.oecedvw.mongodb.net:27017: timed out (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms),ac-vtawjby-shard-00-00.oecedvw.mongodb.net:27017: timed out (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms),ac-vtawjby-shard-00-01.oecedvw.mongodb.net:27017: timed out (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms), Timeout: 30s, Topology Description: <TopologyDescription id: 668ed682a78aef78b848fcf0, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('ac-vtawjby-shard-00-00.oecedvw.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('ac-vtawjby-shard-00-00.oecedvw.mongodb.net:27017: timed out (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')>, <ServerDescription ('ac-vtawjby-shard-00-01.oecedvw.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('ac-vtawjby-shard-00-01.oecedvw.mongodb.net:27017: timed out (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')>, <ServerDescription ('ac-vtawjby-shard-00-02.oecedvw.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('ac-vtawjby-shard-00-02.oecedvw.mongodb.net:27017: timed out (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')>]>
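A minimal connectivity check sketch (assumption: the ATLAS_CONNECTION_STRING env var listed below is set); if this ping also times out, the problem is networking / the Atlas IP access list rather than the vector store code:
```python
import os
from pymongo.mongo_client import MongoClient

# Server-selection ping: raises ServerSelectionTimeoutError if the Atlas
# cluster cannot be reached within the timeout.
client = MongoClient(os.environ["ATLAS_CONNECTION_STRING"], serverSelectionTimeoutMS=20000)
client.admin.command("ping")
print("Atlas cluster reachable")
```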
### Error Message and Stack Trace (if applicable)
_No response_
### Description
All of the following packages were imported successfully:
import os
from pymongo.mongo_client import MongoClient
from langchain_cohere.chat_models import ChatCohere
from langchain_cohere.embeddings import CohereEmbeddings
from langchain_core.output_parsers import StrOutputParser
from langchain_mongodb import MongoDBAtlasVectorSearch
from langchain.prompts import PromptTemplate
from langchain.schema import Document
from dotenv import load_dotenv
load_dotenv()
and the following .env variables were loaded correctly:
COHERE_API_KEY = "XXXX"
LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
LANGCHAIN_API_KEY="XXXX"
LANGCHAIN_PROJECT="XXXX"
ATLAS_CONNECTION_STRING = "XXXX"
### System Info
**-m langchain_core.sys_info**
System Information
------------------
> OS: Windows
> OS Version: 10.0.22621
> Python Version: 3.12.4 (tags/v3.12.4:8e8a4ba, Jun 6 2024, 19:30:16) [MSC v.1940 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.81
> langchain_cohere: 0.1.8
> langchain_mongodb: 0.1.6
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Adding documents to vector store/server timeout | https://api.github.com/repos/langchain-ai/langchain/issues/24082/comments | 0 | 2024-07-10T18:57:19Z | 2024-07-10T19:00:43Z | https://github.com/langchain-ai/langchain/issues/24082 | 2,401,449,080 | 24,082 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.llms import VLLM
llm = VLLM(
model="llava-hf/llava-1.5-7b-hf",
trust_remote_code=True, # mandatory for hf models
max_new_tokens=128,
top_k=10,
top_p=0.95,
temperature=0.8,
)
OR
llm = VLLM(
model="llava-hf/llava-1.5-7b-hf",
trust_remote_code=True, # mandatory for hf models
max_new_tokens=128,
top_k=10,
top_p=0.95,
temperature=0.8,
image_input_type="pixel_values",
image_token_id=123,
image_input_shape="224,224,3",
image_feature_size=512,
)
Both ways of instantiating the VLLM class give the same error.
### Error Message and Stack Trace (if applicable)
llm = VLLM(
[rank0]: ^^^^^
[rank0]: File /miniforge3/envs/ipex-vllm/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
[rank0]: raise validation_error
[rank0]: pydantic.v1.error_wrappers.ValidationError: 1 validation error for VLLM
[rank0]: __root__
[rank0]: Provide `image_input_type` and other vision related configurations through LLM entrypoint or engine arguments. (type=assertion_error)
### Description
I am trying to use vLLM through LangChain to run the LLaVA model. I am running the code on CPU.
I am getting this error:
"Provide `image_input_type` and other vision related configurations through LLM entrypoint or engine arguments. "
I went through the source code of `vllm/vllm/engine/arg_utils.py` (class `EngineArgs`) and passed the vision configuration to the VLLM class as shown above. However, even after setting `image_input_type="pixel_values"` on the VLLM class, `self.image_input_type` in the `EngineArgs` class still shows the value `None`.
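A possible workaround sketch (assumption: extra engine arguments are forwarded untouched through `vllm_kwargs` to the underlying `vllm.LLM` constructor; the shape and feature-size values below are illustrative, not verified for this model):
```python
from langchain_community.llms import VLLM

llm = VLLM(
    model="llava-hf/llava-1.5-7b-hf",
    trust_remote_code=True,
    max_new_tokens=128,
    # vllm_kwargs is passed through to vllm.LLM / EngineArgs, so the vision
    # settings are not silently dropped the way the top-level kwargs appear to be.
    vllm_kwargs={
        "image_input_type": "pixel_values",
        "image_token_id": 123,
        "image_input_shape": "1,3,336,336",
        "image_feature_size": 576,
    },
)
```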
Name: vllm
Version: 0.4.2+cpu
Summary: A high-throughput and memory-efficient inference and serving engine for LLMs
Home-page: https://github.com/vllm-project/vllm
Author: vLLM Team
Author-email:
License: Apache 2.0
Location: /home/ceed-user/miniforge3/envs/ipex-vllm/lib/python3.11/site-packages/vllm-0.4.2+cpu-py3.11-linux-x86_64.egg
Requires: cmake, fastapi, filelock, lm-format-enforcer, ninja, numpy, openai, outlines, prometheus-fastapi-instrumentator, prometheus_client, psutil, py-cpuinfo, pydantic, requests, sentencepiece, tiktoken, tokenizers, torch, transformers, triton, typing_extensions, uvicorn
Required-by:
### System Info
langchain==0.2.7
langchain-community==0.2.7
langchain-core==0.2.12
langchain-text-splitters==0.2.2 | LLaVA model error in VLLM through Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/24078/comments | 2 | 2024-07-10T16:40:38Z | 2024-07-11T15:14:47Z | https://github.com/langchain-ai/langchain/issues/24078 | 2,401,225,592 | 24,078 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import List
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
class Calculator(BaseModel):
"""Multiply two integers together."""
a: int = Field(..., description="First integer")
b: int = Field(..., description="Second integer")
llm = HuggingFaceEndpoint(
repo_id="HuggingFaceH4/zephyr-7b-beta",
task="text-generation",
max_new_tokens=512,
do_sample=False,
repetition_penalty=1.03,
)
chat_model = ChatHuggingFace(llm=llm, verbose=True).with_structured_output(schema=Calculator, include_raw=True)
print(chat_model.invoke("How much is 3 multiplied by 12?"))
```
### Error Message and Stack Trace (if applicable)
{
"name": "InferenceTimeoutError",
"message": "Model not loaded on the server: https://api-inference.huggingface.co/models/HuggingFaceH4/zephyr-7b-beta/v1/chat/completions. Please retry with a higher timeout (current: 120).",
"stack": "---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
File /usr/local/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py:304, in hf_raise_for_status(response, endpoint_name)
303 try:
--> 304 response.raise_for_status()
305 except HTTPError as e:
File /usr/local/lib/python3.11/site-packages/requests/models.py:1021, in Response.raise_for_status(self)
1020 if http_error_msg:
-> 1021 raise HTTPError(http_error_msg, response=self)
HTTPError: 503 Server Error: Service Unavailable for url: https://api-inference.huggingface.co/models/HuggingFaceH4/zephyr-7b-beta/v1/chat/completions
The above exception was the direct cause of the following exception:
HfHubHTTPError Traceback (most recent call last)
File /usr/local/lib/python3.11/site-packages/huggingface_hub/inference/_client.py:273, in InferenceClient.post(self, json, data, model, task, stream)
272 try:
--> 273 hf_raise_for_status(response)
274 return response.iter_lines() if stream else response.content
File /usr/local/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py:371, in hf_raise_for_status(response, endpoint_name)
369 # Convert `HTTPError` into a `HfHubHTTPError` to display request information
370 # as well (request id and/or server error message)
--> 371 raise HfHubHTTPError(str(e), response=response) from e
HfHubHTTPError: 503 Server Error: Service Unavailable for url: https://api-inference.huggingface.co/models/HuggingFaceH4/zephyr-7b-beta/v1/chat/completions (Request ID: ORVZdEUt8LFrnogCctYgS)
Model HuggingFaceH4/zephyr-7b-beta is currently loading
The above exception was the direct cause of the following exception:
InferenceTimeoutError Traceback (most recent call last)
Cell In[1], line 27
17 llm = HuggingFaceEndpoint(
18 repo_id=\"HuggingFaceH4/zephyr-7b-beta\",
19 # repo_id=\"meta-llama/Meta-Llama-3-8B-Instruct\",
(...)
23 repetition_penalty=1.03,
24 )
26 chat_model = ChatHuggingFace(llm=llm,verbose=True,).with_structured_output(schema=Calculator, include_raw=True)
---> 27 print(chat_model.invoke(\"How much is 3 multiplied by 12?\"))
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:2503, in RunnableSequence.invoke(self, input, config, **kwargs)
2499 config = patch_config(
2500 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")
2501 )
2502 if i == 0:
-> 2503 input = step.invoke(input, config, **kwargs)
2504 else:
2505 input = step.invoke(input, config)
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:3150, in RunnableParallel.invoke(self, input, config)
3137 with get_executor_for_config(config) as executor:
3138 futures = [
3139 executor.submit(
3140 step.invoke,
(...)
3148 for key, step in steps.items()
3149 ]
-> 3150 output = {key: future.result() for key, future in zip(steps, futures)}
3151 # finish the root run
3152 except BaseException as e:
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:3150, in <dictcomp>(.0)
3137 with get_executor_for_config(config) as executor:
3138 futures = [
3139 executor.submit(
3140 step.invoke,
(...)
3148 for key, step in steps.items()
3149 ]
-> 3150 output = {key: future.result() for key, future in zip(steps, futures)}
3151 # finish the root run
3152 except BaseException as e:
File /usr/local/lib/python3.11/concurrent/futures/_base.py:456, in Future.result(self, timeout)
454 raise CancelledError()
455 elif self._state == FINISHED:
--> 456 return self.__get_result()
457 else:
458 raise TimeoutError()
File /usr/local/lib/python3.11/concurrent/futures/_base.py:401, in Future.__get_result(self)
399 if self._exception:
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
404 self = None
File /usr/local/lib/python3.11/concurrent/futures/thread.py:58, in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:4586, in RunnableBindingBase.invoke(self, input, config, **kwargs)
4580 def invoke(
4581 self,
4582 input: Input,
4583 config: Optional[RunnableConfig] = None,
4584 **kwargs: Optional[Any],
4585 ) -> Output:
-> 4586 return self.bound.invoke(
4587 input,
4588 self._merge_configs(config),
4589 **{**self.kwargs, **kwargs},
4590 )
File /usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:248, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
237 def invoke(
238 self,
239 input: LanguageModelInput,
(...)
243 **kwargs: Any,
244 ) -> BaseMessage:
245 config = ensure_config(config)
246 return cast(
247 ChatGeneration,
--> 248 self.generate_prompt(
249 [self._convert_input(input)],
250 stop=stop,
251 callbacks=config.get(\"callbacks\"),
252 tags=config.get(\"tags\"),
253 metadata=config.get(\"metadata\"),
254 run_name=config.get(\"run_name\"),
255 run_id=config.pop(\"run_id\", None),
256 **kwargs,
257 ).generations[0][0],
258 ).message
File /usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:681, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
673 def generate_prompt(
674 self,
675 prompts: List[PromptValue],
(...)
678 **kwargs: Any,
679 ) -> LLMResult:
680 prompt_messages = [p.to_messages() for p in prompts]
--> 681 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File /usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:538, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
536 if run_managers:
537 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 538 raise e
539 flattened_outputs = [
540 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item]
541 for res in results
542 ]
543 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File /usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:528, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
525 for i, m in enumerate(messages):
526 try:
527 results.append(
--> 528 self._generate_with_cache(
529 m,
530 stop=stop,
531 run_manager=run_managers[i] if run_managers else None,
532 **kwargs,
533 )
534 )
535 except BaseException as e:
536 if run_managers:
File /usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:753, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
751 else:
752 if inspect.signature(self._generate).parameters.get(\"run_manager\"):
--> 753 result = self._generate(
754 messages, stop=stop, run_manager=run_manager, **kwargs
755 )
756 else:
757 result = self._generate(messages, stop=stop, **kwargs)
File /usr/local/lib/python3.11/site-packages/langchain_huggingface/chat_models/huggingface.py:218, in ChatHuggingFace._generate(self, messages, stop, run_manager, **kwargs)
216 elif _is_huggingface_endpoint(self.llm):
217 message_dicts = self._create_message_dicts(messages, stop)
--> 218 answer = self.llm.client.chat_completion(messages=message_dicts, **kwargs)
219 return self._create_chat_result(answer)
220 else:
File /usr/local/lib/python3.11/site-packages/huggingface_hub/inference/_client.py:706, in InferenceClient.chat_completion(self, messages, model, stream, frequency_penalty, logit_bias, logprobs, max_tokens, n, presence_penalty, seed, stop, temperature, tool_choice, tool_prompt, tools, top_logprobs, top_p)
703 model_url += \"/v1/chat/completions\"
705 try:
--> 706 data = self.post(
707 model=model_url,
708 json=dict(
709 model=\"tgi\", # random string
710 messages=messages,
711 frequency_penalty=frequency_penalty,
712 logit_bias=logit_bias,
713 logprobs=logprobs,
714 max_tokens=max_tokens,
715 n=n,
716 presence_penalty=presence_penalty,
717 seed=seed,
718 stop=stop,
719 temperature=temperature,
720 tool_choice=tool_choice,
721 tool_prompt=tool_prompt,
722 tools=tools,
723 top_logprobs=top_logprobs,
724 top_p=top_p,
725 stream=stream,
726 ),
727 stream=stream,
728 )
729 except HTTPError as e:
730 if e.response.status_code in (400, 404, 500):
731 # Let's consider the server is not a chat completion server.
732 # Then we call again `chat_completion` which will render the chat template client side.
733 # (can be HTTP 500, HTTP 400, HTTP 404 depending on the server)
File /usr/local/lib/python3.11/site-packages/huggingface_hub/inference/_client.py:283, in InferenceClient.post(self, json, data, model, task, stream)
280 if error.response.status_code == 503:
281 # If Model is unavailable, either raise a TimeoutError...
282 if timeout is not None and time.time() - t0 > timeout:
--> 283 raise InferenceTimeoutError(
284 f\"Model not loaded on the server: {url}. Please retry with a higher timeout (current:\"
285 f\" {self.timeout}).\",
286 request=error.request,
287 response=error.response,
288 ) from error
289 # ...or wait 1s and retry
290 logger.info(f\"Waiting for model to be loaded on the server: {error}\")
InferenceTimeoutError: Model not loaded on the server: https://api-inference.huggingface.co/models/HuggingFaceH4/zephyr-7b-beta/v1/chat/completions. Please retry with a higher timeout (current: 120)."
}
### Description
I'm encountering an issue when using the HuggingFaceEndpoint with the with_structured_output method in LangChain. The issue arises due to a model loading timeout, preventing the model from being used effectively. I also tried to increase the timeout to 300 but still doesnt work.
### System Info
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.11.7 (main, Dec 19 2023, 20:33:49) [GCC 12.2.0]
Package Information
-------------------
> langchain_core: 0.2.11
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.83
> langchain_experimental: 0.0.62
> langchain_huggingface: 0.0.3
> langchain_milvus: 0.1.1
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | HuggingFaceEndpoint Fails to Use with_structured_output Due to Model Loading Timeout | https://api.github.com/repos/langchain-ai/langchain/issues/24077/comments | 1 | 2024-07-10T16:38:26Z | 2024-08-08T08:46:12Z | https://github.com/langchain-ai/langchain/issues/24077 | 2,401,221,475 | 24,077 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import TypedDict, Annotated, List
from langgraph.graph import StateGraph
from langgraph.graph.graph import END, START, CompiledGraph
from langgraph.graph.message import add_messages
import random, operator
from IPython.display import Image, display
class TestState(TypedDict):
messages: Annotated[List[str], operator.add]
test_workflow = StateGraph(TestState)
def node1(state: TestState):
return {"messages": ["Hello from node 1"]}
def node2(state: TestState):
return {"messages": ["Hello from node 2"]}
def node3(state: TestState):
return {"messages": ["Hello from node 3"]}
def node4(state: TestState):
return {"messages": ["Hello from node 4"]}
def node5(state: TestState):
return {"messages": ["Hello from node 5"]}
def route(state: TestState):
if random.choice([True, False]):
return "node5"
return "__end__"
test_workflow.add_node("node1", node1)
test_workflow.add_node("node2", node2)
test_workflow.add_node("node3", node3)
test_workflow.add_node("node4", node4)
test_workflow.add_node("node5", node5)
test_workflow.add_edge(START, "node1")
test_workflow.add_edge("node1", "node2")
test_workflow.add_edge("node2", "node3")
test_workflow.add_edge("node3", "node4")
test_workflow.add_edge("node5", "node4")
test_workflow.add_conditional_edges("node4", route)
display(Image(test_workflow.compile().get_graph().draw_mermaid_png()))
```
### Error Message and Stack Trace (if applicable)
No error message.
### Description
## Unexpected behavior
The previous code does not work as expected (according to the [Langgraph API reference](https://langchain-ai.github.io/langgraph/reference/graphs/#langgraph.graph.message.MessageGraph.add_conditional_edges)), as it generates the following graph with unexpected conditional edges from `node4` to every other node:
![image](https://github.com/langchain-ai/langchain/assets/79788901/6c719602-0d74-41d7-a007-bb3d35e8627e)
This is also visible when inspecting the `branches` attribute of the workflow:
```python
test_workflow.branches
```
returns :
```python
defaultdict(dict,
{'node4': {'route': Branch(path=route(recurse=True), ends=None, then=None)}})
```
## Fix for expected behavior
Replacing
```python
test_workflow.add_conditional_edges("node4", route)
```
by giving the additional `map_path` argument such as
```python
test_workflow.add_conditional_edges("node4", route, {"node5": "node5", "__end__": "__end__"})
```
fixes the issue, as it can be seen by inspecting the `branches` attribute of the workflow.
```python
test_workflow.branches
```
now returns :
```python
defaultdict(dict,
{'node4': {'route': Branch(path=route(recurse=True), ends={'node5': 'node5', '__end__': '__end__'}, then=None)}})
```
and the expected Mermaid graph:
![image](https://github.com/langchain-ai/langchain/assets/79788901/269aa35c-48ea-40a4-a0ac-9a746f8e1fac)
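An alternative sketch (assumption: the installed langgraph version can infer the branch targets from a `Literal` return annotation; if not, the explicit `path_map` above is still required):
```python
from typing import Literal

def route(state: TestState) -> Literal["node5", "__end__"]:
    # Same routing logic, but the annotated return type tells langgraph the
    # complete set of possible targets for this conditional edge.
    if random.choice([True, False]):
        return "node5"
    return "__end__"

test_workflow.add_conditional_edges("node4", route)
```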
### System Info
I use the latest versions for these libraries installed with poetry:
```
python = "3.12"
langchain-core = "0.2.12"
langgraph = "0.0.64"
``` | Langgraph `add_conditional_edges` does not work without optional argument `path_map` | https://api.github.com/repos/langchain-ai/langchain/issues/24076/comments | 1 | 2024-07-10T16:14:05Z | 2024-07-10T17:05:51Z | https://github.com/langchain-ai/langchain/issues/24076 | 2,401,176,047 | 24,076 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
docs_match = app.docsearch.similarity_search_with_score(input_data.question, k=3, filter=filter_object)
This always returns documents.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
as_retriever returns documents from the database even when the query is completely unrelated to the documents stored in it.
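A possible workaround sketch, reusing the objects from the snippet above (assumption: the underlying vector store supports relevance scores and accepts `filter` via `search_kwargs`): use a score threshold so completely unrelated queries return nothing instead of the top-k nearest neighbours.
```python
# similarity_score_threshold drops matches below the threshold; the exact
# threshold value is an assumption and needs tuning per embedding model/store.
retriever = app.docsearch.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"k": 3, "score_threshold": 0.8, "filter": filter_object},
)
docs = retriever.invoke(input_data.question)
```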
### System Info
Mac | as_retriever returns documents from database even when the query is totally unrelated to the documents in the database | https://api.github.com/repos/langchain-ai/langchain/issues/24075/comments | 0 | 2024-07-10T16:13:26Z | 2024-07-10T16:16:05Z | https://github.com/langchain-ai/langchain/issues/24075 | 2,401,174,741 | 24,075 |
[
"hwchase17",
"langchain"
] | ### URL
https://github.com/langchain-ai/langchain/blob/c4e149d4f18319bef6f1d4c409250c4a0ad21dac/libs/community/langchain_community/vectorstores/neo4j_vector.py
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Many parameters in the [Neo4jVector class](https://github.com/langchain-ai/langchain/blob/c4e149d4f18319bef6f1d4c409250c4a0ad21dac/libs/community/langchain_community/vectorstores/neo4j_vector.py#L429) are not documented. Some examples:
```python
class Neo4jVector(VectorStore):
"""`Neo4j` vector index.
To use, you should have the ``neo4j`` python package installed.
Args:
url: Neo4j connection url
username: Neo4j username.
password: Neo4j password
database: Optionally provide Neo4j database
Defaults to "neo4j"
embedding: Any embedding function implementing
`langchain.embeddings.base.Embeddings` interface.
distance_strategy: The distance strategy to use. (default: COSINE)
pre_delete_collection: If True, will delete existing data if it exists.
(default: False). Useful for testing.
Example:
.. code-block:: python
from langchain_community.vectorstores.neo4j_vector import Neo4jVector
from langchain_community.embeddings.openai import OpenAIEmbeddings
url="bolt://localhost:7687"
username="neo4j"
password="pleaseletmein"
embeddings = OpenAIEmbeddings()
vectorestore = Neo4jVector.from_documents(
embedding=embeddings,
documents=docs,
url=url
username=username,
password=password,
)
"""
def __init__(
self,
embedding: Embeddings,
*,
search_type: SearchType = SearchType.VECTOR,
username: Optional[str] = None,
password: Optional[str] = None,
url: Optional[str] = None,
keyword_index_name: Optional[str] = "keyword",
database: Optional[str] = None,
index_name: str = "vector",
node_label: str = "Chunk",
embedding_node_property: str = "embedding",
text_node_property: str = "text",
distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,
logger: Optional[logging.Logger] = None,
pre_delete_collection: bool = False,
retrieval_query: str = "",
relevance_score_fn: Optional[Callable[[float], float]] = None,
index_type: IndexType = DEFAULT_INDEX_TYPE,
graph: Optional[Neo4jGraph] = None,
) -> None:
```
```python
def retrieve_existing_fts_index(
self, text_node_properties: List[str] = []
) -> Optional[str]:
"""
Check if the fulltext index exists in the Neo4j database
This method queries the Neo4j database for existing fts indexes
with the specified name.
Returns:
(Tuple): keyword index information
"""
```
```python
def similarity_search(
self,
query: str,
k: int = 4,
params: Dict[str, Any] = {},
filter: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> List[Document]:
"""Run similarity search with Neo4jVector.
Args:
query (str): Query text to search for.
k (int): Number of results to return. Defaults to 4.
Returns:
List of Documents most similar to the query.
"""
```
```python
def from_texts(
cls: Type[Neo4jVector],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> Neo4jVector:
"""
Return Neo4jVector initialized from texts and embeddings.
Neo4j credentials are required in the form of `url`, `username`,
and `password` and optional `database` parameters.
"""
```
### Idea or request for content:
At least include all parameters in the `Args` sections of the function docs. | DOC: Neo4jVector class => many parameters not documented | https://api.github.com/repos/langchain-ai/langchain/issues/24074/comments | 0 | 2024-07-10T16:05:59Z | 2024-07-10T16:08:41Z | https://github.com/langchain-ai/langchain/issues/24074 | 2,401,157,922 | 24,074 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am trying to use the Azure AI Search vector store and retriever. Both the vector store and the retriever (obtained from the vector store) work perfectly when retrieving documents with the synchronous methods, but they raise an error when running the async methods.
Creating the instances of embeddings and Azure Search
```python
from azure.search.documents.indexes.models import (
SearchField,
SearchFieldDataType,
SimpleField,
)
from langchain_openai import AzureOpenAIEmbeddings
from langchain_community.vectorstores import AzureSearch
fields = [
SimpleField(
name="content",
type=SearchFieldDataType.String,
key=True,
filterable=True,
),
SearchField(
name="content_vector",
type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True,
vector_search_dimensions=1536,
vector_search_profile_name="myHnswProfile",
),
SimpleField(
name="document_name",
type=SearchFieldDataType.String,
key=True,
filterable=True,
)
]
encoder = AzureOpenAIEmbeddings(
azure_endpoint=os.getenv("EMBEDDINGS_OPENAI_ENDPOINT"),
deployment=os.getenv("EMBEDDINGS_DEPLOYMENT_NAME"),
openai_api_version=os.getenv("OPENAI_API_VERSION"),
openai_api_key=os.getenv("AZURE_OPENAI_API_KEY"),
)
vectorstore = AzureSearch(
azure_search_endpoint=os.getenv("AI_SEARCH_ENDPOINT_SECRET"),
azure_search_key=os.getenv("AI_SEARCH_API_KEY"),
index_name=os.getenv("AI_SEARCH_INDEX_NAME_SECRET"),
fields=fields,
embedding_function=encoder,
)
retriever = vectorstore.as_retriever(search_type="hybrid", k=2)
```
Synchronous methods work and return documents
```python
vectorstore.vector_search("what is the capital of France")
retriever.invoke("what is the capital of France")
```
Asynchronous methods, which fail with the error below
```python
await vectorstore.avector_search("what is the capital of France")
await retriever.ainvoke("what is the capital of France")
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[15], line 1
----> 1 await vectorstore.avector_search("what is the capital of France")
File ~/.local/lib/python3.11/site-packages/langchain_community/vectorstores/azuresearch.py:695, in AzureSearch.avector_search(self, query, k, filters, **kwargs)
682 async def avector_search(
683 self, query: str, k: int = 4, *, filters: Optional[str] = None, **kwargs: Any
684 ) -> List[Document]:
685 """
686 Returns the most similar indexed documents to the query text.
687
(...)
693 List[Document]: A list of documents that are most similar to the query text.
694 """
--> 695 docs_and_scores = await self.avector_search_with_score(
696 query, k=k, filters=filters
697 )
698 return [doc for doc, _ in docs_and_scores]
File ~/.local/lib/python3.11/site-packages/langchain_community/vectorstores/azuresearch.py:742, in AzureSearch.avector_search_with_score(self, query, k, filters, **kwargs)
730 """Return docs most similar to query.
731
732 Args:
(...)
739 to the query and score for each
740 """
741 embedding = await self._aembed_query(query)
--> 742 docs, scores, _ = await self._asimple_search(
743 embedding, "", k, filters=filters, **kwargs
744 )
746 return list(zip(docs, scores))
File ~/.local/lib/python3.11/site-packages/langchain_community/vectorstores/azuresearch.py:1080, in AzureSearch._asimple_search(self, embedding, text_query, k, filters, **kwargs)
1066 async with self._async_client() as async_client:
1067 results = await async_client.search(
1068 search_text=text_query,
1069 vector_queries=[
(...)
1078 **kwargs,
1079 )
-> 1080 docs = [
1081 (
1082 _result_to_document(result),
1083 float(result["@search.score"]),
1084 result[FIELDS_CONTENT_VECTOR],
1085 )
1086 async for result in results
1087 ]
1088 if not docs:
1089 raise ValueError(f"No {docs=}")
File ~/.local/lib/python3.11/site-packages/langchain_community/vectorstores/azuresearch.py:1084, in <listcomp>(.0)
1066 async with self._async_client() as async_client:
1067 results = await async_client.search(
1068 search_text=text_query,
1069 vector_queries=[
(...)
1078 **kwargs,
1079 )
1080 docs = [
1081 (
1082 _result_to_document(result),
1083 float(result["@search.score"]),
-> 1084 result[FIELDS_CONTENT_VECTOR],
1085 )
1086 async for result in results
1087 ]
1088 if not docs:
1089 raise ValueError(f"No {docs=}")
KeyError: 'content_vector'
### Description
The async methods for searching documents (at least) do not work and raise an error. Possibly the async client is not being used correctly for the async retrieval methods.
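A stopgap sketch while the async path is broken (assumptions: blocking a worker thread is acceptable, and Python 3.9+ is available for `asyncio.to_thread`; `vectorstore` is the instance created above):
```python
import asyncio

async def asearch(query: str):
    # Run the working synchronous search in a thread so callers can still await it.
    return await asyncio.to_thread(vectorstore.vector_search, query)
```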
### System Info
langchain==0.2.6
langchain-community==0.2.4
langchain-core==0.2.11
langchain-openai==0.1.8
langchain-text-splitters==0.2.1 | AzureSearch vectorstore does not work asyncronously | https://api.github.com/repos/langchain-ai/langchain/issues/24064/comments | 0 | 2024-07-10T11:28:29Z | 2024-07-10T11:31:07Z | https://github.com/langchain-ai/langchain/issues/24064 | 2,400,508,202 | 24,064 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_community.chat_models.azureml_endpoint import AzureMLChatOnlineEndpoint
from langchain_community.llms.azureml_endpoint import ContentFormatterBase
from langchain_community.chat_models.azureml_endpoint import (
AzureMLEndpointApiType,
CustomOpenAIChatContentFormatter,
)
from langchain_core.messages import HumanMessage
chat = AzureMLChatOnlineEndpoint(
endpoint_url="https://....inference.ml.azure.com/score",
endpoint_api_type=AzureMLEndpointApiType.dedicated,
endpoint_api_key="",
content_formatter=CustomOpenAIChatContentFormatter(),
)
# This works, but the output is of type BaseMessage, not AIMessage
response = chat.invoke(
[HumanMessage(content="Hallo, whats your name?")],max_tokens=3000
)
response
# BaseMessage(content="Hello! I'm an AI language model, and I don't have a personal name. You can call me Assistant. How can I help you today?", type='assistant', id='run-36ffdce3-3cee-43e3-af21-505fe6cf61e1-0')
# With a Prompt Template in a LCEL chain it does not work and throws the error
from langchain_core.prompts import ChatPromptTemplate
system = "You are a helpful assistant."
human = "{text}"
prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])
chain = prompt | chat
chain.invoke({"text": "Explain the importance of low latency for LLMs."})
```
### Error Message and Stack Trace (if applicable)
KeyError Traceback (most recent call last)
File [~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:140](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:140), in CustomOpenAIChatContentFormatter.format_response_payload(self, output, api_type)
139 try:
--> 140 choice = json.loads(output)["output"]
141 except (KeyError, IndexError, TypeError) as e:
KeyError: 'output'
The above exception was the direct cause of the following exception:
ValueError Traceback (most recent call last)
[/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb) Zelle 6 line 8
[5](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y160sZmlsZQ%3D%3D?line=4) prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])
[7](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y160sZmlsZQ%3D%3D?line=6) chain = prompt | chat
----> [8](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y160sZmlsZQ%3D%3D?line=7) chain.invoke({"text": "Explain the importance of low latency for LLMs."})
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2507](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2507), in RunnableSequence.invoke(self, input, config, **kwargs)
2505 input = step.invoke(input, config, **kwargs)
2506 else:
-> 2507 input = step.invoke(input, config)
2508 # finish the root run
2509 except BaseException as e:
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:248](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:248), in BaseChatModel.invoke(self, input, config, stop, **kwargs)
237 def invoke(
238 self,
239 input: LanguageModelInput,
(...)
243 **kwargs: Any,
244 ) -> BaseMessage:
245 config = ensure_config(config)
246 return cast(
247 ChatGeneration,
--> 248 self.generate_prompt(
249 [self._convert_input(input)],
250 stop=stop,
251 callbacks=config.get("callbacks"),
252 tags=config.get("tags"),
253 metadata=config.get("metadata"),
254 run_name=config.get("run_name"),
255 run_id=config.pop("run_id", None),
256 **kwargs,
257 ).generations[0][0],
258 ).message
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:677](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:677), in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
669 def generate_prompt(
670 self,
671 prompts: List[PromptValue],
(...)
674 **kwargs: Any,
675 ) -> LLMResult:
676 prompt_messages = [p.to_messages() for p in prompts]
--> 677 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:534](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:534), in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
532 if run_managers:
533 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 534 raise e
535 flattened_outputs = [
536 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item]
537 for res in results
538 ]
539 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:524](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:524), in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
521 for i, m in enumerate(messages):
522 try:
523 results.append(
--> 524 self._generate_with_cache(
525 m,
526 stop=stop,
527 run_manager=run_managers[i] if run_managers else None,
528 **kwargs,
529 )
530 )
531 except BaseException as e:
532 if run_managers:
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:749](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:749), in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
747 else:
748 if inspect.signature(self._generate).parameters.get("run_manager"):
--> 749 result = self._generate(
750 messages, stop=stop, run_manager=run_manager, **kwargs
751 )
752 else:
753 result = self._generate(messages, stop=stop, **kwargs)
File [~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:279](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:279), in AzureMLChatOnlineEndpoint._generate(self, messages, stop, run_manager, **kwargs)
273 request_payload = self.content_formatter.format_messages_request_payload(
274 messages, _model_kwargs, self.endpoint_api_type
275 )
276 response_payload = self.http_client.call(
277 body=request_payload, run_manager=run_manager
278 )
--> 279 generations = self.content_formatter.format_response_payload(
280 response_payload, self.endpoint_api_type
281 )
282 return ChatResult(generations=[generations])
File [~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:142](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:142), in CustomOpenAIChatContentFormatter.format_response_payload(self, output, api_type)
140 choice = json.loads(output)["output"]
141 except (KeyError, IndexError, TypeError) as e:
--> 142 raise ValueError(self.format_error_msg.format(api_type=api_type)) from e
143 return ChatGeneration(
144 message=BaseMessage(
145 content=choice.strip(),
(...)
148 generation_info=None,
149 )
150 if api_type == AzureMLEndpointApiType.serverless:
ValueError: Error while formatting response payload for chat model of type `AzureMLEndpointApiType.dedicated`. Are you using the right formatter for the deployed model and endpoint type?
### Description
Hi,
I want to use the Mixtral-8x7B Instruct model from the Azure Machine Learning catalog, but it is not working in LangChain chains.
Invoking the chat model by itself works; however, the response type is BaseMessage rather than AIMessage (compare it, for example, to a response from ChatOpenAI()).
When using the LLM with a prompt in an LCEL chain, it does not work: it first gives me the `KeyError: 'output'`, and I don't know why this KeyError occurs.
For the message `ValueError: Error while formatting response payload for chat model of type `AzureMLEndpointApiType.dedicated`. Are you using the right formatter for the deployed model and endpoint type?`, I made sure that `dedicated` is correct; if it were wrong, invoking the chat model by itself would not work either, I think.
I tried to convert the LLM response to an AIMessage, but I am not sure how to use the `llm_call` function in my downstream LangChain steps:
```
from langchain_core.messages import HumanMessage, AIMessage, BaseMessage
def convert_to_ai_message(base_message: BaseMessage) -> AIMessage:
return AIMessage(content=base_message.content, id=base_message.id)
def llm_call(message):
res = chat.invoke(message, max_tokens=2000)
new_res = convert_to_ai_message(res)
return new_res
```
So I think this is a bug which has to be fixed.
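A possible interim sketch (assumption: wrapping the call is acceptable until the formatter is fixed): expose `llm_call` as a `RunnableLambda` so it can replace the chat model inside the LCEL chain.
```python
from langchain_core.runnables import RunnableLambda

# The prompt produces a ChatPromptValue; convert it to messages before calling
# llm_call, which returns the AIMessage built above.
llm_runnable = RunnableLambda(lambda prompt_value: llm_call(prompt_value.to_messages()))
chain = prompt | llm_runnable
chain.invoke({"text": "Explain the importance of low latency for LLMs."})
```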
### System Info
langchain 0.2.6 pypi_0 pypi
langchain-chroma 0.1.0 pypi_0 pypi
langchain-community 0.2.6 pypi_0 pypi
langchain-core 0.2.10 pypi_0 pypi
langchain-experimental 0.0.49 pypi_0 pypi
langchain-groq 0.1.5 pypi_0 pypi
langchain-openai 0.1.7 pypi_0 pypi
langchain-postgres 0.0.3 pypi_0 pypi
langchain-text-splitters 0.2.1 pypi_0 pypi | AzureMLEndpoint not working in LCEL: KeyError: 'output' | https://api.github.com/repos/langchain-ai/langchain/issues/24061/comments | 0 | 2024-07-10T10:30:48Z | 2024-07-10T10:34:09Z | https://github.com/langchain-ai/langchain/issues/24061 | 2,400,367,694 | 24,061 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import base64

from langchain_community.llms import VLLMOpenAI
from langchain_core.messages import HumanMessage

llm = VLLMOpenAI(
    openai_api_key="EMPTY",
    openai_api_base=api_base,  # base URL of the vLLM OpenAI-compatible server
    model_name="microsoft/Phi-3-vision-128k-instruct",
    model_kwargs={"stop": ["."]}
)

image_path = "invoice_data_images/Screenshot 2024-05-02 160946_page_1.png"
with open(image_path, "rb") as image_file:
    image_base64 = base64.b64encode(image_file.read()).decode("utf-8")

prompt_1 = "Give me the invoice date from the given image."
messages = [
    HumanMessage(
        content=[
            {"type": "text", "text": prompt_1},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_base64}"}}
        ]
    )
]

response = llm.invoke(messages)
print(response)
### Error Message and Stack Trace (if applicable)
# Error
{
"name": "BadRequestError",
"message": "Error code: 400 - {'object': 'error', 'message': \"This model's maximum context length is 3744 tokens. However, you requested 254457 tokens (254201 in the messages, 256 in the completion). Please reduce the length of the messages or completion.\", 'type': 'BadRequestError', 'param': None, 'code': 400}",
"stack": "---------------------------------------------------------------------------
BadRequestError Traceback (most recent call last)
Cell In[96], line 26
16 messages = [
17 HumanMessage(
18 content=[
(...)
22 )
23 ]
25 # Invoke the model with the message
---> 26 response = llm.invoke(messages)
27 print(response)
File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/langchain_core/language_models/llms.py:346, in BaseLLM.invoke(self, input, config, stop, **kwargs)
336 def invoke(
337 self,
338 input: LanguageModelInput,
(...)
342 **kwargs: Any,
343 ) -> str:
344 config = ensure_config(config)
345 return (
--> 346 self.generate_prompt(
347 [self._convert_input(input)],
348 stop=stop,
349 callbacks=config.get(\"callbacks\"),
350 tags=config.get(\"tags\"),
351 metadata=config.get(\"metadata\"),
352 run_name=config.get(\"run_name\"),
353 run_id=config.pop(\"run_id\", None),
354 **kwargs,
355 )
356 .generations[0][0]
357 .text
358 )
File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/langchain_core/language_models/llms.py:703, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
695 def generate_prompt(
696 self,
697 prompts: List[PromptValue],
(...)
700 **kwargs: Any,
701 ) -> LLMResult:
702 prompt_strings = [p.to_string() for p in prompts]
--> 703 return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/langchain_core/language_models/llms.py:882, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
867 if (self.cache is None and get_llm_cache() is None) or self.cache is False:
868 run_managers = [
869 callback_manager.on_llm_start(
870 dumpd(self),
(...)
880 )
881 ]
--> 882 output = self._generate_helper(
883 prompts, stop, run_managers, bool(new_arg_supported), **kwargs
884 )
885 return output
886 if len(missing_prompts) > 0:
File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/langchain_core/language_models/llms.py:740, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
738 for run_manager in run_managers:
739 run_manager.on_llm_error(e, response=LLMResult(generations=[]))
--> 740 raise e
741 flattened_outputs = output.flatten()
742 for manager, flattened_output in zip(run_managers, flattened_outputs):
File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/langchain_core/language_models/llms.py:727, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
717 def _generate_helper(
718 self,
719 prompts: List[str],
(...)
723 **kwargs: Any,
724 ) -> LLMResult:
725 try:
726 output = (
--> 727 self._generate(
728 prompts,
729 stop=stop,
730 # TODO: support multiple run managers
731 run_manager=run_managers[0] if run_managers else None,
732 **kwargs,
733 )
734 if new_arg_supported
735 else self._generate(prompts, stop=stop)
736 )
737 except BaseException as e:
738 for run_manager in run_managers:
File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/langchain_community/llms/openai.py:464, in BaseOpenAI._generate(self, prompts, stop, run_manager, **kwargs)
452 choices.append(
453 {
454 \"text\": generation.text,
(...)
461 }
462 )
463 else:
--> 464 response = completion_with_retry(
465 self, prompt=_prompts, run_manager=run_manager, **params
466 )
467 if not isinstance(response, dict):
468 # V1 client returns the response in an PyDantic object instead of
469 # dict. For the transition period, we deep convert it to dict.
470 response = response.dict()
File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/langchain_community/llms/openai.py:119, in completion_with_retry(llm, run_manager, **kwargs)
117 \"\"\"Use tenacity to retry the completion call.\"\"\"
118 if is_openai_v1():
--> 119 return llm.client.create(**kwargs)
121 retry_decorator = _create_retry_decorator(llm, run_manager=run_manager)
123 @retry_decorator
124 def _completion_with_retry(**kwargs: Any) -> Any:
File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/openai/_utils/_utils.py:277, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
275 msg = f\"Missing required argument: {quote(missing[0])}\"
276 raise TypeError(msg)
--> 277 return func(*args, **kwargs)
File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/openai/resources/completions.py:528, in Completions.create(self, model, prompt, best_of, echo, frequency_penalty, logit_bias, logprobs, max_tokens, n, presence_penalty, seed, stop, stream, stream_options, suffix, temperature, top_p, user, extra_headers, extra_query, extra_body, timeout)
499 @required_args([\"model\", \"prompt\"], [\"model\", \"prompt\", \"stream\"])
500 def create(
501 self,
(...)
526 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
527 ) -> Completion | Stream[Completion]:
--> 528 return self._post(
529 \"/completions\",
530 body=maybe_transform(
531 {
532 \"model\": model,
533 \"prompt\": prompt,
534 \"best_of\": best_of,
535 \"echo\": echo,
536 \"frequency_penalty\": frequency_penalty,
537 \"logit_bias\": logit_bias,
538 \"logprobs\": logprobs,
539 \"max_tokens\": max_tokens,
540 \"n\": n,
541 \"presence_penalty\": presence_penalty,
542 \"seed\": seed,
543 \"stop\": stop,
544 \"stream\": stream,
545 \"stream_options\": stream_options,
546 \"suffix\": suffix,
547 \"temperature\": temperature,
548 \"top_p\": top_p,
549 \"user\": user,
550 },
551 completion_create_params.CompletionCreateParams,
552 ),
553 options=make_request_options(
554 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
555 ),
556 cast_to=Completion,
557 stream=stream or False,
558 stream_cls=Stream[Completion],
559 )
File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/openai/_base_client.py:1261, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1247 def post(
1248 self,
1249 path: str,
(...)
1256 stream_cls: type[_StreamT] | None = None,
1257 ) -> ResponseT | _StreamT:
1258 opts = FinalRequestOptions.construct(
1259 method=\"post\", url=path, json_data=body, files=to_httpx_files(files), **options
1260 )
-> 1261 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/openai/_base_client.py:942, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
933 def request(
934 self,
935 cast_to: Type[ResponseT],
(...)
940 stream_cls: type[_StreamT] | None = None,
941 ) -> ResponseT | _StreamT:
--> 942 return self._request(
943 cast_to=cast_to,
944 options=options,
945 stream=stream,
946 stream_cls=stream_cls,
947 remaining_retries=remaining_retries,
948 )
File ~/SapidBlue/invoice_data_extraction/lightllm_xinf/venv/lib/python3.8/site-packages/openai/_base_client.py:1041, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
1038 err.response.read()
1040 log.debug(\"Re-raising status error\")
-> 1041 raise self._make_status_error_from_response(err.response) from None
1043 return self._process_response(
1044 cast_to=cast_to,
1045 options=options,
(...)
1048 stream_cls=stream_cls,
1049 )
BadRequestError: Error code: 400 - {'object': 'error', 'message': \"This model's maximum context length is 8192 tokens. However, you requested 254457 tokens (254201 in the messages, 256 in the completion). Please reduce the length of the messages or completion.\", 'type': 'BadRequestError', 'param': None, 'code': 400}"
}
### Description
I am hosting vLLM on an EC2 instance and pointing `VLLMOpenAI` at it. The request fails with the context-length error above even though the text prompt is tiny; the reported count (254,201 tokens in the messages) suggests the base64 image string is being flattened into the text prompt and tokenized, since `VLLMOpenAI` goes through the plain completions endpoint.
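Not a confirmed fix, but a sketch of a possible workaround: route the request through the chat-completions endpoint (via `ChatOpenAI` pointed at the same server) so the image travels as a structured content block instead of being stringified into the completion prompt. This assumes the vLLM server exposes `/v1/chat/completions` for this model; `api_base`, `prompt_1` and `image_base64` are the same variables as in the snippet above.

```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

chat = ChatOpenAI(
    api_key="EMPTY",
    base_url=api_base,  # same vLLM server as above
    model="microsoft/Phi-3-vision-128k-instruct",
)

response = chat.invoke([
    HumanMessage(content=[
        {"type": "text", "text": prompt_1},
        {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_base64}"}},
    ])
])
print(response.content)
```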
### System Info
langchain==0.2.7
langchain-community==0.2.7
langchain-core==0.2.12
langchain-text-splitters==0.2.2 | maximum context length is 8192 tokens. However, you requested 254457 tokens | https://api.github.com/repos/langchain-ai/langchain/issues/24058/comments | 0 | 2024-07-10T09:54:32Z | 2024-07-10T09:57:09Z | https://github.com/langchain-ai/langchain/issues/24058 | 2,400,283,552 | 24,058 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import os
import dotenv
from langchain_community.chat_message_histories import PostgresChatMessageHistory
from langchain_core.messages import AIMessage, HumanMessage
from zhipuai import ZhipuAI
SYSTEM_PROMPT_TEMPLATE = '''You are an AI assistant with memory. Combine the user's current question with the earlier chat history to produce an accurate answer. The whole conversation is in Chinese.

Below is the chat history so far:

[chat history start]
{chat_history}
[chat history end]
'''

MODEL_NAME = 'glm-4-air'

dotenv.load_dotenv()
client = ZhipuAI(api_key=os.getenv("ZHIPUAI_API_KEY"))
chat_message_history = PostgresChatMessageHistory(
    connection_string='postgresql://root:[email protected]:5432/llm_ops',
    table_name='llm_ops_chat_history',
    session_id="llm_ops")

while True:
    query = input('Human:')
    if query == 'bye':
        print('bye bye~')
        break
    system_prompt = SYSTEM_PROMPT_TEMPLATE.format(chat_history=chat_message_history)
    response = client.chat.completions.create(
        model=MODEL_NAME,
        messages=[
            {'role': 'system', 'content': system_prompt},
            {"role": "user", "content": query},
        ],
        stream=True,
    )
    output = ''
    print('AI:', end='', flush=True)
    for chunk in response:
        content = chunk.choices[0].delta.content
        print(content, end='', flush=True)
        output += content
    print()
    # save the chat messages
    chat_message_history.add_messages([HumanMessage(query), AIMessage(output)])
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/zhangshenao/Desktop/llm-ops/llm-ops-backend/langchain_study/7-记忆功能实现/3.使用PostgresChatMessageHistory组件保存聊天历史.py", line 35, in <module>
chat_message_history = PostgresChatMessageHistory(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/_api/deprecation.py", line 203, in warn_if_direct_instance
return wrapped(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_community/chat_message_histories/postgres.py", line 44, in __init__
import psycopg
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/psycopg/__init__.py", line 9, in <module>
from . import pq # noqa: F401 import early to stabilize side effects
^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/psycopg/pq/__init__.py", line 118, in <module>
import_from_libpq()
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/psycopg/pq/__init__.py", line 104, in import_from_libpq
PGcancelConn = module.PGcancelConn
^^^^^^^^^^^^^^^^^^^
AttributeError: module 'psycopg_binary.pq' has no attribute 'PGcancelConn'
Exception ignored in: <function PostgresChatMessageHistory.__del__ at 0x1038f4900>
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_community/chat_message_histories/postgres.py", line 97, in __del__
if self.cursor:
^^^^^^^^^^^
AttributeError: 'PostgresChatMessageHistory' object has no attribute 'cursor'
### Description
I want to use `PostgresChatMessageHistory` to persist my chat message history in PostgreSQL, but it raises the errors above.
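For what it's worth, the `AttributeError: module 'psycopg_binary.pq' has no attribute 'PGcancelConn'` looks like it could be a version mismatch between `psycopg` and `psycopg-binary` (reinstalling with `pip install -U "psycopg[binary]"` keeps them in sync) rather than a LangChain problem per se. The community class is also deprecated; a rough, untested sketch of the replacement from the `langchain-postgres` package, assuming that package is installed and reusing the connection details from the snippet above, might look like this:

```python
import uuid

import psycopg
from langchain_core.messages import AIMessage, HumanMessage
from langchain_postgres import PostgresChatMessageHistory

# Assumed connection details, copied from the snippet above.
conn = psycopg.connect("postgresql://root:[email protected]:5432/llm_ops")

table_name = "llm_ops_chat_history"
PostgresChatMessageHistory.create_tables(conn, table_name)  # one-time setup

history = PostgresChatMessageHistory(
    table_name,
    str(uuid.uuid4()),  # session id
    sync_connection=conn,
)
history.add_messages([HumanMessage("hello"), AIMessage("hi there")])
```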
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:29 PDT 2022; root:xnu-8020.121.3~4/RELEASE_ARM64_T8101
> Python Version: 3.12.3 (v3.12.3:f6650f9ad7, Apr 9 2024, 08:18:47) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.12
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.82
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2 | community: AttributeError: 'PostgresChatMessageHistory' object has no attribute 'cursor' | https://api.github.com/repos/langchain-ai/langchain/issues/24053/comments | 0 | 2024-07-10T07:16:34Z | 2024-07-10T14:45:20Z | https://github.com/langchain-ai/langchain/issues/24053 | 2,399,953,037 | 24,053 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/platforms/huggingface/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
DOC: I am looking for a guide on using LangChain and HuggingFace (langchain-huggingface library) with a serverless API endpoint for automatic speech recognition. Can you please refer me to one?
### Idea or request for content:
_No response_ | DOC: I am looking for a guide on using LangChain and HuggingFace (langchain-huggingface library) with a serverless API endpoint for automatic speech recognition. Can you please refer me to one? | https://api.github.com/repos/langchain-ai/langchain/issues/24052/comments | 0 | 2024-07-10T07:12:33Z | 2024-07-10T07:15:05Z | https://github.com/langchain-ai/langchain/issues/24052 | 2,399,945,107 | 24,052 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:

from langchain_community.document_loaders import UnstructuredExcelLoader

def DocLoader(file):
    try:
        # print(file)
        loader = UnstructuredExcelLoader(file, mode="elements")
        documents = loader.load()
        # print(file)
        return documents
    except Exception as ex:
        print(f"Exception in DocLoader\n Exception: {ex}")
### Error Message and Stack Trace (if applicable)
1. Exception in DocLoader Exception: boolean index did not match indexed array along dimension 1; dimension is 5 but corresponding boolean dimension is 2.
2. Exception: too many indices for array: array is 3-dimensional, but 13 were indexed
### Description
**Problem**:
- The Excel loader script in Databricks keeps encountering errors for unknown reasons.
**Cause**:
- The issue appears to stem from the Excel Loader in LangChain, but it only occurs in Databricks. I tested the same files locally, and they worked without any issues.
**Solutions Tried**:
1. **Library Version**:
I suspected the issue might be due to different versions of LangChain. I ensured that Databricks used the same version as my local environment by using the same requirements.txt file. However, the issue persisted.
2. **Python Environment**:
- I considered that the problem might be due to different Python versions. I created a local environment with the same Python version as Databricks and tested the script. It worked fine locally, so the Python version does not seem to be the cause.
3. **Script Testing**:
- To further debug, I took the same script from Databricks and ran it locally with 5-6 Excel files that caused exceptions in Databricks. I placed these files in a test blob storage account and ran the script. It executed without any issues.
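Since the same code and files behave differently only in Databricks, one more thing worth capturing is the exact dependency versions each environment resolves at runtime. A small, hypothetical diagnostic script (the package names are just those from my requirements.txt) to run in both places and diff the output:

```python
import importlib.metadata as md

# Compare this output between Databricks and the local machine; the two
# exceptions look like shape/indexing errors from a dependency (e.g. numpy),
# so a version drift in one of these packages is a plausible suspect.
for pkg in ["langchain", "langchain-community", "unstructured", "numpy", "pandas", "openpyxl"]:
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "NOT INSTALLED")
```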
### System Info
```
azure-functions
langchain
langchain-core
langchain-community
unstructured
azure-storage-blob
azure-identity
networkx
pandas
openpyxl
openai
azure_search_documents-11.4.0b12-py3-none-any.whl
psutil
```
```
Databricks:
Standard_D16ads_v5 - AMD based processor
```
>
> Local System:
> Processor 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz 1.38 GHz
> Installed RAM 16.0 GB (15.4 GB usable)
> System type 64-bit operating system, x64-based processor
| unstructuredExcelLoader throwing exception in Databricks but working fine in local system | https://api.github.com/repos/langchain-ai/langchain/issues/24050/comments | 0 | 2024-07-10T07:02:55Z | 2024-07-10T07:05:32Z | https://github.com/langchain-ai/langchain/issues/24050 | 2,399,925,929 | 24,050 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code ([example from the docs](https://python.langchain.com/v0.2/docs/integrations/llms/yandex/)):
```shell
pip install -U langchain langchain-community yandexcloud
```
```python
import os

from langchain_community.llms import YandexGPT
from langchain_core.prompts import PromptTemplate
from langchain.chains import LLMChain

os.environ['YC_IAM_TOKEN'] = 'xxxxxxx'
os.environ['YC_FOLDER_ID'] = 'xxxxxxx'
llm = YandexGPT()
template = "What is the capital of {country}?"
prompt = PromptTemplate.from_template(template)
llm_chain = LLMChain(prompt=prompt, llm=llm)
country = "Russia"
llm_chain.invoke(country)
```
raises an error:
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[8], line 3
1 country = "Russia"
----> 3 llm_chain.invoke(country)
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain/chains/base.py:166, in Chain.invoke(self, input, config, **kwargs)
164 except BaseException as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
169 if include_run_info:
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
153 try:
154 self._validate_inputs(inputs)
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
159 )
161 final_outputs: Dict[str, Any] = self.prep_outputs(
162 inputs, outputs, return_only_outputs
163 )
164 except BaseException as e:
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain/chains/llm.py:128, in LLMChain._call(self, inputs, run_manager)
123 def _call(
124 self,
125 inputs: Dict[str, Any],
126 run_manager: Optional[CallbackManagerForChainRun] = None,
127 ) -> Dict[str, str]:
--> 128 response = self.generate([inputs], run_manager=run_manager)
129 return self.create_outputs(response)[0]
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain/chains/llm.py:140, in LLMChain.generate(self, input_list, run_manager)
138 callbacks = run_manager.get_child() if run_manager else None
139 if isinstance(self.llm, BaseLanguageModel):
--> 140 return self.llm.generate_prompt(
141 prompts,
142 stop,
143 callbacks=callbacks,
144 **self.llm_kwargs,
145 )
146 else:
147 results = self.llm.bind(stop=stop, **self.llm_kwargs).batch(
148 cast(List, prompts), {"callbacks": callbacks}
149 )
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_core/language_models/llms.py:703, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
695 def generate_prompt(
696 self,
697 prompts: List[PromptValue],
(...)
700 **kwargs: Any,
701 ) -> LLMResult:
702 prompt_strings = [p.to_string() for p in prompts]
--> 703 return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_core/language_models/llms.py:882, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
867 if (self.cache is None and get_llm_cache() is None) or self.cache is False:
868 run_managers = [
869 callback_manager.on_llm_start(
870 dumpd(self),
(...)
880 )
881 ]
--> 882 output = self._generate_helper(
883 prompts, stop, run_managers, bool(new_arg_supported), **kwargs
884 )
885 return output
886 if len(missing_prompts) > 0:
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_core/language_models/llms.py:740, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
738 for run_manager in run_managers:
739 run_manager.on_llm_error(e, response=LLMResult(generations=[]))
--> 740 raise e
741 flattened_outputs = output.flatten()
742 for manager, flattened_output in zip(run_managers, flattened_outputs):
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_core/language_models/llms.py:727, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
717 def _generate_helper(
718 self,
719 prompts: List[str],
(...)
723 **kwargs: Any,
724 ) -> LLMResult:
725 try:
726 output = (
--> 727 self._generate(
728 prompts,
729 stop=stop,
730 # TODO: support multiple run managers
731 run_manager=run_managers[0] if run_managers else None,
732 **kwargs,
733 )
734 if new_arg_supported
735 else self._generate(prompts, stop=stop)
736 )
737 except BaseException as e:
738 for run_manager in run_managers:
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_core/language_models/llms.py:1431, in LLM._generate(self, prompts, stop, run_manager, **kwargs)
1428 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
1429 for prompt in prompts:
1430 text = (
-> 1431 self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
1432 if new_arg_supported
1433 else self._call(prompt, stop=stop, **kwargs)
1434 )
1435 generations.append([Generation(text=text)])
1436 return LLMResult(generations=generations)
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_community/llms/yandex.py:165, in YandexGPT._call(self, prompt, stop, run_manager, **kwargs)
144 def _call(
145 self,
146 prompt: str,
(...)
149 **kwargs: Any,
150 ) -> str:
151 """Call the Yandex GPT model and return the output.
152
153 Args:
(...)
163 response = YandexGPT("Tell me a joke.")
164 """
--> 165 text = completion_with_retry(self, prompt=prompt)
166 if stop is not None:
167 text = enforce_stop_tokens(text, stop)
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_community/llms/yandex.py:334, in completion_with_retry(llm, **kwargs)
330 @retry_decorator
331 def _completion_with_retry(**_kwargs: Any) -> Any:
332 return _make_request(llm, **_kwargs)
--> 334 return _completion_with_retry(**kwargs)
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/tenacity/__init__.py:336, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
334 copy = self.copy()
335 wrapped_f.statistics = copy.statistics # type: ignore[attr-defined]
--> 336 return copy(f, *args, **kw)
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/tenacity/__init__.py:475, in Retrying.__call__(self, fn, *args, **kwargs)
473 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
474 while True:
--> 475 do = self.iter(retry_state=retry_state)
476 if isinstance(do, DoAttempt):
477 try:
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/tenacity/__init__.py:376, in BaseRetrying.iter(self, retry_state)
374 result = None
375 for action in self.iter_state.actions:
--> 376 result = action(retry_state)
377 return result
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/tenacity/__init__.py:398, in BaseRetrying._post_retry_check_actions.<locals>.<lambda>(rs)
396 def _post_retry_check_actions(self, retry_state: "RetryCallState") -> None:
397 if not (self.iter_state.is_explicit_retry or self.iter_state.retry_run_result):
--> 398 self._add_action_func(lambda rs: rs.outcome.result())
399 return
401 if self.after is not None:
File /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py:449, in Future.result(self, timeout)
447 raise CancelledError()
448 elif self._state == FINISHED:
--> 449 return self.__get_result()
451 self._condition.wait(timeout)
453 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py:401, in Future.__get_result(self)
399 if self._exception:
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
404 self = None
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/tenacity/__init__.py:478, in Retrying.__call__(self, fn, *args, **kwargs)
476 if isinstance(do, DoAttempt):
477 try:
--> 478 result = fn(*args, **kwargs)
479 except BaseException: # noqa: B902
480 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_community/llms/yandex.py:332, in completion_with_retry.<locals>._completion_with_retry(**_kwargs)
330 @retry_decorator
331 def _completion_with_retry(**_kwargs: Any) -> Any:
--> 332 return _make_request(llm, **_kwargs)
File ~/.cache/virtualenvs/myenv/lib/python3.12/site-packages/langchain_community/llms/yandex.py:238, in _make_request(self, prompt)
229 request = CompletionRequest(
230 model_uri=self.model_uri,
231 completion_options=CompletionOptions(
(...)
235 messages=[Message(role="user", text=prompt)],
236 )
237 stub = TextGenerationServiceStub(channel)
--> 238 res = stub.Completion(request, metadata=self._grpc_metadata) # type: ignore[attr-defined]
239 return list(res)[0].alternatives[0].message.text
AttributeError: 'YandexGPT' object has no attribute '_grpc_metadata'
### Description
* I'm trying to use the YandexGPT module from langchain-community to connect to the YandexGPT LLM, following the steps from the [docs](https://python.langchain.com/v0.2/docs/integrations/llms/yandex/), but instead I get the error above.
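Note that the traceback ends inside `YandexGPT._call`, so the chain is probably not the culprit. A minimal sketch that bypasses `LLMChain` entirely; based on the traceback I would expect it to raise the same `AttributeError`, which would narrow the bug down to the `YandexGPT` class itself (e.g. `_grpc_metadata` never being set on the instance):

```python
# Minimal repro sketch that skips LLMChain entirely.
from langchain_community.llms import YandexGPT

llm = YandexGPT()  # YC_IAM_TOKEN / YC_FOLDER_ID set in the environment as above
llm.invoke("What is the capital of Russia?")
```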
### System Info
```
$ python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.0.0: Fri Sep 15 14:42:42 PDT 2023; root:xnu-10002.1.13~1/RELEASE_X86_64
> Python Version: 3.12.0 (v3.12.0:0fb18b02c8, Oct 2 2023, 09:45:56) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.12
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.84
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | AttributeError: 'YandexGPT' object has no attribute '_grpc_metadata' | https://api.github.com/repos/langchain-ai/langchain/issues/24049/comments | 0 | 2024-07-10T06:59:34Z | 2024-07-31T21:18:34Z | https://github.com/langchain-ai/langchain/issues/24049 | 2,399,920,012 | 24,049 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain.js documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain.js rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt")
docs = loader.load()
### Error Message and Stack Trace (if applicable)
<details>
loader = UnstructuredFileLoader("./state_of_the_union.txt")
loader.load()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/kennethliu/newenv/lib/python3.10/site-packages/langchain_core/document_loaders/base.py", line 30, in load
return list(self.lazy_load())
File "/home/kennethliu/newenv/lib/python3.10/site-packages/langchain_community/document_loaders/unstructured.py", line 89, in lazy_load
elements = self._get_elements()
File "/home/kennethliu/newenv/lib/python3.10/site-packages/langchain_community/document_loaders/unstructured.py", line 181, in _get_elements
return partition(filename=self.file_path, **self.unstructured_kwargs)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/auto.py", line 464, in partition
elements = partition_text(
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/text.py", line 101, in partition_text
return _partition_text(
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/documents/elements.py", line 593, in wrapper
elements = func(*args, **kwargs)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/file_utils/filetype.py", line 626, in wrapper
elements = func(*args, **kwargs)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/file_utils/filetype.py", line 582, in wrapper
elements = func(*args, **kwargs)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/chunking/dispatch.py", line 74, in wrapper
elements = func(*args, **kwargs)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/text.py", line 180, in _partition_text
file_content = _split_by_paragraph(
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/text.py", line 360, in _split_by_paragraph
_split_content_to_fit_max(
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/text.py", line 392, in _split_content_to_fit_max
sentences = sent_tokenize(content)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/nlp/tokenize.py", line 136, in sent_tokenize
_download_nltk_packages_if_not_present()
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/nlp/tokenize.py", line 121, in _download_nltk_packages_if_not_present
tagger_available = check_for_nltk_package(
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/nlp/tokenize.py", line 112, in check_for_nltk_package
nltk.find(f"{package_category}/{package_name}", paths=paths)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/data.py", line 555, in find
return find(modified_name, paths)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/data.py", line 542, in find
return ZipFilePathPointer(p, zipentry)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/compat.py", line 41, in _decorator
return init_func(*args, **kwargs)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/data.py", line 394, in __init__
zipfile = OpenOnDemandZipFile(os.path.abspath(zipfile))
File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/compat.py", line 41, in _decorator
return init_func(*args, **kwargs)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/data.py", line 935, in __init__
zipfile.ZipFile.__init__(self, filename)
File "/usr/lib/python3.10/zipfile.py", line 1269, in __init__
self._RealGetContents()
File "/usr/lib/python3.10/zipfile.py", line 1336, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
>>> loader = UnstructuredFileLoader("./state_of_the_union.txt")
>>> loader = UnstructuredFileLoader(r"./state_of_the_union.txt")
>>> loader.load()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/kennethliu/newenv/lib/python3.10/site-packages/langchain_core/document_loaders/base.py", line 30, in load
return list(self.lazy_load())
File "/home/kennethliu/newenv/lib/python3.10/site-packages/langchain_community/document_loaders/unstructured.py", line 89, in lazy_load
elements = self._get_elements()
File "/home/kennethliu/newenv/lib/python3.10/site-packages/langchain_community/document_loaders/unstructured.py", line 181, in _get_elements
return partition(filename=self.file_path, **self.unstructured_kwargs)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/auto.py", line 464, in partition
elements = partition_text(
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/text.py", line 101, in partition_text
return _partition_text(
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/documents/elements.py", line 593, in wrapper
elements = func(*args, **kwargs)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/file_utils/filetype.py", line 626, in wrapper
elements = func(*args, **kwargs)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/file_utils/filetype.py", line 582, in wrapper
elements = func(*args, **kwargs)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/chunking/dispatch.py", line 74, in wrapper
elements = func(*args, **kwargs)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/text.py", line 180, in _partition_text
file_content = _split_by_paragraph(
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/text.py", line 360, in _split_by_paragraph
_split_content_to_fit_max(
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/partition/text.py", line 392, in _split_content_to_fit_max
sentences = sent_tokenize(content)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/nlp/tokenize.py", line 136, in sent_tokenize
_download_nltk_packages_if_not_present()
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/nlp/tokenize.py", line 121, in _download_nltk_packages_if_not_present
tagger_available = check_for_nltk_package(
File "/home/kennethliu/newenv/lib/python3.10/site-packages/unstructured/nlp/tokenize.py", line 112, in check_for_nltk_package
nltk.find(f"{package_category}/{package_name}", paths=paths)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/data.py", line 555, in find
return find(modified_name, paths)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/data.py", line 542, in find
return ZipFilePathPointer(p, zipentry)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/compat.py", line 41, in _decorator
return init_func(*args, **kwargs)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/data.py", line 394, in __init__
zipfile = OpenOnDemandZipFile(os.path.abspath(zipfile))
File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/compat.py", line 41, in _decorator
return init_func(*args, **kwargs)
File "/home/kennethliu/newenv/lib/python3.10/site-packages/nltk/data.py", line 935, in __init__
zipfile.ZipFile.__init__(self, filename)
File "/usr/lib/python3.10/zipfile.py", line 1269, in __init__
self._RealGetContents()
File "/usr/lib/python3.10/zipfile.py", line 1336, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
</details>
### Description
Something goes wrong when loading the .txt file: `loader.load()` fails with `zipfile.BadZipFile: File is not a zip file`. Judging from the traceback, the error comes from NLTK trying to open one of its data packages under `nltk_data` (apparently a corrupted or partially downloaded zip), not from the text file itself.
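A possible recovery sketch, untested, that simply re-downloads the NLTK packages unstructured's tokenizer checks for (the `~/nltk_data` path below is the default NLTK location and is an assumption):

```python
import nltk

# If a package zip under ~/nltk_data is corrupt, delete it first, e.g.:
#   rm -rf ~/nltk_data/taggers ~/nltk_data/tokenizers
# then re-download the packages that the unstructured tokenizer looks for.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")
```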
### System Info
ubuntu 22.01 | something wrong when i upload a .txt | https://api.github.com/repos/langchain-ai/langchain/issues/24101/comments | 1 | 2024-07-10T05:14:20Z | 2024-07-12T15:55:33Z | https://github.com/langchain-ai/langchain/issues/24101 | 2,401,986,680 | 24,101 |