issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from common_base.llm_base import embedding_model
from langchain_community.vectorstores.redis import Redis
from langchain_community.vectorstores.redis.filters import RedisFilter

redis_url = "redis://:[email protected]:6379"

rds: Redis = Redis.from_existing_index(
    embedding_model,
    redis_url=redis_url,
    index_name="teacher_report",
    schema="teacher_report/teacher_report.yaml",
)
email_filter = RedisFilter.text('teacher_email') == '[email protected]'
asdf = rds.similarity_search(query='asdf', k=3, filter=email_filter)
print(asdf)
```
```yaml
text:
- name: teacher_email
  no_index: false
  no_stem: false
  sortable: false
  weight: 1
  withsuffixtrie: false
- name: clinic_title
  no_index: false
  no_stem: false
  sortable: false
  weight: 1
  withsuffixtrie: false
- name: content
  no_index: false
  no_stem: false
  sortable: false
  weight: 1
  withsuffixtrie: false
vector:
- algorithm: FLAT
  block_size: 1000
  datatype: FLOAT32
  dims: 1536
  distance_metric: COSINE
  initial_cap: 20000
  name: content_vector
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/vectorstores/redis/base.py", line 946, in similarity_search_by_vector
results = self.client.ft(self.index_name).search(redis_query, params_dict) # type: ignore # noqa: E501
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/redis/commands/search/commands.py", line 501, in search
res = self.execute_command(SEARCH_CMD, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/redis/client.py", line 543, in execute_command
return conn.retry.call_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/redis/retry.py", line 46, in call_with_retry
return do()
^^^^
File "/opt/homebrew/lib/python3.11/site-packages/redis/client.py", line 544, in <lambda>
lambda: self._send_command_parse_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/redis/client.py", line 520, in _send_command_parse_response
return self.parse_response(conn, command_name, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/redis/client.py", line 560, in parse_response
response = connection.read_response()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/redis/connection.py", line 536, in read_response
raise response
redis.exceptions.ResponseError: Syntax error at offset 22 near path
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/jimmykim/workspace/deus-path-machina/test2.py", line 15, in <module>
asdf = rds.similarity_search(query='asdf',k=3, fetch_k=10, filter=email_filter)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/vectorstores/redis/base.py", line 882, in similarity_search
return self.similarity_search_by_vector(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/vectorstores/redis/base.py", line 950, in similarity_search_by_vector
raise ValueError(
ValueError: Query failed with syntax error. This is likely due to malformation of filter, vector, or query argument
```
### Description
I wrote a simple code sample that filters and searches data using the Redis vector store. Despite creating a filter with RedisFilterExpression and passing it as a parameter as described in the documentation, I encounter a syntax error. When I do not pass the filter, the search works correctly across all vector data. I suspect this is a bug; what do you think?
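One thing worth checking (an assumption, not verified against this setup: RediSearch tokenizes TEXT fields on punctuation such as `@` and `.`, so an unescaped email address can produce an invalid query) is to escape the filter value before building the expression. The email below is a hypothetical placeholder:
```python
# Hypothetical value; RediSearch token separators are escaped (assumption).
raw_email = "teacher@example.com"
escaped_email = raw_email.replace("@", "\\@").replace(".", "\\.")
email_filter = RedisFilter.text("teacher_email") == escaped_email
```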
### System Info
langchain==0.1.12
langchain-community==0.0.28
langchain-core==0.1.32
langchain-experimental==0.0.45
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
MAC OS M1
Python 3.11.8 | Why redis vectorstore filter parameter make syntax error | https://api.github.com/repos/langchain-ai/langchain/issues/19323/comments | 1 | 2024-03-20T10:02:16Z | 2024-06-27T16:08:19Z | https://github.com/langchain-ai/langchain/issues/19323 | 2,197,130,983 | 19,323 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# position: langchain_openai/chat_models/base.py
def _convert_delta_to_message_chunk(
    _dict: Mapping[str, Any], default_class: Type[BaseMessageChunk]
) -> BaseMessageChunk:
    role = cast(str, _dict.get("role"))
    content = cast(str, _dict.get("content") or "")
    additional_kwargs: Dict = {}
    if _dict.get("function_call"):
        function_call = dict(_dict["function_call"])
        if "name" in function_call and function_call["name"] is None:
            function_call["name"] = ""
        additional_kwargs["function_call"] = function_call
    if _dict.get("tool_calls"):
        additional_kwargs["tool_calls"] = _dict["tool_calls"]
    if role == "user" or default_class == HumanMessageChunk:
        return HumanMessageChunk(content=content)
    elif role == "assistant" or default_class == AIMessageChunk:
        return AIMessageChunk(content=content, additional_kwargs=additional_kwargs)
    elif role == "system" or default_class == SystemMessageChunk:
        return SystemMessageChunk(content=content)
    elif role == "function" or default_class == FunctionMessageChunk:
        return FunctionMessageChunk(content=content, name=_dict["name"])
    elif role == "tool" or default_class == ToolMessageChunk:
        return ToolMessageChunk(content=content, tool_call_id=_dict["tool_call_id"])
    elif role or default_class == ChatMessageChunk:
        return ChatMessageChunk(content=content, role=role)
    else:
        return default_class(content=content)  # type: ignore
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/Users/sunny/Documents/Codes/ai/themis/api/utils.py", line 15, in wrap_done
await fn
File "/Users/sunny/.local/share/virtualenvs/themis-l6RndCmc/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 154, in awarning_emitting_wrapper
return await wrapped(*args, **kwargs)
File "/Users/sunny/.local/share/virtualenvs/themis-l6RndCmc/lib/python3.10/site-packages/langchain/chains/base.py", line 428, in acall
return await self.ainvoke(
File "/Users/sunny/.local/share/virtualenvs/themis-l6RndCmc/lib/python3.10/site-packages/langchain/chains/base.py", line 212, in ainvoke
raise e
File "/Users/sunny/.local/share/virtualenvs/themis-l6RndCmc/lib/python3.10/site-packages/langchain/chains/base.py", line 203, in ainvoke
await self._acall(inputs, run_manager=run_manager)
File "/Users/sunny/.local/share/virtualenvs/themis-l6RndCmc/lib/python3.10/site-packages/langchain/chains/llm.py", line 275, in _acall
response = await self.agenerate([inputs], run_manager=run_manager)
File "/Users/sunny/.local/share/virtualenvs/themis-l6RndCmc/lib/python3.10/site-packages/langchain/chains/llm.py", line 142, in agenerate
return await self.llm.agenerate_prompt(
File "/Users/sunny/.local/share/virtualenvs/themis-l6RndCmc/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 581, in agenerate_prompt
return await self.agenerate(
File "/Users/sunny/.local/share/virtualenvs/themis-l6RndCmc/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 541, in agenerate
raise exceptions[0]
File "/Users/sunny/opt/anaconda3/envs/py3.10/lib/python3.10/asyncio/tasks.py", line 232, in __step
result = coro.send(None)
File "/Users/sunny/.local/share/virtualenvs/themis-l6RndCmc/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 645, in _agenerate_with_cache
result = await self._agenerate(
File "/Users/sunny/.local/share/virtualenvs/themis-l6RndCmc/lib/python3.10/site-packages/langchain_openai/chat_models/base.py", line 553, in _agenerate
return await agenerate_from_stream(stream_iter)
File "/Users/sunny/.local/share/virtualenvs/themis-l6RndCmc/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 84, in agenerate_from_stream
async for chunk in stream:
File "/Users/sunny/.local/share/virtualenvs/themis-l6RndCmc/lib/python3.10/site-packages/langchain_openai/chat_models/base.py", line 521, in _astream
chunk = _convert_delta_to_message_chunk(
File "/Users/sunny/.local/share/virtualenvs/themis-l6RndCmc/lib/python3.10/site-packages/langchain_openai/chat_models/base.py", line 176, in _convert_delta_to_message_chunk
role = cast(str, _dict.get("role"))
AttributeError: 'NoneType' object has no attribute 'get'
```
### Description
I'm using langchain_openai to call Azure's streaming API, but it raises the error above. In the Azure API response, I noticed this chunk:
{'delta': None, 'finish_reason': None, 'index': 0, 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}, 'content_filter_offsets': {'check_offset': 252, 'start_offset': 252, 'end_offset': 263}}
The delta is None, so the `_dict` parameter received by `_convert_delta_to_message_chunk` in langchain_openai/chat_models/base.py is None, which leads to the error.
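A minimal defensive sketch of a possible fix (an assumption: treating a `None` delta as an empty dict is acceptable and yields a harmless empty chunk for Azure's content-filter-only messages). The guard would sit at the top of `_convert_delta_to_message_chunk`:
```python
# Possible guard (assumption: an empty dict is a safe stand-in for delta=None).
if _dict is None:
    _dict = {}
role = cast(str, _dict.get("role"))
```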
### System Info
langchain==0.1.10
langchain-community==0.0.25
langchain-core==0.1.32
langchain-experimental==0.0.48
langchain-google-genai==0.0.11
langchain-openai==0.0.7
langchain-text-splitters==0.0.1
| langchain_openai - bug: 'NoneType' object has no attribute 'get' | https://api.github.com/repos/langchain-ai/langchain/issues/19318/comments | 0 | 2024-03-20T07:41:16Z | 2024-03-20T07:43:02Z | https://github.com/langchain-ai/langchain/issues/19318 | 2,196,869,387 | 19,318 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
On the web page https://python.langchain.com/docs/use_cases/question_answering/quickstart, when a user clicks "Open in Colab", the notebook is not present at
https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/question_answering/quickstart.ipynb
### Idea or request for content:
Please fix the link. | colab link is not working to quickstart.ipynb | https://api.github.com/repos/langchain-ai/langchain/issues/19304/comments | 0 | 2024-03-20T03:25:08Z | 2024-06-26T16:07:55Z | https://github.com/langchain-ai/langchain/issues/19304 | 2,196,594,832 | 19,304 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI
class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")


model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0).configurable_alternatives(
    ConfigurableField(id="chat_model"),
    default_key="gpt-3.5-turbo",
    gpt_4=ChatOpenAI(model="gpt-4-0125-preview"),
).with_structured_output(Joke)
```
### Error Message and Stack Trace (if applicable)
AttributeError: 'RunnableConfigurableAlternatives' object has no attribute 'with_structured_output'
### Description
Need to think through how to support cases like this without overwriting methods like `with_structured_output`.
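A possible interim workaround (a sketch, assuming each alternative can be bound to the schema before the runnables are made configurable, since `with_structured_output` returns an ordinary runnable):
```python
base = ChatOpenAI(model="gpt-3.5-turbo", temperature=0).with_structured_output(Joke)
model = base.configurable_alternatives(
    ConfigurableField(id="chat_model"),
    default_key="gpt-3.5-turbo",
    gpt_4=ChatOpenAI(model="gpt-4-0125-preview").with_structured_output(Joke),
)
```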
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 22.4.0: Mon Mar 6 20:59:58 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T6020
> Python Version: 3.11.7 (main, Feb 12 2024, 12:44:48) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Package Information
-------------------
> langchain_core: 0.1.32
> langchain: 0.1.12
> langchain_community: 0.0.28
> langsmith: 0.1.26
> langchain_fireworks: 0.1.1
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
> langserve: 0.0.45
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | core: binding sometimes overrides methods | https://api.github.com/repos/langchain-ai/langchain/issues/19279/comments | 0 | 2024-03-19T16:04:35Z | 2024-06-25T16:41:42Z | https://github.com/langchain-ai/langchain/issues/19279 | 2,195,379,111 | 19,279 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_community.llms import Ollama
llm = Ollama(base_url = )
llm.invoke("Tell me a joke")
```
### Error Message and Stack Trace (if applicable)
```
\python\python39\lib\site-packages (from dataclasses-json<0.7,>=0.5.7->langchain_community) (0.9.0)
Requirement already satisfied: pydantic<3,>=1 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from langchain-core<0.2.0,>=0.1.31->langchain_community) (1.10.14)
Requirement already satisfied: anyio<5,>=3 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from langchain-core<0.2.0,>=0.1.31->langchain_community) (4.3.0)
Requirement already satisfied: jsonpatch<2.0,>=1.33 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from langchain-core<0.2.0,>=0.1.31->langchain_community) (1.33)
Requirement already satisfied: packaging<24.0,>=23.2 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from langchain-core<0.2.0,>=0.1.31->langchain_community) (23.2)Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from requests<3,>=2->langchain_community) (2.2.1)
Requirement already satisfied: idna<4,>=2.5 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from requests<3,>=2->langchain_community) (3.6)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from requests<3,>=2->langchain_community) (2024.2.2)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from requests<3,>=2->langchain_community) (3.3.2)
Requirement already satisfied: aiosignal>=1.1.2 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from aiohttp<4.0.0,>=3.8.3->langchain_community) (1.3.1)
Requirement already satisfied: async-timeout<5.0,>=4.0; python_version < "3.11" in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from aiohttp<4.0.0,>=3.8.3->langchain_community) (4.0.3)
Requirement already satisfied: yarl<2.0,>=1.0 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from aiohttp<4.0.0,>=3.8.3->langchain_community) (1.9.4)
Requirement already satisfied: frozenlist>=1.1.1 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from aiohttp<4.0.0,>=3.8.3->langchain_community) (1.4.1)
Requirement already satisfied: multidict<7.0,>=4.5 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from aiohttp<4.0.0,>=3.8.3->langchain_community) (6.0.5)
Requirement already satisfied: attrs>=17.3.0 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from aiohttp<4.0.0,>=3.8.3->langchain_community) (23.2.0)
Requirement already satisfied: orjson<4.0.0,>=3.9.14 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from langsmith<0.2.0,>=0.1.0->langchain_community) (3.9.15)
Requirement already satisfied: greenlet!=0.4.17; python_version >= "3" and (platform_machine ==
"aarch64" or (platform_machine == "ppc64le" or (platform_machine == "x86_64" or (platform_machine == "amd64" or (platform_machine == "AMD64" or (platform_machine == "win32" or platform_machine == "WIN32")))))) in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from SQLAlchemy<3,>=1.4->langchain_community) (3.0.3)
Requirement already satisfied: typing-extensions>=3.7.4 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from typing-inspect<1,>=0.4.0->dataclasses-json<0.7,>=0.5.7->langchain_community) (4.10.0)
Requirement already satisfied: mypy-extensions>=0.3.0 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from typing-inspect<1,>=0.4.0->dataclasses-json<0.7,>=0.5.7->langchain_community) (1.0.0)
Requirement already satisfied: sniffio>=1.1 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from anyio<5,>=3->langchain-core<0.2.0,>=0.1.31->langchain_community) (1.3.1)
Requirement already satisfied: exceptiongroup>=1.0.2; python_version < "3.11" in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from anyio<5,>=3->langchain-core<0.2.0Requirement already satisfied: jsonpointer>=1.9 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from jsonpatch<2.0,>=1.33->langchain-core<0.2.0,>=0.1.31->langchain_community) (2.4)
WARNING: You are using pip version 20.2.3; however, version 24.0 is available.
You should consider upgrading via the 'c:\users\maste\appdata\local\programs\python\python39\python.exe -m pip install --upgrade pip' command.
PS C:\Users\maste> pip install Ollama
Collecting Ollama
Downloading ollama-0.1.7-py3-none-any.whl (9.4 kB)
Collecting httpx<0.26.0,>=0.25.2
Downloading httpx-0.25.2-py3-none-any.whl (74 kB)
|████████████████████████████████| 74 kB 2.6 MB/s
Requirement already satisfied: anyio in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from httpx<0.26.0,>=0.25.2->Ollama) (4.3.0)
Requirement already satisfied: certifi in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from httpx<0.26.0,>=0.25.2->Ollama) (2024.2.2)
Requirement already satisfied: sniffio in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from httpx<0.26.0,>=0.25.2->Ollama) (1.3.1)
Collecting httpcore==1.*
Downloading httpcore-1.0.4-py3-none-any.whl (77 kB)
|████████████████████████████████| 77 kB ...
Requirement already satisfied: idna in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from httpx<0.26.0,>=0.25.2->Ollama) (3.6)
Requirement already satisfied: exceptiongroup>=1.0.2; python_version < "3.11" in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from anyio->httpx<0.26.0,>=0.25.2->Ollama) (1.2.0)
Requirement already satisfied: typing-extensions>=4.1; python_version < "3.11" in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from anyio->httpx<0.26.0,>=0.25.2->Ollama) (4.10.0)
Collecting h11<0.15,>=0.13
|████████████████████████████████| 58 kB ...
Installing collected packages: h11, httpcore, httpx, Ollama
Successfully installed Ollama-0.1.7 h11-0.14.0 httpcore-1.0.4 httpx-0.25.2
WARNING: You are using pip version 20.2.3; however, version 24.0 is available.
You should consider upgrading via the 'c:\users\maste\appdata\local\programs\python\python39\python.exe -m pip install --upgrade pip' command.
PS C:\Users\maste> pip install ollama
Requirement already satisfied: ollama in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (0.1.7)
Requirement already satisfied: httpx<0.26.0,>=0.25.2 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from ollama) (0.25.2)
Requirement already satisfied: httpcore==1.* in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from httpx<0.26.0,>=0.25.2->ollama) (1.0.4)
Requirement already satisfied: sniffio in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from httpx<0.26.0,>=0.25.2->ollama) (1.3.1)
Requirement already satisfied: anyio in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from httpx<0.26.0,>=0.25.2->ollama) (4.3.0)
Requirement already satisfied: certifi in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from httpx<0.26.0,>=0.25.2->ollama) (2024.2.2)
Requirement already satisfied: idna in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from httpx<0.26.0,>=0.25.2->ollama) (3.6)
Requirement already satisfied: h11<0.15,>=0.13 in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from httpcore==1.*->httpx<0.26.0,>=0.25.2->ollama) (0.14.0)
Requirement already satisfied: exceptiongroup>=1.0.2; python_version < "3.11" in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from anyio->httpx<0.26.0,>=0.25.2->ollRequirement already satisfied: typing-extensions>=4.1; python_version < "3.11" in c:\users\maste\appdata\local\programs\python\python39\lib\site-packages (from anyio->httpx<0.26.0,>=0.25.2->ollama) (4.10.0)
WARNING: You are using pip version 20.2.3; however, version 24.0 is available.
You should consider upgrading via the 'c:\users\maste\appdata\local\programs\python\python39\python.exe -m pip install --upgrade pip' command.
PS C:\Users\maste> python hello.py
Traceback (most recent call last):
File "C:\Users\maste\hello.py", line 5, in <module>
llm.invoke("Tell me a joke")
File "C:\Users\maste\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\language_models\llms.py", line 246, in invoke
self.generate_prompt(
File "C:\Users\maste\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\language_models\llms.py", line 541, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "C:\Users\maste\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\language_models\llms.py", line 671, in generate
CallbackManager.configure(
File "C:\Users\maste\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\callbacks\manager.py", line 1443, in configure
return _configure(
File "C:\Users\maste\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\callbacks\manager.py", line 1950, in _configure
debug = _get_debug()
PS C:\Users\maste> python hello.py
Traceback (most recent call last):
File "C:\Users\maste\hello.py", line 19, in <module>
print(chain.invoke({"topic": "Space travel"}))
File "C:\Users\maste\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\runnables\base.py", line 2209, in invoke
callback_manager = get_callback_manager_for_config(config)
File "C:\Users\maste\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\runnables\config.py", line 381, in get_callback_manager_for_config
return CallbackManager.configure(
File "C:\Users\maste\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\callbacks\manager.py", line 1443, in configure
return _configure(
File "C:\Users\maste\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\callbacks\manager.py", line 1950, in _configure
debug = _get_debug()
File "C:\Users\maste\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\callbacks\manager.py", line 57, in _get_debug
return get_debug()
File "C:\Users\maste\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\globals\__init__.py", line 129, in get_debug
old_debug = langchain.debug
AttributeError: module 'langchain' has no attribute 'debug'
```
### Description
All I am trying to do is use Ollama via LangChain for my web app. The invoke call always fails with an AttributeError saying that the `debug` attribute does not exist.
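A hedged workaround while this is investigated (an assumption: the error comes from mismatched `langchain` and `langchain-core` versions leaving the module-level `debug` flag unset) is to initialize the global flags explicitly before invoking, and to upgrade `langchain`, `langchain-core`, and `langchain-community` together:
```python
# Assumed workaround: set the globals that _get_debug() reads.
from langchain.globals import set_debug, set_verbose

set_debug(False)
set_verbose(False)
```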
### System Info
from langchain_community.llms import Ollama
llm = Ollama(base_url = )
llm.invoke("Tell me a joke") | AttributeError: module 'langchain' has no attribute 'debug' | https://api.github.com/repos/langchain-ai/langchain/issues/19278/comments | 4 | 2024-03-19T16:04:33Z | 2024-08-02T06:25:55Z | https://github.com/langchain-ai/langchain/issues/19278 | 2,195,379,015 | 19,278 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
# Goal
Allow instantiating language models with a specific cache provided as an init parameter. This will bring language models to feature parity with chat models with respect to caching behavior.
This is the `cache` parameter: https://github.com/langchain-ai/langchain/blob/50f93d86ec56a92e1d0f5b390514d9a67a95d083/libs/core/langchain_core/language_models/base.py#L82-L82
Implementation is required in BaseLLM for both sync and async paths: https://github.com/langchain-ai/langchain/blob/50f93d86ec56a92e1d0f5b390514d9a67a95d083/libs/core/langchain_core/language_models/llms.py#L737-L737
Here's a reference implementation for chat models: https://github.com/langchain-ai/langchain/pull/17386
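For illustration, the target usage might look like the sketch below (`FakeListLLM` is just an arbitrary `BaseLLM` subclass chosen for the example; the point is the `cache` init parameter accepting a `BaseCache` instance):
```python
from langchain_community.cache import InMemoryCache
from langchain_community.llms.fake import FakeListLLM

# Desired behavior: a per-instance cache instead of the global one.
llm = FakeListLLM(responses=["hello"], cache=InMemoryCache())
llm.invoke("hi")
llm.invoke("hi")  # second identical call should be served from the instance cache
```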
## Acceptance criteria
* The PR must include unit tests that provide coverage of the various caching configurations. You can look at the reference PR for Chat Models which covers the relevant scenarios. | langchain-core: Allow passing local cache to language models | https://api.github.com/repos/langchain-ai/langchain/issues/19276/comments | 3 | 2024-03-19T15:36:18Z | 2024-04-05T15:19:56Z | https://github.com/langchain-ai/langchain/issues/19276 | 2,195,311,866 | 19,276 |
[
"hwchase17",
"langchain"
] | I get an error when following the example notebook
https://python.langchain.com/docs/use_cases/extraction/quickstart
The script below runs fine if the schema is set to `Person`
```
runnable = prompt | llm.with_structured_output(schema=Person)
```
However, it fails when the schema is set to `Data`:
```python
runnable = prompt | llm.with_structured_output(schema=Data)
```
### Version
```python
import langchain
from google.cloud import aiplatform
print(f"LangChain version: {langchain.__version__}")
print(f"Vertex AI SDK version: {aiplatform.__version__}")
```
LangChain version: 0.1.11
Vertex AI SDK version: 1.43.0
### Script
```python
from typing import List, Optional

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_google_vertexai import ChatVertexAI, HarmCategory, HarmBlockThreshold


class Person(BaseModel):
    """Information about a person."""

    name: Optional[str] = Field(..., description="The name of the person")
    hair_color: Optional[str] = Field(
        ..., description="The color of the person's hair if known"
    )
    height_in_meters: Optional[str] = Field(
        ..., description="Height measured in meters"
    )


class Data(BaseModel):
    """Extracted data about people."""

    people: List[Person]


prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are an expert extraction algorithm. "
            "Only extract relevant information from the text. "
            "If you do not know the value of an attribute asked to extract, "
            "return null for the attribute's value.",
        ),
        # Please see the how-to about improving performance with
        # reference examples.
        # MessagesPlaceholder('examples'),
        ("human", "{text}"),
    ]
)

llm = ChatVertexAI(
    model_name="gemini-pro",
    temperature=0,
    safety_settings={
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    },
    convert_system_message_to_human=True,
)
text = "My name is Jeff, my hair is black and i am 6 feet tall. Anna has the same color hair as me."
runnable = prompt | llm.with_structured_output(schema=Data)
runnable.invoke({"text": text})
```
### Error
```
File ~/scratch/conda/envs/langchain-vertexai/lib/python3.10/site-packages/langchain_google_vertexai/functions_utils.py:160, in PydanticFunctionsOutputParser.parse_result(self, result, partial)
ValidationError: 2 validation errors for Data
people -> 0 -> hair_color
field required (type=value_error.missing)
people -> 1 -> hair_color
field required (type=value_error.missing)
```
### Other questions
Is there an analog function to `convert_to_openai_tool` called `convert_to_vertexai_tool`?
```python
import json
from langchain_core.utils.function_calling import convert_to_openai_tool
print(json.dumps(convert_to_openai_tool(Data), indent=2))
```
Output:
```json
{
  "type": "function",
  "function": {
    "name": "Data",
    "description": "Extracted data about people.",
    "parameters": {
      "type": "object",
      "properties": {
        "people": {
          "type": "array",
          "items": {
            "description": "Information about a person.",
            "type": "object",
            "properties": {
              "name": {
                "description": "The name of the person",
                "type": "string"
              },
              "hair_color": {
                "description": "The color of the person's hair if known",
                "type": "string"
              },
              "height_in_meters": {
                "description": "Height measured in meters",
                "type": "string"
              }
            },
            "required": [
              "name",
              "hair_color",
              "height_in_meters"
            ]
          }
        }
      },
      "required": [
        "people"
      ]
    }
  }
}
```
The `Person` class parameters were set to optional.
Why are the `Person` parameters set to required in the OpenAI tool?
How can I see the generated Schema used in the tool call?
https://ai.google.dev/api/python/google/ai/generativelanguage/Schema
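On the required-vs-optional question above, a likely explanation (an assumption about pydantic v1 behavior, not verified against this code path): `Field(...)` uses Ellipsis as the default, which marks a field as required even when its annotation is `Optional`. Declaring an explicit default should keep the fields optional in the generated schema:
```python
# Sketch: explicit None defaults so the fields stay optional in the tool schema.
class Person(BaseModel):
    """Information about a person."""

    name: Optional[str] = Field(default=None, description="The name of the person")
    hair_color: Optional[str] = Field(
        default=None, description="The color of the person's hair if known"
    )
    height_in_meters: Optional[str] = Field(
        default=None, description="Height measured in meters"
    )
```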
_Originally posted by @schinto in https://github.com/langchain-ai/langchain/discussions/18975#discussioncomment-8840417_ | Multiple entities extraction in quickstart demo fails with ChatVertexAI | https://api.github.com/repos/langchain-ai/langchain/issues/19272/comments | 0 | 2024-03-19T14:20:02Z | 2024-06-25T16:13:27Z | https://github.com/langchain-ai/langchain/issues/19272 | 2,195,111,331 | 19,272 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os

from langchain.llms import OpenAI
from langchain.utilities import SQLDatabase

# ... Your database credentials ...
os.environ["OPENAI_API_KEY"] = "sk-myapikey"
db_uri = f"postgresql://{user}:{password}@{host}/{database}"
db = SQLDatabase(db_uri)  # Error occurs here
```
### Error Message and Stack Trace (if applicable)
```
NoInspectionAvailable: No inspection system is available for object of type <class 'str'>
Traceback:
File "C:\Python311\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 542, in _run_script
exec(code, module.__dict__)
File "C:\opensql\gemSQL.py", line 19, in <module>
db = SQLDatabase(db_uri)
^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\langchain_community\utilities\sql_database.py", line 69, in __init__
self._inspector = inspect(self._engine)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\sqlalchemy\inspection.py", line 71, in inspect
raise exc.NoInspectionAvailable(
```
### Description
I'm encountering a NoInspectionAvailable error when using the SQLDatabase class in LangChain. However, basic SQLAlchemy connections and introspection work correctly.
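For comparison, here is a sketch of usage that matches the constructor's apparent contract (an assumption drawn from the stack trace: `SQLDatabase.__init__` expects a SQLAlchemy `Engine` and calls `inspect()` on whatever it receives, so passing a URI string fails):
```python
from sqlalchemy import create_engine
from langchain_community.utilities import SQLDatabase

engine = create_engine(db_uri)
db = SQLDatabase(engine)           # pass an Engine, not a URI string
# or let the classmethod build the Engine from the URI:
db = SQLDatabase.from_uri(db_uri)
```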
### System Info
OS: Windows 10
Python: 3.11
LangChain: 0.1.12
SQLAlchemy: 1.4.47 | SQLDatabase introspection fails with NoInspectionAvailable | https://api.github.com/repos/langchain-ai/langchain/issues/19264/comments | 2 | 2024-03-19T12:02:31Z | 2024-06-25T16:13:24Z | https://github.com/langchain-ai/langchain/issues/19264 | 2,194,779,898 | 19,264 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Impacted code is https://github.com/langchain-ai/langchain/blob/514fe807784f520449914a64ffc983538fa6743d/libs/community/langchain_community/utilities/sql_database.py#L426
```
elif self.dialect == "trino":
    connection.exec_driver_sql(
        "USE ?",
        (self._schema,),
        execution_options=execution_options,
    )
```
Trino does not accept a bound parameter for the identifier in USE.
The SQL statement sent to Trino should be
`EXECUTE IMMEDIATE 'USE mycatalog';`
instead of
`EXECUTE IMMEDIATE 'USE ?' USING 'mycatalog';`
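A minimal sketch of one possible fix (an assumption, not a tested patch: it interpolates the identifier directly, which is only safe if `self._schema` is a trusted value, since identifiers cannot be bound as parameters):
```python
elif self.dialect == "trino":
    connection.exec_driver_sql(
        f"USE {self._schema}",
        execution_options=execution_options,
    )
```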
### Error Message and Stack Trace (if applicable)
On Trino side, we got
```
io.trino.sql.parser.ParsingException: line 1:24: mismatched input '?'. Expecting: <identifier>
at io.trino.sql.parser.SqlParser.lambda$invokeParser$1(SqlParser.java:183)
at java.base/java.util.Optional.ifPresent(Optional.java:178)
```
### Description
Without patching the Trino dialect handling, Trino cannot be used as a SQL source for LangChain.
### System Info
langchain==0.1.12
langchain-community==0.0.28
langchain-core==0.1.31
langchain-experimental==0.0.54
langchain-openai==0.0.8
langchain-text-splitters==0.0.1 | SQL_database Trino dialect - wrong usage of USE to set catalog, as Trino does not use parameters for identifiers | https://api.github.com/repos/langchain-ai/langchain/issues/19261/comments | 1 | 2024-03-19T10:57:49Z | 2024-07-04T16:08:33Z | https://github.com/langchain-ai/langchain/issues/19261 | 2,194,647,431 | 19,261 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import json
import redis
from fastapi import APIRouter, status
from fastapi.encoders import jsonable_encoder
from fastapi.responses import JSONResponse
from langchain.chains import ConversationalRetrievalChain, ConversationChain
from langchain.callbacks.base import AsyncCallbackHandler
from langchain.callbacks.manager import AsyncCallbackManager
from fastapi.responses import StreamingResponse
from typing import Any, Awaitable, Callable, Iterator, Optional, Union
from langchain.chains.conversational_retrieval.prompts import (
CONDENSE_QUESTION_PROMPT,
QA_PROMPT,
)
from langchain.chains.llm import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain_community.chat_models import AzureChatOpenAI
from langchain_openai import AzureOpenAIEmbeddings
from langchain.memory import ConversationBufferMemory, ConversationBufferWindowMemory
from langchain_community.chat_message_histories import RedisChatMessageHistory
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
)
from langchain.prompts.prompt import PromptTemplate
from langchain_community.vectorstores import Milvus
from pydantic import BaseModel, validator
from typing import Optional
from starlette.types import Send
from genai_openwork_backend.services.coroutine.loop import get_loop
from genai_openwork_backend.app.api.chat_history.views import (
get_recent_chats_rag,
cache_chat
)
from genai_openwork_backend.db.connection import get_connection, aget_connection,release_connection
from genai_openwork_backend.config import config
from datetime import datetime
import re
from enum import Enum as PyEnum
router = APIRouter()
redis_host = config.redis['REDIS_HOST']
redis_port = config.redis['REDIS_PORT']
openai_api_version = config.llm["OPENAI_API_VERSION"]
deployment_name=config.llm['DEPLOYMENT_NAME']
model_name=config.llm['LLM_MODEL']
openai_api_base=config.llm['AZURE_OPENAI_ENDPOINT']
deployment=config.llm['EMBEDDING']
model=config.llm['MODEL']
openai_api_type=config.llm['OPENAI_API_TYPE']
milvus_host = config.vectordb['MILIVS_HOST']
milvus_port = config.vectordb['MILIVUS_PORT']
# No change here - Using the default version
CONDENSE_QUESTION_PROMPT = PromptTemplate(
input_variables=[
"chat_history",
"question",
],
template="Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:",
)
# QA prompt updated
_template = """You are a helpful AI assisstant. The following is a friendly conversation between a human and you.
Use the following documents provided as context to answer the question at the end . If you don't know the answer, just say you don't know. DO NOT try to make up an answer.
If the question is not related to the context, politely respond that you are tuned to only answer questions that are related to the context.
Also, generate three brief follow-up questions that the user would likely ask next. Try not to repeat questions that have already been asked. Only generate questions in next line with a tag 'Next Questions' as a Markdown list.
{question}
=========
{context}
=========
Answer:"""
variables = ["context", "question"]
QA_PROMPT = PromptTemplate(
template=_template,
input_variables=variables,
)
class AllowedChatModels(str, PyEnum):
    value1 = "gpt-3.5"
    value2 = "gpt-4"


class ConversationStyles(str, PyEnum):
    value1 = "precise"
    value2 = "balanced"
    value3 = "creative"


class RagChatRequest(BaseModel):
    """Request model for chat requests.

    Includes the conversation ID and the message from the user.
    """

    user_id: str
    conversation_id: str
    question: str
    collection: Optional[list] = ["all"]
    vectordb_collection: Optional[str] = "openwork"
    chatModel: Optional[AllowedChatModels] = "gpt-3.5"
    conversationStyle: Optional[ConversationStyles] = "precise"


Sender = Callable[[Union[str, bytes]], Awaitable[None]]


class EmptyIterator(Iterator[Union[str, bytes]]):
    def __iter__(self):
        return self

    def __next__(self):
        raise StopIteration


class AsyncStreamCallbackHandler(AsyncCallbackHandler):
    """Callback handler for streaming, inheritance from AsyncCallbackHandler."""

    def __init__(self, send: Sender):
        super().__init__()
        self.send = send

    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        """Rewrite on_llm_new_token to send token to client."""
        await self.send(f"{token}")


class ChatOpenAIStreamingResponse(StreamingResponse):
    """Streaming response for openai chat model, inheritance from StreamingResponse."""

    def __init__(
        self,
        generate: Callable[[Sender], Awaitable[None]],
        request,
        status_code: int = 200,
        media_type: Optional[str] = None,
    ) -> None:
        super().__init__(
            content=EmptyIterator(),
            status_code=status_code,
            media_type=media_type,
        )
        self.generate = generate
        self.request = request
        self.answer = b''

    async def stream_response(self, send: Send) -> None:
        """Rewrite stream_response to send response to client."""
        await send(
            {
                "type": "http.response.start",
                "status": self.status_code,
                "headers": self.raw_headers,
            },
        )

        async def send_chunk(chunk: Union[str, bytes]):
            if not isinstance(chunk, bytes):
                chunk = chunk.encode(self.charset)
            self.answer += chunk
            await send({"type": "http.response.body", "body": chunk, "more_body": True})

        # send body to client
        await self.generate(send_chunk)
        # send empty body to client to close connection
        await send({"type": "http.response.body", "body": b"", "more_body": False})
def transformConversationStyleToTemperature(style: str):
    # Lower temperature for precise answers, higher for creative ones.
    style_temp_obj = {
        "precise": 0.1,
        "balanced": 0.5,
        "creative": 1,
    }
    return style_temp_obj.get(style)
def send_message_tredence_llm(
    query: RagChatRequest,
) -> Callable[[Sender], Awaitable[None]]:
    async def generate(send: Sender):
        temperature = transformConversationStyleToTemperature(query.conversationStyle)
        chat_model = AzureChatOpenAI(
            streaming=True,
            azure_endpoint=openai_api_base,
            deployment_name=deployment_name,
            model_name=model_name,
            openai_api_version=openai_api_version,
            verbose=True,
        )
        chat_model2 = AzureChatOpenAI(
            streaming=False,
            azure_endpoint=openai_api_base,
            deployment_name=deployment_name,
            model_name=model_name,
            openai_api_version=openai_api_version,
            verbose=True,
        )
        chat_model.temperature = temperature
        chat_model2.temperature = temperature
        embeddings = AzureOpenAIEmbeddings(
            deployment=deployment,
            model=str(model),
            azure_endpoint=openai_api_base,
            openai_api_type=openai_api_type,
            openai_api_version=openai_api_version,
        )
        vectorstore = Milvus(
            embeddings,
            collection_name=query.vectordb_collection,
            connection_args={"host": milvus_host, "port": milvus_port},
        )
        chain_input = query.question
        memory = ConversationBufferWindowMemory(k=10, return_messages=True, memory_key="chat_history")
        chat_list = await get_recent_chats_rag(query.conversation_id)
        if len(chat_list):
            for c in chat_list:
                memory.save_context({"input": c["input"]}, {"output": c["output"]})
        # Set up the chain
        question_generator = LLMChain(
            llm=chat_model2,
            prompt=CONDENSE_QUESTION_PROMPT,
        )
        doc_chain = load_qa_chain(
            llm=chat_model,
            chain_type="stuff",
            prompt=QA_PROMPT,
            # callback_manager=AsyncCallbackManager(
            #     [AsyncStreamCallbackHandler(send)],
            # ),
        )
        if len(query.collection) == 1 and query.collection[0] == "all":
            expression = ""
        else:
            expression = f'group in ["{query.collection[0]}"'
            for i in range(1, len(query.collection)):
                expression += f',"{query.collection[i]}"'
            expression += "]"
        print("expression", expression)
        chain = ConversationalRetrievalChain(
            memory=memory,
            combine_docs_chain=doc_chain,
            question_generator=question_generator,
            retriever=vectorstore.as_retriever(
                search_type="similarity", search_kwargs={"k": 4, "expr": f"{expression}"}
            ),
            verbose=True,
            return_source_documents=True,
        )
        history = memory.chat_memory.messages
        print(history)
        await chain.acall(chain_input, callbacks=[AsyncStreamCallbackHandler(send)])

    return generate
@router.post("/rag/stream")
async def stream(request: RagChatRequest):
    return ChatOpenAIStreamingResponse(
        send_message_tredence_llm(request),
        request,
        media_type="text/event-stream",
    )
```
### Error Message and Stack Trace (if applicable)
```
> Entering new ConversationalRetrievalChain chain...
> Finished chain.
2024-03-19 09:40:13.368 | ERROR | trace_id=0 | span_id=0 | uvicorn.protocols.http.httptools_impl:run_asgi:424 - Exception in ASGI application
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/azureuser/.pyenv/versions/3.10.13/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
│ │ └ 4
│ └ 10
└ <function _main at 0x7f0ec497bd00>
File "/home/azureuser/.pyenv/versions/3.10.13/lib/python3.10/multiprocessing/spawn.py", line 129, in _main
return self._bootstrap(parent_sentinel)
│ │ └ 4
│ └ <function BaseProcess._bootstrap at 0x7f0ec4b627a0>
└ <SpawnProcess name='SpawnProcess-7' parent=2267916 started>
File "/home/azureuser/.pyenv/versions/3.10.13/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
│ └ <function BaseProcess.run at 0x7f0ec4b61e10>
└ <SpawnProcess name='SpawnProcess-7' parent=2267916 started>
File "/home/azureuser/.pyenv/versions/3.10.13/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
│ │ │ │ │ └ {'config': <uvicorn.config.Config object at 0x7f0ec4ca2950>, 'target': <bound method Server.run of <uvicorn.server.Server obj...
│ │ │ │ └ <SpawnProcess name='SpawnProcess-7' parent=2267916 started>
│ │ │ └ ()
│ │ └ <SpawnProcess name='SpawnProcess-7' parent=2267916 started>
│ └ <function subprocess_started at 0x7f0ec3fc9ea0>
└ <SpawnProcess name='SpawnProcess-7' parent=2267916 started>
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
target(sockets=sockets)
│ └ [<socket.socket fd=3, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('0.0.0.0', 1785)>]
└ <bound method Server.run of <uvicorn.server.Server object at 0x7f0ec4ca28f0>>
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/uvicorn/server.py", line 60, in run
return asyncio.run(self.serve(sockets=sockets))
│ │ │ │ └ [<socket.socket fd=3, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('0.0.0.0', 1785)>]
│ │ │ └ <function Server.serve at 0x7f0ec3fc9360>
│ │ └ <uvicorn.server.Server object at 0x7f0ec4ca28f0>
│ └ <function run at 0x7f0ec4991360>
└ <module 'asyncio' from '/home/azureuser/.pyenv/versions/3.10.13/lib/python3.10/asyncio/__init__.py'>
File "/home/azureuser/.pyenv/versions/3.10.13/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
│ │ └ <coroutine object Server.serve at 0x7f0ec3eb21f0>
│ └ <method 'run_until_complete' of 'uvloop.loop.Loop' objects>
└ <uvloop.Loop running=True closed=False debug=False>
> File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
result = await app( # type: ignore[func-returns-value]
└ <uvicorn.middleware.proxy_headers.ProxyHeadersMiddleware object at 0x7f0e8dc1cd00>
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
│ │ │ │ └ <bound method RequestResponseCycle.send of <uvicorn.protocols.http.httptools_impl.RequestResponseCycle object at 0x7f0e8dc707...
│ │ │ └ <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.httptools_impl.RequestResponseCycle object at 0x7f0e8dc...
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('127.0.0.1', 1785), 'cl...
│ └ <fastapi.applications.FastAPI object at 0x7f0ea55efb80>
└ <uvicorn.middleware.proxy_headers.ProxyHeadersMiddleware object at 0x7f0e8dc1cd00>
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/fastapi/applications.py", line 270, in __call__
await super().__call__(scope, receive, send)
│ │ └ <bound method RequestResponseCycle.send of <uvicorn.protocols.http.httptools_impl.RequestResponseCycle object at 0x7f0e8dc707...
│ └ <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.httptools_impl.RequestResponseCycle object at 0x7f0e8dc...
└ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('127.0.0.1', 1785), 'cl...
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
│ │ │ │ └ <bound method RequestResponseCycle.send of <uvicorn.protocols.http.httptools_impl.RequestResponseCycle object at 0x7f0e8dc707...
│ │ │ └ <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.httptools_impl.RequestResponseCycle object at 0x7f0e8dc...
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('127.0.0.1', 1785), 'cl...
│ └ <starlette.middleware.errors.ServerErrorMiddleware object at 0x7f0e8dc1df30>
└ <fastapi.applications.FastAPI object at 0x7f0ea55efb80>
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
│ │ │ │ └ <function ServerErrorMiddleware.__call__.<locals>._send at 0x7f0e8dc743a0>
│ │ │ └ <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.httptools_impl.RequestResponseCycle object at 0x7f0e8dc...
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('127.0.0.1', 1785), 'cl...
│ └ <starlette.middleware.exceptions.ExceptionMiddleware object at 0x7f0e8dc1c850>
└ <starlette.middleware.errors.ServerErrorMiddleware object at 0x7f0e8dc1df30>
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
│ │ │ │ └ <function ExceptionMiddleware.__call__.<locals>.sender at 0x7f0e8dc74430>
│ │ │ └ <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.httptools_impl.RequestResponseCycle object at 0x7f0e8dc...
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('127.0.0.1', 1785), 'cl...
│ └ <fastapi.middleware.asyncexitstack.AsyncExitStackMiddleware object at 0x7f0ec1d63bb0>
└ <starlette.middleware.exceptions.ExceptionMiddleware object at 0x7f0e8dc1c850>
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
│ │ │ │ └ <function ExceptionMiddleware.__call__.<locals>.sender at 0x7f0e8dc74430>
│ │ │ └ <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.httptools_impl.RequestResponseCycle object at 0x7f0e8dc...
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('127.0.0.1', 1785), 'cl...
│ └ <fastapi.routing.APIRouter object at 0x7f0ec1dc0be0>
└ <fastapi.middleware.asyncexitstack.AsyncExitStackMiddleware object at 0x7f0ec1d63bb0>
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/starlette/routing.py", line 706, in __call__
await route.handle(scope, receive, send)
│ │ │ │ └ <function ExceptionMiddleware.__call__.<locals>.sender at 0x7f0e8dc74430>
│ │ │ └ <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.httptools_impl.RequestResponseCycle object at 0x7f0e8dc...
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('127.0.0.1', 1785), 'cl...
│ └ <function Route.handle at 0x7f0ec31b5000>
└ APIRoute(path='/api/openwork/rag/stream', name='stream', methods=['POST'])
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
│ │ │ │ └ <function ExceptionMiddleware.__call__.<locals>.sender at 0x7f0e8dc74430>
│ │ │ └ <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.httptools_impl.RequestResponseCycle object at 0x7f0e8dc...
│ │ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('127.0.0.1', 1785), 'cl...
│ └ <function request_response.<locals>.app at 0x7f0e8dc23d00>
└ APIRoute(path='/api/openwork/rag/stream', name='stream', methods=['POST'])
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/starlette/routing.py", line 69, in app
await response(scope, receive, send)
│ │ │ └ <function ExceptionMiddleware.__call__.<locals>.sender at 0x7f0e8dc74430>
│ │ └ <bound method RequestResponseCycle.receive of <uvicorn.protocols.http.httptools_impl.RequestResponseCycle object at 0x7f0e8dc...
│ └ {'type': 'http', 'asgi': {'version': '3.0', 'spec_version': '2.3'}, 'http_version': '1.1', 'server': ('127.0.0.1', 1785), 'cl...
└ <genai_openwork_backend.app.api.openwork.views.ChatOpenAIStreamingResponse object at 0x7f0e8dc70c40>
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/starlette/responses.py", line 266, in __call__
async with anyio.create_task_group() as task_group:
│ │ └ <anyio._backends._asyncio.TaskGroup object at 0x7f0e8dc70b20>
│ └ <function create_task_group at 0x7f0ec3e409d0>
└ <module 'anyio' from '/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/an...
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__
raise exceptions[0]
└ [ValueError("One output key expected, got dict_keys(['answer', 'source_documents'])")]
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/starlette/responses.py", line 269, in wrap
await func()
└ functools.partial(<bound method ChatOpenAIStreamingResponse.stream_response of <genai_openwork_backend.app.api.openwork.views...
File "/home/azureuser/anindya/genai_openwork_backend/genai_openwork_backend/app/api/openwork/views.py", line 233, in stream_response
await self.generate(send_chunk)
│ │ └ <function ChatOpenAIStreamingResponse.stream_response.<locals>.send_chunk at 0x7f0e8dc74a60>
│ └ <function send_message_tredence_llm.<locals>.generate at 0x7f0e8dc74550>
└ <genai_openwork_backend.app.api.openwork.views.ChatOpenAIStreamingResponse object at 0x7f0e8dc70c40>
File "/home/azureuser/anindya/genai_openwork_backend/genai_openwork_backend/app/api/openwork/views.py", line 392, in generate
await chain.acall(chain_input, callbacks=[AsyncStreamCallbackHandler(send)])
│ │ │ │ └ <function ChatOpenAIStreamingResponse.stream_response.<locals>.send_chunk at 0x7f0e8dc74a60>
│ │ │ └ <class 'genai_openwork_backend.app.api.openwork.views.AsyncStreamCallbackHandler'>
│ │ └ 'explain about supply chain tower'
│ └ <function Chain.acall at 0x7f0eb5e0c940>
└ ConversationalRetrievalChain(memory=ConversationBufferWindowMemory(return_messages=True, memory_key='chat_history', k=10), ve...
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 154, in awarning_emitting_wrapper
return await wrapped(*args, **kwargs)
│ │ └ {'callbacks': [<genai_openwork_backend.app.api.openwork.views.AsyncStreamCallbackHandler object at 0x7f0e8db997b0>]}
│ └ (ConversationalRetrievalChain(memory=ConversationBufferWindowMemory(return_messages=True, memory_key='chat_history', k=10), v...
└ <function Chain.acall at 0x7f0eb5e0c430>
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/langchain/chains/base.py", line 413, in acall
return await self.ainvoke(
│ └ <function Chain.ainvoke at 0x7f0eb5ddbb50>
└ ConversationalRetrievalChain(memory=ConversationBufferWindowMemory(return_messages=True, memory_key='chat_history', k=10), ve...
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/langchain/chains/base.py", line 211, in ainvoke
final_outputs: Dict[str, Any] = self.prep_outputs(
│ │ │ └ <function Chain.prep_outputs at 0x7f0eb5e0c160>
│ │ └ ConversationalRetrievalChain(memory=ConversationBufferWindowMemory(return_messages=True, memory_key='chat_history', k=10), ve...
│ └ typing.Any
└ typing.Dict
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/langchain/chains/base.py", line 440, in prep_outputs
self.memory.save_context(inputs, outputs)
│ │ │ │ └ {'answer': 'The article explains that a Supply Chain Control Tower (SCCT) is a cross-departmental, system-integrated “informa...
│ │ │ └ {'question': 'explain about supply chain tower', 'chat_history': []}
│ │ └ <function BaseChatMemory.save_context at 0x7f0eb5c085e0>
│ └ ConversationBufferWindowMemory(return_messages=True, memory_key='chat_history', k=10)
└ ConversationalRetrievalChain(memory=ConversationBufferWindowMemory(return_messages=True, memory_key='chat_history', k=10), ve...
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/langchain/memory/chat_memory.py", line 37, in save_context
input_str, output_str = self._get_input_output(inputs, outputs)
│ │ │ └ {'answer': 'The article explains that a Supply Chain Control Tower (SCCT) is a cross-departmental, system-integrated “informa...
│ │ └ {'question': 'explain about supply chain tower', 'chat_history': []}
│ └ <function BaseChatMemory._get_input_output at 0x7f0eb5c08550>
└ ConversationBufferWindowMemory(return_messages=True, memory_key='chat_history', k=10)
File "/home/azureuser/.pyenv/versions/3.10.13/envs/genai_openwork_backend_ani/lib/python3.10/site-packages/langchain/memory/chat_memory.py", line 29, in _get_input_output
raise ValueError(f"One output key expected, got {outputs.keys()}")
ValueError: One output key expected, got dict_keys(['answer', 'source_documents'])
### Description
I am trying to return the source documents along with the answer as part of the streamed response. I have set return_source_documents=True in the ConversationalRetrievalChain parameters, but the error above is raised. If I comment it out, the answer is streamed without any error. How can I return the source documents in the stream?
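A possible workaround, sketched from the `_get_input_output` check in the traceback above (an assumption on my part, not a confirmed fix): when `return_source_documents=True` the chain returns two output keys, so the memory must be told explicitly which one to save.

```python
from langchain.memory import ConversationBufferWindowMemory

# With return_source_documents=True the chain returns both 'answer' and
# 'source_documents', so the memory can no longer guess the output key.
memory = ConversationBufferWindowMemory(
    memory_key="chat_history",
    return_messages=True,
    k=10,
    output_key="answer",  # resolves "One output key expected"
)
```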
### System Info
langchain==0.1.0
langchain-community==0.0.20
langchain-core==0.1.23
langchain-openai==0.0.5
openinference-instrumentation-langchain==0.1.12
python --> 3.10.13 | Streaming of Source Documents not working in ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/19259/comments | 2 | 2024-03-19T09:44:53Z | 2024-07-04T16:08:28Z | https://github.com/langchain-ai/langchain/issues/19259 | 2,194,484,797 | 19,259 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import AzureChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableLambda, RunnableSequence
from langchain_core.messages import HumanMessage


def langchain_model(prompt_func: callable) -> RunnableSequence:
    model = AzureChatOpenAI(azure_deployment="gpt-35-16k", temperature=0)
    return RunnableLambda(prompt_func) | model | StrOutputParser()


def prompt_func(
    _dict: dict,
) -> list:
    question = _dict.get("question")
    texts = _dict.get("texts")
    text_message = {
        "type": "text",
        "text": (
            "You are a classification system for Procurement Documents. Answer the question solely on the provided Reference texts.\n"
            "If you cant find a answer reply exactly like this: 'Sorry i dont have an answer for youre question'\n"
            "Return the answer as a string in the language the question is written in.\n\n "
            f"User-provided question: \n"
            f"{question} \n\n"
            "Reference texts:\n"
            f"{texts}"
        ),
    }
    return [HumanMessage(content=[text_message])]


model = langchain_model(prompt_func=prompt_func)

for steps, runnable in model:
    try:
        print(runnable[0].dict())
    except:
        print(runnable)
```
### Error Message and Stack Trace (if applicable)
For AzureChatOpenAI as a component, you will just get this output as a dict:
`{'model': 'gpt-3.5-turbo', 'stream': False, 'n': 1, 'temperature': 0.0, '_type': 'azure-openai-chat'}`
### Description
This is not enough if you want to log the model with MLflow: the deployment_name is definitely needed and must be retrievable from the dict. See the related issue: https://github.com/mlflow/mlflow/issues/11439. The expected output would contain more details, at least a deployment name.
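A possible interim workaround (a sketch only, assuming `deployment_name` is exposed as an attribute on the model object rather than in `.dict()`):

```python
from langchain_openai import AzureChatOpenAI

model = AzureChatOpenAI(azure_deployment="gpt-35-16k", temperature=0)

# Augment the serialized params by reading the missing field directly
# off the model object before handing it to MLflow.
params = model.dict()
params["deployment_name"] = model.deployment_name  # a.k.a. azure_deployment
print(params)
```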
### System Info
System Information
------------------
> OS: Darwin
> Python Version: 3.10.11 (v3.10.11:7d4cc5aa85, Apr 4 2023, 19:05:19) [Clang 13.0.0]
Package Information
-------------------
> langchain_core: 0.1.32
> langchain: 0.1.12
> langchain_community: 0.0.28
> langsmith: 0.1.26
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Getting from runnables.dict() in RunnableSequence not all expected variables. | https://api.github.com/repos/langchain-ai/langchain/issues/19255/comments | 3 | 2024-03-19T08:50:44Z | 2024-03-27T05:31:37Z | https://github.com/langchain-ai/langchain/issues/19255 | 2,194,374,025 | 19,255 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
    PromptTemplate,
    SystemMessagePromptTemplate,
)

prompt = ChatPromptTemplate.from_messages(
    [
        _system_message,
        MessagesPlaceholder(variable_name=memory_key, optional=True),
        HumanMessagePromptTemplate.from_template("{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

agent = create_openai_tools_agent(llm, [], prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=[],  # type: ignore
    memory=memory,
    verbose=True,
    return_intermediate_steps=True,
    handle_parsing_errors=True,
)

print(agent_executor.input_keys)  # Returns an empty list
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Since `create_openai_tools_agent` returns a `RunnableSequence` rather than a `BaseSingleActionAgent`, the `input_keys` property of the `AgentExecutor` no longer works for this agent:
```python
@property
def input_keys(self) -> List[str]:
    """Return the input keys.

    :meta private:
    """
    return self.agent.input_keys
```
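A possible interim workaround, sketched under the assumption that the `prompt` object from the example above is in scope: derive the expected inputs from the prompt's own input variables instead of `AgentExecutor.input_keys`.

```python
# Sketch: compute the user-facing input keys from the prompt itself,
# filtering out variables the agent fills in internally.
internal_vars = {"agent_scratchpad", "chat_history"}
expected_inputs = [v for v in prompt.input_variables if v not in internal_vars]
print(expected_inputs)  # e.g. ['input']
```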
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030
> Python Version: 3.9.18 (main, Sep 11 2023, 08:25:10)
[Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.27
> langchain: 0.1.9
> langchain_community: 0.0.24
> langsmith: 0.1.10
> langchain_experimental: 0.0.52
> langchain_google_genai: 0.0.6
> langchain_openai: 0.0.6
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | `create_openai_tools_agent` (RunnableSequence) doesn't return input_keys, breaks `AgentExecutor.input_keys` | https://api.github.com/repos/langchain-ai/langchain/issues/19251/comments | 2 | 2024-03-19T07:12:22Z | 2024-06-28T16:07:38Z | https://github.com/langchain-ai/langchain/issues/19251 | 2,194,205,736 | 19,251 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
There is a spelling mistake in line 32.
### Idea or request for content:
I can fix this mistake.
Please assign this issue to me. | DOC: typo error in https://python.langchain.com/docs/modules/model_io/chat/index.mdx, line 32. | https://api.github.com/repos/langchain-ai/langchain/issues/19247/comments | 0 | 2024-03-19T01:44:49Z | 2024-06-25T16:13:28Z | https://github.com/langchain-ai/langchain/issues/19247 | 2,193,829,820 | 19,247 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Below is the code where I am getting `HTTPError('403 Client Error: Forbidden for url: https://storage.googleapis.com/flash-rank/ms-marco-MultiBERT-L-12.zip')`:

```python
compressor = FlashrankRerank()
compressor_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=base_retriever,
)
compressed_docs = compressor_retriever.get_relevant_documents(query)
return compressed_docs
```
### Error Message and Stack Trace (if applicable)
```
Tool run errored with error:
HTTPError('403 Client Error: Forbidden for url: https://storage.googleapis.com/flash-rank/ms-marco-MultiBERT-L-12.zip')
Traceback (most recent call last):
```
### Description
1. I have developed an LLM app with ChromaDB, and it's working.
2. I am integrating the FlashRank reranker using LangChain, but I am getting the error above (see the sketch below for a possible workaround).
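A minimal sketch of a possible workaround, assuming `FlashrankRerank` accepts a `model` name (both the import path and the model name here are assumptions about the installed versions):

```python
from langchain.retrievers.document_compressors import FlashrankRerank

# Pin a model whose archive is still reachable, instead of the default
# ms-marco-MultiBERT-L-12 zip that currently returns a 403.
compressor = FlashrankRerank(model="ms-marco-MiniLM-L-12-v2", top_n=3)
```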
### System Info
python==3.10.4
langchain==0.1.9
FlashRank==0.1.69 | Not able to utilize flashrank reranker in langchain and getting error "HTTPError('403 Client Error: Forbidden for url: https://storage.googleapis.com/flash-rank/ms-marco-MultiBERT-L-12.zip')" | https://api.github.com/repos/langchain-ai/langchain/issues/19241/comments | 1 | 2024-03-18T19:47:20Z | 2024-06-24T16:14:32Z | https://github.com/langchain-ai/langchain/issues/19241 | 2,193,132,295 | 19,241 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import sqlite3
from langchain_openai import ChatOpenAI
from langchain_community.utilities import SQLDatabase
from langchain_community.agent_toolkits import create_sql_agent

# Make a new test database, create a timestamp table and column, and insert a value
database_path = 'testdb.sqlite'
timestamp_value = 672457193.7343056

create_table_sql = """
CREATE TABLE IF NOT EXISTS timestamps (
    timestamp TIMESTAMP
);
"""
insert_sql = """
INSERT INTO timestamps (timestamp) VALUES (?);
"""

conn = sqlite3.connect(database_path)
cursor = conn.cursor()
cursor.execute(create_table_sql)
cursor.execute(insert_sql, (timestamp_value,))
conn.commit()
conn.close()
print("The table has been created and the value has been inserted successfully.")

db = SQLDatabase.from_uri(f"sqlite:///{database_path}")

prefix = """You are a SQL expert that answers questions about a database. Use the tools below.
"""

agent_executor = create_sql_agent(
    llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0, openai_api_key=openai_api_key),
    db=db,
    agent_type="openai-tools",
    verbose=True,
    prefix=prefix,
)

response = agent_executor.invoke("What is this database used for?")
print(response)
```
### Error Message and Stack Trace (if applicable)
```
> Entering new SQL Agent Executor chain...
Invoking: `sql_db_list_tables` with ``
timestamps
Invoking: `sql_db_schema` with `{'table_names': 'timestamps'}`
responded: The database contains a table named "timestamps." Let me query the schema of this table to understand its structure.
Traceback (most recent call last):
File "/Users/noneofyourbusiness/FICT/AFSTUDEER/testopenaidb/main.py", line 305, in <module>
response = agent_executor.invoke("What is this databse used for?")
File "/Users/noneofyourbusiness/FICT/AFSTUDEER/testopenaidb/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/Users/noneofyourbusiness/FICT/AFSTUDEER/testopenaidb/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/noneofyourbusiness/FICT/AFSTUDEER/testopenaidb/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1432, in _call
next_step_output = self._take_next_step(
File "/Users/noneofyourbusiness/FICT/AFSTUDEER/testopenaidb/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1138, in _take_next_step
[
File "/Users/noneofyourbusiness/FICT/AFSTUDEER/testopenaidb/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1138, in <listcomp>
[
File "/Users/noneofyourbusiness/FICT/AFSTUDEER/testopenaidb/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1223, in _iter_next_step
yield self._perform_agent_action(
File "/Users/noneofyourbusiness/FICT/AFSTUDEER/testopenaidb/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1245, in _perform_agent_action
observation = tool.run(
File "/Users/noneofyourbusiness/FICT/AFSTUDEER/testopenaidb/venv/lib/python3.10/site-packages/langchain_core/tools.py", line 417, in run
raise e
File "/Users/noneofyourbusiness/FICT/AFSTUDEER/testopenaidb/venv/lib/python3.10/site-packages/langchain_core/tools.py", line 376, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/Users/noneofyourbusiness/FICT/AFSTUDEER/testopenaidb/venv/lib/python3.10/site-packages/langchain_community/tools/sql_database/tool.py", line 75, in _run
return self.db.get_table_info_no_throw(
File "/Users/noneofyourbusiness/FICT/AFSTUDEER/testopenaidb/venv/lib/python3.10/site-packages/langchain_community/utilities/sql_database.py", line 532, in get_table_info_no_throw
return self.get_table_info(table_names)
File "/Users/noneofyourbusiness/FICT/AFSTUDEER/testopenaidb/venv/lib/python3.10/site-packages/langchain_community/utilities/sql_database.py", line 352, in get_table_info
table_info += f"\n{self._get_sample_rows(table)}\n"
File "/Users/noneofyourbusiness/FICT/AFSTUDEER/testopenaidb/venv/lib/python3.10/site-packages/langchain_community/utilities/sql_database.py", line 377, in _get_sample_rows
sample_rows = list(
File "/Users/noneofyourbusiness/FICT/AFSTUDEER/testopenaidb/venv/lib/python3.10/site-packages/sqlalchemy/engine/result.py", line 529, in iterrows
make_row(raw_row) if make_row else raw_row
File "lib/sqlalchemy/cyextension/resultproxy.pyx", line 22, in sqlalchemy.cyextension.resultproxy.BaseRow.__init__
File "lib/sqlalchemy/cyextension/resultproxy.pyx", line 79, in sqlalchemy.cyextension.resultproxy._apply_processors
File "lib/sqlalchemy/cyextension/processors.pyx", line 40, in sqlalchemy.cyextension.processors.str_to_datetime
TypeError: fromisoformat: argument must be str
```
### Description
I'm using the LangChain SQL agent to connect an LLM to an existing database. This database has multiple tables, columns, and rows. One of these columns is a timestamp column with the value 672457193.7343056.
Note: I didn't design these databases, but I want to work with them. I've encountered multiple databases that use this formatting.
I have posted this on the SQLAlchemy GitHub as well, and they answered in the following reply: https://github.com/sqlalchemy/sqlalchemy/discussions/9990#discussioncomment-8828699
I have no idea how to fix this issue. I hope I can get some help here.
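For anyone hitting the same crash, a minimal workaround sketch (an assumption: the failure happens only while LangChain fetches sample rows for the table info):

```python
# Skip the sample-row SELECT that triggers SQLAlchemy's str_to_datetime
# converter on the float-valued TIMESTAMP column.
db = SQLDatabase.from_uri(
    f"sqlite:///{database_path}",
    sample_rows_in_table_info=0,
)
```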
### System Info
langchain==0.1.12
langchain-community==0.0.28
langchain-core==0.1.32
langchain-experimental==0.0.32
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
Mac OS
Python 3.10.4
| SQL agent - SQLAlchemy - timestamp formatting issues that make the SQL-agent crash | https://api.github.com/repos/langchain-ai/langchain/issues/19234/comments | 3 | 2024-03-18T15:14:43Z | 2024-06-25T16:13:23Z | https://github.com/langchain-ai/langchain/issues/19234 | 2,192,497,427 | 19,234 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
While using the load_and_split function of AzureAIDocumentIntelligenceLoader with mode="object", I'm getting a validation error on page_content when the content_type is a table.
### My code
```python
loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint=endpoint,
    api_key=key,
    file_path=file_path,
    api_model="prebuilt-layout",
    mode="object",
)
docs = loader.load_and_split()
```
Looking into the classes, here's what I found.
### langchain_community.document_loaders.parsers.doc_intelligence.py
#### AzureAIDocumentIntelligenceParser > line 75 in _generate_docs_object
```python
for table in result.tables:
    yield Document(
        page_content=table.cells,  # json object
        metadata={
            "footnote": table.footnotes,
            "caption": table.caption,
            "page": para.bounding_regions[0].page_number,
            "bounding_box": para.bounding_regions[0].polygon,
            "row_count": table.row_count,
            "column_count": table.column_count,
            "type": "table",
        },
    )
```
A JSON object is passed as the page_content of the Document class, so pydantic throws a validation error.
### langchain_core.documents.base.py
#### Document > line 12 in page_content
```python
class Document(Serializable):
    """Class for storing a piece of text and associated metadata."""

    page_content: str
    """String text."""
    metadata: dict = Field(default_factory=dict)
    """Arbitrary metadata about the page content (e.g., source, relationships to other
    documents, etc.).
    """
```
This shows that only the string type is accepted in the page_content field.
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[22], line 1
----> 1 azure_documents = loader.load_and_split(text_splitter=RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=200))
File d:\Users\RajatKumar.Roy\Anaconda3\envs\llm_exp\lib\site-packages\langchain_core\document_loaders\base.py:59, in BaseLoader.load_and_split(self, text_splitter)
57 else:
58 _text_splitter = text_splitter
---> 59 docs = self.load()
60 return _text_splitter.split_documents(docs)
File d:\Users\RajatKumar.Roy\Anaconda3\envs\llm_exp\lib\site-packages\langchain_core\document_loaders\base.py:29, in BaseLoader.load(self)
27 def load(self) -> List[Document]:
28 """Load data into Document objects."""
---> 29 return list(self.lazy_load())
File d:\Users\RajatKumar.Roy\Anaconda3\envs\llm_exp\lib\site-packages\langchain_community\document_loaders\doc_intelligence.py:86, in AzureAIDocumentIntelligenceLoader.lazy_load(self)
84 if self.file_path is not None:
85 blob = Blob.from_path(self.file_path)
---> 86 yield from self.parser.parse(blob)
87 else:
88 yield from self.parser.parse_url(self.url_path)
File d:\Users\RajatKumar.Roy\Anaconda3\envs\llm_exp\lib\site-packages\langchain_core\document_loaders\base.py:121, in BaseBlobParser.parse(self, blob)
106 def parse(self, blob: Blob) -> List[Document]:
107 """Eagerly parse the blob into a document or documents.
108
109 This is a convenience method for interactive development environment.
(...)
119 List of documents
120 """
--> 121 return list(self.lazy_parse(blob))
File d:\Users\RajatKumar.Roy\Anaconda3\envs\llm_exp\lib\site-packages\langchain_community\document_loaders\parsers\doc_intelligence.py:104, in AzureAIDocumentIntelligenceParser.lazy_parse(self, blob)
102 yield from self._generate_docs_page(result)
103 else:
--> 104 yield from self._generate_docs_object(result)
File d:\Users\RajatKumar.Roy\Anaconda3\envs\llm_exp\lib\site-packages\langchain_community\document_loaders\parsers\doc_intelligence.py:74, in AzureAIDocumentIntelligenceParser._generate_docs_object(self, result)
72 # table
73 for table in result.tables:
---> 74 yield Document(
75 page_content=table.cells, # json object
76 metadata={
77 "footnote": table.footnotes,
78 "caption": table.caption,
79 "page": para.bounding_regions[0].page_number,
80 "bounding_box": para.bounding_regions[0].polygon,
81 "row_count": table.row_count,
82 "column_count": table.column_count,
83 "type": "table",
84 },
85 )
File d:\Users\RajatKumar.Roy\Anaconda3\envs\llm_exp\lib\site-packages\langchain_core\documents\base.py:22, in Document.__init__(self, page_content, **kwargs)
20 def __init__(self, page_content: str, **kwargs: Any) -> None:
21 """Pass page_content in as positional or named arg."""
---> 22 super().__init__(page_content=page_content, **kwargs)
File d:\Users\RajatKumar.Roy\Anaconda3\envs\llm_exp\lib\site-packages\langchain_core\load\serializable.py:120, in Serializable.__init__(self, **kwargs)
119 def __init__(self, **kwargs: Any) -> None:
--> 120 super().__init__(**kwargs)
121 self._lc_kwargs = kwargs
File ~\AppData\Roaming\Python\Python310\site-packages\pydantic\main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for Document
page_content
str type expected (type=type_error.str)
```
### Description
I'm trying to use the load_and_split function of the AzureAIDocumentIntelligenceLoader class to extract paragraphs and tables from a PDF and chunk the content. For this, I've passed mode="object" in the class arguments. During execution, I get the pydantic validation error from the "Document" class shown above.
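Until the parser is fixed, a minimal local workaround sketch, assuming the table object's cells can simply be stringified (the helper below is hypothetical, not part of the library):

```python
from langchain_core.documents import Document

def table_to_document(table, page_number: int) -> Document:
    # Serialize the cells to a string so pydantic's page_content check passes.
    return Document(
        page_content=str(table.cells),
        metadata={
            "row_count": table.row_count,
            "column_count": table.column_count,
            "page": page_number,
            "type": "table",
        },
    )
```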
### System Info
## Langchain version
langchain==0.1.12
langchain-community==0.0.28
## Python version
python=3.10.12 | ValidationError: Pydantic validation error on "page_content" with "object" mode in AzureAIDocumentIntelligenceLoader | https://api.github.com/repos/langchain-ai/langchain/issues/19229/comments | 0 | 2024-03-18T11:19:00Z | 2024-06-24T16:13:49Z | https://github.com/langchain-ai/langchain/issues/19229 | 2,191,926,207 | 19,229 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from llama_index.core import VectorStoreIndex
from llama_index.core import (
    StorageContext,
    load_index_from_storage,
)
from llama_index.core.node_parser import SentenceSplitter
from loader import get_documents
import os
from langchain.text_splitter import RecursiveCharacterTextSplitter
from llama_index.core.node_parser import LangchainNodeParser
from llama_index.core.node_parser import HierarchicalNodeParser


def get_index(source_dir, persist_dir, split_type="sentence", chunk_size=1024):
    if not os.path.exists(persist_dir):
        # load the documents and create the index
        documents = get_documents(source_dir)
        if split_type == "sentence":
            index = VectorStoreIndex.from_documents(
                documents=documents,
                transformations=[SentenceSplitter(chunk_size=chunk_size, chunk_overlap=20)],
                show_progress=False,
            )
```
### Error Message and Stack Trace (if applicable)
File "D:\RAG_benchmark\RAG-benchmark\main.py", line 26, in <module>
index = get_index("D:\RAG_benchmark\data", cfg.persist_dir, split_type=cfg.split_type, chunk_size=cfg.chunk_size)
File "D:\RAG_benchmark\RAG-benchmark\index.py", line 18, in get_index
index = VectorStoreIndex.from_documents(documents=documents,transformations=[SentenceSplitter(chunk_size=chunk_size, chunk_overlap=20)], show_progress=False)
File "C:\Users\hhw\miniconda3\lib\site-packages\llama_index\core\indices\base.py", line 145, in from_documents
return cls(
File "C:\Users\hhw\miniconda3\lib\site-packages\llama_index\core\indices\vector_store\base.py", line 75, in __init__
super().__init__(
File "C:\Users\hhw\miniconda3\lib\site-packages\llama_index\core\indices\base.py", line 94, in __init__
index_struct = self.build_index_from_nodes(
File "C:\Users\hhw\miniconda3\lib\site-packages\llama_index\core\indices\vector_store\base.py", line 308, in build_index_from_nodes
return self._build_index_from_nodes(nodes, **insert_kwargs)
File "C:\Users\hhw\miniconda3\lib\site-packages\llama_index\core\indices\vector_store\base.py", line 280, in _build_index_from_nodes
self._add_nodes_to_index(
File "C:\Users\hhw\miniconda3\lib\site-packages\llama_index\core\indices\vector_store\base.py", line 233, in _add_nodes_to_index
nodes_batch = self._get_node_with_embedding(nodes_batch, show_progress)
File "C:\Users\hhw\miniconda3\lib\site-packages\llama_index\core\indices\vector_store\base.py", line 141, in _get_node_with_embedding
id_to_embed_map = embed_nodes(
File "C:\Users\hhw\miniconda3\lib\site-packages\llama_index\core\indices\utils.py", line 138, in embed_nodes
new_embeddings = embed_model.get_text_embedding_batch(
File "C:\Users\hhw\miniconda3\lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 102, in wrapper
self.span_drop(id=id, err=e)
File "C:\Users\hhw\miniconda3\lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 77, in span_drop
h.span_drop(id, err, **kwargs)
File "C:\Users\hhw\miniconda3\lib\site-packages\llama_index\core\instrumentation\span_handlers\base.py", line 45, in span_drop
self.prepare_to_drop_span(id, err, **kwargs)
File "C:\Users\hhw\miniconda3\lib\site-packages\llama_index\core\instrumentation\span_handlers\null.py", line 33, in prepare_to_drop_span
raise err
File "C:\Users\hhw\miniconda3\lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 100, in wrapper
result = func(*args, **kwargs)
File "C:\Users\hhw\miniconda3\lib\site-packages\llama_index\core\base\embeddings\base.py", line 280, in get_text_embedding_batch
embeddings = self._get_text_embeddings(cur_batch)
File "C:\Users\hhw\miniconda3\lib\site-packages\llama_index\embeddings\langchain\base.py", line 87, in _get_text_embeddings
return self._langchain_embedding.embed_documents(texts)
File "C:\Users\hhw\miniconda3\lib\site-packages\langchain_community\embeddings\huggingface.py", line 93, in embed_documents
embeddings = self.client.encode(
TypeError: sentence_transformers.SentenceTransformer.SentenceTransformer.encode() got multiple values for keyword argument 'show_progress_bar'
```
### Description
It should work correctly, but it hits a problem that I can't fix on my side.
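A possible workaround on the caller's side (an assumption: the duplicate keyword comes from `show_progress_bar` appearing in `encode_kwargs` while the class already forwards its own `show_progress` flag; the model name below is illustrative):

```python
from langchain_community.embeddings import HuggingFaceEmbeddings

emb = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2",
    show_progress=False,
    # Keep 'show_progress_bar' OUT of encode_kwargs to avoid the collision.
    encode_kwargs={"normalize_embeddings": True},
)
```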
### System Info
Windows11 The latest version of langchain | .langchain_community\embeddings\huggingface.py embeddings = self.client.encode( texts, show_progress_bar=self.show_progress, **self.encode_kwargs ) | https://api.github.com/repos/langchain-ai/langchain/issues/19228/comments | 0 | 2024-03-18T11:01:30Z | 2024-06-24T16:13:48Z | https://github.com/langchain-ai/langchain/issues/19228 | 2,191,886,887 | 19,228 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def list_pull_request_files(self, pr_number: int) -> List[Dict[str, Any]]:
    """Fetches the full text of all files in a PR. Truncates after first 3k tokens.
    # TODO: Enhancement to summarize files with ctags if they're getting long.

    Args:
        pr_number(int): The number of the pull request on Github

    Returns:
        dict: A dictionary containing the issue's title,
        body, and comments as a string
    """
    tiktoken = _import_tiktoken()
    MAX_TOKENS_FOR_FILES = 3_000
    pr_files = []
    pr = self.github_repo_instance.get_pull(number=int(pr_number))
    total_tokens = 0
    page = 0
    while True:  # or while (total_tokens + tiktoken()) < MAX_TOKENS_FOR_FILES:
        files_page = pr.get_files().get_page(page)
        if len(files_page) == 0:
            break
        for file in files_page:
            try:
                file_metadata_response = requests.get(file.contents_url)
                if file_metadata_response.status_code == 200:
                    download_url = json.loads(file_metadata_response.text)[
                        "download_url"
                    ]
                else:
                    print(f"Failed to download file: {file.contents_url}, skipping")  # noqa: T201
                    continue

                file_content_response = requests.get(download_url)
                if file_content_response.status_code == 200:
                    # Save the content as a UTF-8 string
                    file_content = file_content_response.text
                else:
                    print(  # noqa: T201
                        "Failed downloading file content "
                        f"(Error {file_content_response.status_code}). Skipping"
                    )
                    continue

                file_tokens = len(
                    tiktoken.get_encoding("cl100k_base").encode(
                        file_content + file.filename + "file_name file_contents"
                    )
                )
                if (total_tokens + file_tokens) < MAX_TOKENS_FOR_FILES:
                    pr_files.append(
                        {
                            "filename": file.filename,
                            "contents": file_content,
                            "additions": file.additions,
                            "deletions": file.deletions,
                        }
                    )
                    total_tokens += file_tokens
            except Exception as e:
                print(f"Error when reading files from a PR on github. {e}")  # noqa: T201
        page += 1
    return pr_files
```
### Error Message and Stack Trace (if applicable)
`Failed to download file <file.contents_url>, skipping`
### Description
I have a LangChain GitHub agent that has to retrieve pull request files from a repository to analyze their content. The GitHub App has all the necessary permissions to do so (the ones listed in the [documentation](https://python.langchain.com/docs/integrations/toolkits/github#create-a-github-app)).
However, when the `GET /repos/{owner}/{repo}/contents/{path}` endpoint is called for each file of a pull request with a GitHub App, the following error appears:
`Failed to download file <file.contents_url>, skipping`.
What I noticed is that this is the only function that needs to call the GitHub API explicitly (through the `requests.get` instruction).
This code is inside the `langchain_community\utilities\github.py` file.
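A sketch of a possible fix (an assumption: the raw `requests.get` calls fail on private repositories because they carry no credentials; `installation_token` below is a hypothetical variable holding the GitHub App installation token):

```python
headers = {
    "Authorization": f"Bearer {installation_token}",
    "Accept": "application/vnd.github+json",
}
file_metadata_response = requests.get(file.contents_url, headers=headers)
```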
### System Info
Windows 10 Home 22H2
Python 3.10.6
PyGithub == 2.2.0
langchain == 0.1.11
langchain-core == 0.1.30
langchain-community == 0.0.27
langchain-openai == 0.0.8 | LangChain GitHub: list_pull_request_files function doesn't work correctly when using a GitHub App | https://api.github.com/repos/langchain-ai/langchain/issues/19222/comments | 0 | 2024-03-18T09:39:35Z | 2024-06-24T16:13:51Z | https://github.com/langchain-ai/langchain/issues/19222 | 2,191,693,323 | 19,222 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def get_language_model(self, streaming: bool = False, temperature: int = 0, callbacks: Callbacks = None) -> BaseLanguageModel:
    """
    Get the language model.

    Args:
        streaming (bool): Use streaming or not.
        temperature (int): Temperature for language model generation.
        callbacks (Callbacks): Callbacks for language model.

    Returns:
        BaseLanguageModel: The language model.
    """
    llm_token_usage_callback = llm_token_usage_callback_var.get()
    callbacks = callbacks or []
    if llm_token_usage_callback and self.include_token_usage_cb:
        callbacks.append(llm_token_usage_callback)
    if logger.is_debug:
        callbacks.append(LLMLogCallbackHandlerAsync())

    if self._chat_model.openai_api_key:
        return ChatOpenAI(openai_api_key=self._chat_model.openai_api_key, temperature=temperature, model=self._chat_model.model, streaming=streaming, callbacks=callbacks)

    if self._chat_model.google_api_key:
        safety_settings = {
            HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
            HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
            HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
            HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
        }
        return ChatGoogleGenerativeAI(
            google_api_key=self._chat_model.google_api_key, temperature=temperature, model=self._chat_model.model,
            streaming=streaming, callbacks=callbacks, convert_system_message_to_human=True, safety_settings=safety_settings
        )

    client = boto3.client(
        'bedrock-runtime',
        region_name=self._chat_model.bedrock_aws_region_name,
        aws_access_key_id=self._chat_model.bedrock_aws_access_key_id,
        aws_secret_access_key=self._chat_model.bedrock_aws_secret_access_key
    )
    return BedrockChat(client=client, model_id=self._chat_model.model, streaming=streaming, callbacks=callbacks, model_kwargs={"temperature": temperature})


async def _get_agent_executor_async(self) -> AgentExecutor:
    """
    Prepare agent using tools and prompt.

    Returns:
        AgentExecutor: AgentExecutor to invoke the Agent.
    """
    self.tools = await self._get_tools()
    llm = LLMSelector(self.model).get_language_model().with_config(RunnableConfig(run_name="Agent"))
    prompt = ChatPromptTemplate.from_messages([
        (EnumChatMessageType.SYSTEM, REACT_AGENT_SYSTEM_TEMPLATE),
        (EnumChatMessageType.HUMAN, REACT_AGENT_USER_TEMPLATE),
    ]).partial(system_context=self.model.system_context or '', human_context=self.model.human_context or '')
    agent = create_react_agent(llm=llm, tools=self.tools, prompt=prompt)  # noqa
    return AgentExecutor(agent=agent, tools=self.tools, handle_parsing_errors=self._handle_parser_exception, max_iterations=3)
```
### Error Message and Stack Trace (if applicable)
The error is raised here, in `BedrockBase._prepare_input_and_invoke_stream`:

```python
def _prepare_input_and_invoke_stream(
    self,
    prompt: Optional[str] = None,
    system: Optional[str] = None,
    messages: Optional[List[Dict]] = None,
    stop: Optional[List[str]] = None,
    run_manager: Optional[CallbackManagerForLLMRun] = None,
    **kwargs: Any,
) -> Iterator[GenerationChunk]:
    _model_kwargs = self.model_kwargs or {}
    provider = self._get_provider()

    if stop:
        if provider not in self.provider_stop_sequence_key_name_map:
            raise ValueError(
                f"Stop sequence key name for {provider} is not supported."
            )

        # stop sequence from _generate() overrides
        # stop sequences in the class attribute
        _model_kwargs[self.provider_stop_sequence_key_name_map.get(provider)] = stop

    if provider == "cohere":
        _model_kwargs["stream"] = True

    params = {**_model_kwargs, **kwargs}

    if self._guardrails_enabled:
        params.update(self._get_guardrails_canonical())

    input_body = LLMInputOutputAdapter.prepare_input(
        provider=provider,
        prompt=prompt,
        system=system,
        messages=messages,
        model_kwargs=params,
    )
    body = json.dumps(input_body)

    request_options = {
        "body": body,
        "modelId": self.model_id,
        "accept": "application/json",
        "contentType": "application/json",
    }

    if self._guardrails_enabled:
        request_options["guardrail"] = "ENABLED"
        if self.guardrails.get("trace"):  # type: ignore[union-attr]
            request_options["trace"] = "ENABLED"

    try:
        response = self.client.invoke_model_with_response_stream(**request_options)
    except Exception as e:
        raise ValueError(f"Error raised by bedrock service: {e}")

    for chunk in LLMInputOutputAdapter.prepare_output_stream(
        provider, response, stop, True if messages else False
    ):
        yield chunk
        # verify and raise callback error if any middleware intervened
        self._get_bedrock_services_signal(chunk.generation_info)  # type: ignore[arg-type]

        if run_manager is not None:
            run_manager.on_llm_new_token(chunk.text, chunk=chunk)
```
### Description
When invoking BedrockChat with a Meta model, an error occurs stating that the stop-sequence key name is not supported. This issue arises because create_react_agent binds the stop sequence ["\nObservation"] to the LLM (llm_with_stop), but the meta provider does not support the use of a stop sequence.
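A possible workaround sketch, assuming the installed version of `create_react_agent` exposes a `stop_sequence` parameter (if it does not, wrapping the LLM to drop `stop` would be the alternative):

```python
# Sketch only: stop_sequence=False skips llm.bind(stop=["\nObservation"]),
# which the Bedrock meta provider rejects.
agent = create_react_agent(
    llm=llm,
    tools=self.tools,
    prompt=prompt,
    stop_sequence=False,
)
```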
### System Info
langchain==0.1.12
langchain-community==0.0.28
langchain-core==0.1.32
langchain-google-genai==0.0.11
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
langchainhub==0.1.15
Platform: Mac
Python 3.11.6
| Stop sequence key name for meta is not supported, For meta-model (exp: meta.llama2-13b-chat-v1) in BedrockChat | https://api.github.com/repos/langchain-ai/langchain/issues/19220/comments | 3 | 2024-03-18T08:33:52Z | 2024-07-19T14:21:53Z | https://github.com/langchain-ai/langchain/issues/19220 | 2,191,540,570 | 19,220 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
#1 import the OS, Bedrock, ConversationChain, ConversationBufferMemory Langchain Modules
import os
from langchain.llms.bedrock import Bedrock
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain


#2a Write a function for invoking model- client connection with Bedrock with profile, model_id & Inference params- model_kwargs
# def demo_chatbot():
def demo_chatbot(input_text):
    demo_llm = Bedrock(
        credentials_profile_name='default',
        model_id='mistral.mixtral-8x7b-instruct-v0:1',
        model_kwargs={
            "temperature": 0.9,
            "top_p": 0.5,
            "max_gen_len": 512})
    # return demo_llm

    #2b Test out the LLM with Predict method
    return demo_llm.predict(input_text)

response = demo_chatbot('what is the temprature in london like ?')
print(response)
```
### Error Message and Stack Trace (if applicable)
```
[C:\Users\leo_c\AppData\Roaming\Python\Python310\site-packages\langchain_core\_api\deprecation.py:117](file:///C:/Users/leo_c/AppData/Roaming/Python/Python310/site-packages/langchain_core/_api/deprecation.py:117): LangChainDeprecationWarning: The function `predict` was deprecated in LangChain 0.1.7 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
```
```
---------------------------------------------------------------------------
ValidationException Traceback (most recent call last)
File ~\AppData\Roaming\Python\Python310\site-packages\langchain_community\llms\bedrock.py:536, in BedrockBase._prepare_input_and_invoke(self, prompt, system, messages, stop, run_manager, **kwargs)
    535 try:
--> 536     response = self.client.invoke_model(**request_options)
    538 text, body = LLMInputOutputAdapter.prepare_output(
    539     provider, response
    540 ).values()

File c:\Users\leo_c\anaconda3\lib\site-packages\botocore\client.py:535, in ClientCreator._create_api_method.<locals>._api_call(self, *args, **kwargs)
    534 # The "self" in this scope is referring to the BaseClient.
--> 535 return self._make_api_call(operation_name, kwargs)

File c:\Users\leo_c\anaconda3\lib\site-packages\botocore\client.py:983, in BaseClient._make_api_call(self, operation_name, api_params)
    982 error_class = self.exceptions.from_code(error_code)
--> 983     raise error_class(parsed_response, operation_name)
    984 else:

ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: Operation not allowed

During handling of the above exception, another exception occurred:

ValueError Traceback (most recent call last)
Cell In[1], line 22
     18 # return demo_llm
     19
     20 #2b Test out the LLM with Predict method
     21 return demo_llm.predict(input_text)
---> 22 response = demo_chatbot('what is the temprature in london like ?')
     23 print(response)

Cell In[1], line 21
     10 demo_llm = Bedrock(
     11     credentials_profile_name='default',
     12     model_id='meta.llama2-70b-chat-v1',
   (...)
     16     "top_p": 0.5,
     17     "max_gen_len": 512})
     18 # return demo_llm
     19
     20 #2b Test out the LLM with Predict method
---> 21 return demo_llm.predict(input_text)

File ~\AppData\Roaming\Python\Python310\site-packages\langchain_core\_api\deprecation.py:145, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    143 warned = True
    144 emit_warning()
--> 145 return wrapped(*args, **kwargs)

File ~\AppData\Roaming\Python\Python310\site-packages\langchain_core\language_models\llms.py:1013, in BaseLLM.predict(self, text, stop, **kwargs)
   1011 else:
   1012     _stop = list(stop)
-> 1013 return self(text, stop=_stop, **kwargs)

File ~\AppData\Roaming\Python\Python310\site-packages\langchain_core\_api\deprecation.py:145, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    143 warned = True
    144 emit_warning()
--> 145 return wrapped(*args, **kwargs)

File ~\AppData\Roaming\Python\Python310\site-packages\langchain_core\language_models\llms.py:972, in BaseLLM.__call__(self, prompt, stop, callbacks, tags, metadata, **kwargs)
    965 if not isinstance(prompt, str):
    966     raise ValueError(
    967         "Argument `prompt` is expected to be a string. Instead found "
    968         f"{type(prompt)}. If you want to run the LLM on multiple prompts, use "
    969         "`generate` instead."
    970     )
    971 return (
--> 972     self.generate(
    973         [prompt],
    974         stop=stop,
    975         callbacks=callbacks,
    976         tags=tags,
    977         metadata=metadata,
    978         **kwargs,
    979     )
    980     .generations[0][0]
    981     .text
    982 )

File ~\AppData\Roaming\Python\Python310\site-packages\langchain_core\language_models\llms.py:714, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs)
    698 raise ValueError(
    699     "Asked to cache, but no cache found at `langchain.cache`."
    700 )
    701 run_managers = [
    702     callback_manager.on_llm_start(
    703         dumpd(self),
   (...)
    712     )
    713 ]
--> 714 output = self._generate_helper(
    715     prompts, stop, run_managers, bool(new_arg_supported), **kwargs
    716 )
    717 return output
    718 if len(missing_prompts) > 0:

File ~\AppData\Roaming\Python\Python310\site-packages\langchain_core\language_models\llms.py:578, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    576 for run_manager in run_managers:
    577     run_manager.on_llm_error(e, response=LLMResult(generations=[]))
--> 578 raise e
    579 flattened_outputs = output.flatten()
    580 for manager, flattened_output in zip(run_managers, flattened_outputs):

File ~\AppData\Roaming\Python\Python310\site-packages\langchain_core\language_models\llms.py:565, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    555 def _generate_helper(
    556     self,
    557     prompts: List[str],
   (...)
    561     **kwargs: Any,
    562 ) -> LLMResult:
    563     try:
    564         output = (
--> 565     self._generate(
    566         prompts,
    567         stop=stop,
    568         # TODO: support multiple run managers
    569         run_manager=run_managers[0] if run_managers else None,
    570         **kwargs,
    571     )
    572     if new_arg_supported
    573     else self._generate(prompts, stop=stop)
    574 )
    575 except BaseException as e:
    576 for run_manager in run_managers:
File [~\AppData\Roaming\Python\Python310\site-packages\langchain_core\language_models\llms.py:1153](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_core/language_models/llms.py:1153), in LLM._generate(self, prompts, stop, run_manager, **kwargs)
[1150](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_core/language_models/llms.py:1150) new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
[1151](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_core/language_models/llms.py:1151) for prompt in prompts:
[1152](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_core/language_models/llms.py:1152) text = (
-> [1153](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_core/language_models/llms.py:1153) self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
[1154](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_core/language_models/llms.py:1154) if new_arg_supported
[1155](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_core/language_models/llms.py:1155) else self._call(prompt, stop=stop, **kwargs)
[1156](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_core/language_models/llms.py:1156) )
[1157](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_core/language_models/llms.py:1157) generations.append([Generation(text=text)])
[1158](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_core/language_models/llms.py:1158) return LLMResult(generations=generations)
File [~\AppData\Roaming\Python\Python310\site-packages\langchain_community\llms\bedrock.py:831](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_community/llms/bedrock.py:831), in Bedrock._call(self, prompt, stop, run_manager, **kwargs)
[828](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_community/llms/bedrock.py:828) completion += chunk.text
[829](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_community/llms/bedrock.py:829) return completion
--> [831](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_community/llms/bedrock.py:831) return self._prepare_input_and_invoke(
[832](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_community/llms/bedrock.py:832) prompt=prompt, stop=stop, run_manager=run_manager, **kwargs
[833](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_community/llms/bedrock.py:833) )
File [~\AppData\Roaming\Python\Python310\site-packages\langchain_community\llms\bedrock.py:543](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_community/llms/bedrock.py:543), in BedrockBase._prepare_input_and_invoke(self, prompt, system, messages, stop, run_manager, **kwargs)
[538](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_community/llms/bedrock.py:538) text, body = LLMInputOutputAdapter.prepare_output(
[539](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_community/llms/bedrock.py:539) provider, response
[540](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_community/llms/bedrock.py:540) ).values()
[542](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_community/llms/bedrock.py:542) except Exception as e:
--> [543](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_community/llms/bedrock.py:543) raise ValueError(f"Error raised by bedrock service: {e}")
[545](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_community/llms/bedrock.py:545) if stop is not None:
[546](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/leo_c/OneDrive/Desktop/Openmesh/Pythia/Python%20scripts/~/AppData/Roaming/Python/Python310/site-packages/langchain_community/llms/bedrock.py:546) text = enforce_stop_tokens(text, stop)
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Operation not allowed
```
### Description
My script above is fairly basic, but it still produces the `ValidationException: Operation not allowed` error shown above.
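For what it's worth, one way to isolate the failure is to call the Bedrock runtime directly with boto3 and check whether the same `ValidationException` comes back from AWS itself. This is only a hypothetical diagnostic sketch - the region, model id, and request body below are assumptions; substitute whatever your `Bedrock` LLM is actually configured with:

```python
import json

import boto3

# Assumed region and model id -- replace with your own configuration.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Anthropic's legacy completion request format (an assumption about the model).
body = json.dumps(
    {
        "prompt": "\n\nHuman: Hello\n\nAssistant:",
        "max_tokens_to_sample": 64,
    }
)

response = client.invoke_model(modelId="anthropic.claude-v2", body=body)
print(json.loads(response["body"].read()))
```

If this raises the same `ValidationException: Operation not allowed`, the problem is on the AWS side (model access, or an operation the account/model does not permit) rather than in LangChain.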
### System Info
Output of `pip freeze`:
```
WARNING: Ignoring invalid distribution -orch (c:\users\leo_c\anaconda3\lib\site-packages)
accelerate==0.23.0
aiohttp==3.8.4
aiosignal==1.3.1
alabaster @ file:///home/ktietz/src/ci/alabaster_1611921544520/work
altair==5.1.2
anaconda-client==1.11.2
anaconda-navigator==2.4.1
anaconda-project @ file:///C:/Windows/TEMP/abs_91fu4tfkih/croots/recipe/anaconda-project_1660339890874/work
ansible==9.1.0
ansible-core==2.16.2
anthropic==0.2.10
anyio==3.7.1
appdirs==1.4.4
argon2-cffi==23.1.0
argon2-cffi-bindings @ file:///C:/ci/argon2-cffi-bindings_1644569876605/work
arrow @ file:///C:/b/abs_cal7u12ktb/croot/arrow_1676588147908/work
ascii-magic==2.3.0
astroid @ file:///C:/b/abs_d4lg3_taxn/croot/astroid_1676904351456/work
astropy @ file:///C:/ci/astropy_1657719642921/work
asttokens @ file:///opt/conda/conda-bld/asttokens_1646925590279/work
async-timeout==4.0.2
atomicwrites==1.4.0
attrs @ file:///C:/b/abs_09s3y775ra/croot/attrs_1668696195628/work
Authlib==1.2.1
auto-gptq==0.4.2+cu118
Automat @ file:///tmp/build/80754af9/automat_1600298431173/work
autopep8 @ file:///opt/conda/conda-bld/autopep8_1650463822033/work
azure-cognitiveservices-speech==1.32.1
Babel @ file:///C:/b/abs_a2shv_3tqi/croot/babel_1671782804377/work
backcall @ file:///home/ktietz/src/ci/backcall_1611930011877/work
backports.functools-lru-cache @ file:///tmp/build/80754af9/backports.functools_lru_cache_1618170165463/work
backports.tempfile @ file:///home/linux1/recipes/ci/backports.tempfile_1610991236607/work
backports.weakref==1.0.post1
bcrypt==4.0.1
beautifulsoup4 @ file:///home/conda/feedstock_root/build_artifacts/beautifulsoup4_1680888073205/work
binaryornot @ file:///tmp/build/80754af9/binaryornot_1617751525010/work
black @ file:///C:/ci/black_1660221726201/work
bleach @ file:///opt/conda/conda-bld/bleach_1641577558959/work
blinker==1.6.3
blis==0.7.9
bokeh @ file:///C:/Windows/TEMP/abs_4a259bc2-ed05-4a1f-808e-ac712cc0900cddqp8sp7/croots/recipe/bokeh_1658136660686/work
boltons @ file:///C:/b/abs_707eo7c09t/croot/boltons_1677628723117/work
boto3==1.28.65
botocore==1.31.85
Bottleneck @ file:///C:/Windows/Temp/abs_3198ca53-903d-42fd-87b4-03e6d03a8381yfwsuve8/croots/recipe/bottleneck_1657175565403/work
brotlipy==0.7.0
bs4==0.0.1
cachelib==0.12.0
cachetools==5.3.1
catalogue==2.0.8
certifi==2023.7.22
cffi @ file:///C:/b/abs_49n3v2hyhr/croot/cffi_1670423218144/work
chardet @ file:///C:/ci_310/chardet_1642114080098/work
charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work
click==8.1.7
cloudpickle @ file:///tmp/build/80754af9/cloudpickle_1632508026186/work
clyent==1.2.2
colorama @ file:///C:/b/abs_a9ozq0l032/croot/colorama_1672387194846/work
colorcet @ file:///C:/b/abs_46vyu0rpdl/croot/colorcet_1668084513237/work
coloredlogs==15.0.1
comm @ file:///C:/b/abs_1419earm7u/croot/comm_1671231131638/work
conda==23.3.1
conda-build==3.24.0
conda-content-trust @ file:///C:/Windows/TEMP/abs_4589313d-fc62-4ccc-81c0-b801b4449e833j1ajrwu/croots/recipe/conda-content-trust_1658126379362/work
conda-pack @ file:///tmp/build/80754af9/conda-pack_1611163042455/work
conda-package-handling @ file:///C:/b/abs_fcga8w0uem/croot/conda-package-handling_1672865024290/work
conda-repo-cli==1.0.41
conda-token @ file:///Users/paulyim/miniconda3/envs/c3i/conda-bld/conda-token_1662660369760/work
conda-verify==3.4.2
conda_package_streaming @ file:///C:/b/abs_0e5n5hdal3/croot/conda-package-streaming_1670508162902/work
confection==0.1.0
constantly==15.1.0
contourpy @ file:///C:/b/abs_d5rpy288vc/croots/recipe/contourpy_1663827418189/work
cookiecutter @ file:///opt/conda/conda-bld/cookiecutter_1649151442564/work
cryptography==41.0.7
cssselect @ file:///home/conda/feedstock_root/build_artifacts/cssselect_1666980406338/work
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
cymem==2.0.7
cytoolz @ file:///C:/b/abs_61m9vzb4qh/croot/cytoolz_1667465938275/work
daal4py==2023.0.2
dask @ file:///C:/ci/dask-core_1658497112560/work
dataclasses-json==0.5.9
datasets==2.14.5
datashader @ file:///C:/b/abs_e80f3d7ac0/croot/datashader_1676023254070/work
datashape==0.5.4
dateparser==1.1.8
debugpy @ file:///C:/ci_310/debugpy_1642079916595/work
decorator @ file:///opt/conda/conda-bld/decorator_1643638310831/work
defusedxml @ file:///tmp/build/80754af9/defusedxml_1615228127516/work
Deprecated==1.2.14
diff-match-patch @ file:///Users/ktietz/demo/mc3/conda-bld/diff-match-patch_1630511840874/work
dill==0.3.7
distlib==0.3.8
distributed @ file:///C:/ci/distributed_1658523963030/work
dnspython==2.3.0
docker==6.1.3
docstring-to-markdown @ file:///C:/b/abs_cf10j8nr4q/croot/docstring-to-markdown_1673447652942/work
docutils @ file:///C:/Windows/TEMP/abs_24e5e278-4d1c-47eb-97b9-f761d871f482dy2vg450/croots/recipe/docutils_1657175444608/work
elastic-transport==8.4.0
elasticsearch==8.8.2
email-validator==2.1.0.post1
entrypoints @ file:///C:/ci/entrypoints_1649926676279/work
et-xmlfile==1.1.0
exceptiongroup==1.1.2
executing @ file:///opt/conda/conda-bld/executing_1646925071911/work
faiss-cpu==1.7.4
fake-useragent==1.1.3
fastapi==0.103.2
fastcore==1.5.29
fastjsonschema @ file:///C:/Users/BUILDE~1/AppData/Local/Temp/abs_ebruxzvd08/croots/recipe/python-fastjsonschema_1661376484940/work
ffmpeg-python==0.2.0
filelock==3.13.1
flake8 @ file:///C:/b/abs_9f6_n1jlpc/croot/flake8_1674581816810/work
Flask @ file:///C:/b/abs_ef16l83sif/croot/flask_1671217367534/work
Flask-Session==0.6.0
flit_core @ file:///opt/conda/conda-bld/flit-core_1644941570762/work/source/flit_core
fonttools==4.25.0
forbiddenfruit==0.1.4
frozenlist==1.3.3
fsspec==2023.6.0
future @ file:///C:/b/abs_3dcibf18zi/croot/future_1677599891380/work
gensim @ file:///C:/b/abs_a5vat69tv8/croot/gensim_1674853640591/work
gevent==23.9.1
gitdb==4.0.10
GitPython==3.1.40
glob2 @ file:///home/linux1/recipes/ci/glob2_1610991677669/work
google-api-core==2.11.1
google-api-python-client==2.70.0
google-auth==2.21.0
google-auth-httplib2==0.1.0
googleapis-common-protos==1.59.1
greenlet==2.0.2
grpcio==1.56.0
grpcio-tools==1.56.0
h11==0.14.0
h2==4.1.0
h5py @ file:///C:/ci/h5py_1659089830381/work
hagrid==0.3.97
HeapDict @ file:///Users/ktietz/demo/mc3/conda-bld/heapdict_1630598515714/work
holoviews @ file:///C:/b/abs_bbf97_0kcd/croot/holoviews_1676372911083/work
hpack==4.0.0
httpcore==0.17.3
httplib2==0.22.0
httptools==0.6.1
httpx==0.24.1
huggingface-hub==0.20.3
humanfriendly==10.0
hvplot @ file:///C:/b/abs_13un17_4x_/croot/hvplot_1670508919193/work
hyperframe==6.0.1
hyperlink @ file:///tmp/build/80754af9/hyperlink_1610130746837/work
idna @ file:///C:/b/abs_bdhbebrioa/croot/idna_1666125572046/work
imagecodecs @ file:///C:/b/abs_f0cr12h73p/croot/imagecodecs_1677576746499/work
imageio @ file:///C:/b/abs_27kq2gy1us/croot/imageio_1677879918708/work
imagesize @ file:///C:/Windows/TEMP/abs_3cecd249-3fc4-4bfc-b80b-bb227b0d701en12vqzot/croots/recipe/imagesize_1657179501304/work
imbalanced-learn @ file:///C:/b/abs_1911ryuksz/croot/imbalanced-learn_1677191585237/work
importlib-metadata==6.8.0
incremental @ file:///tmp/build/80754af9/incremental_1636629750599/work
inflection==0.5.1
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
intake @ file:///C:/b/abs_42yyb2lhwx/croot/intake_1676619887779/work
intervaltree @ file:///Users/ktietz/demo/mc3/conda-bld/intervaltree_1630511889664/work
ipykernel @ file:///C:/b/abs_b4f07tbsyd/croot/ipykernel_1672767104060/work
ipython @ file:///C:/b/abs_d3h279dv3h/croot/ipython_1676582236558/work
ipython-genutils @ file:///tmp/build/80754af9/ipython_genutils_1606773439826/work
ipywidgets==7.7.2
isort @ file:///tmp/build/80754af9/isort_1628603791788/work
itables==1.6.2
itemadapter @ file:///tmp/build/80754af9/itemadapter_1626442940632/work
itemloaders @ file:///opt/conda/conda-bld/itemloaders_1646805235997/work
itsdangerous @ file:///tmp/build/80754af9/itsdangerous_1621432558163/work
janus==1.0.0
jaraco.context==4.3.0
jax==0.4.20
jaxlib==0.4.20
jedi @ file:///C:/ci/jedi_1644315428305/work
jellyfish @ file:///C:/ci/jellyfish_1647962737334/work
Jinja2 @ file:///C:/b/abs_7cdis66kl9/croot/jinja2_1666908141852/work
jinja2-time @ file:///opt/conda/conda-bld/jinja2-time_1649251842261/work
jmespath @ file:///Users/ktietz/demo/mc3/conda-bld/jmespath_1630583964805/work
joblib @ file:///C:/b/abs_e60_bwl1v6/croot/joblib_1666298845728/work
json5 @ file:///tmp/build/80754af9/json5_1624432770122/work
jsonpatch==1.33
jsonpointer==2.1
jsonschema @ file:///C:/b/abs_6ccs97j_l8/croot/jsonschema_1676558690963/work
jupyter @ file:///C:/Windows/TEMP/abs_56xfdi__li/croots/recipe/jupyter_1659349053177/work
jupyter-console @ file:///C:/b/abs_68ttzd5p9c/croot/jupyter_console_1677674667636/work
jupyter-server @ file:///C:/b/abs_1cfi3__jl8/croot/jupyter_server_1671707636383/work
jupyter_client @ file:///C:/ci/jupyter_client_1661834530766/work
jupyter_core @ file:///C:/b/abs_bd7elvu3w2/croot/jupyter_core_1676538600510/work
jupyterlab @ file:///C:/b/abs_513jt6yy74/croot/jupyterlab_1675354138043/work
jupyterlab-pygments @ file:///tmp/build/80754af9/jupyterlab_pygments_1601490720602/work
jupyterlab-widgets @ file:///tmp/build/80754af9/jupyterlab_widgets_1609884341231/work
jupyterlab_server @ file:///C:/b/abs_d1z_g1swc8/croot/jupyterlab_server_1677153204814/work
jupytext==1.15.2
keyring @ file:///C:/ci_310/keyring_1642165564669/work
kiwisolver @ file:///C:/b/abs_88mdhvtahm/croot/kiwisolver_1672387921783/work
langchain==0.1.12
langchain-community==0.0.28
langchain-core==0.1.32
langchain-text-splitters==0.0.1
langchainplus-sdk==0.0.20
langcodes==3.3.0
langsmith==0.1.27
lazy-object-proxy @ file:///C:/ci_310/lazy-object-proxy_1642083437654/work
libarchive-c @ file:///tmp/build/80754af9/python-libarchive-c_1617780486945/work
llvmlite==0.39.1
locket @ file:///C:/ci/locket_1652904090946/work
loguru==0.7.2
lxml @ file:///C:/ci/lxml_1657527492694/work
lz4 @ file:///C:/ci_310/lz4_1643300078932/work
manifest-ml==0.0.1
Markdown @ file:///C:/b/abs_98lv_ucina/croot/markdown_1671541919225/work
markdown-it-py==3.0.0
MarkupSafe @ file:///C:/ci/markupsafe_1654508036328/work
marshmallow==3.19.0
marshmallow-enum==1.5.1
matplotlib==3.8.0
matplotlib-inline @ file:///C:/ci/matplotlib-inline_1661934094726/work
mccabe @ file:///opt/conda/conda-bld/mccabe_1644221741721/work
mdit-py-plugins==0.4.0
mdurl==0.1.2
menuinst @ file:///C:/Users/BUILDE~1/AppData/Local/Temp/abs_455sf5o0ct/croots/recipe/menuinst_1661805970842/work
miniaudio==1.59
mistune @ file:///C:/ci_310/mistune_1642084168466/work
mkl-fft==1.3.1
mkl-random @ file:///C:/ci_310/mkl_random_1643050563308/work
mkl-service==2.4.0
ml-dtypes==0.3.1
mock @ file:///tmp/build/80754af9/mock_1607622725907/work
more-itertools==9.1.0
mpmath==1.2.1
msgpack @ file:///C:/ci/msgpack-python_1652348582618/work
multidict==6.0.4
multipledispatch @ file:///C:/ci_310/multipledispatch_1642084438481/work
multiprocess==0.70.15
munkres==1.1.4
murmurhash==1.0.9
mypy-extensions==0.4.3
names==0.3.0
navigator-updater==0.3.0
nbclassic @ file:///C:/b/abs_d0_ze5q0j2/croot/nbclassic_1676902914817/work
nbclient @ file:///C:/ci/nbclient_1650308592199/work
nbconvert @ file:///C:/b/abs_4av3q4okro/croot/nbconvert_1668450658054/work
nbformat @ file:///C:/b/abs_85_3g7dkt4/croot/nbformat_1670352343720/work
nest-asyncio @ file:///C:/b/abs_3a_4jsjlqu/croot/nest-asyncio_1672387322800/work
networkx==2.8
nltk==3.8.1
notebook @ file:///C:/b/abs_ca13hqvuzw/croot/notebook_1668179888546/work
notebook_shim @ file:///C:/b/abs_ebfczttg6x/croot/notebook-shim_1668160590914/work
numba @ file:///C:/b/abs_e53pp2e4k7/croot/numba_1670258349527/work
numexpr @ file:///C:/b/abs_a7kbak88hk/croot/numexpr_1668713882979/work
numpy @ file:///C:/b/abs_datssh7cer/croot/numpy_and_numpy_base_1672336199388/work
numpydoc @ file:///C:/b/abs_cfdd4zxbga/croot/numpydoc_1668085912100/work
opacus==1.4.0
openai==0.27.10
openapi-schema-pydantic==1.2.4
openpyxl==3.0.10
opentelemetry-api==1.20.0
opentelemetry-sdk==1.20.0
opentelemetry-semantic-conventions==0.41b0
opt-einsum==3.3.0
optimum==1.13.2
orjson==3.9.15
outcome==1.2.0
packaging==23.2
pandas==2.1.4
pandocfilters @ file:///opt/conda/conda-bld/pandocfilters_1643405455980/work
panel @ file:///C:/b/abs_55ujq2fpyh/croot/panel_1676379705003/work
param @ file:///C:/b/abs_d799n8xz_7/croot/param_1671697759755/work
paramiko @ file:///opt/conda/conda-bld/paramiko_1640109032755/work
parse==1.19.1
parsel @ file:///C:/ci/parsel_1646722035970/work
parso @ file:///opt/conda/conda-bld/parso_1641458642106/work
partd @ file:///opt/conda/conda-bld/partd_1647245470509/work
pathlib @ file:///Users/ktietz/demo/mc3/conda-bld/pathlib_1629713961906/work
pathspec @ file:///C:/b/abs_9cu5_2yb3i/croot/pathspec_1674681579249/work
pathy==0.10.2
patsy==0.5.3
peft==0.5.0
pep8==1.7.1
pexpect @ file:///tmp/build/80754af9/pexpect_1605563209008/work
pickleshare @ file:///tmp/build/80754af9/pickleshare_1606932040724/work
Pillow==9.4.0
pinecone-client==2.2.2
pkginfo @ file:///C:/b/abs_d18srtr68x/croot/pkginfo_1679431192239/work
platformdirs==4.1.0
plotly @ file:///C:/ci/plotly_1658160673416/work
pluggy @ file:///C:/ci/pluggy_1648042746254/work
ply==3.11
pooch @ file:///tmp/build/80754af9/pooch_1623324770023/work
-e git+https://github.com/alessandriniluca/postget.git@da51db8edbfe065062899b0bfee577d66be0c1e2#egg=postget
poyo @ file:///tmp/build/80754af9/poyo_1617751526755/work
praw==7.7.1
prawcore==2.3.0
preshed==3.0.8
prometheus-client @ file:///C:/Windows/TEMP/abs_ab9nx8qb08/croots/recipe/prometheus_client_1659455104602/work
prompt-toolkit @ file:///C:/b/abs_6coz5_9f2s/croot/prompt-toolkit_1672387908312/work
Protego @ file:///tmp/build/80754af9/protego_1598657180827/work
protobuf==4.23.3
psutil==5.9.6
psycopg2-binary==2.9.9
ptyprocess @ file:///tmp/build/80754af9/ptyprocess_1609355006118/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
pure-eval @ file:///opt/conda/conda-bld/pure_eval_1646925070566/work
py @ file:///opt/conda/conda-bld/py_1644396412707/work
pyaes==1.6.1
pyarrow==14.0.1
pyasn1 @ file:///Users/ktietz/demo/mc3/conda-bld/pyasn1_1629708007385/work
pyasn1-modules==0.2.8
pycapnp==1.3.0
pycodestyle @ file:///C:/b/abs_d77nxvklcq/croot/pycodestyle_1674267231034/work
pycosat @ file:///C:/b/abs_4b1rrw8pn9/croot/pycosat_1666807711599/work
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
pycryptodome==3.18.0
pyct @ file:///C:/b/abs_92z17k7ig2/croot/pyct_1675450330889/work
pycurl==7.45.1
pydantic==1.10.13
pydeck==0.8.1b0
PyDispatcher==2.0.5
pydocstyle @ file:///C:/b/abs_6dz687_5i3/croot/pydocstyle_1675221688656/work
pydub==0.25.1
pyee==8.2.2
pyerfa @ file:///C:/ci_310/pyerfa_1642088497201/work
pyflakes @ file:///C:/b/abs_6dve6e13zh/croot/pyflakes_1674165143327/work
Pygments==2.16.1
PyHamcrest @ file:///tmp/build/80754af9/pyhamcrest_1615748656804/work
PyJWT @ file:///C:/ci/pyjwt_1657529477795/work
pylint @ file:///C:/b/abs_83sq99jc8i/croot/pylint_1676919922167/work
pylint-venv @ file:///C:/b/abs_bf0lepsbij/croot/pylint-venv_1673990138593/work
pyls-spyder==0.4.0
pymongo==4.6.1
PyNaCl @ file:///C:/Windows/Temp/abs_d5c3ajcm87/croots/recipe/pynacl_1659620667490/work
pyodbc @ file:///C:/Windows/Temp/abs_61e3jz3u05/croots/recipe/pyodbc_1659513801402/work
pyOpenSSL==23.3.0
pyparsing @ file:///C:/Users/BUILDE~1/AppData/Local/Temp/abs_7f_7lba6rl/croots/recipe/pyparsing_1661452540662/work
pyppeteer==1.0.2
PyQt5==5.15.7
PyQt5-sip @ file:///C:/Windows/Temp/abs_d7gmd2jg8i/croots/recipe/pyqt-split_1659273064801/work/pyqt_sip
PyQtWebEngine==5.15.4
pyquery==2.0.0
pyreadline3==3.4.1
pyrsistent @ file:///C:/ci_310/pyrsistent_1642117077485/work
PySocks @ file:///C:/ci_310/pysocks_1642089375450/work
pytest==7.1.2
pytest-base-url==2.0.0
python-binance==1.0.17
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
python-dotenv==1.0.0
python-lsp-black @ file:///C:/Users/BUILDE~1/AppData/Local/Temp/abs_dddk9lhpp1/croots/recipe/python-lsp-black_1661852041405/work
python-lsp-jsonrpc==1.0.0
python-lsp-server @ file:///C:/b/abs_e44khh1wya/croot/python-lsp-server_1677296772730/work
python-multipart==0.0.6
python-slugify @ file:///home/conda/feedstock_root/build_artifacts/python-slugify-split_1694282063120/work
python-snappy @ file:///C:/b/abs_61b1fmzxcn/croot/python-snappy_1670943932513/work
pytoolconfig @ file:///C:/b/abs_18sf9z_iwl/croot/pytoolconfig_1676315065270/work
pytz @ file:///C:/b/abs_22fofvpn1x/croot/pytz_1671698059864/work
pyviz-comms @ file:///tmp/build/80754af9/pyviz_comms_1623747165329/work
PyWavelets @ file:///C:/b/abs_a8r4b1511a/croot/pywavelets_1670425185881/work
pywin32==305.1
pywin32-ctypes @ file:///C:/ci_310/pywin32-ctypes_1642657835512/work
pywinpty @ file:///C:/b/abs_73vshmevwq/croot/pywinpty_1677609966356/work/target/wheels/pywinpty-2.0.10-cp310-none-win_amd64.whl
PyYAML==6.0.1
pyzmq==25.1.1
QDarkStyle @ file:///tmp/build/80754af9/qdarkstyle_1617386714626/work
qdrant-client==0.11.10
qstylizer @ file:///C:/b/abs_ef86cgllby/croot/qstylizer_1674008538857/work/dist/qstylizer-0.2.2-py2.py3-none-any.whl
QtAwesome @ file:///C:/b/abs_c5evilj98g/croot/qtawesome_1674008690220/work
qtconsole @ file:///C:/b/abs_5bap7f8n0t/croot/qtconsole_1674008444833/work
QtPy @ file:///C:/ci/qtpy_1662015130233/work
queuelib==1.5.0
redis==4.6.0
regex @ file:///C:/ci/regex_1658258299320/work
replicate==0.22.0
requests==2.31.0
requests-file @ file:///Users/ktietz/demo/mc3/conda-bld/requests-file_1629455781986/work
requests-html==0.10.0
requests-toolbelt @ file:///Users/ktietz/demo/mc3/conda-bld/requests-toolbelt_1629456163440/work
resolvelib==1.0.1
RestrictedPython==7.0
result==0.10.0
rich==13.6.0
river==0.21.0
rope @ file:///C:/b/abs_55g_tm_6ff/croot/rope_1676675029164/work
rouge==1.0.1
rsa==4.9
Rtree @ file:///C:/b/abs_e116ltblik/croot/rtree_1675157871717/work
ruamel-yaml-conda @ file:///C:/b/abs_6ejaexx82s/croot/ruamel_yaml_1667489767827/work
ruamel.yaml @ file:///C:/b/abs_30ee5qbthd/croot/ruamel.yaml_1666304562000/work
ruamel.yaml.clib @ file:///C:/b/abs_aarblxbilo/croot/ruamel.yaml.clib_1666302270884/work
s3transfer==0.7.0
safetensors==0.4.1
scikit-image @ file:///C:/b/abs_63r0vmx78u/croot/scikit-image_1669241746873/work
scikit-learn @ file:///C:/b/abs_7ck_bnw91r/croot/scikit-learn_1676911676133/work
scikit-learn-intelex==20230228.214818
scipy==1.11.3
Scrapy @ file:///C:/b/abs_9fn69i_d86/croot/scrapy_1677738199744/work
seaborn @ file:///C:/b/abs_68ltdkoyoo/croot/seaborn_1673479199997/work
selenium==4.9.0
Send2Trash @ file:///tmp/build/80754af9/send2trash_1632406701022/work
sentencepiece==0.1.99
service-identity @ file:///Users/ktietz/demo/mc3/conda-bld/service_identity_1629460757137/work
sherlock==0.4.1
sip @ file:///C:/Windows/Temp/abs_b8fxd17m2u/croots/recipe/sip_1659012372737/work
six @ file:///tmp/build/80754af9/six_1644875935023/work
smart-open @ file:///C:/ci/smart_open_1651235038100/work
smmap==5.0.1
sniffio @ file:///C:/ci_310/sniffio_1642092172680/work
snowballstemmer @ file:///tmp/build/80754af9/snowballstemmer_1637937080595/work
snscrape==0.7.0.20230622
sortedcontainers @ file:///tmp/build/80754af9/sortedcontainers_1623949099177/work
sounddevice==0.4.6
soupsieve @ file:///C:/b/abs_fasraqxhlv/croot/soupsieve_1666296394662/work
spacy==3.5.4
spacy-legacy==3.0.12
spacy-loggers==1.0.4
SpeechRecognition==3.10.0
Sphinx @ file:///C:/ci/sphinx_1657617157451/work
sphinxcontrib-applehelp @ file:///home/ktietz/src/ci/sphinxcontrib-applehelp_1611920841464/work
sphinxcontrib-devhelp @ file:///home/ktietz/src/ci/sphinxcontrib-devhelp_1611920923094/work
sphinxcontrib-htmlhelp @ file:///tmp/build/80754af9/sphinxcontrib-htmlhelp_1623945626792/work
sphinxcontrib-jsmath @ file:///home/ktietz/src/ci/sphinxcontrib-jsmath_1611920942228/work
sphinxcontrib-qthelp @ file:///home/ktietz/src/ci/sphinxcontrib-qthelp_1611921055322/work
sphinxcontrib-serializinghtml @ file:///tmp/build/80754af9/sphinxcontrib-serializinghtml_1624451540180/work
spyder @ file:///C:/b/abs_93s9xkw3pn/croot/spyder_1677776163871/work
spyder-kernels @ file:///C:/b/abs_feh4xo1mrn/croot/spyder-kernels_1673292245176/work
SQLAlchemy @ file:///C:/Windows/Temp/abs_f8661157-660b-49bb-a790-69ab9f3b8f7c8a8s2psb/croots/recipe/sqlalchemy_1657867864564/work
sqlitedict==2.1.0
srsly==2.4.6
stack-data @ file:///opt/conda/conda-bld/stack_data_1646927590127/work
starlette==0.27.0
statsmodels @ file:///C:/b/abs_bdqo3zaryj/croot/statsmodels_1676646249859/work
stqdm==0.0.5
streamlit==1.27.2
streamlit-jupyter==0.2.1
syft==0.8.3
sympy @ file:///C:/b/abs_95fbf1z7n6/croot/sympy_1668202411612/work
tables==3.7.0
tabulate @ file:///C:/ci/tabulate_1657600805799/work
TBB==0.2
tblib @ file:///Users/ktietz/demo/mc3/conda-bld/tblib_1629402031467/work
Telethon==1.29.2
tenacity @ file:///home/conda/feedstock_root/build_artifacts/tenacity_1692026804430/work
terminado @ file:///C:/b/abs_25nakickad/croot/terminado_1671751845491/work
text-unidecode @ file:///Users/ktietz/demo/mc3/conda-bld/text-unidecode_1629401354553/work
textdistance @ file:///tmp/build/80754af9/textdistance_1612461398012/work
thinc==8.1.10
threadpoolctl @ file:///Users/ktietz/demo/mc3/conda-bld/threadpoolctl_1629802263681/work
three-merge @ file:///tmp/build/80754af9/three-merge_1607553261110/work
tifffile @ file:///tmp/build/80754af9/tifffile_1627275862826/work
tiktoken==0.4.0
tinycss2 @ file:///C:/b/abs_52w5vfuaax/croot/tinycss2_1668168823131/work
tldextract @ file:///opt/conda/conda-bld/tldextract_1646638314385/work
tokenizers==0.15.2
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomli @ file:///C:/Windows/TEMP/abs_ac109f85-a7b3-4b4d-bcfd-52622eceddf0hy332ojo/croots/recipe/tomli_1657175513137/work
tomlkit @ file:///C:/Windows/TEMP/abs_3296qo9v6b/croots/recipe/tomlkit_1658946894808/work
toolz @ file:///C:/b/abs_cfvk6rc40d/croot/toolz_1667464080130/work
torch==2.1.2
torchaudio==2.0.2+cu118
torchdata==0.7.1
torchtext==0.16.2
torchvision==0.15.2+cu118
tornado @ file:///C:/ci_310/tornado_1642093111997/work
tqdm==4.66.1
traitlets @ file:///C:/b/abs_e5m_xjjl94/croot/traitlets_1671143896266/work
transformers==4.38.1
trio==0.22.1
trio-websocket==0.10.3
Twisted @ file:///C:/Windows/Temp/abs_ccblv2rzfa/croots/recipe/twisted_1659592764512/work
twisted-iocpsupport @ file:///C:/ci/twisted-iocpsupport_1646817083730/work
typeguard==2.13.3
typer==0.9.0
typing-inspect==0.9.0
typing_extensions==4.8.0
tzdata==2023.3
tzlocal==5.0.1
ujson @ file:///C:/ci/ujson_1657525893897/work
Unidecode @ file:///tmp/build/80754af9/unidecode_1614712377438/work
update-checker==0.18.0
uritemplate==4.1.1
urllib3 @ file:///C:/b/abs_9bcwxczrvm/croot/urllib3_1673575521331/work
uvicorn==0.24.0.post1
validators==0.20.0
virtualenv==20.25.0
virtualenv-api==2.1.18
vocode==0.1.111
w3lib @ file:///Users/ktietz/demo/mc3/conda-bld/w3lib_1629359764703/work
wasabi==1.1.2
watchdog @ file:///C:/ci_310/watchdog_1642113443984/work
watchfiles==0.21.0
wcwidth @ file:///Users/ktietz/demo/mc3/conda-bld/wcwidth_1629357192024/work
weaviate-client==3.22.0
webencodings==0.5.1
websocket-client @ file:///C:/ci_310/websocket-client_1642093970919/work
websockets==11.0.3
Werkzeug @ file:///C:/b/abs_17q5kgb8bo/croot/werkzeug_1671216014857/work
whatthepatch @ file:///C:/Users/BUILDE~1/AppData/Local/Temp/abs_e7bihs8grh/croots/recipe/whatthepatch_1661796085215/work
widgetsnbextension==3.6.6
wikipedia==1.4.0
win-inet-pton @ file:///C:/ci_310/win_inet_pton_1642658466512/work
win32-setctime==1.1.0
wincertstore==0.2
wolframalpha==5.0.0
wrapt @ file:///C:/Windows/Temp/abs_7c3dd407-1390-477a-b542-fd15df6a24085_diwiza/croots/recipe/wrapt_1657814452175/work
wsproto==1.2.0
xarray @ file:///C:/b/abs_2fi_umrauo/croot/xarray_1668776806973/work
xlwings @ file:///C:/b/abs_1ejhh6s00l/croot/xlwings_1677024180629/work
xmltodict==0.13.0
xxhash==3.3.0
yapf @ file:///tmp/build/80754af9/yapf_1615749224965/work
yarl==1.9.2
zict==2.1.0
zipp @ file:///C:/b/abs_b9jfdr908q/croot/zipp_1672387552360/work
zope.event==5.0
zope.interface @ file:///C:/ci_310/zope.interface_1642113633904/work
zstandard==0.19.0
```
Platform:
Windows 11
Python version:
Python 3.10.9 | ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Operation not allowed | https://api.github.com/repos/langchain-ai/langchain/issues/19215/comments | 1 | 2024-03-18T04:52:21Z | 2024-03-19T06:55:07Z | https://github.com/langchain-ai/langchain/issues/19215 | 2,191,228,767 | 19,215 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
timeout: Optional[int] = None,
batch_size: int = 1000,
**kwargs: Any,
) -> List[str]:
"""Insert text data into Milvus.
Inserting data when the collection has not been made yet will result
in creating a new Collection. The data of the first entity decides
the schema of the new collection, the dim is extracted from the first
embedding and the columns are decided by the first metadata dict.
Metadata keys will need to be present for all inserted values. At
the moment there is no None equivalent in Milvus.
Args:
texts (Iterable[str]): The texts to embed, it is assumed
that they all fit in memory.
metadatas (Optional[List[dict]]): Metadata dicts attached to each of
the texts. Defaults to None.
timeout (Optional[int]): Timeout for each batch insert. Defaults
to None.
batch_size (int, optional): Batch size to use for insertion.
Defaults to 1000.
Raises:
MilvusException: Failure to add texts
Returns:
List[str]: The resulting keys for each inserted element.
"""
from pymilvus import Collection, MilvusException
texts = list(texts)
try:
embeddings = self.embedding_func.embed_documents(texts)
except NotImplementedError:
embeddings = [self.embedding_func.embed_query(x) for x in texts]
if len(embeddings) == 0:
logger.debug("Nothing to insert, skipping.")
return []
# If the collection hasn't been initialized yet, perform all steps to do so
if not isinstance(self.col, Collection):
kwargs = {"embeddings": embeddings, "metadatas": metadatas}
if self.partition_names:
kwargs["partition_names"] = self.partition_names
if self.replica_number:
kwargs["replica_number"] = self.replica_number
if self.timeout:
kwargs["timeout"] = self.timeout
self._init(**kwargs)
# Dict to hold all insert columns
insert_dict: dict[str, list] = {
self._text_field: texts,
self._vector_field: embeddings,
}
if self._metadata_field is not None:
for d in metadatas:
insert_dict.setdefault(self._metadata_field, []).append(d)
else:
# Collect the metadata into the insert dict.
if metadatas is not None:
for d in metadatas:
for key, value in d.items():
if key in self.fields:
insert_dict.setdefault(key, []).append(value)
# Total insert count
vectors: list = insert_dict[self._vector_field]
total_count = len(vectors)
pks: list[str] = []
assert isinstance(self.col, Collection)
for i in range(0, total_count, batch_size):
# Grab end index
end = min(i + batch_size, total_count)
# Convert dict to list of lists batch for insertion
insert_list = [insert_dict[x][i:end] for x in self.fields]
# Insert into the collection.
try:
res: Collection
res = self.col.insert(insert_list, timeout=timeout, **kwargs)
pks.extend(res.primary_keys)
except MilvusException as e:
logger.error(
"Failed to insert batch starting at entity: %s/%s", i, total_count
)
raise e
self.col.flush()
return pks
```
The `self.col.flush()` call at the end is very slow. Perhaps we could rely on Milvus's built-in auto-flush instead:
https://milvus.io/docs/configure_quota_limits.md#quotaAndLimitsflushRateenabled
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I have tried langchain 0.0.332 and 0.1.12, but `col.insert` is still slow, largely because `add_texts` calls `self.col.flush()` at the end of every call. Perhaps we could rely on Milvus's built-in auto-flush instead:
https://milvus.io/docs/configure_quota_limits.md#quotaAndLimitsflushRateenabled
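A rough sketch of the alternative (the `bulk_insert` helper and its `final_flush` flag are hypothetical, not part of the LangChain or pymilvus API): insert all batches first and flush at most once for the whole load - or skip the explicit flush entirely and let Milvus's auto-flush persist the segments:

```python
from pymilvus import Collection


def bulk_insert(
    col: Collection,
    insert_columns: list[list],  # one list of values per schema field
    batch_size: int = 1000,
    final_flush: bool = False,  # False = rely on Milvus's auto-flush
) -> list:
    """Insert column-oriented data in batches without flushing per batch."""
    pks: list = []
    total = len(insert_columns[0])
    for i in range(0, total, batch_size):
        batch = [column[i : i + batch_size] for column in insert_columns]
        res = col.insert(batch)
        pks.extend(res.primary_keys)
    if final_flush:
        col.flush()  # at most one flush for the entire load
    return pks
```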
### System Info
langchain version 0.0.332
linux ubuntu 20.0 | milvus col.flush() slowly | https://api.github.com/repos/langchain-ai/langchain/issues/19213/comments | 0 | 2024-03-18T02:26:40Z | 2024-06-24T16:13:52Z | https://github.com/langchain-ai/langchain/issues/19213 | 2,191,092,437 | 19,213 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import Annotated, List, Tuple, Union
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.tools import tool
from langchain_experimental.tools import PythonREPLTool
tavily_tool = TavilySearchResults(max_results=5)
# This executes code locally, which can be unsafe
python_repl_tool = PythonREPLTool()
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_openai import ChatOpenAI
def create_agent(
llm: ChatOpenAI, tools: list, system_prompt: str
):
# Each worker node will be given a name and some tools.
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
system_prompt,
),
MessagesPlaceholder(variable_name="messages"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
agent = create_openai_tools_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
return executor
########
# Async agent nodes
########
async def agent_node(state, agent, name):
result = await agent.ainvoke(state)
return {"messages": [HumanMessage(content=result["output"], name=name)]}
from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
members = ["Researcher", "Coder"]
system_prompt = (
"You are a supervisor tasked with managing a conversation between the"
" following workers: {members}. Given the following user request,"
" respond with the worker to act next. Each worker will perform a"
" task and respond with their results and status. When finished,"
" respond with FINISH."
)
# Our team supervisor is an LLM node. It just picks the next agent to process
# and decides when the work is completed
options = ["FINISH"] + members
# Using openai function calling can make output parsing easier for us
function_def = {
"name": "route",
"description": "Select the next role.",
"parameters": {
"title": "routeSchema",
"type": "object",
"properties": {
"next": {
"title": "Next",
"anyOf": [
{"enum": options},
],
}
},
"required": ["next"],
},
}
prompt = ChatPromptTemplate.from_messages(
[
("system", system_prompt),
MessagesPlaceholder(variable_name="messages"),
(
"system",
"Given the conversation above, who should act next?"
" Or should we FINISH? Select one of: {options}",
),
]
).partial(options=str(options), members=", ".join(members))
llm = ChatOpenAI(model="gpt-3.5-turbo", streaming=True) # allow streaming
supervisor_chain = (
prompt
| llm.bind_functions(functions=[function_def], function_call="route")
| JsonOutputFunctionsParser()
)
import operator
from typing import Annotated, Any, Dict, List, Optional, Sequence, TypedDict
import functools
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langgraph.graph import StateGraph, END
# The agent state is the input to each node in the graph
class AgentState(TypedDict):
# The annotation tells the graph that new messages will always
# be appended to the current state
messages: Annotated[Sequence[BaseMessage], operator.add]
# The 'next' field indicates where to route to next
next: str
research_agent = create_agent(llm, [tavily_tool], "You are a web researcher.")
research_node = functools.partial(agent_node, agent=research_agent, name="Researcher")
# NOTE: THIS PERFORMS ARBITRARY CODE EXECUTION. PROCEED WITH CAUTION
code_agent = create_agent(llm, [python_repl_tool], "You may generate safe python code to analyze data and generate charts using matplotlib.")
code_node = functools.partial(agent_node, agent=code_agent, name="Coder")
workflow = StateGraph(AgentState)
workflow.add_node("Researcher", research_node)
workflow.add_node("Coder", code_node)
workflow.add_node("supervisor", supervisor_chain)
for member in members:
# We want our workers to ALWAYS "report back" to the supervisor when done
workflow.add_edge(member, "supervisor")
# The supervisor populates the "next" field in the graph state
# which routes to a node or finishes
conditional_map = {k: k for k in members}
conditional_map["FINISH"] = END
workflow.add_conditional_edges("supervisor", lambda x: x["next"], conditional_map)
# Finally, add entrypoint
workflow.set_entry_point("supervisor")
graph = workflow.compile()
#######
# streaming events
#######
inputs = {'messages':[HumanMessage(content="write a research report on pikas.")]}
#inputs = {"messages": [HumanMessage(content="What is 1+1?")]}
async for event in graph.astream_events(inputs, version="v1"):
kind = event["event"]
if kind == "on_chat_model_stream":
content = event["data"]["chunk"].content
if content:
# Empty content in the context of OpenAI means
# that the model is asking for a tool to be invoked.
# So we only print non-empty content
print(content, end="|")
elif kind == "on_tool_start":
print("--")
print(
f"Starting tool: {event['name']} with inputs: {event['data'].get('input')}"
)
elif kind == "on_tool_end":
print(f"Done tool: {event['name']}")
print(f"Tool output was: {event['data'].get('output')}")
print("--")
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[14], line 4
1 inputs = {'messages':[HumanMessage(content="write a research report on pikas.")]}
2 #inputs = {"messages": [HumanMessage(content="What is 1+1?")]}
----> 4 async for event in graph.astream_events(inputs, version="v1"):
5 kind = event["event"]
6 if kind == "on_chat_model_stream":
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langchain_core\runnables\base.py:1063, in Runnable.astream_events(self, input, config, version, include_names, include_types, include_tags, exclude_names, exclude_types, exclude_tags, **kwargs)
1059 root_name = config.get("run_name", self.get_name())
1061 # Ignoring mypy complaint about too many different union combinations
1062 # This arises because many of the argument types are unions
-> 1063 async for log in _astream_log_implementation( # type: ignore[misc]
1064 self,
1065 input,
1066 config=config,
1067 stream=stream,
1068 diff=True,
1069 with_streamed_output_list=True,
1070 **kwargs,
1071 ):
1072 run_log = run_log + log
1074 if not encountered_start_event:
1075 # Yield the start event for the root runnable.
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langchain_core\tracers\log_stream.py:616, in _astream_log_implementation(runnable, input, config, stream, diff, with_streamed_output_list, **kwargs)
613 finally:
614 # Wait for the runnable to finish, if not cancelled (eg. by break)
615 try:
--> 616 await task
617 except asyncio.CancelledError:
618 pass
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langchain_core\tracers\log_stream.py:570, in _astream_log_implementation.<locals>.consume_astream()
567 prev_final_output: Optional[Output] = None
568 final_output: Optional[Output] = None
--> 570 async for chunk in runnable.astream(input, config, **kwargs):
571 prev_final_output = final_output
572 if final_output is None:
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langgraph\pregel\__init__.py:872, in Pregel.astream(self, input, config, output_keys, input_keys, interrupt_before_nodes, interrupt_after_nodes, debug, **kwargs)
869 async def input_stream() -> AsyncIterator[Union[dict[str, Any], Any]]:
870 yield input
--> 872 async for chunk in self.atransform(
873 input_stream(),
874 config,
875 output_keys=output_keys,
876 input_keys=input_keys,
877 interrupt_before_nodes=interrupt_before_nodes,
878 interrupt_after_nodes=interrupt_after_nodes,
879 debug=debug,
880 **kwargs,
881 ):
882 yield chunk
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langgraph\pregel\__init__.py:896, in Pregel.atransform(self, input, config, output_keys, input_keys, interrupt_before_nodes, interrupt_after_nodes, debug, **kwargs)
884 async def atransform(
885 self,
886 input: AsyncIterator[Union[dict[str, Any], Any]],
(...)
894 **kwargs: Any,
895 ) -> AsyncIterator[Union[dict[str, Any], Any]]:
--> 896 async for chunk in self._atransform_stream_with_config(
897 input,
898 self._atransform,
899 config,
900 output_keys=output_keys,
901 input_keys=input_keys,
902 interrupt_before_nodes=interrupt_before_nodes,
903 interrupt_after_nodes=interrupt_after_nodes,
904 debug=debug,
905 **kwargs,
906 ):
907 yield chunk
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langchain_core\runnables\base.py:1783, in Runnable._atransform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
1781 while True:
1782 if accepts_context(asyncio.create_task):
-> 1783 chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
1784 py_anext(iterator), # type: ignore[arg-type]
1785 context=context,
1786 )
1787 else:
1788 chunk = cast(Output, await py_anext(iterator))
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langchain_core\tracers\log_stream.py:237, in LogStreamCallbackHandler.tap_output_aiter(self, run_id, output)
233 async def tap_output_aiter(
234 self, run_id: UUID, output: AsyncIterator[T]
235 ) -> AsyncIterator[T]:
236 """Tap an output async iterator to stream its values to the log."""
--> 237 async for chunk in output:
238 # root run is handled in .astream_log()
239 if run_id != self.root_id:
240 # if we can't find the run silently ignore
241 # eg. because this run wasn't included in the log
242 if key := self._key_map_by_run_id.get(run_id):
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langgraph\pregel\__init__.py:697, in Pregel._atransform(self, input, run_manager, config, **kwargs)
690 done, inflight = await asyncio.wait(
691 futures,
692 return_when=asyncio.FIRST_EXCEPTION,
693 timeout=self.step_timeout,
694 )
696 # panic on failure or timeout
--> 697 _panic_or_proceed(done, inflight, step)
699 # apply writes to channels
700 _apply_writes(
701 checkpoint, channels, pending_writes, config, step + 1
702 )
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langgraph\pregel\__init__.py:922, in _panic_or_proceed(done, inflight, step)
920 inflight.pop().cancel()
921 # raise the exception
--> 922 raise exc
923 # TODO this is where retry of an entire step would happen
925 if inflight:
926 # if we got here means we timed out
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langgraph\pregel\__init__.py:1071, in _aconsume(iterator)
1069 async def _aconsume(iterator: AsyncIterator[Any]) -> None:
1070 """Consume an async iterator."""
-> 1071 async for _ in iterator:
1072 pass
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langchain_core\runnables\base.py:4435, in RunnableBindingBase.astream(self, input, config, **kwargs)
4429 async def astream(
4430 self,
4431 input: Input,
4432 config: Optional[RunnableConfig] = None,
4433 **kwargs: Optional[Any],
4434 ) -> AsyncIterator[Output]:
-> 4435 async for item in self.bound.astream(
4436 input,
4437 self._merge_configs(config),
4438 **{**self.kwargs, **kwargs},
4439 ):
4440 yield item
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langchain_core\runnables\base.py:3920, in RunnableLambda.astream(self, input, config, **kwargs)
3917 async def input_aiter() -> AsyncIterator[Input]:
3918 yield input
-> 3920 async for chunk in self.atransform(input_aiter(), config, **kwargs):
3921 yield chunk
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langchain_core\runnables\base.py:3903, in RunnableLambda.atransform(self, input, config, **kwargs)
3897 async def atransform(
3898 self,
3899 input: AsyncIterator[Input],
3900 config: Optional[RunnableConfig] = None,
3901 **kwargs: Optional[Any],
3902 ) -> AsyncIterator[Output]:
-> 3903 async for output in self._atransform_stream_with_config(
3904 input,
3905 self._atransform,
3906 self._config(config, self.afunc if hasattr(self, "afunc") else self.func),
3907 **kwargs,
3908 ):
3909 yield output
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langchain_core\runnables\base.py:1783, in Runnable._atransform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
1781 while True:
1782 if accepts_context(asyncio.create_task):
-> 1783 chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
1784 py_anext(iterator), # type: ignore[arg-type]
1785 context=context,
1786 )
1787 else:
1788 chunk = cast(Output, await py_anext(iterator))
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langchain_core\tracers\log_stream.py:237, in LogStreamCallbackHandler.tap_output_aiter(self, run_id, output)
233 async def tap_output_aiter(
234 self, run_id: UUID, output: AsyncIterator[T]
235 ) -> AsyncIterator[T]:
236 """Tap an output async iterator to stream its values to the log."""
--> 237 async for chunk in output:
238 # root run is handled in .astream_log()
239 if run_id != self.root_id:
240 # if we can't find the run silently ignore
241 # eg. because this run wasn't included in the log
242 if key := self._key_map_by_run_id.get(run_id):
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langchain_core\runnables\base.py:3872, in RunnableLambda._atransform(self, input, run_manager, config, **kwargs)
3870 output = chunk
3871 else:
-> 3872 output = await acall_func_with_variable_args(
3873 cast(Callable, afunc), cast(Input, final), config, run_manager, **kwargs
3874 )
3876 # If the output is a runnable, use its astream output
3877 if isinstance(output, Runnable):
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langchain_core\runnables\base.py:3847, in RunnableLambda._atransform.<locals>.f(*args, **kwargs)
3845 @wraps(func)
3846 async def f(*args, **kwargs): # type: ignore[no-untyped-def]
-> 3847 return await run_in_executor(config, func, *args, **kwargs)
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langchain_core\runnables\config.py:493, in run_in_executor(executor_or_config, func, *args, **kwargs)
480 """Run a function in an executor.
481
482 Args:
(...)
489 Output: The output of the function.
490 """
491 if executor_or_config is None or isinstance(executor_or_config, dict):
492 # Use default executor with context copied from current context
--> 493 return await asyncio.get_running_loop().run_in_executor(
494 None,
495 cast(Callable[..., T], partial(copy_context().run, func, *args, **kwargs)),
496 )
498 return await asyncio.get_running_loop().run_in_executor(
499 executor_or_config, partial(func, **kwargs), *args
500 )
File c:\ProgramData\Anaconda3\envs\llm\Lib\concurrent\futures\thread.py:58, in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langchain_core\runnables\base.py:3841, in RunnableLambda._atransform.<locals>.func(input, run_manager, config, **kwargs)
3835 def func(
3836 input: Input,
3837 run_manager: AsyncCallbackManagerForChainRun,
3838 config: RunnableConfig,
3839 **kwargs: Any,
3840 ) -> Output:
-> 3841 return call_func_with_variable_args(
3842 self.func, input, config, run_manager.get_sync(), **kwargs
3843 )
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langchain_core\runnables\config.py:326, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
324 if run_manager is not None and accepts_run_manager(func):
325 kwargs["run_manager"] = run_manager
--> 326 return func(input, **kwargs)
File c:\ProgramData\Anaconda3\envs\llm\Lib\site-packages\langgraph\graph\graph.py:37, in Branch.runnable(self, input)
35 result = self.condition(input)
36 if self.ends:
---> 37 destination = self.ends[result]
38 else:
39 destination = result
KeyError: 'ResearchResearcher'
```
### Description
This is a follow-up to the recent updates to `astream_events` in #18743 and #19051. The discussion originally started in [langchain-ai/langgraph#136](https://github.com/langchain-ai/langgraph/issues/136).
I'm testing the `astream_events` method on [this langgraph example notebook](https://github.com/langchain-ai/langgraph/blob/main/examples/multi_agent/agent_supervisor.ipynb). I modified several lines of code to enable LLM streaming and to convert the nodes to async functions. Without `astream_events`, the supervisor node outputs "next: Researcher"; with it, the router returns "next: ResearchResearcher", which breaks graph streaming because that name is not recognized as a graph node. Another example is "next: FINFINISH" instead of "next: FINISH". The only scenario that works is when the router's output is a single token, e.g. "next: Coder".
My suspicion is that the recently updated `AddableDict` logic in `astream_events` interacts badly with the `JsonOutputFunctionsParser` in the supervisor chain, concatenating a streamed partial chunk with the final value. Any help from @eyurtsev or other team members would be greatly appreciated!
Here's the full Langsmith Trace: https://smith.langchain.com/public/e09b671b-325d-477e-bd29-e017d30c6741/r
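In the meantime, a hypothetical workaround (it papers over the symptom rather than fixing the underlying bug) is to make the conditional edge tolerant of the duplicated chunks, since the concatenated value still contains the intended option as a substring:

```python
# Hypothetical workaround: map a possibly duplicated "next" value back onto a
# known option. "ResearchResearcher" still contains "Researcher", and
# "FINFINISH" still contains "FINISH".
def resilient_route(state):
    raw = state["next"]
    for option in members + ["FINISH"]:
        if option in raw:
            return option
    return "FINISH"  # end the graph on anything unrecognized


workflow.add_conditional_edges("supervisor", resilient_route, conditional_map)
```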
### System Info
langchain=0.1.12
langchain-core=0.1.33rc1
langgraph=0.0.28 | astream_event produces redundant tokens and breaks graph streams | https://api.github.com/repos/langchain-ai/langchain/issues/19211/comments | 3 | 2024-03-18T01:11:51Z | 2024-08-04T16:07:16Z | https://github.com/langchain-ai/langchain/issues/19211 | 2,191,027,572 | 19,211 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Documentation Page: [Evaluation](https://python.langchain.com/docs/guides/evaluation/)
### Idea or request for content:
Links to LangSmith documentation are broken. They should point to:
- **LangSmith Evaluation**: https://docs.smith.langchain.com/evaluation
- **cookbooks**: https://docs.smith.langchain.com/evaluation/faq | DOC: Broken links on Evaluation page | https://api.github.com/repos/langchain-ai/langchain/issues/19210/comments | 0 | 2024-03-18T00:42:48Z | 2024-03-19T02:13:11Z | https://github.com/langchain-ai/langchain/issues/19210 | 2,191,005,514 | 19,210 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
chain = ConversationalRetrievalChain.from_llm(
llm = llm,
memory = memory,
chain_type = "stuff",
retriever = retriever,
combine_docs_chain_kwargs = ...,
condense_question_prompt = ...,
return_source_documents = True,
return_generated_question = True,
verbose = True, debug = True
)
# Adding none of these will bypass the condensing phase and have the chain complete:
chain.rephrase_question = False
chain.question_generator = None
chain.condense_question_llm = None
```
### Error Message and Stack Trace (if applicable)
See #6879 for a description of the issue.
### Description
The goal is: if requested, the `ConversationalRetrievalChain` should skip the question-condensing phase.
(The work-around presented there no longer seems to work, but ISTM it shouldn't be necessary - I tried adding all the default Runnable methods but was still getting the error I described - though maybe I was doing it wrong.)
The reason not to use the `RetrievalQAChain` is that the RQAC and CRC are inconsistent in their input/output parameters for some reason - this makes it a PITA to switch between them dynamically at run-time, including changing the keys used for memory, dealing with extra or invalid chain dicts or kwargs, etc.
**Desired behavior**
The RQAC is a simple sub-set of the CRC, so the CRC should behave just like the RQAC if you set either:
(a) `question_generator = None`
(b) `condense_question_llm = None`
(c) `rephrase_question = False`
(or perhaps a combination of (c) with either (a) or (b), if you want to treat (a) or (b) being None as an error condition when (c) is not True). (This would make the RQAC redundant, but that's a fixture now.)
The current behavior of setting `rephrase_question = False` is curious: I believe it uses the user's question verbatim, _but still calls the condensing LLM_ - I'm not sure what the rationale is here.
If there is currently an LCEL equivalent of the CRC that behaves with or without the `condense_question_llm` in the chain, that would be another option, but simply honoring `rephrase_question = False` fully seems straightforward here.
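For reference, here is a minimal LCEL sketch of what I mean, assuming an existing `llm` and `retriever` (the names and prompt are illustrative, not a proposed API):

```python
from operator import itemgetter

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

answer_prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# The raw user question is used verbatim for retrieval - no condensing LLM call.
no_condense_chain = (
    {
        "context": itemgetter("question") | retriever | format_docs,
        "question": itemgetter("question"),
    }
    | answer_prompt
    | llm
    | StrOutputParser()
)
```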
### System Info
```
langchain==0.1.11
langchain-community==0.0.28
langchain-core==0.1.31
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
```
Python 3.10 | Unable to defeat question condensing in ConversationRetrievalChain (see #6879) | https://api.github.com/repos/langchain-ai/langchain/issues/19200/comments | 0 | 2024-03-17T16:04:46Z | 2024-06-23T16:09:30Z | https://github.com/langchain-ai/langchain/issues/19200 | 2,190,741,952 | 19,200 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
[cookbook/multiple_chains](https://python.langchain.com/docs/expression_language/cookbook/multiple_chains)
[cookbook/sql_db](https://python.langchain.com/docs/expression_language/cookbook/sql_db)
### Idea or request for content:
_No response_ | DOC: Updating `pip install` format in cookbook | https://api.github.com/repos/langchain-ai/langchain/issues/19197/comments | 0 | 2024-03-17T12:12:52Z | 2024-06-23T16:09:25Z | https://github.com/langchain-ai/langchain/issues/19197 | 2,190,634,544 | 19,197 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Valid OCI authentication types include `INSTANCE_PRINCIPAL` and `RESOURCE_PRINCIPAL`. However, comments in the code for [llms](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/oci_generative_ai.py) and [embeddings](libs/community/langchain_community/embeddings/oci_generative_ai.py) incorrectly described these as `INSTANCE_PRINCIPLE` and `RESOURCE_PRINCIPLE` respectively. The incorrect information is also presented in an error message.
### Idea or request for content:
The code comments that show up in the API documentation, and error messages should be corrected to reflect the correct values. | DOC: Incorrect description of valid OCI authentication types for OCI Generative AI LLM and embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/19194/comments | 0 | 2024-03-17T05:31:38Z | 2024-06-23T16:09:29Z | https://github.com/langchain-ai/langchain/issues/19194 | 2,190,488,898 | 19,194 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Current implementation:
```python
def _stream(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> Iterator[GenerationChunk]:
params = {**self._invocation_params, **kwargs, "stream": True}
self.get_sub_prompts(params, [prompt], stop) # this mutates params
for stream_resp in self.client.create(prompt=prompt, **params):
if not isinstance(stream_resp, dict):
stream_resp = stream_resp.model_dump()
chunk = _stream_response_to_generation_chunk(stream_resp)
yield chunk
if run_manager:
run_manager.on_llm_new_token(
chunk.text,
chunk=chunk,
verbose=self.verbose,
logprobs=(
chunk.generation_info["logprobs"]
if chunk.generation_info
else None
),
)
```
I believe this change would produce the intended behavior:
```python
def _stream(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> Iterator[GenerationChunk]:
params = {**self._invocation_params, **kwargs, "stream": True}
self.get_sub_prompts(params, [prompt], stop) # this mutates params
for stream_resp in self.client.create(prompt=prompt, **params):
if not isinstance(stream_resp, dict):
stream_resp = stream_resp.model_dump()
chunk = _stream_response_to_generation_chunk(stream_resp)
if run_manager:
run_manager.on_llm_new_token(
chunk.text,
chunk=chunk,
verbose=self.verbose,
logprobs=(
chunk.generation_info["logprobs"]
if chunk.generation_info
else None
),
)
yield chunk
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When streaming via `langchain_openai.llms.base.BaseOpenAI._stream`, the yield appears before the run manager event is triggered. This makes it impossible to invoke `on_llm_new_token` callback methods until the full response is received.
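A hypothetical handler illustrating the impact: with the current ordering, a callback like this only sees each token after the chunk has already been yielded downstream.

```python
from langchain_core.callbacks import BaseCallbackHandler

class TokenPrinter(BaseCallbackHandler):
    """Prints tokens as they arrive; pointless if fired after the yield."""

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(token, end="", flush=True)
```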
### System Info
```text
System Information
------------------
> OS: Linux
> OS Version: #21~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Feb 9 13:32:52 UTC 2
> Python Version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.1.30
> langchain: 0.1.11
> langchain_community: 0.0.27
> langsmith: 0.1.23
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | on_llm_new_token event broken in langchain_openai when streaming | https://api.github.com/repos/langchain-ai/langchain/issues/19185/comments | 2 | 2024-03-16T12:03:38Z | 2024-06-29T16:08:37Z | https://github.com/langchain-ai/langchain/issues/19185 | 2,189,935,758 | 19,185 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.document_loaders import CSVLoader

loader = CSVLoader("./institution_all.csv")
data = loader.load()
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:67, in CSVLoader.lazy_load(self)
66 with open(self.file_path, newline="", encoding=self.encoding) as csvfile:
---> 67 yield from self.__read_file(csvfile)
68 except UnicodeDecodeError as e:
File /opt/conda/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:98, in CSVLoader.__read_file(self, csvfile)
95 raise ValueError(
96 f"Source column '{self.source_column}' not found in CSV file."
97 )
---> 98 content = "\n".join(
99 f"{k.strip()}: {v.strip() if v is not None else v}"
100 for k, v in row.items()
101 if k not in self.metadata_columns
102 )
103 metadata = {"source": source, "row": i}
File /opt/conda/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:99, in <genexpr>(.0)
95 raise ValueError(
96 f"Source column '{self.source_column}' not found in CSV file."
97 )
98 content = "\n".join(
---> 99 f"{k.strip()}: {v.strip() if v is not None else v}"
100 for k, v in row.items()
101 if k not in self.metadata_columns
102 )
103 metadata = {"source": source, "row": i}
AttributeError: 'NoneType' object has no attribute 'strip'
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
Cell In[7], line 5
3 from langchain.document_loaders import CSVLoader
4 loader = CSVLoader("./institution_all.csv")
----> 5 data = loader.load()
File /opt/conda/lib/python3.10/site-packages/langchain_core/document_loaders/base.py:29, in BaseLoader.load(self)
27 def load(self) -> List[Document]:
28 """Load data into Document objects."""
---> 29 return list(self.lazy_load())
File /opt/conda/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:83, in CSVLoader.lazy_load(self)
81 raise RuntimeError(f"Error loading {self.file_path}") from e
82 except Exception as e:
---> 83 raise RuntimeError(f"Error loading {self.file_path}") from e
RuntimeError: Error loading ./institution_all.csv
```
### Description
I am trying to import a CSV document.
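A possible workaround while this is open (not a fix for the loader itself): pad short rows with empty strings, since `csv.DictReader` fills missing fields with `None`, which is what `strip()` then chokes on. File names here mirror the example above:

```python
import csv

with open("institution_all.csv", newline="", encoding="utf-8") as src:
    rows = list(csv.reader(src))

width = len(rows[0])  # header length
with open("institution_all_clean.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst)
    for row in rows:
        writer.writerow(row + [""] * (width - len(row)))  # pad short rows
```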
### System Info
langchain-0.1.12 | CSVloader RuntimeError: Error loading ./institution_all.csv | https://api.github.com/repos/langchain-ai/langchain/issues/19174/comments | 3 | 2024-03-16T05:48:21Z | 2024-06-04T14:30:06Z | https://github.com/langchain-ai/langchain/issues/19174 | 2,189,795,094 | 19,174 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
tools = load_tools(["ddg-search", "wikipedia", "llm-math"], llm=llm)
retriever = db.as_retriever()
repl_tool = Tool(
name="python_repl",
description="A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.",
func=PythonREPL().run,
)
retrieve_tool = create_retriever_tool(
retriever,
"SearchDocuments",
"Searches and returns results from documents to answer queries.",
)
tools += [repl_tool, retrieve_tool]
return tools
```
### Error Message and Stack Trace (if applicable)
```
> Entering new AgentExecutor chain...
Invoking: `SearchDocuments` with `list all documents`
2024-03-15 23:40:14.938 Uncaught app exception
Traceback (most recent call last):
File "C:\Users\Rafael\anaconda3\envs\kaggle\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "C:\Users\Rafael\OneDrive - Artificial Intelligence Expert\Documents\BioChat_v2\BioChat\src\app.py", line 196, in <module>
main(constants_path=f"./config/CONSTANTS.json")
File "C:\Users\Rafael\OneDrive - Artificial Intelligence Expert\Documents\BioChat_v2\BioChat\src\app.py", line 165, in main
result = st.session_state["agent"].invoke(
File "C:\Users\Rafael\anaconda3\envs\kaggle\lib\site-packages\langchain\chains\base.py", line 163, in invoke
raise e
File "C:\Users\Rafael\anaconda3\envs\kaggle\lib\site-packages\langchain\chains\base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "C:\Users\Rafael\anaconda3\envs\kaggle\lib\site-packages\langchain\agents\agent.py", line 1432, in _call
next_step_output = self._take_next_step(
File "C:\Users\Rafael\anaconda3\envs\kaggle\lib\site-packages\langchain\agents\agent.py", line 1138, in _take_next_step
[
File "C:\Users\Rafael\anaconda3\envs\kaggle\lib\site-packages\langchain\agents\agent.py", line 1138, in <listcomp>
[
File "C:\Users\Rafael\anaconda3\envs\kaggle\lib\site-packages\langchain\agents\agent.py", line 1223, in _iter_next_step
yield self._perform_agent_action(
File "C:\Users\Rafael\anaconda3\envs\kaggle\lib\site-packages\langchain\agents\agent.py", line 1245, in _perform_agent_action
observation = tool.run(
File "C:\Users\Rafael\anaconda3\envs\kaggle\lib\site-packages\langchain_core\tools.py", line 417, in run
raise e
File "C:\Users\Rafael\anaconda3\envs\kaggle\lib\site-packages\langchain_core\tools.py", line 376, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "C:\Users\Rafael\anaconda3\envs\kaggle\lib\site-packages\langchain_core\tools.py", line 580, in _run
else self.func(*args, **kwargs)
File "C:\Users\Rafael\OneDrive - Artificial Intelligence Expert\Documents\BioChat_v2\BioChat/src\agent.py", line 165, in <lambda>
func=lambda query: retriever_tool(query),
File "C:\Users\Rafael\OneDrive - Artificial Intelligence Expert\Documents\BioChat_v2\BioChat/src\agent.py", line 148, in retriever_tool
docs = retriever.get_relevant_documents(query)
File "C:\Users\Rafael\anaconda3\envs\kaggle\lib\site-packages\langchain_core\retrievers.py", line 244, in get_relevant_documents
raise e
File "C:\Users\Rafael\anaconda3\envs\kaggle\lib\site-packages\langchain_core\retrievers.py", line 237, in get_relevant_documents
result = self._get_relevant_documents(
File "C:\Users\Rafael\anaconda3\envs\kaggle\lib\site-packages\langchain_core\vectorstores.py", line 674, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
File "C:\Users\Rafael\anaconda3\envs\kaggle\lib\site-packages\langchain_community\vectorstores\deeplake.py", line 541, in similarity_search
return self._search(
File "C:\Users\Rafael\anaconda3\envs\kaggle\lib\site-packages\langchain_community\vectorstores\deeplake.py", line 421, in _search
_embedding_function = self._embedding_function.embed_query
AttributeError: 'function' object has no attribute 'embed_query'
```
### Description
I'm trying to add a custom tool so that an agent can retrieve information from an Activeloop Deep Lake vector store; however, I keep getting the error
`AttributeError: 'function' object has no attribute 'embed_query'`
How can I fix this?
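The traceback suggests the Deep Lake store was constructed with a bare function (e.g. `embedding_function=embeddings.embed_query`) rather than an `Embeddings` object. A sketch of the likely fix, assuming OpenAI embeddings and a hypothetical dataset path:

```python
from langchain_community.vectorstores import DeepLake
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()  # an object with .embed_query / .embed_documents

db = DeepLake(
    dataset_path="./my_deeplake",  # hypothetical path
    embedding=embeddings,          # pass the object, not embeddings.embed_query
    read_only=True,
)
retriever = db.as_retriever()
```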
### System Info
langchain==0.1.12
langchain-community==0.0.28
langchain-core==0.1.32
langchain-experimental==0.0.54
langchain-google-vertexai==0.1.0
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
langchainhub==0.1.15 | AttributeError: 'function' object has no attribute 'embed_query' with OpenAI llm and custom tool for Data Lake with Activeloop | https://api.github.com/repos/langchain-ai/langchain/issues/19171/comments | 1 | 2024-03-15T22:45:08Z | 2024-06-24T16:08:13Z | https://github.com/langchain-ai/langchain/issues/19171 | 2,189,579,272 | 19,171 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The [link](https://api.python.langchain.com/en/latest/tracing.html?ref=blog.langchain.dev) to the tracing documentation, which is referenced [here](https://blog.langchain.dev/tracing/), is broken.
### Idea or request for content:
Please add documentation on how to enable tracing for a tool.
Other [pages](https://js.langchain.com/docs/modules/agents/how_to/logging_and_tracing) imply that tracing and verbose are not the same.
If there is a way to time a tool, please add documentation on that too. | DOC: Broken link to Langchain Tracing | https://api.github.com/repos/langchain-ai/langchain/issues/19165/comments | 0 | 2024-03-15T20:06:34Z | 2024-06-21T16:37:05Z | https://github.com/langchain-ai/langchain/issues/19165 | 2,189,389,814 | 19,165 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import boto3
import json
from langchain_community.chat_models import BedrockChat
from langchain.agents import tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
boto3_bedrock = boto3.client('bedrock-runtime')
modelId = "anthropic.claude-3-sonnet-20240229-v1:0"
llm = BedrockChat(
model_id=modelId,
client = boto3_bedrock
)
@tool
def get_word_length(word: str) -> int:
"""Returns the length of a word."""
return len(word)
#get_word_length.invoke("abc")
tools = [get_word_length]
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You are very powerful assistant, but don't know current events",
),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
llm_with_tools = llm.bind_tools(tools)
```
### Error Message and Stack Trace (if applicable)
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[7], line 1
----> 1 llm_with_tools = llm.bind_tools(tools)
AttributeError: 'BedrockChat' object has no attribute 'bind_tools'
```
### Description
I am trying to follow [this](https://python.langchain.com/docs/modules/agents/how_to/custom_agent) guide with Bedrock and it's throwing this error.
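A defensive sketch while `bind_tools` is unavailable on `BedrockChat` - purely illustrative; the fallback just prints the schemas that would have been bound, so they can be injected into the prompt and parsed manually (e.g. a ReAct-style agent) instead:

```python
from langchain_core.utils.function_calling import convert_to_openai_tool

if hasattr(llm, "bind_tools"):
    llm_with_tools = llm.bind_tools(tools)
else:
    # No native tool binding on this chat model in this version.
    tool_schemas = [convert_to_openai_tool(t) for t in tools]
    print(tool_schemas)  # inspect what would have been bound
```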
### System Info
langchain==0.1.12
langchain-community==0.0.28
langchain-core==0.1.32
langchain-text-splitters==0.0.1 | 'BedrockChat' object has no attribute 'bind_tools' | https://api.github.com/repos/langchain-ai/langchain/issues/19162/comments | 5 | 2024-03-15T18:44:12Z | 2024-07-04T15:30:45Z | https://github.com/langchain-ai/langchain/issues/19162 | 2,189,262,902 | 19,162 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
This is the code for my server.py
```python
from fastapi import FastAPI
from fastapi.responses import RedirectResponse
from langserve import add_routes
from rag_conversation import chain as rag_conversation_chain
app = FastAPI()
@app.get("/")
async def redirect_root_to_docs():
return RedirectResponse("/docs")
add_routes(app, rag_conversation_chain, path="/rag-conversation")
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)
```
### Error Message and Stack Trace (if applicable)
```
INFO: Will watch for changes in these directories:
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [10212] using WatchFiles
ERROR: Error loading ASGI app. Could not import module "app.server".
```
### Description
I am trying to use langchain and langserve to create an API from a template. Steps I took:
1. create server template
```
langchain app new my-app --package rag-conversation
```
2. copy in the code provided in the cmd after the installation is ready
```
@app.get("/")
async def redirect_root_to_docs():
return RedirectResponse("/docs")
add_routes(app, rag_conversation_chain, path="/rag-conversation")
```
3. `cd` into the my-app folder and run `langchain serve` in the cmd.
After this, the server can't seem to start and throws this error:
```
INFO: Will watch for changes in these directories:
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [10212] using WatchFiles
ERROR: Error loading ASGI app. Could not import module "app.server".
```
Does anyone know how to approach this issue? There is no `-v` command option, so this is all the information I have to go by.
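One way to dig further (a debugging sketch, not a fix): import the module directly from the my-app directory so the real exception is printed instead of uvicorn's generic message:

```python
# Run from the my-app directory, e.g.:  python -c "import app.server"
import app.server  # noqa: F401  - the traceback printed here is the actual root cause
```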
### System Info
python -m langchain_core.sys_info:
```
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.12.2 (tags/v3.12.2:6abddd9, Feb 6 2024, 21:26:36) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.31
> langchain: 0.1.12
> langchain_community: 0.0.28
> langsmith: 0.1.25
> langchain_cli: 0.0.21
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
> langserve: 0.0.51
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
```
pip freeze:
```
aiohttp==3.9.3
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.3.0
attrs==23.2.0
beautifulsoup4==4.12.3
build==1.1.1
CacheControl==0.14.0
certifi==2024.2.2
charset-normalizer==3.3.2
cleo==2.1.0
click==8.1.7
colorama==0.4.6
crashtest==0.4.1
dataclasses-json==0.6.4
distlib==0.3.8
distro==1.9.0
dulwich==0.21.7
fastapi==0.110.0
fastjsonschema==2.19.1
filelock==3.13.1
frozenlist==1.4.1
gitdb==4.0.11
GitPython==3.1.42
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.4
httptools==0.6.1
httpx==0.27.0
httpx-sse==0.4.0
idna==3.6
installer==0.7.0
jaraco.classes==3.3.1
jsonpatch==1.33
jsonpointer==2.4
keyring==24.3.1
langchain==0.1.12
langchain-cli==0.0.21
langchain-community==0.0.28
langchain-core==0.1.31
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
langserve==0.0.51
langsmith==0.1.25
markdown-it-py==3.0.0
marshmallow==3.21.1
mdurl==0.1.2
more-itertools==10.2.0
msgpack==1.0.8
multidict==6.0.5
mypy-extensions==1.0.0
numpy==1.26.4
openai==1.14.0
orjson==3.9.15
packaging==23.2
pexpect==4.9.0
pinecone-client==3.1.0
pkginfo==1.10.0
platformdirs==4.2.0
poetry==1.8.2
poetry-core==1.9.0
poetry-dotenv-plugin==0.2.0
poetry-plugin-export==1.6.0
ptyprocess==0.7.0
pydantic==2.6.4
pydantic_core==2.16.3
Pygments==2.17.2
pyproject_hooks==1.0.0
python-dotenv==1.0.1
pywin32-ctypes==0.2.2
PyYAML==6.0.1
rapidfuzz==3.6.2
regex==2023.12.25
requests==2.31.0
requests-toolbelt==1.0.0
rich==13.7.1
shellingham==1.5.4
smmap==5.0.1
sniffio==1.3.1
soupsieve==2.5
SQLAlchemy==2.0.28
sse-starlette==1.8.2
starlette==0.36.3
tenacity==8.2.3
tiktoken==0.6.0
tomlkit==0.12.4
tqdm==4.66.2
trove-classifiers==2024.3.3
typer==0.9.0
typing-inspect==0.9.0
typing_extensions==4.10.0
urllib3==2.2.1
uvicorn==0.23.2
virtualenv==20.25.1
watchfiles==0.21.0
websockets==12.0
yarl==1.9.4
``` | Langserver could not import module app.server | https://api.github.com/repos/langchain-ai/langchain/issues/19150/comments | 1 | 2024-03-15T16:38:47Z | 2024-06-26T16:07:40Z | https://github.com/langchain-ai/langchain/issues/19150 | 2,189,059,636 | 19,150 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
tools = [CorrelationTool(), DataTool()]
llm = Llm().azure_openai
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, handle_parsing_errors=True, max_iterations = 5, early_stopping_method="generate")
with tracing_v2_enabled(project_name="default"):
ans = agent_executor.invoke({"input": "Present me some conclusions on data of criminality"})
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The code above executes only one step: it presents the data from the first action but never continues to a next step or writes a final answer. It feels like a loop is missing somewhere in the LangChain code.
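To see what the agent produced before stopping, one can expose the intermediate steps (a sketch reusing the objects from the example above):

```python
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    handle_parsing_errors=True,
    max_iterations=5,
    return_intermediate_steps=True,  # expose each (AgentAction, observation) pair
)
result = agent_executor.invoke(
    {"input": "Present me some conclusions on data of criminality"}
)
for action, observation in result["intermediate_steps"]:
    print(action.log, "\n-->", observation)
```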
### System Info
```
> Entering new AgentExecutor chain...
I should use the Correlation Calculator to find some correlations
Action: Correlation Calculator
Action Input: "criminality in Brazil"
Roubo Furto Homicídio
Roubo 1.000000 0.984111 0.936390
Furto 0.984111 1.000000 0.983135
Homicídio 0.936390 0.983135 1.000000
> Finished chain.
```
| React Agent stops at first observation | https://api.github.com/repos/langchain-ai/langchain/issues/19149/comments | 2 | 2024-03-15T16:33:09Z | 2024-03-18T09:59:51Z | https://github.com/langchain-ai/langchain/issues/19149 | 2,189,049,809 | 19,149 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.output_parsers import RetryOutputParser
from langchain_core.output_parsers.pydantic import PydanticOutputParser
from langchain_core.pydantic_v1 import BaseModel
from langchain_openai import OpenAI
class TestModel(BaseModel):
a: int
b: str
data_pydantic = TestModel(a=1, b="2")
data_json = data_pydantic.json()
parser = PydanticOutputParser(pydantic_object=TestModel)
retry_parser = RetryOutputParser.from_llm(parser=parser, llm=OpenAI(temperature=0))
retry_parser.parse_with_prompt(completion=data_json, prompt_value="Test prompt")
retry_parser.parse_with_prompt(completion=data_pydantic, prompt_value="Test prompt") # Error
```
### Error Message and Stack Trace (if applicable)
```
ValidationError Traceback (most recent call last)
Cell In[3], [line 20](vscode-notebook-cell:?execution_count=3&line=20)
[16](vscode-notebook-cell:?execution_count=3&line=16) retry_parser = RetryOutputParser.from_llm(parser=parser, llm=OpenAI(temperature=0))
[18](vscode-notebook-cell:?execution_count=3&line=18) retry_parser.parse_with_prompt(completion=data_json, prompt_value="Test prompt")
---> [20](vscode-notebook-cell:?execution_count=3&line=20) retry_parser.parse_with_prompt(completion=data_pydantic, prompt_value="Test prompt")
File [c:\Users\Asus\anaconda3\envs\dev\Lib\site-packages\langchain\output_parsers\retry.py:89](file:///C:/Users/Asus/anaconda3/envs/dev/Lib/site-packages/langchain/output_parsers/retry.py:89), in RetryOutputParser.parse_with_prompt(self, completion, prompt_value)
[87](file:///C:/Users/Asus/anaconda3/envs/dev/Lib/site-packages/langchain/output_parsers/retry.py:87) while retries <= self.max_retries:
[88](file:///C:/Users/Asus/anaconda3/envs/dev/Lib/site-packages/langchain/output_parsers/retry.py:88) try:
---> [89](file:///C:/Users/Asus/anaconda3/envs/dev/Lib/site-packages/langchain/output_parsers/retry.py:89) return self.parser.parse(completion)
[90](file:///C:/Users/Asus/anaconda3/envs/dev/Lib/site-packages/langchain/output_parsers/retry.py:90) except OutputParserException as e:
[91](file:///C:/Users/Asus/anaconda3/envs/dev/Lib/site-packages/langchain/output_parsers/retry.py:91) if retries == self.max_retries:
File [c:\Users\Asus\anaconda3\envs\dev\Lib\site-packages\langchain_core\output_parsers\json.py:218](file:///C:/Users/Asus/anaconda3/envs/dev/Lib/site-packages/langchain_core/output_parsers/json.py:218), in JsonOutputParser.parse(self, text)
[217](file:///C:/Users/Asus/anaconda3/envs/dev/Lib/site-packages/langchain_core/output_parsers/json.py:217) def parse(self, text: str) -> Any:
--> [218](file:///C:/Users/Asus/anaconda3/envs/dev/Lib/site-packages/langchain_core/output_parsers/json.py:218) return self.parse_result([Generation(text=text)])
File [c:\Users\Asus\anaconda3\envs\dev\Lib\site-packages\langchain_core\load\serializable.py:120](file:///C:/Users/Asus/anaconda3/envs/dev/Lib/site-packages/langchain_core/load/serializable.py:120), in Serializable.__init__(self, **kwargs)
[119](file:///C:/Users/Asus/anaconda3/envs/dev/Lib/site-packages/langchain_core/load/serializable.py:119) def __init__(self, **kwargs: Any) -> None:
--> [120](file:///C:/Users/Asus/anaconda3/envs/dev/Lib/site-packages/langchain_core/load/serializable.py:120) super().__init__(**kwargs)
[121](file:///C:/Users/Asus/anaconda3/envs/dev/Lib/site-packages/langchain_core/load/serializable.py:121) self._lc_kwargs = kwargs
File [c:\Users\Asus\anaconda3\envs\dev\Lib\site-packages\pydantic\main.py:341](file:///C:/Users/Asus/anaconda3/envs/dev/Lib/site-packages/pydantic/main.py:341), in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for Generation
text
str type expected (type=type_error.str)
```
### Description
The `RetryOutputParser` does not seem to work correctly when used with `PydanticOutputParser`. I guess it won't work correctly whenever used with a parser that does not output a string.
In the code above, it works when receiving a string, but when receiving anything else, it throws:
```
ValidationError: 1 validation error for Generation
text
str type expected (type=type_error.str)
```
In the context of a chain with a `PydanticOutputParser`, when the LLM call returns a correct result as the Pydantic model, the retry parser throws an error.
I see no mention about it (the `RetryOutputParser` only accepting a string) in the docs: https://python.langchain.com/docs/modules/model_io/output_parsers/types/retry
I was able to avoid this issue by converting the `completion` value to a json string (shown below), if the type is the same as the expected pydantic model.
```python
def parse_with_prompt(args):
completion = args['completion']
    if isinstance(completion, TestModel):
args = args.copy()
del args['completion']
completion = completion.json(ensure_ascii=False)
args['completion'] = completion
return retry_parser.parse_with_prompt(**args)
chain = RunnableParallel(
completion=completion_chain, prompt_value=prompt
) | RunnableLambda(parse_with_prompt)
```
The problem is that this seems hackish, and I don't know whether it will remain portable across new versions of the parser (at least in the docs example I see no reference to the parameters that should be passed to `parse_with_prompt`; I can see in the source code that they are `completion: str` and `prompt_value: PromptValue`, but I'm not sure whether that should be considered an implementation detail, given there is no mention in the docs). Furthermore, if this issue is fixed in new versions, I may end up converting the model to JSON when I shouldn't.
For now I'm not using the `RetryOutputParser`, because it seems to not be production ready yet (at least with a parser that does not output a string).
### System Info
```
System Information
------------------
> OS: Windows
> OS Version: 10.0.22621
> Python Version: 3.11.5 | packaged by conda-forge | (main, Aug 27 2023, 03:23:48) [MSC v.1936 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.6
> langchain_community: 0.0.19
> langsmith: 0.1.14
> langchain_openai: 0.0.8
> langchainhub: 0.1.14
> langgraph: 0.0.28
> langserve: 0.0.46
``` | `RetryOutputParser` error when used with `PydanticOutputParser` | https://api.github.com/repos/langchain-ai/langchain/issues/19145/comments | 2 | 2024-03-15T16:21:21Z | 2024-07-22T16:08:16Z | https://github.com/langchain-ai/langchain/issues/19145 | 2,189,019,290 | 19,145 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
response = llm_chain.invoke(
input={"query": query}, config={"callbacks": [token_counter]}
)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/app-root/ols/app/endpoints/ols.py", line 216, in validate_question
return question_validator.validate_question(conversation_id, llm_request.query)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app-root/ols/src/query_helpers/question_validator.py", line 78, in validate_question
response = llm_chain.invoke(
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.11/site-packages/langchain/chains/llm.py", line 103, in _call
response = self.generate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/chains/llm.py", line 115, in generate
return self.llm.generate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 544, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 408, in generate
raise e
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 398, in generate
self._generate_with_cache(
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 577, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 444, in _generate
return generate_from_stream(stream_iter)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 65, in generate_from_stream
for chunk in stream:
File "/usr/local/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 408, in _stream
for chunk in self.client.create(messages=message_dicts, **params):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/_utils/_utils.py", line 271, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 659, in create
return self._post(
^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1180, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 869, in request
return self._request(
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 890, in _request
request = self._build_request(options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 452, in _build_request
headers = self._build_headers(options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 413, in _build_headers
headers = httpx.Headers(headers_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/httpx/_models.py", line 70, in __init__
self._list = [
^
File "/usr/local/lib/python3.11/site-packages/httpx/_models.py", line 74, in <listcomp>
normalize_header_value(v, encoding),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/httpx/_utils.py", line 53, in normalize_header_value
return value.encode(encoding or "ascii")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'ascii' codec can't encode character '\u23da' in position 58: ordinal not in range(128)
```
### Description
* A unicode character was accidentally pasted into the API key environment variable
* Langchain throws a difficult-to-understand Unicode error because it does not appear to have sanity-checked the value it was passing to httpx (a minimal check of that kind is sketched below)
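The sanity check meant here, as an illustrative sketch (not proposed library code):

```python
import os

api_key = os.environ["OPENAI_API_KEY"]
if not api_key.isascii():
    bad = [c for c in api_key if not c.isascii()]
    raise ValueError(f"API key contains non-ASCII characters: {bad!r}")
```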
### System Info
```
langchain==0.1.11
langchain-community==0.0.26
langchain-core==0.1.29
langchain-openai==0.0.5
langchain-text-splitters==0.0.1
```
Fedora release 39 (Thirty Nine)
Python 3.11.5 | When unicode character is in API key, non-specific error is returned (instead of invalid API key) | https://api.github.com/repos/langchain-ai/langchain/issues/19144/comments | 0 | 2024-03-15T15:54:24Z | 2024-06-21T16:37:07Z | https://github.com/langchain-ai/langchain/issues/19144 | 2,188,945,438 | 19,144 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import weaviate
from langchain_community.vectorstores import Weaviate

auth_client_secret = weaviate.AuthApiKey(WEAVIATE_ADMIN_APIKEY)
wv_conn = weaviate.Client(
    url=WEAVIATE_URL,
    auth_client_secret=auth_client_secret,  # stray extra ")" removed
)
wvdb = Weaviate(
    client=wv_conn,  # was self.wv_conn, inconsistent with the assignment above
    index_name=index_name,
    text_key="text",
    embedding=embeddings,
    attributes=["a1", "a2", "a3", "a4"],
)
wvdb.add_texts(texts=texts, metadatas=metadatas, ids=ids)
wvdb.similarity_search_with_score(query=query, k=top_k)
```
### Error Message and Stack Trace (if applicable)
ValueError: Error during query: [{'locations': [{'column': 34, 'line': 1}], 'message': 'Unknown argument "nearText" on field "OAAIndexOpenAITest" of type "GetObjectsObj". Did you mean "nearObject" or "nearVector"?', 'path': None}]
### Description
I'm having issues with LangChain creating an index and vectorizing text in Weaviate. I do not want to set up a vectorizer in Weaviate; I want LangChain to do the embedding directly. Vectorizing with Elasticsearch through LangChain works fine, but not with Weaviate.
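A sketch of one likely fix, assuming the `by_text` constructor flag is available in this wrapper version: `nearText` requires a server-side vectorizer, so tell the wrapper to embed client-side and query with `nearVector` instead.

```python
wvdb = Weaviate(
    client=wv_conn,
    index_name=index_name,
    text_key="text",
    embedding=embeddings,
    attributes=["a1", "a2", "a3", "a4"],
    by_text=False,  # embed the query locally and use nearVector, not nearText
)
docs = wvdb.similarity_search_with_score(query=query, k=top_k)
```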
### System Info
langchain==0.0.308
langchain-community==0.0.20
langchain-core==0.1.23
langchain-text-splitters==0.0.1
weaviate-client==3.24.2
weaviate== 1.21.2 | Langchain not able to create a new index and/or generate vectors to store in weaviate? | https://api.github.com/repos/langchain-ai/langchain/issues/19143/comments | 1 | 2024-03-15T15:49:11Z | 2024-03-28T18:33:51Z | https://github.com/langchain-ai/langchain/issues/19143 | 2,188,928,384 | 19,143 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Helper function for printing docs
def pretty_print_docs(docs):
print(
f"\n{'-' * 100}\n".join(
[f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)]
)
)
import getpass
import os
os.environ["OPENAI_API_KEY"] = getpass.getpass()
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
documents = TextLoader(
"../../modules/state_of_the_union.txt",
).load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)
texts = text_splitter.split_documents(documents)
embedding = OpenAIEmbeddings(model="text-embedding-ada-002")
retriever = FAISS.from_documents(texts, embedding).as_retriever(search_kwargs={"k": 20})
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import FlashrankRerank
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(temperature=0)
compressor = FlashrankRerank()
compression_retriever = ContextualCompressionRetriever(
base_compressor=compressor, base_retriever=retriever
)
compressed_docs = compression_retriever.get_relevant_documents(
"What did the president say about Ketanji Jackson Brown"
)
print([doc.metadata["source"] for doc in compressed_docs])
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When using `FlashrankRerank`, the original documents' metadata is overwritten with just an `id` and the `relevance_score`. This leads to complications if the documents were retrieved with important metadata.
`id` is also a commonly used metadata field when documents are stored in a database and should not be overwritten.
As it is implemented, the `id` assigned by `FlashrankRerank` is trivial, because it is simply the index in the list of documents.
What I would like to happen is that the list of input documents is returned as is, just reordered and filtered, with the metadata intact. The relevance score should be added to the documents' existing metadata. No additional id is necessary to avoid clashing with existing ids.
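Until this is fixed upstream, a workaround sketch (it assumes the reranker keeps `page_content` unchanged, which it appears to do): re-rank, then map results back onto the originals so their metadata survives.

```python
from langchain_core.documents import Document

def rerank_keep_metadata(compressor, docs, query):
    ranked = compressor.compress_documents(docs, query)
    out = []
    for r in ranked:
        # Match the reranked text back to the original document.
        original = next(d for d in docs if d.page_content == r.page_content)
        meta = {**original.metadata,
                "relevance_score": r.metadata["relevance_score"]}
        out.append(Document(page_content=original.page_content, metadata=meta))
    return out
```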
### System Info
langchain==0.1.12
langchain-community==0.0.28
langchain-core==0.1.31
langchain-openai==0.0.8
langchain-text-splitters==0.0.1 | `FlashrankRerank` drops document metadata | https://api.github.com/repos/langchain-ai/langchain/issues/19142/comments | 0 | 2024-03-15T14:56:27Z | 2024-03-19T10:43:39Z | https://github.com/langchain-ai/langchain/issues/19142 | 2,188,745,224 | 19,142 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.retrievers.document_compressors import FlashrankRerank
from langchain.retrievers import ContextualCompressionRetriever
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The import path in the example notebook for `FlashrankRerank` is wrong. It is listed as `from langchain.retrievers import FlashrankRerank` but should be `from langchain.retrievers.document_compressors import FlashrankRerank`.
### System Info
```
langchain==0.1.12
langchain-community==0.0.28
langchain-core==0.1.31
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
``` | Wrong import path in `docs/integrations/retrievers/flashrank-reranker.ipynb` | https://api.github.com/repos/langchain-ai/langchain/issues/19139/comments | 0 | 2024-03-15T14:25:20Z | 2024-06-21T16:37:05Z | https://github.com/langchain-ai/langchain/issues/19139 | 2,188,642,452 | 19,139 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def get_message_history(collection_id: str) -> SQLChatMessageHistoryExtended:
return SQLChatMessageHistoryExtended(
session_id=collection_id,
connection=db_conn,
custom_message_converter=CustomMessageConverter(),
k_latest_interactions=self.k_memory_interactions
)
def format_docs(docs):
return "\n\n".join(doc.page_content for doc in docs)
db = PGVectorExtended.from_existing_index(
connection=db_conn,
embedding=self.embeddings,
collection_name=str(collection_id),
distance_strategy="cosine",
)
retriever = db.as_retriever(
search_type="similarity", search_kwargs={"k": self.k_similar_chunks}
)
chat_template = ChatPromptTemplate.from_messages(
[
("system", ("foo."),),
MessagesPlaceholder(variable_name="history"),
("human",("{question} \n\n ...foo... {context} \n\n")),
]
)
llm_with_tool = self.llm.bind_tools(
[quoted_answer],
tool_choice="quoted_answer",
)
output_parser = JsonOutputKeyToolsParser(key_name="quoted_answer", return_single=True)
chain = (
{
"context": itemgetter("question") | retriever | format_docs,
"question": itemgetter("question"),
"history": itemgetter("history"),
}
| chat_template
| llm_with_tool
| output_parser
# with these the history works
# | self.llm
# | StrOutputParser()
)
chain_with_message_history = RunnableWithMessageHistory(
chain,
get_message_history,
input_messages_key="question",
history_messages_key="history",
output_messages_key="quoted_answer",
)
response = chain_with_message_history.invoke(
{"question": question},
config={"configurable": {"session_id": str(collection_id)}}
)
# I have to add the history manually when using `llm_with_tool` and `output_parser`
# with `self.llm` and `StrOutputParser()` it is added automatically
get_message_history(str(collection_id)).add_ai_message(response['answer'])
get_message_history(str(collection_id)).add_user_message(question)
return response['answer'], response['citations']
```
### Error Message and Stack Trace (if applicable)
Attached is output from the chain
[chain_logs.txt](https://github.com/langchain-ai/langchain/files/14615664/chain_logs.txt)
### Description
## My goal
* I have a RAG chat which answers questions based on a document.
* I want to have history (which is stored in postgres) so when the users says "translate the previous answer to french" it will do so, for example.
* I also want to return citations for the user to see where the answer comes from.
* _(I have also [posted this in the Q&A](https://github.com/langchain-ai/langchain/discussions/19118), but based on the bot's output it seems more like a bug.)_
## What's wrong
### 1) History is not inserted
- if I do not include the code below, nothing is inserted in the message store
```python
get_message_history(str(collection_id)).add_ai_message(response['answer'])
get_message_history(str(collection_id)).add_user_message(question)
```
- note that if I do not use the chain with OpenAI tools for citations, _the history is inserted correctly_
```python
chain = (
{
"context": itemgetter("question") | retriever | format_docs,
"question": itemgetter("question"),
"history": itemgetter("history"),
}
| chat_template
# with these the history works
| self.llm
| StrOutputParser()
)
```
### 2) History is ignored
- if I add the messages manually to the history, the chain cannot answer basic questions, like "translate the previous answer to french" and instead returns the previous answer
```
+---------------------------------------------------------------------------------+------+
|message |author|
+---------------------------------------------------------------------------------+------+
|The document is a draft report overview of the Civil UAV Capability ......... |ai |
|Summarise the document and write two main takeaways |human |
|Based on the definition provided in the 'Lexicon of UAV/ROA Terminology', .... |ai |
|Can it also carry animals? |human |
|Based on the provided context, UAVs are not intended to carry animals. |ai |
|Can it fly to space? |human |
|Based on the provided context, UAVs are not intended to carry animals. |ai |
|translate the previous answer to german |human |
+---------------------------------------------------------------------------------+------+
```
- Resolution is the same as above ☝️ : this works if the chain with OpenAI tools for citations is not used
## Solution
- Based on the logs from the chain, I suspect that the key `quoted_answer` is somehow not handled well
- I also see this suspicious warning in the logs
```
Error in RootListenersTracer.on_chain_end callback: KeyError('quoted_answer')
```
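If that's the cause, a workaround sketch would be to hand the history writer a plain message while keeping the parser's dict available (hypothetical helper name; it assumes the parsed dict has an `answer` key, as in the code above):

```python
from langchain_core.messages import AIMessage
from langchain_core.runnables import RunnableLambda

def to_history_output(parsed: dict) -> dict:
    # Expose the answer as an AIMessage under the output_messages_key while
    # keeping the full parsed dict around for citations.
    return {"quoted_answer": AIMessage(content=parsed["answer"]), "raw": parsed}

chain_for_history = chain | RunnableLambda(to_history_output)
```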
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:31 PST 2024; root:xnu-10063.101.15~2/RELEASE_X86_64
> Python Version: 3.11.1 (main, Aug 9 2023, 13:06:45) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Package Information
-------------------
> langchain_core: 0.1.32
> langchain: 0.1.12
> langchain_community: 0.0.28
> langsmith: 0.1.26
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | When using citations from OpenAI's tools with `RunnableWithMessageHistory`, history is ignored and not inserted | https://api.github.com/repos/langchain-ai/langchain/issues/19136/comments | 1 | 2024-03-15T13:02:27Z | 2024-07-16T16:06:33Z | https://github.com/langchain-ai/langchain/issues/19136 | 2,188,468,815 | 19,136 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
There is a minor mistake in the Neo4j documentation. In [Seeding the database](https://python.langchain.com/docs/use_cases/graph/integrations/graph_cypher_qa#seeding-the-database), movie titles are inserted as 'name', but later, in the [Add examples in the Cypher generation prompt](https://python.langchain.com/docs/use_cases/graph/integrations/graph_cypher_qa#add-examples-in-the-cypher-generation-prompt) section, the example has 'title' instead of 'name': <code># How many people played in Top Gun? MATCH (m:Movie {{title:"Top Gun"}})<-[:ACTED_IN]-() RETURN count(*) AS numberOfActors</code>. I have also done Cypher querying with LLaMA 2, and if you are interested, I can provide the code for the template...
### Idea or request for content:
Change <code># How many people played in Top Gun? MATCH (m:Movie {{title:"Top Gun"}})<-[:ACTED_IN]-() RETURN count(*) AS numberOfActors</code> to <code># How many people played in Top Gun? MATCH (m:Movie {{name:"Top Gun"}})<-[:ACTED_IN]-() RETURN count(*) AS numberOfActors</code>.
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
1. Fetched documents similar to my query using the code below:
> staff_knowledge_base.similarity_search(user_question, k=10)
```
staff_knowledge_base = PGVector(
embedding_function=embeddings,
connection_string=conn,
collection_name=collection)
```
**NOTE:** here default distance_strategy is used, i.e., COSINE (as per documentation)
2. Fetched documents similar to my query using the code below:
> staff_knowledge_base.similarity_search(user_question, k=10)
```
from langchain_community.vectorstores.pgvector import DistanceStrategy
staff_knowledge_base = PGVector(
embedding_function=embeddings,
connection_string=conn,
collection_name=collection,
distance_strategy= DistanceStrategy.MAX_INNER_PRODUCT
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
**Observation:** There is no change in the list of fetched documents.
**Expected Result:** The fetched documents should have changed with the change in distance_strategy.
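To verify whether the strategy is actually applied, it may help to compare raw scores rather than just result order (a sketch reusing the variables above; note that with unit-normalized embeddings, such as OpenAI's, cosine and inner-product rankings can legitimately coincide):

```python
from langchain_community.vectorstores.pgvector import DistanceStrategy, PGVector

for strategy in (DistanceStrategy.COSINE, DistanceStrategy.MAX_INNER_PRODUCT):
    store = PGVector(
        embedding_function=embeddings,
        connection_string=conn,
        collection_name=collection,
        distance_strategy=strategy,
    )
    for doc, score in store.similarity_search_with_score(user_question, k=3):
        print(strategy, round(score, 4), doc.page_content[:40])
```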
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22621
> Python Version: 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.31
> langchain: 0.1.12
> langchain_community: 0.0.28
> langsmith: 0.1.25
> langchain_openai: 0.0.6
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | PGVector distance_strategy methods seems to be not working | https://api.github.com/repos/langchain-ai/langchain/issues/19129/comments | 1 | 2024-03-15T11:38:41Z | 2024-06-24T16:13:29Z | https://github.com/langchain-ai/langchain/issues/19129 | 2,188,319,485 | 19,129 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
1) Can you confirm which language model is used by the **create_sql_query_chain** method for text-to-SQL conversion? How can I **configure it** if I want to try some other language model?
https://python.langchain.com/docs/use_cases/sql/quickstart#convert-question-to-sql-query
2) Can you confirm which language model is used by the **create_sql_agent** method for text-to-SQL conversion? How can I **configure it** if I want to try some other language model?
https://python.langchain.com/docs/use_cases/sql/agents#agent
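For what it's worth, both helpers take the model explicitly, so any chat model can be swapped in (a sketch assuming an existing `SQLDatabase` named `db`):

```python
from langchain.chains import create_sql_query_chain
from langchain_community.agent_toolkits import create_sql_agent
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)  # swap in any model here

query_chain = create_sql_query_chain(llm, db)            # llm is the first argument
agent_executor = create_sql_agent(llm=llm, db=db, verbose=True)
```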
### Idea or request for content:
_No response_ | DOC: [Question]: How to change the text to sql model in create_sql_query_chain , create_sql_agent ? | https://api.github.com/repos/langchain-ai/langchain/issues/19124/comments | 1 | 2024-03-15T09:37:14Z | 2024-07-05T16:06:13Z | https://github.com/langchain-ai/langchain/issues/19124 | 2,188,098,957 | 19,124 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
````python
with get_openai_callback() as cb:
    output_parser = StrOutputParser()
    llm_chain = prompt_main | llm | output_parser
    all_text = str(prompt) + str(topics)
    threshold = llm.get_num_tokens(text=all_text) + tokens
    chatgpt_output = llm_chain.invoke({"prompt": prompt, "topics": topics})
    chatgpt_output = chatgpt_output.replace("```", "").strip()
    # Parse the string as a dictionary
    data_dict = parse_json_output(chatgpt_output)
    # Extract the categories list
    subcat_list = data_dict.get('Sub-Categories', [])
    total_cost = round(cb.total_cost, 5)
    total_tokens = cb.total_tokens

    print(f"Topics Tokens: {llm.get_num_tokens(text=''.join(topics))}")
    print(f"Sub-Categories Tokens: {llm.get_num_tokens(text=''.join(subcat_list))}")
    print(f"Estimated Total Tokens: {threshold}")
    print(f"Total Tokens: {cb.total_tokens}")
    print(f"Prompt Tokens: {cb.prompt_tokens}")
    print(f"Completion Tokens: {cb.completion_tokens}")
    print(f"Total Cost (USD): ${cb.total_cost}")
````
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I've successfully calculated the input tokens as well as the individual tokens for a list called Topics (containing 4,000 topics).
According to the OpenAI tokenizer web tool and my own count using `llm.get_num_tokens`, the tokens for the topics alone should be at least ~8K, and the total tokens (input plus output) should be around 13K, but LangChain reports only 815 tokens.
The cost should also be different, but it shows a total cost of $0.00134 USD. I'm using gpt-3.5-turbo. Please look at the code and the attached image and you'll see something is wrong here.

### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Tue Jan 30 20:59:52 UTC 2024
> Python Version: 3.10.13 | packaged by conda-forge | (main, Oct 26 2023, 18:07:37) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.1.32
> langchain: 0.1.12
> langchain_community: 0.0.28
> langsmith: 0.1.26
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | OpenAI Tokens and Cost are inaccurate | https://api.github.com/repos/langchain-ai/langchain/issues/19120/comments | 1 | 2024-03-15T08:45:37Z | 2024-07-04T16:08:18Z | https://github.com/langchain-ai/langchain/issues/19120 | 2,188,002,825 | 19,120 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Minimum working example:
```python
from langchain_openai import AzureChatOpenAI
import httpx
http_client = httpx.Client()
model = AzureChatOpenAI(
http_client=http_client,
api_key="foo",
api_version="2023-07-01-preview",
azure_endpoint="https://example.com",
)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/home/ec2-user/environment/test/test-openai.py", line 7, in <module>
model = AzureChatOpenAI(http_client=http_client, api_key="foo", api_version="2023-07-01-preview", azure_endpoint="https://example.com")
File "/home/ec2-user/.local/share/virtualenvs/test/lib/python3.10/site-packages/langchain_core/load/serializable.py", line 120, in __init__
super().__init__(**kwargs)
File "/home/ec2-user/.local/share/virtualenvs/test/lib/python3.10/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for AzureChatOpenAI
__root__
Invalid `http_client` argument; Expected an instance of `httpx.AsyncClient` but got <class 'httpx.Client'> (type=type_error)
```
### Description
Due to recent changes in openai-python, the way LangChain passes custom instances of `http_client` no longer works. The example code shows this behavior for `AzureChatOpenAI`, but it holds everywhere OpenAI clients are constructed.
See [here](https://github.com/langchain-ai/langchain/blob/9e569d85a45fd9e89f85f5c93e61940e36176076/libs/partners/openai/langchain_openai/chat_models/azure.py#L190-L191) for one example of how the custom `http_client` gets passed to openai-python. LangChain would need to ensure to properly pass an instance of `httpx.AsyncClient` or `httpx.Client`.
The reason for the exception are some new type checks in openai-python v1.13.4 (see [here](https://github.com/openai/openai-python/compare/v1.13.3...v1.13.4#diff-aca4f4354075e3c75151a8de08daeb25d4db0af2564381c26aba33a49c9dc829R783-R1334)).
PS: As a temporary solution, I pinned openai-python to an older version. It is not pinned in langchain_openai (see [here](https://github.com/langchain-ai/langchain/blob/9e569d85a45fd9e89f85f5c93e61940e36176076/libs/partners/openai/pyproject.toml#L16)).
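For anyone needing the same stopgap, the pin might look like this in a requirements file (choosing 1.13.3 is an assumption based on the linked diff, which shows the strict checks arriving in 1.13.4):
```
# requirements.txt: last openai-python release before the http_client type checks
openai==1.13.3
```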
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Sat Feb 24 09:50:35 UTC 2024
> Python Version: 3.10.12 (main, Nov 10 2023, 12:43:56) [GCC 7.3.1 20180712 (Red Hat 7.3.1-17)]
Package Information
-------------------
> langchain_core: 0.1.31
> langchain: 0.1.11
> langchain_community: 0.0.28
> langsmith: 0.1.25
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | openai-python 1.13.4 with custom http_client breaks OpenAI clients in langchain | https://api.github.com/repos/langchain-ai/langchain/issues/19116/comments | 6 | 2024-03-15T07:59:15Z | 2024-05-22T13:55:39Z | https://github.com/langchain-ai/langchain/issues/19116 | 2,187,931,701 | 19,116 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI
from langchain.output_parsers import OutputFixingParser, XMLOutputParser
llm = ChatOpenAI()
parser = XMLOutputParser()
output_fixing_parser = OutputFixingParser.from_llm(llm=llm, parser=parser)
# OK: nicely parsed
llm_output = "<movies>\n <actor>Tom Hanks</actor></movies>"
parsed_output = output_fixing_parser.parse(llm_output)
# ERROR 1: mismatched brace
llm_output = "<moviesss>\n <actor>Tom Hanks</actor></movies>"
parsed_output = output_fixing_parser.parse(llm_output)
# xml.etree.ElementTree.ParseError: mismatched tag: line 2, column 30
# ERROR 2: unexpected string
llm_output = "movie actor: Tom Hanks"
parsed_output = output_fixing_parser.parse(llm_output)
# ValueError: Could not parse output: movie actor: Tom Hanks
```
### Error Message and Stack Trace (if applicable)
```python
# ERROR 1: mismatched tag
llm_output = "<moviesss>\n <actor>Tom Hanks</actor></movies>"
parsed_output = output_fixing_parser.parse(llm_output)
# xml.etree.ElementTree.ParseError: mismatched tag: line 2, column 30
```
```python
# ERROR 2: unexpected string
llm_output = "movie actor: Tom Hanks"
parsed_output = output_fixing_parser.parse(llm_output)
# ValueError: Could not parse output: movie actor: Tom Hanks
```
### Description
I'm trying to use `OutputFixingParser` to wrap `XMLOutputParser`, but `OutputFixingParser` cannot handle some kinds of LLM-generated output.
Instead of repairing the output, it raises `xml.etree.ElementTree.ParseError` or `ValueError`, as shown above.
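Until the parser handles these cases, a defensive wrapper can keep a pipeline alive (my workaround sketch, not a fix; it simply catches the two exceptions named above):
```python
import xml.etree.ElementTree as ET

def safe_parse(text: str):
    try:
        return output_fixing_parser.parse(text)
    except (ValueError, ET.ParseError):
        return None  # caller decides how to handle unfixable output
```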
### System Info
```
langchain==0.1.12
langchain-community==0.0.28
langchain-core==0.1.32
``` | `OutputFixingParser` raises unhandled exception while wrapping `XMLOutputParser` | https://api.github.com/repos/langchain-ai/langchain/issues/19107/comments | 0 | 2024-03-15T05:46:44Z | 2024-03-19T04:49:05Z | https://github.com/langchain-ai/langchain/issues/19107 | 2,187,771,203 | 19,107 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
user_input = "你好"
embedding_model_name = fr"/usr/local/models/BAAI/bge-m3"
hf = HuggingFaceBgeEmbeddings(
model_name=embedding_model_name,
model_kwargs={"device": "cpu"},
encode_kwargs={
"normalize_embeddings": True,
"batch_size": 32
}
)
milvus_store = Milvus(
embedding_function=hf,
collection_name="qa",
drop_old=False,
text_field="content",
primary_field= "id",
vector_field = "vector",
# search_params={
# "metric_type": "IP",
# "index_type": "IVF_FLAT",
# "params": {"nprobe": 10, "nlist": 128}
# },
connection_args={
"host": "10.3.1.187",
"port": "19530",
"user": "",
"password": "",
"db_name": "qa"
},
)
retriever=milvus_store.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={"score_threshold": 0.8}
)
docs= retriever.get_relevant_documents(query=user_input)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/usr/local/churchill/lib/python3.10/site-packages/langchain_core/retrievers.py", line 244, in get_relevant_documents
raise e
File "/usr/local/churchill/lib/python3.10/site-packages/langchain_core/retrievers.py", line 237, in get_relevant_documents
result = self._get_relevant_documents(
File "/usr/local/churchill/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 677, in _get_relevant_documents
self.vectorstore.similarity_search_with_relevance_scores(
File "/usr/local/churchill/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 324, in similarity_search_with_relevance_scores
docs_and_similarities = self._similarity_search_with_relevance_scores(
File "/usr/local/churchill/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 271, in _similarity_search_with_relevance_scores
relevance_score_fn = self._select_relevance_score_fn()
File "/usr/local/churchill/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 228, in _select_relevance_score_fn
raise NotImplementedError
NotImplementedError
```
### Description
I want to call `get_relevant_documents` on a retriever backed by Milvus with `search_type="similarity_score_threshold"`, but it raises the `NotImplementedError` shown above.
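A possible workaround sketch (an assumption: the failure is specific to the relevance-score path, since `Milvus` does not implement `_select_relevance_score_fn`): use plain similarity search until that function is supported.
```python
retriever = milvus_store.as_retriever(
    search_type="similarity",  # avoids the unimplemented relevance-score function
    search_kwargs={"k": 3},
)
docs = retriever.get_relevant_documents(query=user_input)
```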
### System Info
langchain 0.1.9
python version 3.10
centos 8 | milvus retriever.get_relevant_documents has bug? | https://api.github.com/repos/langchain-ai/langchain/issues/19106/comments | 5 | 2024-03-15T05:38:46Z | 2024-07-11T06:57:54Z | https://github.com/langchain-ai/langchain/issues/19106 | 2,187,763,655 | 19,106 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
model_name = "sentence-transformers/all-mpnet-base-v2"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': False}
embeddings = HuggingFaceEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
```
### Error Message and Stack Trace (if applicable)
File "/app/llm/lfqa_langchain.py", line 29, in <module>
2024-03-14 18:45:10 embeddings = HuggingFaceEmbeddings(
2024-03-14 18:45:10 File "/.venv/lib/python3.9/site-packages/langchain_community/embeddings/huggingface.py", line 59, in __init__
2024-03-14 18:45:10 import sentence_transformers
2024-03-14 18:45:10 File "/.venv/lib/python3.9/site-packages/sentence_transformers/__init__.py", line 3, in <module>
2024-03-14 18:45:10 from .datasets import SentencesDataset, ParallelSentencesDataset
2024-03-14 18:45:10 File "/.venv/lib/python3.9/site-packages/sentence_transformers/datasets/__init__.py", line 1, in <module>
2024-03-14 18:45:10 from .DenoisingAutoEncoderDataset import DenoisingAutoEncoderDataset
2024-03-14 18:45:10 File "/.venv/lib/python3.9/site-packages/sentence_transformers/datasets/DenoisingAutoEncoderDataset.py", line 1, in <module>
2024-03-14 18:45:10 from torch.utils.data import Dataset
2024-03-14 18:45:10 File "/.venv/lib/python3.9/site-packages/torch/__init__.py", line 236, in <module>
2024-03-14 18:45:10 _load_global_deps()
2024-03-14 18:45:10 File "/.venv/lib/python3.9/site-packages/torch/__init__.py", line 197, in _load_global_deps
2024-03-14 18:45:10 _preload_cuda_deps(lib_folder, lib_name)
2024-03-14 18:45:10 File "/.venv/lib/python3.9/site-packages/torch/__init__.py", line 162, in _preload_cuda_deps
2024-03-14 18:45:10 raise ValueError(f"{lib_name} not found in the system path {sys.path}")
2024-03-14 18:45:10 ValueError: libcublas.so.*[0-9] not found in the system path ['', '/.venv/bin', '/usr/local/lib/python39.zip', '/usr/local/lib/python3.9', '/usr/local/lib/python3.9/lib-dynload', '/.venv/lib/python3.9/site-packages']
### Description
After generating the Pipfile.lock and building the Docker image, the container shuts down after 2-3 seconds with the error above. I am using langchain = "==0.0.351".
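One thing worth trying (an assumption: the image contains no NVIDIA libraries, so the CPU-only torch wheel would skip the `libcublas` preload entirely):
```
pip install torch --index-url https://download.pytorch.org/whl/cpu
```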
### System Info
Platform linux (Docker)
Pip version: 23.0.1 | Docker ValueError: libcublas.so.*[0-9] not found in the system path | https://api.github.com/repos/langchain-ai/langchain/issues/19078/comments | 1 | 2024-03-14T15:50:12Z | 2024-03-14T18:30:48Z | https://github.com/langchain-ai/langchain/issues/19078 | 2,186,711,784 | 19,078 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
AI_URL = env["AI_URL"] or "http://localhost:11434"
OPENSEARCH_URL = env["OPENSEARCH_URL"] or "https://0.0.0.0:9200"
OPENSEARCH_USERNAME = env["OPENSEARCH_USERNAME"] or "admin"
OPENSEARCH_PASSWORD = env["OPENSEARCH_PASSWORD"] or "Admin123_"
OPENSEARCH_INDEX_NAME = env["OPENSEARCH_INDEX_NAME"] or "index-name"
embedding_function = OllamaEmbeddings(model="mistral", base_url=AI_URL)
os_client = OpenSearch(
hosts=[OPENSEARCH_URL],
http_auth=(OPENSEARCH_USERNAME, OPENSEARCH_PASSWORD),
use_ssl=False,
verify_certs=False,
)
try:
res = embedding_function.embed_query(thematic)
text = {
"thematic": thematic,
"extensions": thematics[thematic],
}
document = {
"vector_field": res,
"text": str(text),
"thematic": thematic,
"extensions": thematics[thematic],
}
os_client.index(index=OPENSEARCH_INDEX_NAME, body=document, refresh=True)
except Exception as e:
logger.error("Error pushing data", exc_info=e)
return JSONResponse({"message": "Error pushing data"}, status_code=500)
```
Here is the docker compose that I use:
```yml
version: '3'
services:
server:
container_name: server
build:
context: .
dockerfile: Dockerfile
ports:
- 8080:8080
env_file:
- path: .env
required: true
server-opensearch:
container_name: server-opensearch
image: opensearchproject/opensearch:2.12.0
env_file:
- path: .env
required: true
environment:
- discovery.type=single-node
- plugins.security.disabled=true
ulimits:
memlock:
soft: -1
hard: -1
ports:
- 9200:9200
- 9600:9600
```
The Dockerfile just package my app into a tar file and launch it with uvicorn like so:
`CMD ["uvicorn", "app.__init__:app", "--host", "0.0.0.0", "--port", "8080"]`
### Error Message and Stack Trace (if applicable)
(I'm sorry for the formatting of the stacktrace, that's the fault of the logger lib...)
```
Traceback (most recent call last):
  File "/usr/lib/python3.11/site-packages/urllib3/connection.py", line 198, in _new_conn
    sock = connection.create_connection(
  File "/usr/lib/python3.11/site-packages/urllib3/util/connection.py", line 85, in create_connection
    raise err
  File "/usr/lib/python3.11/site-packages/urllib3/util/connection.py", line 73, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.11/site-packages/urllib3/connectionpool.py", line 793, in urlopen
    response = self._make_request(
  File "/usr/lib/python3.11/site-packages/urllib3/connectionpool.py", line 496, in _make_request
    conn.request(
  File "/usr/lib/python3.11/site-packages/urllib3/connection.py", line 400, in request
    self.endheaders()
  File "/usr/lib/python3.11/http/client.py", line 1293, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.11/http/client.py", line 1052, in _send_output
    self.send(msg)
  File "/usr/lib/python3.11/http/client.py", line 990, in send
    self.connect()
  File "/usr/lib/python3.11/site-packages/urllib3/connection.py", line 238, in connect
    self.sock = self._new_conn()
  File "/usr/lib/python3.11/site-packages/urllib3/connection.py", line 213, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0xffff80db6590>: Failed to establish a new connection: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.11/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/usr/lib/python3.11/site-packages/urllib3/connectionpool.py", line 847, in urlopen
    retries = retries.increment(
  File "/usr/lib/python3.11/site-packages/urllib3/util/retry.py", line 515, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='0.0.0.0', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xffff80db6590>: Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 157, in _process_emb_response
    res = requests.post(
  File "/usr/lib/python3.11/site-packages/requests/api.py", line 115, in post
    return request("post", url, data=data, json=json, **kwargs)
  File "/usr/lib/python3.11/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3.11/site-packages/requests/adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xffff80db6590>: Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/app/main/rag.py", line 64, in recreate
    res = embedding_function.embed_query(thematic)
  File "/usr/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 217, in embed_query
    embedding = self._embed([instruction_pair])[0]
  File "/usr/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 192, in _embed
    return [self._process_emb_response(prompt) for prompt in iter_]
  File "/usr/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 192, in <listcomp>
    return [self._process_emb_response(prompt) for prompt in iter_]
  File "/usr/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 163, in _process_emb_response
    raise ValueError(f"Error raised by inference endpoint: {e}")
ValueError: Error raised by inference endpoint: HTTPConnectionPool(host='0.0.0.0', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xffff80db6590>: Failed to establish a new connection: [Errno 111] Connection refused'))
```
### Description
I'm trying to use the OllamaEmbeddings class to generate embeddings when my server is hit on a certain endpoint. When running locally, there are no problems, but if I'm running the server inside a docker container, I get this error. The ollama server is running on my machine locally, not inside a container.
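A likely explanation, stated as an assumption: inside the container, `localhost`/`0.0.0.0` refers to the container itself, not the Mac running Ollama. On Docker Desktop the host is reachable through the special DNS name `host.docker.internal`, e.g.:
```python
AI_URL = "http://host.docker.internal:11434"  # value for the existing env var, illustrative
embedding_function = OllamaEmbeddings(model="mistral", base_url=AI_URL)
```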
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:27 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T8103
> Python Version: 3.12.2 (main, Feb 6 2024, 20:19:44) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.31
> langchain: 0.1.11
> langchain_community: 0.0.27
> langsmith: 0.1.24
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | "Max retries exceeded with url: /api/embeddings" when using OllamaEmbeddings inside a container | https://api.github.com/repos/langchain-ai/langchain/issues/19074/comments | 1 | 2024-03-14T13:31:19Z | 2024-03-14T18:34:04Z | https://github.com/langchain-ai/langchain/issues/19074 | 2,186,395,518 | 19,074 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
def generate_prompt(system, human_question):
return ChatPromptTemplate.from_messages(
[
("system", system),
MessagesPlaceholder(variable_name="chat_history"),
("human", human_question),
]
)
if __name__ == '__main__':
system="A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions."
human_question="""
{'EARLY_PHASE1': [],
'PHASE1': [{'nctId': 'NCT06277609', 'briefTitle': 'A Trial Investiga', 'officialTitle': 'Interventional,', 'url': '', 'phases': 'PHASE1', 'studyStatus': 'RECRUITING', 'conditions': ['Healthy Participants'],'NA': []}]
}
"""
print(generate_prompt(system, human_question))
```
### Error Message and Stack Trace (if applicable)
The `input_variables` attribute should be `['chat_history']` only; instead it also contains names parsed from the literal braces, such as `'EARLY_PHASE1'`:

### Description
no
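For what it's worth, a common way to keep literal braces out of template parsing is to escape them (a sketch; it assumes the default f-string template format, where `{{` and `}}` render as literal braces):
```python
escaped_question = human_question.replace("{", "{{").replace("}", "}}")
prompt = generate_prompt(system, escaped_question)  # input_variables is now just ['chat_history']
```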
### System Info
langchain 0.1.11
langchain-community 0.0.27
langchain-core 0.1.30
langchain-openai 0.0.8
langchain-text-splitters 0.0.1
langchainhub 0.1.15
langsmith 0.1.23
| Text with {will be parsed as input_variables, causing an error | https://api.github.com/repos/langchain-ai/langchain/issues/19067/comments | 1 | 2024-03-14T08:14:39Z | 2024-03-14T18:14:28Z | https://github.com/langchain-ai/langchain/issues/19067 | 2,185,712,911 | 19,067 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The patches are listed below:
```
diff -x *.pyc -r ~/venv/lib/python3.9/site-packages/langchain_openai/llms/base.py langchain_openai/llms/base.py
207c207,210
< values["client"] = openai.OpenAI(**client_params).completions
---
> client = openai.OpenAI(**client_params)
> completion = client.chat.completions
> values["client"] = completion
> #values["client"] = openai.OpenAI(**client_params).completions
340c343,352
< response = self.client.create(prompt=_prompts, **params)
---
> from openai.types.chat.chat_completion_user_message_param import ChatCompletionUserMessageParam
>
> umessage = ChatCompletionUserMessageParam(content=_prompts[0], role = "user")
> messages=[umessage,]
> response = self.client.create(messages=messages, **params)
> #response = self.client.create(prompt=_prompts, **params)
454c466,467
< text=choice["text"],
---
> #text=choice["text"],
> text=choice["message"]["content"],
```
### Error Message and Stack Trace (if applicable)
```Entering new SQLDatabaseChain chain...
Given an input question, first create a syntactically correct postgresql query to run, then look at the results of the query and return the answer.
Use the following format:
Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here
how many tasks?
SQLQuery:SELECT COUNT(*) FROM tasks
SQLResult:
count
-----
3
Answer: There are 3 tasks in the tasks table.(psycopg2.errors.SyntaxError) syntax error at or near ":"
LINE 2: SQLResult:
^
[SQL: SELECT COUNT(*) FROM tasks
SQLResult:
count
-----
3
Answer: There are 3 tasks in the tasks table.]
(Background on this error at: https://sqlalche.me/e/20/f405)
Traceback (most recent call last):
File "/Users/marvel/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "/Users/marvel/venv/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.SyntaxError: syntax error at or near ":"
LINE 2: SQLResult:
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/marvel/Nutstore Files/programs/ai/xiaoyi-robot/codes/postgres_agent.py", line 61, in get_prompt
print(db_chain.run(question))
File "/Users/marvel/venv/lib/python3.9/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/Users/marvel/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 538, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/Users/marvel/venv/lib/python3.9/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/Users/marvel/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 363, in __call__
return self.invoke(
File "/Users/marvel/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 162, in invoke
raise e
File "/Users/marvel/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/marvel/venv/lib/python3.9/site-packages/langchain_experimental/sql/base.py", line 201, in _call
raise exc
File "/Users/marvel/venv/lib/python3.9/site-packages/langchain_experimental/sql/base.py", line 146, in _call
result = self.database.run(sql_cmd)
File "/Users/marvel/venv/lib/python3.9/site-packages/langchain_community/utilities/sql_database.py", line 436, in run
result = self._execute(command, fetch)
File "/Users/marvel/venv/lib/python3.9/site-packages/langchain_community/utilities/sql_database.py", line 413, in _execute
cursor = connection.execute(text(command))
File "/Users/marvel/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1416, in execute
return meth(
File "/Users/marvel/venv/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 517, in _execute_on_connection
return connection._execute_clauseelement(
File "/Users/marvel/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1639, in _execute_clauseelement
ret = self._execute_context(
File "/Users/marvel/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1848, in _execute_context
return self._exec_single_context(
File "/Users/marvel/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1988, in _exec_single_context
self._handle_dbapi_exception(
File "/Users/marvel/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2344, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/Users/marvel/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "/Users/marvel/venv/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.SyntaxError) syntax error at or near ":"
LINE 2: SQLResult:
^
[SQL: SELECT COUNT(*) FROM tasks
SQLResult:
count
-----
3
Answer: There are 3 tasks in the tasks table.]
```
<img width="1563" alt="error" src="https://github.com/langchain-ai/langchain/assets/905594/3c9ff8a4-d1b0-482f-ac46-70bd15a240b4">
### Description
I followed the article (https://coinsbench.com/chat-with-your-databases-using-langchain-bb7d31ed2e76) to make LangChain interact with a Postgres DB, but I have recently been getting error responses from OpenAI; the error messages are shown below.
<img width="1488" alt="chat-comp" src="https://github.com/langchain-ai/langchain/assets/905594/7616ecf9-1ea0-4726-910e-8ace52825fae">
<img width="1559" alt="comp" src="https://github.com/langchain-ai/langchain/assets/905594/ab5386d8-8c5c-431a-abde-11427630f63a">
It seems that OpenAI has deprecated the legacy Completions API, and stop words in the request no longer take effect. OpenAI's guidance (https://openai.com/blog/gpt-4-api-general-availability) is:
> Starting today, all paying API customers have access to GPT-4. In March, we [introduced the ChatGPT API](https://openai.com/blog/introducing-chatgpt-and-whisper-apis), and earlier this month we [released our first updates](https://openai.com/blog/function-calling-and-other-api-updates) to the chat-based models. We envision a future where chat-based models can support any use case. Today we’re announcing a deprecation plan for older models of the Completions API, and recommend that users adopt the Chat Completions API.
So I did some research on the current code and decided to modify langchain_openai/llms/base.py; the modifications are listed above. Finally, it works:
<img width="849" alt="finally" src="https://github.com/langchain-ai/langchain/assets/905594/bd0e33be-6921-4ac0-b264-322f928ed3c7">
### System Info
% pip list|grep langchain
langchain 0.1.5
langchain-community 0.0.17
langchain-core 0.1.18
langchain-experimental 0.0.50
langchain-openai 0.0.5
% pip list|grep openai
langchain-openai 0.0.5
openai 1.11.1 | OpenAI's legacy completion API causes that stop words do not work, patches are attached. | https://api.github.com/repos/langchain-ai/langchain/issues/19062/comments | 1 | 2024-03-14T06:45:32Z | 2024-03-14T18:37:47Z | https://github.com/langchain-ai/langchain/issues/19062 | 2,185,552,198 | 19,062 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain.vectorstores.qdrant import Qdrant
from langchain.document_loaders.pdf import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.huggingface import HuggingFaceBgeEmbeddings
def load_pdf(
file: str,
collection_name: str,
chunk_size: int = 512,
chunk_overlap: int = 32,
):
loader = PyPDFLoader(file)
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
texts = text_splitter.split_documents(documents)
embeddings = HuggingFaceBgeEmbeddings(
model_name=os.environ.get("MODEL_NAME", "microsoft/phi-2"),
model_kwargs=dict(device="cpu"),
encode_kwargs=dict(normalize_embeddings=False),
)
url = os.environ.get("VDB_URL", "http://localhost:6333")
qdrant = Qdrant.from_documents(
texts,
embeddings,
url=url,
collection_name=collection_name,
prefer_grpc=False,
)
return qdrant
load_pdf("/Users/alvynabranches/Downloads/bh1.pdf", "buget2425")
```
### Error Message and Stack Trace (if applicable)
```bash
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
File "/Users/alvynabranches/rag/rag/ingest.py", line 39, in <module>
load_pdf("/Users/alvynabranches/Downloads/bh1.pdf", "buget2425")
File "/Users/alvynabranches/rag/rag/ingest.py", line 29, in load_pdf
qdrant = Qdrant.from_documents(
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.12/site-packages/langchain_core/vectorstores.py", line 528, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.12/site-packages/langchain_community/vectorstores/qdrant.py", line 1334, in from_texts
qdrant = cls.construct_instance(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.12/site-packages/langchain_community/vectorstores/qdrant.py", line 1591, in construct_instance
partial_embeddings = embedding.embed_documents(texts[:1])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.12/site-packages/langchain_community/embeddings/huggingface.py", line 257, in embed_documents
embeddings = self.client.encode(texts, **self.encode_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.12/site-packages/sentence_transformers/SentenceTransformer.py", line 345, in encode
features = self.tokenize(sentences_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.12/site-packages/sentence_transformers/SentenceTransformer.py", line 553, in tokenize
return self._first_module().tokenize(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.12/site-packages/sentence_transformers/models/Transformer.py", line 146, in tokenize
self.tokenizer(
File "/opt/homebrew/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 2829, in __call__
encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 2915, in _call_one
return self.batch_encode_plus(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 3097, in batch_encode_plus
padding_strategy, truncation_strategy, max_length, kwargs = self._get_padding_truncation_strategies(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 2734, in _get_padding_truncation_strategies
raise ValueError(
ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.
```
### Description
I am using the langchain library to store a PDF in Qdrant, with `microsoft/phi-2` as the embedding model. I tried changing the encode kwargs, but the error persists.
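A sketch of the usual way around this (an assumption: `microsoft/phi-2` is a causal LM whose tokenizer defines no pad token, so batched encoding fails; a dedicated sentence-embedding checkpoint avoids that):
```python
embeddings = HuggingFaceBgeEmbeddings(
    model_name="BAAI/bge-small-en-v1.5",  # illustrative embedding model with a pad token
    model_kwargs=dict(device="cpu"),
    encode_kwargs=dict(normalize_embeddings=False),
)
```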
### System Info
```bash
pip3 freeze | grep langchain
```
```
langchain==0.1.12
langchain-community==0.0.28
langchain-core==0.1.31
langchain-text-splitters==0.0.1
```
#### Platform
Apple M3 Max
macOS Sonoma
Version 14.1
```bash
python3 -V
```
```
Python 3.12.2
``` | Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` | https://api.github.com/repos/langchain-ai/langchain/issues/19061/comments | 0 | 2024-03-14T06:39:58Z | 2024-06-20T16:08:46Z | https://github.com/langchain-ai/langchain/issues/19061 | 2,185,545,359 | 19,061 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_experimental.openai_assistant import OpenAIAssistantRunnable
interpreter_assistant = OpenAIAssistantRunnable.create_assistant(
name="langchain assistant",
instructions="You are a personal math tutor. Write and run code to answer math questions.",
tools=[{"type": "code_interpreter"}],
model="gpt-4-1106-preview"
)
output = interpreter_assistant.invoke({"content": "What's 10 - 4 raised to the 2.7"})
```
### Error Message and Stack Trace (if applicable)
File "xxx/t.py", line 43, in <module>
interpreter_assistant = OpenAIAssistantRunnable.create_assistant(
File "xxx/site-packages/langchain/agents/openai_assistant/base.py", line 213, in create_assistant
tools=[convert_to_openai_tool(tool) for tool in tools], # type: ignore
File "xxx/langchain/agents/openai_assistant/base.py", line 213, in <listcomp>
tools=[convert_to_openai_tool(tool) for tool in tools], # type: ignore
File "xxx/langchain_core/utils/function_calling.py", line 329, in convert_to_openai_tool
function = convert_to_openai_function(tool)
File "xxx/langchain_core/utils/function_calling.py", line 304, in convert_to_openai_function
raise ValueError(
ValueError: Unsupported function
{'type': 'code_interpreter'}
Functions must be passed in as Dict, pydantic.BaseModel, or Callable. If they're a dict they must either be in OpenAI function format or valid JSON schema with top-level 'title' and 'description' keys.
### Description
I used the example code from https://python.langchain.com/docs/modules/agents/agent_types/openai_assistants#using-only-openai-tools, but it threw the error above when executed.
### System Info
$ pip list | grep langchain
langchain 0.1.12
langchain-community 0.0.28
langchain-core 0.1.31
langchain-openai 0.0.8
langchain-text-splitters 0.0.1 | The validation of tools within OpenAIAssistantRunnable.create_assistant does not account for `{"type": "code_interpreter"}`. | https://api.github.com/repos/langchain-ai/langchain/issues/19057/comments | 2 | 2024-03-14T04:01:10Z | 2024-06-28T16:07:18Z | https://github.com/langchain-ai/langchain/issues/19057 | 2,185,358,514 | 19,057 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
pass
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When `astream_log` in the `Runnable` class calls `_astream_log_implementation`, it does not pass the `kwargs` argument along. As a result, even if I want to customize an API handler and implement additional features that take extra arguments, it is not possible. By contrast, the `astream_events` method does pass `kwargs` through.
### System Info
pass | Runnable astream_log does not pass kwargs to _astream_log_implementation | https://api.github.com/repos/langchain-ai/langchain/issues/19054/comments | 0 | 2024-03-14T03:03:13Z | 2024-03-14T19:53:48Z | https://github.com/langchain-ai/langchain/issues/19054 | 2,185,283,399 | 19,054 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
conda install langchain=0.1.12 zeep=4.2.1
```
### Error Message and Stack Trace (if applicable)
```
Could not solve for environment specs
The following packages are incompatible
├─ langchain 0.1.12** is installable and it requires
│ ├─ langchain-text-splitters >=0.0.1,<0.1 , which requires
│ │ └─ lxml >=5.1.0,<6.0.0 , which can be installed;
│ └─ lxml >=4.9.2,<5.0.0 , which can be installed;
└─ zeep 4.2.1** is not installable because it requires
└─ lxml >=4.6.0 , which conflicts with any installable versions previously reported.
```
### Description
I'm trying to install latest langchain along with some other packages depending on lxml. It appears that `langchain` requires `lxml >=4.9.2,<5.0.0` but `langchain-text-splitters` requires `lxml >=5.1.0,<6.0.0` which are incompatible. Since the former depends on the latter, it makes dependency resolution fail.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:27:34) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.31
> langchain: 0.1.12
> langchain_community: 0.0.28
> langsmith: 0.1.24
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Inconsistent lxml dependency between `langchain` and `langchain-text-splitters` | https://api.github.com/repos/langchain-ai/langchain/issues/19040/comments | 5 | 2024-03-13T18:23:55Z | 2024-04-07T19:23:10Z | https://github.com/langchain-ai/langchain/issues/19040 | 2,184,636,140 | 19,040 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain import hub
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-functions-agent")
# Choose the LLM that will drive the agent
# Only certain models support this
model = ChatOpenAI(model="gpt-4-0125-preview", temperature=0)
# Construct the OpenAI Tools agent
agent = create_openai_tools_agent(model, tools, prompt)
# Create an agent executor by passing in the agent and tools
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
### Error Message and Stack Trace (if applicable)
AttributeError: 'Client' object has no attribute 'pull_repo'
### Description
I am unable to pull a resource from LangChain Hub; I get the error above whenever I try. I uninstalled LangChain and reinstalled it, but the error persists.
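One thing to rule out (an assumption: `hub.pull` delegates to the `langchainhub` client, and older client releases lack the `pull_repo` method):
```
pip install -U langchainhub
```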
### System Info
"langchain==0.1.12" | Unable to import resource from LangChain hub | https://api.github.com/repos/langchain-ai/langchain/issues/19038/comments | 1 | 2024-03-13T17:25:15Z | 2024-03-13T17:33:05Z | https://github.com/langchain-ai/langchain/issues/19038 | 2,184,536,534 | 19,038 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
``` python
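# The triple-quoted template below is a Vietnamese system prompt; roughly: "You are an
# enthusiastic, honest Vietnamese assistant. Answer helpfully, safely, and concisely."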
template = """<s>[INST] <<SYS>>
- Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực.
- Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn.
- Câu trả lời của bạn không nên chứa bất kỳ nội dung gây hại, phân biệt chủng tộc, phân biệt giới tính, độc hại, nguy hiểm hoặc bất hợp pháp nào.
- Hãy đảm bảo rằng các câu trả lời của bạn không có thiên kiến xã hội và mang tính tích cực.
- Nếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác.
- Nếu bạn không biết câu trả lời cho một câu hỏi, hãy trẳ lời là bạn không biết và vui lòng không chia sẻ thông tin sai lệch.
- Hãy trả lời một cách ngắn gọn, súc tích và chỉ trả lời chi tiết nếu được yêu cầu.
<</SYS>>
{question} [/INST]"""
@tool
def time(text: str) -> str:
"""Returns todays date, use this for any \
questions related to knowing todays date. \
The input should always be an empty string, \
and this function will always return todays \
date - any date mathmatics should occur \
outside this function."""
return str(date.today())
prompt = PromptTemplate(template=template, input_variables=["question"])
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = ChatOllama(model="vistral-7b-q8", temperature=0.0, callback_manager=callback_manager)
llm_chain = LLMChain(prompt=prompt, llm=llm, output_parser=StrOutputParser())
tools = load_tools(["ddg-search", "wikipedia"], llm=llm_chain)
agent= initialize_agent(
tools + [time],
llm_chain,
agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
handle_parsing_errors=True,
verbose = True)
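# "Hôm nay ngày mấy?" translates to "What's today's date?"; it should route to the time tool.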
agent("Hôm nay ngày mấy?")
```
### Error Message and Stack Trace (if applicable)
```
File ~/miniconda3/envs/stt/lib/python3.9/site-packages/langchain_core/_api/deprecation.py:145, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    143 warned = True
    144 emit_warning()
--> 145 return wrapped(*args, **kwargs)

File ~/miniconda3/envs/stt/lib/python3.9/site-packages/langchain/chains/base.py:378, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    346 """Execute the chain.
    347
    348 Args:
    (...)
    369     `Chain.output_keys`.
    370 """
    371 config = {
    372     "callbacks": callbacks,
    373     "tags": tags,
    374     "metadata": metadata,
    375     "run_name": run_name,
    376 }
--> 378 return self.invoke(
    379     inputs,
...
    343 object_setattr(__pydantic_self__, '__dict__', values)

ValidationError: 1 validation error for Generation
text
  str type expected (type=type_error.str)
```
### Description
I ran into this problem while using LangChain with an Ollama-served LLM (vistral-7b-q8); the agent run fails with the validation error above.
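A possible culprit, stated as an assumption: `load_tools` and `initialize_agent` expect a language model, while the code above hands them an `LLMChain`, whose dict output would then fail `Generation`'s `text: str` validation. A sketch of the change:
```python
tools = load_tools(["ddg-search", "wikipedia"], llm=llm)

agent = initialize_agent(
    tools + [time],
    llm,  # pass the ChatOllama model itself, not llm_chain
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True,
)
```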
### System Info
System Information
------------------
> OS: Linux
> OS Version: #50-Ubuntu SMP PREEMPT_DYNAMIC Mon Jul 10 18:24:29 UTC 2023
> Python Version: 3.9.18 (main, Sep 11 2023, 13:41:44)
[GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.1.31
> langchain: 0.1.12
> langchain_community: 0.0.28
> langsmith: 0.1.23
> langchain_experimental: 0.0.54
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Langchain ValidationError: str type expected | https://api.github.com/repos/langchain-ai/langchain/issues/19037/comments | 1 | 2024-03-13T16:37:20Z | 2024-06-20T16:08:47Z | https://github.com/langchain-ai/langchain/issues/19037 | 2,184,445,658 | 19,037 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
In [the discussion of partial prompts](https://python.langchain.com/docs/modules/model_io/prompts/) the examples are all given in terms of `format`, and there is no discussion of partial prompts in the context of pipelines.
### Idea or request for content:
I can easily imagine a pipeline in which a prompt is first partialed, then filled, and then passed to an LLM. But it's not at all clear how to do that, and this page only discusses `format`, not `invoke`.
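Something like the following sketch is what I have in mind (names are illustrative; it assumes `.partial()` composes with LCEL the same way a regular prompt does):
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

prompt = PromptTemplate.from_template("Tell me a {adjective} joke about {topic}.")
partial_prompt = prompt.partial(adjective="short")  # fill one variable up front

chain = partial_prompt | ChatOpenAI() | StrOutputParser()
print(chain.invoke({"topic": "databases"}))  # only the remaining variable is supplied
```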
It would be very helpful to add an example (or, if an example exists elsewhere, a link) that shows how to put partial filling of a prompt into a pipeline. If this is not possible, then it would save programmers lots of time if the documentation would say so. | DOC: RFE - Extend the discussion of Partial Prompts with Pipeline example | https://api.github.com/repos/langchain-ai/langchain/issues/18981/comments | 3 | 2024-03-13T15:05:55Z | 2024-06-21T16:37:26Z | https://github.com/langchain-ai/langchain/issues/18981 | 2,184,245,090 | 18,981
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I was reading the docs on [Prompts](https://python.langchain.com/docs/modules/model_io/prompts/) and stumbled across the section [Example Selector Types](https://python.langchain.com/docs/modules/model_io/prompts/example_selector_types/) without really understanding what it was about or how it connected to the other topics.
Only after moving on to the next section, [Example selectors](https://python.langchain.com/docs/modules/model_io/prompts/example_selectors), did the concepts align and make sense. I suspect I'm not the only one who gets lost reading the documentation sequentially this way.
### Idea or request for content:
My suggestion is to simply move section [Example selectors](https://python.langchain.com/docs/modules/model_io/prompts/example_selectors) before [Example Selector Types](https://python.langchain.com/docs/modules/model_io/prompts/example_selector_types/), so the concepts are presented in a sequential way. | DOC: Swap "example selector types" and "example selectors" in the docs to make reading smoother | https://api.github.com/repos/langchain-ai/langchain/issues/19031/comments | 0 | 2024-03-13T13:58:10Z | 2024-06-19T16:08:33Z | https://github.com/langchain-ai/langchain/issues/19031 | 2,184,088,940 | 19,031 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.document_loaders import UnstructuredFileLoader
import json
import os

msFiles = ["BV24-006-0_BV_Laufbahnmodell.docx"]

for filename in msFiles:
    loader = UnstructuredFileLoader(filename, mode='elements')  # elements | single
    docs = loader.load()  # Document

    # Convert each document to a dictionary
    data = [doc.dict() for doc in docs]
    print(f"document will be serialized with {len(data)} elements!")

    with open(f"{filename}.txt", 'w', encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The page number metadata in the returned list of documents is wrong:
View of DOC File - page 55 of 70

View of parsed Document: page 16

### System Info
windows
python 11 | UnstructuredFileLoader loads wrong "page number" metadata of word documents | https://api.github.com/repos/langchain-ai/langchain/issues/19029/comments | 3 | 2024-03-13T10:37:07Z | 2024-06-23T16:09:14Z | https://github.com/langchain-ai/langchain/issues/19029 | 2,183,659,724 | 19,029 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Create the semantic settings with the configuration
semantic_search = None
if semantic_configurations is None and semantic_configuration_name is not None:
    semantic_configuration = SemanticConfiguration(
        name=semantic_configuration_name,
        prioritized_fields=SemanticPrioritizedFields(
            content_fields=[SemanticField(field_name=FIELDS_CONTENT)],
        ),
    )
    semantic_search = SemanticSearch(configurations=[semantic_configuration])
```
### Error Message and Stack Trace (if applicable)
no error message
### Description
I am trying to create a `semantic_search` object with a custom `semantic_configuration`, as shown in https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/basic-vector-workflow/azure-search-vector-python-sample.ipynb:
```python
semantic_config = SemanticConfiguration(
    name="my-semantic-config",
    prioritized_fields=SemanticPrioritizedFields(
        title_field=SemanticField(field_name="title"),
        keywords_fields=[SemanticField(field_name="category")],
        content_fields=[SemanticField(field_name="content")]
    )
)
```
However, with the LangChain code this is not possible: the line `semantic_search = None` prevents supplying a `semantic_search` object from outside. Although I can pass in `semantic_configurations`, I cannot pass in a `semantic_search` object.
### System Info
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.27
langchain-openai==0.0.7
langchainhub==0.1.14
windows 11
Python 3.11.8
conda environment | Bug Report: Custom Semantic Search Functionality Issue in Azure Search | https://api.github.com/repos/langchain-ai/langchain/issues/18998/comments | 1 | 2024-03-13T00:05:52Z | 2024-06-19T16:08:23Z | https://github.com/langchain-ai/langchain/issues/18998 | 2,182,874,556 | 18,998 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Code isn't causing the problem here; it's the conflicting dependencies.
### Error Message and Stack Trace (if applicable)
ERROR: Cannot install -r requirements.txt (line 6) and langchain because these package versions have conflicting dependencies.
The conflict is caused by:
langchain 0.1.6 depends on langsmith<0.1 and >=0.0.83
langchain-community 0.0.28 depends on langsmith<0.2.0 and >=0.1.0
langchain 0.1.6 depends on langsmith<0.1 and >=0.0.83
langchain-community 0.0.27 depends on langsmith<0.2.0 and >=0.1.0
langchain 0.1.6 depends on langsmith<0.1 and >=0.0.83
langchain-community 0.0.26 depends on langsmith<0.2.0 and >=0.1.0
langchain 0.1.6 depends on langsmith<0.1 and >=0.0.83
langchain-community 0.0.25 depends on langsmith<0.2.0 and >=0.1.0
langchain 0.1.6 depends on langsmith<0.1 and >=0.0.83
langchain-community 0.0.24 depends on langsmith<0.2.0 and >=0.1.0
langchain 0.1.6 depends on langsmith<0.1 and >=0.0.83
langchain-community 0.0.23 depends on langsmith<0.2.0 and >=0.1.0
langchain 0.1.6 depends on langsmith<0.1 and >=0.0.83
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
langchain-community 0.0.22 depends on langsmith<0.2.0 and >=0.1.0
langchain 0.1.6 depends on langsmith<0.1 and >=0.0.83
langchain-community 0.0.21 depends on langsmith<0.2.0 and >=0.1.0
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
### Description
I'm trying to get a Python script I wrote to run in a GitHub Action. When the workflow tries to install langchain as specified in my requirements.txt, it fails.
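For illustration, a hedged fix sketch: langchain 0.1.6 caps langsmith below 0.1 while the newer langchain-community releases require at least 0.1, so the resolver can never satisfy both. Pinning both packages to releases that share one langsmith range should resolve it; the exact versions below are an assumption, so check PyPI:
```
# hypothetical requirements.txt pins; the point is that both packages
# must agree on a single langsmith range
langchain==0.1.9
langchain-community==0.0.24
langsmith>=0.1.0,<0.2.0
```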
### System Info
> langchain_core: 0.1.23
> langchain: 0.1.6
> langchain_community: 0.0.19
> langsmith: 0.0.87
> langchain_experimental: 0.0.50
> langchain_mistralai: 0.0.5
> langchain_openai: 0.0.5 | Dep Conflict with Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/18996/comments | 6 | 2024-03-12T22:57:03Z | 2024-04-30T18:33:20Z | https://github.com/langchain-ai/langchain/issues/18996 | 2,182,820,017 | 18,996 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_elasticsearch import ElasticsearchStore
from langchain_community.embeddings import HuggingFaceEmbeddings
import torch
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import CharacterTextSplitter
import pandas as pd
from langchain_community.document_loaders import DataFrameLoader
import logging
logging.getLogger().setLevel(logging.ERROR)
device = "cuda:0" if torch.cuda.is_available() else "cpu"
print(device)
model_name = "sentence-transformers/all-mpnet-base-v2"
model_kwargs = {'device': device}
encode_kwargs = {'normalize_embeddings': False}
hf_embedding_model = HuggingFaceEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
df = pd.DataFrame({'CONTENT':["abc","def"]})
loader = DataFrameLoader(df, page_content_column="CONTENT")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
db = ElasticsearchStore.from_documents(
docs,
hf_embedding_model,
es_url="http://localhost:9200",
index_name="test-index",
strategy=ElasticsearchStore.ApproxRetrievalStrategy(
hybrid=True,
)
)
db.add_documents(docs)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When using add_documents, it dumps all doc ids to STDOUT when complete.
For example:
```
['fbfa0f76-5b3c-4dd7-93e0-6e592a5c11b4',
 'bfc8bh96-c6a2-46d3-a0e0-3d12abf24948',
 'bf8a6ae6-6c14-4691-b9ba-e7987ad68e65']
```
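If this runs in a notebook or REPL, the IDs are most likely the cell echoing the return value of `add_documents` rather than the library printing to stdout; a minimal check (an assumption on my part) is to bind the return value:
```python
# add_documents returns the list of document IDs; binding the return value
# keeps a notebook/REPL from echoing it as cell output
ids = db.add_documents(docs)
print(f"indexed {len(ids)} chunks")
```
(Note that `from_documents` already indexed `docs` once, so the explicit `add_documents` call above indexes them a second time.)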
### System Info
langchain==0.1.10
langchain-community==0.0.25
langchain-core==0.1.28
langchain-elasticsearch==0.1.0
langchain-openai==0.0.8
langchain-text-splitters==0.0.1 | ElasticsearchStore.add_documents() prints out document IDs to STDOUT after job completion | https://api.github.com/repos/langchain-ai/langchain/issues/18986/comments | 1 | 2024-03-12T19:10:54Z | 2024-03-12T19:13:14Z | https://github.com/langchain-ai/langchain/issues/18986 | 2,182,489,252 | 18,986 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [x] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
There is a spelling mistake in line 289.
### Idea or request for content:
I can correct this error. Please assign this issue to me. | DOC: Typo error in "https://github.com/langchain-ai/langchain/blob/master/docs/docs/get_started/quickstart.mdx", line 289. | https://api.github.com/repos/langchain-ai/langchain/issues/18981/comments | 1 | 2024-03-12T17:49:54Z | 2024-06-14T08:56:09Z | https://github.com/langchain-ai/langchain/issues/18981 | 2,182,320,367 | 18,981 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.agent_toolkits import create_sql_agent
from langchain_community.llms import GPT4All
from langchain_community.utilities.sql_database import SQLDatabase
# path to the gpt4all .gguf
from local_config import MODEL_PATH
llm = GPT4All(
model=MODEL_PATH,
max_tokens=2048,
)
db = SQLDatabase.from_uri("sqlite:///Chinook.db")
agent_executor = create_sql_agent(llm, db=db, agent_type="openai-tools", verbose=True)
agent_executor.invoke(
"List the total sales per country. Which country's customers spent the most?"
)
```
### Error Message and Stack Trace (if applicable)
```
TypeError Traceback (most recent call last)
Cell In[11], line 1
----> 1 agent_executor.invoke(
2 "List the total sales per country. Which country's customers spent the most?"
3 )
File c:\Users\Bob\.virtualenvs\langchain_local-QYN7rPyV\lib\site-packages\langchain\chains\base.py:163, in Chain.invoke(self, input, config, **kwargs)
161 except BaseException as e:
162 run_manager.on_chain_error(e)
--> 163 raise e
164 run_manager.on_chain_end(outputs)
166 if include_run_info:
File c:\Users\Bob\.virtualenvs\langchain_local-QYN7rPyV\lib\site-packages\langchain\chains\base.py:153, in Chain.invoke(self, input, config, **kwargs)
150 try:
151 self._validate_inputs(inputs)
152 outputs = (
--> 153 self._call(inputs, run_manager=run_manager)
154 if new_arg_supported
155 else self._call(inputs)
156 )
158 final_outputs: Dict[str, Any] = self.prep_outputs(
159 inputs, outputs, return_only_outputs
160 )
...
--> 207 for token in self.client.generate(prompt, **params):
208 if text_callback:
209 text_callback(token)
TypeError: generate() got an unexpected keyword argument 'tools'
```
### Description
I'm following the tutorial under Langchain/Components/Toolkits/[SQL Database](https://python.langchain.com/docs/integrations/toolkits/sql_database), but substituting a local GPT4All model.
I've successfully installed Chinook.db and am able to execute a test sqlite query.
GPT4All loads as well. I expect that when I run `agent_executor.invoke`, it doesn't error out.
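For what it's worth, `agent_type="openai-tools"` assumes an LLM that supports OpenAI-style tool calling, which GPT4All's `generate()` does not accept (hence the unexpected `tools` kwarg). A hedged sketch, continuing from the example above, of a ReAct-style alternative that only needs plain text completion:
```python
# Hedged sketch (assumption: the ReAct-style agent works with
# plain-completion LLMs such as GPT4All, since it needs no tool-calling API)
from langchain.agents import AgentType

agent_executor = create_sql_agent(
    llm,
    db=db,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
```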
### System Info
Python 3.9.6
Langchain 0.1.11
Langchain-community 0.0.27
Langchain-core 0.1.30
Langchain-experimental 0.0.53 | TypeError running `agent_executor.invoke` with open-source LM | https://api.github.com/repos/langchain-ai/langchain/issues/18979/comments | 6 | 2024-03-12T16:51:28Z | 2024-06-20T16:07:42Z | https://github.com/langchain-ai/langchain/issues/18979 | 2,182,190,543 | 18,979 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import AzureChatOpenAI
aoai_chat = AzureChatOpenAI(deployment_name='gpt4',
openai_api_key="...",
azure_endpoint="...",
openai_api_version='2023-12-01-preview')
aoai_chat.stream("Hello")
```
### Error Message and Stack Trace (if applicable)
``` bash
AttributeError("'NoneType' object has no attribute 'get'")
Traceback (most recent call last):
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1592, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/anaconda3/envs//lib/python3.11/asyncio/futures.py", line 287, in __await__
yield self # This tells Task to wait for completion.
^^^^^^^^^^
File "/Users/xxx/anaconda3/envs//lib/python3.11/asyncio/futures.py", line 203, in result
raise self._exception.with_traceback(self._exception_tb)
File "/Users/xxx/anaconda3/envs//lib/python3.11/asyncio/tasks.py", line 267, in __step
result = coro.send(None)
^^^^^^^^^^^^^^^
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2405, in _atransform
async for output in final_pipeline:
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4176, in atransform
async for item in self.bound.atransform(
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4176, in atransform
async for item in self.bound.atransform(
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2435, in atransform
async for chunk in self._atransform_stream_with_config(
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1592, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/anaconda3/envs//lib/python3.11/asyncio/futures.py", line 287, in __await__
yield self # This tells Task to wait for completion.
^^^^^^^^^^
File "/Users/xxx/anaconda3/envs//lib/python3.11/asyncio/tasks.py", line 339, in __wakeup
future.result()
File "/Users/xxx/anaconda3/envs//lib/python3.11/asyncio/futures.py", line 203, in result
raise self._exception.with_traceback(self._exception_tb)
File "/Users/xxx/anaconda3/envs//lib/python3.11/asyncio/tasks.py", line 267, in __step
result = coro.send(None)
^^^^^^^^^^^^^^^
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2405, in _atransform
async for output in final_pipeline:
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4176, in atransform
async for item in self.bound.atransform(
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2435, in atransform
async for chunk in self._atransform_stream_with_config(
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1592, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/anaconda3/envs//lib/python3.11/asyncio/futures.py", line 287, in __await__
yield self # This tells Task to wait for completion.
^^^^^^^^^^
File "/Users/xxx/anaconda3/envs//lib/python3.11/asyncio/tasks.py", line 339, in __wakeup
future.result()
File "/Users/xxx/anaconda3/envs//lib/python3.11/asyncio/futures.py", line 203, in result
raise self._exception.with_traceback(self._exception_tb)
File "/Users/xxx/anaconda3/envs//lib/python3.11/asyncio/tasks.py", line 267, in __step
result = coro.send(None)
^^^^^^^^^^^^^^^
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2405, in _atransform
async for output in final_pipeline:
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/output_parsers/transform.py", line 60, in atransform
async for chunk in self._atransform_stream_with_config(
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1592, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/anaconda3/envs//lib/python3.11/asyncio/futures.py", line 287, in __await__
yield self # This tells Task to wait for completion.
^^^^^^^^^^
File "/Users/xxx/anaconda3/envs//lib/python3.11/asyncio/tasks.py", line 339, in __wakeup
future.result()
File "/Users/xxx/anaconda3/envs//lib/python3.11/asyncio/futures.py", line 203, in result
raise self._exception.with_traceback(self._exception_tb)
File "/Users/xxx/anaconda3/envs//lib/python3.11/asyncio/tasks.py", line 267, in __step
result = coro.send(None)
^^^^^^^^^^^^^^^
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/output_parsers/transform.py", line 38, in _atransform
async for chunk in input:
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer
item = await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1068, in atransform
async for output in self.astream(final, config, **kwargs):
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 308, in astream
raise e
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 292, in astream
async for chunk in self._astream(
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 519, in _astream
chunk = _convert_delta_to_message_chunk(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/anaconda3/envs//lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 174, in _convert_delta_to_message_chunk
role = cast(str, _dict.get("role"))
^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'get'
```
### Description
### I have found the cause of this issue
When using Azure OpenAI, I enabled [content filtering streaming mode](
https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython#content-streaming) to stream with lower latency. In this mode, Azure returns one extra chunk of data at the end of the streamed response:
```json
{
"id": "",
"object": "",
"created": 0,
"model": "",
"choices": [
{
"finish_reason": null,
"index": 0,
"content_filter_results": {
"hate": {
"filtered": false,
"severity": "safe"
},
"self_harm": {
"filtered": false,
"severity": "safe"
},
"sexual": {
"filtered": false,
"severity": "safe"
},
"violence": {
"filtered": false,
"severity": "safe"
}
},
"content_filter_offsets": {
"check_offset": 4522,
"start_offset": 4522,
"end_offset": 4686
}
}
]
}
```
**Compared to a normal chunk, this final chunk lacks the `delta` field, which causes the code shown below to throw an error.**
<img width="1090" alt="image" src="https://github.com/langchain-ai/langchain/assets/38649663/03b148bd-4eb5-4ce0-8664-2cfef34d42e8">
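A minimal defensive sketch (my assumption of a fix, not the actual upstream patch): filter out stream choices that carry no `delta` before they reach `_convert_delta_to_message_chunk`:
```python
# Hypothetical helper (a sketch, not the upstream patch): drop Azure's
# trailing content-filter chunks, which carry content_filter_results
# but no "delta", before any role/content parsing happens.
from typing import Any, Dict, Iterable, Iterator


def drop_deltaless_chunks(
    chunks: Iterable[Dict[str, Any]]
) -> Iterator[Dict[str, Any]]:
    for chunk in chunks:
        choices = chunk.get("choices") or []
        if choices and choices[0].get("delta") is not None:
            yield chunk


# the second (content-filter-only) chunk below is skipped
stream = [
    {"choices": [{"delta": {"role": "assistant", "content": "Hi"}}]},
    {"choices": [{"content_filter_results": {}, "content_filter_offsets": {}}]},
]
print(list(drop_deltaless_chunks(stream)))
```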
### System Info
langchain==0.1.11
langchain-community==0.0.25
langchain-core==0.1.29
langchain-experimental==0.0.52
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
**platform = mac**
**python = 3.11.5** | An error occurs when enabling Content filtering streaming while using Azure OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/18977/comments | 0 | 2024-03-12T16:36:39Z | 2024-03-28T21:46:28Z | https://github.com/langchain-ai/langchain/issues/18977 | 2,182,155,882 | 18,977 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def len_in_words(text: str) -> int:
return len(re.findall(r'\b\w+\b', text))
splitter = RecursiveCharacterTextSplitter(
chunk_size=2,
chunk_overlap=1,
length_function=len_in_words, # custom len fun
add_start_index=True,
)
doc = Document(page_content='test test test test', metadata={})
chunks = splitter.split_documents([doc])
assert chunks[0].metadata['start_index'] == 0
assert chunks[1].metadata['start_index'] == 5 # fails: actual is 10
assert chunks[2].metadata['start_index'] == 10 # fails: actual is -1
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
There is a bug in how `TextSplitter.create_documents` calculates the offset when a custom length function is used: https://github.com/langchain-ai/langchain/blob/471f2ed40abbf9ea02ccf5b384db2e8580ed1cbb/libs/text-splitters/langchain_text_splitters/base.py#L81
Indeed, `previous_chunk_len` is in characters, while `self._chunk_overlap` can be in any custom unit (words, in my example). This leads to a wrong offset, which in turn means a wrong `start_index`.
Solution: convert the chunk overlap to characters as well, so that the formula uses a single unit.
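For illustration, one conservative fix sketch (my assumption, not necessarily the merged patch): search strictly forward from the previous match, so the offset never mixes character positions with a custom overlap unit:
```python
# Hypothetical sketch of the offset logic in create_documents: advancing the
# search past the previous chunk's start yields character-correct indices
# even when the length function counts words.
text = "test test test test"
chunks = ["test test", "test test", "test test"]
index = -1
for chunk in chunks:
    index = text.find(chunk, index + 1)
    print(index)  # 0, 5, 10 — the expected start_index values
```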
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 22.6.0: Wed Jul 5 22:22:05 PDT 2023; root:xnu-8796.141.3~6/RELEASE_ARM64_T6000
> Python Version: 3.10.5 (main, Jul 22 2022, 10:15:54) [Clang 13.1.6 (clang-1316.0.21.2.3)]
Package Information
-------------------
> langchain_core: 0.1.18
> langchain: 0.1.5
> langchain_community: 0.0.17
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| TextSplitter sets wrong start_index in case of custom len function | https://api.github.com/repos/langchain-ai/langchain/issues/18972/comments | 1 | 2024-03-12T14:25:53Z | 2024-03-12T15:09:18Z | https://github.com/langchain-ai/langchain/issues/18972 | 2,181,759,866 | 18,972 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://docs.smith.langchain.com/user_guide")
docs = loader.load()
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[1], line 1
----> 1 from langchain_community.document_loaders import WebBaseLoader
2 loader = WebBaseLoader("https://docs.smith.langchain.com/user_guide")
4 docs = loader.load()
File ~\miniconda3\envs\python38\lib\site-packages\langchain_community\document_loaders\__init__.py:191
189 from langchain_community.document_loaders.snowflake_loader import SnowflakeLoader
190 from langchain_community.document_loaders.spreedly import SpreedlyLoader
--> 191 from langchain_community.document_loaders.sql_database import SQLDatabaseLoader
192 from langchain_community.document_loaders.srt import SRTLoader
193 from langchain_community.document_loaders.stripe import StripeLoader
File ~\miniconda3\envs\python38\lib\site-packages\langchain_community\document_loaders\sql_database.py:10
6 from langchain_community.document_loaders.base import BaseLoader
7 from langchain_community.utilities.sql_database import SQLDatabase
---> 10 class SQLDatabaseLoader(BaseLoader):
11 """
12 Load documents by querying database tables supported by SQLAlchemy.
13
(...)
17 Each document represents one row of the result.
18 """
20 def __init__(
21 self,
22 query: Union[str, sa.Select],
(...)
30 include_query_into_metadata: bool = False,
31 ):
File ~\miniconda3\envs\python38\lib\site-packages\langchain_community\document_loaders\sql_database.py:22, in SQLDatabaseLoader()
10 class SQLDatabaseLoader(BaseLoader):
11 """
12 Load documents by querying database tables supported by SQLAlchemy.
13
(...)
17 Each document represents one row of the result.
18 """
20 def __init__(
21 self,
---> 22 query: Union[str, sa.Select],
23 db: SQLDatabase,
24 *,
25 parameters: Optional[Dict[str, Any]] = None,
26 page_content_mapper: Optional[Callable[..., str]] = None,
27 metadata_mapper: Optional[Callable[..., Dict[str, Any]]] = None,
28 source_columns: Optional[Sequence[str]] = None,
29 include_rownum_into_metadata: bool = False,
30 include_query_into_metadata: bool = False,
31 ):
32 """
33 Args:
34 query: The query to execute.
(...)
49 expression into the metadata dictionary. Default: False.
50 """
51 self.query = query
AttributeError: module 'sqlalchemy' has no attribute 'Select'
### Description
Hi All,
I'm new to this project, so apologies if I have misunderstood or missed anything. I am simply trying to follow the official LangChain documentation to familiarise myself with the functionality, and I'm encountering an error on the "Retrieval" example steps.
The community functions are supposed to be built on SQLAlchemy >=1.4,<1.4.3, but I'm getting the above error with SQLAlchemy 1.4.0 and 1.4.1. I believe the code needs to be updated? Or am I doing something wrong?
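A quick diagnostic sketch (my assumption: the top-level `sqlalchemy.Select` alias simply does not exist in SQLAlchemy 1.4.0/1.4.1, so upgrading SQLAlchemy within langchain's supported range should clear the import error):
```python
# Sanity check on the installed SQLAlchemy (the conclusion is an assumption)
import sqlalchemy as sa

print(sa.__version__)         # 1.4.0 in this environment
print(hasattr(sa, "Select"))  # False here → langchain_community's import fails
```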
Any help much appreciated.
Thanks
### System Info
aenum==3.1.15
aiohttp==3.9.3
aiosignal==1.3.1
altair==5.2.0
annotated-types==0.6.0
anyio==3.7.1
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
asn1crypto==1.5.1
asteval==0.9.23
astor==0.8.1
asttokens==2.4.1
async-timeout==4.0.3
attrs==23.2.0
Babel==2.14.0
backcall==0.2.0
backoff==2.2.1
backports.zoneinfo==0.2.1
beautifulsoup4==4.11.2
bleach==6.1.0
blinker==1.7.0
cachetools==5.3.3
certifi==2024.2.2
cffi==1.16.0
charset-normalizer==2.1.1
ChromeController==0.3.26
click==8.1.7
cloudpickle==2.0.0
colorama==0.4.6
comm==0.2.1
cryptography==36.0.2
cssselect==1.2.0
cssutils==2.9.0
cycler==0.12.1
dataclasses-json==0.6.4
dataframe-image==0.1.14
debugpy==1.8.1
decorator==5.1.1
defusedxml==0.7.1
deprecation==2.1.0
distro==1.9.0
docopt==0.6.2
entrypoints==0.4
et-xmlfile==1.1.0
exceptiongroup==1.2.0
exchange-calendars==4.2.8
executing==2.0.1
fastjsonschema==2.19.1
fds.analyticsapi.engines==5.6.0
fds.protobuf.stach==1.0.0
fds.protobuf.stach.extensions==1.3.1
fds.protobuf.stach.v2==1.0.2
filelock==3.13.1
fonttools==4.49.0
frozenlist==1.4.1
func-timeout==4.3.5
funcsigs==1.0.2
future==1.0.0
gitdb==4.0.11
GitPython==3.1.42
greenlet==3.0.3
gs-quant==0.9.108
h11==0.14.0
h2o==3.44.0.3
html2image==2.0.4.3
httpcore==1.0.4
httpx==0.27.0
idna==3.6
importlib-metadata==6.11.0
importlib_resources==6.1.3
inflection==0.5.1
ipykernel==6.29.3
ipython==8.12.3
ipython-genutils==0.2.0
ipywidgets==8.0.7
jedi==0.19.1
Jinja2==3.1.3
joblib==1.3.2
json5==0.9.22
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
jupyter==1.0.0
jupyter-console==6.6.3
jupyter-server==1.24.0
jupyter_client==7.4.9
jupyter_core==5.7.1
jupyterlab==3.4.8
jupyterlab_pygments==0.3.0
jupyterlab_server==2.24.0
jupyterlab_widgets==3.0.10
kiwisolver==1.4.5
korean-lunar-calendar==0.3.1
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
langsmith==0.1.23
lmfit==1.0.2
lxml==5.1.0
markdown-it-py==3.0.0
MarkupSafe==2.1.5
marshmallow==3.21.1
matplotlib==3.5.3
matplotlib-inline==0.1.6
mdurl==0.1.2
mistune==3.0.2
more-itertools==10.2.0
MouseInfo==0.1.3
msgpack==1.0.8
multidict==6.0.5
mypy-extensions==1.0.0
nbclassic==1.0.0
nbclient==0.9.0
nbconvert==7.16.2
nbformat==5.9.2
nest-asyncio==1.6.0
notebook==6.5.6
notebook_shim==0.2.4
numpy==1.23.5
openai==1.13.3
openpyxl==3.0.10
opentracing==2.4.0
orjson==3.9.15
oscrypto==1.3.0
packaging==23.2
pandas==1.4.4
pandas_market_calendars==4.3.3
pandocfilters==1.5.1
parso==0.8.3
patsy==0.5.6
pdfkit==1.0.0
pendulum==2.1.2
pickleshare==0.7.5
Pillow==9.5.0
pkgutil_resolve_name==1.3.10
platformdirs==4.2.0
premailer==3.10.0
prometheus_client==0.20.0
prompt-toolkit==3.0.43
protobuf==3.20.3
psutil==5.9.8
pure-eval==0.2.2
pyarrow==8.0.0
PyAutoGUI==0.9.54
pycparser==2.21
pycryptodomex==3.20.0
pydantic==2.6.3
pydantic_core==2.16.3
pydash==7.0.7
pydeck==0.8.1b0
PyGetWindow==0.0.9
Pygments==2.17.2
PyJWT==2.8.0
pyluach==2.2.0
Pympler==1.0.1
PyMsgBox==1.0.9
pyOpenSSL==22.0.0
pyparsing==3.1.2
pyperclip==1.8.2
pypiwin32==223
PyRect==0.2.0
PyScreeze==0.1.30
python-dateutil==2.8.2
python-decouple==3.8
python-dotenv==1.0.1
pytweening==1.2.0
pytz==2024.1
pytz-deprecation-shim==0.1.0.post0
pytzdata==2020.1
pywin32==306
pywinpty==2.0.13
PyYAML==6.0.1
pyzmq==24.0.1
qtconsole==5.5.1
QtPy==2.4.1
referencing==0.33.0
regex==2023.12.25
requests==2.28.2
rich==13.7.1
rpds-py==0.18.0
scikit-learn==1.3.2
scipy==1.9.3
seaborn==0.12.2
Send2Trash==1.8.2
six==1.16.0
slackclient==2.9.4
smmap==5.0.1
sniffio==1.3.1
snowflake-connector-python==2.7.12
snowflake-snowpark-python==0.9.0
snowflake-sqlalchemy==1.4.7
soupsieve==2.5
SQLAlchemy==1.4.0
stack-data==0.6.3
statsmodels==0.13.5
streamlit==1.23.1
streamlit-aggrid==0.3.4.post3
tabulate==0.9.0
tenacity==8.2.3
terminado==0.18.0
threadpoolctl==3.3.0
tiktoken==0.6.0
tinycss2==1.2.1
toml==0.10.2
tomli==2.0.1
toolz==0.12.1
tornado==6.4
tqdm==4.64.1
traitlets==5.14.1
typing-inspect==0.9.0
typing_extensions==4.10.0
tzdata==2024.1
tzlocal==4.3.1
uncertainties==3.1.7
urllib3==1.26.18
validators==0.22.0
watchdog==4.0.0
wcwidth==0.2.13
webencodings==0.5.1
websocket-client==1.7.0
websockets==12.0
widgetsnbextension==4.0.10
xlrd==2.0.1
yarl==1.9.4
zipp==3.17.0 | Issue with the use of sqlalchemy in community.document_loaders? | https://api.github.com/repos/langchain-ai/langchain/issues/18968/comments | 2 | 2024-03-12T11:23:06Z | 2024-03-12T11:33:43Z | https://github.com/langchain-ai/langchain/issues/18968 | 2,181,373,166 | 18,968 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [x] I added a very descriptive title to this issue.
- [ ] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I am using the Bedrock Llama 2 LLM and I aim to integrate a stop feature. How can this be achieved?
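For illustration, a hedged sketch of passing stop sequences through LangChain's Bedrock wrapper. Whether the meta (Llama 2) provider honors a stop parameter is an assumption on my part; if LangChain rejects it, trimming the completion manually is the usual fallback:
```python
# Hedged sketch — the model_id/region and stop support are assumptions
from langchain_community.llms import Bedrock

llm = Bedrock(model_id="meta.llama2-13b-chat-v1", region_name="us-west-2")
prompt = "Q: What is LangChain?\nA:"
try:
    text = llm.invoke(prompt, stop=["\nQ:"])
except ValueError:
    # provider has no stop-sequence mapping: trim the completion ourselves
    text = llm.invoke(prompt).split("\nQ:")[0]
print(text)
```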
### Idea or request for content:
_No response_ | DOC:I am utilizing the bedrock Llama2 LLM and I aim to integrate a stop feature. How can this be achieved? | https://api.github.com/repos/langchain-ai/langchain/issues/18966/comments | 3 | 2024-03-12T11:03:45Z | 2024-03-19T04:08:22Z | https://github.com/langchain-ai/langchain/issues/18966 | 2,181,335,351 | 18,966 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The custom tool parameters example in the notebook is for the **Python_REPL** tool, not for SerpAPI:

and the page does not show how to use the tool in an agent.
[SerpAPI Tool Docs](https://python.langchain.com/docs/integrations/tools/serpapi#custom-parameters)
### Idea or request for content:
- Fix the custom tool definition for the SerpAPI tool.
- Add more information about using it with an agent (a sketch of what that could look like follows below).
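For illustration, a hedged sketch of what the docs could show — custom SerpAPI parameters plus agent usage. The parameter values and environment setup below are my assumptions, not the docs' own (requires `SERPAPI_API_KEY` and `OPENAI_API_KEY`):
```python
# Hedged doc sketch: SerpAPI custom params, then using the tool in an agent
from langchain.agents import AgentType, Tool, initialize_agent
from langchain_community.utilities import SerpAPIWrapper
from langchain_openai import OpenAI

search = SerpAPIWrapper(params={"engine": "bing", "gl": "us", "hl": "en"})
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for answering questions about current events",
    )
]
agent = initialize_agent(
    tools, OpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
agent.invoke("What happened in the latest F1 race?")
```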
| DOC: Inaccurate Tool Custom parameters of SerpAPI tool Documentation | https://api.github.com/repos/langchain-ai/langchain/issues/18959/comments | 1 | 2024-03-12T07:16:24Z | 2024-06-18T16:09:45Z | https://github.com/langchain-ai/langchain/issues/18959 | 2,180,899,756 | 18,959 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
# Goal
When using `astream()`, LLMs should fallback to sync streaming if an async streaming implementation is not available.
# Context
Implementations of LLMs often include a sync streaming method, but are missing an async implementation.
LLMs currently do not fall back on the sync streaming implementation.
For reference, here's the [BaseLLM](https://github.com/langchain-ai/langchain/blob/43db4cd20e0e718f368267528706f92bf604bac9/libs/core/langchain_core/language_models/llms.py#L464-L464) implementation.
The current fallback sequence is:
1) If _astream is defined use it
2) if _astream is not defined fallback on ainvoke
The fallback sequence should be:
1) if _astream is defined use it
2) if _stream is defined fallback to it
3) Finally if neither _astream or _stream are defined, fallback to ainvoke
This PR shows how the same problem was fixed for chat models: https://github.com/langchain-ai/langchain/pull/18748
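For illustration, a minimal self-contained sketch of the fallback pattern (a simplified stand-in, not the actual `BaseLLM` code): drain the sync generator on a worker thread, one chunk at a time, so the event loop never blocks:
```python
# Simplified stand-in (not the real BaseLLM): fall back to a sync
# _stream-style generator by pulling one chunk at a time on a worker thread.
import asyncio
from typing import AsyncIterator, Iterator


def sync_stream(prompt: str) -> Iterator[str]:
    """Stand-in for an LLM's sync _stream implementation."""
    for token in prompt.split():
        yield token


async def astream_with_fallback(prompt: str) -> AsyncIterator[str]:
    iterator = sync_stream(prompt)
    done = object()  # sentinel marking iterator exhaustion
    while True:
        # next() runs in a thread, so the event loop is never blocked
        token = await asyncio.to_thread(next, iterator, done)
        if token is done:
            break
        yield token


async def main() -> None:
    async for token in astream_with_fallback("hello streaming world"):
        print(token)


asyncio.run(main())
```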
## Acceptance criteria
* Fallback sequence is correctly implemented
* Unit-tests confirm that the fallback sequence works correctly (see the PR for the unit-tests)
This PR will not be accepted without unit-tests since this is critical functionality! | Allow LLMs async streaming to fallback on sync streaming | https://api.github.com/repos/langchain-ai/langchain/issues/18920/comments | 5 | 2024-03-11T15:09:40Z | 2024-03-20T15:43:07Z | https://github.com/langchain-ai/langchain/issues/18920 | 2,179,384,439 | 18,920 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
url = f"https://{ES_USER}:{ES_PASSWORD}@localhost:9200"
client = Elasticsearch(url, ca_certs = "./http_ca.crt", verify_certs = True)
print(client.info())
import elastic_transport
elastic_transport.debug_logging()
es = ElasticsearchStore.from_documents(
docs,
strategy=ElasticsearchStore.SparseVectorRetrievalStrategy(),
es_url = url,
es_connection = client,
index_name = elastic_index_name,
es_user = ES_USER,
es_password = ES_PASSWORD)
```
### Error Message and Stack Trace (if applicable)
Error adding texts: 116 document(s) failed to index.
First error reason: Could not find trained model [.elser_model_1]
### Description
The reason for this is because I deployed .elser_mode_2 on my Elasticsearch cluster. Later on, I used the following version of the code:
url = f"https://{ES_USER}:{ES_PASSWORD}@localhost:9200"
client = Elasticsearch(url, ca_certs = "./http_ca.crt", verify_certs = True)
print(client.info())
```
import elastic_transport
elastic_transport.debug_logging()
es = ElasticsearchStore.from_documents(
docs,
strategy=ElasticsearchStore.SparseVectorRetrievalStrategy(model_id=".elser_model_2"),
es_url = url,
es_connection = client,
index_name = elastic_index_name,
es_user = ES_USER,
es_password = ES_PASSWORD)
```
This time, I set the model_id correctly to .elser_model_2. After I run the above code, I still got the same error message. It is still looking for .elser_model_1.
After consulting it with our colleague, and we found the root cause of the problem: In the first step, "elastic_index_name" index has already been created, and changing the model_id without deleting the created index in the first step won't work for the updated model_id.
The solution to this problem is:
1) delete the created index in the first step
2) run the code again with the updated model_id parameter.
Improvement:
This error correct cannot be easily resolved without looking into the code. Can we add some comments into the function? or can we recreate the pipeline when model_id is change?
Thanks
### System Info
$ pip freeze | grep langchain
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-text-splitters==0.0.1 | Could not find trained model [.elser_model_1] for ElasticsearchStore | https://api.github.com/repos/langchain-ai/langchain/issues/18917/comments | 3 | 2024-03-11T14:44:02Z | 2024-06-18T16:09:46Z | https://github.com/langchain-ai/langchain/issues/18917 | 2,179,318,114 | 18,917 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
#!/usr/bin/env python
# coding: utf-8
# In[4]:
from langchain.document_loaders import PyPDFLoader, DirectoryLoader
from langchain.prompts import PromptTemplate
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import CTransformers
from langchain.chains import RetrievalQA
#import chainlit as cl
import streamlit as st
DB_FAISS_PATH = 'vectorstore/db_faiss'
custom_prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else.
Helpful answer:
"""
def set_custom_prompt():
"""
Prompt template for QA retrieval for each vectorstore
"""
prompt = PromptTemplate(template=custom_prompt_template,
input_variables=['context', 'question'])
return prompt
# Retrieval QA Chain
def retrieval_qa_chain(llm, prompt, db):
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type='stuff',
retriever=db.as_retriever(search_kwargs={'k': 2}),
return_source_documents=True,
chain_type_kwargs={'prompt': prompt})
return qa_chain
# Loading the model
import torch
def load_llm():
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
llm = CTransformers(
model="D:\internships\Rajyug ITSolutions\LLM\model\llama-2-7b-chat.ggmlv3.q8_0.bin",
model_type="llama",
max_new_tokens=512,
temperature=0.5,
device=device # Specify device here
)
return llm
# QA Model Function
def qa_bot():
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
embeddings = HuggingFaceEmbeddings(
model_name="sentence-transformers/all-MiniLM-L6-v2",
model_kwargs={'device': device} # Specify device here
)
db = FAISS.load_local(DB_FAISS_PATH, embeddings, allow_dangerous_deserialization=True)
# db = FAISS.load_local(DB_FAISS_PATH, embeddings)
llm = load_llm()
qa_prompt = set_custom_prompt()
qa = retrieval_qa_chain(llm, qa_prompt, db)
return qa
# Output function
def final_result(query):
qa_result = qa_bot()
response = qa_result({'query': query})
return response['result'] # Extract only the 'result' field from the response
def main():
st.title("Medical Chatbot")
query = st.text_input("Enter your medical query:")
if st.button("Get Answer"):
if query:
answer = final_result(query)
st.write("Bot's Response:")
st.write(answer) # Print only the 'result'
else:
st.write("Please enter a query.")
# Call qa_bot and store the returned chain
qa_chain = qa_bot()
# Assuming you have a chain named 'my_chain' (commented out)
# Assuming you have a chain named 'my_chain'
# Old (deprecated):
# result = my_chain()
# New (recommended):
result = qa_chain.invoke(input=query)
# Verbosity (if needed)
from langchain.globals import set_verbose, get_verbose
# Set verbosity to True
langchain.globals.set_verbose(True)
# Check current verbosity
langchain.globals.get_verbose(True)
# ... (code using qa_chain, if applicable)
# Use the 'invoke' method to execute the chain (fix for deprecation warning)
# result = qa_chain.invoke() # Uncomment if you need the result
# Verbosity section (commented out for clarity)
# from langchain.globals import set_verbose, get_verbose
#
# # Set verbosity to True (optional)
# # langchain.globals.set_verbose(True)
#
# # Check current verbosity
# # current_verbosity = get_verbose()
if __name__ == "__main__":
main()
### Idea or request for content:
_No response_ | DOC: <PleLangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead. warn_deprecated(ase write a comprehensive title after the 'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/18911/comments | 1 | 2024-03-11T12:12:06Z | 2024-06-21T16:37:55Z | https://github.com/langchain-ai/langchain/issues/18911 | 2,178,980,605 | 18,911 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I only replaced ChatAnthropic with BedrockChat, and I get this error.
My code:
```python
import boto3
from crewai import Agent, Crew
from crewai import Task
from langchain.tools import DuckDuckGoSearchRun
import os
#from langchain_anthropic import ChatAnthropic
from langchain_community.chat_models import BedrockChat
# llm = ChatAnthropic(temperature=0, model_name="anthropic.claude-3-sonnet-20240229-v1:0")
bedrock_runtime = boto3.client(
service_name="bedrock-runtime",
region_name="us-west-2",
)
model_id = "anthropic.claude-3-sonnet-20240229-v1:0"
model_kwargs = {
"max_tokens": 2048,
"temperature": 0.0,
"top_k": 250,
"top_p": 1,
"stop_sequences": ["\n\nHuman"],
}
llm = BedrockChat(
client=bedrock_runtime,
model_id=model_id,
model_kwargs=model_kwargs,
)
search_tool = DuckDuckGoSearchRun()
# Define your agents with roles and goals
researcher = Agent(
role="Tech Research",
goal='Uncover cutting-edge developments in AI and data science',
backstory="""You work at a leading tech think tank.
Your expertise lies in identifying emerging trends.
You have a knack for dissecting complex data and presenting
actionable insights.""",
verbose=True,
allow_delegation=False,
llm=llm,
tools=[search_tool]
)
writer = Agent(
role='Tech Content Summarizer and Writer',
goal='Craft compelling short-form content on AI advancements based on long-form text passed to you ',
backstory="""You are a renowned Content Creator, known for your insightful and engaging articles.
You transform complex concepts into compelling narratives.""",
verbose=True,
allow_delegation=True,
llm=llm,
)
# Create tasks for your agents
task1 = Task(
description=f"""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
Identify key trends, breakthrough technologies, and potential industry impacts.
Your final answer MUST be a full analysis report""",
agent=researcher
)
task2 = Task(
description="""Using the text provided by the reseracher agent, develop a short and compelling
short-form summary of the text provided to you about AI.""",
agent=writer
)
# Instantiate your crew with a sequential process
NewsletterCrew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
verbose=2, # You can set it to 1 or 2 for different logging levels
)
result = NewsletterCrew.kickoff()
print("Welcome to newsletter writer")
print('----------------------------')
print(result)
```
### Error Message and Stack Trace (if applicable)
"Failed to convert text into a pydantic model due to the following error: System message must be at beginning of message list"
### Description
I am using CrewAI with LangChain.
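For illustration, here is the constraint that BedrockChat's Anthropic provider appears to enforce (my reading of the error message, not of CrewAI's internals): a `SystemMessage` is only accepted as the very first message in the list.
```python
# Illustration of the message-ordering constraint behind the error
from langchain_core.messages import HumanMessage, SystemMessage

ok = [SystemMessage(content="You are helpful."), HumanMessage(content="Hi")]
bad = [HumanMessage(content="Hi"), SystemMessage(content="Be brief.")]
# llm.invoke(ok) succeeds, while llm.invoke(bad) raises
# "System message must be at beginning of message list".
```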
### System Info
langchain==0.1.11
langchain-anthropic==0.1.4
langchain-community==0.0.27
langchain-core==0.1.30
langchain-openai==0.0.5
langchain-text-splitters==0.0.1 | crewai use langchain_community.chat_models.BedrockChat as llm get error "System message must be at beginning of message list" | https://api.github.com/repos/langchain-ai/langchain/issues/18909/comments | 1 | 2024-03-11T11:48:18Z | 2024-06-17T16:09:33Z | https://github.com/langchain-ai/langchain/issues/18909 | 2,178,933,916 | 18,909 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
llm = OpenAI(temperature=0)
zapier = ZapierNLAWrapper()
toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)
agent = initialize_agent(toolkit.get_tools(), llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
model = whisper.load_model("base")
```
### Error Message and Stack Trace (if applicable)
```
NotImplementedError: Need to determine which default deprecation schedule to use. within ?? minor releases
```
The error is raised at `agent = initialize_agent(toolkit.get_tools(), llm, agent="zero-shot-react-description", verbose=True)`, via:
```
line 53, in get_tools
    warn_deprecated(
line 337, in warn_deprecated
    raise NotImplementedError(
```
### Description
I'm trying to use LangChain to send an email with the help of Zapier.
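For illustration, a demonstration of the root cause (my assumption: this mirrors how the Zapier toolkit's deprecation marker calls langchain_core): when `removal=` is omitted, `warn_deprecated` raises instead of merely warning.
```python
# Reproduction sketch of the NotImplementedError above
from langchain_core._api.deprecation import warn_deprecated

try:
    warn_deprecated(since="0.0.319", message="demo")  # no removal= given
except NotImplementedError as err:
    print(err)  # "Need to determine which default deprecation schedule..."

warn_deprecated(since="0.0.319", removal="0.2.0", message="demo")  # warns only
```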
### System Info
absl-py==2.1.0
aiobotocore @ file:///C:/b/abs_3cwz1w13nn/croot/aiobotocore_1701291550158/work
aiohttp @ file:///C:/b/abs_27h_1rpxgd/croot/aiohttp_1707342354614/work
aioitertools @ file:///tmp/build/80754af9/aioitertools_1607109665762/work
aiosignal @ file:///tmp/build/80754af9/aiosignal_1637843061372/work
alabaster @ file:///home/ktietz/src/ci/alabaster_1611921544520/work
altair @ file:///C:/b/abs_27reu1igbg/croot/altair_1687526066495/work
anaconda-anon-usage @ file:///C:/b/abs_95v3x0wy8p/croot/anaconda-anon-usage_1697038984188/work
anaconda-catalogs @ file:///C:/b/abs_8btyy0o8s8/croot/anaconda-catalogs_1685727315626/work
anaconda-client @ file:///C:/b/abs_34txutm0ue/croot/anaconda-client_1708640705294/work
anaconda-cloud-auth @ file:///C:/b/abs_410afndtyf/croot/anaconda-cloud-auth_1697462767853/work
anaconda-navigator @ file:///C:/b/abs_fetmwtxkqo/croot/anaconda-navigator_1709540481120/work
anaconda-project @ file:///C:/ci_311/anaconda-project_1676458365912/work
anyio @ file:///C:/b/abs_847uobe7ea/croot/anyio_1706220224037/work
appdirs==1.4.4
archspec @ file:///croot/archspec_1697725767277/work
argon2-cffi @ file:///opt/conda/conda-bld/argon2-cffi_1645000214183/work
argon2-cffi-bindings @ file:///C:/ci_311/argon2-cffi-bindings_1676424443321/work
arrow @ file:///C:/ci_311/arrow_1678249767083/work
astroid @ file:///C:/ci_311/astroid_1678740610167/work
astropy @ file:///C:/b/abs_2fb3x_tapx/croot/astropy_1697468987983/work
asttokens @ file:///opt/conda/conda-bld/asttokens_1646925590279/work
async-lru @ file:///C:/b/abs_e0hjkvwwb5/croot/async-lru_1699554572212/work
atomicwrites==1.4.0
attrs @ file:///C:/b/abs_35n0jusce8/croot/attrs_1695717880170/work
Automat @ file:///tmp/build/80754af9/automat_1600298431173/work
autopep8 @ file:///opt/conda/conda-bld/autopep8_1650463822033/work
Babel @ file:///C:/ci_311/babel_1676427169844/work
backports.functools-lru-cache @ file:///tmp/build/80754af9/backports.functools_lru_cache_1618170165463/work
backports.tempfile @ file:///home/linux1/recipes/ci/backports.tempfile_1610991236607/work
backports.weakref==1.0.post1
bcrypt @ file:///C:/ci_311/bcrypt_1676435170049/work
beautifulsoup4 @ file:///C:/b/abs_0agyz1wsr4/croot/beautifulsoup4-split_1681493048687/work
binaryornot @ file:///tmp/build/80754af9/binaryornot_1617751525010/work
black @ file:///C:/b/abs_29gqa9a44y/croot/black_1701097690150/work
bleach @ file:///opt/conda/conda-bld/bleach_1641577558959/work
blinker @ file:///C:/b/abs_d9y2dm7cw2/croot/blinker_1696539752170/work
bokeh @ file:///C:/b/abs_74ungdyhwc/croot/bokeh_1706912192007/work
boltons @ file:///C:/ci_311/boltons_1677729932371/work
botocore @ file:///C:/b/abs_5a285dtc94/croot/botocore_1701286504141/work
Bottleneck @ file:///C:/b/abs_f05kqh7yvj/croot/bottleneck_1707864273291/work
Brotli @ file:///C:/ci_311/brotli-split_1676435766766/work
cachetools @ file:///tmp/build/80754af9/cachetools_1619597386817/work
certifi @ file:///C:/b/abs_35d7n66oz9/croot/certifi_1707229248467/work/certifi
cffi @ file:///C:/b/abs_924gv1kxzj/croot/cffi_1700254355075/work
chardet @ file:///C:/ci_311/chardet_1676436134885/work
charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work
click @ file:///C:/b/abs_f9ihnt72pu/croot/click_1698129847492/work
cloudpickle @ file:///C:/b/abs_3796yxesic/croot/cloudpickle_1683040098851/work
clyent==1.2.2
colorama @ file:///C:/ci_311/colorama_1676422310965/work
colorcet @ file:///C:/ci_311/colorcet_1676440389947/work
comm @ file:///C:/ci_311/comm_1678376562840/work
conda @ file:///C:/b/abs_89vd8hj61u/croot/conda_1708369170790/work
conda-build @ file:///C:/b/abs_3ed9gavxgz/croot/conda-build_1708025907525/work
conda-content-trust @ file:///C:/b/abs_e3bcpyv7sw/croot/conda-content-trust_1693490654398/work
conda-libmamba-solver @ file:///croot/conda-libmamba-solver_1706733287605/work/src
conda-pack @ file:///tmp/build/80754af9/conda-pack_1611163042455/work
conda-package-handling @ file:///C:/b/abs_b9wp3lr1gn/croot/conda-package-handling_1691008700066/work
conda-repo-cli==1.0.75
conda-token @ file:///Users/paulyim/miniconda3/envs/c3i/conda-bld/conda-token_1662660369760/work
conda-verify==3.4.2
conda_index @ file:///croot/conda-index_1706633791028/work
conda_package_streaming @ file:///C:/b/abs_6c28n38aaj/croot/conda-package-streaming_1690988019210/work
constantly @ file:///C:/b/abs_cbuavw4443/croot/constantly_1703165617403/work
contourpy @ file:///C:/b/abs_853rfy8zse/croot/contourpy_1700583617587/work
cookiecutter @ file:///C:/b/abs_3d1730toam/croot/cookiecutter_1700677089156/work
cryptography @ file:///C:/b/abs_531eqmhgsd/croot/cryptography_1707523768330/work
cssselect @ file:///C:/b/abs_71gnjab7b0/croot/cssselect_1707339955530/work
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
cytoolz @ file:///C:/b/abs_d43s8lnb60/croot/cytoolz_1701723636699/work
dask @ file:///C:/b/abs_1899k8plyj/croot/dask-core_1701396135885/work
dataclasses-json==0.6.4
datashader @ file:///C:/b/abs_cb5s63ty8z/croot/datashader_1699544282143/work
debugpy @ file:///C:/b/abs_c0y1fjipt2/croot/debugpy_1690906864587/work
decorator @ file:///opt/conda/conda-bld/decorator_1643638310831/work
defusedxml @ file:///tmp/build/80754af9/defusedxml_1615228127516/work
diff-match-patch @ file:///Users/ktietz/demo/mc3/conda-bld/diff-match-patch_1630511840874/work
dill @ file:///C:/b/abs_084unuus3z/croot/dill_1692271268687/work
distributed @ file:///C:/b/abs_5eren88ku4/croot/distributed_1701398076011/work
distro @ file:///C:/b/abs_a3uni_yez3/croot/distro_1701455052240/work
docstring-to-markdown @ file:///C:/ci_311/docstring-to-markdown_1677742566583/work
docutils @ file:///C:/ci_311/docutils_1676428078664/work
entrypoints @ file:///C:/ci_311/entrypoints_1676423328987/work
et-xmlfile==1.1.0
executing @ file:///opt/conda/conda-bld/executing_1646925071911/work
fastjsonschema @ file:///C:/ci_311/python-fastjsonschema_1679500568724/work
ffmpeg-python==0.2.0
filelock @ file:///C:/b/abs_f2gie28u58/croot/filelock_1700591233643/work
flake8 @ file:///C:/ci_311/flake8_1678376624746/work
Flask @ file:///C:/b/abs_efc024w7fv/croot/flask_1702980041157/work
flatbuffers==24.3.7
fonttools==4.25.0
frozenlist @ file:///C:/b/abs_d8e__s1ys3/croot/frozenlist_1698702612014/work
fsspec @ file:///C:/b/abs_97mpfsesn0/croot/fsspec_1701286534629/work
future @ file:///C:/ci_311_rebuilds/future_1678998246262/work
gensim @ file:///C:/ci_311/gensim_1677743037820/work
gitdb @ file:///tmp/build/80754af9/gitdb_1617117951232/work
GitPython @ file:///C:/b/abs_e1lwow9h41/croot/gitpython_1696937027832/work
gmpy2 @ file:///C:/ci_311/gmpy2_1677743390134/work
greenlet @ file:///C:/b/abs_a6c75ie0bc/croot/greenlet_1702060012174/work
h11==0.14.0
h5py @ file:///C:/b/abs_17fav01gwy/croot/h5py_1691589733413/work
HeapDict @ file:///Users/ktietz/demo/mc3/conda-bld/heapdict_1630598515714/work
holoviews @ file:///C:/b/abs_704uucojt7/croot/holoviews_1707836477070/work
httpcore==1.0.4
httpx==0.27.0
hvplot @ file:///C:/b/abs_3627uzd5h0/croot/hvplot_1706712443782/work
hyperlink @ file:///tmp/build/80754af9/hyperlink_1610130746837/work
idna @ file:///C:/ci_311/idna_1676424932545/work
imagecodecs @ file:///C:/b/abs_e2g5zbs1q0/croot/imagecodecs_1695065012000/work
imageio @ file:///C:/b/abs_aeqerw_nps/croot/imageio_1707247365204/work
imagesize @ file:///C:/ci_311/imagesize_1676431905616/work
imbalanced-learn @ file:///C:/b/abs_87es3kd5fi/croot/imbalanced-learn_1700648276799/work
importlib-metadata @ file:///C:/b/abs_c1egths604/croot/importlib_metadata-suite_1704813568388/work
incremental @ file:///croot/incremental_1708639938299/work
inflection==0.5.1
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
intake @ file:///C:/ci_311_rebuilds/intake_1678999914269/work
intervaltree @ file:///Users/ktietz/demo/mc3/conda-bld/intervaltree_1630511889664/work
ipykernel @ file:///C:/b/abs_c2u94kxcy6/croot/ipykernel_1705933907920/work
ipython @ file:///C:/b/abs_b6pfgmrqnd/croot/ipython_1704833422163/work
ipython-genutils @ file:///tmp/build/80754af9/ipython_genutils_1606773439826/work
ipywidgets @ file:///croot/ipywidgets_1701289330913/work
isort @ file:///tmp/build/80754af9/isort_1628603791788/work
itemadapter @ file:///tmp/build/80754af9/itemadapter_1626442940632/work
itemloaders @ file:///C:/b/abs_5e3azgv25z/croot/itemloaders_1708639993442/work
itsdangerous @ file:///tmp/build/80754af9/itsdangerous_1621432558163/work
jaraco.classes @ file:///tmp/build/80754af9/jaraco.classes_1620983179379/work
jax==0.4.25
jedi @ file:///C:/ci_311/jedi_1679427407646/work
jellyfish @ file:///C:/b/abs_50kgvtnrbj/croot/jellyfish_1695193564091/work
Jinja2 @ file:///C:/b/abs_f7x5a8op2h/croot/jinja2_1706733672594/work
jmespath @ file:///C:/b/abs_59jpuaows7/croot/jmespath_1700144635019/work
joblib @ file:///C:/b/abs_1anqjntpan/croot/joblib_1685113317150/work
json5 @ file:///tmp/build/80754af9/json5_1624432770122/work
jsonpatch==1.33
jsonpointer==2.1
jsonschema @ file:///C:/b/abs_d1c4sm8drk/croot/jsonschema_1699041668863/work
jsonschema-specifications @ file:///C:/b/abs_0brvm6vryw/croot/jsonschema-specifications_1699032417323/work
jupyter @ file:///C:/b/abs_4e102rc6e5/croot/jupyter_1707947170513/work
jupyter-console @ file:///C:/b/abs_82xaa6i2y4/croot/jupyter_console_1680000189372/work
jupyter-events @ file:///C:/b/abs_17ajfqnlz0/croot/jupyter_events_1699282519713/work
jupyter-lsp @ file:///C:/b/abs_ecle3em9d4/croot/jupyter-lsp-meta_1699978291372/work
jupyter_client @ file:///C:/b/abs_a6h3c8hfdq/croot/jupyter_client_1699455939372/work
jupyter_core @ file:///C:/b/abs_c769pbqg9b/croot/jupyter_core_1698937367513/work
jupyter_server @ file:///C:/b/abs_7esjvdakg9/croot/jupyter_server_1699466495151/work
jupyter_server_terminals @ file:///C:/b/abs_ec0dq4b50j/croot/jupyter_server_terminals_1686870763512/work
jupyterlab @ file:///C:/b/abs_43venm28fu/croot/jupyterlab_1706802651134/work
jupyterlab-pygments @ file:///tmp/build/80754af9/jupyterlab_pygments_1601490720602/work
jupyterlab-widgets @ file:///C:/b/abs_adrrqr26no/croot/jupyterlab_widgets_1700169018974/work
jupyterlab_server @ file:///C:/b/abs_e08i7qn9m8/croot/jupyterlab_server_1699555481806/work
keyring @ file:///C:/b/abs_dbjc7g0dh2/croot/keyring_1678999228878/work
kiwisolver @ file:///C:/ci_311/kiwisolver_1676431979301/work
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
langsmith==0.1.23
lazy-object-proxy @ file:///C:/ci_311/lazy-object-proxy_1676432050939/work
lazy_loader @ file:///C:/b/abs_3bn4_r4g42/croot/lazy_loader_1695850158046/work
lckr_jupyterlab_variableinspector @ file:///C:/b/abs_b5yb2mprx2/croot/jupyterlab-variableinspector_1701096592545/work
libarchive-c @ file:///tmp/build/80754af9/python-libarchive-c_1617780486945/work
libmambapy @ file:///C:/b/abs_2euls_1a38/croot/mamba-split_1704219444888/work/libmambapy
linkify-it-py @ file:///C:/ci_311/linkify-it-py_1676474436187/work
llvmlite @ file:///C:/b/abs_da15r8vkf8/croot/llvmlite_1706910779994/work
lmdb @ file:///C:/b/abs_556ronuvb2/croot/python-lmdb_1682522366268/work
locket @ file:///C:/ci_311/locket_1676428325082/work
lxml @ file:///C:/b/abs_9e7tpg2vv9/croot/lxml_1695058219431/work
lz4 @ file:///C:/b/abs_064u6aszy3/croot/lz4_1686057967376/work
Markdown @ file:///C:/ci_311/markdown_1676437912393/work
markdown-it-py @ file:///C:/b/abs_a5bfngz6fu/croot/markdown-it-py_1684279915556/work
MarkupSafe @ file:///C:/b/abs_ecfdqh67b_/croot/markupsafe_1704206030535/work
marshmallow==3.21.1
matplotlib @ file:///C:/b/abs_e26vnvd5s1/croot/matplotlib-suite_1698692153288/work
matplotlib-inline @ file:///C:/ci_311/matplotlib-inline_1676425798036/work
mccabe @ file:///opt/conda/conda-bld/mccabe_1644221741721/work
mdit-py-plugins @ file:///C:/ci_311/mdit-py-plugins_1676481827414/work
mdurl @ file:///C:/ci_311/mdurl_1676442676678/work
mediapipe==0.10.11
menuinst @ file:///C:/b/abs_099kybla52/croot/menuinst_1706732987063/work
mistune @ file:///C:/ci_311/mistune_1676425111783/work
mkl-fft @ file:///C:/b/abs_19i1y8ykas/croot/mkl_fft_1695058226480/work
mkl-random @ file:///C:/b/abs_edwkj1_o69/croot/mkl_random_1695059866750/work
mkl-service==2.4.0
ml-dtypes==0.3.2
more-itertools @ file:///C:/b/abs_36p38zj5jx/croot/more-itertools_1700662194485/work
mpmath @ file:///C:/b/abs_7833jrbiox/croot/mpmath_1690848321154/work
msgpack @ file:///C:/ci_311/msgpack-python_1676427482892/work
multidict @ file:///C:/b/abs_44ido987fv/croot/multidict_1701097803486/work
multipledispatch @ file:///C:/ci_311/multipledispatch_1676442767760/work
munkres==1.1.4
mypy @ file:///C:/b/abs_3880czibje/croot/mypy-split_1708366584048/work
mypy-extensions @ file:///C:/b/abs_8f7xiidjya/croot/mypy_extensions_1695131051147/work
navigator-updater @ file:///C:/b/abs_895otdwmo9/croot/navigator-updater_1695210220239/work
nbclient @ file:///C:/b/abs_cal0q5fyju/croot/nbclient_1698934263135/work
nbconvert @ file:///C:/b/abs_17p29f_rx4/croot/nbconvert_1699022793097/work
nbformat @ file:///C:/b/abs_5a2nea1iu2/croot/nbformat_1694616866197/work
nest-asyncio @ file:///C:/b/abs_65d6lblmoi/croot/nest-asyncio_1708532721305/work
networkx @ file:///C:/b/abs_e6gi1go5op/croot/networkx_1690562046966/work
nltk @ file:///C:/b/abs_a638z6l1z0/croot/nltk_1688114186909/work
notebook @ file:///C:/b/abs_65xjlnf9q4/croot/notebook_1708029957105/work
notebook_shim @ file:///C:/b/abs_a5xysln3lb/croot/notebook-shim_1699455926920/work
numba @ file:///C:/b/abs_3e3co1qfvo/croot/numba_1707085143481/work
numexpr @ file:///C:/b/abs_5fucrty5dc/croot/numexpr_1696515448831/work
numpy @ file:///C:/b/abs_c1ywpu18ar/croot/numpy_and_numpy_base_1708638681471/work/dist/numpy-1.26.4-cp311-cp311-win_amd64.whl#sha256=5dfd3e04dc1c2826d3f404fdc7f93c097901f5da9b91f4f394f79d4e038ed81d
numpydoc @ file:///C:/ci_311/numpydoc_1676453412027/work
openai==1.13.3
openai-whisper==20231117
opencv-contrib-python==4.9.0.80
openpyxl==3.0.10
opt-einsum==3.3.0
orjson==3.9.15
overrides @ file:///C:/b/abs_cfh89c8yf4/croot/overrides_1699371165349/work
packaging==23.2
pandas @ file:///C:/b/abs_fej9bi0gew/croot/pandas_1702318041921/work/dist/pandas-2.1.4-cp311-cp311-win_amd64.whl#sha256=d3609b7cc3e3c4d99ad640a4b8e710ba93ccf967ab8e5245b91033e0200f9286
pandocfilters @ file:///opt/conda/conda-bld/pandocfilters_1643405455980/work
panel @ file:///C:/b/abs_abnm_ot327/croot/panel_1706539613212/work
param @ file:///C:/b/abs_39ncjvb7lu/croot/param_1705937833389/work
paramiko @ file:///opt/conda/conda-bld/paramiko_1640109032755/work
parsel @ file:///C:/b/abs_ebc3tzm_c4/croot/parsel_1707503517596/work
parso @ file:///opt/conda/conda-bld/parso_1641458642106/work
partd @ file:///C:/b/abs_46awex0fd7/croot/partd_1698702622970/work
pathlib @ file:///Users/ktietz/demo/mc3/conda-bld/pathlib_1629713961906/work
pathspec @ file:///C:/ci_311/pathspec_1679427644142/work
patsy==0.5.3
pexpect @ file:///tmp/build/80754af9/pexpect_1605563209008/work
pickleshare @ file:///tmp/build/80754af9/pickleshare_1606932040724/work
pillow @ file:///C:/b/abs_e22m71t0cb/croot/pillow_1707233126420/work
pkce @ file:///C:/b/abs_d0z4444tb0/croot/pkce_1690384879799/work
pkginfo @ file:///C:/b/abs_d18srtr68x/croot/pkginfo_1679431192239/work
platformdirs @ file:///C:/b/abs_b6z_yqw_ii/croot/platformdirs_1692205479426/work
plotly @ file:///C:/ci_311/plotly_1676443558683/work
pluggy @ file:///C:/ci_311/pluggy_1676422178143/work
ply==3.11
prometheus-client @ file:///C:/ci_311/prometheus_client_1679591942558/work
prompt-toolkit @ file:///C:/b/abs_68uwr58ed1/croot/prompt-toolkit_1704404394082/work
Protego @ file:///tmp/build/80754af9/protego_1598657180827/work
protobuf==3.20.3
psutil @ file:///C:/ci_311_rebuilds/psutil_1679005906571/work
ptyprocess @ file:///tmp/build/80754af9/ptyprocess_1609355006118/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
pure-eval @ file:///opt/conda/conda-bld/pure_eval_1646925070566/work
py-cpuinfo @ file:///C:/b/abs_9ej7u6shci/croot/py-cpuinfo_1698068121579/work
pyarrow @ file:///C:/b/abs_93i_y2dub4/croot/pyarrow_1707330894046/work/python
pyasn1 @ file:///Users/ktietz/demo/mc3/conda-bld/pyasn1_1629708007385/work
pyasn1-modules==0.2.8
pycodestyle @ file:///C:/ci_311/pycodestyle_1678376707834/work
pycosat @ file:///C:/b/abs_31zywn1be3/croot/pycosat_1696537126223/work
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
pyct @ file:///C:/ci_311/pyct_1676438538057/work
pycurl==7.45.2
pydantic @ file:///C:/b/abs_9byjrk31gl/croot/pydantic_1695798904828/work
pydeck @ file:///C:/b/abs_ad9p880wi1/croot/pydeck_1706194121328/work
PyDispatcher==2.0.5
pydocstyle @ file:///C:/ci_311/pydocstyle_1678402028085/work
pyerfa @ file:///C:/ci_311/pyerfa_1676503994641/work
pyflakes @ file:///C:/ci_311/pyflakes_1678402101687/work
Pygments @ file:///C:/b/abs_fay9dpq4n_/croot/pygments_1684279990574/work
PyJWT @ file:///C:/ci_311/pyjwt_1676438890509/work
pylint @ file:///C:/ci_311/pylint_1678740302984/work
pylint-venv @ file:///C:/ci_311/pylint-venv_1678402170638/work
pyls-spyder==0.4.0
PyNaCl @ file:///C:/ci_311/pynacl_1676445861112/work
pyodbc @ file:///C:/b/abs_90kly0uuwz/croot/pyodbc_1705431396548/work
pyOpenSSL @ file:///C:/b/abs_baj0aupznq/croot/pyopenssl_1708380486701/work
pyparsing @ file:///C:/ci_311/pyparsing_1678502182533/work
PyQt5==5.15.10
PyQt5-sip @ file:///C:/b/abs_c0pi2mimq3/croot/pyqt-split_1698769125270/work/pyqt_sip
PyQtWebEngine==5.15.6
PySocks @ file:///C:/ci_311/pysocks_1676425991111/work
pytest @ file:///C:/b/abs_48heoo_k8y/croot/pytest_1690475385915/work
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
python-dotenv @ file:///C:/ci_311/python-dotenv_1676455170580/work
python-gitlab==4.4.0
python-json-logger @ file:///C:/b/abs_cblnsm6puj/croot/python-json-logger_1683824130469/work
python-lsp-black @ file:///C:/ci_311/python-lsp-black_1678721855627/work
python-lsp-jsonrpc==1.0.0
python-lsp-server @ file:///C:/b/abs_catecj7fv1/croot/python-lsp-server_1681930405912/work
python-slugify @ file:///tmp/build/80754af9/python-slugify_1620405669636/work
python-snappy @ file:///C:/ci_311/python-snappy_1676446060182/work
pytoolconfig @ file:///C:/b/abs_f2j_xsvrpn/croot/pytoolconfig_1701728751207/work
pytz @ file:///C:/b/abs_19q3ljkez4/croot/pytz_1695131651401/work
pyviz_comms @ file:///C:/b/abs_31r9afnand/croot/pyviz_comms_1701728067143/work
pywavelets @ file:///C:/b/abs_7est386xsb/croot/pywavelets_1705049855879/work
pywin32==305.1
pywin32-ctypes @ file:///C:/ci_311/pywin32-ctypes_1676427747089/work
pywinpty @ file:///C:/ci_311/pywinpty_1677707791185/work/target/wheels/pywinpty-2.0.10-cp311-none-win_amd64.whl
PyYAML @ file:///C:/b/abs_782o3mbw7z/croot/pyyaml_1698096085010/work
pyzmq @ file:///C:/b/abs_89aq69t0up/croot/pyzmq_1705605705281/work
QDarkStyle @ file:///tmp/build/80754af9/qdarkstyle_1617386714626/work
qstylizer @ file:///C:/ci_311/qstylizer_1678502012152/work/dist/qstylizer-0.2.2-py2.py3-none-any.whl
QtAwesome @ file:///C:/ci_311/qtawesome_1678402331535/work
qtconsole @ file:///C:/b/abs_eb4u9jg07y/croot/qtconsole_1681402843494/work
QtPy @ file:///C:/b/abs_derqu__3p8/croot/qtpy_1700144907661/work
queuelib @ file:///C:/b/abs_563lpxcne9/croot/queuelib_1696951148213/work
referencing @ file:///C:/b/abs_09f4hj6adf/croot/referencing_1699012097448/work
regex @ file:///C:/b/abs_d5e2e5uqmr/croot/regex_1696515472506/work
requests @ file:///C:/b/abs_474vaa3x9e/croot/requests_1707355619957/work
requests-file @ file:///Users/ktietz/demo/mc3/conda-bld/requests-file_1629455781986/work
requests-toolbelt @ file:///C:/b/abs_2fsmts66wp/croot/requests-toolbelt_1690874051210/work
rfc3339-validator @ file:///C:/b/abs_ddfmseb_vm/croot/rfc3339-validator_1683077054906/work
rfc3986-validator @ file:///C:/b/abs_6e9azihr8o/croot/rfc3986-validator_1683059049737/work
rich @ file:///C:/b/abs_09j2g5qnu8/croot/rich_1684282185530/work
rope @ file:///C:/ci_311/rope_1678402524346/work
rpds-py @ file:///C:/b/abs_76j4g4la23/croot/rpds-py_1698947348047/work
Rtree @ file:///C:/ci_311/rtree_1676455758391/work
ruamel-yaml-conda @ file:///C:/ci_311/ruamel_yaml_1676455799258/work
ruamel.yaml @ file:///C:/ci_311/ruamel.yaml_1676439214109/work
s3fs @ file:///C:/b/abs_24vbfcawyu/croot/s3fs_1701294224436/work
scikit-image @ file:///C:/b/abs_f7z1pjjn6f/croot/scikit-image_1707346180040/work
scikit-learn==1.4.1.post1
scipy==1.11.4
Scrapy @ file:///C:/ci_311/scrapy_1678502587780/work
seaborn @ file:///C:/ci_311/seaborn_1676446547861/work
semver @ file:///tmp/build/80754af9/semver_1603822362442/work
Send2Trash @ file:///C:/b/abs_08dh49ew26/croot/send2trash_1699371173324/work
service-identity @ file:///Users/ktietz/demo/mc3/conda-bld/service_identity_1629460757137/work
sip @ file:///C:/b/abs_edevan3fce/croot/sip_1698675983372/work
six @ file:///tmp/build/80754af9/six_1644875935023/work
smart-open @ file:///C:/ci_311/smart_open_1676439339434/work
smmap @ file:///tmp/build/80754af9/smmap_1611694433573/work
sniffio @ file:///C:/b/abs_3akdewudo_/croot/sniffio_1705431337396/work
snowballstemmer @ file:///tmp/build/80754af9/snowballstemmer_1637937080595/work
sortedcontainers @ file:///tmp/build/80754af9/sortedcontainers_1623949099177/work
sounddevice==0.4.6
soupsieve @ file:///C:/b/abs_bbsvy9t4pl/croot/soupsieve_1696347611357/work
Sphinx @ file:///C:/ci_311/sphinx_1676434546244/work
sphinxcontrib-applehelp @ file:///home/ktietz/src/ci/sphinxcontrib-applehelp_1611920841464/work
sphinxcontrib-devhelp @ file:///home/ktietz/src/ci/sphinxcontrib-devhelp_1611920923094/work
sphinxcontrib-htmlhelp @ file:///tmp/build/80754af9/sphinxcontrib-htmlhelp_1623945626792/work
sphinxcontrib-jsmath @ file:///home/ktietz/src/ci/sphinxcontrib-jsmath_1611920942228/work
sphinxcontrib-qthelp @ file:///home/ktietz/src/ci/sphinxcontrib-qthelp_1611921055322/work
sphinxcontrib-serializinghtml @ file:///tmp/build/80754af9/sphinxcontrib-serializinghtml_1624451540180/work
spyder @ file:///C:/b/abs_e99kl7d8t0/croot/spyder_1681934304813/work
spyder-kernels @ file:///C:/b/abs_e788a8_4y9/croot/spyder-kernels_1691599588437/work
SQLAlchemy @ file:///C:/b/abs_876dxwqqu8/croot/sqlalchemy_1705089154696/work
stack-data @ file:///opt/conda/conda-bld/stack_data_1646927590127/work
statsmodels @ file:///C:/b/abs_7bth810rna/croot/statsmodels_1689937298619/work
streamlit==1.32.0
sympy @ file:///C:/b/abs_82njkonm7f/croot/sympy_1701397685028/work
tables @ file:///C:/b/abs_411740ajo7/croot/pytables_1705614883108/work
tabulate @ file:///C:/b/abs_21rf8iibnh/croot/tabulate_1701354830521/work
tblib @ file:///Users/ktietz/demo/mc3/conda-bld/tblib_1629402031467/work
tenacity @ file:///C:/b/abs_ddkoa9nju6/croot/tenacity_1682972298929/work
terminado @ file:///C:/ci_311/terminado_1678228513830/work
text-unidecode @ file:///Users/ktietz/demo/mc3/conda-bld/text-unidecode_1629401354553/work
textdistance @ file:///tmp/build/80754af9/textdistance_1612461398012/work
threadpoolctl @ file:///Users/ktietz/demo/mc3/conda-bld/threadpoolctl_1629802263681/work
three-merge @ file:///tmp/build/80754af9/three-merge_1607553261110/work
tifffile @ file:///C:/b/abs_45o5chuqwt/croot/tifffile_1695107511025/work
tiktoken==0.6.0
tinycss2 @ file:///C:/ci_311/tinycss2_1676425376744/work
tldextract @ file:///opt/conda/conda-bld/tldextract_1646638314385/work
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomlkit @ file:///C:/ci_311/tomlkit_1676425418821/work
toolz @ file:///C:/ci_311/toolz_1676431406517/work
torch==2.2.1
tornado @ file:///C:/b/abs_0cbrstidzg/croot/tornado_1696937003724/work
tqdm @ file:///C:/b/abs_f76j9hg7pv/croot/tqdm_1679561871187/work
traitlets @ file:///C:/ci_311/traitlets_1676423290727/work
truststore @ file:///C:/b/abs_55z7b3r045/croot/truststore_1695245455435/work
Twisted @ file:///C:/b/abs_e7yqd811in/croot/twisted_1708702883769/work
twisted-iocpsupport @ file:///C:/ci_311/twisted-iocpsupport_1676447612160/work
typing-inspect==0.9.0
typing_extensions @ file:///C:/b/abs_72cdotwc_6/croot/typing_extensions_1705599364138/work
tzdata @ file:///croot/python-tzdata_1690578112552/work
tzlocal @ file:///C:/ci_311/tzlocal_1676439620276/work
uc-micro-py @ file:///C:/ci_311/uc-micro-py_1676457695423/work
ujson @ file:///C:/ci_311/ujson_1676434714224/work
Unidecode @ file:///tmp/build/80754af9/unidecode_1614712377438/work
urllib3 @ file:///C:/b/abs_0c3739ssy1/croot/urllib3_1707349314852/work
validators @ file:///tmp/build/80754af9/validators_1612286467315/work
w3lib @ file:///C:/b/abs_957begrwnl/croot/w3lib_1708640020760/work
watchdog @ file:///C:/ci_311/watchdog_1676457923624/work
wcwidth @ file:///Users/ktietz/demo/mc3/conda-bld/wcwidth_1629357192024/work
webencodings==0.5.1
websocket-client @ file:///C:/ci_311/websocket-client_1676426063281/work
Werkzeug @ file:///C:/b/abs_8578rs2ra_/croot/werkzeug_1679489759009/work
whatthepatch @ file:///C:/ci_311/whatthepatch_1678402578113/work
widgetsnbextension @ file:///C:/b/abs_derxhz1biv/croot/widgetsnbextension_1701273671518/work
win-inet-pton @ file:///C:/ci_311/win_inet_pton_1676425458225/work
wrapt @ file:///C:/ci_311/wrapt_1676432805090/work
xarray @ file:///C:/b/abs_5bkjiynp4e/croot/xarray_1689041498548/work
xlwings @ file:///C:/ci_311_rebuilds/xlwings_1679013429160/work
xyzservices @ file:///C:/ci_311/xyzservices_1676434829315/work
yapf @ file:///tmp/build/80754af9/yapf_1615749224965/work
yarl @ file:///C:/b/abs_8bxwdyhjvp/croot/yarl_1701105248152/work
zict @ file:///C:/b/abs_780gyydtbp/croot/zict_1695832899404/work
zipp @ file:///C:/b/abs_b0beoc27oa/croot/zipp_1704206963359/work
zope.interface @ file:///C:/ci_311/zope.interface_1676439868776/work
zstandard==0.19.0
| toolkit.get_tools() is not working langchain version 6.0.1 if deprecated then no alternative | https://api.github.com/repos/langchain-ai/langchain/issues/18907/comments | 2 | 2024-03-11T11:42:43Z | 2024-03-11T15:21:00Z | https://github.com/langchain-ai/langchain/issues/18907 | 2,178,923,631 | 18,907 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain.chains import create_tagging_chain
from langchain_openai import ChatOpenAI

model = ChatOpenAI(temperature=0)  # assumption: the report does not show how `model` was built

schema = {
    "properties": {
        "sentiment": {"type": "string"},
        "aggressiveness": {"type": "integer"},
        "language": {"type": "string"},
    }
}
chain = create_tagging_chain(schema, model)
test_string = "Hey there!! We are going to celebrate john's birthday. Suggest some celebration ideas."
res = chain.invoke(test_string)
### Error Message and Stack Trace (if applicable)
File "/python3.11/site-packages/langchain/chains/llm.py", line 104, in _call
return self.create_outputs(response)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/python3.11/site-packages/langchain/chains/llm.py", line 258, in create_outputs
result = [
^
File "/python3.11/site-packages/langchain/chains/llm.py", line 261, in <listcomp>
self.output_key: self.output_parser.parse_result(generation),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/python3.11/site-packages/langchain_core/output_parsers/openai_functions.py", line 102, in parse_result
raise OutputParserException(
langchain_core.exceptions.OutputParserException: Could not parse function call data: Expecting property name enclosed in double quotes: line 2 column 3 (char 4)
### Description
### System Info
System Information
------------------
> OS: Linux
> OS Version: #18~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC
> Python Version: 3.11.6 (main, Oct 23 2023, 22:47:21) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.1.29
> langchain: 0.1.11
> langchain_community: 0.0.25
> langsmith: 0.1.22
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| Output Parser Exception for Tagging UseCase | https://api.github.com/repos/langchain-ai/langchain/issues/18906/comments | 5 | 2024-03-11T11:27:19Z | 2024-06-20T16:09:14Z | https://github.com/langchain-ai/langchain/issues/18906 | 2,178,895,489 | 18,906 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
print(db.dialect)
print(db.get_usable_table_names())
db.run("SELECT * FROM Artist LIMIT 10;")

from langchain_community.agent_toolkits import create_sql_agent
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(...)  # Azure deployment settings elided by the reporter
agent_executor = create_sql_agent(llm, db=db, agent_type="openai-tools", verbose=True)
agent_executor.invoke(
    "List the total sales per country. Which country's customers spent the most?"
)
### Error Message and Stack Trace (if applicable)
An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse tool input: {'arguments': '# I will use the sql_db_list_tables and sql_db_schema tools to see what tables are in the database and their schemas.\n\nfrom functions import sql_db_list_tables, sql_db_schema\n\n# List the tables in the database\nprint(sql_db_list_tables(__arg1=""))\n\n# Get the schema for the relevant tables\nprint(sql_db_schema({"table_names": "orders, customers, countries"}))', 'name': 'python'} because the `arguments` is not valid JSON.
### Description
Even when enabling "handle_parsing_errors" on the AgentExecutor, I don't get the result shown in the tutorial, only some SQL operations performed by the agent.
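For reference, a minimal sketch of one way to forward that flag, assuming the installed `create_sql_agent` still accepts `agent_executor_kwargs`; those kwargs are passed through to the underlying AgentExecutor:
```python
agent_executor = create_sql_agent(
    llm,
    db=db,
    agent_type="openai-tools",
    verbose=True,
    # Forwarded to AgentExecutor; lets the agent retry on malformed tool output.
    agent_executor_kwargs={"handle_parsing_errors": True},
)
```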
### System Info
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-experimental==0.0.53
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
langsmith==0.1.23
| Langchain SQL Agent tutorial runs into error | https://api.github.com/repos/langchain-ai/langchain/issues/18905/comments | 4 | 2024-03-11T11:26:00Z | 2024-07-30T16:06:21Z | https://github.com/langchain-ai/langchain/issues/18905 | 2,178,892,943 | 18,905 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The template for `rag-weaviate` uses the following code:
```python
vectorstore = Weaviate.from_existing_index(WEAVIATE_INDEX_NAME, OpenAIEmbeddings())
retriever = vectorstore.as_retriever()
```
### Error Message and Stack Trace (if applicable)
However this leads to the following error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[17], line 1
----> 1 vectorstore = Weaviate.from_existing_index(weaviate_url=WEAVIATE_URL, weaviate_api_key="", embedding=OpenAIEmbeddings())
      2 retriever = vectorstore.as_retriever()
AttributeError: type object 'Weaviate' has no attribute 'from_existing_index'
```
### Description
The template for `rag-weaviate` is outdated.
Line at https://github.com/langchain-ai/langchain/blob/master/templates/rag-weaviate/rag_weaviate/chain.py#L36
uses a method that doesn't exist.
Here is the alternative that works:
```python
import os
import weaviate

client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...)
# index_name matches the existing class; text_key="text" assumes the default property name.
vectorstore = Weaviate(client, index_name=WEAVIATE_INDEX_NAME, text_key="text")
```
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Debian 5.10.209-2 (2024-01-31)
> Python Version: 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:53:32) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.1.30
> langchain: 0.1.8
> langchain_community: 0.0.21
> langsmith: 0.1.3
> langchain_openai: 0.0.8
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | The template for `rag-weaviate` is outdated. | https://api.github.com/repos/langchain-ai/langchain/issues/18902/comments | 0 | 2024-03-11T10:49:59Z | 2024-06-17T16:09:28Z | https://github.com/langchain-ai/langchain/issues/18902 | 2,178,818,275 | 18,902 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
When I use OpenAI as the LLM, everything works fine. But when I used
```
from langchain.llms.fake import FakeListLLM
model = FakeListLLM(responses=['hello'])  # the field is `responses` (plural)
```
it raised this:
```
Error
Traceback (most recent call last):
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 175, in parse_and_check_json_markdown
json_obj = parse_json_markdown(text)
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 157, in parse_json_markdown
parsed = parser(json_str)
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 125, in parse_partial_json
return json.loads(s, strict=strict)
File "/usr/lib/python3.10/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ali/.local/lib/python3.10/site-packages/langchain/chains/query_constructor/base.py", line 50, in parse
parsed = parse_and_check_json_markdown(text, expected_keys)
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 177, in parse_and_check_json_markdown
raise OutputParserException(f"Got invalid JSON object. Error: {e}")
langchain_core.exceptions.OutputParserException: Got invalid JSON object. Error: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/home/ali/projects/nlp/parschat-logic/parschat_logic/use_cases/documents/patent_qa_11/algorithm.py", line 134, in __call__
retrieved_docs = await self.run_retriever(query=input_data.message)
File "/home/ali/projects/nlp/parschat-logic/parschat_logic/use_cases/documents/patent_qa_11/algorithm.py", line 144, in run_retriever
return await self.__getattribute__(
File "/home/ali/projects/nlp/parschat-logic/parschat_logic/use_cases/documents/patent_qa_11/algorithm.py", line 151, in retrieve_docs_by_self_query
relevant_docs = self.retrieval.invoke(query)
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 141, in invoke
return self.get_relevant_documents(
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 244, in get_relevant_documents
raise e
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 237, in get_relevant_documents
result = self._get_relevant_documents(
File "/home/ali/.local/lib/python3.10/site-packages/langchain/retrievers/self_query/base.py", line 181, in _get_relevant_documents
structured_query = self.query_constructor.invoke(
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4069, in invoke
return self.bound.invoke(
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2075, in invoke
input = step.invoke(
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/output_parsers/base.py", line 178, in invoke
return self._call_with_config(
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1262, in _call_with_config
context.run(
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/output_parsers/base.py", line 179, in <lambda>
lambda inner_input: self.parse_result([Generation(text=inner_input)]),
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/output_parsers/base.py", line 221, in parse_result
return self.parse(result[0].text)
File "/home/ali/.local/lib/python3.10/site-packages/langchain/chains/query_constructor/base.py", line 63, in parse
raise OutputParserException(
langchain_core.exceptions.OutputParserException: Parsing text
hello
raised following error:
Got invalid JSON object. Error: Expecting value: line 1 column 1 (char 0)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to use LangChain's FakeListLLM in my unit tests for self-query retrieval.
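A minimal sketch of a workaround, assuming the default query-constructor parser is in play: the fake response must be the fenced JSON structured query that `StructuredQueryOutputParser` expects (keys "query" and "filter", with "NO_FILTER" meaning no filter), not free text like "hello":
```python
from langchain.llms.fake import FakeListLLM

# The parser in the traceback expects a fenced json block with "query" and "filter" keys;
# "NO_FILTER" is its token for "no filter applied".
fake_structured_query = '```json\n{"query": "hello", "filter": "NO_FILTER"}\n```'
model = FakeListLLM(responses=[fake_structured_query])
```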
### System Info
```
langchain==0.1.11
langchain-community==0.0.25
langchain-core==0.1.29
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
ubuntu 22.04
python 3.10 | error during testing selfquery with langchainmockmodel | https://api.github.com/repos/langchain-ai/langchain/issues/18900/comments | 0 | 2024-03-11T10:29:09Z | 2024-06-17T16:10:14Z | https://github.com/langchain-ai/langchain/issues/18900 | 2,178,776,395 | 18,900 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.chat_models import BedrockChat
from langchain_experimental.sql import SQLDatabaseChain

# `client`, `db_connection`, and `few_shot_prompt` are defined elsewhere in the reporter's app.
llm = BedrockChat(
    client=client,
    model_id='anthropic.claude-3-sonnet-20240229-v1:0'
)
SQLDatabaseChain.from_llm(
    llm,
    db_connection,
    prompt=few_shot_prompt,
    verbose=False,
    return_intermediate_steps=True,
)
### Error Message and Stack Trace (if applicable)
SQLDatabaseChain returns SQL code as final answer.
### Description
Problem: SQLDatabaseChain returns the SQL code as the final answer (as well as in the intermediate steps) when using the BedrockChat Claude 3 Sonnet model as the LLM inside the SQLDatabaseChain.
Ideally, SQLDatabaseChain should return the result fetched from the database after executing the SQL code as the final answer, with the SQL code itself in the intermediate steps.
Does the SQLDatabaseChain work with BedrockChat and claude3 sonnet?
### System Info
awslambdaric==2.0.8
boto3==1.34.37
chromadb==0.4.22
huggingface-hub==0.20.3
langchain==0.1.11
langchain-community==0.0.27
langchain-experimental==0.0.50
PyYAML==6.0.1
sentence_transformers==2.3.0
snowflake-connector-python==3.7.0
snowflake-sqlalchemy==1.5.1
SQLAlchemy==1.4.51 | BedrockChat Claude3 and SQLDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/18893/comments | 8 | 2024-03-11T07:29:10Z | 2024-04-18T04:25:46Z | https://github.com/langchain-ai/langchain/issues/18893 | 2,178,414,326 | 18,893 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code will not work:
```
from langchain.memory.entity import UpstashRedisEntityStore
entity_store = UpstashRedisEntityStore(
    session_id="my-session",
    url="your-upstash-url",
    token="your-upstash-redis-token",
    ttl=600,
)
```
### Error Message and Stack Trace (if applicable)
Upstash Redis instance could not be initiated.
Traceback (most recent call last):
File "/Users/albertpurnama/Documents/dev/qornetto/main.py", line 27, in <module>
entity_store = UpstashRedisEntityStore(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/albertpurnama/Documents/dev/qornetto/.venv/lib/python3.11/site-packages/langchain/memory/entity.py", line 106, in __init__
self.session_id = session_id
^^^^^^^^^^^^^^^
File "/Users/albertpurnama/Documents/dev/qornetto/.venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 357, in __setattr__
raise ValueError(f'"{self.__class__.__name__}" object has no field "{name}"')
ValueError: "UpstashRedisEntityStore" object has no field "session_id"
### Description
I'm trying to use `UpstashRedisEntityStore` but the initializer does not work.
### System Info
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
langchainhub==0.1.15
mac
python 3.11.6 | `UpstashRedisEntityStore` initializer does not work. Upstash Redis Instance could not be created. | https://api.github.com/repos/langchain-ai/langchain/issues/18891/comments | 1 | 2024-03-11T06:54:51Z | 2024-06-18T16:09:46Z | https://github.com/langchain-ai/langchain/issues/18891 | 2,178,363,508 | 18,891 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
model=chatglm3-6b
python=3.10.11
langchain=0.1.11
langchain-community=0.0.27
langchain-core=0.1.30
langchain-openai=0.0.8
openai=1.13.3

@liugddx please help take a look; agent function calling never goes through.
### Error Message and Stack Trace (if applicable)

Exception details
### Description
Agent function calling did not take effect.
### System Info
python
| langchain agent returned an empty string, error when parsing markdown, agent function calling did not take effect | https://api.github.com/repos/langchain-ai/langchain/issues/18888/comments | 0 | 2024-03-11T03:21:10Z | 2024-06-17T16:09:33Z | https://github.com/langchain-ai/langchain/issues/18888 | 2,178,141,777 | 18,888
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.llms import VLLM
import asyncio
import time

async def async_generate_response(llm, prompt):
    result = await llm.ainvoke(prompt)
    return result

async def async_handle_user(llm, user_prompt, system_prompt):
    instructions = get_instructions(user_prompt, system_prompt)
    prompt = build_llama2_prompt(instructions)
    response = await async_generate_response(llm, prompt)
    return response

async def main_multiple_users():
    llm = VLLM(
        model="/home/userdata/Downloads/vllm_llama/model/Llama-2-7b-Chat-GPTQ",
        # trust_remote_code=True,
        max_new_tokens=200,  # Increase the maximum number of new tokens to allow for longer outputs
        top_k=10,
        top_p=1,
        temperature=0.1,
        tensor_parallel_size=1,
        vllm_kwargs={"quantization": "GPTQ", "enforce_eager": "True", "gpu_memory_utilization": 0.5}
    )
    system_prompt = """
    As a Machine Learning engineer who is teaching high school students, explain the fundamental concepts of machine learning, including supervised, unsupervised, and reinforcement learning. Provide real-world examples of each type of learning and discuss their applications in various domains such as healthcare, finance, and autonomous vehicles.
    """
    user_prompt = '''
    Explore the concept of transfer learning in machine learning. Explain how pre-trained models can be leveraged to improve the performance of new tasks with limited data. Provide examples of popular pre-trained models and discuss their applications across different domains.
    '''
    start_time = time.time()
    tasks = [async_handle_user(llm, user_prompt, system_prompt) for _ in range(5)]  # Simulate 5 users
    responses = await asyncio.gather(*tasks)
    for idx, response in enumerate(responses):
        print(f"User {idx + 1} Response:", response)
    end_time = time.time()
    print("Multiple Users Execution Time:", end_time - start_time)

def build_llama2_prompt(instructions):
    stop_token = "</s>"
    start_token = "<s>"
    startPrompt = f"{start_token}[INST] "
    endPrompt = " [/INST]"
    conversation = []
    for index, instruction in enumerate(instructions):
        if instruction["role"] == "system" and index == 0:
            conversation.append(f"<<SYS>>\n{instruction['content']}\n<</SYS>>\n\n")
        elif instruction["role"] == "user":
            conversation.append(instruction["content"].strip())
        else:
            conversation.append(f"{endPrompt} {instruction['content'].strip()} {stop_token}{startPrompt}")
    return startPrompt + "".join(conversation) + endPrompt

def get_instructions(user_prompt, system_prompt):
    instructions = [
        {"role": "system", "content": f"{system_prompt} "},
    ]
    instructions.append({"role": "user", "content": f"{user_prompt}"})
    return instructions

async def run():
    await main_multiple_users()

asyncio.run(run())
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When I run the code above, this is the error that occurs:
<img width="1288" alt="Screenshot 2024-03-11 at 10 48 24 AM" src="https://github.com/langchain-ai/langchain/assets/130896959/8758b238-c21d-4f21-9dde-e3ed7563003c">
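As a hedged workaround sketch (not from the report): if the installed VLLM wrapper has no native async path and `ainvoke` is what fails, offloading the blocking sync call to a worker thread keeps the per-user concurrency without touching the wrapper. `asyncio.to_thread` is standard library (Python 3.9+):
```python
async def async_generate_response(llm, prompt):
    # Run the synchronous invoke in a worker thread so the event loop is not blocked.
    return await asyncio.to_thread(llm.invoke, prompt)
```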
### System Info
newer version of langchain | Problem using ainvoke with vllm | https://api.github.com/repos/langchain-ai/langchain/issues/18887/comments | 1 | 2024-03-11T02:45:08Z | 2024-03-11T15:25:17Z | https://github.com/langchain-ai/langchain/issues/18887 | 2,178,108,189 | 18,887 |
[
"hwchase17",
"langchain"
] | I access ollama using the python library.
It communicates well, but after some exchanges I always get the following. It seems that I need to reset Ollama via Python, or maybe the context length is exceeded; how do I figure it out?
```
Traceback (most recent call last):
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 715, in urlopen
httplib_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 467, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 462, in _make_request
httplib_response = conn.getresponse()
^^^^^^^^^^^^^^^^^^
File "c:\Lib\http\client.py", line 1386, in getresponse
response.begin()
File "c:\Lib\http\client.py", line 325, in begin
version, status, reason = self._read_status()
^^^^^^^^^^^^^^^^^^^
File "c:\Lib\http\client.py", line 286, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\socket.py", line 706, in readinto
return self._sock.recv_into(b)
^^^^^^^^^^^^^^^^^^^^^^^
ConnectionResetError: [WinError 10054] Eine vorhandene Verbindung wurde vom Remotehost geschlossen
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Lib\site-packages\requests\adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 799, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\urllib3\util\retry.py", line 550, in increment
raise six.reraise(type(error), error, _stacktrace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\urllib3\packages\six.py", line 769, in reraise
raise value.with_traceback(tb)
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 715, in urlopen
httplib_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 467, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 462, in _make_request
httplib_response = conn.getresponse()
^^^^^^^^^^^^^^^^^^
File "c:\Lib\http\client.py", line 1386, in getresponse
response.begin()
File "c:\Lib\http\client.py", line 325, in begin
version, status, reason = self._read_status()
^^^^^^^^^^^^^^^^^^^
File "c:\Lib\http\client.py", line 286, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\socket.py", line 706, in readinto
return self._sock.recv_into(b)
^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(10054, 'Eine vorhandene Verbindung wurde vom Remotehost geschlossen', None, 10054, None))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Lib\site-packages\langchain_community\embeddings\ollama.py", line 157, in _process_emb_response
res = requests.post(
^^^^^^^^^^^^^^
File "c:\Lib\site-packages\requests\api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\requests\api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\requests\adapters.py", line 501, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054, 'Eine vorhandene Verbindung wurde vom Remotehost geschlossen', None, 10054, None))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\test.py", line 123, in <module>
rag(ds("documents"), "")
File "E:\test.py", line 93, in rag
result = chain.invoke(aufgabe).replace("\n"," ").replace("\r"," ").replace(" "," ")
^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\runnables\base.py", line 2075, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\runnables\base.py", line 2712, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\runnables\base.py", line 2712, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^
File "c:\Lib\concurrent\futures\_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "c:\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "c:\Lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\retrievers.py", line 141, in invoke
return self.get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\retrievers.py", line 244, in get_relevant_documents
raise e
File "c:\Lib\site-packages\langchain_core\retrievers.py", line 237, in get_relevant_documents
result = self._get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\vectorstores.py", line 674, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\vectorstores\chroma.py", line 348, in similarity_search
docs_and_scores = self.similarity_search_with_score(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\vectorstores\chroma.py", line 437, in similarity_search_with_score
query_embedding = self._embedding_function.embed_query(query)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\embeddings\ollama.py", line 217, in embed_query
embedding = self._embed([instruction_pair])[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\embeddings\ollama.py", line 192, in _embed
return [self._process_emb_response(prompt) for prompt in iter_]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\embeddings\ollama.py", line 192, in <listcomp>
return [self._process_emb_response(prompt) for prompt in iter_]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\embeddings\ollama.py", line 163, in _process_emb_response
raise ValueError(f"Error raised by inference endpoint: {e}")
ValueError: Error raised by inference endpoint: ('Connection aborted.', ConnectionResetError(10054, 'Eine vorhandene Verbindung wurde vom Remotehost geschlossen', None, 10054, None))
``` | Ollama logging for ConnectionResetError | https://api.github.com/repos/langchain-ai/langchain/issues/18879/comments | 3 | 2024-03-10T20:35:35Z | 2024-06-21T16:38:09Z | https://github.com/langchain-ai/langchain/issues/18879 | 2,177,893,861 | 18,879 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Environment
langchain 0.1.11
langchain-community 0.0.25
langchain-core 0.1.29
langchain-experimental 0.0.49
langchain-text-splitters 0.0.1
langchainhub 0.1.15
langchainplus-sdk 0.0.20
Symptom
In cookbook/fake_llm.ipynb, there are 3 warnings which need to be fixed.
(1) Failed to load python_repl.
Traceback (most recent call last):
File "./my_fake_llm.py", line 7, in <module>
tools = load_tools(["python_repl"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/langchain/agents/load_tools.py", line 617, in load_tools
raise ValueError(f"Got unknown tool {name}")
ValueError: Got unknown tool python_repl
(2) A "The function `initialize_agent` was deprecated" warning message.
(3) A "The function `run` was deprecated" warning message.
### Idea or request for content:
(1) The following diff fixes the problem.
from langchain_experimental.tools import PythonREPLTool
tools = [PythonREPLTool()]
#tools = load_tools(["python_repl"])
(2) The function `initialize_agent` was deprecated.
Replacing `initialize_agent` with `create_react_agent` fixes the warning message.
(3) The function `run` was deprecated.
Updating `run` to `invoke` fixes the warning, as sketched below.
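For reference, a minimal sketch of what (2) and (3) could look like together, assuming the standard ReAct prompt from the hub and the `llm`/`tools` already built in the notebook:
```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent

# Pull a stock ReAct prompt rather than relying on the deprecated initialize_agent defaults.
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# invoke() replaces the deprecated run(); it takes and returns dicts.
result = agent_executor.invoke({"input": "whats 2 + 2"})
```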
| DOC: cookbook fake_llm.ipynb need to be updated, in order to fix 3 warnings. | https://api.github.com/repos/langchain-ai/langchain/issues/18874/comments | 0 | 2024-03-10T15:57:40Z | 2024-06-16T16:09:19Z | https://github.com/langchain-ai/langchain/issues/18874 | 2,177,782,566 | 18,874 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_core.output_parsers import StrOutputParser
import torch
from langchain_core.prompts import ChatPromptTemplate, PromptTemplate
model_id = "openchat/openchat-3.5-0106"
model = AutoModelForCausalLM.from_pretrained(model_id, cache_dir='C:/Users/Timmek/Documents/model1', torch_dtype=torch.bfloat16, load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, cache_dir='C:/Users/Timmek/Documents/model1', torch_dtype=torch.bfloat16)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=192, model_kwargs={"temperature":0.6})
llm = HuggingFacePipeline(pipeline=pipe)
# Create the ChatPromptTemplate
prompt_template = PromptTemplate.from_template("The user asked: '{input}'. How would you respond?")
output_parser = StrOutputParser()
chain = prompt_template | llm | output_parser
question = "Hi"
print(chain.invoke({"input": question}))
```
### Error Message and Stack Trace (if applicable)
Console:
```
A) Goodbye
B) Hello
C) See you later
D) Sorry
Answer: B) Hello
10. The user asked: 'Have you ever been to Rome?' How would you respond?
A) Yes, I was there on my honeymoon.
B) No, I have never traveled there.
C) I've never been, but I would like to.
D) I prefer traveling to New York.
Answer: C) I've never been, but I would like to.
11. The user asked: 'Do you like going on vacation?' How would you respond?
A) I like it, but I prefer staying at home.
B) I don't like vacations, I prefer staying at home.
C) I love going on vacation.
```
### Description
I want to use "openchat-3.5-0106" as a chatbot, but I can't do it with LangChain functions. Instead of responding, the chatbot produces a CONTINUATION of the request, not a response to it.
I use HuggingFacePipeline and PromptTemplate.
I also tried using HuggingFacePipeline directly, without transformers:
```
llm = HuggingFacePipeline.from_model_id(
    model_id="openchat/openchat-3.5-0106",
    task="text-generation",
    model_kwargs=model_kwargs,
    device=0,
    pipeline_kwargs={"temperature": 0.7},
)
```
Using ChatPromptTemplate instead of PromptTemplate gives the same result, and the problem remains if I remove the "temperature" setting or point "cache_dir" at a different directory.
You could say that the problem is with me or with openchat, but if I ONLY use transformers functions, then everything is fine:
```
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_name_or_path = "openchat/openchat-3.5-0106"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, cache_dir='C:/Users/Timmek/Documents/model', torch_dtype=torch.bfloat16, load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, cache_dir='C:/Users/Timmek/Documents/model', torch_dtype=torch.bfloat16).to(DEVICE)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=192)
message_text = "hi"
messages = [
    {"role": "user", "content": message_text},
]
response = pipe(messages, max_new_tokens=192)
response = response[0]['generated_text'][1]['content']
print (response)
```
console:
```
Setting `pad_token_id` to `eos_token_id`:32000 for open-end generation.
[{'generated_text': [{'role': 'user', 'content': 'hi'}, {'role': 'assistant', 'content': ' Hello! How can I help you today?'}]}]
Hello! How can I help you today?
```
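A hedged guess at the cause, not something the report confirms: the transformers pipeline applies the model's chat template when it is handed a list of messages, whereas HuggingFacePipeline sends the raw prompt string, so the model simply continues the text. If that is right, applying the template manually before invoking the chain should close the gap:
```python
# Sketch only: reuses `tokenizer` and `llm` from the code above; `question` is the user message.
formatted = tokenizer.apply_chat_template(
    [{"role": "user", "content": question}],
    tokenize=False,
    add_generation_prompt=True,
)
print(llm.invoke(formatted))
```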
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22621
> Python Version: 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.29
> langchain: 0.1.11
> langchain_community: 0.0.25
> langsmith: 0.1.21
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
> langserve: 0.0.46
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | When using "openchat-3.5-0106" together with "HuggingFacePipeline", the chatbot does not respond to the message BUT CONTINUES IT | https://api.github.com/repos/langchain-ai/langchain/issues/18870/comments | 0 | 2024-03-10T14:19:42Z | 2024-06-16T16:09:14Z | https://github.com/langchain-ai/langchain/issues/18870 | 2,177,741,965 | 18,870 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
There are many old library paths in the documentation, such as 'langchain.llms' instead of 'langchain**_community**.llms(.huggingface_hub)' and so on. This issue affects all at least langchain_community parts.
https://python.langchain.com/docs/integrations/chat/huggingface
### Idea or request for content:
Since more then one version are used, a good choice would be selectable versions of the documentation. A drop-down field at the top of the edge (like pytorch-docs -> https://pytorch.org/docs/stable/index.html) is simple to use and good for versioning. | DOC: The documentation is not up-to-date. | https://api.github.com/repos/langchain-ai/langchain/issues/18867/comments | 2 | 2024-03-10T11:21:05Z | 2024-06-17T16:09:11Z | https://github.com/langchain-ai/langchain/issues/18867 | 2,177,665,347 | 18,867 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am using the URL/API endpoint below to access a server-based Ollama model.
Below is my local test to ensure it works.
```python
from langchain_community.llms import Ollama
llm = Ollama(
    base_url="http://138.26.48.126:11434",
    model="gemma"
)
prompt = 'Give me one name for a new national park with jungle terrain?'
print(llm.invoke(prompt))
```
However, I am using this in the context of a Streamlit app with longer prompts. When I try a longer prompt (even with the setup above), it times out. Is there a way to increase the timeout length?
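A minimal sketch of one thing to try, assuming the installed `langchain_community` version exposes a `timeout` field on `Ollama` (treat the field name and semantics as an assumption to verify against your version):
```python
llm = Ollama(
    base_url="http://138.26.48.126:11434",
    model="gemma",
    timeout=300,  # assumption: request-stream timeout in seconds, if the field exists in your version
)
```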
### Error Message and Stack Trace (if applicable)
```console
2024-03-09 14:41:53.979 Uncaught app exception
Traceback (most recent call last):
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/connection.py", line 203, in _new_conn
sock = connection.create_connection(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
raise err
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
TimeoutError: [Errno 60] Operation timed out
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/connectionpool.py", line 790, in urlopen
response = self._make_request(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/connectionpool.py", line 496, in _make_request
conn.request(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/connection.py", line 395, in request
self.endheaders()
File "/usr/local/Cellar/[email protected]/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py", line 1278, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/Cellar/[email protected]/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py", line 1038, in _send_output
self.send(msg)
File "/usr/local/Cellar/[email protected]/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py", line 976, in send
self.connect()
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/connection.py", line 243, in connect
self.sock = self._new_conn()
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/connection.py", line 212, in _new_conn
raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPConnection object at 0x137bbff10>, 'Connection to 138.26.49.149 timed out. (connect timeout=None)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/connectionpool.py", line 844, in urlopen
retries = retries.increment(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='138.26.49.149', port=11434): Max retries exceeded with url: /api/generate (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x137bbff10>, 'Connection to 138.26.49.149 timed out. (connect timeout=None)'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/streamlit/app_no_db.py", line 129, in <module>
show_grant_guide_page()
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/streamlit/app_no_db.py", line 42, in show_grant_guide_page
documents = grant_generate.search_grant_guide_vectorstore(query=aims, store=vectorstore)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/Grant_Guide/generate.py", line 30, in search_grant_guide_vectorstore
docs = docsearch.get_relevant_documents(query)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_core/retrievers.py", line 244, in get_relevant_documents
raise e
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_core/retrievers.py", line 237, in get_relevant_documents
result = self._get_relevant_documents(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 674, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_community/vectorstores/faiss.py", line 548, in similarity_search
docs_and_scores = self.similarity_search_with_score(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_community/vectorstores/faiss.py", line 420, in similarity_search_with_score
embedding = self._embed_query(query)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_community/vectorstores/faiss.py", line 157, in _embed_query
return self.embedding_function(text)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 991, in __call__
self.generate(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 741, in generate
output = self._generate_helper(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 605, in _generate_helper
raise e
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 592, in _generate_helper
self._generate(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_community/llms/ollama.py", line 408, in _generate
final_chunk = super()._stream_with_aggregation(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_community/llms/ollama.py", line 317, in _stream_with_aggregation
for stream_resp in self._create_generate_stream(prompt, stop, **kwargs):
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_community/llms/ollama.py", line 159, in _create_generate_stream
yield from self._create_stream(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_community/llms/ollama.py", line 220, in _create_stream
response = requests.post(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/requests/api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/requests/adapters.py", line 507, in send
raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: HTTPConnectionPool(host='138.26.49.149', port=11434): Max retries exceeded with url: /api/generate (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x137bbff10>, 'Connection to 138.26.49.149 timed out. (connect timeout=None)'))
```
### Description
When I try a longer prompt, it times out. Is there a way to increase the timeout length?
### System Info
```console
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:28:58 PST 2023; root:xnu-10002.81.5~7/RELEASE_X86_64
> Python Version: 3.10.13 (main, Aug 24 2023, 12:59:26) [Clang 15.0.0 (clang-1500.0.40.1)]
Package Information
-------------------
> langchain_core: 0.1.30
> langchain: 0.1.11
> langchain_community: 0.0.27
> langsmith: 0.1.23
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
```
| TimeoutError with Longer Prompts | https://api.github.com/repos/langchain-ai/langchain/issues/18855/comments | 1 | 2024-03-09T21:45:51Z | 2024-03-11T15:33:27Z | https://github.com/langchain-ai/langchain/issues/18855 | 2,177,414,910 | 18,855 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# NOTE: imports reconstructed for completeness -- the original snippet omitted them
import os
import streamlit as st
from pinecone import Pinecone
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_pinecone import PineconeVectorStore
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

def get_vectorstore(text_chunks):
    embeddings = OpenAIEmbeddings()
    # Vector Search DB in Pinecone
    Pinecone(api_key=os.environ.get('PINECONE_API_KEY'))
    index_name = "langchainvector"
    vectorstore = PineconeVectorStore.from_documents(text_chunks, embeddings, index_name=index_name)
    # vectorstore = FAISS.from_texts(texts=text_chunks, embedding=embeddings)
    return vectorstore

def get_conversation_chain(vectorstore):
    llm = ChatOpenAI()
    # llm = HuggingFaceHub(repo_id="google/flan-t5-xxl", model_kwargs={"temperature": 0.5, "max_length": 512})
    memory = ConversationBufferMemory(
        memory_key='chat_history', return_messages=True)
    conversation_chain = ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=vectorstore.as_retriever(),
        memory=memory,
    )
    return conversation_chain

# create conversation chain (excerpt from a larger Streamlit app;
# `vectorstore` and `main` are defined elsewhere)
st.session_state.conversation = get_conversation_chain(
    vectorstore)

if __name__ == '__main__':
    main()
```
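A hedged diagnostic, assuming the mismatch is between `BaseRetriever` classes from different installed LangChain packages: check the retriever's class hierarchy before building the chain.

```python
# Hypothetical diagnostic -- run this before constructing the chain.
from langchain_core.retrievers import BaseRetriever

retriever = vectorstore.as_retriever()
print(type(retriever).__mro__)               # shows the full class hierarchy
print(isinstance(retriever, BaseRetriever))  # False would explain the pydantic error
```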
### Error Message and Stack Trace (if applicable)
_No response_
### Description
File "D:\LLM projects\ask-multiple-pdfs-main\ask-multiple-pdfs-main\app.py", line 128, in <module>
main()
File "D:\LLM projects\ask-multiple-pdfs-main\ask-multiple-pdfs-main\app.py", line 123, in main
st.session_state.conversation = get_conversation_chain(
File "D:\LLM projects\ask-multiple-pdfs-main\ask-multiple-pdfs-main\app.py", line 67, in get_conversation_chain
conversation_chain = ConversationalRetrievalChain.from_llm(
File "d:\llm projects\ask-multiple-pdfs-main\ask-multiple-pdfs-main\llm\lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 212, in from_llm
return cls(
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ConversationalRetrievalChain
retriever
instance of BaseRetriever expected (type=type_error.arbitrary_type; expected_arbitrary_type=BaseRetriever)
### System Info
NA | validation error for ConversationalRetrievalChain retriever instance of BaseRetriever expected (type=type_error.arbitrary_type; expected_arbitrary_type=BaseRetriever) | https://api.github.com/repos/langchain-ai/langchain/issues/18852/comments | 1 | 2024-03-09T17:47:03Z | 2024-05-21T04:37:50Z | https://github.com/langchain-ai/langchain/issues/18852 | 2,177,329,394 | 18,852 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I think there is a typo in the [tools documentation](https://python.langchain.com/docs/modules/agents/tools/), in [this paragraph](https://github.com/langchain-ai/langchain/blob/b48865bf94d4d738504bcd10accae0fb238b280d/docs/docs/modules/agents/tools/index.ipynb#L29). In _"can be used the prompt the LLM"_ the phrase appears to have a typo or omission. It seems it should either be _"can be used to prompt the LLM"_ or _"can be used as the prompt for the LLM"_
### Idea or request for content:
_No response_ | DOC: typo in tools documentation | https://api.github.com/repos/langchain-ai/langchain/issues/18849/comments | 0 | 2024-03-09T15:30:12Z | 2024-03-09T21:39:19Z | https://github.com/langchain-ai/langchain/issues/18849 | 2,177,267,953 | 18,849 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import streamlit as st
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
from langchain_community.llms.bedrock import Bedrock
from langchain_community.retrievers.bedrock import AmazonKnowledgeBasesRetriever
retriever = AmazonKnowledgeBasesRetriever(
knowledge_base_id="XXXXXXXXXX", # Input KB ID here
retrieval_config={
"vectorSearchConfiguration": {
"numberOfResults": 10,
"overrideSearchType": "HYBRID"
}})
prompt = ChatPromptTemplate.from_template("Answer questions based on the context below: {context} / Question: {question}")
model = Bedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0", model_kwargs={"max_tokens_to_sample": 1000})
chain = ({"context": retriever, "question": RunnablePassthrough()} | prompt | model | StrOutputParser())
st.title("Ask Bedrock")
question = st.text_input("Input your question")
button = st.button("Ask!")
if button:
st.write(chain.invoke(question))
```
### Error Message and Stack Trace (if applicable)
```
ValidationError: 1 validation error for Bedrock __root__ Claude v3 models are not supported by this LLM.Please use `from langchain_community.chat_models import BedrockChat` instead. (type=value_error)
Traceback:
File "/home/ec2-user/.local/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "/home/ec2-user/environment/rag.py", line 22, in <module>
model = Bedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0", model_kwargs={"max_tokens_to_sample": 1000})
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/load/serializable.py", line 120, in __init__
super().__init__(**kwargs)
File "/home/ec2-user/.local/lib/python3.9/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
```
### Description
To use Claude 3 (Sonnet) on Amazon Bedrock, `langchain_community` apparently needs updating: the `Bedrock` LLM class rejects Claude v3 model IDs outright, and the error message points users to `BedrockChat` instead.
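A minimal sketch of the workaround the error message itself suggests (the `max_tokens` key is an assumption, since Claude 3's messages API no longer uses `max_tokens_to_sample`):

```python
from langchain_community.chat_models import BedrockChat

model = BedrockChat(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    model_kwargs={"max_tokens": 1000},  # assumption: Claude 3 expects max_tokens
)
```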
### System Info
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-text-splitters==0.0.1
macOS 14.2.1(23C71)
Python 3.9.16 | Bedrock doesn't work with Claude 3 | https://api.github.com/repos/langchain-ai/langchain/issues/18845/comments | 4 | 2024-03-09T12:49:08Z | 2024-03-11T13:23:59Z | https://github.com/langchain-ai/langchain/issues/18845 | 2,177,214,299 | 18,845 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# NOTE: missing imports reconstructed for completeness; duplicate imports removed.
import os

from langchain_community.llms import HuggingFaceEndpoint
from langchain_community.utilities import SQLDatabase
from langchain.agents import create_sql_agent, AgentExecutor
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.agents.agent_types import AgentType

# connection details (username, password, etc.) elided in the original report
pg_uri = f"postgresql+psycopg2://{username}:{password}@{host}:{port}/{mydatabase}"
db = SQLDatabase.from_uri(pg_uri)

repo_id = "mistralai/Mistral-7B-Instruct-v0.2"
# `max_length` is what triggers the TypeError below: it gets forwarded to
# InferenceClient.text_generation(), which does not accept it.
llm = HuggingFaceEndpoint(
    repo_id=repo_id, max_length=128, temperature=0.5, token=HUGGINGFACEHUB_API_TOKEN
)

agent_executor = create_sql_agent(
    llm=llm,
    toolkit=SQLDatabaseToolkit(db=db, llm=llm),
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
agent_executor.run(
    "what is the id of host spencer ?"
)
```
### Error Message and Stack Trace (if applicable)
```
> Entering new SQL Agent Executor chain...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[222], line 1
----> 1 agent_executor.run(
2 "what is the id of host spencer ?"
3 )
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/_api/deprecation.py:145, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
143 warned = True
144 emit_warning()
--> 145 return wrapped(*args, **kwargs)
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/chains/base.py:545, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
543 if len(args) != 1:
544 raise ValueError("`run` supports only one positional argument.")
--> 545 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
546 _output_key
547 ]
549 if kwargs and not args:
550 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
551 _output_key
552 ]
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/_api/deprecation.py:145, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
143 warned = True
144 emit_warning()
--> 145 return wrapped(*args, **kwargs)
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/chains/base.py:378, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
346 """Execute the chain.
347
348 Args:
(...)
369 `Chain.output_keys`.
370 """
371 config = {
372 "callbacks": callbacks,
373 "tags": tags,
374 "metadata": metadata,
375 "run_name": run_name,
376 }
--> 378 return self.invoke(
379 inputs,
380 cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
381 return_only_outputs=return_only_outputs,
382 include_run_info=include_run_info,
383 )
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/chains/base.py:163, in Chain.invoke(self, input, config, **kwargs)
161 except BaseException as e:
162 run_manager.on_chain_error(e)
--> 163 raise e
164 run_manager.on_chain_end(outputs)
166 if include_run_info:
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/chains/base.py:153, in Chain.invoke(self, input, config, **kwargs)
150 try:
151 self._validate_inputs(inputs)
152 outputs = (
--> 153 self._call(inputs, run_manager=run_manager)
154 if new_arg_supported
155 else self._call(inputs)
156 )
158 final_outputs: Dict[str, Any] = self.prep_outputs(
159 inputs, outputs, return_only_outputs
160 )
161 except BaseException as e:
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/agents/agent.py:1391, in AgentExecutor._call(self, inputs, run_manager)
1389 # We now enter the agent loop (until it returns something).
1390 while self._should_continue(iterations, time_elapsed):
-> 1391 next_step_output = self._take_next_step(
1392 name_to_tool_map,
1393 color_mapping,
1394 inputs,
1395 intermediate_steps,
1396 run_manager=run_manager,
1397 )
1398 if isinstance(next_step_output, AgentFinish):
1399 return self._return(
1400 next_step_output, intermediate_steps, run_manager=run_manager
1401 )
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/agents/agent.py:1097, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1088 def _take_next_step(
1089 self,
1090 name_to_tool_map: Dict[str, BaseTool],
(...)
1094 run_manager: Optional[CallbackManagerForChainRun] = None,
1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1096 return self._consume_next_step(
-> 1097 [
1098 a
1099 for a in self._iter_next_step(
1100 name_to_tool_map,
1101 color_mapping,
1102 inputs,
1103 intermediate_steps,
1104 run_manager,
1105 )
1106 ]
1107 )
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/agents/agent.py:1097, in <listcomp>(.0)
1088 def _take_next_step(
1089 self,
1090 name_to_tool_map: Dict[str, BaseTool],
(...)
1094 run_manager: Optional[CallbackManagerForChainRun] = None,
1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1096 return self._consume_next_step(
-> 1097 [
1098 a
1099 for a in self._iter_next_step(
1100 name_to_tool_map,
1101 color_mapping,
1102 inputs,
1103 intermediate_steps,
1104 run_manager,
1105 )
1106 ]
1107 )
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/agents/agent.py:1125, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1122 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
1124 # Call the LLM to see what to do.
-> 1125 output = self.agent.plan(
1126 intermediate_steps,
1127 callbacks=run_manager.get_child() if run_manager else None,
1128 **inputs,
1129 )
1130 except OutputParserException as e:
1131 if isinstance(self.handle_parsing_errors, bool):
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/agents/agent.py:387, in RunnableAgent.plan(self, intermediate_steps, callbacks, **kwargs)
381 # Use streaming to make sure that the underlying LLM is invoked in a streaming
382 # fashion to make it possible to get access to the individual LLM tokens
383 # when using stream_log with the Agent Executor.
384 # Because the response from the plan is not a generator, we need to
385 # accumulate the output into final output and return that.
386 final_output: Any = None
--> 387 for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
388 if final_output is None:
389 final_output = chunk
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/runnables/base.py:2446, in RunnableSequence.stream(self, input, config, **kwargs)
2440 def stream(
2441 self,
2442 input: Input,
2443 config: Optional[RunnableConfig] = None,
2444 **kwargs: Optional[Any],
2445 ) -> Iterator[Output]:
-> 2446 yield from self.transform(iter([input]), config, **kwargs)
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/runnables/base.py:2433, in RunnableSequence.transform(self, input, config, **kwargs)
2427 def transform(
2428 self,
2429 input: Iterator[Input],
2430 config: Optional[RunnableConfig] = None,
2431 **kwargs: Optional[Any],
2432 ) -> Iterator[Output]:
-> 2433 yield from self._transform_stream_with_config(
2434 input,
2435 self._transform,
2436 patch_config(config, run_name=(config or {}).get("run_name") or self.name),
2437 **kwargs,
2438 )
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/runnables/base.py:1513, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
1511 try:
1512 while True:
-> 1513 chunk: Output = context.run(next, iterator) # type: ignore
1514 yield chunk
1515 if final_output_supported:
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/runnables/base.py:2397, in RunnableSequence._transform(self, input, run_manager, config)
2388 for step in steps:
2389 final_pipeline = step.transform(
2390 final_pipeline,
2391 patch_config(
(...)
2394 ),
2395 )
-> 2397 for output in final_pipeline:
2398 yield output
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/runnables/base.py:1051, in Runnable.transform(self, input, config, **kwargs)
1048 final: Input
1049 got_first_val = False
-> 1051 for chunk in input:
1052 if not got_first_val:
1053 final = chunk
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/runnables/base.py:4173, in RunnableBindingBase.transform(self, input, config, **kwargs)
4167 def transform(
4168 self,
4169 input: Iterator[Input],
4170 config: Optional[RunnableConfig] = None,
4171 **kwargs: Any,
4172 ) -> Iterator[Output]:
-> 4173 yield from self.bound.transform(
4174 input,
4175 self._merge_configs(config),
4176 **{**self.kwargs, **kwargs},
4177 )
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/runnables/base.py:1061, in Runnable.transform(self, input, config, **kwargs)
1058 final = final + chunk # type: ignore[operator]
1060 if got_first_val:
-> 1061 yield from self.stream(final, config, **kwargs)
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/language_models/llms.py:452, in BaseLLM.stream(self, input, config, stop, **kwargs)
445 except BaseException as e:
446 run_manager.on_llm_error(
447 e,
448 response=LLMResult(
449 generations=[[generation]] if generation else []
450 ),
451 )
--> 452 raise e
453 else:
454 run_manager.on_llm_end(LLMResult(generations=[[generation]]))
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/language_models/llms.py:436, in BaseLLM.stream(self, input, config, stop, **kwargs)
434 generation: Optional[GenerationChunk] = None
435 try:
--> 436 for chunk in self._stream(
437 prompt, stop=stop, run_manager=run_manager, **kwargs
438 ):
439 yield chunk.text
440 if generation is None:
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_community/llms/huggingface_endpoint.py:310, in HuggingFaceEndpoint._stream(self, prompt, stop, run_manager, **kwargs)
301 def _stream(
302 self,
303 prompt: str,
(...)
306 **kwargs: Any,
307 ) -> Iterator[GenerationChunk]:
308 invocation_params = self._invocation_params(stop, **kwargs)
--> 310 for response in self.client.text_generation(
311 prompt, **invocation_params, stream=True
312 ):
313 # identify stop sequence in generated text, if any
314 stop_seq_found: Optional[str] = None
315 for stop_seq in invocation_params["stop_sequences"]:
TypeError: InferenceClient.text_generation() got an unexpected keyword argument 'max_length'
```
### Description
While executing `agent_executor.run("what is the id of host spencer ?")` I get the following error:
`TypeError: InferenceClient.text_generation() got an unexpected keyword argument 'max_length'`
Observation: the same query works fine when run through `chain.run("what is the id of host spencer ?")`.
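A hedged workaround (assumption: the failure is only the unknown `max_length` kwarg): pass `max_new_tokens`, which `InferenceClient.text_generation()` does accept, and use the wrapper's own token field.

```python
llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",
    max_new_tokens=128,  # accepted by InferenceClient.text_generation
    temperature=0.5,
    huggingfacehub_api_token=HUGGINGFACEHUB_API_TOKEN,
)
```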
### System Info
Python 3.11.2 (main, Feb 8 2023, 00:00:00) [GCC 13.0.1 20230208 (Red Hat 13.0.1-0)] on linux
Type "help", "copyright", "credits" or "license" for more information.
OS : Linux : Fedora-38
```
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-experimental==0.0.53
langchain-openai==0.0.8
langchain-text-splitters==0.0.1 ``` | TypeError: on SQL Agent Executor chain | https://api.github.com/repos/langchain-ai/langchain/issues/18838/comments | 4 | 2024-03-09T08:02:11Z | 2024-08-05T16:07:25Z | https://github.com/langchain-ai/langchain/issues/18838 | 2,177,127,356 | 18,838 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import streamlit as st
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory
from langchain_community.chat_models import BedrockChat
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
st.title("Bedrock Chat")
if "session_id" not in st.session_state:
st.session_state.session_id = "session_id"
if "history" not in st.session_state:
st.session_state.history = DynamoDBChatMessageHistory(
table_name="BedrockChatSessionTable", session_id=st.session_state.session_id
)
if "chain" not in st.session_state:
chat = BedrockChat(
model_id="anthropic.claude-3-sonnet-20240229-v1:0",
model_kwargs={"max_tokens": 1000},
streaming=True,
)
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are AI bot."),
MessagesPlaceholder(variable_name="history"),
("human", "{question}"),
]
)
chain = prompt | chat
st.session_state.chain = RunnableWithMessageHistory(
chain,
lambda x: st.session_state.history,
input_messages_key="question",
history_messages_key="history",
)
if st.button("Clear history"):
st.session_state.history.clear()
for message in st.session_state.history.messages:
if message.type == "human":
with st.chat_message("human"):
st.markdown(message.content)
if message.type == "AIMessageChunk":
with st.chat_message("ai"):
st.markdown(message.content)
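# NOTE: the walrus assignment below rebinds `prompt`, shadowing the ChatPromptTemplate defined earlier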
if prompt := st.chat_input("What is up?"):
with st.chat_message("user"):
st.markdown(prompt)
with st.chat_message("assistant"):
response = st.write_stream(
st.session_state.chain.stream(
{"question": prompt},
config={"configurable": {"session_id": st.session_state.session_id}}
)
)
```
### Error Message and Stack Trace (if applicable)
```
2024-03-09 02:28:02.148 Uncaught app exception
Traceback (most recent call last):
File "/home/ec2-user/.local/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 542, in _run_script
exec(code, module.__dict__)
File "/home/ec2-user/environment/langchain-bedrock-handson/example6.py", line 59, in <module>
response = st.write_stream(
File "/home/ec2-user/.local/lib/python3.9/site-packages/streamlit/runtime/metrics_util.py", line 397, in wrapped_func
result = non_optional_func(*args, **kwargs)
File "/home/ec2-user/.local/lib/python3.9/site-packages/streamlit/elements/write.py", line 159, in write_stream
for chunk in stream: # type: ignore
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 4137, in stream
yield from self.bound.stream(
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 4137, in stream
yield from self.bound.stream(
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2446, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2433, in transform
yield from self._transform_stream_with_config(
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1513, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2397, in _transform
for output in final_pipeline:
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 4173, in transform
yield from self.bound.transform(
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2433, in transform
yield from self._transform_stream_with_config(
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1513, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2397, in _transform
for output in final_pipeline:
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1061, in transform
yield from self.stream(final, config, **kwargs)
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 250, in stream
raise e
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 234, in stream
for chunk in self._stream(
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_community/chat_models/bedrock.py", line 211, in _stream
system, formatted_messages = ChatPromptAdapter.format_messages(
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_community/chat_models/bedrock.py", line 157, in format_messages
return _format_anthropic_messages(messages)
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_community/chat_models/bedrock.py", line 78, in _format_anthropic_messages
role = _message_type_lookups[message.type]
KeyError: 'AIMessageChunk'
```
### Description
I created a chat app with Streamlit:
* I use Amazon Bedrock (claude-3-sonnet)
* I use DynamoDBChatMessageHistory as the chat message history
* When I call `chain.stream`, the error above is raised (a hedged workaround sketch follows)
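A hedged workaround sketch (assumption: `RunnableWithMessageHistory` persists the streamed output as `AIMessageChunk` entries, which `_format_anthropic_messages` cannot map): coerce stored chunks back to plain `AIMessage`s before they reach the Bedrock formatter.

```python
from langchain_core.messages import AIMessage, AIMessageChunk

def sanitize(messages):
    """Hypothetical helper: replace chunk entries with plain AIMessages."""
    return [
        AIMessage(content=m.content) if isinstance(m, AIMessageChunk) else m
        for m in messages
    ]
```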
### System Info
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-text-splitters==0.0.1
Python 3.9.16
OS: Amazon Linux 2023(Cloud 9) | Bedrock(claude-3-sonnet) with DynamoDBChatMessageHistory raise error | https://api.github.com/repos/langchain-ai/langchain/issues/18831/comments | 2 | 2024-03-09T02:32:40Z | 2024-07-30T16:06:15Z | https://github.com/langchain-ai/langchain/issues/18831 | 2,177,015,653 | 18,831 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import torch
import accelerate
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
TrainingArguments
)
from transformers import pipeline
from langchain.llms import HuggingFacePipeline
from typing import List
from langchain_core.tools import tool
from langchain.agents import AgentExecutor, create_react_agent
from langchain.agents import AgentType, initialize_agent
from langchain.prompts import PromptTemplate
bnb_config = BitsAndBytesConfig(
load_in_4bit=True, # 4 bit quantization
bnb_4bit_quant_type="nf4", # For weights initializes using a normal distribution
bnb_4bit_compute_dtype=torch.bfloat16, # Match model dtype
bnb_4bit_use_double_quant=True, # Nested quantization improves performance
)
model_name='mistralai/Mistral-7B-Instruct-v0.1'
tokenizer = AutoTokenizer.from_pretrained(
model_name,
padding_side="right",
)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=bnb_config,
torch_dtype=torch.bfloat16,
device_map={"": 0},
)
# Create huggingface pipeline
text_generation_pipeline = pipeline(
model=model,
tokenizer=tokenizer,
task="text-generation",
max_new_tokens=200,
pad_token_id=tokenizer.eos_token_id,
)
# Create langchain llm from huggingface pipeline
mistral_llm = HuggingFacePipeline(pipeline=text_generation_pipeline)
@tool
def get_data(n: int) -> List[dict]:
"""Get n datapoints."""
return [{"name": "foo", "value": "bar"}] * n
tools = [get_data]
#prompt = PromptTemplate(
# input_variables=['agent_scratchpad', 'input', 'tool_names', 'tools'],
# template='Answer the following questions as best you can. You have access to the following tools:\n\n{tools}\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: {input}\nThought:{agent_scratchpad}'
#)
prompt = PromptTemplate.from_template(
"""Answer the following questions as best you can. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action\nObservation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
Thought:{agent_scratchpad}"""
)
agent = create_react_agent(
llm=mistral_llm,
tools=tools,
prompt=prompt,
)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "get me three datapoints"})
```
### Error Message and Stack Trace (if applicable)
```
> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-61-69f05d02fa37>](https://localhost:8080/#) in <cell line: 48>()
46 agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
47
---> 48 agent_executor.invoke({"input": "get me three datapoints"})
49
50
27 frames
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
161 except BaseException as e:
162 run_manager.on_chain_error(e)
--> 163 raise e
164 run_manager.on_chain_end(outputs)
165
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
151 self._validate_inputs(inputs)
152 outputs = (
--> 153 self._call(inputs, run_manager=run_manager)
154 if new_arg_supported
155 else self._call(inputs)
[/usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py](https://localhost:8080/#) in _call(self, inputs, run_manager)
1389 # We now enter the agent loop (until it returns something).
1390 while self._should_continue(iterations, time_elapsed):
-> 1391 next_step_output = self._take_next_step(
1392 name_to_tool_map,
1393 color_mapping,
[/usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py](https://localhost:8080/#) in _take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1096 return self._consume_next_step(
-> 1097 [
1098 a
1099 for a in self._iter_next_step(
[/usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py](https://localhost:8080/#) in <listcomp>(.0)
1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1096 return self._consume_next_step(
-> 1097 [
1098 a
1099 for a in self._iter_next_step(
[/usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py](https://localhost:8080/#) in _iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1123
1124 # Call the LLM to see what to do.
-> 1125 output = self.agent.plan(
1126 intermediate_steps,
1127 callbacks=run_manager.get_child() if run_manager else None,
[/usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py](https://localhost:8080/#) in plan(self, intermediate_steps, callbacks, **kwargs)
385 # accumulate the output into final output and return that.
386 final_output: Any = None
--> 387 for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
388 if final_output is None:
389 final_output = chunk
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in stream(self, input, config, **kwargs)
2444 **kwargs: Optional[Any],
2445 ) -> Iterator[Output]:
-> 2446 yield from self.transform(iter([input]), config, **kwargs)
2447
2448 async def atransform(
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in transform(self, input, config, **kwargs)
2431 **kwargs: Optional[Any],
2432 ) -> Iterator[Output]:
-> 2433 yield from self._transform_stream_with_config(
2434 input,
2435 self._transform,
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in _transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
1511 try:
1512 while True:
-> 1513 chunk: Output = context.run(next, iterator) # type: ignore
1514 yield chunk
1515 if final_output_supported:
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in _transform(self, input, run_manager, config)
2395 )
2396
-> 2397 for output in final_pipeline:
2398 yield output
2399
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in transform(self, input, config, **kwargs)
1049 got_first_val = False
1050
-> 1051 for chunk in input:
1052 if not got_first_val:
1053 final = chunk
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in transform(self, input, config, **kwargs)
4171 **kwargs: Any,
4172 ) -> Iterator[Output]:
-> 4173 yield from self.bound.transform(
4174 input,
4175 self._merge_configs(config),
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in transform(self, input, config, **kwargs)
1059
1060 if got_first_val:
-> 1061 yield from self.stream(final, config, **kwargs)
1062
1063 async def atransform(
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py](https://localhost:8080/#) in stream(self, input, config, stop, **kwargs)
407 if type(self)._stream == BaseLLM._stream:
408 # model doesn't implement streaming, so use default implementation
--> 409 yield self.invoke(input, config=config, stop=stop, **kwargs)
410 else:
411 prompt = self._convert_input(input).to_string()
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py](https://localhost:8080/#) in invoke(self, input, config, stop, **kwargs)
271 config = ensure_config(config)
272 return (
--> 273 self.generate_prompt(
274 [self._convert_input(input)],
275 stop=stop,
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py](https://localhost:8080/#) in generate_prompt(self, prompts, stop, callbacks, **kwargs)
566 ) -> LLMResult:
567 prompt_strings = [p.to_string() for p in prompts]
--> 568 return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
569
570 async def agenerate_prompt(
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py](https://localhost:8080/#) in generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs)
739 )
740 ]
--> 741 output = self._generate_helper(
742 prompts, stop, run_managers, bool(new_arg_supported), **kwargs
743 )
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py](https://localhost:8080/#) in _generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
603 for run_manager in run_managers:
604 run_manager.on_llm_error(e, response=LLMResult(generations=[]))
--> 605 raise e
606 flattened_outputs = output.flatten()
607 for manager, flattened_output in zip(run_managers, flattened_outputs):
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py](https://localhost:8080/#) in _generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
590 try:
591 output = (
--> 592 self._generate(
593 prompts,
594 stop=stop,
[/usr/local/lib/python3.10/dist-packages/langchain_community/llms/huggingface_pipeline.py](https://localhost:8080/#) in _generate(self, prompts, stop, run_manager, **kwargs)
259
260 # Process batch of prompts
--> 261 responses = self.pipeline(
262 batch_prompts,
263 stop_sequence=stop,
[/usr/local/lib/python3.10/dist-packages/transformers/pipelines/text_generation.py](https://localhost:8080/#) in __call__(self, text_inputs, **kwargs)
239 return super().__call__(chats, **kwargs)
240 else:
--> 241 return super().__call__(text_inputs, **kwargs)
242
243 def preprocess(
[/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py](https://localhost:8080/#) in __call__(self, inputs, num_workers, batch_size, *args, **kwargs)
1146 batch_size = self._batch_size
1147
-> 1148 preprocess_params, forward_params, postprocess_params = self._sanitize_parameters(**kwargs)
1149
1150 # Fuse __init__ params and __call__ params without modifying the __init__ ones.
[/usr/local/lib/python3.10/dist-packages/transformers/pipelines/text_generation.py](https://localhost:8080/#) in _sanitize_parameters(self, return_full_text, return_tensors, return_text, return_type, clean_up_tokenization_spaces, prefix, handle_long_generation, stop_sequence, add_special_tokens, truncation, padding, max_length, **generate_kwargs)
169
170 if stop_sequence is not None:
--> 171 stop_sequence_ids = self.tokenizer.encode(stop_sequence, add_special_tokens=False)
172 if len(stop_sequence_ids) > 1:
173 warnings.warn(
[/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in encode(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, return_tensors, **kwargs)
2598 method).
2599 """
-> 2600 encoded_inputs = self.encode_plus(
2601 text,
2602 text_pair=text_pair,
[/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in encode_plus(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
3006 )
3007
-> 3008 return self._encode_plus(
3009 text=text,
3010 text_pair=text_pair,
[/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_fast.py](https://localhost:8080/#) in _encode_plus(self, text, text_pair, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
574 ) -> BatchEncoding:
575 batched_input = [(text, text_pair)] if text_pair else [text]
--> 576 batched_output = self._batch_encode_plus(
577 batched_input,
578 is_split_into_words=is_split_into_words,
[/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_fast.py](https://localhost:8080/#) in _batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose)
502 )
503 print(batch_text_or_text_pairs)
--> 504 encodings = self._tokenizer.encode_batch(
505 batch_text_or_text_pairs,
506 add_special_tokens=add_special_tokens,
TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]
```
### Description
When running a simple react agent example based on the [react agent docs](https://python.langchain.com/docs/modules/agents/agent_types/react) using a huggingface model (Mistral 7B Instruct), execution fails on the line:
```
agent_executor.invoke({"input": "get me three datapoints"})
```
It appears that the agent_executor is either sending nothing to the tokenizer, or sending the dict with "input", neither of which the tokenizer accepts. When I instead pass a string to `agent_executor.invoke` (which the tokenizer would accept), the executor complains that it's not a dict.
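A hedged illustration of what the traceback actually suggests (assumption: the agent passes its stop sequences as a *list*, while the transformers text-generation pipeline expects `stop_sequence` to be a single string, so the tokenizer ends up encoding a list):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tok.encode("\nObservation:", add_special_tokens=False)    # fine: a plain string
tok.encode(["\nObservation:"], add_special_tokens=False)  # raises the same TextEncodeInput TypeError
```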
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sat Nov 18 15:31:17 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.30
> langchain: 0.1.11
> langchain_community: 0.0.27
> langsmith: 0.1.23
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | ReAct Agent Not Working With Huggingface Model When Using `create_react_agent` | https://api.github.com/repos/langchain-ai/langchain/issues/18820/comments | 1 | 2024-03-08T21:27:59Z | 2024-06-16T16:09:04Z | https://github.com/langchain-ai/langchain/issues/18820 | 2,176,798,351 | 18,820 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.vectorstores import Weaviate
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Weaviate.from_documents(
text_chunks,
embeddings,
client=client,
by_text=False
)
document_content_description = "description"
```
### Error Message and Stack Trace (if applicable)
```
AttributeError: 'WeaviateClient' object has no attribute 'schema'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
File <command-3980657862137734>, line 6
2 from langchain_openai import OpenAIEmbeddings
4 embeddings = OpenAIEmbeddings()
----> 6 vectorstore = Weaviate.from_documents(
7 text_chunks,
8 embeddings,
9 client=client,
10 by_text=False
11 )
13 document_content_description = "telecom documentation"
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bb42155d-01ee-4da3-9c8c-ce4489c59c93/lib/python3.10/site-packages/langchain_core/vectorstores.py:528, in VectorStore.from_documents(cls, documents, embedding, **kwargs)
526 texts = [d.page_content for d in documents]
527 metadatas = [d.metadata for d in documents]
--> 528 return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bb42155d-01ee-4da3-9c8c-ce4489c59c93/lib/python3.10/site-packages/langchain_community/vectorstores/weaviate.py:465, in Weaviate.from_texts(cls, texts, embedding, metadatas, client, weaviate_url, weaviate_api_key, batch_size, index_name, text_key, by_text, relevance_score_fn, **kwargs)
463 schema = _default_schema(index_name, text_key)
464 # check whether the index already exists
--> 465 if not client.schema.exists(index_name):
466 client.schema.create_class(schema)
468 embeddings = embedding.embed_documents(texts) if embedding else None
AttributeError: 'WeaviateClient' object has no attribute 'schema'
```
### Description
Trying to ingest LangChain `Document` objects into Weaviate Cloud, using up-to-date versions of all libraries.
I followed the tutorial here: https://python.langchain.com/docs/integrations/vectorstores/weaviate
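A plausible cause (not confirmed in this report): the `langchain_community` Weaviate wrapper targets the v3 `weaviate-client` API, and `client.schema` was removed in v4. Two hedged workarounds:

```python
# Option 1 (assumption: staying on the langchain_community wrapper):
# pin the v3 client.
#   pip install "weaviate-client<4"

# Option 2 (assumption: the langchain-weaviate partner package is installed,
# which is written against the v4 client):
from langchain_weaviate.vectorstores import WeaviateVectorStore

vectorstore = WeaviateVectorStore.from_documents(text_chunks, embeddings, client=client)
```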
### System Info
langchain==0.1.7
weaviate-client==4.5.1
weaviate cloud ver. 1.24.1 and 1.23.10 (2 different clusters)
Running in Windows as well on databricks. | AttributeError: 'WeaviateClient' object has no attribute 'schema' | https://api.github.com/repos/langchain-ai/langchain/issues/18809/comments | 5 | 2024-03-08T17:41:27Z | 2024-07-24T16:07:51Z | https://github.com/langchain-ai/langchain/issues/18809 | 2,176,491,695 | 18,809 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Hi all!
Core Runnable methods are not documented well enough yet. It would be great to have each of the methods include a self-contained usage example that folks can use as a reference.
Documenting these methods well will have a fairly high-value impact by making it easier for LangChain users to use core LangChain primitives.
Runnable (https://github.com/langchain-ai/langchain/blob/bc6249c889a4fe208f8f145e80258eb3de20d2d4/libs/core/langchain_core/runnables/base.py#L103-L103):
* assign
* with_fallbacks
* with_retry
* pick
* pipe
* map
* with_listeners
RunnableSerializable (https://github.com/langchain-ai/langchain/blob/bc6249c889a4fe208f8f145e80258eb3de20d2d4/libs/core/langchain_core/runnables/base.py#L1664-L1664):
* configurable_fields
* configurable_alternatives
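As a concrete illustration of what is being asked for, here is a hedged sketch of the kind of docstring example `with_retry` could carry (the flaky function and its behaviour are purely illustrative):

```python
from langchain_core.runnables import RunnableLambda

count = 0

def flaky(_: int) -> int:
    """Fails twice, then succeeds -- demonstrates the retry behaviour."""
    global count
    count += 1
    if count < 3:
        raise ValueError("transient failure")
    return count

runnable = RunnableLambda(flaky).with_retry(
    stop_after_attempt=3,
    wait_exponential_jitter=False,
)
assert runnable.invoke(1) == 3  # succeeds on the third attempt
```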
## Acceptance criteria
- Documentation includes context
- Documentation includes a self-contained example in Python (including all imports).
- Example uses `.. code-block:: python` syntax (see other places in the code as reference).
Please document only on method per PR to make it easy to review and get PRs merged quickly! | Add in code documentation to core Runnable methods | https://api.github.com/repos/langchain-ai/langchain/issues/18804/comments | 7 | 2024-03-08T15:50:55Z | 2024-07-09T16:06:54Z | https://github.com/langchain-ai/langchain/issues/18804 | 2,176,301,157 | 18,804 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
We would love to get help to add in code documentation to LangChain core to better document LCEL primitives:
Here is an example of a documented runnable:
https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html#langchain_core.runnables.base.RunnableLambda
Here is an example of an undocumented runnable (currently many are undocumented): https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePick.html
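For instance, a hedged sketch of the kind of example `RunnablePick` could ship with (behaviour inferred from its unit tests; the input dict is illustrative):

```python
from langchain_core.runnables import RunnablePick

pick = RunnablePick(keys=["name", "age"])
pick.invoke({"name": "ada", "age": 36, "email": "[email protected]"})
# -> {"name": "ada", "age": 36}
```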
Acceptance Criteria:
- PR should be as minimal as possible (don't try to document unrelated runnables please!) Keep PR size small to get things merged quicker and avoid merge conflicts.
- Document the class doc-string:
- include an overview about what the runnable does
- include a `.. code-block:: python` that shows a self-contained example of how to use the runnable.
- the self-contained example should include all relevant imports so it can be copy-pasted as-is and run
How do I figure out what the runnable does?
- All runnables have unit tests that show how the runnable can be used! You can locate the unit tests and use them as reference.
Some especially important runnables (note that some of these are base abstractions)
- https://github.com/langchain-ai/langchain/blob/bc6249c889a4fe208f8f145e80258eb3de20d2d4/libs/core/langchain_core/runnables/passthrough.py#L315-L315
- https://github.com/langchain-ai/langchain/blob/bc6249c889a4fe208f8f145e80258eb3de20d2d4/libs/core/langchain_core/runnables/passthrough.py#L577-L577
- https://github.com/langchain-ai/langchain/blob/bc6249c889a4fe208f8f145e80258eb3de20d2d4/libs/core/langchain_core/runnables/configurable.py#L44-L44
- https://github.com/langchain-ai/langchain/blob/bc6249c889a4fe208f8f145e80258eb3de20d2d4/libs/core/langchain_core/runnables/configurable.py#L222-L222
- Context: https://github.com/langchain-ai/langchain/blob/bc6249c889a4fe208f8f145e80258eb3de20d2d4/libs/core/langchain_core/beta/runnables/context.py#L309-L309 | Add in-code documentation for LangChain Runnables | https://api.github.com/repos/langchain-ai/langchain/issues/18803/comments | 7 | 2024-03-08T15:39:52Z | 2024-07-31T21:58:21Z | https://github.com/langchain-ai/langchain/issues/18803 | 2,176,280,298 | 18,803 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
When using langchain pointing to Mistral, for example,
```
from langchain_openai import OpenAIEmbeddings
embedding = OpenAIEmbeddings(model="Mistral-7B-Instruct-v0.2")
```
one gets an `openai.InternalServerError: Internal Server Error`.
But when using
```
embedding = OpenAIEmbeddings(model="text-embedding-ada-002")
```
It works. The catch is that the server is simply serving Mistral under the alias `text-embedding-ada-002`, so it is literally the same model as above; LangChain's code must therefore be making assumptions based on the model name alone.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Using embeddings against my own inference server fails whenever the model name is not one of the names provided by OpenAI. But if one aliases the names (for example, making `text-embedding-ada-002` an alias for `Mistral-7B-Instruct-v0.2`), it just works.
So it seems that LangChain's code makes assumptions about hardcoded model names.
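A hedged workaround for non-OpenAI backends (assumption: the server rejects the tokenized payload the client sends as part of its context-length handling for known model names): disable the client-side tokenization so plain strings are sent instead.

```python
from langchain_openai import OpenAIEmbeddings

embedding = OpenAIEmbeddings(
    model="Mistral-7B-Instruct-v0.2",
    check_embedding_ctx_length=False,  # send raw strings, skip tiktoken handling
)
```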
### System Info
```
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
```
Similar behavior on macOS and on Linux
Python 3.8, 3.9 and 3.11.7 | OpenAIEmbeddings relies on hardcoded names | https://api.github.com/repos/langchain-ai/langchain/issues/18800/comments | 1 | 2024-03-08T14:35:37Z | 2024-06-18T16:09:46Z | https://github.com/langchain-ai/langchain/issues/18800 | 2,176,162,258 | 18,800 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Although similarity search now offers the flexibility to change the vector comparison fields, we still cannot restrict which fields the OpenSearch instance returns from a query, and there is no option to collapse the results within the query. (An example of the query I would like to execute appears after my code snippet.)
```python
from langchain_community.vectorstores import OpenSearchVectorSearch
from langchain.embeddings import HuggingFaceEmbeddings
from opensearchpy import RequestsHttpConnection
embeddings = HuggingFaceEmbeddings(model_name='some model')
oss = OpenSearchVectorSearch(
opensearch_url = 'someurl',
index_name = 'some index',
embedding_function = embeddings
)
docs = oss.similarity_search(
"Some query",
search_type = "script_scoring",
space_type= "cosinesimil",
vector_field = "custom_field",
text_field = "custom_metadata field",
metadata_field="*",
k=3
)
```
Sample OpenSearch query I wish to run
```python
{
"query": {
....
},
"_source": {
"excludes": ["snippet_window_vector"]
},
"collapse": {
"field": "id",
"inner_hits": {
"name": "top_hit",
"size": 1,
"_source": {
"excludes": ["snippet_window_vector"]
},
"sort": [{"_score": {"order": "desc"}}]
}
},
"size": self.top_k
}
```
Suggestions: either allow users to provide the query themselves, or at least allow them to specify the `collapse` and `_source` fields of the query via kwargs passed to the `_default_script_query` function in `langchain_community/vectorstores/opensearch_vector_search.py` (a hypothetical sketch follows).
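A hypothetical sketch of what that could look like (neither `source` nor `collapse` is an existing kwarg today; the names are illustrative only):

```python
docs = oss.similarity_search(
    "Some query",
    search_type="script_scoring",
    k=3,
    source={"excludes": ["snippet_window_vector"]},  # hypothetical kwarg
    collapse={"field": "id"},                        # hypothetical kwarg
)
```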
### Error Message and Stack Trace (if applicable)
_No response_
### Description
As summarized above: `similarity_search` offers no way to limit the `_source` fields OpenSearch returns, nor to collapse results within the query; the example query after the code snippet shows what I would like to run.
### System Info
```
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-text-splitters==0.0.1
```
platform Mac
```
python version 3.9
``` | Query Flexibility Limitation on OpenSearchVectorSearch for a pre-existing index | https://api.github.com/repos/langchain-ai/langchain/issues/18797/comments | 0 | 2024-03-08T13:12:10Z | 2024-06-14T16:09:00Z | https://github.com/langchain-ai/langchain/issues/18797 | 2,176,018,816 | 18,797 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.chat_message_histories import StreamlitChatMessageHistory
import streamlit as st
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_community.chat_models import ChatCohere

# Optionally, specify your own session_state key for storing messages
msgs = StreamlitChatMessageHistory(key="special_app_key")
if len(msgs.messages) == 0:
    msgs.add_ai_message("How can I help you?")

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an AI chatbot having a conversation with a human."),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{question}"),
    ]
)

chain = prompt | ChatCohere(cohere_api_key="", model="command", max_tokens=256, temperature=0.75)
chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: msgs,  # Always return the instance created earlier
    input_messages_key="question",
    history_messages_key="history",
)

for msg in msgs.messages:
    st.chat_message(msg.type).write(msg.content)

if prompt := st.chat_input():
    st.chat_message("human").write(prompt)
    # As usual, new messages are added to StreamlitChatMessageHistory when the Chain is called.
    config = {"configurable": {"session_id": "any"}}
    response = chain_with_history.invoke({"question": prompt}, config)
    st.chat_message("ai").write(response.content)
```
### Error Message and Stack Trace (if applicable)
```
KeyError: 'st.session_state has no key "langchain_messages". Did you forget to initialize it? More info: https://docs.streamlit.io/library/advanced-features/session-state#initialization'
Traceback:
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "C:\Users\prakotian\Desktop\Projects\ChatData\chat.py", line 142, in <module>
response = chain_with_history.invoke({"question": prompt}, config)
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 4069, in invoke
return self.bound.invoke(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 4069, in invoke
return self.bound.invoke(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 2075, in invoke
input = step.invoke(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 4069, in invoke
return self.bound.invoke(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\passthrough.py", line 419, in invoke
return self._call_with_config(self._invoke, input, config, **kwargs)
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 1262, in _call_with_config
context.run(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\passthrough.py", line 406, in _invoke
**self.mapper.invoke(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 2712, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 2712, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\concurrent\futures\_base.py", line 444, in result
return self.__get_result()
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\concurrent\futures\_base.py", line 389, in __get_result
raise self._exception
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\concurrent\futures\thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 4069, in invoke
return self.bound.invoke(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 3523, in invoke
return self._call_with_config(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 1262, in _call_with_config
context.run(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 3397, in _invoke
output = call_func_with_variable_args(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\history.py", line 409, in _enter_history
return hist.messages.copy()
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_community\chat_message_histories\streamlit.py", line 32, in messages
return st.session_state[self._key]
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\streamlit\runtime\state\session_state_proxy.py", line 90, in __getitem__
return get_session_state()[key]
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\streamlit\runtime\state\safe_session_state.py", line 91, in __getitem__
return self._state[key]
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\streamlit\runtime\state\session_state.py", line 400, in __getitem__
raise KeyError(_missing_key_error_message(key))
### Description
I am following the code step by step from "https://python.langchain.com/docs/integrations/memory/streamlit_chat_message_history", but I still get the KeyError.
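One detail worth noting from the traceback above: the `st.session_state` read happens inside a `concurrent.futures` worker thread spawned by the parallel runnable, where Streamlit's session state may not be reachable. A minimal defensive sketch (an assumed workaround, not a confirmed fix) is to seed the key in the main script thread on every rerun, before the chain runs:

```python
import streamlit as st
from langchain_community.chat_message_histories import StreamlitChatMessageHistory

# Construct the history in the main script thread; construction seeds
# st.session_state under the given key, so later reads find it present.
history = StreamlitChatMessageHistory(key="langchain_messages")

# Extra belt-and-suspenders: re-seed the key on every Streamlit rerun.
if "langchain_messages" not in st.session_state:
    st.session_state["langchain_messages"] = []
```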
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.8.10 (tags/v3.8.10:3d8993a, May 3 2021, 11:48:03) [MSC v.1928 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.29
> langchain: 0.1.11
> langchain_community: 0.0.25
> langsmith: 0.1.19
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | StreamlitChatMessageHistory gives "KeyError: 'st.session_state has no key "langchain_messages" | https://api.github.com/repos/langchain-ai/langchain/issues/18790/comments | 2 | 2024-03-08T11:02:36Z | 2024-07-01T16:05:33Z | https://github.com/langchain-ai/langchain/issues/18790 | 2,175,803,143 | 18,790 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
# llms.py of opengpts (https://github.com/langchain-ai/opengpts)
```python
import os
from functools import lru_cache

import huggingface_hub
from langchain_community.chat_models.huggingface import ChatHuggingFace
from langchain_community.llms import HuggingFaceTextGenInference
from langchain_core.callbacks.streaming_stdout import StreamingStdOutCallbackHandler


@lru_cache(maxsize=1)
def get_tgi_llm():
    huggingface_hub.login(os.getenv("HUGGINGFACE_TOKEN"))
    llm = HuggingFaceTextGenInference(
        inference_server_url="http://myinferenceserver.com/",
        max_new_tokens=2048,
        top_k=10,
        top_p=0.95,
        typical_p=0.95,
        temperature=0.3,
        repetition_penalty=1.03,
        streaming=True,
        callbacks=[StreamingStdOutCallbackHandler()],
        server_kwargs={
            "headers": {
                "Content-Type": "application/json",
            }
        },
    )
    # model_id is set so ChatHuggingFace can load the matching tokenizer
    return ChatHuggingFace(llm=llm, model_id="HuggingFaceH4/zephyr-7b-beta")
```
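For context, a minimal reproduction sketch (the prompt string is an assumption) that surfaces the chunk types this issue is about:

```python
# Hypothetical repro: stream a reply and print each chunk's type.
# With the reported bug the chunks arrive as plain strings rather
# than AIMessageChunk objects.
chat_model = get_tgi_llm()
for chunk in chat_model.stream("Hello?"):
    print(type(chunk).__name__, repr(chunk))
```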
### Error Message and Stack Trace (if applicable)
There is no error, but the output is wrong.
```
# in my log
HumanMessage(content='Hello?')]
opengpts-backend | [HumanMessage(content='Hello?'), 'Hello']
opengpts-backend | [HumanMessage(content='Hello?'), 'Hello!']
opengpts-backend | [HumanMessage(content='Hello?'), 'Hello! How']
...
```
### Description
I'm trying to use the `langchain` library to run a TGI (Text Generation Inference) model in OpenGPTs. I expect the streamed model response to look like the following log:
```
HumanMessage(content='Hello?')]
opengpts-backend | [HumanMessage(content='Hello?'), AIMessageChunk(content='Hello')]
opengpts-backend | [HumanMessage(content='Hello?'), AIMessageChunk(content='Hello!')]
opengpts-backend | [HumanMessage(content='Hello?'), AIMessageChunk(content='Hello! How')]
```
However, the streamed chunks are logged as plain strings instead of being wrapped in AIMessageChunk. Because of this, the token stream never renders in the OpenGPTs chat window.
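Until the underlying bug is fixed, one hedged client-side mitigation (a sketch assuming the raw chunks really are plain strings; the helper name is made up) is to coerce string chunks into `AIMessageChunk` before they reach the UI:

```python
from typing import Union

from langchain_core.messages import AIMessageChunk, BaseMessageChunk


def as_message_chunk(chunk: Union[str, BaseMessageChunk]) -> BaseMessageChunk:
    """Wrap bare string chunks so downstream code always sees message chunks."""
    if isinstance(chunk, str):
        return AIMessageChunk(content=chunk)
    return chunk
```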
### System Info
(py312) eunhye1kim@eunhye1kim-400TEA-400SEA:~/git/forPR/opengpts/backend$ pip freeze | grep langchain
langchain==0.1.7
langchain-cli==0.0.21
langchain-community==0.0.20
langchain-core==0.1.27
langchain-experimental==0.0.37
langchain-google-vertexai==0.0.6
langchain-openai==0.0.7
langchain-robocorp==0.0.3 | Chat HuggingFace model does not send chunked replies when streaming=True. | https://api.github.com/repos/langchain-ai/langchain/issues/18782/comments | 0 | 2024-03-08T08:47:52Z | 2024-06-14T16:08:58Z | https://github.com/langchain-ai/langchain/issues/18782 | 2,175,567,686 | 18,782 |