| issue_owner_repo (sequence, length 2) | issue_body (string, 0-261k chars, nullable ⌀) | issue_title (string, 1-925 chars) | issue_comments_url (string, 56-81 chars) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37-62 chars) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I'm using a standard SelfQueryRetriever to extract relevant documents (car listings) that match a user query. It has been working pretty well, but recently it started giving me errors (stack trace attached).
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectordb,
    document_content_description,
    metadata_field_info,
    verbose=True
)
```
### Error Message and Stack Trace (if applicable)
OutputParserException('Parsing text\n```json\n{\n "query": "with bluetooth and a reversing camera recent",\n "filter": "or(eq(\\"vehicle_type\\", \\"Hatchback\\"), eq(\\"vehicle_type\\", \\"Sedan\\")), in(\\"location\\", [\\"Westgate\\", \\"North Shore\\", \\"Otahuhu\\", \\"Penrose\\", \\"Botany\\", \\"Manukau\\"])"\n}\n```\n raised following error:\nUnexpected token Token(\'COMMA\', \',\') at line 1, column 65.\nExpected one of: \n\t* $END\n')Traceback (most recent call last):
File "/home/dharshana/.local/share/virtualenvs/tina-virtual-assistant-eLldwkZS/lib/python3.11/site-packages/lark/parsers/lalr_parser_state.py", line 77, in feed_token
action, arg = states[state][token.type]
~~~~~~~~~~~~~^^^^^^^^^^^^
KeyError: 'COMMA'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/dharshana/.local/share/virtualenvs/tina-virtual-assistant-eLldwkZS/lib/python3.11/site-packages/langchain/chains/query_constructor/base.py", line 56, in parse
parsed["filter"] = self.ast_parse(parsed["filter"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dharshana/.local/share/virtualenvs/tina-virtual-assistant-eLldwkZS/lib/python3.11/site-packages/lark/lark.py", line 658, in parse
return self.parser.parse(text, start=start, on_error=on_error)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dharshana/.local/share/virtualenvs/tina-virtual-assistant-eLldwkZS/lib/python3.11/site-packages/lark/parser_frontends.py", line 104, in parse
return self.parser.parse(stream, chosen_start, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dharshana/.local/share/virtualenvs/tina-virtual-assistant-eLldwkZS/lib/python3.11/site-packages/lark/parsers/lalr_parser.py", line 42, in parse
return self.parser.parse(lexer, start)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dharshana/.local/share/virtualenvs/tina-virtual-assistant-eLldwkZS/lib/python3.11/site-packages/lark/parsers/lalr_parser.py", line 88, in parse
return self.parse_from_state(parser_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dharshana/.local/share/virtualenvs/tina-virtual-assistant-eLldwkZS/lib/python3.11/site-packages/lark/parsers/lalr_parser.py", line 111, in parse_from_state
raise e
File "/home/dharshana/.local/share/virtualenvs/tina-virtual-assistant-eLldwkZS/lib/python3.11/site-packages/lark/parsers/lalr_parser.py", line 102, in parse_from_state
state.feed_token(token)
File "/home/dharshana/.local/share/virtualenvs/tina-virtual-assistant-eLldwkZS/lib/python3.11/site-packages/lark/parsers/lalr_parser_state.py", line 80, in feed_token
raise UnexpectedToken(token, expected, state=self, interactive_parser=None)
lark.exceptions.UnexpectedToken: Unexpected token Token('COMMA', ',') at line 1, column 65.
Expected one of:
* $END
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/dharshana/.local/share/virtualenvs/tina-virtual-assistant-eLldwkZS/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1246, in _call_with_config
context.run(
File "/home/dharshana/.local/share/virtualenvs/tina-virtual-assistant-eLldwkZS/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/home/dharshana/.local/share/virtualenvs/tina-virtual-assistant-eLldwkZS/lib/python3.11/site-packages/langchain_core/output_parsers/base.py", line 168, in <lambda>
lambda inner_input: self.parse_result(
^^^^^^^^^^^^^^^^^^
File "/home/dharshana/.local/share/virtualenvs/tina-virtual-assistant-eLldwkZS/lib/python3.11/site-packages/langchain_core/output_parsers/base.py", line 219, in parse_result
return self.parse(result[0].text)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dharshana/.local/share/virtualenvs/tina-virtual-assistant-eLldwkZS/lib/python3.11/site-packages/langchain/chains/query_constructor/base.py", line 63, in parse
raise OutputParserException(
langchain_core.exceptions.OutputParserException: Parsing text
```json
{
"query": "with bluetooth and a reversing camera recent",
"filter": "or(eq(\"vehicle_type\", \"Hatchback\"), eq(\"vehicle_type\", \"Sedan\")), in(\"location\", [\"Westgate\", \"North Shore\", \"Otahuhu\", \"Penrose\", \"Botany\", \"Manukau\"])"
}
```
raised following error:
Unexpected token Token('COMMA', ',') at line 1, column 65.
Expected one of:
* $END
### Description
I'm getting
lark.exceptions.UnexpectedToken: Unexpected token Token('COMMA', ',') at line 1, column 65.
Expected one of:
* $END
It seems the parser is not happy with the COMMA.
I'm not entirely sure whether the cause of the error is a change in the Pinecone query API or an update in the LangChain version.
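For illustration, here is my own reading of the error above (an assumption, not something verified): the generated filter string contains two top-level expressions separated by a bare comma, while the query-constructor grammar seems to expect a single expression.
```python
# Hypothetical illustration only - these strings are paraphrased from the error output above.
# The generated filter has two top-level expressions joined by a bare comma, which the lark
# parser rejects at the first comma after the closing parenthesis (column 65):
generated_filter = (
    'or(eq("vehicle_type", "Hatchback"), eq("vehicle_type", "Sedan")), '
    'in("location", ["Westgate", "North Shore"])'
)

# A single top-level expression, e.g. both clauses wrapped in and(...), is the kind of shape
# the grammar can parse:
wrapped_filter = (
    'and(or(eq("vehicle_type", "Hatchback"), eq("vehicle_type", "Sedan")), '
    'in("location", ["Westgate", "North Shore"]))'
)
```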
### System Info
"langchain": {
"hashes": [
"sha256:29d95f12afe9690953820970205dba3b098ee1f7531e80eb18c1236d3feda921",
"sha256:b40fbe2b65360afe6c0d5bbf37e79469f990779460640edde5b906175c49807e"
],
"index": "pypi",
"version": "==0.1.7"
},
"langchain-community": {
"hashes": [
"sha256:bd112b5813702919c50f89b1afa2b63adf1da89999df4842b327ee11220f8c39",
"sha256:c56c48bc77d24e1fc399a9ee9a637d96e3b2ff952e3a080b5a41820d9d00fb3c"
],
"index": "pypi",
"version": "==0.0.20"
},
"langchain-core": {
"hashes": [
"sha256:34359cc8b6f8c3d45098c54a6a9b35c9f538ef58329cd943a2249d6d7b4e5806",
"sha256:d42fac013c39a8b0bcd7e337a4cb6c17c16046c60d768f89df582ad73ec3c5cb"
],
"markers": "python_full_version >= '3.8.1' and python_version < '4.0'",
"version": "==0.1.23"
},
"langchain-openai": {
"hashes": [
"sha256:2ef040e4447a26a9d3bd45dfac9cefa00797ea58555a3d91ab4f88699eb3a005",
"sha256:f5c4ebe46f2c8635c8f0c26cc8df27700aacafea025410e418d5a080039974dd"
],
"index": "pypi",
"version": "==0.0.6"
}, | Error in StructuredQueryOutputParser using SelfQueryRetriever with Pinecone | https://api.github.com/repos/langchain-ai/langchain/issues/17696/comments | 3 | 2024-02-18T07:42:21Z | 2024-07-14T16:06:02Z | https://github.com/langchain-ai/langchain/issues/17696 | 2,140,798,399 | 17,696 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Example code that doesn't work:
```
from langchain_community.tools.google_lens import GoogleLensQueryRun
from langchain_community.utilities.google_lens import GoogleLensAPIWrapper
SERPAPI_API_KEY = "api_key_here"
tool = GoogleLensQueryRun(api_wrapper=GoogleLensAPIWrapper())
# Runs google lens on an image of Danny Devito
tool.run("https://i.imgur.com/HBrB8p0.png")
```
This is the code from langchain.utilities.google_lens that is incorrect:
```
if len(responseValue["knowledge_graph"]) > 0:
    subject = responseValue["knowledge_graph"][0]
    xs += f"Subject:{subject['title']}({subject['subtitle']})\n"
    xs += f"Link to subject:{subject['link']}\n\n"

xs += "Related Images:\n\n"
for image in responseValue["visual_matches"]:
    xs += f"Title: {image['title']}\n"
    xs += f"Source({image['source']}): {image['link']}\n"
    xs += f"Image: {image['thumbnail']}\n\n"

xs += (
    "Reverse Image Search"
    + f"Link: {responseValue['reverse_image_search']['link']}\n"
)
print(xs)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/simonquach/Documents/vs-code/treehacks/google-lens.py", line 77, in <module>
tool.run("https://i.imgur.com/HBrB8p0.png")
File "/Users/simonquach/Documents/vs-code/treehacks/.venv/lib/python3.12/site-packages/langchain_core/tools.py", line 373, in run
raise e
File "/Users/simonquach/Documents/vs-code/treehacks/.venv/lib/python3.12/site-packages/langchain_core/tools.py", line 345, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/Users/simonquach/Documents/vs-code/treehacks/.venv/lib/python3.12/site-packages/langchain_community/tools/google_lens/tool.py", line 29, in _run
return self.api_wrapper.run(query)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/simonquach/Documents/vs-code/treehacks/.venv/lib/python3.12/site-packages/langchain_community/utilities/google_lens.py", line 67, in run
if len(responseValue["knowledge_graph"]) > 0:
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
KeyError: 'knowledge_graph'
### Description
I encountered a KeyError while using the GoogleLensQueryRun tool from the langchain_community package, specifically when attempting to run a Google Lens search on an image URL. The issue arises within the langchain_community.utilities.google_lens module, during the handling of the API response. The problematic code within langchain_community.utilities.google_lens attempts to access a knowledge_graph key in the response. However, this results in a KeyError if the knowledge_graph key is not present in the response. It seems that the code does not account for scenarios where the knowledge_graph key might be missing from the Google Lens API response.
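For illustration, a guarded variant of the snippet above (a sketch of a possible fix on my side, not the actual library code): only read "knowledge_graph" when the key is actually present in the response.
```python
# Sketch only - sample response without the key, to show the guarded access pattern.
responseValue = {"visual_matches": [], "reverse_image_search": {"link": "https://example.com"}}
xs = ""

knowledge_graph = responseValue.get("knowledge_graph", [])
if len(knowledge_graph) > 0:
    subject = knowledge_graph[0]
    xs += f"Subject:{subject['title']}({subject['subtitle']})\n"
    xs += f"Link to subject:{subject['link']}\n\n"
# ... the rest of the original snippet would continue unchanged
```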
### System Info
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
mac
Python 3.12.1 | No "knowledge_graph" property in Google Lens API call from SerpAPI | https://api.github.com/repos/langchain-ai/langchain/issues/17690/comments | 1 | 2024-02-17T23:04:21Z | 2024-06-01T00:21:19Z | https://github.com/langchain-ai/langchain/issues/17690 | 2,140,650,229 | 17,690 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import asyncio
import os

from langchain_community.chat_models import ChatCohere
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
from langchain_core.output_parsers import StrOutputParser


async def idk(provider, model, messages, api_key):
    if provider == 'cohere':
        llm = ChatCohere(cohere_api_key=api_key, model_name=model, temperature=0)
    else:
        raise Exception("Provider not supported")

    output_parser = StrOutputParser()
    chain = llm | output_parser

    # This works
    for chunk in chain.stream(messages):
        print(chunk, end="")
    print()
    print('---------------')
    print()

    # This works then breaks
    async for chunk in chain.astream(messages):
        print(chunk, end="")
        # yield chunk


messages = [
    SystemMessage("You are world class mathematician."),
    HumanMessage("Whats 10 + 10?"),
    AIMessage("10 + 10 is"),
    HumanMessage("What?")
]

provider_inputs = [
    {
        'provider': 'cohere',
        'api_key': os.environ.get('COHERE_API_KEY'),
        'model': 'command'
    }
]

for x in provider_inputs:
    print(f"Running inputs for {x['provider']}")
    asyncio.run(
        idk(
            provider=x['provider'],
            messages=messages,
            model=x['model'],
            api_key=x['api_key']
        )
    )
    print()
    print()
```
### Error Message and Stack Trace (if applicable)
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x1046d6d30>
Unclosed connector
connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x1046ccc40>, 2.635331583)]']
connector: <aiohttp.connector.TCPConnector object at 0x103474130>
Fatal error on SSL transport
protocol: <asyncio.sslproto.SSLProtocol object at 0x1046d6eb0>
transport: <_SelectorSocketTransport closing fd=11>
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/selector_events.py", line 918, in write
n = self._sock.send(data)
OSError: [Errno 9] Bad file descriptor
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/sslproto.py", line 684, in _process_write_backlog
self._transport.write(chunk)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/selector_events.py", line 924, in write
self._fatal_error(exc, 'Fatal write error on socket transport')
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/selector_events.py", line 719, in _fatal_error
self._force_close(exc)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/selector_events.py", line 731, in _force_close
self._loop.call_soon(self._call_connection_lost, exc)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 746, in call_soon
self._check_closed()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 510, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
### Description
I'm trying to use the LangChain ChatCohere integration to asynchronously stream responses back to the user. I noticed invoke, ainvoke, and stream work fine, but astream does not. Swapping ChatCohere for Google's and OpenAI's LangChain modules worked fine in this same scenario.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:59:33 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T8112
> Python Version: 3.9.6 (default, Nov 10 2023, 13:38:27)
[Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.0.354
> langchain_community: 0.0.20
> langsmith: 0.0.87
> langchain_google_genai: 0.0.9
> langchain_mistralai: 0.0.4
> langchain_openai: 0.0.6
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| ChatCohere async stream operation breaks after each run | https://api.github.com/repos/langchain-ai/langchain/issues/17687/comments | 1 | 2024-02-17T20:31:21Z | 2024-06-01T00:20:13Z | https://github.com/langchain-ai/langchain/issues/17687 | 2,140,574,863 | 17,687 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import asyncio
import os

from langchain_community.chat_models import ChatAnyscale
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
from langchain_core.output_parsers import StrOutputParser


async def idk(provider, model, messages, api_key):
    if provider == 'anyscale':
        llm = ChatAnyscale(anyscale_api_key=api_key, model_name=model, temperature=0)
    else:
        raise Exception("Provider not supported")

    output_parser = StrOutputParser()
    chain = llm | output_parser

    # This works
    for chunk in chain.stream(messages):
        print(chunk, end="")

    # This does not work
    async for chunk in chain.astream(messages):
        print(chunk, end="")
        # yield chunk


messages = [
    SystemMessage("You are world class mathematician."),
    HumanMessage("Whats 10 + 10?"),
    AIMessage("10 + 10 is"),
    HumanMessage("What?")
]

provider_inputs = [
    {
        'provider': 'anyscale',
        'api_key': os.environ.get('ANYSCALE_API_KEY'),
        'model': 'mistralai/Mixtral-8x7B-Instruct-v0.1'
    }
]

for x in provider_inputs:
    print(f"Running inputs for {x['provider']}")
    asyncio.run(
        idk(
            provider=x['provider'],
            messages=messages,
            model=x['model'],
            api_key=x['api_key']
        )
    )
    print()
    print()
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/julianshalaby/Desktop/LLM_Server/lc_errror.py", line 45, in <module>
asyncio.run(
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/Users/julianshalaby/Desktop/LLM_Server/lc_errror.py", line 23, in idk
async for chunk in chain.astream(messages):
File "/Users/julianshalaby/Desktop/LLM_Server/venv/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2449, in astream
async for chunk in self.atransform(input_aiter(), config, **kwargs):
File "/Users/julianshalaby/Desktop/LLM_Server/venv/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2432, in atransform
async for chunk in self._atransform_stream_with_config(
File "/Users/julianshalaby/Desktop/LLM_Server/venv/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1600, in _atransform_stream_with_config
chunk = cast(Output, await py_anext(iterator))
File "/Users/julianshalaby/Desktop/LLM_Server/venv/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2402, in _atransform
async for output in final_pipeline:
File "/Users/julianshalaby/Desktop/LLM_Server/venv/lib/python3.9/site-packages/langchain_core/output_parsers/transform.py", line 60, in atransform
async for chunk in self._atransform_stream_with_config(
File "/Users/julianshalaby/Desktop/LLM_Server/venv/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1560, in _atransform_stream_with_config
final_input: Optional[Input] = await py_anext(input_for_tracing, None)
File "/Users/julianshalaby/Desktop/LLM_Server/venv/lib/python3.9/site-packages/langchain_core/utils/aiter.py", line 62, in anext_impl
return await __anext__(iterator)
File "/Users/julianshalaby/Desktop/LLM_Server/venv/lib/python3.9/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer
item = await iterator.__anext__()
File "/Users/julianshalaby/Desktop/LLM_Server/venv/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1071, in atransform
async for output in self.astream(final, config, **kwargs):
File "/Users/julianshalaby/Desktop/LLM_Server/venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 308, in astream
raise e
File "/Users/julianshalaby/Desktop/LLM_Server/venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 292, in astream
async for chunk in self._astream(
File "/Users/julianshalaby/Desktop/LLM_Server/venv/lib/python3.9/site-packages/langchain_community/chat_models/openai.py", line 488, in _astream
async for chunk in await acompletion_with_retry(
File "/Users/julianshalaby/Desktop/LLM_Server/venv/lib/python3.9/site-packages/langchain_community/chat_models/openai.py", line 105, in acompletion_with_retry
return await llm.async_client.create(**kwargs)
AttributeError: 'NoneType' object has no attribute 'create'
### Description
I'm trying to use the LangChain ChatAnyscale integration to asynchronously stream responses back to the user. I noticed invoke and stream work fine, but ainvoke and astream do not. Swapping ChatAnyscale for Google's and OpenAI's LangChain modules worked fine in this same scenario.
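As a quick check on my side (a hypothetical diagnostic, not part of the original failure), the traceback points at `llm.async_client` being `None` when `acompletion_with_retry` runs, so inspecting it right after construction may help confirm where it is lost:
```python
# Hypothetical diagnostic sketch - assumes the same constructor arguments as the example above.
import os
from langchain_community.chat_models import ChatAnyscale

llm = ChatAnyscale(
    anyscale_api_key=os.environ.get('ANYSCALE_API_KEY'),
    model_name='mistralai/Mixtral-8x7B-Instruct-v0.1',
    temperature=0,
)
print(llm.client)        # sync client - .stream() reportedly works through this
print(llm.async_client)  # the traceback suggests this ends up None, hence the AttributeError
```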
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:59:33 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T8112
> Python Version: 3.9.6 (default, Nov 10 2023, 13:38:27)
[Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.0.354
> langchain_community: 0.0.20
> langsmith: 0.0.87
> langchain_google_genai: 0.0.9
> langchain_openai: 0.0.6
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| ChatAnyscale async operation not functional | https://api.github.com/repos/langchain-ai/langchain/issues/17685/comments | 2 | 2024-02-17T20:11:19Z | 2024-06-01T00:07:42Z | https://github.com/langchain-ai/langchain/issues/17685 | 2,140,550,648 | 17,685 |
[
"hwchase17",
"langchain"
Getting an error on `from langchain.chains import RetrievalQA`; the error message is `cannot import name 'NeptuneRdfGraph' from 'langchain_community.graphs'`. Using langchain version 0.1.7.
_Originally posted by @NarayananParthasarathy in https://github.com/langchain-ai/langchain/issues/2725#issuecomment-1950262407_
| Getting an error on "from langchain.chains import RetrievalQA"; error message is cannot import name 'NeptuneRdfGraph' from 'langchain_community.graphs' . using langcain version 0.1.7 | https://api.github.com/repos/langchain-ai/langchain/issues/17680/comments | 7 | 2024-02-17T17:26:06Z | 2024-05-05T13:53:12Z | https://github.com/langchain-ai/langchain/issues/17680 | 2,140,342,672 | 17,680 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
When I run the following code:
```
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment="text-embedding-ada-002", chunk_size=1)
index_name: str = "langchain-example"
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=os.environ.get("SEARCH_ENDPOINT"),
azure_search_key=os.environ.get("SEARCH_API_KEY"),
index_name=index_name,
embedding_function=embeddings.embed_query,
)
```
I get the following Error.
```
Traceback (most recent call last):
File "D:\pythonProject2\main.py", line 11, in <module>
vector_store: AzureSearch = AzureSearch(
^^^^^^^^^^^^
File "D:\pythonProject2\.venv\Lib\site-packages\langchain_community\vectorstores\azuresearch.py", line 268, in __init__
self.client = _get_search_client(
^^^^^^^^^^^^^^^^^^^
File "D:\pythonProject2\.venv\Lib\site-packages\langchain_community\vectorstores\azuresearch.py", line 84, in _get_search_client
from azure.search.documents.indexes.models import (
ImportError: cannot import name 'ExhaustiveKnnAlgorithmConfiguration' from 'azure.search.documents.indexes.models' (D:\pythonProject2\.venv\Lib\s
ite-packages\azure\search\documents\indexes\models\__init__.py)
```
It seems that `ExhaustiveKnnAlgorithmConfiguration` was removed.
I used `azure-search-documents==11.4.0b8`.
Downgrading and upgrading will result in a warning:
```
Successfully installed azure-search-documents-11.4.0
PS D:\pythonProject2> python .\main.py
vector_search_configuration is not a known attribute of class <class 'azure.search.documents.indexes.models._index.SearchField'> and will be igno
red
```
I tried different versions of LangChain from 0.1.0 to 0.1.7 and all resulted in the same issue. Any ideas for a workaround or a solution? It's probably not a known issue yet.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I expect to be able to instantiate AzureSearch without any errors or warnings.
### System Info
langchain-openai==0.0.6
azure-identity==1.15.0
azure-search-documents==11.4.0
langchain==0.1.7
langchain-community==0.0.20
langchain-core==0.1.23
langchain-openai==0.0.6
langsmith==0.0.87
| LangChain does not work with AzureSearch anymore due to ImportError | https://api.github.com/repos/langchain-ai/langchain/issues/17679/comments | 6 | 2024-02-17T17:23:20Z | 2024-05-12T07:48:02Z | https://github.com/langchain-ai/langchain/issues/17679 | 2,140,339,097 | 17,679 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
local_embedding = HuggingFaceEmbeddings(model_name=embedding_path)
local_vdb = FAISS.load_local(vector_path, local_embedding, "default")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description

i find when i try to get something from faiss, the gpu memory raise up, that's normal. but when the work is down, i mean i reterive already, but gpu memory not falling down ( even i closed the interface "gradio web" ), that the problem is. i'm building a gradio web app for my commpany, many people will use, when oneperson use to get something from faiss, the embedding model will use another memory, i mean if embedding working will use 2gib ( suppose ),so two person call = 4gib, three person call = 6gib, not 2gib, 2.3 gib.... it cost too many resources, so, how i can mannully stop the embedding model, when the work is down and release the gpu memory. tha's very important to me, thanks for your help. 🌹
### System Info
python 3.9.18
langchain lastest
ubuntu 20.04 lts | CUDA memory won't release with HuggingFaceEmbeddings + local embedding model | https://api.github.com/repos/langchain-ai/langchain/issues/17678/comments | 6 | 2024-02-17T17:08:34Z | 2024-02-19T13:35:28Z | https://github.com/langchain-ai/langchain/issues/17678 | 2,140,320,363 | 17,678 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
pdf_path = './pdf/2402.10021v1.pdf'
loader = PyPDFium2Loader(pdf_path)
documents = loader.load()
```
### Error Message and Stack Trace (if applicable)
corrupted size vs. prev_size
[1] 619773 abort (core dumped)
### Description
When I try to load the PDF from https://arxiv.org/abs/2402.10021v1, this error occurs. However, when I load other PDFs, there are no errors. I also tried to load this PDF directly with fitz using the following code, and there is no error.
```
import fitz
pdf_path = './pdf/2402.10021v1.pdf'
pdf_document = fitz.open(pdf_path)
text = ""
for page_number in range(len(pdf_document)):
page = pdf_document.load_page(page_number)
text += page.get_text()
print(text)
```
### System Info
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
pypdfium2==4.26.0
ubantu 20.04
Python 3.10.13 | [Bug]error: corrupted size vs. prev_size occurs during loading pdf by PyPDFium2Loader | https://api.github.com/repos/langchain-ai/langchain/issues/17667/comments | 1 | 2024-02-17T01:27:13Z | 2024-06-08T16:10:10Z | https://github.com/langchain-ai/langchain/issues/17667 | 2,139,724,060 | 17,667 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.vectorstores.elasticsearch import ElasticsearchStore
from langchain_community.document_loaders import DirectoryLoader
from elasticsearch import Elasticsearch
loader = DirectoryLoader('../', glob="**/*.pdf", show_progress=True)
docs = loader.load()
print(docs[0])
es_connection = Elasticsearch(
hosts=['https://XXXXXXX.es.us-central1.gcp.cloud.es.io'],
basic_auth=('XXXXX', 'XXXXX')
)
vector_store = ElasticsearchStore(
index_name="test-elser",
es_connection=es_connection,
strategy=ElasticsearchStore.SparseVectorRetrievalStrategy(
model_id=".elser_model_2_linux-x86_64"
),
)
vector_store.add_documents(docs)
```
### Error Message and Stack Trace (if applicable)
> First error reason: Could not find trained model [.elser_model_1]
> Traceback (most recent call last):
> File "/Users/gustavollermalylarrain/Documents/proyectos/labs/langchain-elser/langchain_elser/vector_store.py", line 23, in <module>
> vector_store.add_documents(docs)
> File "/Users/gustavollermalylarrain/Documents/proyectos/labs/langchain-elser/.venv/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 119, in add_documents
> return self.add_texts(texts, metadatas, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/gustavollermalylarrain/Documents/proyectos/labs/langchain-elser/.venv/lib/python3.11/site-packages/langchain_community/vectorstores/elasticsearch.py", line 1040, in add_texts
> return self.__add(
> ^^^^^^^^^^^
> File "/Users/gustavollermalylarrain/Documents/proyectos/labs/langchain-elser/.venv/lib/python3.11/site-packages/langchain_community/vectorstores/elasticsearch.py", line 998, in __add
> raise e
> File "/Users/gustavollermalylarrain/Documents/proyectos/labs/langchain-elser/.venv/lib/python3.11/site-packages/langchain_community/vectorstores/elasticsearch.py", line 981, in __add
> success, failed = bulk(
> ^^^^^
> File "/Users/gustavollermalylarrain/Documents/proyectos/labs/langchain-elser/.venv/lib/python3.11/site-packages/elasticsearch/helpers/actions.py", line 521, in bulk
> for ok, item in streaming_bulk(
> File "/Users/gustavollermalylarrain/Documents/proyectos/labs/langchain-elser/.venv/lib/python3.11/site-packages/elasticsearch/helpers/actions.py", line 436, in streaming_bulk
> for data, (ok, info) in zip(
> File "/Users/gustavollermalylarrain/Documents/proyectos/labs/langchain-elser/.venv/lib/python3.11/site-packages/elasticsearch/helpers/actions.py", line 355, in _process_bulk_chunk
> yield from gen
> File "/Users/gustavollermalylarrain/Documents/proyectos/labs/langchain-elser/.venv/lib/python3.11/site-packages/elasticsearch/helpers/actions.py", line 274, in _process_bulk_chunk_success
> raise BulkIndexError(f"{len(errors)} document(s) failed to index.", errors)
> elasticsearch.helpers.BulkIndexError: 2 document(s) failed to index.
### Description
The ELSER ingestion does not work if I use a different ELSER model id than the default.
I tried with both `ElasticsearchStore.from_documents` and `ElasticsearchStore.add_documents`, with no luck.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6000
> Python Version: 3.11.6 (main, Nov 2 2023, 04:39:43) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.0.87
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | [Elasticsearch VectorStore] Could not find trained model | https://api.github.com/repos/langchain-ai/langchain/issues/17665/comments | 4 | 2024-02-17T00:34:26Z | 2024-07-05T09:19:14Z | https://github.com/langchain-ai/langchain/issues/17665 | 2,139,676,380 | 17,665 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
loader = GoogleDriveLoader(
    document_ids=document_list,
    recursive=True,
    file_loader_cls=UnstructuredFileIOLoader,
    file_loader_kwargs={"mode": "elements"}
)
data = loader.load()
```
### Error Message and Stack Trace (if applicable)
Code output is provided below. There is supposed to be header info indicating that the title is "One-Pager for Generative Classifier Results" along with a few other section names.
```
[{'content': '\ufeffOne-Pager for Generative Classifier Results\r\nData\r\nHate Speech Dataset from Kaggle (comments from Twitter)\r\nMethod\r\nIn the system prompt, ask GPT4 to summarize in one word YES or NO whether a comment is offensive language or not.\r\n\r\n\r\nReturn the top 5 log probabilities of the next token. Since YES or Yes is just one token, we calibrate the probability using \\sum P(<token>.upper() == YES) as the offensive language score.\r\nExperiments\r\nWe created the validation dataset that the ratio between positive and negative samples is around 1:2. Specifically, there are 206 positive samples and 416 negative samples. We let GPT4 to generate offensive language score as a classifier\r\nHistograms of positive and negative samples\r\nBelow are hisgoram and 1-CDF plots for positive and negative samples under zero-shot setup.\r\n \r\n\r\n\r\n\r\nZero-shot v.s. Few-shots\r\nBelow is the figure comparing the zero-shot classifier and the few-shots classifier. Specifically, we randomly select 10 offensive languages (outside of the validation dataset) and provide them as examples in the system prompt.\r\n\r\n\r\nWe shall see that the few shot classifier outperforms zero-shot classifier, especially in reducing the volume of false positives.\r\n \r\n\r\n\r\n\r\nNext steps\r\n* Calibrate precision through manual label correction on FPs.\r\n* Precision curve with respect to pos-neg ratio in the validation datasets.\r\n* Comparison of GPT3.5 and GPT4.', 'metadata': {'name': 'Generative Classifier', 'id': '1CMkmfv2CTy9qx3gAwiDOhdYUPjv2-5WOfGYodiXEr_I', 'version': '45', 'modifiedTime': '2024-02-16T21:55:15.296Z', 'createdTime': '2024-01-12T02:37:57.462Z', 'webViewLink': 'https://docs.google.com/document/d/1CMkmfv2CTy9qx3gAwiDOhdYUPjv2-5WOfGYodiXEr_I/edit?usp=drivesdk', 'type': 'google_doc', 'url': 'https://docs.google.com/document/d/1CMkmfv2CTy9qx3gAwiDOhdYUPjv2-5WOfGYodiXEr_I/edit?usp=drivesdk'}}]
```
### Description
I followed the instructions provided for GoogleDriveLoader and passed additional parameters indicating that the Google document should be parsed as elements so that I get header information for each text chunk. However, the loader does not work as expected. It still concatenates all plain text together into one LangChain Document.
What should I do to parse the header/section names in a Google document?
### System Info
langchain_community.__version__ == 0.0.19 | Google documents not parsed as elements | https://api.github.com/repos/langchain-ai/langchain/issues/17664/comments | 2 | 2024-02-17T00:30:21Z | 2024-06-08T16:10:05Z | https://github.com/langchain-ai/langchain/issues/17664 | 2,139,672,570 | 17,664 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
One cannot set params, in my particular case, to 0 for the VertexAI models as they evaluate to False and are ignored.
This is not a problem for PaLM models (i.e. `text-bison`) as the [default temperature](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text) is `0.0`, however this is an issue for Gemini Pro as the [default temperature for text](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/gemini) is 0.9.
MRE
```python
from langchain_google_vertexai import VertexAI
llm = VertexAI(model_name='gemini-pro', project='test', temperature=0.0)
print(llm._default_params)
```
You'll see that temperature is unset and so will use the Google API's default when generating.
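For illustration, a minimal sketch of the suspected pattern (my own assumption about the mechanism, not the actual library code): building the parameter dict with a truthiness check silently drops values that equal 0.
```python
# Hypothetical illustration - not the actual VertexAI code.
temperature = 0.0

params_falsy_check = {k: v for k, v in {"temperature": temperature}.items() if v}
print(params_falsy_check)   # {} - the value is silently dropped because 0.0 is falsy

params_none_check = {k: v for k, v in {"temperature": temperature}.items() if v is not None}
print(params_none_check)    # {'temperature': 0.0} - preserved as intended
```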
### Error Message and Stack Trace (if applicable)
_No response_
### Description
* I'm trying to set the temperature of `gemini-pro` to 0.0 and am unsuccessful, so it's using the default of 0.9.
This is a nefarious bug because users may not realise their temperature settings are being ignored...and typically a temperature of 0.0 is done for a very particular reason.
I am submitting a PR to fix this.
### System Info
Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.0.87
> langchain_google_vertexai: 0.0.5 | Can't set params to 0 on VertexAI models | https://api.github.com/repos/langchain-ai/langchain/issues/17658/comments | 1 | 2024-02-16T22:42:10Z | 2024-06-01T00:19:19Z | https://github.com/langchain-ai/langchain/issues/17658 | 2,139,587,257 | 17,658 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
In the quickstart docs [(here)](https://github.com/langchain-ai/langchain/blob/master/docs/docs/get_started/quickstart.mdx) , there is a grammatical error on line 61.
- Original:
`This allows you interact in a chat manner with this LLM, so it remembers previous questions.`
### Idea or request for content:
Grammer added:
`This allows you to interact in a chat manner with this LLM, so it remembers previous questions.`
There should be a "to" between "you" and "interact". | DOC: Grammatical Error in quickstart.mdx | https://api.github.com/repos/langchain-ai/langchain/issues/17657/comments | 1 | 2024-02-16T22:34:11Z | 2024-02-16T22:46:53Z | https://github.com/langchain-ai/langchain/issues/17657 | 2,139,579,508 | 17,657 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.chat_models import BedrockChat
b = BedrockChat(model_id="anthropic.claude-v2", model_kwargs={"temperature": 0.1})
```
### Error Message and Stack Trace (if applicable)
If no AWS env variables are set, you see:
> ValidationError: 1 validation error for BedrockChat
> __root__
> Could not load credentials to authenticate with AWS client. Please check that credentials in the specified profile name are valid. (type=value_error)
While it is actually:
> Did not find region_name, please add an environment variable `AWS_DEFAULT_REGION` which contains it, or pass `region_name` as a named parameter
### Description
If no env variables are set, you get a misleading error message.
LangChain is attempting to propagate the true root cause by using 'raise from'.
The problem is that this error happens inside pydantic validation, and pydantic effectively [re-wraps errors](https://github.com/pydantic/pydantic/blob/12ebdfc6790ab0c29cc8aefd1d97dd04603eb7cb/pydantic/v1/main.py#L1030), losing the __context__ and __cause__ info. Only the top-level error message is left.
That is occasionally misleading when the problem is not actually an AWS credentials mismatch.
### System Info
platform-independent | AWS errors propagation is broken in Bedrock classes constructors validation | https://api.github.com/repos/langchain-ai/langchain/issues/17654/comments | 1 | 2024-02-16T21:58:01Z | 2024-06-01T00:07:41Z | https://github.com/langchain-ai/langchain/issues/17654 | 2,139,543,299 | 17,654 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
loader = Docx2txtLoader(filename)
docs = loader.load()
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "C:\Users\MarkRamsey\PycharmProjects\ri_genaipro_source\RI_GenAIPro_Data_Pipeline.py", line 57, in <module>
from loaders_local import *
File "C:\Users\MarkRamsey\PycharmProjects\ri_genaipro_source\loaders_local.py", line 1, in <module>
from langchain_community.document_loaders import Docx2txtLoader
File "C:\Users\MarkRamsey\PycharmProjects\ri_genaipro_source\.venv\Lib\site-packages\langchain_community\document_loaders\__init__.py", line 163, in <module>
from langchain_community.document_loaders.pebblo import PebbloSafeLoader
File "C:\Users\MarkRamsey\PycharmProjects\ri_genaipro_source\.venv\Lib\site-packages\langchain_community\document_loaders\pebblo.py", line 5, in <module>
import pwd
ModuleNotFoundError: No module named 'pwd'
### Description
The document_loaders\pebblo.py module imports pwd, which is only available on Unix-like systems, so it fails on Windows.
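For illustration, a minimal sketch of a platform guard (an assumption about a possible fix, not the actual pebblo.py code): only import the Unix-only pwd module where it exists.
```python
# Sketch of a guarded import - not the actual pebblo.py implementation.
import sys

if sys.platform != "win32":
    import pwd  # available on Linux/macOS only
else:
    pwd = None  # on Windows, fall back to e.g. getpass.getuser() where a user name is needed
```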
### System Info
platform Windows 11
python 3.11.8 | Use of pwd in document loaders causes failure in Windows with langchain_community 0.0.20 | https://api.github.com/repos/langchain-ai/langchain/issues/17651/comments | 3 | 2024-02-16T20:20:11Z | 2024-04-27T21:50:46Z | https://github.com/langchain-ai/langchain/issues/17651 | 2,139,392,957 | 17,651 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
vectorstore = Pinecone.from_existing_index(index_name="primary", embedding=embedding)
vectorstore.as_retriever(search_kwargs={"score_threshold": .80})
```
### Error Message and Stack Trace (if applicable)
```python
TypeError: Pinecone.similarity_search_with_score() got an unexpected keyword argument 'score_threshold'
```
### Description
There is no score-threshold filter available for Pinecone in LangChain.
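A possible workaround sketch (an untested assumption on my side, not something confirmed for Pinecone): the base retriever interface exposes a `similarity_score_threshold` search type, which applies the threshold on relevance scores instead of forwarding `score_threshold` to `Pinecone.similarity_search_with_score`.
```python
# Untested sketch - assumes the vectorstore object from the example code above.
retriever = vectorstore.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.80},
)
docs = retriever.get_relevant_documents("some query")
```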
### System Info
Langchain v0.1.7
Python v3.11.5
Windows 10 | Pinecone No Score_Threshold Argument | https://api.github.com/repos/langchain-ai/langchain/issues/17650/comments | 7 | 2024-02-16T20:09:33Z | 2024-06-08T16:10:01Z | https://github.com/langchain-ai/langchain/issues/17650 | 2,139,369,620 | 17,650 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from pymilvus import connections, utility
import os

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["AZURE_OPENAI_ENDPOINT"] = "set this to end point"
os.environ["AZURE_OPENAI_API_KEY"] = "azure openai api key"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"

from langchain_openai import AzureOpenAIEmbeddings

embeddings = AzureOpenAIEmbeddings(
    azure_deployment="text-embedding-ada-002",
    openai_api_version="2023-05-15"
)

text = "This is a test query."
query_result = embeddings.embed_query(text)
print(query_result)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "test.py", line 14, in <module>
embeddings = AzureOpenAIEmbeddings(
File "../lib/python3.10/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for AzureOpenAIEmbeddings
__root__
As of openai>=1.0.0, if `deployment` (or alias `azure_deployment`) is specified then `openai_api_base` (or alias `base_url`) should not be. Instead use `deployment` (or alias `azure_deployment`) and `azure_endpoint`. (type=value_error)
### Description
I am trying to use the langchain_openai library for embedding, but I also need to import and use pymilvus. When I do, the embedding doesn't work; see the error. I use the exact code from the LangChain documentation website.
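A hypothetical variant I could try (an assumption on my side, not a confirmed fix): pass the endpoint explicitly as `azure_endpoint`, which is the combination the validation error says it expects, instead of relying only on environment variables that another import might also read or alter.
```python
# Hypothetical sketch - placeholder endpoint/key values, not a confirmed fix.
from langchain_openai import AzureOpenAIEmbeddings

embeddings = AzureOpenAIEmbeddings(
    azure_deployment="text-embedding-ada-002",
    openai_api_version="2023-05-15",
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder endpoint
    api_key="<azure openai api key>",                            # placeholder key
)
query_result = embeddings.embed_query("This is a test query.")
```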
### System Info
Name: langchain-openai
Version: 0.0.6
Name: pymilvus
Version: 2.3.6
| AzureOpenAIEmbeddings gives an error if pymilvus is imported before | https://api.github.com/repos/langchain-ai/langchain/issues/17646/comments | 2 | 2024-02-16T16:52:13Z | 2024-02-25T20:03:42Z | https://github.com/langchain-ai/langchain/issues/17646 | 2,138,993,189 | 17,646 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_to_split = 'any text can be put here if I am splitting from_tiktoken_encoder and have a chunk_overlap greater than 0 it will not work. The start_index metadata will have intermittant -1 values in it.'
text_splitter = RecursiveCharacterTextSplitter(length_function=len, is_separator_regex=False).from_tiktoken_encoder(
chunk_size=20, chunk_overlap=10,
)
split_texts = text_splitter.create_documents([text_to_split])
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Basically the error comes if you are splitting "from_tiktoken_encoder" rather than splitting by character count, and if you are specifying a chunk_overlap greater than 0. The error is caused by line 150 of text_splitter.py:
offset = index + previous_chunk_len - self._chunk_overlap
It won't calculate the correct offset because out self._chunk_overlap is specified as a token count, but that line in the code is calculating offset as a number of characters.
### System Info
aiohttp==3.9.3
aiosignal==1.3.1
anyio==3.5.0
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
asttokens==2.0.5
async-timeout==4.0.3
attrs==22.1.0
backcall==0.2.0
beautifulsoup4==4.11.1
black==22.6.0
bleach==4.1.0
blinker==1.4
boto3==1.24.28
botocore==1.27.96
certifi==2022.12.7
cffi==1.15.1
chardet==4.0.0
charset-normalizer==2.0.4
click==8.0.4
comm==0.1.2
contourpy==1.0.5
cryptography==39.0.1
cycler==0.11.0
Cython==0.29.32
databricks-sdk==0.1.6
dataclasses-json==0.6.4
dbus-python==1.2.18
debugpy==1.6.7
decorator==5.1.1
defusedxml==0.7.1
distlib==0.3.7
distro==1.7.0
distro-info==1.1+ubuntu0.2
docopt==0.6.2
docstring-to-markdown==0.11
entrypoints==0.4
executing==0.8.3
facets-overview==1.1.1
fastjsonschema==2.19.0
filelock==3.12.4
fonttools==4.25.0
frozenlist==1.4.1
googleapis-common-protos==1.61.0
greenlet==3.0.3
grpcio==1.48.2
grpcio-status==1.48.1
h11==0.14.0
httpcore==1.0.3
httplib2==0.20.2
httpx==0.26.0
idna==3.4
importlib-metadata==4.6.4
ipykernel==6.25.0
ipython==8.14.0
ipython-genutils==0.2.0
ipywidgets==7.7.2
jedi==0.18.1
jeepney==0.7.1
Jinja2==3.1.2
jmespath==0.10.0
joblib==1.2.0
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.17.3
jupyter-client==7.3.4
jupyter-server==1.23.4
jupyter_core==5.2.0
jupyterlab-pygments==0.1.2
jupyterlab-widgets==1.0.0
keyring==23.5.0
kiwisolver==1.4.4
langchain==0.1.7
langchain-community==0.0.20
langchain-core==0.1.23
langsmith==0.0.87
launchpadlib==1.10.16
lazr.restfulclient==0.14.4
lazr.uri==1.0.6
lxml==4.9.1
MarkupSafe==2.1.1
marshmallow==3.20.2
matplotlib==3.7.0
matplotlib-inline==0.1.6
mccabe==0.7.0
mistune==0.8.4
more-itertools==8.10.0
multidict==6.0.5
mypy-extensions==0.4.3
nbclassic==0.5.2
nbclient==0.5.13
nbconvert==6.5.4
nbformat==5.7.0
nest-asyncio==1.5.6
nodeenv==1.8.0
notebook==6.5.2
notebook_shim==0.2.2
num2words==0.5.13
numpy==1.23.5
oauthlib==3.2.0
openai==1.12.0
packaging==23.2
pandas==1.5.3
pandocfilters==1.5.0
parso==0.8.3
pathspec==0.10.3
patsy==0.5.3
pexpect==4.8.0
pickleshare==0.7.5
Pillow==9.4.0
platformdirs==2.5.2
plotly==5.9.0
pluggy==1.0.0
prometheus-client==0.14.1
prompt-toolkit==3.0.36
protobuf==4.24.0
psutil==5.9.0
psycopg2==2.9.3
ptyprocess==0.7.0
pure-eval==0.2.2
pyarrow==8.0.0
pyarrow-hotfix==0.5
pycparser==2.21
pydantic==1.10.6
pyflakes==3.1.0
Pygments==2.11.2
PyGObject==3.42.1
PyJWT==2.3.0
pyodbc==4.0.32
pyparsing==3.0.9
pyright==1.1.294
pyrsistent==0.18.0
python-apt==2.4.0+ubuntu2
python-dateutil==2.8.2
python-lsp-jsonrpc==1.1.1
python-lsp-server==1.8.0
pytoolconfig==1.2.5
pytz==2022.7
PyYAML==6.0.1
pyzmq==23.2.0
regex==2023.12.25
requests==2.28.1
rope==1.7.0
s3transfer==0.6.2
scikit-learn==1.1.1
scipy==1.10.0
seaborn==0.12.2
SecretStorage==3.3.1
Send2Trash==1.8.0
six==1.16.0
sniffio==1.2.0
soupsieve==2.3.2.post1
SQLAlchemy==2.0.27
ssh-import-id==5.11
stack-data==0.2.0
statsmodels==0.13.5
tenacity==8.1.0
terminado==0.17.1
threadpoolctl==2.2.0
tiktoken==0.6.0
tinycss2==1.2.1
tokenize-rt==4.2.1
tomli==2.0.1
tornado==6.1
tqdm==4.66.2
traitlets==5.7.1
typing-inspect==0.9.0
typing_extensions==4.9.0
ujson==5.4.0
unattended-upgrades==0.1
urllib3==1.26.14
virtualenv==20.16.7
wadllib==1.3.6
wcwidth==0.2.5
webencodings==0.5.1
websocket-client==0.58.0
whatthepatch==1.0.2
widgetsnbextension==3.6.1
yapf==0.33.0
yarl==1.9.4
zipp==1.0.0
| langchain.textsplitter "add_start_index" option broken for create_documents() when splitting text by token count rather than character count | https://api.github.com/repos/langchain-ai/langchain/issues/17642/comments | 1 | 2024-02-16T14:43:15Z | 2024-05-31T23:49:27Z | https://github.com/langchain-ai/langchain/issues/17642 | 2,138,764,620 | 17,642 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I tried to run code from langchain [doc](https://python.langchain.com/docs/integrations/vectorstores/faiss#similarity-search-with-filtering) where is called similarity search with filter, but the results are differend than in documentation.
```python
from langchain_community.vectorstores import FAISS
from langchain.schema import Document
list_of_documents = [
Document(page_content="foo", metadata=dict(page=1)),
Document(page_content="bar", metadata=dict(page=1)),
Document(page_content="foo", metadata=dict(page=2)),
Document(page_content="barbar", metadata=dict(page=2)),
Document(page_content="foo", metadata=dict(page=3)),
Document(page_content="bar burr", metadata=dict(page=3)),
Document(page_content="foo", metadata=dict(page=4)),
Document(page_content="bar bruh", metadata=dict(page=4)),
]
db = FAISS.from_documents(list_of_documents, embeddings)
results_with_scores = db.similarity_search_with_score("foo", filter=dict(page=1))
# Or with a callable:
# results_with_scores = db.similarity_search_with_score("foo", filter=lambda d: d["page"] == 1)
for doc, score in results_with_scores:
print(f"Content: {doc.page_content}, Metadata: {doc.metadata}, Score: {score}")
```
My results are same as results without filtering.
```
Content: foo, Metadata: {'page': 1}, Score: 5.159960813797904e-15
Content: foo, Metadata: {'page': 2}, Score: 5.159960813797904e-15
Content: foo, Metadata: {'page': 3}, Score: 5.159960813797904e-15
Content: foo, Metadata: {'page': 4}, Score: 5.159960813797904e-15
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm migrating from langchain==0.0.349 to new langchain 0.1.X and filtering worked just fine in version 0.0.349
### System Info
faiss-cpu==1.7.4
langchain==0.1.6
langchain-community==0.0.19
langchain-core==0.1.23
langchain-openai==0.0.6
windows 10 | FAISS vectorstore filter not working | https://api.github.com/repos/langchain-ai/langchain/issues/17633/comments | 4 | 2024-02-16T13:00:42Z | 2024-08-05T16:07:30Z | https://github.com/langchain-ai/langchain/issues/17633 | 2,138,581,516 | 17,633 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
``` python
from langchain.sql_database import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain
from langchain.chat_models import AzureChatOpenAI
DB = SQLDatabase.from_uri(SQLCONNECTION, schema=SQLSCHEMA, include_tables=[SQL_TBL1, SQL_TBL2])
LLM = AzureChatOpenAI(model=OPENAI_MODEL_NAME, temperature=0, openai_api_key=OPENAI_API_KEY,
openai_api_version=OPENAI_DEPLOYMENT_VERSION, azure_endpoint=OPENAI_DEPLOYMENT_ENDPOINT,
deployment_name=OPENAI_DEPLOYMENT_NAME)
db_chain = SQLDatabaseChain(llm=LLM, database=DB, verbose=True)
db_chain.run(USERQUESTION)
```
### Error Message and Stack Trace (if applicable)
N/A
### Description
Each time a question is asked, the SQL query is created and remains pending on the server even after the answer is returned to the user. The SQL query is only discarded after killing the console where the script runs.
### System Info
langchain==0.1.7
langchain-experimental==0.0.51
Windows 11 Pro
Python 3.11.4
| SQL connection remains active on the server | https://api.github.com/repos/langchain-ai/langchain/issues/17628/comments | 6 | 2024-02-16T11:49:24Z | 2024-05-20T14:40:39Z | https://github.com/langchain-ai/langchain/issues/17628 | 2,138,443,293 | 17,628 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.utilities import SQLDatabase
db = SQLDatabase.from_uri(URI)
```
### Error Message and Stack Trace (if applicable)
```
{
"name": "ImportError",
"message": "cannot import name 'string_types' from 'sqlalchemy.util.compat' (/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/util/compat.py)",
"stack": "---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[3], line 1
----> 1 db = SQLDatabase.from_uri(URI)
File ~/.local/lib/python3.10/site-packages/langchain_community/utilities/sql_database.py:133, in SQLDatabase.from_uri(cls, database_uri, engine_args, **kwargs)
131 \"\"\"Construct a SQLAlchemy engine from URI.\"\"\"
132 _engine_args = engine_args or {}
--> 133 return cls(create_engine(database_uri, **_engine_args), **kwargs)
File <string>:2, in create_engine(url, **kwargs)
File ~/.local/lib/python3.10/site-packages/sqlalchemy/util/deprecations.py:281, in deprecated_params.<locals>.decorate.<locals>.warned(fn, *args, **kwargs)
274 if m in kwargs:
275 _warn_with_version(
276 messages[m],
277 versions[m],
278 version_warnings[m],
279 stacklevel=3,
280 )
--> 281 return fn(*args, **kwargs)
File ~/.local/lib/python3.10/site-packages/sqlalchemy/engine/create.py:550, in create_engine(url, **kwargs)
546 u = _url.make_url(url)
548 u, plugins, kwargs = u._instantiate_plugins(kwargs)
--> 550 entrypoint = u._get_entrypoint()
551 _is_async = kwargs.pop(\"_is_async\", False)
552 if _is_async:
File ~/.local/lib/python3.10/site-packages/sqlalchemy/engine/url.py:758, in URL._get_entrypoint(self)
756 else:
757 name = self.drivername.replace(\"+\", \".\")
--> 758 cls = registry.load(name)
759 # check for legacy dialects that
760 # would return a module with 'dialect' as the
761 # actual class
762 if (
763 hasattr(cls, \"dialect\")
764 and isinstance(cls.dialect, type)
765 and issubclass(cls.dialect, Dialect)
766 ):
File ~/.local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py:372, in PluginLoader.load(self, name)
370 if impl.name == name:
371 self.impls[name] = impl.load
--> 372 return impl.load()
374 raise exc.NoSuchModuleError(
375 \"Can't load plugin: %s:%s\" % (self.group, name)
376 )
File /usr/local/lib/python3.10/importlib/metadata/__init__.py:171, in EntryPoint.load(self)
166 \"\"\"Load the entry point from its definition. If only a module
167 is indicated by the value, return that module. Otherwise,
168 return the named object.
169 \"\"\"
170 match = self.pattern.match(self.value)
--> 171 module = import_module(match.group('module'))
172 attrs = filter(None, (match.group('attr') or '').split('.'))
173 return functools.reduce(getattr, attrs, module)
File /usr/local/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
124 break
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:688, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:883, in exec_module(self, module)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File ~/.local/lib/python3.10/site-packages/snowflake/sqlalchemy/__init__.py:30
10 import importlib.metadata as importlib_metadata
12 from sqlalchemy.types import (
13 BIGINT,
14 BINARY,
(...)
27 VARCHAR,
28 )
---> 30 from . import base, snowdialect
31 from .custom_commands import (
32 AWSBucket,
33 AzureContainer,
(...)
42 PARQUETFormatter,
43 )
44 from .custom_types import (
45 ARRAY,
46 BYTEINT,
(...)
61 VARIANT,
62 )
File ~/.local/lib/python3.10/site-packages/snowflake/sqlalchemy/base.py:13
11 from sqlalchemy.sql import compiler, expression
12 from sqlalchemy.sql.elements import quoted_name
---> 13 from sqlalchemy.util.compat import string_types
15 from .custom_commands import AWSBucket, AzureContainer, ExternalStage
16 from .util import _set_connection_interpolate_empty_sequences
ImportError: cannot import name 'string_types' from 'sqlalchemy.util.compat' (/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/util/compat.py)"
}
```
### Description
When trying to connect to a DB with the SQLDatabase module, I get the error as shown.
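For context, the failing import happens inside `snowflake/sqlalchemy/base.py` (see the traceback above): `string_types` was removed from `sqlalchemy.util.compat` in SQLAlchemy 2.x, so the installed Snowflake dialect presumably predates SQLAlchemy 2 support. A hedged diagnostic sketch:
```python
# Sketch only: confirm the version mismatch between SQLAlchemy and the Snowflake dialect.
from importlib.metadata import version

print("sqlalchemy:", version("sqlalchemy"))                    # 2.0.27 in this report
print("snowflake-sqlalchemy:", version("snowflake-sqlalchemy"))

# Possible workarounds (assumptions, not verified here):
#   pip install "sqlalchemy<2"            # pin back to the 1.4.x line, or
#   pip install -U snowflake-sqlalchemy   # move to a release that supports SQLAlchemy 2.x
```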
### System Info
sqlalchemy-2.0.27
langchain-community 0.0.20
langchain 0.1.5 | ImportError: cannot import name 'string_types' from 'sqlalchemy.util.compat' | https://api.github.com/repos/langchain-ai/langchain/issues/17616/comments | 6 | 2024-02-16T04:17:58Z | 2024-02-27T20:18:02Z | https://github.com/langchain-ai/langchain/issues/17616 | 2,137,842,219 | 17,616 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The issues are described in the description. The following code will work to reproduce (1) as long as an index called `azure-search` doesn't already exist. To reproduce (2), the import for `VectorSearch` should be added back in [azuresearch.py](https://github.com/langchain-ai/langchain/pull/15659/files#diff-b691fd57bb6a6d89396c11c8d198f361be2f53e19ef4059904232cd3b5698b77L92) and then the code should be run again.
```python
import os
from langchain_openai import AzureOpenAIEmbeddings
from langchain_community.vectorstores.azuresearch import AzureSearch
embeddings = AzureOpenAIEmbeddings(
deployment=os.environ["AZURE_EMBEDDINGS_DEPLOYMENT"],
chunk_size=1
)
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
azure_search_key=os.environ["AZURE_SEARCH_KEY"],
index_name="azure-search",
embedding_function=embeddings.embed_query,
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
In this PR: https://github.com/langchain-ai/langchain/pull/15659, the AzureSearch vectorstore was updated to work with the latest stable azure-search-documents. In the process, this introduced a few regressions.
1. In the code [here](https://github.com/langchain-ai/langchain/pull/15659/files#diff-b691fd57bb6a6d89396c11c8d198f361be2f53e19ef4059904232cd3b5698b77L92), the import for `VectorSearch` was removed. If a search index needs to be created, we run into the following error:
```txt
Traceback (most recent call last):
File "/home/krpratic/langchain/repro_bug.py", line 14, in <module>
vector_store: AzureSearch = AzureSearch(
^^^^^^^^^^^^
File "/home/krpratic/langchain/libs/community/langchain_community/vectorstores/azuresearch.py", line 270, in __init__
self.client = _get_search_client(
^^^^^^^^^^^^^^^^^^^
File "/home/krpratic/langchain/libs/community/langchain_community/vectorstores/azuresearch.py", line 145, in _get_search_client
vector_search = VectorSearch(
^^^^^^^^^^^^
NameError: name 'VectorSearch' is not defined. Did you mean: 'vector_search'?
```
2. If I edit the code in (1) to add the import for `VectorSearch` back and re-run it, I get the following error:
```text
(InvalidRequestParameter) The request is invalid. Details: definition : The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set.
```
This is due to a change from the beta --> stable version of azure-search-documents where `vector_search_configuration` was renamed to `vector_search_profile_name`: [changelog](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/search/azure-search-documents/CHANGELOG.md?plain=1#L96) + [code](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/search/azure-search-documents/azure/search/documents/indexes/models/_index.py#L212). To fix, the code should be updated to `vector_search_profile_name="myHnswProfile"` to point to the name of the vector search profile that specifies the algorithm to use when searching the vector field:
```python
SearchField(
name=FIELDS_CONTENT_VECTOR,
type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True,
vector_search_dimensions=len(self.embed_query("Text")),
vector_search_profile_name="myHnswProfile",
),
```
### System Info
langchain-cli==0.0.21
langchain-core==0.1.23
langchain-openai==0.0.6
azure-search-documents==11.4.0
linux; (Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35)
Python v3.11 | regressions with AzureSearch vectorstore update to v11.4.0 | https://api.github.com/repos/langchain-ai/langchain/issues/17598/comments | 1 | 2024-02-15T22:24:24Z | 2024-02-16T17:11:47Z | https://github.com/langchain-ai/langchain/issues/17598 | 2,137,548,937 | 17,598 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
using `langchain_openai.ChatOpenAI` works without any problems when using [Ollama OpenAI Compatible API](https://registry.ollama.ai/blog/openai-compatibility)
``` python
from langchain_openai import ChatOpenAI, OpenAI
llm = ChatOpenAI(
temperature=0,
model_name="phi",
openai_api_base="http://localhost:11434/v1",
openai_api_key="Not needed for local server",
)
print(llm.invoke("Hello, how are you?").content)
```
result:
```
I'm doing well, thank you for asking. How can I assist you today?
```
However, using the same code with `langchain_openai.OpenAI` results in an error:
``` python
from langchain_openai import ChatOpenAI, OpenAI
llm = OpenAI(
temperature=0,
model_name="phi",
openai_api_base="http://localhost:11434/v1",
openai_api_key="Not needed for local server",
)
print(llm.invoke("Hello, how are you?").content)
```
results in
```
NotFoundError: 404 page not found
```
I checked that there is no problem with `Ollama` itself or with localhost; I repeated the same experiment many times, and it always worked with `ChatOpenAI` and never worked with `OpenAI`.
### Error Message and Stack Trace (if applicable)
```
{
"name": "NotFoundError",
"message": "404 page not found",
"stack": "---------------------------------------------------------------------------
NotFoundError Traceback (most recent call last)
Cell In[3], line 8
1 from langchain_openai import ChatOpenAI, OpenAI
2 llm = OpenAI(
3 temperature=0,
4 model_name=\"phi\",
5 openai_api_base=\"http://localhost:11434/v1\",
6 openai_api_key=\"Not needed for local server\",
7 )
----> 8 print(llm.invoke(\"Hello, how are you?\").content)
File ~/miniconda3/envs/main/lib/python3.11/site-packages/langchain_core/language_models/llms.py:273, in BaseLLM.invoke(self, input, config, stop, **kwargs)
263 def invoke(
264 self,
265 input: LanguageModelInput,
(...)
269 **kwargs: Any,
270 ) -> str:
271 config = ensure_config(config)
272 return (
--> 273 self.generate_prompt(
274 [self._convert_input(input)],
275 stop=stop,
276 callbacks=config.get(\"callbacks\"),
277 tags=config.get(\"tags\"),
278 metadata=config.get(\"metadata\"),
279 run_name=config.get(\"run_name\"),
280 **kwargs,
281 )
282 .generations[0][0]
283 .text
284 )
File ~/miniconda3/envs/main/lib/python3.11/site-packages/langchain_core/language_models/llms.py:568, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
560 def generate_prompt(
561 self,
562 prompts: List[PromptValue],
(...)
565 **kwargs: Any,
566 ) -> LLMResult:
567 prompt_strings = [p.to_string() for p in prompts]
--> 568 return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File ~/miniconda3/envs/main/lib/python3.11/site-packages/langchain_core/language_models/llms.py:741, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs)
725 raise ValueError(
726 \"Asked to cache, but no cache found at `langchain.cache`.\"
727 )
728 run_managers = [
729 callback_manager.on_llm_start(
730 dumpd(self),
(...)
739 )
740 ]
--> 741 output = self._generate_helper(
742 prompts, stop, run_managers, bool(new_arg_supported), **kwargs
743 )
744 return output
745 if len(missing_prompts) > 0:
File ~/miniconda3/envs/main/lib/python3.11/site-packages/langchain_core/language_models/llms.py:605, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
603 for run_manager in run_managers:
604 run_manager.on_llm_error(e, response=LLMResult(generations=[]))
--> 605 raise e
606 flattened_outputs = output.flatten()
607 for manager, flattened_output in zip(run_managers, flattened_outputs):
File ~/miniconda3/envs/main/lib/python3.11/site-packages/langchain_core/language_models/llms.py:592, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
582 def _generate_helper(
583 self,
584 prompts: List[str],
(...)
588 **kwargs: Any,
589 ) -> LLMResult:
590 try:
591 output = (
--> 592 self._generate(
593 prompts,
594 stop=stop,
595 # TODO: support multiple run managers
596 run_manager=run_managers[0] if run_managers else None,
597 **kwargs,
598 )
599 if new_arg_supported
600 else self._generate(prompts, stop=stop)
601 )
602 except BaseException as e:
603 for run_manager in run_managers:
File ~/miniconda3/envs/main/lib/python3.11/site-packages/langchain_openai/llms/base.py:340, in BaseOpenAI._generate(self, prompts, stop, run_manager, **kwargs)
328 choices.append(
329 {
330 \"text\": generation.text,
(...)
337 }
338 )
339 else:
--> 340 response = self.client.create(prompt=_prompts, **params)
341 if not isinstance(response, dict):
342 # V1 client returns the response in an PyDantic object instead of
343 # dict. For the transition period, we deep convert it to dict.
344 response = response.dict()
File ~/miniconda3/envs/main/lib/python3.11/site-packages/openai/_utils/_utils.py:275, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
273 msg = f\"Missing required argument: {quote(missing[0])}\"
274 raise TypeError(msg)
--> 275 return func(*args, **kwargs)
File ~/miniconda3/envs/main/lib/python3.11/site-packages/openai/resources/completions.py:506, in Completions.create(self, model, prompt, best_of, echo, frequency_penalty, logit_bias, logprobs, max_tokens, n, presence_penalty, seed, stop, stream, suffix, temperature, top_p, user, extra_headers, extra_query, extra_body, timeout)
478 @required_args([\"model\", \"prompt\"], [\"model\", \"prompt\", \"stream\"])
479 def create(
480 self,
(...)
504 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
505 ) -> Completion | Stream[Completion]:
--> 506 return self._post(
507 \"/completions\",
508 body=maybe_transform(
509 {
510 \"model\": model,
511 \"prompt\": prompt,
512 \"best_of\": best_of,
513 \"echo\": echo,
514 \"frequency_penalty\": frequency_penalty,
515 \"logit_bias\": logit_bias,
516 \"logprobs\": logprobs,
517 \"max_tokens\": max_tokens,
518 \"n\": n,
519 \"presence_penalty\": presence_penalty,
520 \"seed\": seed,
521 \"stop\": stop,
522 \"stream\": stream,
523 \"suffix\": suffix,
524 \"temperature\": temperature,
525 \"top_p\": top_p,
526 \"user\": user,
527 },
528 completion_create_params.CompletionCreateParams,
529 ),
530 options=make_request_options(
531 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
532 ),
533 cast_to=Completion,
534 stream=stream or False,
535 stream_cls=Stream[Completion],
536 )
File ~/miniconda3/envs/main/lib/python3.11/site-packages/openai/_base_client.py:1200, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1186 def post(
1187 self,
1188 path: str,
(...)
1195 stream_cls: type[_StreamT] | None = None,
1196 ) -> ResponseT | _StreamT:
1197 opts = FinalRequestOptions.construct(
1198 method=\"post\", url=path, json_data=body, files=to_httpx_files(files), **options
1199 )
-> 1200 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File ~/miniconda3/envs/main/lib/python3.11/site-packages/openai/_base_client.py:889, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
880 def request(
881 self,
882 cast_to: Type[ResponseT],
(...)
887 stream_cls: type[_StreamT] | None = None,
888 ) -> ResponseT | _StreamT:
--> 889 return self._request(
890 cast_to=cast_to,
891 options=options,
892 stream=stream,
893 stream_cls=stream_cls,
894 remaining_retries=remaining_retries,
895 )
File ~/miniconda3/envs/main/lib/python3.11/site-packages/openai/_base_client.py:980, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
977 err.response.read()
979 log.debug(\"Re-raising status error\")
--> 980 raise self._make_status_error_from_response(err.response) from None
982 return self._process_response(
983 cast_to=cast_to,
984 options=options,
(...)
987 stream_cls=stream_cls,
988 )
NotFoundError: 404 page not found"
}
```
### Description
Using `langchain_openai.ChatOpenAI` works without any problems when using the [Ollama OpenAI Compatible API](https://registry.ollama.ai/blog/openai-compatibility).
However, using the same code with `langchain_openai.OpenAI` results in an error.
I checked that there is no problem with `Ollama` itself or with localhost; I repeated the same experiment many times, and it always worked with `ChatOpenAI` and never worked with `OpenAI`.
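For context, `ChatOpenAI` calls `/v1/chat/completions`, while `OpenAI` calls the legacy `/v1/completions` endpoint, which Ollama's OpenAI-compatible layer may not serve; that would explain the 404. A hedged workaround sketch using the native Ollama wrapper:
```python
# Sketch only: talk to Ollama through its native API instead of the legacy completions endpoint.
from langchain_community.llms import Ollama

llm = Ollama(model="phi", base_url="http://localhost:11434")
print(llm.invoke("Hello, how are you?"))  # returns a plain string, no .content needed
```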
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Jan 11 04:09:03 UTC 2024
> Python Version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.1.22
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.0.87
> langchain_openai: 0.0.6
> langchainplus_sdk: 0.0.20
``` | ChatOpenAI and OpenAI give different behaviors when using local openai_api_base | https://api.github.com/repos/langchain-ai/langchain/issues/17596/comments | 1 | 2024-02-15T21:58:34Z | 2024-05-08T22:51:59Z | https://github.com/langchain-ai/langchain/issues/17596 | 2,137,519,690 | 17,596 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
below is the code
```
child_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=1200, chunk_overlap=300)
vectorstore = Chroma(
collection_name="full_documents", embedding_function=embeddings)
store = InMemoryStore()
retriever = ParentDocumentRetriever(
vectorstore=vectorstore,
docstore=store,
child_splitter=child_splitter,
parent_splitter=parent_splitter
)
retriever.add_documents(documents, ids=None)
docss = retriever.get_relevant_documents("data related to cricket")
```
`print(docss)`
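For reference, `ParentDocumentRetriever` builds on `MultiVectorRetriever`, which accepts a `search_kwargs` dict; a hedged sketch of passing a larger `k` for the child-chunk search (reusing the names above) is shown below. Whether that alone surfaces more parent documents is exactly the open question here, since several child chunks can map back to the same parent.
```python
# Sketch only: forward a larger k to the underlying child-chunk similarity search.
retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
    search_kwargs={"k": 10},  # assumption: forwarded to the vector store search
)
```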
it is only returning 1 output. How to retrieve topk documents? | how to get topk relevant documents using retriever with ParentDocumentRetriever? | https://api.github.com/repos/langchain-ai/langchain/issues/17589/comments | 1 | 2024-02-15T19:14:05Z | 2024-02-15T22:10:37Z | https://github.com/langchain-ai/langchain/issues/17589 | 2,137,248,520 | 17,589 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
When you instantiate an AzureSearch instance with a search type of `semantic_hybrid`, and the metadata field is not in the vector store's `fields` list or in the index, the `semantic_hybrid_search_with_score_and_rerank` method fails.
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/Users/carlos/PycharmProjects/project/backend/api/app/controllers/session/send_message.py", line 76, in send_message
response = agent({"input": prompt})
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/carlos/PycharmProjects/project/backend/api/services/agent.py", line 166, in __call__
return self._executor.invoke(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/carlos/PycharmProjects/project/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 162, in invoke
raise e
File "/Users/carlos/PycharmProjects/project/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/carlos/PycharmProjects/project/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1391, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/carlos/PycharmProjects/project/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1097, in _take_next_step
[
File "/Users/carlos/PycharmProjects/project/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1097, in <listcomp>
[
File "/Users/carlos/PycharmProjects/project/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1182, in _iter_next_step
yield self._perform_agent_action(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/carlos/PycharmProjects/project/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1204, in _perform_agent_action
observation = tool.run(
^^^^^^^^^
File "/Users/carlos/PycharmProjects/project/.venv/lib/python3.11/site-packages/langchain_core/tools.py", line 401, in run
raise e
File "/Users/carlos/PycharmProjects/project/.venv/lib/python3.11/site-packages/langchain_core/tools.py", line 358, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/Users/carlos/PycharmProjects/project/backend/api/services/agent.py", line 254, in _run
docs = self.vectorstore.semantic_hybrid_search(query=query, k=4)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/carlos/PycharmProjects/project/.venv/lib/python3.11/site-packages/langchain_community/vectorstores/azuresearch.py", line 520, in semantic_hybrid_search
docs_and_scores = self.semantic_hybrid_search_with_score_and_rerank(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/carlos/PycharmProjects/project/.venv/lib/python3.11/site-packages/langchain_community/vectorstores/azuresearch.py", line 582, in semantic_hybrid_search_with_score_and_rerank
docs = [
^
File "/Users/carlos/PycharmProjects/project/.venv/lib/python3.11/site-packages/langchain_community/vectorstores/azuresearch.py", line 606, in <listcomp>
json.loads(result["metadata"]).get("key"),
~~~~~~^^^^^^^^^^^^
KeyError: 'metadata'
```
### Description
A fix was introduced in PR #15642 and the bug was re-introduced in PR #15659.
For this to work the method should look like this:
```python
def semantic_hybrid_search_with_score_and_rerank(
self, query: str, k: int = 4, filters: Optional[str] = None
) -> List[Tuple[Document, float, float]]:
"""Return docs most similar to query with an hybrid query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query and score for each
"""
from azure.search.documents.models import VectorizedQuery
results = self.client.search(
search_text=query,
vector_queries=[
VectorizedQuery(
vector=np.array(self.embed_query(query), dtype=np.float32).tolist(),
k_nearest_neighbors=k,
fields=FIELDS_CONTENT_VECTOR,
)
],
filter=filters,
query_type="semantic",
semantic_configuration_name=self.semantic_configuration_name,
query_caption="extractive",
query_answer="extractive",
top=k,
)
# Get Semantic Answers
semantic_answers = results.get_answers() or []
semantic_answers_dict: Dict = {}
for semantic_answer in semantic_answers:
semantic_answers_dict[semantic_answer.key] = {
"text": semantic_answer.text,
"highlights": semantic_answer.highlights,
}
# Convert results to Document objects
docs = [
(
Document(
page_content=result.pop(FIELDS_CONTENT),
metadata={
**(
json.loads(result[FIELDS_METADATA])
if FIELDS_METADATA in result
else {
k: v
for k, v in result.items()
if k != FIELDS_CONTENT_VECTOR
}
),
**{
"captions": {
"text": result.get("@search.captions", [{}])[0].text,
"highlights": result.get("@search.captions", [{}])[
0
].highlights,
}
if result.get("@search.captions")
else {},
"answers": semantic_answers_dict.get(
json.loads(result[FIELDS_METADATA]).get("key")
if FIELDS_METADATA in result
else "",
"",
),
},
},
),
float(result["@search.score"]),
float(result["@search.reranker_score"]),
)
for result in results
]
return docs
```
### System Info
Python 3.11
Langchain 0.1.7 | AzureSearch.semantic_hybrid_search_with_score_and_rerank not working when METADATA_FIELD not in index | https://api.github.com/repos/langchain-ai/langchain/issues/17587/comments | 2 | 2024-02-15T19:02:19Z | 2024-07-01T16:05:09Z | https://github.com/langchain-ai/langchain/issues/17587 | 2,137,228,317 | 17,587 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
na
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The latest version of langchain_community (0.0.20) raises a "no module named 'pwd'" error (`ModuleNotFoundError`) when using `TextLoader`.
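For context, `pwd` is a Unix-only standard-library module, so the error is presumably a `ModuleNotFoundError: No module named 'pwd'` raised on Windows when some loader module imports it unconditionally. A minimal, hedged repro sketch (the file name is a placeholder):
```python
# Sketch only: on Windows this import alone should surface the error if a
# Unix-only `pwd` import sits on the TextLoader import path.
from langchain_community.document_loaders import TextLoader

docs = TextLoader("example.txt", encoding="utf-8").load()
print(docs[0].page_content[:100])
```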
### System Info
Running on windows and using TEXTLOADER | Latest langchain_community is giving an error "No MODULE PWD" while using TEXTLOADER | https://api.github.com/repos/langchain-ai/langchain/issues/17585/comments | 2 | 2024-02-15T17:37:11Z | 2024-06-01T00:07:41Z | https://github.com/langchain-ai/langchain/issues/17585 | 2,137,088,483 | 17,585 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10)
hf = HuggingFacePipeline(pipeline=pipe)
from langchain.prompts import PromptTemplate
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)
chain = prompt | hf
question = "What is electroencephalography?"
print(chain.invoke({"question": question}))
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224157532d4543322d41492d32227d.vscode-resource.vscode-cdn.net/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend) Optimized/test.ipynb Cell 2 line 1
[12](vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a224157532d4543322d41492d32227d/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=11) template = """Question: {question}
[13](vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a224157532d4543322d41492d32227d/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=12)
[14](vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a224157532d4543322d41492d32227d/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=13) Answer: Let's think step by step."""
[15](vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a224157532d4543322d41492d32227d/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=14) prompt = PromptTemplate.from_template(template)
---> [17](vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a224157532d4543322d41492d32227d/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=16) chain = prompt | hf
[19](vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a224157532d4543322d41492d32227d/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=18) question = "What is electroencephalography?"
[21](vscode-notebook-cell://ssh-remote%2B7b22686f73744e616d65223a224157532d4543322d41492d32227d/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/test.ipynb#W2sdnNjb2RlLXJlbW90ZQ%3D%3D?line=20) print(chain.invoke({"question": question}))
File [~/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py:436](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224157532d4543322d41492d32227d.vscode-resource.vscode-cdn.net/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/~/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py:436), in Runnable.__ror__(self, other)
[426](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224157532d4543322d41492d32227d.vscode-resource.vscode-cdn.net/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/~/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py:426) def __ror__(
[427](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224157532d4543322d41492d32227d.vscode-resource.vscode-cdn.net/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/~/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py:427) self,
[428](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224157532d4543322d41492d32227d.vscode-resource.vscode-cdn.net/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/~/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py:428) other: Union[
(...)
[433](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224157532d4543322d41492d32227d.vscode-resource.vscode-cdn.net/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/~/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py:433) ],
[434](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224157532d4543322d41492d32227d.vscode-resource.vscode-cdn.net/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/~/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py:434) ) -> RunnableSerializable[Other, Output]:
[435](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224157532d4543322d41492d32227d.vscode-resource.vscode-cdn.net/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/~/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py:435) """Compose this runnable with another object to create a RunnableSequence."""
--> [436](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224157532d4543322d41492d32227d.vscode-resource.vscode-cdn.net/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/~/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py:436) return RunnableSequence(coerce_to_runnable(other), self)
File [~/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py:4370](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224157532d4543322d41492d32227d.vscode-resource.vscode-cdn.net/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/~/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py:4370), in coerce_to_runnable(thing)
[4368](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224157532d4543322d41492d32227d.vscode-resource.vscode-cdn.net/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/~/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py:4368) return cast(Runnable[Input, Output], RunnableParallel(thing))
[4369](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224157532d4543322d41492d32227d.vscode-resource.vscode-cdn.net/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/~/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py:4369) else:
-> [4370](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224157532d4543322d41492d32227d.vscode-resource.vscode-cdn.net/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/~/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py:4370) raise TypeError(
[4371](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224157532d4543322d41492d32227d.vscode-resource.vscode-cdn.net/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/~/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py:4371) f"Expected a Runnable, callable or dict."
[4372](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224157532d4543322d41492d32227d.vscode-resource.vscode-cdn.net/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/~/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py:4372) f"Instead got an unsupported type: {type(thing)}"
[4373](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a224157532d4543322d41492d32227d.vscode-resource.vscode-cdn.net/mnt/efs/fs2/LangChain-in-Kubernetes/chat-backend%20Optimized/~/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py:4373) )
TypeError: Expected a Runnable, callable or dict.Instead got an unsupported type: <class 'langchain.prompts.prompt.PromptTemplate'>
```
### Description
I've tried to replicate the example on the site
https://python.langchain.com/docs/integrations/llms/huggingface_pipelines
I installed all the dependencies, and I still get the error shown above.
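The TypeError says the `PromptTemplate` class being picked up is not a Runnable, which usually points to an old or duplicate `langchain` install shadowing the current one (the `langchainplus-sdk` pin below hints at leftovers). A hedged sketch that imports the prompt from `langchain_core` to rule out a stale module; `hf` is the HuggingFacePipeline built above:
```python
# Sketch only: use the langchain_core PromptTemplate so the | composition is guaranteed
# to see a Runnable, regardless of what an old langchain package exposes.
from langchain_core.prompts import PromptTemplate

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)

chain = prompt | hf
print(chain.invoke({"question": "What is electroencephalography?"}))
```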
### System Info
langchain==0.1.7
langchain-community==0.0.20
langchain-core==0.1.23
langchainplus-sdk==0.0.20
Python 3.9.16
Platform RHEL & CenOS | Hugging Face Local Pipelines EXAMPLE NOT WORKING ON CENTOS | https://api.github.com/repos/langchain-ai/langchain/issues/17584/comments | 4 | 2024-02-15T17:28:25Z | 2024-06-01T00:07:26Z | https://github.com/langchain-ai/langchain/issues/17584 | 2,137,074,153 | 17,584 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Unable to retrieve topk relevant documents using ParentDocumentRetriever
### Idea or request for content:
below is the code
```
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
child_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=1200, chunk_overlap=300)
vectorstore = Chroma(
collection_name="full_documents", embedding_function=embeddings)
store = InMemoryStore()
retriever = ParentDocumentRetriever(
vectorstore=vectorstore,
docstore=store,
child_splitter=child_splitter,
parent_splitter=parent_splitter
)
retriever.add_documents(documents, ids=None)
retriever.invoke("data related to cricket")
```
in the above code, if you see i wrote a code where ParentDocumentRetriever will invoke. And it is returning only 1 document. How to get topk documents using ParentDocumenRetriever? | how to get topk retrievals for ParentDocumentRetriever using Chroma? | https://api.github.com/repos/langchain-ai/langchain/issues/17583/comments | 3 | 2024-02-15T17:07:34Z | 2024-02-15T22:10:36Z | https://github.com/langchain-ai/langchain/issues/17583 | 2,137,036,523 | 17,583 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I created three different sets of examples and, for each of them, a corresponding example selector:
```
self.example_selector = SemanticSimilarityExampleSelector.from_examples(
examples, # one of the three subset
HuggingFaceEmbeddings(),
Chroma,
k=5,
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to manage three different SemanticSimilarityExampleSelectors. Ideally, each one of them has its own set of examples to choose from; I do NOT want to mix them. However, using the code I provided, Chroma mixes them and the few-shot prompting breaks.
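A hedged sketch: `from_examples` forwards extra keyword arguments to the vector-store constructor (`Chroma.from_texts` here), so each selector can be given its own collection and the example sets stay separate. The variable names `examples_a`/`examples_b` and the collection names are made up:
```python
# Sketch only: one Chroma collection per example set keeps the selectors independent.
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma

selector_a = SemanticSimilarityExampleSelector.from_examples(
    examples_a,
    HuggingFaceEmbeddings(),
    Chroma,
    k=5,
    collection_name="examples_a",  # forwarded to Chroma.from_texts
)
selector_b = SemanticSimilarityExampleSelector.from_examples(
    examples_b,
    HuggingFaceEmbeddings(),
    Chroma,
    k=5,
    collection_name="examples_b",
)
```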
### System Info
langchain 0.1.6 | Managing multiple vector stores separately | https://api.github.com/repos/langchain-ai/langchain/issues/17580/comments | 2 | 2024-02-15T15:53:14Z | 2024-06-24T16:07:26Z | https://github.com/langchain-ai/langchain/issues/17580 | 2,136,882,477 | 17,580 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
below is the code, which will add the data into chroma and define the retriever
```
%%time
# query = 'how many are injured and dead in christchurch Mosque?'
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
child_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=1200, chunk_overlap=300)
vectorstore = Chroma(
collection_name="full_documents", embedding_function=embeddings)
store = InMemoryStore()
retriever = ParentDocumentRetriever(
vectorstore=vectorstore,
docstore=store,
child_splitter=child_splitter,
parent_splitter=parent_splitter
)
retriever.add_documents(documents, ids=None)
```
How can I use a FAISS DB instead of Chroma, and use the retriever to get relevant documents like below?
```
vectorstore = FAISS.from_documents(documents, openai)
retriever = vectorstore.as_retriever(search_kwargs={"k": 10})
docs = retriever.get_relevant_documents("data related to cricket?")
```
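For reference, a hedged sketch of what the FAISS variant might look like, reusing `documents`, `embeddings`, and the splitters defined above, and assuming the `faiss` package is installed. Unlike Chroma, FAISS has no empty-collection constructor, so the index is created explicitly first:
```python
# Sketch only: build an empty FAISS index up front, then let the retriever add documents.
import faiss
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain_community.docstore.in_memory import InMemoryDocstore
from langchain_community.vectorstores import FAISS

dim = len(embeddings.embed_query("test"))
vectorstore = FAISS(
    embedding_function=embeddings,
    index=faiss.IndexFlatL2(dim),
    docstore=InMemoryDocstore(),
    index_to_docstore_id={},
)
store = InMemoryStore()
retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
    search_kwargs={"k": 10},
)
retriever.add_documents(documents, ids=None)
docs = retriever.get_relevant_documents("data related to cricket?")
```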
can you help me with the code? | how to use FAISS for ParentDocumentRetriever for retrieving the documents? | https://api.github.com/repos/langchain-ai/langchain/issues/17576/comments | 2 | 2024-02-15T14:26:13Z | 2024-02-15T14:58:47Z | https://github.com/langchain-ai/langchain/issues/17576 | 2,136,666,207 | 17,576 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
agent = OpenAIAssistantRunnable.create_assistant(
    name="Weather assistant",
    instructions="Very helpful assistant on any topic, but when it comes to weather uses the get weather on planet function",
    tools=tools,
    model="gpt-4-1106-preview",
    as_agent=True,
)

from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(agent=agent, tools=tools)

r = agent_executor.invoke({"content": "What's the weather in Mars", "additional_instructions": "Always address the user as king Daniiar"})  # additional_instructions is not getting passed through
print(r)

r = agent_executor.invoke({"content": "how do you address me?", "thread_id": "thread_IMlTopZtP9NarAuxXO3Jf9RU"})
print(r)
```
### Error Message and Stack Trace (if applicable)
```python
    def _create_run(self, input: dict) -> Any:
        params = {
            k: v
            for k, v in input.items()
            if k in ("instructions", "model", "tools", "run_metadata")
        }
        return self.client.beta.threads.runs.create(
            input["thread_id"],
            assistant_id=self.assistant_id,
            **params,
        )
```
This code in `langchain.agents.openai_assistant.base.OpenAIAssistantRunnable` does not support the `additional_instructions` parameter.
The parameter exists in the OpenAI API: https://platform.openai.com/docs/api-reference/runs/createRun#runs-createrun-additional_instructions
### Description
I need the `additional_instructions` parameter, which is available in the OpenAI SDK but not exposed by LangChain.
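Until that is supported upstream, a hedged workaround sketch (not part of the library) is a small subclass whose `_create_run` also forwards `additional_instructions`:
```python
# Sketch only: forward additional_instructions alongside the keys the base class already passes.
from typing import Any

from langchain.agents.openai_assistant import OpenAIAssistantRunnable


class PatchedAssistantRunnable(OpenAIAssistantRunnable):
    def _create_run(self, input: dict) -> Any:
        params = {
            k: v
            for k, v in input.items()
            if k in ("instructions", "additional_instructions", "model", "tools", "run_metadata")
        }
        return self.client.beta.threads.runs.create(
            input["thread_id"],
            assistant_id=self.assistant_id,
            **params,
        )


# Use it with an existing assistant id, e.g.:
# agent = PatchedAssistantRunnable(assistant_id="asst_...", as_agent=True)
```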
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 21.3.0: Wed Jan 5 21:37:58 PST 2022; root:xnu-8019.80.24~20/RELEASE_ARM64_T6000
> Python Version: 3.11.6 (main, Nov 2 2023, 04:39:40) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.0.87
> langchain_openai: 0.0.6
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | OPenAI Assistnat does not support additional_instructions parameter on create_run method. | https://api.github.com/repos/langchain-ai/langchain/issues/17574/comments | 1 | 2024-02-15T13:32:44Z | 2024-06-01T00:08:34Z | https://github.com/langchain-ai/langchain/issues/17574 | 2,136,539,036 | 17,574 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain_community.vectorstores import Pinecone
from langchain_openai import ChatOpenAI
from langchain_openai import OpenAIEmbeddings
from langchain.chains import ConversationalRetrievalChain
pc = pinecone.Pinecone(api_key=secret['PINECONE_API_KEY'],
environment=secret['PINECONE_ENV'])
index = pc.Index(secret['PINECONE_INDEX_NAME'])
embeddings = OpenAIEmbeddings(secret['OPENAI_API_KEY'])
model = ChatOpenAI(model_name='gpt-4-turbo-preview')
docsearch = Pinecone.from_existing_index(index_name=secret['PINECONE_INDEX_NAME'], embedding=embeddings, namespace=secret['PINECONE_NAMESPACE']), search_kwargs = {'k': 25, 'namespace': secret['PINECONE_NAMESPACE']}
retriever = docsearch.as_retriever(namespace=secret['PINECONE_NAMESPACE'], search_kwargs=search_kwargs)
qa = ConversationalRetrievalChain.from_llm(llm=model,retriever=retriever)
qa({'question': prompt, 'chat_history': chat})
```
### Error Message and Stack Trace (if applicable)
```
2024-02-15 12:26:09 Traceback (most recent call last):
2024-02-15 12:26:09 File "/app/app.py", line 66, in respond
2024-02-15 12:26:09 top_k_result = pinecone_qa({'question': prompt, 'chat_history': chat})
2024-02-15 12:26:09 File "/usr/local/lib/python3.10/site-packages/ddtrace/contrib/trace_utils.py", line 343, in wrapper
2024-02-15 12:26:09 return func(mod, pin, wrapped, instance, args, kwargs)
2024-02-15 12:26:09 File "/usr/local/lib/python3.10/site-packages/ddtrace/contrib/langchain/patch.py", line 521, in traced_chain_call
2024-02-15 12:26:09 final_outputs = func(*args, **kwargs)
2024-02-15 12:26:09 File "/usr/local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
2024-02-15 12:26:09 return wrapped(*args, **kwargs)
2024-02-15 12:26:09 File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 363, in __call__
2024-02-15 12:26:09 return self.invoke(
2024-02-15 12:26:09 File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 162, in invoke
2024-02-15 12:26:09 raise e
2024-02-15 12:26:09 File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
2024-02-15 12:26:09 self._call(inputs, run_manager=run_manager)
2024-02-15 12:26:09 File "/usr/local/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 155, in _call
2024-02-15 12:26:09 docs = self._get_docs(new_question, inputs, run_manager=_run_manager)
2024-02-15 12:26:09 File "/usr/local/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 317, in _get_docs
2024-02-15 12:26:09 docs = self.retriever.get_relevant_documents(
2024-02-15 12:26:09 File "/usr/local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 224, in get_relevant_documents
2024-02-15 12:26:09 raise e
2024-02-15 12:26:09 File "/usr/local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 217, in get_relevant_documents
2024-02-15 12:26:09 result = self._get_relevant_documents(
2024-02-15 12:26:09 File "/usr/local/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 654, in _get_relevant_documents
2024-02-15 12:26:09 docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
2024-02-15 12:26:09 File "/usr/local/lib/python3.10/site-packages/ddtrace/contrib/trace_utils.py", line 343, in wrapper
2024-02-15 12:26:09 return func(mod, pin, wrapped, instance, args, kwargs)
2024-02-15 12:26:09 File "/usr/local/lib/python3.10/site-packages/ddtrace/contrib/langchain/patch.py", line 624, in traced_similarity_search
2024-02-15 12:26:09 instance._index.configuration.server_variables.get("environment", ""),
2024-02-15 12:26:09 AttributeError: 'Index' object has no attribute 'configuration'
```
### Description
When I run this locally, I get a proper response. When I run it through Docker, I get `'Index' object has no attribute 'configuration'`.
Any thoughts? This has been racking my brain all morning.
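For context, the failing frame is `ddtrace/contrib/langchain/patch.py`, i.e. Datadog's tracer instrumentation rather than LangChain itself, and it appears to expect the pre-3.x Pinecone client's `Index` object. If the Docker image runs under `ddtrace-run` and the local run does not, that would explain the difference. A hedged sketch to confirm (the flag and environment-variable names are assumptions based on ddtrace's usual conventions):
```python
# Sketch only: skip ddtrace's langchain patch while keeping other integrations enabled.
import ddtrace

ddtrace.patch_all(langchain=False)

# Or, in the container environment (assumption):
#   DD_TRACE_LANGCHAIN_ENABLED=false
```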
### System Info
```
langchain==0.1.7
langchain-community==0.0.20
pinecone-client==3.0.2
``` | 'Index' object has no attribute 'configuration' when running my LangChain application in a docker image | https://api.github.com/repos/langchain-ai/langchain/issues/17571/comments | 4 | 2024-02-15T11:33:36Z | 2024-06-01T00:20:28Z | https://github.com/langchain-ai/langchain/issues/17571 | 2,136,301,630 | 17,571 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
`ArgillaCallbackHandler` works with a public Hugging Face Argilla Space but not with a private one, because the class has no `extra_headers` variable: the `ArgillaCallbackHandler.__init__` method does not accept `extra_headers`.
```
class ArgillaCallbackHandler(BaseCallbackHandler):
REPO_URL: str = "https://github.com/argilla-io/argilla"
ISSUES_URL: str = f"{REPO_URL}/issues"
BLOG_URL: str = "https://docs.argilla.io/en/latest/tutorials_and_integrations/integrations/use_argilla_callback_in_langchain.html" # noqa: E501
DEFAULT_API_URL: str = "http://localhost:6900"
def __init__(
self,
dataset_name: str,
workspace_name: Optional[str] = None,
api_url: Optional[str] = None,
api_key: Optional[str] = None,
) -> None:
### Error Message and Stack Trace (if applicable)

`╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /anaconda3/envs/llm/lib/python3.10/site-packages/langchain_community/callbacks/argilla_callback. │
│ py:141 in __init__ │
│ │
│ 138 │ │ │
│ 139 │ │ # Connect to Argilla with the provided credentials, if applicable │
│ 140 │ │ try: │
│ ❱ 141 │ │ │ rg.init(api_key=api_key, api_url=api_url) │
│ 142 │ │ except Exception as e: │
│ 143 │ │ │ raise ConnectionError( │
│ 144 │ │ │ │ f"Could not connect to Argilla with exception: '{e}'.\n" │
│ │
│ /anaconda3/envs/llm/lib/python3.10/site-packages/argilla/client/singleton.py:95 in init │
│ │
│ 92 │ │ >>> headers = {"X-Client-id":"id","X-Secret":"secret"} │
│ 93 │ │ >>> rg.init(api_url="http://localhost:9090", api_key="4AkeAPIk3Y", extra_headers │
│ 94 │ """ │
│ ❱ 95 │ ArgillaSingleton.init( │
│ 96 │ │ api_url=api_url, │
│ 97 │ │ api_key=api_key, │
│ 98 │ │ workspace=workspace, │
│ │
│ /anaconda3/envs/llm/lib/python3.10/site-packages/argilla/client/singleton.py:47 in init │
│ │
│ 44 │ ) -> Argilla: │
│ 45 │ │ cls._INSTANCE = None │
│ 46 │ │ │
│ ❱ 47 │ │ cls._INSTANCE = Argilla( │
│ 48 │ │ │ api_url=api_url, │
│ 49 │ │ │ api_key=api_key, │
│ 50 │ │ │ timeout=timeout, │
│ │
│ /anaconda3/envs/llm/lib/python3.10/site-packages/argilla/client/client.py:164 in __init__ │
│ │
│ 161 │ │ │ httpx_extra_kwargs=httpx_extra_kwargs, │
│ 162 │ │ ) │
│ 163 │ │ │
│ ❱ 164 │ │ self._user = users_api.whoami(client=self.http_client) # .parsed │
│ 165 │ │ │
│ 166 │ │ if not workspace and self._user.username == DEFAULT_USERNAME and DEFAULT_USERNAM │
│ 167 │ │ │ warnings.warn( │
│ │
│ /anaconda3/envs/llm/lib/python3.10/site-packages/argilla/client/sdk/users/api.py:39 in whoami │
│ │
│ 36 │ """ │
│ 37 │ url = "/api/me" │
│ 38 │ │
│ ❱ 39 │ response = client.get(url) │
│ 40 │ return UserModel(**response) │
│ 41 │
│ 42 │
│ │
│ /anaconda3/envs/llm/lib/python3.10/site-packages/argilla/client/sdk/client.py:124 in inner │
│ │
│ 121 │ │ @functools.wraps(func) │
│ 122 │ │ def inner(self, *args, **kwargs): │
│ 123 │ │ │ try: │
│ ❱ 124 │ │ │ │ result = func(self, *args, **kwargs) │
│ 125 │ │ │ │ return result │
│ 126 │ │ │ except httpx.ConnectError as err: │
│ 127 │ │ │ │ err_str = f"Your Api endpoint at {self.base_url} is not available or not │
│ │
│ /anaconda3/envs/llm/lib/python3.10/site-packages/argilla/client/sdk/client.py:141 in get │
│ │
│ 138 │ │ │ *args, │
│ 139 │ │ │ **kwargs, │
│ 140 │ │ ) │
│ ❱ 141 │ │ return build_raw_response(response).parsed │
│ 142 │ │
│ 143 │ @with_httpx_error_handler │
│ 144 │ def patch(self, path: str, *args, **kwargs): │
│ │
│ /anaconda3/envs/llm/lib/python3.10/site-packages/argilla/client/sdk/_helpers.py:25 in │
│ build_raw_response │
│ │
│ 22 │
│ 23 │
│ 24 def build_raw_response(response: httpx.Response) -> Response[Union[Dict[str, Any], Error │
│ ❱ 25 │ return build_typed_response(response) │
│ 26 │
│ 27 │
│ 28 ResponseType = TypeVar("ResponseType") │
│ │
│ /anaconda3/envs/llm/lib/python3.10/site-packages/argilla/client/sdk/_helpers.py:34 in │
│ build_typed_response │
│ │
│ 31 def build_typed_response( │
│ 32 │ response: httpx.Response, response_type_class: Optional[Type[ResponseType]] = None │
│ 33 ) -> Response[Union[ResponseType, ErrorMessage, HTTPValidationError]]: │
│ ❱ 34 │ parsed_response = check_response(response, expected_response=response_type_class) │
│ 35 │ if response_type_class: │
│ 36 │ │ parsed_response = response_type_class(**parsed_response) │
│ 37 │ return Response( │
│ │
│ /anaconda3/envs/llm/lib/python3.10/site-packages/argilla/client/sdk/_helpers.py:63 in │
│ check_response │
│ │
│ 60 │ │ │ message=message, │
│ 61 │ │ │ response=response.content, │
│ 62 │ │ ) │
│ ❱ 63 │ handle_response_error(response, **kwargs) │
│ 64 │
│ │
│ /anaconda3/envs/llm/lib/python3.10/site-packages/argilla/client/sdk/commons/errors_handler.py:63 │
│ in handle_response_error │
│ │
│ 60 │ │ error_type = GenericApiError │
│ 61 │ else: │
│ 62 │ │ raise HttpResponseError(response=response) │
│ ❱ 63 │ raise error_type(**error_args) │
│ 64 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
NotFoundApiError: Argilla server returned an error with http status: 404. Error details: {'response': '<!DOCTYPE
html>\n<html class="">\n<head>\n <meta charset="utf-8"/>\n <meta\n name="viewport"\n
content="width=device-width, initial-scale=1.0, user-scalable=no"\n />\n <meta\n
name="description"\n content="We’re on a journey to advance and democratize artificial intelligence
through open source and open science."\n />\n <meta property="fb:app_id" content="1321688464574422"/>\n
<meta name="twitter:card" content="summary_large_image"/>\n <meta name="twitter:site" content="@huggingface"/>\n
<meta\n property="og:title"\n content="Hugging Face – The AI community building the
future."\n />\n <meta property="og:type" content="website"/>\n\n <title>Hugging Face – The AI community
building the future.</title>\n <style>\n body {\n margin: 0;\n }\n\n main {\n
background-color: white;\n min-height: 100vh;\n text-align: center;\n font-family:
Source Sans Pro, ui-sans-serif, system-ui, -apple-system,\n BlinkMacSystemFont, Segoe UI, Roboto,
Helvetica Neue, Arial, Noto Sans,\n sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,\n
Noto Color Emoji;\n }\n\n img {\n width: 6rem;\n height: 6rem;\n
margin: 7rem 1rem 1rem;\n }\n\n h1 {\n font-size: 3.75rem;\n line-height: 1;\n
color: rgba(31, 41, 55, 1);\n font-weight: 700;\n box-sizing: border-box;\n
margin: 0 auto;\n }\n\n p {\n color: rgba(107, 114, 128, 1);\n font-size:
1.125rem;\n line-height: 1.75rem;\n max-width: 28rem;\n box-sizing: border-box;\n
margin: 0 auto;\n }\n\n .dark main {\n background-color: rgb(11, 15, 25);\n }\n\n
.dark h1 {\n color: rgb(209, 213, 219);\n }\n\n .dark p {\n color: rgb(156,
163, 175);\n }\n </style>\n <script>\n // On page load or when changing themes, best to add
inline in `head` to avoid FOUC\n const key = "_tb_global_settings";\n let theme =
window.matchMedia("(prefers-color-scheme: dark)").matches\n ? "dark"\n : "light";\n
try {\n const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;\n if
(storageTheme) {\n theme = storageTheme === "dark" ? "dark" : "light";\n }\n }
catch (e) {\n }\n if (theme === "dark") {\n
document.documentElement.classList.add("dark");\n } else {\n
document.documentElement.classList.remove("dark");\n }\n </script>\n</head>\n\n<body>\n<main>\n <img\n
src="https://huggingface.co/front/assets/huggingface_logo.svg"\n alt=""\n />\n <div>\n
<h1>404</h1>\n <p>Sorry, we can’t find the page you are looking for.</p>\n
</div>\n</main>\n</body>\n</html>\n'}
The above exception was the direct cause of the following exception:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <module>:2 │
│ │
│ 1 from langchain.callbacks import ArgillaCallbackHandler │
│ ❱ 2 argilla_callback = ArgillaCallbackHandler( │
│ 3 │ dataset_name="slack-search", │
│ 4 │ api_url="https://aniruddhac-argillaslack.hf.space", │
│ 5 │ api_key="owner.apikey", │
│ │
│ /anaconda3/envs/llm/lib/python3.10/site-packages/langchain_community/callbacks/argilla_callback. │
│ py:143 in __init__ │
│ │
│ 140 │ │ try: │
│ 141 │ │ │ rg.init(api_key=api_key, api_url=api_url) │
│ 142 │ │ except Exception as e: │
│ ❱ 143 │ │ │ raise ConnectionError( │
│ 144 │ │ │ │ f"Could not connect to Argilla with exception: '{e}'.\n" │
│ 145 │ │ │ │ "Please check your `api_key` and `api_url`, and make sure that " │
│ 146 │ │ │ │ "the Argilla server is up and running. If the problem persists " │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ConnectionError: Could not connect to Argilla with exception: 'Argilla server returned an error with http status:
404. Error details: {'response': '<!DOCTYPE html>\n<html class="">\n<head>\n <meta charset="utf-8"/>\n
<meta\n name="viewport"\n content="width=device-width, initial-scale=1.0, user-scalable=no"\n
/>\n <meta\n name="description"\n content="We’re on a journey to advance and democratize
artificial intelligence through open source and open science."\n />\n <meta property="fb:app_id"
content="1321688464574422"/>\n <meta name="twitter:card" content="summary_large_image"/>\n <meta
name="twitter:site" content="@huggingface"/>\n <meta\n property="og:title"\n
content="Hugging Face – The AI community building the future."\n />\n <meta property="og:type"
content="website"/>\n\n <title>Hugging Face – The AI community building the future.</title>\n <style>\n
body {\n margin: 0;\n }\n\n main {\n background-color: white;\n
min-height: 100vh;\n text-align: center;\n font-family: Source Sans Pro, ui-sans-serif,
system-ui, -apple-system,\n BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,\n
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,\n Noto Color Emoji;\n }\n\n
img {\n width: 6rem;\n height: 6rem;\n margin: 7rem 1rem 1rem;\n }\n\n
h1 {\n font-size: 3.75rem;\n line-height: 1;\n color: rgba(31, 41, 55, 1);\n
font-weight: 700;\n box-sizing: border-box;\n margin: 0 auto;\n }\n\n p {\n
color: rgba(107, 114, 128, 1);\n font-size: 1.125rem;\n line-height: 1.75rem;\n
max-width: 28rem;\n box-sizing: border-box;\n margin: 0 auto;\n }\n\n .dark
main {\n background-color: rgb(11, 15, 25);\n }\n\n .dark h1 {\n color:
rgb(209, 213, 219);\n }\n\n .dark p {\n color: rgb(156, 163, 175);\n }\n
</style>\n <script>\n // On page load or when changing themes, best to add inline in `head` to avoid
FOUC\n const key = "_tb_global_settings";\n let theme = window.matchMedia("(prefers-color-scheme:
dark)").matches\n ? "dark"\n : "light";\n try {\n const storageTheme =
JSON.parse(window.localStorage.getItem(key)).theme;\n if (storageTheme) {\n theme =
storageTheme === "dark" ? "dark" : "light";\n }\n } catch (e) {\n }\n if (theme ===
"dark") {\n document.documentElement.classList.add("dark");\n } else {\n
document.documentElement.classList.remove("dark");\n }\n </script>\n</head>\n\n<body>\n<main>\n <img\n
src="https://huggingface.co/front/assets/huggingface_logo.svg"\n alt=""\n />\n <div>\n
<h1>404</h1>\n <p>Sorry, we can’t find the page you are looking for.</p>\n
</div>\n</main>\n</body>\n</html>\n'}'.
Please check your `api_key` and `api_url`, and make sure that the Argilla server is up and running. If the problem
persists please report it to https://github.com/argilla-io/argilla/issues as an `integration` issue.`

### Description

We should have the code below integrated at https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/callbacks/argilla_callback.py
## Change 1:

Below is the code to use, with the changes from the snapshots above:
```python
    def __init__(
        self,
        dataset_name: str,
        workspace_name: Optional[str] = None,
        api_url: Optional[str] = None,
        api_key: Optional[str] = None,
        extra_headers: Optional[Dict[str, str]] = None,  # dict of header names to values, matching rg.init
    ) -> None:
```
## Change 2:

```python
        # Connect to Argilla with the provided credentials, if applicable
        try:
            rg.init(api_key=api_key, api_url=api_url, extra_headers=extra_headers)
        except Exception as e:
            raise ConnectionError(
                f"Could not connect to Argilla with exception: '{e}'.\n"
                "Please check your `api_key` and `api_url`, and make sure that "
                "the Argilla server is up and running. If the problem persists "
                f"please report it to {self.ISSUES_URL} as an `integration` issue."
            ) from e
```
### System Info

Platform: Mac


| ArgillaCallbackHandler is working for Hugging Face Argilla Space for public not private as the class don't have variable extra_headers. | https://api.github.com/repos/langchain-ai/langchain/issues/17562/comments | 2 | 2024-02-15T06:02:22Z | 2024-06-08T16:09:55Z | https://github.com/langchain-ai/langchain/issues/17562 | 2,135,715,147 | 17,562 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
async def partial_test(self):
    prompt_template = "Tell me details about {name}. The output should be in {lang}"
    llm = LLMSelector(self.model).get_language_model()
    prompt = ChatPromptTemplate.from_messages([
        HumanMessagePromptTemplate.from_template(prompt_template, partial_variables={"lang": "Spanish"}),
    ])
    chain = prompt | llm | StrOutputParser()
    result = await chain.ainvoke({'name': 'Open AI'})
    print(result)
```
### Error Message and Stack Trace (if applicable)
Error: `Input to ChatPromptTemplate is missing variables {'lang'}. Expected: ['lang', 'name'] Received: ['name']\"`
### Description
This works with `PromptTemplate`, and it also worked with `ChatPromptTemplate` in a previous version.
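For comparison, a minimal sketch of the `PromptTemplate` variant that still works; `llm` is assumed to be the same model object created in the example above:

```python
from langchain.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Equivalent chain built on PromptTemplate with a partial variable (sketch).
prompt = PromptTemplate.from_template(
    "Tell me details about {name}. The output should be in {lang}"
).partial(lang="Spanish")

chain = prompt | llm | StrOutputParser()
result = chain.invoke({"name": "Open AI"})
print(result)
```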
### System Info
langchain v0.1.7 | partial_variables not working with ChatPromptTemplate (langchain v0.1.9) | https://api.github.com/repos/langchain-ai/langchain/issues/17560/comments | 10 | 2024-02-15T05:20:53Z | 2024-06-03T23:22:43Z | https://github.com/langchain-ai/langchain/issues/17560 | 2,135,668,512 | 17,560 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
Below is the code that the documentation shows for `get_prompts()`:
```
def get_prompts(
    self, config: Optional[RunnableConfig] = None
) -> List[BasePromptTemplate]:
    from langchain_core.prompts.base import BasePromptTemplate

    prompts = []
    for _, node in self.get_graph(config=config).nodes.items():
        if isinstance(node.data, BasePromptTemplate):
            prompts.append(node.data)
    return prompts
```
I just want to know whether the above code returns the template that is actually used inside SelfQueryRetriever, or the prompts we've defined ourselves. When I tried to run it, it returned an empty list. | unable to retrieve prompt template which is already pre-defined for SelfQueryRetriever. | https://api.github.com/repos/langchain-ai/langchain/issues/17558/comments | 1 | 2024-02-15T05:10:10Z | 2024-02-16T17:12:55Z | https://github.com/langchain-ai/langchain/issues/17558 | 2,135,657,667 | 17,558
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Add a good README to each of the `libs/partners` packages. See [langchain-google-vertexai](https://github.com/langchain-ai/langchain/blob/master/libs/partners/google-vertexai/README.md) for a good reference. | docs: Make sure all partner packages have README | https://api.github.com/repos/langchain-ai/langchain/issues/17545/comments | 6 | 2024-02-14T19:50:58Z | 2024-04-11T21:09:21Z | https://github.com/langchain-ai/langchain/issues/17545 | 2,135,088,749 | 17,545 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
``` python
class Chat:
    def __init__(self):
        self.memory = ConversationBufferMemory(return_messages=True)
        self.router = self.init_router()
        self.full_chain = {"router": self.router, "question": lambda x: x["question"]} | self.create_branch()
        # self.initialiser = self.init_router()

    def init_followup(self):
        """
        Initialize the follow up chain.
        :return: Runnable
        """
        return (
            RunnablePassthrough.assign(
                history=RunnableLambda(self.memory.load_memory_variables) | itemgetter("history")
            )
            | prompt
            | instant
        )

    def init_router(self):
        """
        Initialize the router.
        :return: Runnable
        """
        chain = (master | instant)
        return RunnableWithMessageHistory(
            chain,
            self.get_by_session_id,
            input_messages_key="question",
            history_messages_key="history",
        )

    def get_by_session_id(self, session_id: str) -> BaseChatMessageHistory:
        if session_id not in store:
            store[session_id] = self.memory
        return store[session_id]

    def create_branch(self):
        branch = RunnableBranch(
            (lambda x: "optimization" in x["router"].lower(), optimization),
            (lambda x: "sustainability" in x["router"].lower(), sustainability),
            (lambda x: "webpage" in x["router"].lower(), web),
            (lambda x: "followup" in x["router"].lower(), self.init_followup()),
            other,
        )
        return branch

    def invoke_model(self, query) -> None:
        result = self.full_chain.invoke({"question": query}, config={"configurable": {"session_id": "foo"}})
        self._update_memory(query, result)
        return result

    def _update_memory(self, inputs, outputs):
        inputs = {"query": inputs}
        if hasattr(outputs, 'content'):
            self.memory.save_context(inputs, {"output": outputs.content})
        else:
            self.memory.save_context(inputs, {"output": outputs})

    def clear_memory(self):
        self.memory.clear()
```
### Error Message and Stack Trace (if applicable)
```python
Traceback (most recent call last):
File "/src/models/model.py", line 199, in <module>
person = Chat()
^^^^^^
File "/src/models/model.py", line 136, in __init__
self.router = self.init_router()
^^^^^^^^^^^^^^^^^^
File "/src/models/model.py", line 157, in init_router
chain = (master | instant)
~~~~~~~^~~~~~~~~
File "/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 436, in __ror__
return RunnableSequence(coerce_to_runnable(other), self)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4370, in coerce_to_runnable
raise TypeError(
TypeError: Expected a Runnable, callable or dict.Instead got an unsupported type: <class 'str'>
### Description
I am attempting to encapsulate all of my logic within a custom class. I am creating a router chain using LCEL. Upon instantiating the class, I get an error stating that a runnable, callable, or dict was expected but it received a string.
**Initialisation of the Chat Object**
When attempting to initialise the underlying router chain (Line 136):
```Python
self.router = self.init_router()
```
_The init_router() function (lines 151-162) is based on the LangChain web documentation_:
```Python
def init_router(self):
    """
    Initialize the router.
    :return: Runnable
    """
    chain = (master | instant)
    return RunnableWithMessageHistory(
        chain,
        self.get_by_session_id,
        input_messages_key="question",
        history_messages_key="history",
    )
```
I seem to experience a strange behaviour: if I replace "self.router" with the same chain defined outside of my custom class, the solution works. However, when I try to instantiate the router chain within a method of my class, I get the error. I suspect this has something to do with LCEL, but I would like some clarification.
### System Info
```
langchain==0.1.7
langchain-community==0.0.20
langchain-core==0.1.22
langchain-experimental==0.0.50
langchainhub==0.1.14
platform: Mac OS 14.1.1
python version: 3.11.4 | Custom class complains of chain being of type Str | https://api.github.com/repos/langchain-ai/langchain/issues/17541/comments | 0 | 2024-02-14T18:57:04Z | 2024-05-22T16:09:22Z | https://github.com/langchain-ai/langchain/issues/17541 | 2,134,989,526 | 17,541 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Try the following code:
```python
from langchain.text_splitter import Language
from langchain_community.document_loaders.parsers import LanguageParser
parser=LanguageParser(language=Language.PYTHON)
```
### Error Message and Stack Trace (if applicable)
```sh
Traceback (most recent call last):
File "/development/test.py", line 4, in <module>
parser=LanguageParser(language=Language.PYTHON)
File "/development/env/lib/python3.9/site-packages/langchain_community/document_loaders/parsers/language/language_parser.py", line 162, in __init__
raise Exception(f"No parser available for {language}")
Exception: No parser available for python
```
### Description
Lines 30 and 33 of `language_parser.py` are `from langchain.langchain.text_splitter import Language`, which causes an import error. I think these lines should just be `from langchain.text_splitter import Language`.
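If those imports are corrected as suggested, the reproduction from the example section would be expected to work again (a sketch, assuming no other parser dependencies are missing):

```python
from langchain.text_splitter import Language
from langchain_community.document_loaders.parsers import LanguageParser

# With the corrected import inside language_parser.py, this should no longer raise
# "No parser available for python".
parser = LanguageParser(language=Language.PYTHON)
```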
### System Info
langchain==0.1.7
langchain-community==0.0.20
langchain-core==0.1.23
langchain-openai==0.0.6
langchainhub==0.1.14
Platform: Mac
Python 3.9.6 | Import error in language_parser.py during "from langchain.langchain.text_splitter import Language" | https://api.github.com/repos/langchain-ai/langchain/issues/17536/comments | 3 | 2024-02-14T17:14:19Z | 2024-02-15T09:42:58Z | https://github.com/langchain-ai/langchain/issues/17536 | 2,134,810,211 | 17,536 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Ollama works properly
```python
from langchain_community.llms import Ollama
llm = Ollama(model="llama2:latest")
llm.invoke("Tell me a joke")
```
ChatOllama is not working:
```python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import ChatOllama
from langchain.schema import HumanMessage
chat_model = ChatOllama(model="llama2:latest",callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]))
messages = [HumanMessage(content="Tell me a joke")]
chat_model_response = chat_model(messages)
```
### Error Message and Stack Trace (if applicable)
```sh
---------------------------------------------------------------------------
OllamaEndpointNotFoundError Traceback (most recent call last)
Cell In[6], line 8
6 chat_model = ChatOllama(model="llama2:latest",callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]))
7 messages = [HumanMessage(content="Tell me a joke")]
----> 8 chat_model_response = chat_model(messages)
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:145](http://localhost:8888/home/oiaagent/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/_api/deprecation.py#line=144), in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
143 warned = True
144 emit_warning()
--> 145 return wrapped(*args, **kwargs)
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:691](http://localhost:8888/home/oiaagent/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py#line=690), in BaseChatModel.__call__(self, messages, stop, callbacks, **kwargs)
683 @deprecated("0.1.7", alternative="invoke", removal="0.2.0")
684 def __call__(
685 self,
(...)
689 **kwargs: Any,
690 ) -> BaseMessage:
--> 691 generation = self.generate(
692 [messages], stop=stop, callbacks=callbacks, **kwargs
693 ).generations[0][0]
694 if isinstance(generation, ChatGeneration):
695 return generation.message
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:408](http://localhost:8888/home/oiaagent/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py#line=407), in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
406 if run_managers:
407 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 408 raise e
409 flattened_outputs = [
410 LLMResult(generations=[res.generations], llm_output=res.llm_output)
411 for res in results
412 ]
413 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:398](http://localhost:8888/home/oiaagent/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py#line=397), in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
395 for i, m in enumerate(messages):
396 try:
397 results.append(
--> 398 self._generate_with_cache(
399 m,
400 stop=stop,
401 run_manager=run_managers[i] if run_managers else None,
402 **kwargs,
403 )
404 )
405 except BaseException as e:
406 if run_managers:
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:577](), in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
573 raise ValueError(
574 "Asked to cache, but no cache found at `langchain.cache`."
575 )
576 if new_arg_supported:
--> 577 return self._generate(
578 messages, stop=stop, run_manager=run_manager, **kwargs
579 )
580 else:
581 return self._generate(messages, stop=stop, **kwargs)
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py:250](), in ChatOllama._generate(self, messages, stop, run_manager, **kwargs)
226 def _generate(
227 self,
228 messages: List[BaseMessage],
(...)
231 **kwargs: Any,
232 ) -> ChatResult:
233 """Call out to Ollama's generate endpoint.
234
235 Args:
(...)
247 ])
248 """
--> 250 final_chunk = self._chat_stream_with_aggregation(
251 messages,
252 stop=stop,
253 run_manager=run_manager,
254 verbose=self.verbose,
255 **kwargs,
256 )
257 chat_generation = ChatGeneration(
258 message=AIMessage(content=final_chunk.text),
259 generation_info=final_chunk.generation_info,
260 )
261 return ChatResult(generations=[chat_generation])
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py:183](http://localhost:8888/home/oiaagent/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py#line=182), in ChatOllama._chat_stream_with_aggregation(self, messages, stop, run_manager, verbose, **kwargs)
174 def _chat_stream_with_aggregation(
175 self,
176 messages: List[BaseMessage],
(...)
180 **kwargs: Any,
181 ) -> ChatGenerationChunk:
182 final_chunk: Optional[ChatGenerationChunk] = None
--> 183 for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
184 if stream_resp:
185 chunk = _chat_stream_response_to_chat_generation_chunk(stream_resp)
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py:156](), in ChatOllama._create_chat_stream(self, messages, stop, **kwargs)
147 def _create_chat_stream(
148 self,
149 messages: List[BaseMessage],
150 stop: Optional[List[str]] = None,
151 **kwargs: Any,
152 ) -> Iterator[str]:
153 payload = {
154 "messages": self._convert_messages_to_ollama_messages(messages),
155 }
--> 156 yield from self._create_stream(
157 payload=payload, stop=stop, api_url=f"{self.base_url}/api/chat/", **kwargs
158 )
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/llms/ollama.py:233](), in _OllamaCommon._create_stream(self, api_url, payload, stop, **kwargs)
231 if response.status_code != 200:
232 if response.status_code == 404:
--> 233 raise OllamaEndpointNotFoundError(
234 "Ollama call failed with status code 404. "
235 "Maybe your model is not found "
236 f"and you should pull the model with `ollama pull {self.model}`."
237 )
238 else:
239 optional_detail = response.json().get("error")
OllamaEndpointNotFoundError: Ollama call failed with status code 404. Maybe your model is not found and you should pull the model with `ollama pull llama2:latest`.
```
### Description
Hello,
I am still having the same issue reported in [15147](https://github.com/langchain-ai/langchain/issues/15147) . I tried the same things BharathKumarAI did and even updated ollama, but it is still showing the same error.
### System Info
langchain 0.0.322
langsmith 0.0.51
python 3.11.7
ubuntu 20.04.6 LTS
ollama list | OllamaEndpointNotFoundError: Ollama call failed with status code 404. Maybe your model is not found and you should pull the model with `ollama pull llama2`. | https://api.github.com/repos/langchain-ai/langchain/issues/17533/comments | 6 | 2024-02-14T15:55:49Z | 2024-08-02T08:49:04Z | https://github.com/langchain-ai/langchain/issues/17533 | 2,134,667,479 | 17,533 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from typing import Set, Literal

from pydantic import BaseModel  # assumed import; it was missing from the original snippet

from langchain_core.utils.function_calling import convert_to_openai_function


class UserInfos(BaseModel):
    "general information about a user"

    gender: Literal["male", "female", "other"]
    preferences: Set[Literal["games", "books"]]


# Produces the function definition compared below.
print(convert_to_openai_function(UserInfos))
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The resulting function is not well defined and missing some properties.
## Output
```json
{
"name": "UserInfos",
"description": "general information about a user",
"parameters": {
"type": "object",
"properties": {
"gender": {
"enum": [
"male",
"female",
"other"
],
"type": "string"
}
},
"required": [
"gender",
"preferences"
]
}
}
```
## Excepted
**NOTE**: This is produced by the deprecated `convert_pydantic_to_openai_function` function.
```json
{
"name": "UserInfos",
"description": "general information about a user",
"parameters": {
"properties": {
"gender": {
"enum": [
"male",
"female",
"other"
],
"type": "string"
},
"preferences": {
"items": {
"enum": [
"games",
"books"
],
"type": "string"
},
"type": "array",
"uniqueItems": true
}
},
"required":[
"gender",
"preferences"
],
"type":"object"
}
}
```
### System Info
System Information
------------------
> OS: Linux
> OS Version: #40~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 16 10:53:04 UTC 2
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.0.87
> langchain_openai: 0.0.6
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | convert_to_openai_function drop some (nested?) properties | https://api.github.com/repos/langchain-ai/langchain/issues/17531/comments | 5 | 2024-02-14T14:53:30Z | 2024-05-22T16:09:17Z | https://github.com/langchain-ai/langchain/issues/17531 | 2,134,532,581 | 17,531 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
llm = BedrockChat(
    credentials_profile_name="default",
    model_id="anthropic.claude-instant-v1",
    streaming=True,
    model_kwargs={"temperature": 0.1},
)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

tools = load_tools(["google-search"], llm)
image_generation_tool = StructuredTool.from_function(
    func=image_generation,
    name="image_generator",
    description="Use this tool to generate images for the user",
    return_direct=True,
)
tools.append(image_generation_tool)

agent = create_json_chat_agent(llm, tools, prompt)
history = DynamoDBChatMessageHistory(table_name="LangchainSessionTable", session_id=session_id)
memory = ConversationBufferMemory(chat_memory=history, memory_key="history", return_messages=True)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, handle_parsing_errors=True, memory=memory)
```
### Error Message and Stack Trace (if applicable)
> Finished chain.
2 validation errors for AIMessage
content
str type expected (type=type_error.str)
content
value is not a valid list (type=type_error.list)
### Description
I'm using:
1. Langchain 0.5.1
2. Amazon Bedrock / Anthropic Claude Instant 1.2
3. Amazon DynamoDB for chat history
4. Conversation memory buffer
5. A tool to create an image from Bedrock Stability AI
The agent generates the image, but when it tries to add it to the conversation history, I get an error.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #15~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Jan 12 18:54:30 UTC 2
> Python Version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.1.21
> langchain: 0.1.5
> langchain_community: 0.0.19
> langsmith: 0.0.87
> langchain_openai: 0.0.5
> langserve: 0.0.41
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | Error with multi-modal chat and agent memory | https://api.github.com/repos/langchain-ai/langchain/issues/17529/comments | 3 | 2024-02-14T14:33:02Z | 2024-06-01T00:07:30Z | https://github.com/langchain-ai/langchain/issues/17529 | 2,134,489,640 | 17,529 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.document_loaders.athena import AthenaLoader
database_name = "database"
s3_output_path = "s3://bucket-no-prefix"
query="""SELECT
CAST(extract(hour FROM current_timestamp) AS INTEGER) AS current_hour,
CAST(extract(minute FROM current_timestamp) AS INTEGER) AS current_minute,
CAST(extract(second FROM current_timestamp) AS INTEGER) AS current_second;
"""
profile_name = "AdministratorAccess"
loader = AthenaLoader(
query=query,
database=database_name,
s3_output_uri=s3_output_path,
profile_name=profile_name,
)
documents = loader.load()
print(documents)
```
### Error Message and Stack Trace (if applicable)
NoSuchKey: An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist
### Description
The Athena Loader errors when the results S3 bucket URI has no prefix. The Loader instance call results in a "NoSuchKey: An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist." error.
If s3_output_path contains a prefix like:
```python
s3_output_path = "s3://bucket-with-prefix/prefix"
```
Execution works without an error.
## Suggested solution
Modify:
```python
key = "/".join(tokens[1:]) + "/" + query_execution_id + ".csv"
```
to
```python
key = "/".join(tokens[1:]) + ("/" if tokens[1:] else "") + query_execution_id + ".csv"
```
https://github.com/langchain-ai/langchain/blob/9e8a3fc4fff8e20ab5d1f113515ded14906eb6f3/libs/community/langchain_community/document_loaders/athena.py#L128
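To illustrate the difference, here is a small sketch. It assumes `tokens[1:]` holds the key-prefix parts parsed from the output URI (empty when the bucket URI has no prefix), which is what the quoted line operates on; the execution id is a made-up example value:

```python
query_execution_id = "abc123"  # made-up example value

for prefix_tokens in ([], ["prefix"]):  # "s3://bucket-no-prefix" vs "s3://bucket-with-prefix/prefix"
    current = "/".join(prefix_tokens) + "/" + query_execution_id + ".csv"
    proposed = "/".join(prefix_tokens) + ("/" if prefix_tokens else "") + query_execution_id + ".csv"
    print(repr(current), "->", repr(proposed))

# no-prefix case:   '/abc123.csv'       -> 'abc123.csv'        (leading slash is what triggers NoSuchKey)
# with-prefix case: 'prefix/abc123.csv' -> 'prefix/abc123.csv' (unchanged)
```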
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 22.6.0: Fri Sep 15 13:41:30 PDT 2023; root:xnu-8796.141.3.700.8~1/RELEASE_ARM64_T8103
> Python Version: 3.9.9 (main, Jan 9 2023, 11:42:03)
[Clang 14.0.0 (clang-1400.0.29.102)]
Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.0.87
> langchain_openai: 0.0.6
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Athena Loader errors when result s3 bucket uri has no prefix | https://api.github.com/repos/langchain-ai/langchain/issues/17525/comments | 3 | 2024-02-14T12:45:19Z | 2024-05-22T16:09:07Z | https://github.com/langchain-ai/langchain/issues/17525 | 2,134,276,738 | 17,525 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
Below is the code:
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    use_original_query=False,
    enable_limit=True,
    verbose=True
)
retriever.get_prompts()
```
And `retriever.get_prompts()` is returning `[]`, i.e. nothing. But SelfQueryRetriever has a built-in template, right, which has prompts? How can I see and change that template? | unable to retrieve the get_prompts for SelfQueryRetriever? | https://api.github.com/repos/langchain-ai/langchain/issues/17524/comments | 1 | 2024-02-14T12:07:25Z | 2024-02-14T14:19:07Z | https://github.com/langchain-ai/langchain/issues/17524 | 2,134,212,565 | 17,524
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The code and steps on https://python.langchain.com/docs/integrations/providers/modal
### Error Message and Stack Trace (if applicable)
```
File "/[redacted]/langchain-modal-test.py", line 96, in call_api
'output': chain.invoke({"input": prompt, "format_instructions": parser.get_format_instructions()}).content
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/[redacted]/python3.12/site-packages/langchain_core/runnables/base.py", line 2053, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "/[redacted]/python3.12/site-packages/langchain_core/language_models/llms.py", line 235, in invoke
self.generate_prompt(
File "/[redacted]/python3.12/site-packages/langchain_core/language_models/llms.py", line 530, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/[redacted]/python3.12/site-packages/langchain_core/language_models/llms.py", line 703, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^
File "/[redacted]/python3.12/site-packages/langchain_core/language_models/llms.py", line 567, in _generate_helper
raise e
File "/[redacted]/python3.12/site-packages/langchain_core/language_models/llms.py", line 554, in _generate_helper
self._generate(
File "/[redacted]/python3.12/site-packages/langchain_core/language_models/llms.py", line 1139, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/[redacted]/python3.12/site-packages/langchain_community/llms/modal.py", line 95, in _call
text = response_json["prompt"]
^^^^^^^^^^^^^
UnboundLocalError: cannot access local variable 'response_json' where it is not associated with a value
```
### Description
* I'm trying to use the Modal integration as the documentation describes: I published a Modal endpoint to prompt an LLM, and I can see it is receiving the query and generating the response, but the chain is failing. I believe the response is incorrectly parsed by the Modal community package.
### System Info
System Information
------------------
> OS: Linux
> Python Version: 3.12.1
Package Information
-------------------
> langchain_core: 0.1.22
> langchain: 0.1.6
> langchain_community: 0.0.19
> langsmith: 0.0.87
> langchain_openai: 0.0.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| Community integration for Modal consistently fails | https://api.github.com/repos/langchain-ai/langchain/issues/17522/comments | 2 | 2024-02-14T11:49:16Z | 2024-06-01T00:08:33Z | https://github.com/langchain-ai/langchain/issues/17522 | 2,134,182,644 | 17,522 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
Below is the code:
```
from langchain.globals import set_debug
set_debug(True)
query = "what's the one thing that cricketers to do while they're bowling?"
structured_query = StructuredQuery(query=query_template, limit=5)
# print(structured_query)
docs = retriever.get_relevant_documents(structured_query)
```
When I looked at the debug output, it had completely changed the query to just 'cricket' instead of the complete query I gave, as shown below:
```
[chain/end] [1:retriever:Retriever > 2:chain:RunnableSequence > 5:parser:StructuredQueryOutputParser] [1ms] Exiting Parser run with output:
{
"lc": 1,
"type": "not_implemented",
"id": [
"langchain",
"chains",
"query_constructor",
"ir",
"StructuredQuery"
],
"repr": "StructuredQuery(query='Cricket', filter=None, limit=5)"
}
[chain/end] [1:retriever:Retriever > 2:chain:RunnableSequence] [476ms] Exiting Chain run with output:
[outputs]
```
i want to change the prompt which is already present inside the SelfQueryRetriever. Can you help me with that code? | how to set custom prompt for SelfQueryRetriever? | https://api.github.com/repos/langchain-ai/langchain/issues/17521/comments | 2 | 2024-02-14T11:42:21Z | 2024-03-16T10:58:40Z | https://github.com/langchain-ai/langchain/issues/17521 | 2,134,170,530 | 17,521 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain.agents import initialize_agent, AgentType
### Error Message and Stack Trace (if applicable)
```py
File "<some file>", line 7, in <module>
from langchain.agents import initialize_agent, AgentType
File "/home/rafail/.pyenv/versions/develop/lib/python3.8/site-packages/langchain/agents/__init__.py", line 34, in <module>
from langchain_community.agent_toolkits import (
File "/home/rafail/.pyenv/versions/develop/lib/python3.8/site-packages/langchain_community/agent_toolkits/__init__.py", line 45, in <module>
from langchain_community.agent_toolkits.sql.base import create_sql_agent
File "/home/rafail/.pyenv/versions/develop/lib/python3.8/site-packages/langchain_community/agent_toolkits/sql/base.py", line 29, in <module>
from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
File "/home/rafail/.pyenv/versions/develop/lib/python3.8/site-packages/langchain_community/agent_toolkits/sql/toolkit.py", line 9, in <module>
from langchain_community.tools.sql_database.tool import (
File "/home/rafail/.pyenv/versions/develop/lib/python3.8/site-packages/langchain_community/tools/sql_database/tool.py", line 5, in <module>
from sqlalchemy import Result
ImportError: cannot import name 'Result' from 'sqlalchemy' (/home/rafail/.pyenv/versions/develop/lib/python3.8/site-packages/sqlalchemy/__init__.py)
```
### Description
Importing langchain causes this issue as "Result" was not directly importable in versions of SQLAlchemy < 2.0.0.
To resolve, this line could be changed to:
`from sqlalchemy.engine import Result`
https://github.com/langchain-ai/langchain/blob/9e8a3fc4fff8e20ab5d1f113515ded14906eb6f3/libs/community/langchain_community/tools/sql_database/tool.py#L5C1-L5C5
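A version-tolerant variant of that import (a sketch only; it assumes both code paths expose the same `Result` class, which matches the suggestion above):

```python
try:
    from sqlalchemy import Result  # works on SQLAlchemy >= 2.0
except ImportError:
    from sqlalchemy.engine import Result  # fallback for SQLAlchemy 1.4.x
```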
### System Info
langchain==0.1.6
langchain-community==0.0.20
langchain-core==0.1.23
langchainplus-sdk==0.0.20 | Compatibility issue with SQLAlchemy<2 | https://api.github.com/repos/langchain-ai/langchain/issues/17519/comments | 6 | 2024-02-14T11:31:19Z | 2024-06-01T00:08:33Z | https://github.com/langchain-ai/langchain/issues/17519 | 2,134,148,532 | 17,519 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
I want to change a prompt that is present inside SelfQueryRetrieval, and I wrote the code below:
```
from langchain.chains.retrieval_qa import SelfQueryRetrieval
from langchain.prompts import PromptTemplate


class CustomSelfQueryRetrieval(SelfQueryRetrieval):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.prompt = PromptTemplate(
            template="Your new prompt template here.\n\n{context}\n\nQuestion: {question}\nHelpful Answer:",
            input_variables=["context", "question"]
        )
```
Now, can you give me an example of how to use the above class? Is it used the same way as SelfQueryRetrieval? Can you give a code example? | i want to change the prompt which is present inside the SelfQueryRetrieval | https://api.github.com/repos/langchain-ai/langchain/issues/17518/comments | 3 | 2024-02-14T11:22:07Z | 2024-02-14T14:19:06Z | https://github.com/langchain-ai/langchain/issues/17518 | 2,134,132,681 | 17,518
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
Below is the code which tries to show what's happening within SelfQueryRetrieval and return the relevant documents:
```
from langchain.globals import set_debug
set_debug(True)
query = "what's the one thing that cricketers to do while they're bowling?"
structured_query = StructuredQuery(query=query_template, limit=5)
# print(structured_query)
docs = retriever.get_relevant_documents(structured_query)
```
When I looked at the debug output, it had completely changed the query to just 'cricket' instead of the complete query I gave, as shown below:
```
chain/start] [1:retriever:Retriever > 2:chain:RunnableSequence > 5:parser:StructuredQueryOutputParser] Entering Parser run with input:
{
"input": "```json\n{\n \"query\": \"cricket\",\n \"filter\": \"NO_FILTER\",\n \"limit\": 5\n}\n```"
}
[chain/end] [1:retriever:Retriever > 2:chain:RunnableSequence > 5:parser:StructuredQueryOutputParser] [1ms] Exiting Parser run with output:
{
"lc": 1,
"type": "not_implemented",
"id": [
"langchain",
"chains",
"query_constructor",
"ir",
"StructuredQuery"
],
"repr": "StructuredQuery(query='cricket', filter=None, limit=5)"
}
[chain/end] [1:retriever:Retriever > 2:chain:RunnableSequence] [476ms] Exiting Chain run with output:
[outputs]
```
As you can see above, it completely changed the query, which will return irrelevant results. Now, I want to set my own prompt for the SelfQueryRetrieval. Can you show the code for how to do it, with an example? | how to set custom prompt for SelQueryRetrieval? | https://api.github.com/repos/langchain-ai/langchain/issues/17517/comments | 4 | 2024-02-14T10:59:23Z | 2024-02-14T14:19:05Z | https://github.com/langchain-ai/langchain/issues/17517 | 2,134,075,685 | 17,517
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain.retrievers.multi_query import MultiQueryRetriever
MultiQueryRetriever()
### Error Message and Stack Trace (if applicable)
venv\Lib\site-packages\langchain\retrievers\__init__.py:37: in <module>
from langchain.retrievers.web_research import WebResearchRetriever
venv\Lib\site-packages\langchain\retrievers\web_research.py:5: in <module>
from langchain_community.document_loaders import AsyncHtmlLoader
venv\Lib\site-packages\langchain_community\document_loaders\__init__.py:163: in <module>
from langchain_community.document_loaders.pebblo import PebbloSafeLoader
venv\Lib\site-packages\langchain_community\document_loaders\pebblo.py:5: in <module>
import pwd
E ModuleNotFoundError: No module named 'pwd'
### Description
There is no `pwd` module on Windows.
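One possible mitigation, sketched below rather than taken from an actual patch, is to guard the POSIX-only import:

```python
import sys

if sys.platform != "win32":
    import pwd  # the `pwd` module only exists on POSIX systems
else:
    pwd = None  # assumption: the loader would then need to handle the None case on Windows
```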
### System Info
python 3.11.2
langchain==0.1.7 | import pwd on windows | https://api.github.com/repos/langchain-ai/langchain/issues/17514/comments | 31 | 2024-02-14T09:51:15Z | 2024-05-22T10:46:00Z | https://github.com/langchain-ai/langchain/issues/17514 | 2,133,944,378 | 17,514 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
Below is the code:
```python
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    use_original_query=False,
    enable_limit=True,
    verbose=True
)
```
And below is the function I'm trying to use, which will do de-duplication:
```
def fetch_unique_documents(query_template, company_names, initial_limit, desired_count):
    company_documents = {}
    for company_name in company_names:
        # Format the query with the current company name
        query = query_template.format(company_names=company_name)
        unique_docs = []
        seen_contents = set()
        current_limit = initial_limit
        while len(unique_docs) < desired_count:
            # structured_query = StructuredQuery(query=query, limit=current_limit)
            docs = retriever.get_relevant_documents(query)
            # Keep track of whether we found new unique documents in this iteration
            found_new_unique = False
            for doc in docs:
                if doc.page_content not in seen_contents:
                    unique_docs.append(doc)
                    seen_contents.add(doc.page_content)
                    found_new_unique = True
                    if len(unique_docs) == desired_count:
                        break
            if not found_new_unique or len(unique_docs) == desired_count:
                break  # Exit if no new unique documents are found or if we've reached the desired count
            # Increase the limit more aggressively if we are still far from the desired count
            current_limit += desired_count - len(unique_docs)
        # Store the results in the dictionary with the company name as the key
        company_documents[company_name] = unique_docs
    return company_documents

# Example usage
company_names = company_names
query_template = "Does the company {company_names}, have one?"
desired_count = 5
initial_limit = 50

# Fetch documents for each company
company_documents = fetch_unique_documents(query_template, company_names, initial_limit=desired_count, desired_count=desired_count)
```
In the above, I don't know where to mention the desired_count, as I don't want to use the StructuredQuery function; I just want to use the normal retriever and get the relevant documents. Can you help me with the code? | how to mention topk in get_relevant_documents while using SelfQueryRetriever? | https://api.github.com/repos/langchain-ai/langchain/issues/17511/comments | 1 | 2024-02-14T07:45:34Z | 2024-02-14T14:19:05Z | https://github.com/langchain-ai/langchain/issues/17511 | 2,133,737,849 | 17,511
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
``` python
pgvector = PGVector(...) # Initialize PGVector with necessary parameters
ids_to_delete = [...] # List of ids to delete
pgvector.delete(ids_to_delete)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
How can I obtain the custom_id of a document from a collection using the source file name, as I want to delete a specific file from the list of files stored in a particular collection?
### System Info
I am using postgres | How can I get custom_id of a document from a Collection using Source file name? | https://api.github.com/repos/langchain-ai/langchain/issues/17508/comments | 7 | 2024-02-14T06:10:52Z | 2024-02-14T14:24:24Z | https://github.com/langchain-ai/langchain/issues/17508 | 2,133,616,806 | 17,508 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Approach 1:
```python
def result(question):
    snowflake_url = get_snowflake_db_uri()
    db_connection = SQLDatabase.from_uri(snowflake_url, sample_rows_in_table_info=1, include_tables=['table_1'], max_string_length=32000)
    sql_db_chain = SQLDatabaseChain.from_llm(
        llm,
        db_connection,
        prompt=few_shot_prompt,
        use_query_checker=True,  # must be False for OpenAI model
        verbose=False,
        return_intermediate_steps=True
    )
    answer = sql_db_chain(question)
    return answer["intermediate_steps"][1], answer["result"]
```
Approach 2:
```python
def llm_answer(question):
    snowflake_url = get_snowflake_db_uri()
    db_connection = SQLDatabase.from_uri(
        snowflake_url,
        sample_rows_in_table_info=1,
        include_tables=['maps_route_campaign_report'],
        view_support=True,
        max_string_length=30000,
    )
    table_info = db_connection.table_info
    chat_template = """You are a SQL expert ....
Use the following format:
Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here
"""
    chat_prompt = ChatPromptTemplate.from_messages(
        [
            ('system', chat_template),
            MessagesPlaceholder(variable_name='history'),
            ('human', '{input}'),
        ]
    )
    sql_db_chain = SQLDatabaseChain.from_llm(
        llm,
        db_connection,
        prompt=few_shot_prompt,
        use_query_checker=True,  # must be False for OpenAI model
        verbose=False,
        return_intermediate_steps=True
    )
    memory_buffer = ConversationBufferWindowMemory(k=4, return_messages=True)
    chat = memory_buffer.load_memory_variables({})['history']
    prompt = chat_prompt.format(info=table_info, history=chat, input=question)
    answer = sql_db_chain(prompt)
    answer_str = str(answer) if not isinstance(answer, str) else answer
    memory_buffer.save_context({'input': question}, {'answer': answer_str})
    sql_query = json.dumps(answer['intermediate_steps'][1])
    sql_result = json.dumps(answer['result'])
    return sql_query, sql_result
```
### Error Message and Stack Trace (if applicable)
None
### Description
1) What is the main difference in llm performance between Approach 1 and Approach 2 for text-to-SQL tasks?
2) Will Approach 2 retain memory if multiple tables are involved?
3) Which approach is best for light weighted deployment with docker and AWS
### System Info
boto3==1.34.29
chromadb==0.4.22
huggingface-hub==0.20.3
langchain==0.1.4
langchain-experimental==0.0.49
python-dotenv==1.0.1
sentence_transformers==2.3.0
snowflake-connector-python==3.7.0
snowflake-sqlalchemy==1.5.1
SQLAlchemy==1.4.51
streamlit==1.30.0 | Which has better performance for Text2SQL: direct question in SQLDatabaseChain vs question though ChatPromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/17506/comments | 1 | 2024-02-14T04:14:09Z | 2024-02-14T04:30:21Z | https://github.com/langchain-ai/langchain/issues/17506 | 2,133,508,496 | 17,506 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
The goal is to make it easier to run integration tests.
Acceptance Criteria:
- [ ] Update docker compose file here: https://github.com/langchain-ai/langchain/blob/master/docker/docker-compose.yml to include a database that is used during integration tests and can be spun up locally via docker compose.
- [ ] Use non standard port -- just increment by 1 from the previous service in that file
- [ ] Update any integration tests that use the given service to use a matching port
This is a good first issue for folks with experience in dev ops.
Consider git-grepping for existing yml files that contain service configuration. | Expand docker-compose with common dependencies used in integration tests | https://api.github.com/repos/langchain-ai/langchain/issues/17505/comments | 0 | 2024-02-14T03:32:30Z | 2024-06-01T00:07:26Z | https://github.com/langchain-ai/langchain/issues/17505 | 2,133,481,545 | 17,505 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
few_shot_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix=_snowflake_prompt + 'Provide no preamble' + ' Here are some examples:',
    suffix=PROMPT_SUFFIX,
    input_variables=['table_info', 'input', 'top_k'],
)
snowflake_url = get_snowflake_db_uri()
db_connection = SQLDatabase.from_uri(
    snowflake_url,
    sample_rows_in_table_info=2,
    include_tables=['table1'],
    view_support=True,
    max_string_length=30000,
)
return_op = SQLDatabaseChain.from_llm(
    llm,
    db_connection,
    prompt=few_shot_prompt,
    use_query_checker=True,  # comma added; it was missing in the original snippet
    verbose=False,
    return_intermediate_steps=True
)  # closing parenthesis restored; it appears to have been truncated in the original snippet
```
### Error Message and Stack Trace (if applicable)
How to set dialect with SQLDatabaseChain using FewShotPromptTemplate
### Description
I want to set the dialect to Snowflake in SQLDatabaseChain. Please provide the steps to do so when using FewShotPromptTemplate.
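One possible direction, shown only as a sketch: if the installed SQLDatabaseChain passes a `dialect` value into the prompt inputs (worth verifying for your version), the few-shot prompt could reference `{dialect}` in its prefix:

```python
# Sketch: the prefix text and variable list here are assumptions, not the original code.
few_shot_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="You are a {dialect} (Snowflake) SQL expert. Provide no preamble. Here are some examples:",
    suffix=PROMPT_SUFFIX,
    input_variables=["table_info", "input", "top_k", "dialect"],
)
```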
### System Info
boto3==1.34.29
chromadb==0.4.22
huggingface-hub==0.20.3
langchain==0.1.5
langchain-community==0.0.19
langchain-experimental==0.0.50
pip_audit==2.6.0
pre-commit==3.6.0
pylint==2.17.4
pylint_quotes==0.2.3
pylint_pydantic==0.3.2
python-dotenv==1.0.1
sentence_transformers==2.3.0
snowflake-connector-python==3.7.0
snowflake-sqlalchemy==1.5.1
SQLAlchemy==1.4.51
streamlit==1.30.0
watchdog==3.0.0
boto3==1.34.29
chromadb==0.4.22
huggingface-hub==0.20.3 | How to set dialect snowflake with SQLDatabaseChain with FewShotPromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/17487/comments | 11 | 2024-02-13T21:38:37Z | 2024-05-22T16:08:48Z | https://github.com/langchain-ai/langchain/issues/17487 | 2,133,171,500 | 17,487 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
app = Sanic("app")


async def create_collection(col_id, docs, embeddings):
    return await PGVector.afrom_documents(
        embedding=embeddings,
        documents=docs,
        collection_name=f"col_{col_id}",
        connection_string=config.CONNECTION_STRING,
        pre_delete_collection=True,
    )


@app.post("/collection")
async def chat_collection(request):
    docs = []
    session = request.args.get("session")
    if not session:
        raise ValueError("Session ID is required.")
    for file in request.files.getlist("collection"):
        for page in document_split_chunk(file.body, "pdf"):
            docs.append(page)
    await create_collection(session, docs, embeddings)
    return json("Collection created.")


if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8000, workers=1, debug=True)
```
### Error Message and Stack Trace (if applicable)
```
Executing <Task pending name='Task-8' coro=<HttpProtocol.connection_task() running at C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sanic\server\protocols\http_protocol.py:155> wait_for=<Future pending cb=[_chain_future.<locals>._call_check_cancel() at C:\Users\Aidan Stewart\AppData\Local\Programs\Python\Python39\lib\asyncio\futures.py:384, <TaskWakeupMethWrapper object at 0x0000016B66DC5CA0>()] created at C:\Users\Aidan Stewart\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py:429> created at C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sanic\server\protocols\http_protocol.py:283> took 1.016 seconds
Traceback (most recent call last):
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\langchain_community\vectorstores\pgvector.py", line 253, in create_vector_extension
session.execute(statement)
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sqlalchemy\orm\session.py", line 2308, in execute
return self._execute_internal(
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sqlalchemy\orm\session.py", line 2180, in _execute_internal
conn = self._connection_for_bind(bind)
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sqlalchemy\orm\session.py", line 2047, in _connection_for_bind
return trans._connection_for_bind(engine, execution_options)
File "<string>", line 2, in _connection_for_bind
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sqlalchemy\orm\state_changes.py", line 139, in _go
ret_value = fn(self, *arg, **kw)
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sqlalchemy\orm\session.py", line 1143, in _connection_for_bind
conn = bind.connect()
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sqlalchemy\engine\base.py", line 3269, in connect
return self._connection_cls(self)
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sqlalchemy\engine\base.py", line 145, in __init__
self._dbapi_connection = engine.raw_connection()
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sqlalchemy\engine\base.py", line 3293, in raw_connection
return self.pool.connect()
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sqlalchemy\pool\base.py", line 452, in connect
return _ConnectionFairy._checkout(self)
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sqlalchemy\pool\base.py", line 1269, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sqlalchemy\pool\base.py", line 716, in checkout
rec = pool._do_get()
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sqlalchemy\pool\impl.py", line 148, in _do_get
return self._pool.get(wait, self._timeout)
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sqlalchemy\util\queue.py", line 309, in get
return self.get_nowait()
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sqlalchemy\util\queue.py", line 303, in get_nowait
return self._queue.get_nowait()
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sqlalchemy\util\langhelpers.py", line 1146, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sqlalchemy\util\queue.py", line 278, in _queue
queue = asyncio.Queue(maxsize=self.maxsize)
File "C:\Users\Aidan Stewart\AppData\Local\Programs\Python\Python39\lib\asyncio\queues.py", line 36, in __init__
self._loop = events.get_event_loop()
File "C:\Users\Aidan Stewart\AppData\Local\Programs\Python\Python39\lib\asyncio\events.py", line 642, in get_event_loop
raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'asyncio_0'.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\sanic\app.py", line 1385, in handle_request
response = await response
File "C:\Users\Aidan Stewart\PycharmProjects\x\py_orchestrator\blueprints\chat\view.py", line 41, in chat_collection
await create_collection(session, files, embeddings)
File "C:\Users\Aidan Stewart\PycharmProjects\x\py_orchestrator\common\document_service.py", line 35, in create_collection
return await PGVector.afrom_documents(
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\langchain_core\vectorstores.py", line 520, in afrom_documents
return await cls.afrom_texts(texts, embedding, metadatas=metadatas, **kwargs)
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\langchain_core\vectorstores.py", line 542, in afrom_texts
return await run_in_executor(
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\langchain_core\runnables\config.py", line 493, in run_in_executor
return await asyncio.get_running_loop().run_in_executor(
File "C:\Users\Aidan Stewart\AppData\Local\Programs\Python\Python39\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\langchain_community\vectorstores\pgvector.py", line 662, in from_texts
return cls.__from(
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\langchain_community\vectorstores\pgvector.py", line 349, in __from
store = cls(
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\langchain_community\vectorstores\pgvector.py", line 212, in __init__
self.__post_init__()
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\langchain_community\vectorstores\pgvector.py", line 218, in __post_init__
self.create_vector_extension()
File "C:\Users\Aidan Stewart\PycharmProjects\x\venv\lib\site-packages\langchain_community\vectorstores\pgvector.py", line 256, in create_vector_extension
raise Exception(f"Failed to create vector extension: {e}") from e
Exception: Failed to create vector extension: There is no current event loop in thread 'asyncio_0'.
```
### Description
When attempting to create a collection for a set of documents, I run into the error demonstrated. It appears that sqlalchemy cannot find the existing async loop created by Sanic when the server is running. Not entirely sure what the cause is nor the solution as it appears to be an internal Langchain issue when attempting to execute SQLAlchemy code in an async executor.
### System Info
aiofiles==23.2.1
aiohttp==3.9.2
aiosignal==1.3.1
aiosqlite==0.17.0
annotated-types==0.6.0
anyio==4.2.0
async-timeout==4.0.3
asyncpg==0.29.0
attrs==23.2.0
black==24.1.1
certifi==2023.11.17
charset-normalizer==3.3.2
click==8.1.7
colorama==0.4.6
dataclasses-json==0.6.3
distro==1.9.0
exceptiongroup==1.2.0
frozenlist==1.4.1
greenlet==3.0.3
h11==0.14.0
html5tagger==1.3.0
httpcore==1.0.2
httptools==0.6.1
httpx==0.26.0
idna==3.6
iso8601==1.1.0
jsonpatch==1.33
jsonpointer==2.4
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.17
langchain-openai==0.0.5
langsmith==0.0.84
marshmallow==3.20.2
multidict==6.0.4
mypy-extensions==1.0.0
numpy==1.26.3
openai==1.10.0
packaging==23.2
pathspec==0.12.1
pgvector==0.2.4
platformdirs==4.1.0
psycopg2-binary==2.9.9
pydantic==2.6.0
pydantic_core==2.16.1
pypdf==4.0.1
pypika-tortoise==0.1.6
pytz==2023.4
PyYAML==6.0.1
regex==2023.12.25
requests==2.31.0
sanic==23.12.1
sanic-routing==23.12.0
sniffio==1.3.0
SQLAlchemy==2.0.25
tenacity==8.2.3
tiktoken==0.5.2
tomli==2.0.1
tortoise-orm==0.20.0
tqdm==4.66.1
tracerite==1.1.1
typing-inspect==0.9.0
typing_extensions==4.9.0
urllib3==2.1.0
websockets==12.0
yarl==1.9.4
Windows 10
Python 3.9 | RuntimeError: There is no current event loop in thread 'asyncio_0'. Utilizing Sanic as my web framework as choice. Error occurs when attempting to use `PGVector.afrom_documents` | https://api.github.com/repos/langchain-ai/langchain/issues/17485/comments | 7 | 2024-02-13T21:35:23Z | 2024-02-13T22:44:00Z | https://github.com/langchain-ai/langchain/issues/17485 | 2,133,167,447 | 17,485 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code is copied from a [small repository](https://github.com/jkndrkn/pinecone-upsert-error/tree/main) I created that helps reproduce the issue.
```python
from os import environ
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Pinecone
INDEX_NAME = environ["PINECONE_INDEX"]
TRIALS = 50
TEXT_PATH = "my_text.txt"
loader = TextLoader(TEXT_PATH)
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
docs = text_splitter.split_documents(documents)
print("docs length:", len(docs))
embedder = HuggingFaceEmbeddings(model_name="all-mpnet-base-v2")
for i in range(0, TRIALS):
print("trial: ", i, flush=True)
Pinecone.from_documents(docs, embedder, index_name=INDEX_NAME)
```
Please see the project [README.md](https://github.com/jkndrkn/pinecone-upsert-error/blob/main/README.md) for instructions for how to configure and run this code.
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/urllib3/connection.py", line 198, in _new_conn
sock = connection.create_connection(
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/urllib3/util/connection.py", line 60, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/socket.py", line 955, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/urllib3/connectionpool.py", line 793, in urlopen
response = self._make_request(
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/urllib3/connectionpool.py", line 491, in _make_request
raise new_e
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/urllib3/connectionpool.py", line 467, in _make_request
self._validate_conn(conn)
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1099, in _validate_conn
conn.connect()
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/urllib3/connection.py", line 616, in connect
self.sock = sock = self._new_conn()
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/urllib3/connection.py", line 205, in _new_conn
raise NameResolutionError(self.host, self, e) from e
urllib3.exceptions.NameResolutionError: <urllib3.connection.HTTPSConnection object at 0x7f7d20580430>: Failed to resolve 'answers-dev-jde-test-REDACTED.svc.us-east-1-aws.pinecone.io' ([Errno 8] nodename nor servname provided, or not known)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/jeriksen/bamboohr/ai-labs/pinecone-upsert-error/load_document.py", line 28, in <module>
Pinecone.from_documents(docs, embedder, index_name=INDEX_NAME)
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 508, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/langchain_pinecone/vectorstores.py", line 434, in from_texts
pinecone.add_texts(
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/langchain_pinecone/vectorstores.py", line 166, in add_texts
[res.get() for res in async_res]
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/langchain_pinecone/vectorstores.py", line 166, in <listcomp>
[res.get() for res in async_res]
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/multiprocessing/pool.py", line 774, in get
raise self._value
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/pinecone/core/client/api_client.py", line 195, in __call_api
response_data = self.request(
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/pinecone/core/client/api_client.py", line 454, in request
return self.rest_client.POST(url,
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/pinecone/core/client/rest.py", line 301, in POST
return self.request("POST", url,
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/pinecone/core/client/rest.py", line 178, in request
r = self.pool_manager.request(
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/urllib3/_request_methods.py", line 144, in request
return self.request_encode_body(
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/urllib3/_request_methods.py", line 279, in request_encode_body
return self.urlopen(method, url, **extra_kw)
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/urllib3/poolmanager.py", line 444, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/urllib3/connectionpool.py", line 877, in urlopen
return self.urlopen(
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/urllib3/connectionpool.py", line 877, in urlopen
return self.urlopen(
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/urllib3/connectionpool.py", line 877, in urlopen
return self.urlopen(
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/urllib3/connectionpool.py", line 847, in urlopen
retries = retries.increment(
File "/Users/jeriksen/opt/anaconda3/envs/pinecone-upsert-error/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='answers-dev-jde-test-REDACTED.svc.us-east-1-aws.pinecone.io', port=443): Max retries exceeded with url: /vectors/upsert (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7f7d20580430>: Failed to resolve 'answers-dev-jde-test-REDACTED.svc.us-east-1-aws.pinecone.io' ([Errno 8] nodename nor servname provided, or not known)"))
```
### Description
I am trying to use `langchain_community.vectorstores.Pinecone` to upsert embeddings to a Pinecone index using `Pinecone.from_documents()`. When I start a script that calls `from_documents()` once for each document, it succeeds for the first few runs but then fails and returns this error:
```
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='answers-dev-jde-test-REDACTED.svc.us-east-1-aws.pinecone.io', port=443): Max retries exceeded with url: /vectors/upsert (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7f7d20580430>: Failed to resolve 'answers-dev-jde-test-REDACTED.svc.us-east-1-aws.pinecone.io' ([Errno 8] nodename nor servname provided, or not known)"))
```
This is an unexpected error. The index does exist. I have tried this with many different Pinecone indexes of both 1024 dimensions with Cohere embeddings and 768 dimensions with HuggingFace embeddings. I have also tried various document sizes.
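A simple check I plan to use while the loop is failing, to see whether this is plain DNS flakiness rather than something in the upsert path (the host string is copied from the error message):
```python
import socket

host = "answers-dev-jde-test-REDACTED.svc.us-east-1-aws.pinecone.io"
try:
    infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
    print("resolved to:", sorted({info[4][0] for info in infos}))
except socket.gaierror as exc:
    print("resolution failed:", exc)
```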
### System Info
LangChain dependencies
```
langchain==0.1.6
langchain-community==0.0.19
langchain-core==0.1.22
```
Here is my environment.yml
```
name: pinecone-upsert-error
dependencies:
- pip=23.1.2
- python=3.10.13
- pip:
- cohere==4.46
- langchain==0.1.6
- pinecone-client[grpc]==3.0.2
```
Here is the entire output of `pip freeze`:
```
aiohttp==3.9.3
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.2.0
async-timeout==4.0.3
attrs==23.2.0
backoff==2.2.1
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
cohere==4.46
dataclasses-json==0.6.4
exceptiongroup==1.2.0
fastavro==1.9.3
filelock==3.13.1
frozenlist==1.4.1
fsspec==2024.2.0
googleapis-common-protos==1.62.0
greenlet==3.0.3
grpc-gateway-protoc-gen-openapiv2==0.1.0
grpcio==1.60.1
huggingface-hub==0.20.3
idna==3.6
importlib-metadata==6.11.0
Jinja2==3.1.3
joblib==1.3.2
jsonpatch==1.33
jsonpointer==2.4
langchain==0.1.6
langchain-community==0.0.19
langchain-core==0.1.22
langsmith==0.0.87
lz4==4.3.3
MarkupSafe==2.1.5
marshmallow==3.20.2
mpmath==1.3.0
multidict==6.0.5
mypy-extensions==1.0.0
networkx==3.2.1
nltk==3.8.1
numpy==1.26.4
packaging==23.2
pillow==10.2.0
pinecone-client==3.0.2
protobuf==3.20.3
pydantic==2.6.1
pydantic_core==2.16.2
PyYAML==6.0.1
regex==2023.12.25
requests==2.31.0
safetensors==0.4.2
scikit-learn==1.4.0
scipy==1.12.0
sentence-transformers==2.3.1
sentencepiece==0.1.99
sniffio==1.3.0
SQLAlchemy==2.0.27
sympy==1.12
tenacity==8.2.3
threadpoolctl==3.2.0
tokenizers==0.15.2
torch==2.2.0
tqdm==4.66.2
transformers==4.37.2
typing-inspect==0.9.0
typing_extensions==4.9.0
urllib3==2.2.0
yarl==1.9.4
zipp==3.17.0
``` | Error in langchain_community.vectorstores.Pinecone: Sending request to /vectors/upsert triggers NameResolutionError | https://api.github.com/repos/langchain-ai/langchain/issues/17474/comments | 8 | 2024-02-13T19:49:05Z | 2024-02-14T21:44:26Z | https://github.com/langchain-ai/langchain/issues/17474 | 2,133,024,664 | 17,474 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
below is the code
```
retriever = vectorStore.as_retriever(search_kwargs=
{"k": current_limit,
"pre_filter": {"Subject": {"$eq": "sports"}}
}
)
```
the above pre_filter should go through the name metadata component i.e. metadata where Subject is sports and should retrieve the document right? But its performing same as ht normal retriever i.e. without pre_filter. May i know the reason why? | why' the pre_filter component not working for retriever while retrieving documents? | https://api.github.com/repos/langchain-ai/langchain/issues/17464/comments | 3 | 2024-02-13T15:24:57Z | 2024-02-14T01:34:39Z | https://github.com/langchain-ai/langchain/issues/17464 | 2,132,577,209 | 17,464 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
def retreival_qa_chain(COLLECTION_NAME):
embedding = OpenAIEmbeddings()
llm = ChatOpenAI(model="gpt-3.5-turbo-16k",temperature=0.1)
vector_store = PGVector(
connection_string=CONNECTION_STRING,
collection_name=COLLECTION_NAME,
embedding_function=embedding
)
retriever = vector_store.as_retriever(search_kwargs={"k": 3})
qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True
)
return qa
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
How can I delete an existing collection in PGVector, and ensure that when the collection is deleted its corresponding entries in the related tables are also deleted?
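What I am currently considering is roughly this (sketch, reusing the same names as above). I am assuming `delete_collection()` is the intended API, and that the foreign key from `langchain_pg_embedding` to `langchain_pg_collection` cascades on delete so the embedding rows are removed too; if it does not cascade, those rows would need to be deleted explicitly.
```python
vector_store = PGVector(
    connection_string=CONNECTION_STRING,
    collection_name=COLLECTION_NAME,
    embedding_function=embedding,
)
# Drops the row in langchain_pg_collection for this collection.
vector_store.delete_collection()
```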
### System Info
I am using pgvector | How to delete collection in Pgvector with cascade delete? | https://api.github.com/repos/langchain-ai/langchain/issues/17461/comments | 1 | 2024-02-13T12:26:46Z | 2024-02-14T01:50:11Z | https://github.com/langchain-ai/langchain/issues/17461 | 2,132,213,397 | 17,461 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code
```
from langchain_core.runnables import RunnableParallel, RunnableLambda
from langchain.prompts import ChatPromptTemplate
chain1 = ChatPromptTemplate.from_template("tell me a joke about {topic}")
chain2 = ChatPromptTemplate.from_template("write a short (2 line) poem about {topic}")
def test(input) -> int:
print(input)
return(3)
combined = RunnableParallel(joke=chain1, poem=chain2).assign(x=RunnableLambda(test))
```
The output is correct if you do `combined.invoke({'topic':"love"})` you correctly get
```
{'joke': ChatPromptValue(messages=[HumanMessage(content='tell me a joke about love')]),
'poem': ChatPromptValue(messages=[HumanMessage(content='write a short (2 line) poem about love')]),
'x': 3}
```
however if you check the output schema as follows
```
combined.output_schema.schema()
```
Output is
```
{'title': 'RunnableSequenceOutput',
'type': 'object',
'properties': {'topic': {'title': 'Topic', 'type': 'string'},
'x': {'title': 'X', 'type': 'integer'}}}
```
The `joke` and `poem` fields are missing from the output schema. This impacts the LangServe API output for the chain as well.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The output schema of the runnable is missing fields when RunnableParallel is used in conjunction with `.assign()`.
### System Info
langchain = "0.1.6"
python = "^3.11" | Issue with chain output_schema when a runnableparrallel is invoked with assign. | https://api.github.com/repos/langchain-ai/langchain/issues/17460/comments | 2 | 2024-02-13T11:19:29Z | 2024-07-30T16:05:57Z | https://github.com/langchain-ai/langchain/issues/17460 | 2,132,099,898 | 17,460 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
below is the code, which only adds page_content to the embeddings
```
# Instantiate the OpenAIEmbeddings class
openai = OpenAIEmbeddings(openai_api_key="")
# Generate embeddings for your documents
embeddings = openai.embed_documents([doc.page_content for doc in documents])
# Create tuples of text and corresponding embedding
text_embeddings = list(zip([doc.page_content for doc in documents], embeddings))
# Create a FAISS vector store from the embeddings
vectorStore = FAISS.from_embeddings(text_embeddings, openai)
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 10})
docs = retriever.get_relevant_documents("Data related to cricket")
```
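(If there is a supported way to attach metadata here, I could not find it documented. My unverified guess would be something like the following, assuming `from_embeddings` accepts a `metadatas` argument, or simply using `FAISS.from_documents`, which keeps `doc.metadata`:)
```python
# Hypothetical variant: pass the per-document metadata alongside the (text, embedding) pairs
metadatas = [doc.metadata for doc in documents]
vectorStore = FAISS.from_embeddings(text_embeddings, openai, metadatas=metadatas)

# or, skipping the manual embedding step entirely:
# vectorStore = FAISS.from_documents(documents, openai)
```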
while retrieving the document and printing the output, it'll only return page_content and doesn't return metadata. So, is there any to also add metadata to the vector store and return metadata along with the page_content output? | how to also add metadata along with the page_content to the vector store? | https://api.github.com/repos/langchain-ai/langchain/issues/17459/comments | 5 | 2024-02-13T10:34:22Z | 2024-08-01T15:41:18Z | https://github.com/langchain-ai/langchain/issues/17459 | 2,131,995,849 | 17,459 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
below is the code which will retrieve the documents
```
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 10})
docs = retriever.get_relevant_documents("data related to cricket")
```
then below's the output when i tried to return the metadata
```
for doc in docs:
print(doc.metadata)
{}
{}
{}
{}
{}
{}
{}
{}
{}
{}
```
it is not returning metadata, but it is returning page_content. How to also return the metadata along with page_content? | unable to retrieve metadata while retrieving the documents | https://api.github.com/repos/langchain-ai/langchain/issues/17458/comments | 5 | 2024-02-13T10:29:04Z | 2024-02-18T04:30:22Z | https://github.com/langchain-ai/langchain/issues/17458 | 2,131,985,841 | 17,458 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
below is what documents[0] looks like
`Document(page_content='Text: This e Australian Act (Cth).', metadata={'source': '/content/excel_data/2.csv', 'row': 0, 'x1': '296.707803875208', 'y1': '211.071329072118', 'x2': '1436.41797742248', 'y2': '276.853476041928', 'Block Type': 'LAYOUT_TEXT', 'Block ID': '42c93696619b409b80ddc71df32580f2', 'page_num': '1', 'Company': 'BBCI', 'Year': '2020', 'is_answer': '0'})`
below is the code which returns the top-k unique documents using the retriever
```
def fetch_unique_documents_with_metadata(query, initial_limit, desired_count):
unique_docs_with_metadata = []
seen_contents = set()
current_limit = initial_limit
while len(unique_docs_with_metadata) < desired_count:
retriever = vectorStore.as_retriever(search_kwargs={"k": current_limit})
docs = retriever.get_relevant_documents(query)
# Keep track of whether we found new unique documents in this iteration
found_new_unique = False
for doc in docs:
if doc.page_content not in seen_contents:
# Add both page_content and metadata to the unique document list
doc_with_metadata = {
"page_content": doc.page_content,
"metadata": doc.metadata
}
unique_docs_with_metadata.append(doc_with_metadata)
seen_contents.add(doc.page_content)
found_new_unique = True
if len(unique_docs_with_metadata) == desired_count:
break
if not found_new_unique or len(unique_docs_with_metadata) == desired_count:
break # Exit if no new unique documents are found or if we've reached the desired count
# Increase the limit more aggressively if we are still far from the desired count
current_limit += desired_count - len(unique_docs_with_metadata)
return unique_docs_with_metadata
# Example usage with the updated function
query = "Does or concerns, including in relation?"
desired_count = 10 # The number of unique documents you want
unique_documents_with_metadata = fetch_unique_documents_with_metadata(query, initial_limit=desired_count, desired_count=desired_count)
# Print the unique documents or handle them as needed
# for doc in unique_documents_with_metadata:
# print(f"Row {doc['metadata']['row']}: {doc['page_content']}")
# print(f"Metadata: {doc['metadata']}")
len(unique_documents_with_metadata)
```
the output is below
```
{'page_content': 'Text: 5. esources departments or the \n.',
'metadata': {}}
```
unable to extract/return metadata while retrieving the relevant documents. It is not returning metadata present inside the documents, it is only returning page_content. I want to return all the medata which looks like metadata={'source': '/content/excel_data/2.csv', 'row': 0, 'x1': '296.707803875208', 'y1': '211.071329072118', 'x2': '1436.41797742248', 'y2': '276.853476041928', 'Block Type': 'LAYOUT_TEXT', 'Block ID': '42c93696619b409b80ddc71df32580f2', 'page_num': '1', 'Company': 'BBCI', 'Year': '2020', 'is_answer': '0'}) but it is only returning page_content. Can you look into it and help me with the code? | unable to extract/return metadata while retrieving the relevant documents | https://api.github.com/repos/langchain-ai/langchain/issues/17455/comments | 4 | 2024-02-13T10:08:37Z | 2024-02-14T01:39:23Z | https://github.com/langchain-ai/langchain/issues/17455 | 2,131,944,956 | 17,455 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.document_loaders import GithubFileLoader

loader = GithubFileLoader(
    repo="langchain-ai/langchain",  # the repo name
    access_token="<GITHUB_PERSONAL_ACCESS_TOKEN>",  # redacted
    github_api_url="https://api.github.com",
    file_filter=lambda file_path: file_path.endswith(
        ".md"
    ),  # load all markdown files.
)
documents = loader.load()
print(documents)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
```
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://api.github.com/api/v3/repos/langchain-ai/langchain/git/trees/master?recursive=1
```
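The failing URL contains an `/api/v3` prefix, which as far as I know is only correct for GitHub Enterprise, not for https://api.github.com. A quick check one could run against the tree endpoint without that prefix (the `GITHUB_PAT` environment variable is a hypothetical placeholder for a valid token):
```python
import os
import requests

resp = requests.get(
    "https://api.github.com/repos/langchain-ai/langchain/git/trees/master?recursive=1",
    headers={"Authorization": f"Bearer {os.environ['GITHUB_PAT']}"},
)
print(resp.status_code)  # I would expect this to succeed if only the /api/v3 prefix is wrong
```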
### System Info
System Information
------------------
> OS: Linux
> OS Version: #17~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Jan 16 14:32:32 UTC 2
> Python Version: 3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.1.22
> langchain: 0.1.6
> langchain_community: 0.0.19
> langsmith: 0.0.87
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | GithubFileLoader API Error | https://api.github.com/repos/langchain-ai/langchain/issues/17453/comments | 13 | 2024-02-13T09:48:58Z | 2024-07-23T16:07:26Z | https://github.com/langchain-ai/langchain/issues/17453 | 2,131,905,108 | 17,453 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
See here: https://github.com/langchain-ai/langchain/pull/610 no mention of this hypothetical notebook containing an example.
and here: https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.QAEvalChain.html# What does this mean?
[screenshot of the QAEvalChain API reference omitted]
It's cryptic.
### Idea or request for content:
```python
from langchain import PromptTemplate
# Create a custom prompt template incorporating the rubric
custom_prompt_template = PromptTemplate(
template=f"""
Grade the student's answer's against the answer key on a scale of 1 to 10 based on the following criteria:
- Accuracy: Is the answer correct? (40%)
- Relevance: Is the answer relevant to the question? (30%)
- Clarity: Is the answer clearly articulated? (20%)
- Grammar: Is the answer grammatically correct? (10%)
Provide a numerical score and a brief justification for each category.
{{input}}
Evaluate the following answer based on the criteria outlined in this rubric:
{{prompt}}
Answer:
{{evaluation}}
""")
# Initialize the QAEvalChain with the custom prompt
eval_chain = QAEvalChain.from_llm(llm, prompt=custom_prompt_template, verbose=True)
# %%
qa_pair = {'query': 'What did the boy do when he was tired?', 'answer': 'He would sleep in her shade.'}
student_answer = {'query': 'What did the boy do when he was tired?', 'answer': 'When the boy was tired, he asked the tree for a quiet place to sit and rest, and the tree offered her old stump for him to sit and rest.'}
eval_data = zip([qa_pair], [student_answer])
graded_outputs = eval_chain.evaluate(eval_data)
```
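(Side note from digging through the source while writing this up: my current, unverified guess is that `from_llm` expects a prompt with the variables `query`, `answer`, and `result`, and that `evaluate()` reads those keys from its two arguments. Something like the sketch below, where `llm` is the model defined earlier. If that is right, the API reference should state it explicitly.)
```python
from langchain.prompts import PromptTemplate
from langchain.evaluation.qa import QAEvalChain

rubric_prompt = PromptTemplate(
    input_variables=["query", "answer", "result"],
    template="""Grade the student's answer against the answer key on a scale of 1 to 10 based on:
- Accuracy (40%), Relevance (30%), Clarity (20%), Grammar (10%).
Provide a numerical score and a brief justification for each category.

QUESTION: {query}
TRUE ANSWER: {answer}
STUDENT ANSWER: {result}
GRADE AND JUSTIFICATION:""",
)

eval_chain = QAEvalChain.from_llm(llm, prompt=rubric_prompt)

examples = [{"query": "What did the boy do when he was tired?",
             "answer": "He would sleep in her shade."}]
predictions = [{"result": "He asked the tree for a quiet place to sit and rest."}]
graded_outputs = eval_chain.evaluate(examples, predictions)
```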
I want to do something like that passing custom rubrics for evaluation but have no idea how arguments to the call `.evaluate()` map to `input`, `prompt`, `evaluation` which are deemed required input variables in the documentation. Please when you speak among yourselves remember that users are going to be researching potential questions, as you insist, but if you don't provide more informative speech amongst yourselves, I have to create a new issue like this one. | QAEvalChain custom prompt how do I do this? | https://api.github.com/repos/langchain-ai/langchain/issues/17449/comments | 3 | 2024-02-13T08:52:21Z | 2024-05-21T16:09:26Z | https://github.com/langchain-ai/langchain/issues/17449 | 2,131,806,527 | 17,449 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import pandas as pd

from langchain.agents.agent_types import AgentType
from langchain_experimental.agents.agent_toolkits.csv.base import create_csv_agent
from langchain_openai import OpenAI
from langchain_community.embeddings import HuggingFaceInferenceAPIEmbeddings
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent

# llm is a completion-style OpenAI LLM instantiated earlier, e.g. llm = OpenAI()
agent = create_csv_agent(
    llm,
    'train.csv',
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
agent.run("how many rows are there?")

df = pd.read_csv('train.csv', delimiter=';', encoding='Latin-1')
print(df.head())

agent = create_pandas_dataframe_agent(llm, df, agent_type="openai-tools", verbose=True)
agent.invoke(
    {
        "input": "What's the correlation between age and fare? is that greater than the correlation between fare and survival?"
    }
)
```
Output:
```
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
TypeError: Completions.create() got an unexpected keyword argument 'tools'
```
and when changing the LLM or agent type I get this error:
```
    raise ValueError(f"Prompt missing required variables: {missing_vars}")
ValueError: Prompt missing required variables: {'tools', 'tool_names'}
```
### Error Message and Stack Trace (if applicable)
line 26, in <module>
agent = create_csv_agent(llm ,'train.csv',agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "\LocalCache\local-packages\Python311\site-packages\langchain_experimental\agents\agent_toolkits\csv\base.py", line 66, in create_csv_agent
return create_pandas_dataframe_agent(llm, df, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_experimental\agents\agent_toolkits\pandas\base.py", line 264, in create_pandas_dataframe_agent
runnable=create_react_agent(llm, tools, prompt), # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\agents\react\agent.py", line 97, in create_react_agent
raise ValueError(f"Prompt missing required variables: {missing_vars}")
ValueError: Prompt missing required variables: {'tool_names', 'tools'}
### Description
I am trying to use the CSV agent to query my CSV, but I keep getting `TypeError: Completions.create() got an unexpected keyword argument 'tools'` (for agent_type = "openai-tools"). When I try a different agent type, I get `ValueError: Prompt missing required variables: {'tools', 'tool_names'}` instead.
(I followed the example from the documentation.)
If anyone has an idea how to fix this, or has encountered this issue before, please reach out!
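For reference, my next attempt will be to swap the completion-style `OpenAI` LLM for a chat model, on the assumption that the `tools` argument only exists on the chat completions API. A sketch of what I plan to try (`df` as loaded above):
```python
from langchain_openai import ChatOpenAI
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
agent = create_pandas_dataframe_agent(llm, df, agent_type="openai-tools", verbose=True)
agent.invoke({"input": "How many rows are there?"})
```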
### System Info
OS: Windows
OS Version: 10.0.19045
Python Version: 3.11.8 (tags/v3.11.8:db85d51, Feb 6 2024, 22:03:32) [MSC v.1937 64 bit (AMD64)] | Erros with langchain CSV agent and Pandas agent | https://api.github.com/repos/langchain-ai/langchain/issues/17448/comments | 7 | 2024-02-13T08:38:17Z | 2024-05-21T16:09:20Z | https://github.com/langchain-ai/langchain/issues/17448 | 2,131,775,505 | 17,448 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.tools.tavily_search import TavilySearchResults, TavilyAnswer
search = TavilySearchResults()
print(search.invoke("what is the weather in SF"))
# output: []
tans = TavilyAnswer()
print(tans.invoke("What is the weather in SF?"))
# output: The current weather in San Francisco is partly cloudy with a temperature of 51.1°F (10.6°C). The wind is coming from the west-northwest at 8.1 mph (13.0 km/h), and the humidity is at 86%. The visibility is 9.0 miles (16.0 km), and the UV index is 1.0. The highest temperature recorded in San Francisco in 2024 was 73°F (23°C) on January 29.
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
# Issue:
I'm trying the [Agents Quickstart example on Tavily](https://python.langchain.com/docs/modules/agents/quick_start).
As shown above, `TavilySearchResults` returns an empty response. Upon checking, I found that `TavilyAnswer` returns results correctly. Digging further, the difference comes down to the Tavily REST API parameter `search_depth`:
`TavilySearchResults` defaults to "advanced" while `TavilyAnswer` defaults to "basic".
I tried the same queries on https://app.tavily.com/playground and saw the same behavior: search_depth="basic" returns results while "advanced" returns nothing.
# Possible Solution
I'm on the Tavily free tier; I hope the API tier does not make a difference.
I propose either changing the default search_depth to "basic" or exposing search_depth as a field on TavilySearchResults.
Let me know which option is preferred; I can create a pull request.
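Roughly what I had in mind for the second option. Note this sketch assumes the underlying `TavilySearchAPIWrapper.results()` already accepts a `search_depth` keyword, which I have not verified:
```python
from typing import Optional

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.callbacks import CallbackManagerForToolRun


class TavilySearchResultsWithDepth(TavilySearchResults):
    """Hypothetical variant of TavilySearchResults that exposes search_depth."""

    search_depth: str = "basic"

    def _run(self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None):
        # Forward the configured depth to the wrapper instead of its default.
        return self.api_wrapper.results(
            query, self.max_results, search_depth=self.search_depth
        )
```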
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6000
> Python Version: 3.9.18 | packaged by conda-forge | (main, Dec 23 2023, 16:35:41)
[Clang 16.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.16
> langchain: 0.1.5
> langchain_community: 0.0.17
> langchain_experimental: 0.0.50
> langchain_openai: 0.0.5
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | TavilySearchResults in Agents Quick Start always return empty result | https://api.github.com/repos/langchain-ai/langchain/issues/17447/comments | 6 | 2024-02-13T08:26:36Z | 2024-07-04T08:58:21Z | https://github.com/langchain-ai/langchain/issues/17447 | 2,131,742,229 | 17,447 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
def add_docs_to_chromadb(docpath: str, collection_name: str):
try:
logging.info("Adding docs to collection")
vectordb = delib.connect_chromadb(collection_name = collection_name)
if os.path.isfile(docpath):
loader = UnstructuredFileLoader(docpath, mode="paged", strategy="hi_res", hi_res_model_name="detectron2_onnx", post_processors=[clean_extra_whitespace])
elif os.path.isdir(docpath):
loader = DirectoryLoader(docpath, silent_errors=True, use_multithreading=True, loader_kwargs={"mode": "paged", "strategy":"hi_res", "hi_res_model_name":"detectron2_onnx", "post_processors":[clean_extra_whitespace]})
#loader = DirectoryLoader(docpath, silent_errors=True, use_multithreading=True)
else:
logging.error(f"Provided path '{docpath}' is not a valid file or folder.")
return {"response": f"Provided path '{docpath}' is not a valid file or folder."}
logging.info("Connected to db and collection")
#loader = DirectoryLoader(docpath)
documents = loader.load()
logging.info(f"There are {len(documents)} documents are loaded for indexing.")
documents = filter_complex_metadata(documents)
#text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=200)
#text_splitter = RecursiveCharacterTextSplitter(separators=["\n\n", "\n", "\t"], chunk_size=10000, chunk_overlap=3000)
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(separators=["\n\n", "\n", "\t"],chunk_size=delib.CHUNK_SIZE_TOKENS,chunk_overlap=200)
texts = text_splitter.split_documents(documents)
logging.info("Adding documents to vectordb..")
vectordb.add_documents(documents=texts, embedding=embeddings, persist_directory = db_directory)
#vectordb.add_documents(documents=texts)
#vectordb.persist()
logging.info(f"Documents from '{docpath}' indexed successfully.")
except Exception as e:
logging.error(f"An error occured: {str(e)}")
logging.error("An error occured adding to collection: "+ str(e))
#pass
```
add_docs_to_chromadb("home/uploaded/data", "langchain")
### Error Message and Stack Trace (if applicable)
2024-02-13T03:12:26.0575885Z 2024-02-13 03:12:26,052 INFO: Detecting page elements ...
2024-02-13T03:12:26.4798476Z 2024-02-13 03:12:26,479 INFO: Processing entire page OCR with tesseract...
2024-02-13T03:12:33.8568556Z 2024-02-13 03:12:33,851 INFO: Detecting page elements ...
2024-02-13T03:12:33.9356842Z 2024-02-13 03:12:33,927 INFO: Detecting page elements ...
2024-02-13T03:12:34.4394549Z 2024-02-13 03:12:34,438 INFO: Processing entire page OCR with tesseract...
2024-02-13T03:12:41.8113054Z 2024-02-13 03:12:41,804 INFO: Detecting page elements ...
2024-02-13T03:12:42.8076208Z 2024-02-13 03:12:42,807 INFO: Detecting page elements ...
2024-02-13T03:12:47.3426438Z 2024-02-13 03:12:47,328 INFO: Processing entire page OCR with tesseract...
2024-02-13T03:12:49.6169096Z 2024-02-13 03:12:49,608 INFO: Detecting page elements ...
2024-02-13T03:12:50.7754205Z 2024-02-13 03:12:50,767 INFO: Detecting page elements ...
2024-02-13T03:12:58.8977025Z 2024-02-13 03:12:58,891 INFO: Detecting page elements ...
2024-02-13T03:12:59.5277951Z 2024-02-13 03:12:59,527 INFO: Detecting page elements ...
2024-02-13T03:13:00.6122064Z [2024-02-13 03:13:00 +0000] [32703] [INFO] 169.254.130.1:62885 - "GET /docs HTTP/1.1" 200
2024-02-13T03:13:04.4380495Z 2024-02-13 03:13:04,433 INFO: Processing entire page OCR with tesseract...
2024-02-13T03:13:06.8056447Z 2024-02-13 03:13:06,803 INFO: Detecting page elements ...
2024-02-13T03:13:07.5115041Z 2024-02-13 03:13:07,497 INFO: Detecting page elements ...
2024-02-13T03:13:12.0908634Z 2024-02-13 03:13:12,081 INFO: Processing entire page OCR with tesseract...
2024-02-13T03:13:13.3332086Z 2024-02-13 03:13:13,323 INFO: Detecting page elements ...
2024-02-13T03:13:16.9851600Z 2024-02-13 03:13:16,979 INFO: Detecting page elements ...
2024-02-13T03:13:17.7084602Z 2024-02-13 03:13:17,706 INFO: Processing entire page OCR with tesseract...
2024-02-13T03:13:21.0221346Z 2024-02-13 03:13:21,018 INFO: Detecting page elements ...
2024-02-13T03:13:21.8595109Z 2024-02-13 03:13:21,857 INFO: Processing entire page OCR with tesseract...
2024-02-13T03:13:22.0843506Z 2024-02-13 03:13:22,083 INFO: Processing entire page OCR with tesseract...
2024-02-13T03:13:24.1381747Z 2024-02-13 03:13:24,137 INFO: Detecting page elements ...
2024-02-13T03:13:26.8770925Z 2024-02-13 03:13:26,876 INFO: Processing entire page OCR with tesseract...
2024-02-13T03:13:28.9028391Z 2024-02-13 03:13:28,892 INFO: Detecting page elements ...
2024-02-13T03:13:31.8635158Z 2024-02-13 03:13:31,860 INFO: Detecting page elements ...
### Description
I am trying to index a folder ("data") containing PDF files into a Chroma vector store. If I do not pass the `loader_kwargs` parameter, indexing is quick, but retrieval accuracy from the vector store is poor. So I want to use `loader_kwargs` to pass parameters to the default `loader_cls` (`UnstructuredFileLoader`). With those kwargs, however, indexing takes far longer than expected: 4 to 5 PDFs take more than two hours, and when I uploaded a folder of 130 PDFs and checked after 12 hours, indexing had still not finished and the log appeared frozen.
Can you let me know a possible solution? I will be thankful to you.
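For reference, a minimal per-file timing check (sketch; `sample.pdf` stands in for any one of the uploaded files) that should show whether the hi_res partitioning itself dominates the time, before multithreading or Chroma come into play:
```python
import time
from langchain_community.document_loaders import UnstructuredFileLoader

start = time.time()
docs = UnstructuredFileLoader(
    "sample.pdf", mode="paged", strategy="hi_res", hi_res_model_name="detectron2_onnx"
).load()
print(f"{len(docs)} pages loaded in {time.time() - start:.1f}s")
```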
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22621
> Python Version: 3.9.5 (default, May 18 2021, 14:42:02) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.10
> langchain: 0.0.242
> langchain_community: 0.0.12
> langserve: Not Found | OCR with Tesseract takes more than expected time while indexing a pdf file to chromadb using UnstructuredFileLoader | https://api.github.com/repos/langchain-ai/langchain/issues/17444/comments | 1 | 2024-02-13T06:20:29Z | 2024-05-21T16:09:15Z | https://github.com/langchain-ai/langchain/issues/17444 | 2,131,545,246 | 17,444 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```json
{"page_content": "hallo test", "metadata": {"source": "mysource", "seq_num": 244 }, "type": "Document"}
```
```py
chunked_documents = load_docs_from_jsonl("test.jsonl")
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
collection_name = "test_embeddings_cs" + str(CHUNK_SIZE)
db = PGVector.from_documents(
embedding=embeddings,
documents=chunked_documents,
collection_name=collection_name,
connection_string=POSTGRES_URL,
pre_delete_collection=False,
)
```
```
DETAIL: Missing "]" after array dimensions.
[SQL: INSERT INTO langchain_pg_embedding (collection_id, embedding, document, cmetadata, custom_id, uuid) VALUES (%(collection_id)s::UUID, %(embedding)s, %(document)s, %(cmetadata)s, %(custom_id)s, %(uuid)s::UUID)]
[parameters: {'collection_id': UUID('f35bcca7-c797-4e57-a31a-8d82da54542b'), 'embedding': '[-0.02666512080169465,0.0014875975490676566,0.003841705348754905,-0.025984670417264277,-0.01976139198263224,-0.005496757382752562,-0.0229651753288143 ... (32576 characters truncated) ... .016387495109019937,-0.02728886667698162,-0.011532203877076985,-0.013332560400847324,-0.006326055473869786,0.015182533137287487,0.012971071250534016]', 'document': 'hallo test', 'cmetadata': '{"source": "mysource", "seq_num": 244}', 'custom_id': 'ab97f376-c9cf-11ee-88a9-1eae5cf5b7d5', 'uuid': UUID('f21bf17b-07b2-40f2-a33f-16c50f682e7f')}]
(Background on this error at: https://sqlalche.me/e/20/9h9h)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I have a very simple script that creates embeddings from my previously prepared documents with PGVector and tries to push them to the PostgreSQL database on Vercel. Unfortunately it fails with `DETAIL: Missing "]" after array dimensions.`
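Two things I still plan to rule out on the Vercel side, since the INSERT is rejected by Postgres itself: whether the `vector` extension is actually enabled, and whether the `embedding` column really has the `vector` type (rather than having been created as a plain array/text column on an earlier run). A sketch of those checks, reusing the same `POSTGRES_URL`:
```python
from sqlalchemy import create_engine, text

engine = create_engine(POSTGRES_URL)  # same connection string as above
with engine.connect() as conn:
    conn.execute(text("CREATE EXTENSION IF NOT EXISTS vector"))
    conn.commit()
    rows = conn.execute(text(
        "SELECT column_name, udt_name FROM information_schema.columns "
        "WHERE table_name = 'langchain_pg_embedding'"
    ))
    print(list(rows))  # the embedding column should report udt_name = 'vector'
```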
### System Info
langchain==0.1.6
langchain-community==0.0.19
langchain-core==0.1.22
langchain-openai==0.0.5
mac
Python 3.12.1 | Error when pushing PGVector embeddings to Vercel | https://api.github.com/repos/langchain-ai/langchain/issues/17428/comments | 4 | 2024-02-12T18:01:04Z | 2024-07-03T16:34:06Z | https://github.com/langchain-ai/langchain/issues/17428 | 2,130,669,648 | 17,428 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
below is the code
```
# Categorize documents
documents_dict = {
'amd': DirectoryLoader('/content/amd', glob="*.pdf", loader_cls=PyPDFLoader).load(),
'engie': DirectoryLoader('/content/engie', glob="*.pdf", loader_cls=PyPDFLoader).load(),
# Add more categories as needed
}
# Create a vector database and a retriever for each category
vector_stores = {}
retrievers = {}
class MyTextSplitter(RecursiveCharacterTextSplitter):
def split_documents(self, documents):
chunks = super().split_documents(documents)
for document, chunk in zip(documents, chunks):
chunk.metadata['source'] = document.metadata['source']
return chunks
text_splitter = MyTextSplitter(chunk_size=1000,
chunk_overlap=100)
for category, docs in documents_dict.items():
texts = text_splitter.split_documents(docs)
vector_store = FAISS.from_documents(texts, embeddings)
retriever = vector_store.as_retriever(search_kwargs={"k": 5})
vector_stores[category] = vector_store
retrievers[category] = retriever
# Answer a question related to 'Cricket'
category = 'engie'
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retrievers[category],
return_source_documents=True)
# Format the prompt using the template
context = ""
# question = "what's the final provision of dhl?"
question = "what for it strives?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
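What I was imagining is a single store where each chunk carries a `category` in its metadata, and one retriever that filters on it, roughly like the sketch below (it reuses `documents_dict`, `text_splitter`, and `embeddings` from the code above, and assumes FAISS's `filter` search kwarg matches on metadata):
```python
all_texts = []
for category, docs in documents_dict.items():
    texts = text_splitter.split_documents(docs)
    for chunk in texts:
        chunk.metadata["category"] = category  # tag each chunk with its category
    all_texts.extend(texts)

vector_store = FAISS.from_documents(all_texts, embeddings)
retriever = vector_store.as_retriever(
    search_kwargs={"k": 5, "filter": {"category": "amd"}}
)
```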
in the above code, if you see even retrievers has been iterated. Is there any way that we can only use only one retriever to retrieve the data from multiple vector stores? As in there should be only one retriever, if category is amd, it should fetch data from amd category vector store and return the answer. Can you help me with this code? | how to use only retriever to retrieve data from multiple knowledge bases? | https://api.github.com/repos/langchain-ai/langchain/issues/17427/comments | 1 | 2024-02-12T17:56:28Z | 2024-02-14T01:47:37Z | https://github.com/langchain-ai/langchain/issues/17427 | 2,130,661,616 | 17,427 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
below is the code
```
# Categorize documents
documents_dict = {
'amd': DirectoryLoader('/content/amd', glob="*.pdf", loader_cls=PyPDFLoader).load(),
'engie': DirectoryLoader('/content/engie', glob="*.pdf", loader_cls=PyPDFLoader).load(),
# Add more categories as needed
}
# Create a vector database and a retriever for each category
vector_stores = {}
retrievers = {}
class MyTextSplitter(RecursiveCharacterTextSplitter):
def split_documents(self, documents):
chunks = super().split_documents(documents)
for document, chunk in zip(documents, chunks):
chunk.metadata['source'] = document.metadata['source']
return chunks
text_splitter = MyTextSplitter(chunk_size=1000,
chunk_overlap=100)
for category, docs in documents_dict.items():
texts = text_splitter.split_documents(docs)
vector_store = Chroma.from_documents(texts, embeddings)
retriever = vector_store.as_retriever(search_kwargs={"k": 5})
vector_stores[category] = vector_store
retrievers[category] = retriever
llm = OpenAI(temperature=0.2)
# Create a retriever for the vector database
document_content_description = "Description of a corporate document outlining human rights commitments and implementation strategies by an organization, including ethical principles, global agreements, and operational procedures."
metadata_field_info = [
{
"name": "document_type",
"description": "The type of document, such as policy statement, modern slavery statement, human rights due diligence manual.",
"type": "string",
},
{
"name": "company_name",
"description": "The name of the company that the document pertains to.",
"type": "string",
},
{
"name": "effective_date",
"description": "The date when the document or policy became effective.",
"type": "date",
},
]
category = 'amd'
retriever = SelfQueryRetriever.from_llm(
llm,
vector_stores[category],
document_content_description,
metadata_field_info,
use_original_query=False,
verbose=True
)
# print(retriever)
# Answer a question related to 'Cricket'
# category = 'amd'
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# Format the prompt using the template
context = ""
# question = "what's the final provision of dhl?"
question = "what for it strives?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
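(One detail I noticed while re-reading this: every iteration calls `Chroma.from_documents(texts, embeddings)` with the default collection, so I am no longer sure the two categories even end up in separate stores. A variant I plan to try, giving each category its own collection name:)
```python
vector_store = Chroma.from_documents(
    texts, embeddings, collection_name=f"{category}_docs"
)
```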
above is the code which should return output from amd document, but it is returning output from engie document. I feel because while using the SelfQueryRetriever, we're not defining retrievers[category] inside the SelfQueryRetriever, maybe that's one of the reasons. And we must utilize both retrievers[category] and vector_stores[category] to get the output from the category we have mentioned. But SelfQueryRetriever won't accept retriever parameter in it. Is there any alternative for this? Can you have a look into it and fix the code? | not able to return correct answer from selected vector category and no option to add retriever category in SelfQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/17426/comments | 3 | 2024-02-12T17:49:38Z | 2024-02-14T03:34:59Z | https://github.com/langchain-ai/langchain/issues/17426 | 2,130,650,621 | 17,426 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
below's the code
```
# Categorize documents
documents_dict = {
'amd': DirectoryLoader('/content/amd', glob="*.pdf", loader_cls=PyPDFLoader).load(),
'engie': DirectoryLoader('/content/engie', glob="*.pdf", loader_cls=PyPDFLoader).load(),
# Add more categories as needed
}
# Create a vector database and a retriever for each category
vector_stores = {}
retrievers = {}
class MyTextSplitter(RecursiveCharacterTextSplitter):
def split_documents(self, documents):
chunks = super().split_documents(documents)
for document, chunk in zip(documents, chunks):
chunk.metadata['source'] = document.metadata['source']
return chunks
text_splitter = MyTextSplitter(chunk_size=1000,
chunk_overlap=100)
for category, docs in documents_dict.items():
texts = text_splitter.split_documents(docs)
vector_store = Chroma.from_documents(texts, embeddings)
retriever = vector_store.as_retriever(search_kwargs={"k": 5})
vector_stores[category] = vector_store
retrievers[category] = retriever
llm = OpenAI(temperature=0.2)
# Create a retriever for the vector database
document_content_description = "Description of a corporate document outlining human rights commitments and implementation strategies by an organization, including ethical principles, global agreements, and operational procedures."
metadata_field_info = [
{
"name": "document_type",
"description": "The type of document, such as policy statement, modern slavery statement, human rights due diligence manual.",
"type": "string",
},
{
"name": "company_name",
"description": "The name of the company that the document pertains to.",
"type": "string",
},
{
"name": "effective_date",
"description": "The date when the document or policy became effective.",
"type": "date",
},
]
category = 'amd'
retriever = SelfQueryRetriever.from_llm(
llm,
vector_stores[category],
document_content_description,
metadata_field_info,
use_original_query=False,
verbose=True
)
# print(retriever)
# Answer a question related to 'Cricket'
# category = 'amd'
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# Format the prompt using the template
context = ""
# question = "what's the final provision of dhl?"
question = "what for it strives?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
above is the code which should return output from amd document, but it is returning output from engie document. I feel because while using the SelfQueryRetriever, we're not defining retrievers[category] anywhere, maybe that's one of the reasons. Can you have a look into it and fix the code? | not able to return answer from selected vector category and no option to add retriever category in SelfQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/17424/comments | 1 | 2024-02-12T16:34:16Z | 2024-02-14T03:34:58Z | https://github.com/langchain-ai/langchain/issues/17424 | 2,130,511,421 | 17,424 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain_openai import AzureChatOpenAI
### Error Message and Stack Trace (if applicable)
The API call is not authorized when authenticating with a bearer token instead of an API key.
### Description
```bash
az login
az account get-access-token --resource https://cognitiveservices.azure.com --query "accessToken" -o tsv
```
ref link: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/managed-identity
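What I expected to work, based on the azure-identity documentation, is passing a token provider instead of an API key. A sketch (the endpoint, deployment, and API version values are placeholders, not real ones):
```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from langchain_openai import AzureChatOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

llm = AzureChatOpenAI(
    azure_endpoint="https://<my-resource>.openai.azure.com/",   # placeholder
    azure_deployment="<my-deployment>",                          # placeholder
    api_version="2023-07-01-preview",                            # placeholder
    azure_ad_token_provider=token_provider,                      # bearer auth instead of api_key
)
```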
### System Info
from langchain_openai import AzureChatOpenAI
Windows
python 3.11 | Bearer authentication of Azure OpenAI instead API_KEY | https://api.github.com/repos/langchain-ai/langchain/issues/17422/comments | 1 | 2024-02-12T16:03:43Z | 2024-05-20T16:09:29Z | https://github.com/langchain-ai/langchain/issues/17422 | 2,130,446,715 | 17,422 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
below's the code
```
# Categorize documents
documents_dict = {
'amd': DirectoryLoader('/content/amd', glob="*.pdf", loader_cls=PyPDFLoader).load(),
'engie': DirectoryLoader('/content/engie', glob="*.pdf", loader_cls=PyPDFLoader).load(),
# Add more categories as needed
}
# Create a vector database and a retriever for each category
vector_stores = {}
retrievers = {}
class MyTextSplitter(RecursiveCharacterTextSplitter):
def split_documents(self, documents):
chunks = super().split_documents(documents)
for document, chunk in zip(documents, chunks):
chunk.metadata['source'] = document.metadata['source']
return chunks
text_splitter = MyTextSplitter(chunk_size=1000,
chunk_overlap=100)
for category, docs in documents_dict.items():
texts = text_splitter.split_documents(docs)
vector_store = Chroma.from_documents(texts, embeddings)
retriever = vector_store.as_retriever(search_kwargs={"k": 5})
vector_stores[category] = vector_store
retrievers[category] = retriever
llm = OpenAI(temperature=0.2)
# Create a retriever for the vector database
document_content_description = "Description of a corporate document outlining human rights commitments and implementation strategies by an organization, including ethical principles, global agreements, and operational procedures."
metadata_field_info = [
{
"name": "document_type",
"description": "The type of document, such as policy statement, modern slavery statement, human rights due diligence manual.",
"type": "string",
},
{
"name": "company_name",
"description": "The name of the company that the document pertains to.",
"type": "string",
},
{
"name": "effective_date",
"description": "The date when the document or policy became effective.",
"type": "date",
},
]
category = 'amd'
retriever = SelfQueryRetriever.from_llm(
llm,
vector_stores[category],
document_content_description,
metadata_field_info,
use_original_query=False,
verbose=True
)
# Answer a question related to 'Cricket'
# category = 'amd'
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retrievers[category],
return_source_documents=True)
# Format the prompt using the template
context = ""
# question = "what's the final provision of dhl?"
question = "what for it strives?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
above is the code which should output from amd document, but it is returning output from engie document. Can you have a look into it and fix the code? | unable to return answer from selected vector category | https://api.github.com/repos/langchain-ai/langchain/issues/17421/comments | 1 | 2024-02-12T16:00:09Z | 2024-02-14T03:34:58Z | https://github.com/langchain-ai/langchain/issues/17421 | 2,130,439,746 | 17,421 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code produces the pydantic exception: field "config" not yet prepared so type is still a ForwardRef):
```python
from langchain.chat_models import BedrockChat
from botocore.config import Config
model = BedrockChat(
region_name="us-east-1",
model_id="anthropic.claude-v2:1",
model_kwargs=dict(temperature=0, max_tokens_to_sample=10000),
verbose=False,
streaming=False,
config=Config(
retries=dict(max_attempts=10, mode='adaptive', total_max_attempts=100)
)
)
```
### Error Message and Stack Trace (if applicable)
`pydantic.v1.errors.ConfigError: field "config" not yet prepared so type is still a ForwardRef, you might need to call BedrockChat.update_forward_refs().`
The stack trace:
```
Traceback (most recent call last):
File "/Users/atigarev/PycharmProjects/isolated_lc/test.py", line 5, in <module>
model = BedrockChat(
File "/Users/atigarev/PycharmProjects/isolated_lc/.venv/lib/python3.10/site-packages/langchain_core/load/serializable.py", line 107, in __init__
super().__init__(**kwargs)
File "/Users/atigarev/PycharmProjects/isolated_lc/.venv/lib/python3.10/site-packages/pydantic/v1/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
File "/Users/atigarev/PycharmProjects/isolated_lc/.venv/lib/python3.10/site-packages/pydantic/v1/main.py", line 1074, in validate_model
v_, errors_ = field.validate(value, values, loc=field.alias, cls=cls_)
File "/Users/atigarev/PycharmProjects/isolated_lc/.venv/lib/python3.10/site-packages/pydantic/v1/fields.py", line 857, in validate
raise ConfigError(
pydantic.v1.errors.ConfigError: field "config" not yet prepared so type is still a ForwardRef, you might need to call BedrockChat.update_forward_refs().
```
### Description
Passing botocore.config.Config to BedrockChat produces the pydantic exception: field "config" not yet prepared so type is still a ForwardRef) (stack trace provided in its own field).
Adding the following code at the bottom of `langchain-community/chat_models/bedrock.py` **fixes it**:
```python
from botocore.config import Config
BedrockChat.update_forward_refs()
```
Just doing `BedrockChat.update_forward_refs()` in my code causes error:
```
Traceback (most recent call last):
File "/Users/atigarev/PycharmProjects/isolated_lc/test.py", line 4, in <module>
BedrockChat.update_forward_refs()
File "/Users/atigarev/PycharmProjects/isolated_lc/.venv/lib/python3.10/site-packages/pydantic/v1/main.py", line 814, in update_forward_refs
update_model_forward_refs(cls, cls.__fields__.values(), cls.__config__.json_encoders, localns)
File "/Users/atigarev/PycharmProjects/isolated_lc/.venv/lib/python3.10/site-packages/pydantic/v1/typing.py", line 554, in update_model_forward_refs
update_field_forward_refs(f, globalns=globalns, localns=localns)
File "/Users/atigarev/PycharmProjects/isolated_lc/.venv/lib/python3.10/site-packages/pydantic/v1/typing.py", line 520, in update_field_forward_refs
field.type_ = evaluate_forwardref(field.type_, globalns, localns or None)
File "/Users/atigarev/PycharmProjects/isolated_lc/.venv/lib/python3.10/site-packages/pydantic/v1/typing.py", line 66, in evaluate_forwardref
return cast(Any, type_)._evaluate(globalns, localns, set())
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/typing.py", line 694, in _evaluate
eval(self.__forward_code__, globalns, localns),
File "<string>", line 1, in <module>
NameError: name 'Config' is not defined
```
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 20.6.0: Thu Mar 9 20:39:26 PST 2023; root:xnu-7195.141.49.700.6~1/RELEASE_X86_64
> Python Version: 3.10.5 (v3.10.5:f377153967, Jun 6 2022, 12:36:10) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.1.22
> langchain: 0.1.6
> langchain_community: 0.0.19
> langsmith: 0.0.87
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| Can't pass botocore.config.Config to BedrockChat (pydantic error: field "config" not yet prepared so type is still a ForwardRef) | https://api.github.com/repos/langchain-ai/langchain/issues/17420/comments | 1 | 2024-02-12T15:56:47Z | 2024-05-20T16:09:25Z | https://github.com/langchain-ai/langchain/issues/17420 | 2,130,433,068 | 17,420 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
below's the code
```
# Categorize documents
documents_dict = {
'amd': DirectoryLoader('/content/amd', glob="*.pdf", loader_cls=PyPDFLoader).load(),
'engie': DirectoryLoader('/content/engie', glob="*.pdf", loader_cls=PyPDFLoader).load(),
# Add more categories as needed
}
# Create a vector database and a retriever for each category
vector_stores = {}
retrievers = {}
class MyTextSplitter(RecursiveCharacterTextSplitter):
def split_documents(self, documents):
chunks = super().split_documents(documents)
for document, chunk in zip(documents, chunks):
chunk.metadata['source'] = document.metadata['source']
return chunks
text_splitter = MyTextSplitter(chunk_size=1000,
chunk_overlap=100)
for category, docs in documents_dict.items():
texts = text_splitter.split_documents(docs)
vector_store = FAISS.from_documents(texts, embeddings)
retriever = vector_store.as_retriever(search_kwargs={"k": 5})
vector_stores[category] = vector_store
retrievers[category] = retriever
# Answer a question related to 'Cricket'
category = 'amd'
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retrievers[category],
return_source_documents=True)
# Format the prompt using the template
context = ""
# question = "what's the final provision of dhl?"
question = "what for it strives?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
faiss_instance = vector_stores['amd'] # Assuming this is your FAISS instance
```
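What I have in mind is roughly the following (a sketch only; I am assuming `index_to_docstore_id` and `docstore.search` are usable for this, which I have not confirmed):
```python
faiss_instance = vector_stores['amd']

# Walk the FAISS index -> docstore-id mapping and look up each stored document
for idx, doc_id in faiss_instance.index_to_docstore_id.items():
    document = faiss_instance.docstore.search(doc_id)
    print(f"index: {idx}, id: {doc_id}, metadata: {document.metadata}")
```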
Now I want to extract the metadata and see which documents are present in `faiss_instance` (roughly along the lines of the sketch above). Can you help me with the code? | not able to retrieve metadata from FAISS instance | https://api.github.com/repos/langchain-ai/langchain/issues/17419/comments | 5 | 2024-02-12T15:34:38Z | 2024-02-14T03:34:58Z | https://github.com/langchain-ai/langchain/issues/17419 | 2,130,386,858 | 17,419 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
below's the code
```
faiss_instance = vector_stores['amd'] # Assuming this is your FAISS instance, where amd is a category
for doc_id, document in faiss_instance.docstore.items():
print(f"ID: {doc_id}, Metadata: {document.metadata}")
```
and the error is below
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-36-8d0aa86de55f>](https://localhost:8080/#) in <cell line: 2>()
1 faiss_instance = vector_stores['amd'] # Assuming this is your FAISS instance
----> 2 for doc_id, document in faiss_instance.docstore.items():
3 print(f"ID: {doc_id}, Metadata: {document.metadata}")
AttributeError: 'InMemoryDocstore' object has no attribute 'items'
```
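A variation that might work (a sketch; it relies on the private `_dict` attribute of `InMemoryDocstore`, which is an implementation detail rather than a public API):
```python
faiss_instance = vector_stores['amd']

# InMemoryDocstore keeps its documents in a private `_dict` mapping
for doc_id, document in faiss_instance.docstore._dict.items():
    print(f"ID: {doc_id}, Metadata: {document.metadata}")
```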
How can I fix the above error and return the metadata stored in FAISS for that particular category? | unable to retrieve the metadata of FAISS vector db | https://api.github.com/repos/langchain-ai/langchain/issues/17417/comments | 1 | 2024-02-12T15:21:06Z | 2024-02-14T03:34:57Z | https://github.com/langchain-ai/langchain/issues/17417 | 2,130,359,673 | 17,417 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
below's the code
```
# Categorize documents
documents_dict = {
'cricket': DirectoryLoader('/content/cricket', glob="*.pdf", loader_cls=PyPDFLoader).load(),
'fifa': DirectoryLoader('/content/fifa', glob="*.pdf", loader_cls=PyPDFLoader).load(),
# Add more categories as needed
}
# Create a vector database and a retriever for each category
vector_stores = {}
retrievers = {}
class MyTextSplitter(RecursiveCharacterTextSplitter):
def split_documents(self, documents):
chunks = super().split_documents(documents)
for document, chunk in zip(documents, chunks):
chunk.metadata['source'] = document.metadata['source']
return chunks
text_splitter = MyTextSplitter(chunk_size=1000,
chunk_overlap=100)
for category, docs in documents_dict.items():
texts = text_splitter.split_documents(docs)
vector_store = FAISS.from_documents(texts, embeddings)
retriever = vector_store.as_retriever(search_kwargs={"k": 5})
vector_stores[category] = vector_store
retrievers[category] = retriever
# Answer a question related to 'Cricket'
category = 'cricket'
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retrievers[category],
return_source_documents=True)
# Format the prompt using the template
context = ""
# question = "what's the final provision of dhl?"
question = "what for it strives?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
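What I would like to end up with is roughly this (a sketch, assuming FAISS's `save_local` / `load_local` helpers and that `FAISS` is imported from `langchain_community.vectorstores`; the folder name is made up):
```python
import os
from langchain_community.vectorstores import FAISS

INDEX_ROOT = "vector_indexes"  # hypothetical local folder for the saved indexes

# Build once, then persist each category's store to disk
for category, store in vector_stores.items():
    store.save_local(os.path.join(INDEX_ROOT, category))

# On later runs, load the saved indexes instead of re-embedding everything
loaded_stores = {
    category: FAISS.load_local(os.path.join(INDEX_ROOT, category), embeddings)
    for category in documents_dict
}
```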
In the above code the vector databases are only kept in memory, so every run re-embeds all the documents from scratch. It would be better to build them once, save them locally, and simply load them on later runs (roughly what the sketch above does), correct? If so, could you also provide the code for that approach? | How to load data into vector db and use it retrieval and QA? | https://api.github.com/repos/langchain-ai/langchain/issues/17412/comments | 5 | 2024-02-12T13:58:31Z | 2024-02-14T03:34:57Z | https://github.com/langchain-ai/langchain/issues/17412 | 2,130,198,517 | 17,412 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
below's the code
```
# loader = TextLoader('single_text_file.txt')
loader = DirectoryLoader(f'/content/files', glob="./*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
unique_sources = set()
for doc in documents:
source = doc.metadata['source']
unique_sources.add(source)
num_unique_sources = len(unique_sources)
# print("Number of unique sources:", num_unique_sources)
# print()
# # print("Unique source names:")
# for source_name in unique_sources:
# print(source_name)
class MyTextSplitter(RecursiveCharacterTextSplitter):
def split_documents(self, documents):
chunks = super().split_documents(documents)
for document, chunk in zip(documents, chunks):
chunk.metadata['source'] = document.metadata['source']
return chunks
text_splitter = MyTextSplitter(chunk_size=1000,
chunk_overlap=100)
texts = text_splitter.split_documents(documents)
# Create a vector database for the document
# vectorStore = FAISS.from_documents(texts, instructor_embeddings)
vectorStore = FAISS.from_documents(texts, embeddings)
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# # Use the chain to answer a questions
# llm_response = qa_chain(query)
# process_llm_response(llm_response)
# Format the prompt using the template
context = ""
question = "can you return the objective"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
The code above creates a single vector database for all the files in the files folder. Now I'm interested in creating one vector database per topic (for example, a Cricket database containing only cricket files, a Football database containing only football files, and so on) and in extracting the details or metadata of each of these vector databases. Could you help with the code for this?
The answer you gave is below
Based on your request, you want to create multiple vector databases for different categories of files and extract their metadata. Here's how you can modify your code to achieve this:
First, you need to categorize your documents based on their content. You can create a dictionary where the keys are the categories (like 'Cricket', 'Football', etc.) and the values are lists of documents related to that category.
```
# Categorize documents
documents_dict = {
'Cricket': [],
'Football': [],
# Add more categories as needed
}
for doc in documents:
source = doc.metadata['source']
documents_dict[source].append(doc)
```
Next, for each category, you can create a vector database and a retriever:
```
# Create a vector database and a retriever for each category
vector_stores = {}
retrievers = {}
for category, docs in documents_dict.items():
texts = text_splitter.split_documents(docs)
vector_store = FAISS.from_documents(texts, embeddings)
retriever = vector_store.as_retriever(search_kwargs={"k": 5})
vector_stores[category] = vector_store
retrievers[category] = retriever
```
Now, vector_stores is a dictionary where the keys are the categories and the values are the corresponding vector databases. Similarly, retrievers is a dictionary where the keys are the categories and the values are the corresponding retrievers.
To answer a question related to a specific category, you can use the corresponding retriever:
```
# Answer a question related to 'Cricket'
category = 'Cricket'
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retrievers[category],
return_source_documents=True)
```
```
# Use the chain to answer a questions
llm_response = qa_chain(query)
process_llm_response(llm_response)
```
To extract the metadata of a vector database, you can use the get_all_metadatas method:
```
# Extract the metadata of the 'Cricket' vector database
metadata = vector_stores['Cricket'].get_all_metadatas()
```
But I'm a bit confused. You categorized the documents and said to list the documents belonging to each category; is the snippet below how I should fill it in?
```
# Categorize documents
documents_dict = {
'Cricket': ['cricket1.pdf', 'cricket2.pdf'],
'Football': ['fifa1.pdf', 'fifa2.pdf'],
# Add more categories as needed
}
for doc in documents:
source = doc.metadata['source']
documents_dict[source].append(doc)
```
Also, the loop iterates over `documents`, which holds the data from every PDF file; how would it know which category each document belongs to? (My guess at what is actually needed is sketched below.) Could you please share the complete code so it is easier to follow? I would also like to save the vector databases locally instead of keeping them in memory.
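To make my confusion concrete, here is my guess at what would actually be needed (a hypothetical sketch; the file paths and category names are made up):
```python
# Hypothetical mapping from source file path to category, since
# doc.metadata['source'] is a file path rather than a category name.
category_by_source = {
    '/content/files/cricket1.pdf': 'Cricket',
    '/content/files/fifa1.pdf': 'Football',
}

documents_dict = {'Cricket': [], 'Football': []}
for doc in documents:
    category = category_by_source.get(doc.metadata['source'])
    if category is not None:
        documents_dict[category].append(doc)
```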
### Idea or request for content:
_No response_ | How to create multiple vectorDB's for multiple files, then extract the details/metadata of it in FAISS and Chroma? | https://api.github.com/repos/langchain-ai/langchain/issues/17410/comments | 5 | 2024-02-12T12:59:45Z | 2024-02-14T03:34:57Z | https://github.com/langchain-ai/langchain/issues/17410 | 2,130,090,684 | 17,410 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_openai import OpenAIEmbeddings
embedding_model = os.environ.get("EMBEDDING_MODEL")
print(embedding_model)
embedding_dimension = os.environ.get("EMBEDDING_DIMENSION")
print(embedding_dimension)
# the langchain way
embeddings_model_lg = OpenAIEmbeddings(api_key=OPENAI_API_KEY, model=embedding_model, deployment=embedding_model, dimensions=int(embedding_dimension))
vectorstore = SupabaseVectorStore(
client=supabase,
embedding=embeddings_model_lg,
table_name="documents",
query_name="match_documents",
)
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
verbose=True
)
# %%
# specify a relevant query
query = "How does tree help the boy make the crown? return results with relevance scores"
embedded_query = embeddings_model_lg.embed_query(query)
response = retriever.get_relevant_documents(query)
```
and in my .env
```bash
EMBEDDING_DIMENSION=256
# edit this based on your model preference, e.g. text-embedding-3-small, text-embedding-ada-002
EMBEDDING_MODEL=text-embedding-3-large
```
### Error Message and Stack Trace (if applicable)
```bash
2024-02-12 21:49:08,618:WARNING - Warning: model not found. Using cl100k_base encoding.
2024-02-12 21:49:09,055:INFO - HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
2024-02-12 21:49:10,285:INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2024-02-12 21:49:10,295:INFO - Generated Query: query='tree help boy crown' filter=None limit=None
2024-02-12 21:49:10,296:WARNING - Warning: model not found. Using cl100k_base encoding.
2024-02-12 21:49:10,584:INFO - HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
2024-02-12 21:49:11,104:INFO - HTTP Request: POST https://lhbeoisvtsilsquybifs.supabase.co/rest/v1/rpc/match_documents?limit=4 "HTTP/1.1 200 OK"
```
It's only a warning, but I am not sure the model I specified is actually being used.
### Description
I want it to use the model I designated. Can I change the default in base.py?
```python
.
.
.
client: Any = Field(default=None, exclude=True) #: :meta private:
async_client: Any = Field(default=None, exclude=True) #: :meta private:
model: str = "text-embedding-ada-002"
dimensions: Optional[int] = None
"""The number of dimensions the resulting o...
```
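For what it's worth, a quick check I can run (a sketch; it only shows what the client is configured with, not what was actually sent over the wire):
```python
print(embeddings_model_lg.model)       # expected: text-embedding-3-large
print(embeddings_model_lg.dimensions)  # expected: 256
print(len(embeddings_model_lg.embed_query("test")))  # should equal the requested dimensions
```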
I can't quite believe it, but the results are actually correct; then again, this is a tiny children's book, so it could have been a fluke.
```bash
[Document(page_content='Once there was a tree.... and she loved a little boy. And everyday the boy would come and he would gather her leaves and make them into crowns and play king of the forest. He would climb up her trunk and swing from her branches and eat apples. And they would play hide-and-go-seek.'), Document(page_content='And the tree was happy. But time went by. And the boy grew older. And the tree was often alone. Then one day the boy came to the tree and the tree said, "Come, Boy, come and climb up my trunk and swing from my branches and eat apples and play in my shade and be happy.'), ...
```
### System Info
```bash
(langchain) nyck33@nyck33-lenovo:/media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial$ pip freeze | grep langchain
langchain==0.1.5
langchain-community==0.0.19
langchain-core==0.1.21
langchain-openai==0.0.5
``` | OpenAIEmbeddings model argument does not work | https://api.github.com/repos/langchain-ai/langchain/issues/17409/comments | 4 | 2024-02-12T12:59:44Z | 2024-04-06T09:39:41Z | https://github.com/langchain-ai/langchain/issues/17409 | 2,130,090,661 | 17,409 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.chains.qa_generation.base import QAGenerationChain
from langchain.evaluation.qa.generate_chain import QAGenerateChain
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
# Initialize the language model and QAGenerationChain
llm = ChatOpenAI(temperature=0.0, model=llm_model) # Replace
embeddings_model_lg = OpenAIEmbeddings(api_key=OPENAI_API_KEY, model=embedding_model, deployment=embedding_model, dimensions=int(embedding_dimension))
.
.
.
### load vectorstore already on Supabase
vectorstore = SupabaseVectorStore(
client=supabase,
embedding=embeddings_model_lg,
table_name="documents",
query_name="match_page_sections",
)
# %%
print(vectorstore.embeddings)
# %% [markdown]
### Create our self-querying retrieval model
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI
# %% define the metaddata in the pgvector table
# want descriptions of the metadata fields
metadata_field_info = []
document_content_description = "Ordered segments of the book 'The Giving Tree' by Shel Silverstein"
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
verbose=True
)
# %%
# specify a relevant query
query = "How does tree help the boy make the crown?"
embedded_query = embeddings_model_lg.embed_query(query)
response = retriever.get_relevant_documents(embedded_query)
# %%
# try using openai embeddings and calling methods on vectorstore
# https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.supabase.SupabaseVectorStore.html#
query_embedding_openai = get_embeddings(query)
results = vectorstore.similarity_search_with_relevance_scores(query, k=5)
```
both of those tries at the end throw errors like below
### Error Message and Stack Trace (if applicable)
```bash
{
"name": "APIError",
"message": "{'code': 'PGRST202', 'details': 'Searched for the function public.match_page_sections with parameter query_embedding or with a single unnamed json/jsonb parameter, but no matches were found in the schema cache.', 'hint': None, 'message': 'Could not find the function public.match_page_sections(query_embedding) in the schema cache'}",
"stack": "---------------------------------------------------------------------------
APIError Traceback (most recent call last)
File /media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial/supabase/giving_tree_supabase_query.py:5
3 query = \"How does tree help the boy make the crown?\"
4 embedded_query = embeddings_model_lg.embed_query(query)
----> 5 response = retriever.get_relevant_documents(embedded_query)
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/retrievers.py:224, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
222 except Exception as e:
223 run_manager.on_retriever_error(e)
--> 224 raise e
225 else:
226 run_manager.on_retriever_end(
227 result,
228 )
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/retrievers.py:217, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
215 _kwargs = kwargs if self._expects_other_args else {}
216 if self._new_arg_supported:
--> 217 result = self._get_relevant_documents(
218 query, run_manager=run_manager, **_kwargs
219 )
220 else:
221 result = self._get_relevant_documents(query, **_kwargs)
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain/retrievers/self_query/base.py:174, in SelfQueryRetriever._get_relevant_documents(self, query, run_manager)
172 logger.info(f\"Generated Query: {structured_query}\")
173 new_query, search_kwargs = self._prepare_query(query, structured_query)
--> 174 docs = self._get_docs_with_query(new_query, search_kwargs)
175 return docs
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain/retrievers/self_query/base.py:148, in SelfQueryRetriever._get_docs_with_query(self, query, search_kwargs)
145 def _get_docs_with_query(
146 self, query: str, search_kwargs: Dict[str, Any]
147 ) -> List[Document]:
--> 148 docs = self.vectorstore.search(query, self.search_type, **search_kwargs)
149 return docs
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/vectorstores.py:139, in VectorStore.search(self, query, search_type, **kwargs)
137 \"\"\"Return docs most similar to query using specified search type.\"\"\"
138 if search_type == \"similarity\":
--> 139 return self.similarity_search(query, **kwargs)
140 elif search_type == \"mmr\":
141 return self.max_marginal_relevance_search(query, **kwargs)
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/supabase.py:182, in SupabaseVectorStore.similarity_search(self, query, k, filter, **kwargs)
174 def similarity_search(
175 self,
176 query: str,
(...)
179 **kwargs: Any,
180 ) -> List[Document]:
181 vector = self._embedding.embed_query(query)
--> 182 return self.similarity_search_by_vector(vector, k=k, filter=filter, **kwargs)
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/supabase.py:191, in SupabaseVectorStore.similarity_search_by_vector(self, embedding, k, filter, **kwargs)
184 def similarity_search_by_vector(
185 self,
186 embedding: List[float],
(...)
189 **kwargs: Any,
190 ) -> List[Document]:
--> 191 result = self.similarity_search_by_vector_with_relevance_scores(
192 embedding, k=k, filter=filter, **kwargs
193 )
195 documents = [doc for doc, _ in result]
197 return documents
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/supabase.py:237, in SupabaseVectorStore.similarity_search_by_vector_with_relevance_scores(self, query, k, filter, postgrest_filter, score_threshold)
231 query_builder.params = query_builder.params.set(
232 \"and\", f\"({postgrest_filter})\"
233 )
235 query_builder.params = query_builder.params.set(\"limit\", k)
--> 237 res = query_builder.execute()
239 match_result = [
240 (
241 Document(
(...)
248 if search.get(\"content\")
249 ]
251 if score_threshold is not None:
File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/postgrest/_sync/request_builder.py:119, in SyncSingleRequestBuilder.execute(self)
117 return SingleAPIResponse[_ReturnT].from_http_request_response(r)
118 else:
--> 119 raise APIError(r.json())
120 except ValidationError as e:
121 raise APIError(r.json()) from e
APIError: {'code': 'PGRST202', 'details': 'Searched for the function public.match_page_sections with parameter query_embedding or with a single unnamed json/jsonb parameter, but no matches were found in the schema cache.', 'hint': None, 'message': 'Could not find the function public.match_page_sections(query_embedding) in the schema cache'}"
}
```
and
```bash
---------------------------------------------------------------------------
APIError Traceback (most recent call last)
File /media/nyck33/65DA61B605B0A8C1/projects/langchain-deeplearning-ai-tutorial/supabase/giving_tree_supabase_query.py:7
      1 # %%
      2 # try using openai embeddings and calling methods on vectorstore
      3 # https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.supabase.SupabaseVectorStore.html#
      5 query_embedding_openai = get_embeddings(query)
----> 7 results = vectorstore.similarity_search_with_relevance_scores(query, k=5)

File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/supabase.py:207, in SupabaseVectorStore.similarity_search_with_relevance_scores(self, query, k, filter, **kwargs)
    199 def similarity_search_with_relevance_scores(
    200     self,
    201     query: str,
   (...)
    204     **kwargs: Any,
    205 ) -> List[Tuple[Document, float]]:
    206     vector = self._embedding.embed_query(query)
--> 207     return self.similarity_search_by_vector_with_relevance_scores(
    208         vector, k=k, filter=filter, **kwargs
    209     )

File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/supabase.py:237, in SupabaseVectorStore.similarity_search_by_vector_with_relevance_scores(self, query, k, filter, postgrest_filter, score_threshold)
    231     query_builder.params = query_builder.params.set(
    232         "and", f"({postgrest_filter})"
    233     )
    235 query_builder.params = query_builder.params.set("limit", k)
--> 237 res = query_builder.execute()
    239 match_result = [
    240     (
    241         Document(
   (...)
    248     if search.get("content")
    249 ]
    251 if score_threshold is not None:

File ~/miniconda3/envs/langchain/lib/python3.10/site-packages/postgrest/_sync/request_builder.py:119, in SyncSingleRequestBuilder.execute(self)
    117     return SingleAPIResponse[_ReturnT].from_http_request_response(r)
    118 else:
--> 119     raise APIError(r.json())
    120 except ValidationError as e:
    121     raise APIError(r.json()) from e
APIError: {'code': 'PGRST202', 'details': 'Searched for the function public.match_page_sections with parameter query_embedding or with a single unnamed json/jsonb parameter, but no matches were found in the schema cache.', 'hint': None, 'message': 'Could not find the function public.match_page_sections(query_embedding) in the schema cache'}
```
but it is there:

That function is in the public schema, and my other code for the OpenAI retrieval plugin works; it's their code from their repo and looks like the following:
```
async def _query(self, queries: List[QueryWithEmbedding]) -> List[QueryResult]:
"""
Takes in a list of queries with embeddings and filters and returns a list of query results with matching document chunks and scores.
"""
query_results: List[QueryResult] = []
for query in queries:
# get the top 3 documents with the highest cosine similarity using rpc function in the database called "match_page_sections"
params = {
"in_embedding": query.embedding,
}
if query.top_k:
params["in_match_count"] = query.top_k
if query.filter:
if query.filter.document_id:
params["in_document_id"] = query.filter.document_id
if query.filter.source:
params["in_source"] = query.filter.source.value
if query.filter.source_id:
params["in_source_id"] = query.filter.source_id
if query.filter.author:
params["in_author"] = query.filter.author
if query.filter.start_date:
params["in_start_date"] = datetime.fromtimestamp(
to_unix_timestamp(query.filter.start_date)
)
if query.filter.end_date:
params["in_end_date"] = datetime.fromtimestamp(
to_unix_timestamp(query.filter.end_date)
)
try:
logger.debug(f"RPC params: {params}")
data = await self.client.rpc("match_page_sections", params=params)
results: List[DocumentChunkWithScore] = []
for row in data:
document_chunk = DocumentChunkWithScore(...
```
```
### Description
I want to use `retriever.get_relevant_documents("What are some movies about dinosaurs")` from https://python.langchain.com/docs/integrations/retrievers/self_query/supabase_self_query
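My reading of the error (unverified) is that the LangChain Supabase store calls the RPC with a parameter literally named `query_embedding`, while my `match_page_sections` function expects `in_embedding` and related names, so PostgREST cannot match the signature. A minimal check along those lines:
```python
# Call the RPC the same way LangChain does; if this also raises PGRST202,
# the mismatch is in the function's parameter names, not in LangChain itself.
supabase.rpc(
    "match_page_sections",
    {"query_embedding": embedded_query},
).execute()
```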
### System Info
```bash
(langchain) nyck33@nyck33-lenovo:~$ pip freeze | grep langchain
langchain==0.1.5
langchain-community==0.0.19
langchain-core==0.1.21
langchain-openai==0.0.5
```
on Ubuntu 23.04, conda environment | APIERROR: PGRST202, can't find function on Supabase pgvector database | https://api.github.com/repos/langchain-ai/langchain/issues/17407/comments | 1 | 2024-02-12T12:22:07Z | 2024-05-20T16:09:19Z | https://github.com/langchain-ai/langchain/issues/17407 | 2,130,026,799 | 17,407 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
os.environ['OPENAI_API_KEY'] = openapi_key
# Define connection parameters using constants
from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"
# Create an engine to connect to the SQL database
engine = create_engine(connection_uri)
model_name="gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
PROMPT_SUFFIX = """Only use the following tables:
{table_info}
Previous Conversation:
{history}
Question: {input}"""
_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run,
then look at the results of the query and return the answer.
Given an input question, create a syntactically correct MSSQL query by considering only the matching column names from the question,
then look at the results of the query and return the answer.
If a column name is not present, refrain from writing the SQL query. column like UAN number, PF number are not not present do not consider such columns.
Write the query only for the column names which are present in view.
Execute the query and analyze the results to formulate a response.
Return the answer in sentence form.
Use the following format:
Question: "Question here"
SQLQuery: "SQL Query to run"
SQLResult: "Result of the SQLQuery"
Answer: "Final answer here"
"""
PROMPT = PromptTemplate.from_template(_DEFAULT_TEMPLATE + PROMPT_SUFFIX)
memory = None
# Define a function named chat that takes a question and SQL format indicator as input
def chat1(question):
# global db_chain
global memory
# prompt = """
# Given an input question, create a syntactically correct MSSQL query by considering only the matching column names from the question,
# then look at the results of the query and return the answer.
# If a column name is not present, refrain from writing the SQL query. column like UAN number, PF number are not not present do not consider such columns.
# Write the query only for the column names which are present in view.
# Execute the query and analyze the results to formulate a response.
# Return the answer in sentence form.
# The question: {question}
# """
try:
if memory == None:
memory = ConversationBufferMemory()
db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, memory=memory)
greetings = ["hi", "hello", "hey"]
if any(greeting == question.lower() for greeting in greetings):
print(question)
print("Hello! How can I assist you today?")
return "Hello! How can I assist you today?"
else:
answer = db_chain.run(question)
# answer = db_chain.run(prompt.format(question=question))
# print(memory.load_memory_variables()["history"])
print(memory.load_memory_variables({}))
return answer
except exc.ProgrammingError as e:
# Check for a specific SQL error related to invalid column name
if "Invalid column name" in str(e):
print("Answer: Error Occured while processing the question")
print(str(e))
return "Invalid question. Please check your column names."
else:
print("Error Occured while processing")
print(str(e))
return "Unknown ProgrammingError Occured"
except openai.RateLimitError as e:
print("Error Occured while fetching the answer")
print(str(e))
return "Rate limit exceeded. Please, Mention the Specific Columns you need!"
except openai.BadRequestError as e:
print("Error Occured while fetching the answer")
print(str(e))
return "Context length exceeded: This model's maximum context length is 16385 tokens. Please reduce the length of the messages."
except Exception as e:
print("Error Occured while processing")
print(str(e))
        return "Unknown Error Occured"
```
### Error Message and Stack Trace (if applicable)
So far I have been trying flask-caching as the cache layer; instead, I would like to use an LLM cache. Here is the code below:
```python
app = Flask(__name__)
CORS(app) # Enable CORS if needed
cache = Cache(app, config={'CACHE_TYPE': 'SimpleCache'})
app.secret_key = uuid.uuid4().hex
# previous_question = []
# filename = "details"
csv_file = ""
pdf_file = ""
# This function will be used to get answers from the chatbot
# def get_chatbot_answer(questions):
# return chat(questions) # Call your chat function here
@app.route('/')
def index():
# return {"message": "welcome to home page"}
return render_template('chatbot5.html')
@cache.memoize(timeout=3600)
def store_chat_history(question, answer):
return {'question': question, 'answer': answer, 'timestamp': datetime}
@app.route('/get_previous_questions', methods=['GET'])
def get_previous_question():
previous_questions = cache.get('previous_questions') or []
return jsonify(previous_questions)
@app.route('/get_answer', methods=['GET'])
# @token_required
def generate_answer():
question = request.args.get('questions')
answer = chat1(question)
store_chat_history(question, answer)
previous_questions = cache.get('previous_questions') or []
previous_questions.append({'question' : question, 'answer': answer, 'timestamp': datetime.now()})
cache.set('previous_questions', previous_questions, timeout=3600)
    return {'answer': answer}
```
### Description
1. With flask-caching, a question is only served from the cache when it matches a previous question exactly.
2. With memory-based questions, for example (1) "what is xyz's employee id", (2) "what is their mail id", (3) "what is xyz1's employee id", (4) "what is their mail id", the 4th question is answered with the cached answer for the 2nd question, even though the 4th question refers back to the 3rd.
For these reasons, can I use an LLM cache instead? Would it solve both problems above?
And can you show how to integrate LLM caching into the code above together with conversation memory (something like the sketch below is what I have in mind)?
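What I have in mind for the LLM cache is something like this (a sketch; the exact import paths may differ between versions, and a standard LLM cache matches on the exact prompt, so on its own it would not fix the "similar but different question" problem):
```python
from langchain.globals import set_llm_cache
from langchain_community.cache import SQLiteCache

# Cache LLM calls on disk; identical prompts are answered from the cache
set_llm_cache(SQLiteCache(database_path=".langchain_cache.db"))
```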
### System Info
python: 3.11
langchain: latest | How does conversational buffer memory and llm caching can be used together? | https://api.github.com/repos/langchain-ai/langchain/issues/17402/comments | 13 | 2024-02-12T10:03:33Z | 2024-02-14T01:50:46Z | https://github.com/langchain-ai/langchain/issues/17402 | 2,129,783,031 | 17,402 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
def chat_langchain(new_project_qa, query, not_uuid):
result = new_project_qa.invoke(query)
print(result,"***********************************************")
relevant_document = result['source_documents']
if relevant_document:
source = relevant_document[0].metadata.get('source', '')
# Check if the file extension is ".pdf"
file_extension = os.path.splitext(source)[1]
if file_extension.lower() == ".pdf":
source = os.path.basename(source)
# Retrieve the UserExperience instance using the provided not_uuid
user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
bot_ending = user_experience_inst.bot_ending_msg if user_experience_inst.bot_ending_msg is not None else ""
# Create the list_json dictionary
if bot_ending != '':
list_json = {
'bot_message': result['result'] + '\n\n' + str(bot_ending),
"citation": source
}
else:
list_json = {
'bot_message': result['result'] + str(bot_ending),
"citation": source
}
else:
# Handle the case when relevant_document is empty
list_json = {
'bot_message': result['result'],
'citation': ''
}
# Return the list_json dictionary
return list_json
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The provided code accesses data in a Chroma database, but I need to retrieve the metadata, page content, and source from pgvector.
How can I achieve this? (A sketch of what I expect is below.)
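A sketch of what I expect to be able to do (the `vector_store` here stands in for a PGVector instance, and it assumes the documents were ingested with a `source` key in their metadata):
```python
results = vector_store.similarity_search_with_score(query, k=4)
for doc, score in results:
    print(doc.metadata.get("source"), score)
    print(doc.page_content[:100])
```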
### System Info
I am using pgvector database | How to get source/Metadata in Pgvector? | https://api.github.com/repos/langchain-ai/langchain/issues/17400/comments | 1 | 2024-02-12T09:50:00Z | 2024-02-14T01:48:56Z | https://github.com/langchain-ai/langchain/issues/17400 | 2,129,758,298 | 17,400 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.llms import LlamaCpp
llm = LlamaCpp(
model_path=".../llama-2-13b-chat.Q4_0.gguf",
n_gpu_layers=30,
n_batch=1024,
f16_kv=True,
grammar_path=".../response.gbnf",
)
```
### Error Message and Stack Trace (if applicable)
```
llama_model_loader: loaded meta data with 19 key-value pairs and 363 tensors from /static/llamacpp_model/llama-2-13b-chat.Q4_0.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = LLaMA v2
llama_model_loader: - kv 2: llama.context_length u32 = 4096
llama_model_loader: - kv 3: llama.embedding_length u32 = 5120
llama_model_loader: - kv 4: llama.block_count u32 = 40
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 13824
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 40
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 40
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 18: general.quantization_version u32 = 2
llama_model_loader: - type f32: 81 tensors
llama_model_loader: - type q4_0: 281 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V2
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 5120
llm_load_print_meta: n_head = 40
llm_load_print_meta: n_head_kv = 40
llm_load_print_meta: n_layer = 40
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 5120
llm_load_print_meta: n_embd_v_gqa = 5120
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 13824
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 13B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 13.02 B
llm_load_print_meta: model size = 6.86 GiB (4.53 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.28 MiB
ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 6595.61 MiB, ( 6595.69 / 12288.02)
llm_load_tensors: offloading 30 repeating layers to GPU
llm_load_tensors: offloaded 30/41 layers to GPU
llm_load_tensors: CPU buffer size = 7023.90 MiB
llm_load_tensors: Metal buffer size = 6595.60 MiB
...................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Pro
ggml_metal_init: picking default device: Apple M3 Pro
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '.venv/lib/python3.11/site-packages/llama_cpp/ggml-metal.metal'
ggml_metal_init: GPU name: Apple M3 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple9 (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction support = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 12884.92 MB
llama_kv_cache_init: CPU KV buffer size = 100.00 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 300.00 MiB, ( 6901.56 / 12288.02)
llama_kv_cache_init: Metal KV buffer size = 300.00 MiB
llama_new_context_with_model: KV self size = 400.00 MiB, K (f16): 200.00 MiB, V (f16): 200.00 MiB
llama_new_context_with_model: CPU input buffer size = 11.01 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 0.02 MiB, ( 6901.58 / 12288.02)
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 93.52 MiB, ( 6995.08 / 12288.02)
llama_new_context_with_model: Metal compute buffer size = 93.50 MiB
llama_new_context_with_model: CPU compute buffer size = 81.40 MiB
llama_new_context_with_model: graph splits (measure): 5
AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 |
Model metadata: {'general.quantization_version': '2', 'tokenizer.ggml.unknown_token_id': '0', 'tokenizer.ggml.eos_token_id': '2', 'tokenizer.ggml.bos_token_id': '1', 'tokenizer.ggml.model': 'llama', 'llama.attention.head_count_kv': '40', 'llama.context_length': '4096', 'llama.attention.head_count': '40', 'llama.rope.dimension_count': '128', 'general.file_type': '2', 'llama.feed_forward_length': '13824', 'llama.embedding_length': '5120', 'llama.block_count': '40', 'general.architecture': 'llama', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'general.name': 'LLaMA v2'}
from_string grammar:
space ::= space_1
space_1 ::= [ ] |
boolean ::= boolean_3 space
boolean_3 ::= [t] [r] [u] [e] | [f] [a] [l] [s] [e]
string ::= ["] string_7 ["] space
string_5 ::= [^"\] | [\] string_6
string_6 ::= ["\/bfnrt] | [u] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F]
string_7 ::= string_5 string_7 |
root ::= [{] space ["] [f] [a] [v] [o] [r] [a] [b] [l] [e] ["] space [:] space boolean [,] space ["] [n] [a] [m] [e] ["] space [:] space string [}] space
ggml_metal_free: deallocating
Exception ignored in: <function LlamaGrammar.__del__ at 0x1636acfe0>
Traceback (most recent call last):
File ".venv/lib/python3.11/site-packages/llama_cpp/llama_grammar.py", line 50, in __del__
AttributeError: 'NoneType' object has no attribute 'llama_grammar_free'
```
### Description
I'm trying to use a LlamaCpp model through LangChain, providing a grammar path.
The inference process works and the grammar is correctly applied in the output.
When the model is deallocated, `LlamaGrammar.__del__` is called to free the grammar object, but it raises `'NoneType' object has no attribute 'llama_grammar_free'` because the model was already deallocated.
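A workaround I am considering (a sketch, based on the assumption that the error only appears during interpreter shutdown, when module globals have already been cleared, so releasing the model explicitly earlier might avoid it):
```python
import gc

# Drop the model (and its grammar) explicitly before interpreter teardown
del llm
gc.collect()
```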
### System Info
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.17 | LlamaCpp error when a model that was built using a grammar_path is deallocated | https://api.github.com/repos/langchain-ai/langchain/issues/17399/comments | 1 | 2024-02-12T09:43:52Z | 2024-05-20T16:09:14Z | https://github.com/langchain-ai/langchain/issues/17399 | 2,129,747,771 | 17,399 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
def retreival_qa_chain(chroma_db_path):
embedding = OpenAIEmbeddings()
    # NOTE: PGVector.from_documents also expects the documents to embed;
    # shown here as in the original report.
    vectordb = PGVector.from_documents(
        embedding=embedding,
        collection_name=COLLECTION_NAME,
        connection_string=CONNECTION_STRING,
    )
vector_store = PGVector(
connection_string=CONNECTION_STRING,
collection_name=COLLECTION_NAME,
embedding_function=embedding
)
retriever = vector_store.as_retriever()
    qa = RetrievalQA.from_chain_type(
        llm=OpenAI(),
        chain_type="stuff",
        retriever=retriever,
        return_source_documents=True,
    )
    return qa
def create_global_qa_chain():
chroma_db_path = "chroma-databases"
folders = os.listdir(chroma_db_path)
qa_chains = {}
for index, folder in enumerate(folders):
folder_path = f"{chroma_db_path}/{folder}"
project = retreival_qa_chain(folder_path)
qa_chains[folder] = project
return qa_chains
qa_chains = create_global_qa_chain()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
How can I store the QA object returned by `retreival_qa_chain` for question answering with pgvector? With Chroma it is stored automatically by specifying a persist directory.
With Chroma we persist to a directory and then do question answering by calling it like this:
```python
chat_qa = qa_chains[formatted_project_name]
not_uuid = formatted_project_name
query = request.POST.get('message', None)
custom_message = generate_custom_prompt(chat_qa, query, name, not_uuid)
project_instance = ProjectName.objects.get(not_uuid=not_uuid)
# try:
chat_response = chat_langchain(chat_qa, custom_message, not_uuid)
```
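For pgvector, my understanding (unverified) is that persistence lives in Postgres itself, so "loading" a project is just reconnecting by collection name, roughly:
```python
def load_project_qa(collection_name):
    store = PGVector(
        connection_string=CONNECTION_STRING,
        collection_name=collection_name,
        embedding_function=OpenAIEmbeddings(),
    )
    return RetrievalQA.from_chain_type(
        llm=OpenAI(),
        chain_type="stuff",
        retriever=store.as_retriever(),
        return_source_documents=True,
    )

chat_qa = load_project_qa(formatted_project_name)
```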
### System Info
I am using pgvector (Postgres) to store the embeddings | How to store QA object in a folder. | https://api.github.com/repos/langchain-ai/langchain/issues/17394/comments | 1 | 2024-02-12T07:26:58Z | 2024-02-14T01:49:02Z | https://github.com/langchain-ai/langchain/issues/17394 | 2,129,559,523 | 17,394 |
[
"hwchase17",
"langchain"
] | Regarding the discrepancy between the official documentation and my experience, I have checked the versions of langchain and langchain core as suggested:
```python
python -m langchain_core.sys_info
```
The output is as follows:
```python
langchain_core: 0.1.13
langchain: 0.0.340
langchain_community: 0.0.13
langserve: Not Found
```
Additionally, I would like to clarify that the import statement **`from langchain.schema.agent import AgentFinish`** was derived from the demo in the **`OpenAIAssistantRunnable`** class of langchain. I utilized this import statement to resolve the issue of mismatched types encountered previously.
In summary, there appears to be an inconsistency between the package imported in the official documentation example and the package used in the demo of the **`OpenAIAssistantRunnable`** class in langchain.
```python
Example using custom tools and custom execution:
.. code-block:: python
from langchain_experimental.openai_assistant import OpenAIAssistantRunnable
from langchain.agents import AgentExecutor
from langchain.schema.agent import AgentFinish
from langchain.tools import E2BDataAnalysisTool
tools = [E2BDataAnalysisTool(api_key="...")]
agent = OpenAIAssistantRunnable.create_assistant(
name="langchain assistant e2b tool",
instructions="You are a personal math tutor. Write and run code to answer math questions.",
tools=tools,
model="gpt-4-1106-preview",
as_agent=True
)
def execute_agent(agent, tools, input):
tool_map = {tool.name: tool for tool in tools}
response = agent.invoke(input)
while not isinstance(response, AgentFinish):
tool_outputs = []
for action in response:
tool_output = tool_map[action.tool].invoke(action.tool_input)
tool_outputs.append({"output": tool_output, "tool_call_id": action.tool_call_id})
response = agent.invoke(
{
"tool_outputs": tool_outputs,
"run_id": action.run_id,
"thread_id": action.thread_id
}
)
return response
response = execute_agent(agent, tools, {"content": "What's 10 - 4 raised to the 2.7"})
next_response = execute_agent(agent, tools, {"content": "now add 17.241", "thread_id": response.thread_id})
```
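One diagnostic that might narrow this down (hedged; I have not verified whether the two import paths alias the same class in these versions):
```python
from langchain.schema.agent import AgentFinish as AgentFinishSchema
from langchain_core.agents import AgentFinish as AgentFinishCore

# If this prints True, both paths point at the same class and the
# isinstance mismatch must come from somewhere else.
print(AgentFinishSchema is AgentFinishCore)
```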
_Originally posted by @WindChaserInTheSunset in https://github.com/langchain-ai/langchain/issues/17367#issuecomment-1937756296_
| Regarding the discrepancy between the official documentation and my experience, I have checked the versions of langchain and langchain core as suggested: | https://api.github.com/repos/langchain-ai/langchain/issues/17392/comments | 1 | 2024-02-12T07:21:34Z | 2024-05-20T16:09:09Z | https://github.com/langchain-ai/langchain/issues/17392 | 2,129,553,655 | 17,392 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I need to do some additional work based on the primary key field of the returned similar documents, but I see that the `pk` field has been removed from `output_fields`. In the `similarity_search_with_score_by_vector` method, `output_fields` is set from `self.fields`, and `self.fields` does not contain `pk` because it is removed in another method, `_extract_fields`.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
`pk` is the most important field of a DB, so it should be returned in the `Document` object metadata along with the other fields when `similarity_search` or a similar method is called.
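For illustration, this is the kind of access I am after (`vector_db` is assumed to be an already-initialised Milvus vector store, and the query is made up):

```python
docs = vector_db.similarity_search("example query", k=3)
for doc in docs:
    # Today doc.metadata only contains the non-pk fields; I would like the
    # primary key to be returned as well, e.g.:
    print(doc.metadata.get("pk"))  # currently missing
```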
### System Info
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.16
langchain-google-genai==0.0.5
langchain-openai==0.0.2 | Milvus VectorStore not returning primary key field value (PK) in Similarity Search document | https://api.github.com/repos/langchain-ai/langchain/issues/17390/comments | 1 | 2024-02-12T06:00:59Z | 2024-02-27T04:43:59Z | https://github.com/langchain-ai/langchain/issues/17390 | 2,129,473,165 | 17,390 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import asyncio
import os
from typing import Any, Dict, List, Optional, Sequence, Type, Union
from uuid import UUID
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain.callbacks.base import AsyncCallbackHandler
from langchain.callbacks.manager import (
AsyncCallbackManagerForToolRun,
CallbackManagerForToolRun,
)
from langchain.pydantic_v1 import BaseModel, Field
from langchain.schema import AgentAction, AgentFinish
from langchain.tools import BaseTool
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.callbacks import (
AsyncCallbackManagerForToolRun,
CallbackManagerForToolRun,
)
from langchain_core.documents import Document
from langchain_core.messages import BaseMessage, SystemMessage
from langchain_core.outputs import ChatGenerationChunk, GenerationChunk, LLMResult
from langchain_core.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
PromptTemplate,
)
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.tools import BaseTool, ToolException
from langchain_openai import ChatOpenAI
from loguru import logger
from tenacity import RetryCallState
# Simulate a custom tool
class CalculatorInput(BaseModel):
a: int = Field(description="first number")
b: int = Field(description="second number")
# Define async custom tool
class CustomCalculatorTool(BaseTool):
name = "Calculator"
description = "useful for when you need to answer questions about math"
args_schema: Type[BaseModel] = CalculatorInput
handle_tool_error = True
def _run(
self, a: int, b: int, run_manager: Optional[CallbackManagerForToolRun] = None
) -> str:
"""Use the tool."""
raise NotImplementedError("Calculator does not support sync")
async def _arun(
self,
a: int,
b: int,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
) -> str:
"""Use the tool asynchronously."""
if a == 0:
raise ToolException("a cannot be 0")
return a * b
# Custom handler to store data from the agent's execution ==> Want to store all of the data printed to the console when using `set_debug(True)`
class MyCustomAsyncHandler(AsyncCallbackHandler):
def __init__(self):
self.chain_start_data = []
self.chain_end_data = []
self.chain_error_data = []
self.tool_start_data = []
self.tool_end_data = []
self.tool_error_data = []
self.agent_action_data = []
self.agent_finish_data = []
async def on_llm_start(
self,
serialized: Dict[str, Any],
prompts: List[str],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_llm_start: serialized={serialized}, prompts={prompts}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, metadata={metadata}, kwargs={kwargs}"
)
async def on_chat_model_start(
self,
serialized: Dict[str, Any],
messages: List[List[BaseMessage]],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> Any:
raise NotImplementedError(
f"{self.__class__.__name__} does not implement `on_chat_model_start`"
)
# Note: This method intentionally raises NotImplementedError
async def on_llm_new_token(
self,
token: str,
*,
chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_llm_new_token: token={token}, chunk={chunk}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_llm_end(
self,
response: LLMResult,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_llm_end: response={response}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_llm_error(
self,
error: BaseException,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_llm_error: error={error}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_chain_start(
self,
serialized: Dict[str, Any],
inputs: Dict[str, Any],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_chain_start: serialized={serialized}, inputs={inputs}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, metadata={metadata}, kwargs={kwargs}"
)
async def on_chain_end(
self,
outputs: Dict[str, Any],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_chain_end: outputs={outputs}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_chain_error(
self,
error: BaseException,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_chain_error: error={error}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_tool_start(
self,
serialized: Dict[str, Any],
input_str: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
inputs: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_tool_start: serialized={serialized}, input_str={input_str}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, metadata={metadata}, inputs={inputs}, kwargs={kwargs}"
)
async def on_tool_end(
self,
output: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_tool_end: output={output}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_tool_error(
self,
error: BaseException,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_tool_error: error={error}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_text(
self,
text: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_text: text={text}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_retry(
self,
retry_state: RetryCallState,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> Any:
logger.debug(
f"on_retry: retry_state={retry_state}, run_id={run_id}, parent_run_id={parent_run_id}, kwargs={kwargs}"
)
async def on_agent_action(
self,
action: AgentAction,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_agent_action: action={action}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_agent_finish(
self,
finish: AgentFinish,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_agent_finish: finish={finish}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_retriever_start(
self,
serialized: Dict[str, Any],
query: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_retriever_start: serialized={serialized}, query={query}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, metadata={metadata}, kwargs={kwargs}"
)
async def on_retriever_end(
self,
documents: Sequence[Document],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_retriever_end: documents={documents}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
async def on_retriever_error(
self,
error: BaseException,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
logger.debug(
f"on_retriever_error: error={error}, run_id={run_id}, parent_run_id={parent_run_id}, tags={tags}, kwargs={kwargs}"
)
# Methods to retrieve stored data
def get_chain_start_data(self) -> List[Dict]:
return self.chain_start_data
def get_chain_end_data(self) -> List[Dict]:
return self.chain_end_data
def get_chain_error_data(self) -> List[Dict]:
return self.chain_error_data
def get_tool_start_data(self) -> List[Dict]:
return self.tool_start_data
def get_tool_end_data(self) -> List[Dict]:
return self.tool_end_data
def get_tool_error_data(self) -> List[Dict]:
return self.tool_error_data
def get_agent_action_data(self) -> List[Dict]:
return self.agent_action_data
def get_agent_finish_data(self) -> List[Dict]:
return self.agent_finish_data
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
# Ensure the OpenAI API key is defined and raise an error if it's not
if OPENAI_API_KEY is None:
raise ValueError("OPENAI_API_KEY environment variable is not defined")
# Create list of tools
tools = [CustomCalculatorTool()]
# Create a prompt
prompt_messages = [
SystemMessage(content=("""You a math expert.""")),
HumanMessagePromptTemplate(
prompt=PromptTemplate(
template="""Multiply {a} by {b}""",
input_variables=["a", "b"],
)
),
MessagesPlaceholder("agent_scratchpad"),
]
prompt = ChatPromptTemplate(messages=prompt_messages)
# Define the LLM model
llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY, model="gpt-4")
# Create an Agent that can be used to call the tools we defined
agent = create_openai_tools_agent(llm, tools, prompt)
# Custom Handler
custom_handler = MyCustomAsyncHandler()
# Create the agent executor with the custom handler
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
callbacks=[custom_handler],
)
# Invoke the agent executor
run_result = asyncio.run(agent_executor.ainvoke({"a": 2, "b": 3}))
```
### Description
I want to store the logs of an AgentExecutor invocation so that I can load them into my database.
These logs are useful for analyzing and keeping track of:
1. The prompt in (string form) being sent to OpenAI's API
2. The tool that was used
3. The inputs sent for the tool that was used
4. **Knowing whether the tool was used successfully or not**
When you run an AgentExecutor with a tool and implement a custom callback handler (like in the code I have provided), some of the data is missing.
When I run this code, I would expect to see logs for all of the methods defined inside `MyCustomAsyncHandler`, such as `on_chain_start`, `on_chain_end`, `on_tool_start`, `on_tool_end`, etc.
Right now, the only methods that show logs are:
1. `on_chain_start`
2. `on_agent_action`
3. `on_agent_finish`
4. `on_chain_end`
When I run this same code and set `set_debug(True)`, I see logs being printed out for the following:
1. `[chain/start]`
2. `[chain/end]`
3. `[llm/start]`
4. `[llm/end]`
5. `[tool/start]`
6. `[tool/end]`
Which I would expect to be captured by the analogous method inside my custom handler. When creating the `MyCustomAsyncHandler`, I copied the methods directly from langchain's `AsyncCallbackHandler` class to make sure they were named properly.
Is there something I am overlooking or misunderstanding?
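One thing I could still try, based on my reading of the callbacks documentation (constructor callbacks vs. request callbacks; I am not sure this is the intended fix), is to pass the handler at invocation time so that it is inherited by the child LLM and tool runs:

```python
# Sketch: pass the handler as a request-time callback instead of (or in addition to)
# the constructor callback, so it propagates to nested LLM/tool runs.
agent_executor = AgentExecutor(agent=agent, tools=tools)
run_result = asyncio.run(
    agent_executor.ainvoke({"a": 2, "b": 3}, config={"callbacks": [custom_handler]})
)
```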
### System Info
`python -m langchain_core.sys_info`
> System Information
> ------------------
> > OS: Darwin
> > OS Version: Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:24 PDT 2023; root:xnu-10002.41.9~6/RELEASE_ARM64_T6000
> > Python Version: 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)]
>
> Package Information
> -------------------
> > langchain_core: 0.1.18
> > langchain: 0.1.5
> > langchain_community: 0.0.17
> > langchain_openai: 0.0.3
>
> Packages not installed (Not Necessarily a Problem)
> --------------------------------------------------
> The following packages were not found:
>
> > langgraph
> > langserve | Custom Callback Handlers does not return data as expected during AgentExecutor runs | https://api.github.com/repos/langchain-ai/langchain/issues/17389/comments | 4 | 2024-02-12T05:43:41Z | 2024-02-12T23:22:21Z | https://github.com/langchain-ai/langchain/issues/17389 | 2,129,457,966 | 17,389 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
@staticmethod
async def generate_stream(agent, prompt):
    print("...........>", prompt)
    async for chunk in agent.astream_log({"input": prompt}, include_names=['ChatOpenAI']):
        # astream_log(chunk, include_names=['ChatOpenAI'])
        yield chunk
```
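For reference, this is roughly how I would consume the chunks on the client side (simplified sketch; the path filtering is my assumption about where streamed tokens show up in the log patches):

```python
async def print_tokens(agent, prompt):
    # Each chunk is a RunLogPatch; streamed tokens should appear as jsonpatch "add"
    # ops on paths ending in /streamed_output_str/- (assumption based on the astream_log docs).
    async for chunk in agent.astream_log({"input": prompt}, include_names=["ChatOpenAI"]):
        for op in chunk.ops:
            if op["op"] == "add" and op["path"].endswith("/streamed_output_str/-"):
                print(op["value"], end="", flush=True)
```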
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to stream the LLM response token by token using the astream_log method as shown in the documentation, but it only streams the first LLM call, where the code for the Python REPL tool is generated; it does not stream the second LLM call, which produces the final response.

Attached is the output I received when I hit the API.
### System Info
langchain version:
langchain==0.1.4
langchain-community==0.0.19
langchain-core==0.1.21
langchain-experimental==0.0.49
langchain-openai==0.0.5
system: ubuntu 20 and docker
python 3.10 | Pandas DataFreame agent streaming issue | https://api.github.com/repos/langchain-ai/langchain/issues/17388/comments | 2 | 2024-02-12T05:33:27Z | 2024-07-09T22:22:24Z | https://github.com/langchain-ai/langchain/issues/17388 | 2,129,450,263 | 17,388 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I am on an Azure Databricks notebook and I am getting this error. I tried different versions of langchain and had no luck. Any opinion?
I can normally see that the function is available in the notebook, but the import fails in the environment created when the model is loaded.
**This is the code that I have:**
```python
import langchain
from langchain.llms import get_type_to_cls_dict

try:
    from langchain.llms import get_type_to_cls_dict
    print("The function 'get_type_to_cls_dict' is available in this version of langchain.")
except ImportError as e:
    print("The function 'get_type_to_cls_dict' is NOT available. Error:", e)
```
**The function 'get_type_to_cls_dict' is available in this version of langchain.**
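For anyone debugging this, a small diagnostic snippet like the following (my own sketch, not part of the original notebook) shows which module actually exposes the function in a given environment:

```python
# Diagnostic only: check the import paths mentioned in the traceback.
for path in ("langchain.llms", "langchain.llms.loading", "langchain_community.llms"):
    try:
        module = __import__(path, fromlist=["get_type_to_cls_dict"])
        print(path, "->", hasattr(module, "get_type_to_cls_dict"))
    except ImportError as e:
        print(path, "-> import failed:", e)
```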
**The issue starts here:**
```python
model_info = mlflow.langchain.log_model(
    full_chain,
    loader_fn=get_retriever,  # Load the retriever with DATABRICKS_TOKEN env as secret (for authentication).
    artifact_path="chain",
    registered_model_name=model_name,
    pip_requirements=[
        "mlflow==" + mlflow.__version__,
        # "langchain==" + langchain.__version__,
        "langchain==0.1.4",
        "databricks-vectorsearch",
        "pydantic==2.5.2 --no-binary pydantic",
        "cloudpickle==" + cloudpickle.__version__
    ],
    input_example=input_df,
    signature=signature
)

model = mlflow.langchain.load_model(model_info.model_uri)
model.invoke(dialog)
```
**This is the error that needs to be fixed:**
**ImportError: cannot import name 'get_type_to_cls_dict' from 'langchain.llms.loading'** (/local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/langchain/llms/loading.py)
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
File <command-3707747759080680>, line 1
----> 1 model = mlflow.langchain.load_model(model_info.model_uri)
2 model.invoke(dialog)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/__init__.py:567, in load_model(model_uri, dst_path)
547 """
548 Load a LangChain model from a local file or a run.
549
(...)
564 :return: A LangChain model instance
565 """
566 local_model_path = _download_artifact_from_uri(artifact_uri=model_uri, output_path=dst_path)
--> 567 return _load_model_from_local_fs(local_model_path)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/__init__.py:542, in _load_model_from_local_fs(local_model_path)
540 flavor_conf = _get_flavor_configuration(model_path=local_model_path, flavor_name=FLAVOR_NAME)
541 _add_code_from_conf_to_system_path(local_model_path, flavor_conf)
--> 542 return _load_model(local_model_path, flavor_conf)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/__init__.py:429, in _load_model(local_model_path, flavor_conf)
427 model_load_fn = flavor_conf.get(_MODEL_LOAD_KEY)
428 if model_load_fn == _RUNNABLE_LOAD_KEY:
--> 429 return _load_runnables(local_model_path, flavor_conf)
430 if model_load_fn == _BASE_LOAD_KEY:
431 return _load_base_lcs(local_model_path, flavor_conf)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:373, in _load_runnables(path, conf)
371 model_data = conf.get(_MODEL_DATA_KEY, _MODEL_DATA_YAML_FILE_NAME)
372 if model_type in (x.__name__ for x in lc_runnable_with_steps_types()):
--> 373 return _load_runnable_with_steps(os.path.join(path, model_data), model_type)
374 if (
375 model_type in (x.__name__ for x in picklable_runnable_types())
376 or model_data == _MODEL_DATA_PKL_FILE_NAME
377 ):
378 return _load_from_pickle(os.path.join(path, model_data))
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:120, in _load_runnable_with_steps(file_path, model_type)
118 config = steps_conf.get(step)
119 # load model from the folder of the step
--> 120 runnable = _load_model_from_path(os.path.join(steps_path, step), config)
121 steps[step] = runnable
123 if model_type == RunnableSequence.__name__:
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:78, in _load_model_from_path(path, model_config)
76 model_load_fn = model_config.get(_MODEL_LOAD_KEY)
77 if model_load_fn == _RUNNABLE_LOAD_KEY:
---> 78 return _load_runnables(path, model_config)
79 if model_load_fn == _BASE_LOAD_KEY:
80 return _load_base_lcs(path, model_config)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:373, in _load_runnables(path, conf)
371 model_data = conf.get(_MODEL_DATA_KEY, _MODEL_DATA_YAML_FILE_NAME)
372 if model_type in (x.__name__ for x in lc_runnable_with_steps_types()):
--> 373 return _load_runnable_with_steps(os.path.join(path, model_data), model_type)
374 if (
375 model_type in (x.__name__ for x in picklable_runnable_types())
376 or model_data == _MODEL_DATA_PKL_FILE_NAME
377 ):
378 return _load_from_pickle(os.path.join(path, model_data))
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:120, in _load_runnable_with_steps(file_path, model_type)
118 config = steps_conf.get(step)
119 # load model from the folder of the step
--> 120 runnable = _load_model_from_path(os.path.join(steps_path, step), config)
121 steps[step] = runnable
123 if model_type == RunnableSequence.__name__:
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:78, in _load_model_from_path(path, model_config)
76 model_load_fn = model_config.get(_MODEL_LOAD_KEY)
77 if model_load_fn == _RUNNABLE_LOAD_KEY:
---> 78 return _load_runnables(path, model_config)
79 if model_load_fn == _BASE_LOAD_KEY:
80 return _load_base_lcs(path, model_config)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:373, in _load_runnables(path, conf)
371 model_data = conf.get(_MODEL_DATA_KEY, _MODEL_DATA_YAML_FILE_NAME)
372 if model_type in (x.__name__ for x in lc_runnable_with_steps_types()):
--> 373 return _load_runnable_with_steps(os.path.join(path, model_data), model_type)
374 if (
375 model_type in (x.__name__ for x in picklable_runnable_types())
376 or model_data == _MODEL_DATA_PKL_FILE_NAME
377 ):
378 return _load_from_pickle(os.path.join(path, model_data))
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:120, in _load_runnable_with_steps(file_path, model_type)
118 config = steps_conf.get(step)
119 # load model from the folder of the step
--> 120 runnable = _load_model_from_path(os.path.join(steps_path, step), config)
121 steps[step] = runnable
123 if model_type == RunnableSequence.__name__:
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:82, in _load_model_from_path(path, model_config)
80 return _load_base_lcs(path, model_config)
81 if model_load_fn == _CONFIG_LOAD_KEY:
---> 82 return _load_model_from_config(path, model_config)
83 raise MlflowException(f"Unsupported model load key {model_load_fn}")
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/mlflow/langchain/runnables.py:49, in _load_model_from_config(path, model_config)
47 from langchain.chains.loading import load_chain
48 from langchain.chains.loading import type_to_loader_dict as chains_type_to_loader_dict
---> 49 from langchain.llms.loading import get_type_to_cls_dict as llms_get_type_to_cls_dict
50 from langchain.llms.loading import load_llm
51 from langchain.prompts.loading import load_prompt
ImportError: cannot import name 'get_type_to_cls_dict' from 'langchain.llms.loading' (/local_disk0/.ephemeral_nfs/envs/pythonEnv-2798e36c-7aeb-43ef-842b-9eb535594f26/lib/python3.10/site-packages/langchain/llms/loading.py)
### Idea or request for content:
I need support to get the issue mentioned above fixed.
<img width="775" alt="image" src="https://github.com/langchain-ai/langchain/assets/152225892/fcca5f6e-de18-42c7-ae12-9b48f5caea4d">
| Issue about get_type_to_cls_dict in azure databricks notebook | https://api.github.com/repos/langchain-ai/langchain/issues/17384/comments | 5 | 2024-02-12T02:38:48Z | 2024-06-11T16:08:03Z | https://github.com/langchain-ai/langchain/issues/17384 | 2,129,329,631 | 17,384 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.llms.bedrock import Bedrock

temperature = 0.0
model_id = "anthropic.claude-v2"
model_params = {
    "max_tokens_to_sample": 2000,
    "temperature": temperature,
    "stop_sequences": ["\n\nHuman:"],
}

llm = Bedrock(
    model_id=model_id,
    client=boto3_bedrock,
    model_kwargs=model_params,
)

retriever = MultiQueryRetriever.from_llm(
    retriever=db.as_retriever(), llm=llm
)

question = "tell me about llama 2?"

docs = retriever.get_relevant_documents(query=question)
len(docs)
```
### Error Message and Stack Trace (if applicable)
--------------------------------------------------------------------------
ValidationException Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/langchain/embeddings/bedrock.py:121, in BedrockEmbeddings._embedding_func(self, text)
120 try:
--> 121 response = self.client.invoke_model(
122 body=body,
123 modelId=self.model_id,
124 accept="application/json",
125 contentType="application/json",
126 )
127 response_body = json.loads(response.get("body").read())
File /opt/conda/lib/python3.10/site-packages/botocore/client.py:535, in ClientCreator._create_api_method.<locals>._api_call(self, *args, **kwargs)
534 # The "self" in this scope is referring to the BaseClient.
--> 535 return self._make_api_call(operation_name, kwargs)
File /opt/conda/lib/python3.10/site-packages/botocore/client.py:980, in BaseClient._make_api_call(self, operation_name, api_params)
979 error_class = self.exceptions.from_code(error_code)
--> 980 raise error_class(parsed_response, operation_name)
981 else:
ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: expected minLength: 1, actual: 0, please reformat your input and try again.
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[17], line 3
1 question = "tell me about llama 2?"
----> 3 docs = retriever.get_relevant_documents(query=question)
4 len(docs)
File /opt/conda/lib/python3.10/site-packages/langchain/schema/retriever.py:211, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
209 except Exception as e:
210 run_manager.on_retriever_error(e)
--> 211 raise e
212 else:
213 run_manager.on_retriever_end(
214 result,
215 **kwargs,
216 )
File /opt/conda/lib/python3.10/site-packages/langchain/schema/retriever.py:204, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
202 _kwargs = kwargs if self._expects_other_args else {}
203 if self._new_arg_supported:
--> 204 result = self._get_relevant_documents(
205 query, run_manager=run_manager, **_kwargs
206 )
207 else:
208 result = self._get_relevant_documents(query, **_kwargs)
File /opt/conda/lib/python3.10/site-packages/langchain/retrievers/multi_query.py:163, in MultiQueryRetriever._get_relevant_documents(self, query, run_manager)
154 """Get relevant documents given a user query.
155
156 Args:
(...)
160 Unique union of relevant documents from all generated queries
161 """
162 queries = self.generate_queries(query, run_manager)
--> 163 documents = self.retrieve_documents(queries, run_manager)
164 return self.unique_union(documents)
File /opt/conda/lib/python3.10/site-packages/langchain/retrievers/multi_query.py:198, in MultiQueryRetriever.retrieve_documents(self, queries, run_manager)
196 documents = []
197 for query in queries:
--> 198 docs = self.retriever.get_relevant_documents(
199 query, callbacks=run_manager.get_child()
200 )
201 documents.extend(docs)
202 return documents
File /opt/conda/lib/python3.10/site-packages/langchain/schema/retriever.py:211, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
209 except Exception as e:
210 run_manager.on_retriever_error(e)
--> 211 raise e
212 else:
213 run_manager.on_retriever_end(
214 result,
215 **kwargs,
216 )
File /opt/conda/lib/python3.10/site-packages/langchain/schema/retriever.py:204, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
202 _kwargs = kwargs if self._expects_other_args else {}
203 if self._new_arg_supported:
--> 204 result = self._get_relevant_documents(
205 query, run_manager=run_manager, **_kwargs
206 )
207 else:
208 result = self._get_relevant_documents(query, **_kwargs)
File /opt/conda/lib/python3.10/site-packages/langchain/schema/vectorstore.py:585, in VectorStoreRetriever._get_relevant_documents(self, query, run_manager)
581 def _get_relevant_documents(
582 self, query: str, *, run_manager: CallbackManagerForRetrieverRun
583 ) -> List[Document]:
584 if self.search_type == "similarity":
--> 585 docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
586 elif self.search_type == "similarity_score_threshold":
587 docs_and_similarities = (
588 self.vectorstore.similarity_search_with_relevance_scores(
589 query, **self.search_kwargs
590 )
591 )
File /opt/conda/lib/python3.10/site-packages/langchain/vectorstores/pgvector.py:335, in PGVector.similarity_search(self, query, k, filter, **kwargs)
318 def similarity_search(
319 self,
320 query: str,
(...)
323 **kwargs: Any,
324 ) -> List[Document]:
325 """Run similarity search with PGVector with distance.
326
327 Args:
(...)
333 List of Documents most similar to the query.
334 """
--> 335 embedding = self.embedding_function.embed_query(text=query)
336 return self.similarity_search_by_vector(
337 embedding=embedding,
338 k=k,
339 filter=filter,
340 )
File /opt/conda/lib/python3.10/site-packages/langchain/embeddings/bedrock.py:156, in BedrockEmbeddings.embed_query(self, text)
147 def embed_query(self, text: str) -> List[float]:
148 """Compute query embeddings using a Bedrock model.
149
150 Args:
(...)
154 Embeddings for the text.
155 """
--> 156 return self._embedding_func(text)
File /opt/conda/lib/python3.10/site-packages/langchain/embeddings/bedrock.py:130, in BedrockEmbeddings._embedding_func(self, text)
128 return response_body.get("embedding")
129 except Exception as e:
--> 130 raise ValueError(f"Error raised by inference endpoint: {e}")
ValueError: Error raised by inference endpoint: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: expected minLength: 1, actual: 0, please reformat your input and try again.
### Description
I'm using MultiQueryRetriever with bedrock claude model.
I got ValueError: Error raised by inference endpoint: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: expected minLength: 1, actual: 0, please reformat your input and try again.
Please take a look. Thank you
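From the traceback it looks like one of the queries generated by Claude is an empty line, which the Titan embedding endpoint then rejects. A minimal sketch that should reproduce the same error (this is my assumption about the root cause, not a confirmed diagnosis):

```python
from langchain.embeddings import BedrockEmbeddings

emb = BedrockEmbeddings(client=boto3_bedrock)  # same boto3 Bedrock client as above
emb.embed_query("")  # raises the same "expected minLength: 1, actual: 0" ValidationException
```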
### System Info
!pip show langchain
Name: langchain
Version: 0.0.318
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /opt/conda/lib/python3.10/site-packages
Requires: aiohttp, anyio, async-timeout, dataclasses-json, jsonpatch, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by: jupyter_ai, jupyter_ai_magics | MultiQueryRetriever with bedrock claude | https://api.github.com/repos/langchain-ai/langchain/issues/17382/comments | 3 | 2024-02-11T21:38:03Z | 2024-06-24T19:21:01Z | https://github.com/langchain-ai/langchain/issues/17382 | 2,129,180,857 | 17,382 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain_google_vertexai import ChatVertexAI, VertexAI, VertexAIEmbeddings
llm_chain = LLMChain(llm=llm, prompt=prompt_template)
res = llm_chain.predict(user_prompt=user_prompt)
### Error Message and Stack Trace (if applicable)
Prompted _[llm/error] [1:chain:LLMChain > 2:llm:VertexAI] [4.64s] LLM run errored with error: "TypeError(\"Additional kwargs key Finance already exists in left dict and value has unsupported type <class 'float'>.\")Traceback (most recent call last):
### Description
I'm trying to use text-unicorn model through vertex ai while setting the stream parameter to true. With every chunk generated by the llm, the generation_info dict contains key-value pairs where the key is the same but the value is different with every returned generation. Acoordingly a runtime error is raised and no propeper answer is returned from the llm.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.15
> langchain: 0.0.354
> langchain_community: 0.0.15
> langchain_benchmarks: 0.0.10
> langchain_experimental: 0.0.47
> langchain_google_genai: 0.0.2
> langchain_google_vertexai: 0.0.2
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | merge_dicts in _merge can't merge different values of instance float and raises a type error | https://api.github.com/repos/langchain-ai/langchain/issues/17376/comments | 4 | 2024-02-11T10:18:31Z | 2024-05-21T16:09:11Z | https://github.com/langchain-ai/langchain/issues/17376 | 2,128,927,696 | 17,376 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_community.agent_toolkits.amadeus.toolkit import AmadeusToolkit
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
llm = ChatGoogleGenerativeAI(model="gemini-pro")
toolkit = AmadeusToolkit(llm=llm)
tools = toolkit.get_tools()
prompt = hub.pull("sirux21/react")
agent = create_react_agent(llm, tools, prompt)
val = {
"input": "Find flights from NYC to LAX tomorrow",
}
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
)
agent_executor.invoke(val)
```
### Error Message and Stack Trace (if applicable)
NYC is too broad of a location, I should narrow it down to a specific airport
Action: closest_airport
Action Input: {
"location": "New York City, NY"
}content='```json\n{\n "iataCode": "JFK"\n}\n```'JFK is the closest airport to NYC, I will use that as the origin airport
Action: single_flight_search
Action Input: {
"originLocationCode": "JFK",
"destinationLocationCode": "LAX",
"departureDateTimeEarliest": "2023-06-08T00:00:00",
"departureDateTimeLatest": "2023-06-08T23:59:59"
}Traceback (most recent call last):
File "C:\Users\sirux21\Nextcloud\CodeRed-Oddysey\CodeRed-Odyssey\python\main4.py", line 24, in <module>
agent_executor.invoke(val)
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\base.py", line 162, in invoke
raise e
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\agents\agent.py", line 1391, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\agents\agent.py", line 1097, in _take_next_step
[
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\agents\agent.py", line 1097, in <listcomp>
[
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\agents\agent.py", line 1182, in _iter_next_step
yield self._perform_agent_action(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\agents\agent.py", line 1204, in _perform_agent_action
observation = tool.run(
^^^^^^^^^
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\tools.py", line 364, in run
raise e
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\tools.py", line 355, in run
parsed_input = self._parse_input(tool_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\tools.py", line 258, in _parse_input
input_args.validate({key_: tool_input})
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\pydantic\v1\main.py", line 711, in validate
return cls(**value)
^^^^^^^^^^^^
File "C:\Users\sirux21\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\pydantic\v1\main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 3 validation errors for FlightSearchSchema
destinationLocationCode
field required (type=value_error.missing)
departureDateTimeEarliest
field required (type=value_error.missing)
departureDateTimeLatest
field required (type=value_error.missing)
### Description
FlightSearchSchema is unable to parse the input
### System Info
langchain==0.1.6
langchain-community==0.0.19
langchain-core==0.1.22
langchain-google-genai==0.0.8
langchainhub==0.1.14
windows
3.11 | Amadeus searchflight not working | https://api.github.com/repos/langchain-ai/langchain/issues/17375/comments | 5 | 2024-02-11T01:38:03Z | 2024-03-07T01:24:06Z | https://github.com/langchain-ai/langchain/issues/17375 | 2,128,789,064 | 17,375 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import os

from langchain.schema import SystemMessage, HumanMessage
from langchain_openai import AzureChatOpenAI
from langchain.callbacks import get_openai_callback

azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
AZURE_OPENAI_API_KEY = os.getenv("AZURE_OPENAI_API_KEY")

# Create an instance of chat llm
llm = AzureChatOpenAI(
    azure_endpoint=azure_endpoint,
    api_key=AZURE_OPENAI_API_KEY,
    api_version="2023-05-15",
    azure_deployment="gpt-3.5-turbo",
    model="gpt-3.5-turbo",
)

messages = [
    SystemMessage(
        content=(
            "You are ExpertGPT, an AGI system capable of "
            "anything except answering questions about cheese. "
            "It turns out that AGI does not fathom cheese as a "
            "concept, the reason for this is a mystery."
        )
    ),
    HumanMessage(content="Tell me about parmigiano, the Italian cheese!")
]

with get_openai_callback() as cb:
    res = llm.invoke(messages)

    # print the response
    print(res.content)
    # print the total tokens used
    print(cb.total_tokens)
```
### Error Message and Stack Trace (if applicable)
HTTP Request: POST https://oxcxxxxxxx-dev.openai.azure.com/openai/deployments/gpt-3.5-turbo/chat/completions?api-version=2023-05-15 "HTTP/1.1 401 Unauthorized"
DEBUG:httpcore.http11:receive_response_body.started request=<Request [b'POST']>
receive_response_body.started request=<Request [b'POST']>
DEBUG:httpcore.http11:receive_response_body.complete
receive_response_body.complete
DEBUG:httpcore.http11:response_closed.started
response_closed.started
DEBUG:httpcore.http11:response_closed.complete
response_closed.complete
DEBUG:openai._base_client:HTTP Request: POST https://oxcxxxxxxx-dev.openai.azure.com/openai/deployments/gpt-3.5-turbo/chat/completions?api-version=2023-05-15 "401 Unauthorized"
HTTP Request: POST https://oxcxxxxxxx-dev.openai.azure.com/openai/deployments/gpt-3.5-turbo/chat/completions?api-version=2023-05-15 "401 Unauthorized"
DEBUG:openai._base_client:Encountered httpx.HTTPStatusError
Traceback (most recent call last):
File "/home/mlakka/.local/lib/python3.10/site-packages/openai/_base_client.py", line 959, in _request
response.raise_for_status()
File "/home/mlakka/.local/lib/python3.10/site-packages/httpx/_models.py", line 759, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '401 Unauthorized' for url 'https://oxcxxxxxxx-dev.openai.azure.com/openai/deployments/gpt-3.5-turbo/chat/completions?api-version=2023-05-15'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
Encountered httpx.HTTPStatusError
Traceback (most recent call last):
File "/home/mlakka/.local/lib/python3.10/site-packages/openai/_base_client.py", line 959, in _request
response.raise_for_status()
File "/home/mlakka/.local/lib/python3.10/site-packages/httpx/_models.py", line 759, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '401 Unauthorized' for url 'https://oxcxxxxxxx-dev.openai.azure.com/openai/deployments/gpt-3.5-turbo/chat/completions?api-version=2023-05-15'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
DEBUG:openai._base_client:Not retrying
Not retrying
DEBUG:openai._base_client:Re-raising status error
Re-raising status error
Error is coming from here
### Description
My key works for other calls, but with LangChain it does not work and gives the above error. Please help. By the way, I am using Azure OpenAI.
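The 401 message mentions an incorrect audience (https://cognitiveservices.azure.com/), which makes me wonder whether the endpoint expects an Azure AD token rather than an API key. A variant I could try (sketch only; the credential setup is an assumption on my side):

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from langchain_openai import AzureChatOpenAI

# Assumption: the resource requires Azure AD auth instead of (or on top of) the API key.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

llm = AzureChatOpenAI(
    azure_endpoint=azure_endpoint,
    azure_ad_token_provider=token_provider,
    api_version="2023-05-15",
    azure_deployment="gpt-3.5-turbo",
)
```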
AuthenticationError Traceback (most recent call last)
Cell In[6], line 32
18 messages = [
19 SystemMessage(
20 content=(
(...)
27 HumanMessage(content="Tell me about parmigiano, the Italian cheese!")
28 ]
30 with get_openai_callback() as cb:
---> 32 res = llm.invoke(messages)
34 # print the response
35 print(res.content)
File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:166, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
155 def invoke(
156 self,
157 input: LanguageModelInput,
(...)
161 **kwargs: Any,
162 ) -> BaseMessage:
163 config = ensure_config(config)
164 return cast(
165 ChatGeneration,
--> 166 self.generate_prompt(
167 [self._convert_input(input)],
168 stop=stop,
169 callbacks=config.get("callbacks"),
170 tags=config.get("tags"),
171 metadata=config.get("metadata"),
172 run_name=config.get("run_name"),
173 **kwargs,
174 ).generations[0][0],
175 ).message
File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:544, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
536 def generate_prompt(
537 self,
538 prompts: List[PromptValue],
(...)
541 **kwargs: Any,
542 ) -> LLMResult:
543 prompt_messages = [p.to_messages() for p in prompts]
--> 544 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:408, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
406 if run_managers:
407 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 408 raise e
409 flattened_outputs = [
410 LLMResult(generations=[res.generations], llm_output=res.llm_output)
411 for res in results
412 ]
413 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:398, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
395 for i, m in enumerate(messages):
396 try:
397 results.append(
--> 398 self._generate_with_cache(
399 m,
400 stop=stop,
401 run_manager=run_managers[i] if run_managers else None,
402 **kwargs,
403 )
404 )
405 except BaseException as e:
406 if run_managers:
File ~/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:577, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
573 raise ValueError(
574 "Asked to cache, but no cache found at `langchain.cache`."
575 )
576 if new_arg_supported:
--> 577 return self._generate(
578 messages, stop=stop, run_manager=run_manager, **kwargs
579 )
580 else:
581 return self._generate(messages, stop=stop, **kwargs)
File ~/.local/lib/python3.10/site-packages/langchain_openai/chat_models/base.py:451, in ChatOpenAI._generate(self, messages, stop, run_manager, stream, **kwargs)
445 message_dicts, params = self._create_message_dicts(messages, stop)
446 params = {
447 **params,
448 **({"stream": stream} if stream is not None else {}),
449 **kwargs,
450 }
--> 451 response = self.client.create(messages=message_dicts, **params)
452 return self._create_chat_result(response)
File ~/.local/lib/python3.10/site-packages/openai/_utils/_utils.py:275, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
273 msg = f"Missing required argument: {quote(missing[0])}"
274 raise TypeError(msg)
--> 275 return func(*args, **kwargs)
File ~/.local/lib/python3.10/site-packages/openai/resources/chat/completions.py:663, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
611 @required_args(["messages", "model"], ["messages", "model", "stream"])
612 def create(
613 self,
(...)
661 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
662 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
--> 663 return self._post(
664 "/chat/completions",
665 body=maybe_transform(
666 {
667 "messages": messages,
668 "model": model,
669 "frequency_penalty": frequency_penalty,
670 "function_call": function_call,
671 "functions": functions,
672 "logit_bias": logit_bias,
673 "logprobs": logprobs,
674 "max_tokens": max_tokens,
675 "n": n,
676 "presence_penalty": presence_penalty,
677 "response_format": response_format,
678 "seed": seed,
679 "stop": stop,
680 "stream": stream,
681 "temperature": temperature,
682 "tool_choice": tool_choice,
683 "tools": tools,
684 "top_logprobs": top_logprobs,
685 "top_p": top_p,
686 "user": user,
687 },
688 completion_create_params.CompletionCreateParams,
689 ),
690 options=make_request_options(
691 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
692 ),
693 cast_to=ChatCompletion,
694 stream=stream or False,
695 stream_cls=Stream[ChatCompletionChunk],
696 )
File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:1200, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1186 def post(
1187 self,
1188 path: str,
(...)
1195 stream_cls: type[_StreamT] | None = None,
1196 ) -> ResponseT | _StreamT:
1197 opts = FinalRequestOptions.construct(
1198 method="post", url=path, json_data=body, files=to_httpx_files(files), **options
1199 )
-> 1200 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:889, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
880 def request(
881 self,
882 cast_to: Type[ResponseT],
(...)
887 stream_cls: type[_StreamT] | None = None,
888 ) -> ResponseT | _StreamT:
--> 889 return self._request(
890 cast_to=cast_to,
891 options=options,
892 stream=stream,
893 stream_cls=stream_cls,
894 remaining_retries=remaining_retries,
895 )
File ~/.local/lib/python3.10/site-packages/openai/_base_client.py:980, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
977 err.response.read()
979 log.debug("Re-raising status error")
--> 980 raise self._make_status_error_from_response(err.response) from None
982 return self._process_response(
983 cast_to=cast_to,
984 options=options,
(...)
987 stream_cls=stream_cls,
988 )
AuthenticationError: Error code: 401 - {'statusCode': 401, 'message': 'Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com/), or have expired.'}
### System Info
openai==1.12.0
langchain==0.1.6
langchain-community==0.0.19
langchain-core==0.1.22
langchain-openai==0.0.5 | Keep getting AuthenticationError: Error code: 401 - {'statusCode': 401, 'message': 'Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired.'} | https://api.github.com/repos/langchain-ai/langchain/issues/17373/comments | 6 | 2024-02-10T22:50:16Z | 2024-07-22T15:25:14Z | https://github.com/langchain-ai/langchain/issues/17373 | 2,128,743,120 | 17,373 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.vectorstores import Milvus
vector_db = Milvus.from_texts(
texts=str_list,
embedding=embeddings,
connection_args={
"uri": "zilliz_cloud_uri",
"token": "zilliz_cloud_token"
},
)
```
### Error Message and Stack Trace (if applicable)
E0210 23:13:44.149694000 8088408832 hpack_parser.cc:993] Error parsing 'content-type' metadata: invalid value
[__internal_register] retry:4, cost: 0.27s, reason: <_InactiveRpcError: StatusCode.UNKNOWN, Stream removed>
<img width="1440" alt="Screenshot 1402-11-21 at 23 20 28" src="https://github.com/langchain-ai/langchain/assets/69215813/22e7e225-848f-4689-a413-e2ef8e9998b2">
### Description
OS== macOS 14
pymilvus==2.3.4
langchain==0.1.3
langchain-community==0.0.15
pyarrow>=12.0.0
NOTE: pyarrow must be installed manually, otherwise the code will throw an error.
=========================
I am using the Milvus module from langchain_community as the vector database. The code seems to hit gRPC-related errors in my local environment. Moreover, within the Colab environment I am facing this error:
```python
584 end = min(i + batch_size, total_count)
585 # Convert dict to list of lists batch for insertion
--> 586 insert_list = [insert_dict[x][i:end] for x in self.fields]
587 # Insert into the collection.
588 try:
KeyError: 'year'
```
The error is raised from:
File "/Users/moeinmn/anaconda3/lib/python3.11/site-packages/langchain_community/vectorstores/milvus.py", line 904, in from_texts
vector_db = cls(
^^^^
File "/Users/moeinmn/anaconda3/lib/python3.11/site-packages/langchain_community/vectorstores/milvus.py", line 179, in __init__
self.alias = self._create_connection_alias(connection_args)
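The KeyError: 'year' makes me suspect the texts are being inserted into an existing collection whose schema already has a 'year' metadata field that my new texts do not carry. A sketch of what I would try (assuming this version of the Milvus wrapper supports the drop_old flag, and using a placeholder collection name):

```python
vector_db = Milvus.from_texts(
    texts=str_list,
    embedding=embeddings,
    collection_name="fresh_collection",  # placeholder
    connection_args={
        "uri": "zilliz_cloud_uri",
        "token": "zilliz_cloud_token",
    },
    drop_old=True,  # recreate the collection so its schema matches the new data
)
```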
### System Info
langchain==0.1.3
langchain-community==0.0.15
pymilvus==2.3.4 | gRPC error with Milvus retriever | https://api.github.com/repos/langchain-ai/langchain/issues/17371/comments | 2 | 2024-02-10T20:01:16Z | 2024-06-19T16:07:03Z | https://github.com/langchain-ai/langchain/issues/17371 | 2,128,696,829 | 17,371 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
On the official documentation page for "Modules -> Agents -> Agent Types -> OpenAI assistants" (https://python.langchain.com/docs/modules/agents/agent_types/openai_assistants#using-existing-thread), there is an error in the example regarding the import statement for AgentFinish package and its usage in the execute_agent function.
The incorrect import statement in the example is:
```python
from langchain_core.agents import AgentFinish
```
However, the correct import statement should be:
```python
from langchain.schema.agent import AgentFinish
```
The issue arises because the incorrect import statement leads to a discrepancy between the package imported and the actual type returned by agent.invoke(input). Despite the actual type being AgentFinish, the example code still enters the while loop, causing further errors when accessing the tool attribute in action.
The corrected example code should be as follows:
```python
from langchain.schema.agent import AgentFinish
def execute_agent(agent, tools, input):
tool_map = {tool.name: tool for tool in tools}
response = agent.invoke(input)
while not isinstance(response, AgentFinish):
tool_outputs = []
for action in response:
tool_output = tool_map[action.tool].invoke(action.tool_input)
tool_outputs.append({"output": tool_output, "tool_call_id": action.tool_call_id})
response = agent.invoke(
{
"tool_outputs": tool_outputs,
"run_id": action.run_id,
"thread_id": action.thread_id
}
)
return response
```
This correction ensures that the example code operates as intended, avoiding errors related to the incorrect package import and usage.
### Idea or request for content:
_No response_ | Correction Needed in OpenAI Assistants Example on Official Documentation | https://api.github.com/repos/langchain-ai/langchain/issues/17367/comments | 4 | 2024-02-10T14:33:55Z | 2024-02-11T13:33:33Z | https://github.com/langchain-ai/langchain/issues/17367 | 2,128,498,821 | 17,367 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.llms import OpenLLM
server_url = "http://localhost:3000"
llm2 = OpenLLM(server_url=server_url)
llm2._generate(["hello i am "])
```
```bash
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/lustre06/project/6045755/omij/LLMs/llm_env/lib/python3.9/site-packages/langchain_core/language_models/llms.py", line 1139, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/lustre06/project/6045755/omij/LLMs/llm_env/lib/python3.9/site-packages/langchain_community/llms/openllm.py", line 265, in _call
self._identifying_params["model_name"], **copied
File "/lustre06/project/6045755/omij/LLMs/llm_env/lib/python3.9/site-packages/langchain_community/llms/openllm.py", line 220, in _identifying_params
self.llm_kwargs.update(self._client._config())
TypeError: 'dict' object is not callable
```
### Error Message and Stack Trace (if applicable)

```python
>>> llm2
OpenLLM(server_url='http://localhost:3000', llm_kwargs={'n': 1, 'best_of': None, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'use_beam_search': False, 'ignore_eos': False, 'skip_special_tokens': True, 'max_new_tokens': 128, 'min_length': 0, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'use_cache': True, 'temperature': 0.75, 'top_k': 15, 'top_p': 0.78, 'typical_p': 1.0, 'epsilon_cutoff': 0.0, 'eta_cutoff': 0.0, 'diversity_penalty': 0.0, 'repetition_penalty': 1.0, 'encoder_repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'renormalize_logits': False, 'remove_invalid_values': False, 'num_return_sequences': 1, 'output_attentions': False, 'output_hidden_states': False, 'output_scores': False, 'encoder_no_repeat_ngram_size': 0})
>>> llm2._client._config
{'n': 1, 'best_of': None, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'use_beam_search': False, 'ignore_eos': False, 'skip_special_tokens': True, 'max_new_tokens': 128, 'min_length': 0, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'use_cache': True, 'temperature': 0.75, 'top_k': 15, 'top_p': 0.78, 'typical_p': 1.0, 'epsilon_cutoff': 0.0, 'eta_cutoff': 0.0, 'diversity_penalty': 0.0, 'repetition_penalty': 1.0, 'encoder_repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'renormalize_logits': False, 'remove_invalid_values': False, 'num_return_sequences': 1, 'output_attentions': False, 'output_hidden_states': False, 'output_scores': False, 'encoder_no_repeat_ngram_size': 0}
>>>
>>> llm2._generate(["hello i am "])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/lustre06/project/6045755/omij/LLMs/llm_env/lib/python3.9/site-packages/langchain_core/language_models/llms.py", line 1139, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/lustre06/project/6045755/omij/LLMs/llm_env/lib/python3.9/site-packages/langchain_community/llms/openllm.py", line 265, in _call
self._identifying_params["model_name"], **copied
File "/lustre06/project/6045755/omij/LLMs/llm_env/lib/python3.9/site-packages/langchain_community/llms/openllm.py", line 220, in _identifying_params
self.llm_kwargs.update(self._client._config())
TypeError: 'dict' object is not callable
```
### Description
I'm trying to use llama-2-7b-hf hosted via OpenLLM to run LLM chains locally.
I am using the example from the docs: [langchain-openllm](https://python.langchain.com/docs/integrations/llms/openllm).
After looking into the details, the LLM is initialized correctly, but there seems to be an issue with this specific block of code (from `langchain_community/llms/openllm.py`):
```python
@property
def _identifying_params(self) -> IdentifyingParams:
"""Get the identifying parameters."""
if self._client is not None:
self.llm_kwargs.update(self._client._config())
model_name = self._client._metadata()["model_name"]
model_id = self._client._metadata()["model_id"]
else:
```
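In recent `openllm` client versions, `_config` (and likely `_metadata`) appears to be exposed as a plain attribute rather than a method — the `llm2._client._config` repr above is already a dict — which would explain the `TypeError` when LangChain calls it. A possible local workaround, a sketch rather than a confirmed fix, is to read the value tolerantly:
```python
# Sketch: read the client's config whether it is a callable (older openllm clients)
# or a plain dict attribute (what the repr in this issue suggests).
raw_config = llm2._client._config
config = raw_config() if callable(raw_config) else raw_config
print(config["temperature"], config["top_k"])
```
The proper fix is presumably the same kind of check inside `_identifying_params` in `langchain_community/llms/openllm.py`.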
### System Info
python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: 1 SMP Fri Nov 17 03:31:10 UTC 2023
> Python Version: 3.9.6 (default, Jul 12 2021, 18:23:59)
[GCC 9.3.0]
Package Information
-------------------
> langchain_core: 0.1.21
> langchain: 0.1.4
> langchain_community: 0.0.19
> langsmith: 0.0.87
> langchain_example: Installed. No version info available.
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| Bug Encountered in OpenLLM invoke method: TypeError with client's config | https://api.github.com/repos/langchain-ai/langchain/issues/17362/comments | 1 | 2024-02-10T05:40:29Z | 2024-05-18T16:07:48Z | https://github.com/langchain-ai/langchain/issues/17362 | 2,128,119,760 | 17,362 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
pgvector = PGVector(...) # Initialize PGVector with necessary parameters
ids_to_delete = [...] # List of ids to delete
pgvector.delete(ids_to_delete)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I want to fetch the IDs of a document stored in the Postgres pgvector DB so that I can delete that particular document's embeddings.
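One approach — a sketch, not an official listing API — is to supply your own IDs when adding documents and keep track of them, since `PGVector` accepts `ids` on insert and `delete(ids=...)` removes exactly those rows. The connection string below is a placeholder, and the commented raw-SQL lookup relies on assumptions about PGVector's default table/column names (`langchain_pg_embedding`, `cmetadata`).
```python
import uuid

from langchain_core.documents import Document
from langchain_community.vectorstores import PGVector
from langchain_openai import OpenAIEmbeddings

CONNECTION_STRING = "postgresql+psycopg2://user:pass@localhost:5432/vectordb"  # placeholder

docs = [
    Document(page_content="chunk 1", metadata={"source": "report.pdf"}),
    Document(page_content="chunk 2", metadata={"source": "report.pdf"}),
]
ids = [str(uuid.uuid4()) for _ in docs]  # record these, e.g. keyed by source file

store = PGVector.from_documents(
    docs, OpenAIEmbeddings(), connection_string=CONNECTION_STRING, ids=ids
)

# Later: delete that document's embeddings using the ids recorded at insert time.
store.delete(ids=ids)

# If the ids were not recorded, they can be looked up in the underlying table
# (assumed default schema), e.g.:
#   SELECT uuid FROM langchain_pg_embedding WHERE cmetadata ->> 'source' = 'report.pdf';
```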
### System Info
I am using Postgres pgvector | How to get IDs of documents stored in postgres pgvector | https://api.github.com/repos/langchain-ai/langchain/issues/17361/comments | 9 | 2024-02-10T05:21:56Z | 2024-07-19T18:59:09Z | https://github.com/langchain-ai/langchain/issues/17361 | 2,128,105,436 | 17,361 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain_community.chat_models.huggingface import ChatHuggingFace
from langchain_community.llms import HuggingFaceHub
llm2 = HuggingFaceHub(
repo_id="HuggingFaceH4/zephyr-7b-beta",
task="text-generation",
huggingfacehub_api_token=""
)
chat_model = ChatHuggingFace(llm=llm2)
from langchain import hub
from langchain.agents import AgentExecutor, load_tools
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import (
ReActJsonSingleInputOutputParser,
)
from langchain.tools.render import render_text_description
from langchain_community.utilities import SerpAPIWrapper
from langchain_community.tools import DuckDuckGoSearchRun, YouTubeSearchTool
from langchain.agents import Tool
# setup tools
search = SerpAPIWrapper(serpapi_api_key=SERPER_API_KEY)
ddg_search = DuckDuckGoSearchRun()
youtube = YouTubeSearchTool()
tools = [
Tool(
name="Search",
func=search.run,
description="Useful for answering questions about current events."
),
Tool(
name="DuckDuckGo Search",
func=ddg_search.run,
description="Useful to browse information from the Internet."
),
Tool(
name="Youtube Search",
func=youtube.run,
description="Useful for when the user explicitly asks to search on YouTube."
)
]
# setup ReAct style prompt
prompt = hub.pull("hwchase17/react-json")
prompt = prompt.partial(
tools=render_text_description(tools),
tool_names=", ".join([t.name for t in tools]),
)
# define the agent
chat_model_with_stop = chat_model.bind(stop=["\nObservation"])
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
}
| prompt
| chat_model_with_stop
| ReActJsonSingleInputOutputParser()
)
# instantiate AgentExecutor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, handle_parsing_errors=True, max_execution_time=60)
agent_executor.invoke(
{
"input": "Who is the current holder of the speed skating world record on 500 meters? What is her current age raised to the 0.43 power?"
}
)
```
ERROR:
```
> Entering new AgentExecutor chain...
Could not parse LLM output: <|system|>
Answer the following questions as best you can. You have access to the following tools:
search: Useful for answering questions about current events.
The way you use the tools is by specifying a json blob.
Specifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).
The only values that should be in the "action" field are: search
The $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:
```
{
"action": $TOOL_NAME,
"action_input": $INPUT
}
```
ALWAYS use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action:
```
$JSON_BLOB
```Invalid or incomplete response
[... the same "Could not parse LLM output ... Invalid or incomplete response" block repeats for every subsequent agent iteration ...]
> Finished chain.
{'input': 'Who is the current holder of the speed skating world record on 500 meters? What is her current age raised to the 0.43 power?',
'output': 'Agent stopped due to iteration limit or time limit.'}
```
### Error Message and Stack Trace (if applicable)
(The stack trace is the same AgentExecutor output shown in the Example Code section above: repeated "Could not parse LLM output ... Invalid or incomplete response" messages, ending with "Agent stopped due to iteration limit or time limit.")
### Description
The ReAct JSON agent built on `ChatHuggingFace` never produces output that `ReActJsonSingleInputOutputParser` can parse: every step fails with "Could not parse LLM output" (the model's reply contains the echoed prompt rather than an action JSON blob), so the run ends at the iteration/time limit.
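One thing worth checking — a hedged suggestion, not a confirmed fix — is whether the Hub endpoint is echoing the whole prompt back, since the unparseable output above starts with the `<|system|>` prompt text. The Hugging Face text-generation API accepts a `return_full_text` flag that can be passed through `model_kwargs`; the parameter names below come from the HF Inference API, not from this issue:
```
from langchain_community.llms import HuggingFaceHub

llm2 = HuggingFaceHub(
    repo_id="HuggingFaceH4/zephyr-7b-beta",
    task="text-generation",
    huggingfacehub_api_token="",  # placeholder
    model_kwargs={
        "max_new_tokens": 512,
        "return_full_text": False,  # do not echo the prompt back in the completion
    },
)
```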
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sat Nov 18 15:31:17 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.22
> langchain: 0.1.6
> langchain_community: 0.0.19
> langsmith: 0.0.87
> langchain_openai: 0.0.5
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | $JSON_BLOB ```Invalid or incomplete response - ChatHuggingFace.. Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/17356/comments | 8 | 2024-02-10T03:13:11Z | 2024-02-10T19:26:19Z | https://github.com/langchain-ai/langchain/issues/17356 | 2,128,067,649 | 17,356 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
import os, dotenv, openai
from langchain_community.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.retrievers import MultiQueryRetriever
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
#API Key
dotenv.load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")
#Load and split docs
documents = WebBaseLoader("https://en.wikipedia.org/wiki/New_York_City").load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size = 1000, chunk_overlap = 50)
documents = text_splitter.split_documents(documents)
vector_store = FAISS.from_documents(documents, OpenAIEmbeddings())
retriever = vector_store.as_retriever()
#MultiQueryRetriever
primary_qa_llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
advanced_retriever = MultiQueryRetriever.from_llm(retriever=retriever, llm=primary_qa_llm)
print(advanced_retriever.get_relevant_documents("Where is nyc?"))
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\pydantic\v1\main.py", line 522, in parse_obj
obj = dict(obj)
^^^^^^^^^
TypeError: 'int' object is not iterable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\output_parsers\pydantic.py", line 25, in parse_result
return self.pydantic_object.parse_obj(json_object)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\pydantic\v1\main.py", line 525, in parse_obj
raise ValidationError([ErrorWrapper(exc, loc=ROOT_KEY)], cls) from e
pydantic.v1.error_wrappers.ValidationError: 1 validation error for LineList
__root__
LineList expected dict not int (type=type_error)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "d:\Documents-Alon\MapleRAG\ragas-tutorial\ragas-debug.py", line 28, in <module>
print(advanced_retriever.get_relevant_documents("Who are you?"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain_core\retrievers.py", line 224, in get_relevant_documents
raise e
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain_core\retrievers.py", line 217, in get_relevant_documents
result = self._get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\retrievers\multi_query.py", line 172, in _get_relevant_documents
queries = self.generate_queries(query, run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\retrievers\multi_query.py", line 189, in generate_queries
response = self.llm_chain(
^^^^^^^^^^^^^^^
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\chains\base.py", line 363, in __call__
return self.invoke(
^^^^^^^^^^^^
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\chains\base.py", line 162, in invoke
raise e
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\chains\base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\chains\llm.py", line 104, in _call
    return self.create_outputs(response)[0]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\chains\llm.py", line 258, in create_outputs
    result = [
             ^
  File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\chains\llm.py", line 261, in <listcomp>
self.output_key: self.output_parser.parse_result(generation),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\.conda\envs\qdrant\Lib\site-packages\langchain\output_parsers\pydantic.py", line 29, in parse_result
raise OutputParserException(msg, llm_output=json_object)
langchain_core.exceptions.OutputParserException: Failed to parse LineList from completion 1. Got: 1 validation error for LineList
__root__
LineList expected dict not int (type=type_error)
```
### Description
Trying to use `MultiQueryRetriever` raises the `OutputParserException` above.
The base `retriever` works on its own, and `primary_qa_llm` works too.
Running on Windows.
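A possible stopgap until the retriever's built-in query parser is fixed — a sketch that assumes generating the query variants manually is acceptable — is to bypass `MultiQueryRetriever` and do the fan-out and de-duplication yourself:
```
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Generate alternative phrasings of the question with the same chat model.
prompt = ChatPromptTemplate.from_template(
    "Generate 3 different rephrasings of the following question, one per line:\n{question}"
)
generate_queries = prompt | primary_qa_llm | StrOutputParser()

question = "Where is nyc?"
queries = [q.strip() for q in generate_queries.invoke({"question": question}).splitlines() if q.strip()]
queries.append(question)

# Retrieve for each query and de-duplicate by page content.
seen, docs = set(), []
for q in queries:
    for doc in retriever.get_relevant_documents(q):
        if doc.page_content not in seen:
            seen.add(doc.page_content)
            docs.append(doc)
print(len(docs))
```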
### System Info
```
> pip list | findstr /i "langchain"
langchain 0.1.6
langchain-community 0.0.19
langchain-core 0.1.22
langchain-openai 0.0.5
langchainhub 0.1.14
```
Platform: Windows
Python version: 3.11.7 | MultiQueryRetriever is failing | https://api.github.com/repos/langchain-ai/langchain/issues/17352/comments | 13 | 2024-02-09T23:57:05Z | 2024-05-13T10:21:47Z | https://github.com/langchain-ai/langchain/issues/17352 | 2,127,990,898 | 17,352 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
below's the code which's present in the latest MultiQueryRetriever documentation
```
from typing import List
from langchain.chains import LLMChain
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from pydantic import BaseModel, Field
# Output parser will split the LLM result into a list of queries
class LineList(BaseModel):
# "lines" is the key (attribute name) of the parsed output
lines: List[str] = Field(description="Lines of text")
class LineListOutputParser(PydanticOutputParser):
def __init__(self) -> None:
super().__init__(pydantic_object=LineList)
def parse(self, text: str) -> LineList:
lines = text.strip().split("\n")
return LineList(lines=lines)
output_parser = LineListOutputParser()
QUERY_PROMPT = PromptTemplate(
input_variables=["question"],
template="""You are an AI language model assistant. Your task is to generate five
different versions of the given user question to retrieve relevant documents from a vector
database. By generating multiple perspectives on the user question, your goal is to help
the user overcome some of the limitations of the distance-based similarity search.
Provide these alternative questions separated by newlines.
Original question: {question}""",
)
llm = OpenAI(temperature=0)
# Chain
llm_chain = LLMChain(llm=llm, prompt=QUERY_PROMPT, output_parser=output_parser)
# Other inputs
question = "What are the approaches to Task Decomposition?"
# Run
retriever = MultiQueryRetriever(
retriever=vectordb.as_retriever(), llm_chain=llm_chain, parser_key="lines"
) # "lines" is the key (attribute name) of the parsed output
# Results
unique_docs = retriever.get_relevant_documents(
query="What does the course say about regression?"
)
len(unique_docs)
```
The above code returns the following error:
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
[<ipython-input-22-0cf7a0e69b40>](https://localhost:8080/#) in <cell line: 24>()
22
23
---> 24 output_parser = LineListOutputParser()
25
26 QUERY_PROMPT = PromptTemplate(
2 frames
[<ipython-input-22-0cf7a0e69b40>](https://localhost:8080/#) in __init__(self)
15 class LineListOutputParser(PydanticOutputParser):
16 def __init__(self) -> None:
---> 17 super().__init__(pydantic_object=LineList)
18
19 def parse(self, text: str) -> LineList:
[/usr/local/lib/python3.10/dist-packages/langchain_core/load/serializable.py](https://localhost:8080/#) in __init__(self, **kwargs)
105
106 def __init__(self, **kwargs: Any) -> None:
--> 107 super().__init__(**kwargs)
108 self._lc_kwargs = kwargs
109
[/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py](https://localhost:8080/#) in __init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for LineListOutputParser
pydantic_object
subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel)
```
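A likely cause — an assumption based on the error message, not a confirmed diagnosis — is that `LineList` is defined with Pydantic v2's `BaseModel`, while LangChain's `PydanticOutputParser` here still expects a Pydantic v1 model. A minimal sketch of that change:
```
# Use the v1-compatible BaseModel/Field that LangChain's parser expects.
from typing import List
from langchain_core.pydantic_v1 import BaseModel, Field

class LineList(BaseModel):
    lines: List[str] = Field(description="Lines of text")
```
Alternatively, subclassing `BaseOutputParser` and splitting lines directly avoids the Pydantic dependency in the parser altogether.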
Can you please look into this and provide the corrected code, rather than suggesting that I go through all the files? | MultiQueryRetriever documentation code is not executing | https://api.github.com/repos/langchain-ai/langchain/issues/17343/comments | 5 | 2024-02-09T21:16:25Z | 2024-02-12T22:07:02Z | https://github.com/langchain-ai/langchain/issues/17343 | 2,127,859,163 | 17,343
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
Below is the code, which I took directly from the MultiQueryRetriever LangChain documentation:
```
# Build a sample vectorDB
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import Chroma
# from langchain_openai import OpenAIEmbeddings
# Load blog post
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
data = loader.load()
# Split
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
splits = text_splitter.split_documents(data)
# VectorDB
embedding = OpenAIEmbeddings()
vectordb = Chroma.from_documents(documents=splits, embedding=embedding)
from langchain.retrievers.multi_query import MultiQueryRetriever
# from langchain_openai import ChatOpenAI
question = "What are the approaches to Task Decomposition?"
llm = OpenAI(temperature=0)
retriever_from_llm = MultiQueryRetriever.from_llm(
retriever=vectordb.as_retriever(), llm=llm
)
# Set logging for the queries
import logging
logging.basicConfig()
logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)
unique_docs = retriever_from_llm.get_relevant_documents(query=question)
len(unique_docs)
```
Below is the error it returns:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py](https://localhost:8080/#) in parse_obj(cls, obj)
521 try:
--> 522 obj = dict(obj)
523 except (TypeError, ValueError) as e:
TypeError: 'int' object is not iterable
The above exception was the direct cause of the following exception:
ValidationError Traceback (most recent call last)
14 frames
[/usr/local/lib/python3.10/dist-packages/langchain/output_parsers/pydantic.py](https://localhost:8080/#) in parse_result(self, result, partial)
24 try:
---> 25 return self.pydantic_object.parse_obj(json_object)
26 except ValidationError as e:
[/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py](https://localhost:8080/#) in parse_obj(cls, obj)
524 exc = TypeError(f'{cls.__name__} expected dict not {obj.__class__.__name__}')
--> 525 raise ValidationError([ErrorWrapper(exc, loc=ROOT_KEY)], cls) from e
526 return cls(**obj)
ValidationError: 1 validation error for LineList
__root__
LineList expected dict not int (type=type_error)
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
[<ipython-input-73-07101c8e33b2>](https://localhost:8080/#) in <cell line: 34>()
32 logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)
33
---> 34 unique_docs = retriever_from_llm.get_relevant_documents(query=question)
35 len(unique_docs)
[/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py](https://localhost:8080/#) in get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
222 except Exception as e:
223 run_manager.on_retriever_error(e)
--> 224 raise e
225 else:
226 run_manager.on_retriever_end(
[/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py](https://localhost:8080/#) in get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
215 _kwargs = kwargs if self._expects_other_args else {}
216 if self._new_arg_supported:
--> 217 result = self._get_relevant_documents(
218 query, run_manager=run_manager, **_kwargs
219 )
[/usr/local/lib/python3.10/dist-packages/langchain/retrievers/multi_query.py](https://localhost:8080/#) in _get_relevant_documents(self, query, run_manager)
170 Unique union of relevant documents from all generated queries
171 """
--> 172 queries = self.generate_queries(query, run_manager)
173 if self.include_original:
174 queries.append(query)
[/usr/local/lib/python3.10/dist-packages/langchain/retrievers/multi_query.py](https://localhost:8080/#) in generate_queries(self, question, run_manager)
187 List of LLM generated queries that are similar to the user input
188 """
--> 189 response = self.llm_chain(
190 {"question": question}, callbacks=run_manager.get_child()
191 )
[/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py](https://localhost:8080/#) in warning_emitting_wrapper(*args, **kwargs)
143 warned = True
144 emit_warning()
--> 145 return wrapped(*args, **kwargs)
146
147 async def awarning_emitting_wrapper(*args: Any, **kwargs: Any) -> Any:
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in __call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
361 }
362
--> 363 return self.invoke(
364 inputs,
365 cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
160 except BaseException as e:
161 run_manager.on_chain_error(e)
--> 162 raise e
163 run_manager.on_chain_end(outputs)
164 final_outputs: Dict[str, Any] = self.prep_outputs(
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
154 try:
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
[/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py](https://localhost:8080/#) in _call(self, inputs, run_manager)
102 ) -> Dict[str, str]:
103 response = self.generate([inputs], run_manager=run_manager)
--> 104 return self.create_outputs(response)[0]
105
106 def generate(
[/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py](https://localhost:8080/#) in create_outputs(self, llm_result)
256 def create_outputs(self, llm_result: LLMResult) -> List[Dict[str, Any]]:
257 """Create outputs from response."""
--> 258 result = [
259 # Get the text of the top generated string.
260 {
[/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py](https://localhost:8080/#) in <listcomp>(.0)
259 # Get the text of the top generated string.
260 {
--> 261 self.output_key: self.output_parser.parse_result(generation),
262 "full_generation": generation,
263 }
[/usr/local/lib/python3.10/dist-packages/langchain/output_parsers/pydantic.py](https://localhost:8080/#) in parse_result(self, result, partial)
27 name = self.pydantic_object.__name__
28 msg = f"Failed to parse {name} from completion {json_object}. Got: {e}"
---> 29 raise OutputParserException(msg, llm_output=json_object)
30
31 def get_format_instructions(self) -> str:
OutputParserException: Failed to parse LineList from completion 1. Got: 1 validation error for LineList
__root__
LineList expected dict not int (type=type_error)
```
The same code was running yesterday, but it is returning an error today, so there must be an issue on the LangChain side itself. Can you have a look at it? | MultiQueryRetriever documentation code itself is not executing | https://api.github.com/repos/langchain-ai/langchain/issues/17342/comments | 7 | 2024-02-09T20:23:23Z | 2024-02-12T22:06:47Z | https://github.com/langchain-ai/langchain/issues/17342 | 2,127,799,180 | 17,342