| issue_owner_repo (list, length 2) | issue_body (string, 0-261k chars, nullable) | issue_title (string, 1-925 chars) | issue_comments_url (string, 56-81 chars) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37-62 chars) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### System Info
langchain v0.1.0
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field
from typing import Any, Dict
from langchain_core.outputs.llm_result import LLMResult
from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.chains import LLMChain
from langchain.callbacks.base import BaseCallbackHandler

class CustomCallBack(BaseCallbackHandler):
    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:
        print(f"on_llm_end => {response}")

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> Any:
        print(f"on_chain_end => {outputs}")

    def on_text(self, text: str, **kwargs: Any) -> Any:
        print(f"on_text => {text}")

class TennisPlayer(BaseModel):
    age: int = Field(description="Age of the player")
    nb_victories: int = Field(description="Nb of victories in major tournaments")

# Instantiate the LLM here
llm = None

parser = PydanticOutputParser(pydantic_object=TennisPlayer)
prompt = "Give me some information about Rafael Nadal.\n" \
         "You'll answer with the following format:\n" \
         "{format_instructions}"
chat_prompt = ChatPromptTemplate.from_messages([HumanMessagePromptTemplate.from_template(prompt)])
chain = LLMChain(llm=llm, prompt=chat_prompt, callbacks=[CustomCallBack()], output_parser=parser)
chain.invoke(input={"format_instructions": parser.get_format_instructions()})
```
### Expected behavior
The custom callback handler makes it possible to intercept the prompt sent to the LLM (through _on_text_) and the output in _on_chain_end_. The problem is that when an output parser is involved, the _outputs_ dictionary of _on_chain_end_ associates the "text" key with the final constructed object rather than with the raw output message (the JSON that was marshalled into that object). And for an unknown reason, the _on_llm_end_ callback function isn't invoked at all.
When something goes wrong in the marshalling process, analysing the LLM's raw output message is essential. It doesn't seem abnormal to get the final object produced by the chain in the _on_chain_end_ callback, but in that case I would expect the _on_llm_end_ callback to be called just before, with the raw output message as a parameter. It is not, so at this stage it's not possible to intercept the raw LLM output message for debugging purposes. | Intercepting the output message in a callback handler before it is sent to the output parser | https://api.github.com/repos/langchain-ai/langchain/issues/15830/comments | 5 | 2024-01-10T17:27:36Z | 2024-04-18T16:21:24Z | https://github.com/langchain-ai/langchain/issues/15830 | 2,074,843,378 | 15,830 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain 0.1.0
python 3.10
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
The error can be reproduced as follows:
```python
...

#################################################################
## get multiple queries to be searched on the web
query_generation_chain = (
    search_queries_prompt
    | llm.bind(stop=TOGETHER_STOP_KEYWORDS)
    | CommaSeparatedListOutputParser()
)

#################################################################
## scrape and summarize webpages based on urls
summarize_chain = RunnablePassthrough.assign(
    summary=RunnablePassthrough.assign(text=lambda x: scrape_webpage(x["url"])[:10_000])
    | summarize_prompt
    | llm
    | StrOutputParser(),
) | (lambda x: f'URL: {x["url"]} \n\nSUMMARY: {x["summary"]}')

chain = (
    RunnablePassthrough.assign(urls=query_generation_chain | fetch_links_from_web)
    | RunnableLambda(lambda x: [{"question": x["question"], "url": url} for url in x["urls"]])
    | summarize_chain.map()  ## generate a list of summarized articles, one per link
    | (lambda reports: "\n\n".join(reports))  ## combine the summaries into a report
)
```
If I invoke `get_graph()` on the chain like this:
```python
chain.get_graph()
```
I get this error:
```console
Traceback (most recent call last):
File "/home/joede/dev/llm_playground/researcher/main.py", line 30, in <module>
report_writer_chain.get_graph().print_ascii()
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1690, in get_graph
step_graph = step.get_graph(config)
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/passthrough.py", line 379, in get_graph
graph = self.mapper.get_graph(config)
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2282, in get_graph
step_graph = step.get_graph()
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/passthrough.py", line 379, in get_graph
graph = self.mapper.get_graph(config)
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2282, in get_graph
step_graph = step.get_graph()
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1690, in get_graph
step_graph = step.get_graph(config)
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2906, in get_graph
graph = super().get_graph(config)
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 399, in get_graph
output_node = graph.add_node(self.get_output_schema(config))
File "/home/joede/.cache/pypoetry/virtualenvs/researcher-Jp5_VGHR-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 331, in get_output_schema
if inspect.isclass(root_type) and issubclass(root_type, BaseModel):
File "/usr/lib/python3.10/abc.py", line 123, in __subclasscheck__
return _abc_subclasscheck(cls, subclass)
TypeError: issubclass() arg 1 must be a class
```
On further inspection, I found it errors when it gets here:
```python
def get_output_schema(...):
    ...
    root_type = self.OutputType
    if inspect.isclass(root_type) and issubclass(root_type, BaseModel):
        return root_type
```
in `langchain_core/runnables/base.py`.
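A minimal illustration of why that check can blow up when a step's `OutputType` resolves to a builtin generic such as `list[str]` (this matches the Python 3.10 environment of this report; it is my reading of the traceback, not a confirmed root cause):
```python
import inspect
from pydantic import BaseModel

root_type = list[str]              # a generic alias, not a real class
print(inspect.isclass(root_type))  # True on Python 3.9/3.10, so the guard is passed
issubclass(root_type, BaseModel)   # TypeError: issubclass() arg 1 must be a class
```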
### Expected behavior
The expected output should be an ASCII graph that looks like this:
```console
+---------------------------------+
| Parallel<research_summary>Input |
+---------------------------------+
**** *******
*** *******
** ******
+---------------------+ ****
| Parallel<urls>Input | *
+---------------------+ *
*** **** *
**** *** *
** **** *
+--------------------+ ** *
| ChatPromptTemplate | * *
+--------------------+ * *
* * *
* * *
* * *
+---------------+ * *
| WithFallbacks | * *
+---------------+ * *
* * *
* * *
* * *
+--------------------------------+ * *
| CommaSeparatedListOutputParser | * *
+--------------------------------+ * *
* * *
* * *
* * *
+------------------------------+ +-------------+ *
| Lambda(fetch_links_from_web) | | Passthrough | *
+------------------------------+ *+-------------+ *
*** **** *
**** **** *
** ** *
+----------------------+ +-------------+
| Parallel<urls>Output | **| Passthrough |
+----------------------+ ******* +-------------+
**** ******
*** *******
** ****
+----------------------------------+
| Parallel<research_summary>Output |
+----------------------------------+
*
*
*
+--------------------+
| ChatPromptTemplate |
+--------------------+
*
*
*
+---------------+
| WithFallbacks |
+---------------+
*
*
*
+-----------------+
| StrOutputParser |
+-----------------+
*
*
*
+-----------------------+
| StrOutputParserOutput |
+-----------------------+
``` | `chain.get_graph()` doesn't play nicely with `chain.map()` or `list[str]` | https://api.github.com/repos/langchain-ai/langchain/issues/15828/comments | 1 | 2024-01-10T17:15:20Z | 2024-04-17T16:18:38Z | https://github.com/langchain-ai/langchain/issues/15828 | 2,074,820,818 | 15,828 |
[
"hwchase17",
"langchain"
] | ### System Info
python=3.11
langchain= latest
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
While running code with `create_pandas_dataframe_agent`, it throws a KeyError.
```python
from langchain.agents.agent_types import AgentType
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
from langchain_openai import ChatOpenAI
import pandas as pd
from langchain_openai import OpenAI

df = pd.read_csv(r"C:\Users\rndbcpsoft\OneDrive\Desktop\test\chat_data_2024-01-05_13-20-11.csv")

# agent = create_pandas_dataframe_agent(
#     ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
#     df,
#     verbose=True,
#     agent_type=AgentType.OPENAI_FUNCTIONS,
# )

llm = ChatOpenAI(
    temperature=0, model="gpt-3.5-turbo-0613", openai_api_key=openapi_key, streaming=True
)

pandas_df_agent = create_pandas_dataframe_agent(
    llm,
    df,
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    handle_parsing_errors=True,
)
```
error:
```console
KeyError                                  Traceback (most recent call last)
Cell In[7], line 12
      1 # agent = create_pandas_dataframe_agent(
      2 #     ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
      3 #     df,
      4 #     verbose=True,
      5 #     agent_type=AgentType.OPENAI_FUNCTIONS,
      6 # )
      8 llm = ChatOpenAI(
      9     temperature=0, model="gpt-3.5-turbo-0613", openai_api_key=openapi_key, streaming=True
     10 )
---> 12 pandas_df_agent = create_pandas_dataframe_agent(
     13     llm,
     14     df,
     15     verbose=True,
     16     agent_type=AgentType.OPENAI_FUNCTIONS,
     17     handle_parsing_errors=True,
     18 )

File c:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain_experimental\agents\agent_toolkits\pandas\base.py:322, in create_pandas_dataframe_agent(llm, df, agent_type, callback_manager, prefix, suffix, input_variables, verbose, return_intermediate_steps, max_iterations, max_execution_time, early_stopping_method, agent_executor_kwargs, include_df_in_prompt, number_of_head_rows, extra_tools, **kwargs)
    313 _prompt, base_tools = _get_functions_prompt_and_tools(
    314     df,
    315     prefix=prefix,
    (...)
...
File c:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\agents\openai_functions_agent\base.py:57
---> 57 if not isinstance(values["llm"], ChatOpenAI):
     58     raise ValueError("Only supported with ChatOpenAI models.")
     59 return values

KeyError: 'llm'
```
### Expected behavior
`create_pandas_dataframe_agent` should create the agent from the ChatOpenAI model without raising `KeyError: 'llm'`. | KeyError: 'llm' in create_pandas_dataframe_agent | https://api.github.com/repos/langchain-ai/langchain/issues/15819/comments | 4 | 2024-01-10T13:34:12Z | 2024-04-18T16:36:53Z | https://github.com/langchain-ai/langchain/issues/15819 | 2,074,391,510 | 15,819 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Can someone please help me pass a LlamaCpp instance into LangChain's ConversationalRetrievalChain that uses a retriever?
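For reference, a minimal sketch of one way this can be wired up with `ConversationalRetrievalChain.from_llm`; the GGUF model path and the sample text are placeholders, not part of the original question:
```python
from langchain_community.llms import LlamaCpp
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.chains import ConversationalRetrievalChain

# Placeholder local model and toy corpus
llm = LlamaCpp(model_path="models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048, temperature=0.2)
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
vectorstore = FAISS.from_texts(["LangChain can run local models through llama.cpp."], embeddings)
retriever = vectorstore.as_retriever()

qa = ConversationalRetrievalChain.from_llm(llm=llm, retriever=retriever)
result = qa({"question": "Which local backend is mentioned?", "chat_history": []})
print(result["answer"])
```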
### Suggestion:
_No response_ | using LLamaCPP with conversational retrieval chain. | https://api.github.com/repos/langchain-ai/langchain/issues/15818/comments | 1 | 2024-01-10T13:29:03Z | 2024-04-17T16:16:51Z | https://github.com/langchain-ai/langchain/issues/15818 | 2,074,381,981 | 15,818 |
[
"hwchase17",
"langchain"
] | ### Feature request
The current document_loaders accept a file path to process. But most of the time, especially when the application is deployed somewhere, the file is uploaded by a user and does not exist on the file system.
Writing those in-memory bytes to disk and re-reading them is an unnecessary step.
It would be good to accept BytesIO or some abstraction for processing in-memory files.
### Motivation
It will eliminate writing in-memory files to disk and re-reading them from disk while using document_loaders.
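For illustration, a minimal sketch of the temp-file round trip this request would eliminate (the PDF loader and suffix are only examples, not the proposed API):
```python
import tempfile
from io import BytesIO
from langchain_community.document_loaders import PyPDFLoader

def load_pdf_from_bytes(data: BytesIO):
    # Today the in-memory bytes have to be written to disk first,
    # because the loader only accepts a file path.
    with tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as tmp:
        tmp.write(data.getvalue())
        path = tmp.name
    return PyPDFLoader(path).load()
```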
### Your contribution
I can create a PR for this. | document_loaders to support BytesIO or an interface for in-memory objects | https://api.github.com/repos/langchain-ai/langchain/issues/15815/comments | 6 | 2024-01-10T12:36:25Z | 2024-04-17T16:20:25Z | https://github.com/langchain-ai/langchain/issues/15815 | 2,074,285,594 | 15,815 |
[
"hwchase17",
"langchain"
] | ### System Info
LC version: 0.1.0
Platform: MacOS
Python version: 3.12.1
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
Python 3.12.1 (main, Jan 9 2024, 18:02:09) [Clang 15.0.0 (clang-1500.0.40.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from langchain_openai import AzureChatOpenAI
>>> from langchain_core.runnables import ConfigurableField
>>> ConfigurableAzureChatOpenAI = AzureChatOpenAI(
... openai_api_key = "asdfg",
... openai_api_version = "asdg",
... deployment_name='asdg',
... azure_endpoint="asdg",
... temperature=0.9
... ).configurable_fields(
... azure_endpoint=ConfigurableField(id="azure_endpoint"),
... openai_api_key=ConfigurableField(id="openai_api_key"),
... azure_deployment=ConfigurableField(id="deployment_name"),
... openai_api_version=ConfigurableField(id="openai_api_version"),
... )
Traceback (most recent call last):
File "<stdin>", line 7, in <module>
File "/Users/pramodh/.pyenv/versions/3.12.1/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1368, in configurable_fields
raise ValueError(
ValueError: Configuration key azure_deployment not found in client=<openai.resources.chat.completions.Completions object at 0x1079f4350> async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x1079f58e0> temperature=0.9 openai_api_key='asdfg' openai_proxy='' azure_endpoint='asdg' deployment_name='asdg' openai_api_version='asdg' openai_api_type='azure': available keys are {self.__fields__.keys()}
```
### Expected behavior
`azure_deployment` is an alias for `deployment_name` defined inside `AzureChatOpenAI`, but it cannot be set as a `ConfigurableField` - we instead have to set `deployment_name` as a ConfigurableField.
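As a point of comparison, a sketch of the configuration that does work today (same placeholder values as above), using the field name instead of its alias:
```python
ConfigurableAzureChatOpenAI = AzureChatOpenAI(
    openai_api_key="asdfg",
    openai_api_version="asdg",
    deployment_name="asdg",
    azure_endpoint="asdg",
    temperature=0.9,
).configurable_fields(
    azure_endpoint=ConfigurableField(id="azure_endpoint"),
    openai_api_key=ConfigurableField(id="openai_api_key"),
    deployment_name=ConfigurableField(id="deployment_name"),  # field name accepted; the alias is not
    openai_api_version=ConfigurableField(id="openai_api_version"),
)
```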
I would expect the original snippet (the one using `azure_deployment`) not to throw an error, since the two names are just aliases. | AzureChatOpenAI: `Configuration key azure_deployment not found in client` | https://api.github.com/repos/langchain-ai/langchain/issues/15814/comments | 1 | 2024-01-10T12:30:52Z | 2024-04-17T16:27:44Z | https://github.com/langchain-ai/langchain/issues/15814 | 2,074,275,377 | 15,814 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I have tried several tests, even the most basic example from the doc, and nothing works. Disappointed, because it got me super excited at first:
```python
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain.schema import HumanMessage

model = OllamaFunctions(model="dolphinmodel",)
model = model.bind(
    functions=[
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, " "e.g. San Francisco, CA",
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                    },
                },
                "required": ["location"],
            },
        }
    ],
    function_call={"name": "get_current_weather"},
)
model.invoke("what is the weather in Boston?")
```
### Idea or request for content:
_No response_ | DOC: <https://python.langchain.com/docs/integrations/chat/ollama_functions 'DOC: ' prefix>ollamafunctions not working at all | https://api.github.com/repos/langchain-ai/langchain/issues/15808/comments | 2 | 2024-01-10T09:17:02Z | 2024-07-04T16:07:33Z | https://github.com/langchain-ai/langchain/issues/15808 | 2,073,927,465 | 15,808 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```python
import os
from langchain.prompts.prompt import PromptTemplate
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.chains import (
    ConversationalRetrievalChain,
    LLMChain
)
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.callbacks import CallbackManager
from qdrant_client import QdrantClient
from langchain.vectorstores import Qdrant

os.environ['OPENAI_API_KEY'] = "mykey"

embeddings = HuggingFaceEmbeddings(
    model_name="all-MiniLM-L6-v2"
)

llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)

streaming_llm = OpenAI(
    streaming=True,
    callback_manager=CallbackManager([
        StreamingStdOutCallbackHandler()
    ]),
    verbose=True,
    max_tokens=150,
    temperature=0.2
)

condense_question_prompt = PromptTemplate.from_template(
    "在幹嘛"  # "What are you doing" (placeholder prompt)
)
qa_prompt = PromptTemplate.from_template("測")  # "test" (placeholder prompt)

question_generator = LLMChain(
    llm=llm,
    prompt=condense_question_prompt
)
doc_chain = load_qa_chain(
    llm=streaming_llm,
    chain_type="stuff",
    prompt=qa_prompt
)

client = QdrantClient(host="192.168.0.31", port=6333)
collection_name = "test"
vectorstore = Qdrant(client, collection_name,
                     embedding_function=embeddings.embed_query)

chatbot = ConversationalRetrievalChain(
    retriever=vectorstore.as_retriever(),
    combine_docs_chain=doc_chain,
    question_generator=question_generator
)

chat_history = []
question = input("Hi! What are you looking for today?")
while True:
    result = chatbot(
        {"question": question, "chat_history": chat_history}
    )
    print("\n")
    chat_history.append((result["question"], result["answer"]))
    question = input()
```

### Suggestion:
Why can't I store and retrieve vectors? Please help me fix it. | Why can't I store and retrieve vectors? Please help me fix it. | https://api.github.com/repos/langchain-ai/langchain/issues/15806/comments | 1 | 2024-01-10T08:47:26Z | 2024-04-17T16:17:52Z | https://github.com/langchain-ai/langchain/issues/15806 | 2,073,877,371 | 15,806 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```python
from langchain.chains import ConversationalRetrievalChain, ConversationChain, LLMChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, MessagesPlaceholder, \
    HumanMessagePromptTemplate
from qdrant_client import QdrantClient, models
import os
from qdrant_client.grpc import PointStruct

os.environ['OPENAI_API_KEY'] = "mykey"
COLLECTION_NAME = "teeeeee"

embeddings = HuggingFaceEmbeddings(
    model_name="all-MiniLM-L6-v2"
)
print("已成功連線到Qdrant")  # "Successfully connected to Qdrant"

def connection():
    client = QdrantClient(host="192.168.0.311", port=6333)
    client.recreate_collection(
        collection_name=COLLECTION_NAME,
        vectors_config=models.VectorParams(
            distance=models.Distance.COSINE,
            size=384),
        optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
        hnsw_config=models.HnswConfigDiff(on_disk=True, m=16, ef_construct=100)
    )
    return client

prompt = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template(
            "你是耶米菈."  # "You are Yemila."
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{question}"),
    ]
)

llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
memory_token_limit = 2000
memory = ConversationTokenBufferMemory(
    llm=llm,
    prompt=prompt,
    max_token_limit=int(memory_token_limit),
    memory_key="chat_history",
    return_messages=True,
)
conversation = LLMChain(llm=llm, prompt=prompt, verbose=True, memory=memory)
chat_history = []

def upsert_vector(client, vectors, data):
    for i, vector in enumerate(vectors):
        client.upsert(
            collection_name=COLLECTION_NAME,
            points=[PointStruct(id=i,
                                vector=vectors[i],
                                payload=data[i])]
        )
    print("upsert finish")

def search_from_qdrant(client, vector, k=1):
    search_result = client.search(
        collection_name=COLLECTION_NAME,
        query_vector=vector,
        limit=k,
        append_payload=True,
    )
    return search_result

def get_embedding(text, model_name):
    while True:
        memory.load_memory_variables({})
        question = input('提問:')  # "Question:"
        result = conversation.run({'question': question, 'chat_history': chat_history})
        print(result)
        chat_history.append([f'User: {question}', f'Ai: {result}'])
        print(chat_history)
        st_history = ' '.join(map(str, chat_history))
        res = embeddings.embed_query(st_history)
        print(f'ok: {res[:4]}...')
        if question.lower() == 'bye':
            break
    return st_history

def main():
    qclient = connection()
    data_objs = [
        {
            "id": 1,
            "teeeeee": f"我是阿狗。你叫做阿狗"  # "I am A-Gou. You are called A-Gou."
        },
    ]
    embedding_array = [get_embedding(text["teeeeee"], embeddings)
                       for text in data_objs]
    upsert_vector(qclient, embedding_array, data_objs)
    query_text = "請複誦我剛才所說的話"  # "Please repeat what I just said"
    query_embedding = get_embedding(query_text, embeddings)
    results = search_from_qdrant(qclient, query_embedding, k=1)
    print(f"尋找 {query_text}:", results)  # "Searching for {query_text}:"

if __name__ == '__main__':
    main()
```
Execution result:
```console
Traceback (most recent call last):
  File "C:\Users\syz\Downloads\Chat-Bot-using-gpt-3.5-turbo-main\models\測.py", line 117, in <module>
    main()
  File "C:\Users\syz\Downloads\Chat-Bot-using-gpt-3.5-turbo-main\models\測.py", line 109, in main
    upsert_vector(qclient, embedding_array, data_objs)
  File "C:\Users\syz\Downloads\Chat-Bot-using-gpt-3.5-turbo-main\models\測.py", line 63, in upsert_vector
    points=[PointStruct(id=i,
           ^^^^^^^^^^^^^^^^^
TypeError: Message must be initialized with a dict: qdrant.PointStruct
```
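The error message says the gRPC protobuf class must be initialized with a dict, which points at the `from qdrant_client.grpc import PointStruct` import; a sketch of the Pydantic model the qdrant-client examples use instead (assuming that is the intent here):
```python
from qdrant_client.models import PointStruct

# Keyword construction works on the model class, unlike the protobuf message.
point = PointStruct(id=1, vector=[0.1, 0.2, 0.3], payload={"id": 1, "teeeeee": "example"})
```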
### Suggestion:
_No response_ | Why can't you search vectors? | https://api.github.com/repos/langchain-ai/langchain/issues/15804/comments | 1 | 2024-01-10T07:35:35Z | 2024-04-17T16:22:20Z | https://github.com/langchain-ai/langchain/issues/15804 | 2,073,772,510 | 15,804 |
[
"hwchase17",
"langchain"
] | ### System Info
Issue with current documentation:
I was reading the documentation and noticed a minor issue with the pagination navigation on the modules/model_io/concepts page. Both the "Previous" and "Next" links currently point to the same page ('model_io'), which may lead to confusion for users.

### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
// The error is on the documentation website.
### Expected behavior
Upon reviewing the content, I believe that the "Next" link should navigate users to the 'prompts' page of the 'model_io' section, providing a seamless transition for readers.
| DOC: modules/model_io/concepts in documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15803/comments | 1 | 2024-01-10T07:33:55Z | 2024-04-17T16:17:13Z | https://github.com/langchain-ai/langchain/issues/15803 | 2,073,770,325 | 15,803 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```python
from langchain.chains import ConversationalRetrievalChain, ConversationChain, LLMChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, MessagesPlaceholder, \
    HumanMessagePromptTemplate
from qdrant_client import QdrantClient, models
import os
from qdrant_client.grpc import PointStruct

os.environ['OPENAI_API_KEY'] = "mykey"
COLLECTION_NAME = "lyric"

embeddings = HuggingFaceEmbeddings(
    model_name="all-MiniLM-L6-v2"
)
print("已成功連線到Qdrant")  # "Successfully connected to Qdrant"

def connection():
    client = QdrantClient(host="192.168.0.28", port=6333)
    client.recreate_collection(
        collection_name=COLLECTION_NAME,
        vectors_config=models.VectorParams(
            distance=models.Distance.COSINE,
            size=1536),
        optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
        hnsw_config=models.HnswConfigDiff(on_disk=True, m=16, ef_construct=100)
    )
    return client

prompt = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template(
            "you are robot."
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{question}"),
    ]
)

llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
memory_token_limit = 2000
memory = ConversationTokenBufferMemory(
    llm=llm,
    prompt=prompt,
    max_token_limit=int(memory_token_limit),
    memory_key="chat_history",
    return_messages=True,
)
conversation = LLMChain(llm=llm, prompt=prompt, verbose=True, memory=memory)
chat_history = []

def upsert_vector(client, vectors, data):
    for i, vector in enumerate(vectors):
        client.upsert(
            collection_name=COLLECTION_NAME,
            points=[PointStruct(id=i,
                                vector=vectors[i],
                                payload=data[i])]
        )
    print("upsert finish")

def search_from_qdrant(client, vector, k=1):
    search_result = client.search(
        collection_name=COLLECTION_NAME,
        query_vector=vector,
        limit=k,
        append_payload=True,
    )
    return search_result

def main():
    qclient = connection()
    while True:
        memory.load_memory_variables({})
        question = input('提問:')  # "Question:"
        result = conversation.run({'question': question, 'chat_history': chat_history})
        print(result)
        chat_history.append([f'User: {question}', f'Ai: {result}'])
        print(chat_history)
        st_history = ' '.join(map(str, chat_history))
        res = embeddings.embed_query(st_history)
        print(f'ok: {res[:4]}...')
        if question.lower() == 'bye':
            break
    data_objs = [
        {
            "id": 1,
            "lyric": f"{res}"
        },
    ]
    embedding_array = [res(text["lyric"], embeddings)
                       for text in data_objs]
    upsert_vector(qclient, embedding_array, data_objs)
    query_text = "Please repeat what I just said"
    query_embedding = res(query_text, embeddings)
    results = search_from_qdrant(qclient, query_embedding, k=1)
    print(f"select {query_text}:", results)

if __name__ == '__main__':
    main()
```
Why can't I pass 'res' to embedding_array and perform vector search? Also, please help me find out where else I might be going wrong
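For reference, a sketch of what the list comprehension presumably intends: `res` is a list returned by `embed_query`, so it cannot be called like a function; embedding each payload text directly would look like the following. This is an assumption about the intent, not a verified fix, and note that all-MiniLM-L6-v2 produces 384-dimensional vectors, so a collection created with size=1536 will not match them.
```python
# Embed the payload text itself instead of calling the earlier `res` list.
embedding_array = [embeddings.embed_query(obj["lyric"]) for obj in data_objs]
query_embedding = embeddings.embed_query(query_text)
```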

### Suggestion:
_No response_ | Why can't I pass 'res' to embedding_array and perform vector search? Also, please help me find out where else I might be going wrong | https://api.github.com/repos/langchain-ai/langchain/issues/15802/comments | 1 | 2024-01-10T06:35:58Z | 2024-04-17T16:25:14Z | https://github.com/langchain-ai/langchain/issues/15802 | 2,073,696,335 | 15,802 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.350
python==3.9.2rc1
### Who can help?
@agola11
Sample code
```python
# Assumed imports (not shown in the original snippet):
import os
from langchain.llms import LlamaCpp

from langchain.output_parsers import ResponseSchema, StructuredOutputParser
from langchain.prompts import PromptTemplate

response_schemas = [
    ResponseSchema(name="result", description="answer to the user's question"),
    ResponseSchema(
        name="source_documents",
        description="source used to answer the user's question",
    ),
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
#
format_instructions = output_parser.get_format_instructions()
#
llms = LlamaCpp(streaming=True,
                model_path=r"C:\Users\PLNAYAK\Documents\Local_LLM_Inference\zephyr-7b-alpha.Q4_K_M.gguf",
                max_tokens=500,
                temperature=0.75,
                top_p=1,
                model_kwargs={"gpu_layers": 0, "stream": True},
                verbose=True, n_threads=int(os.cpu_count() / 2),
                n_ctx=4096)
#
prompt = PromptTemplate(
    template="Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\nHelpful Answer:",
    input_variables=["context", "question"],
    partial_variables={"format_instructions": format_instructions},
    output_parser=output_parser
)
#
chain = prompt | llms | output_parser
chain.invoke({"question": query, "context": complete_context})
```
Error Log
```console
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[41], line 1
----> 1 chain.invoke({"question":query,"context":complete_context})

File c:\Users\PLNAYAK\Documents\Local_LLM_Inference\llms\lib\site-packages\langchain_core\runnables\base.py:1514, in RunnableSequence.invoke(self, input, config)
   1512 try:
   1513     for i, step in enumerate(self.steps):
-> 1514         input = step.invoke(
   1515             input,
   1516             # mark each step as a child run
   1517             patch_config(
   1518                 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
   1519             ),
   1520         )
   1521     # finish the root run
   1522 except BaseException as e:

File c:\Users\PLNAYAK\Documents\Local_LLM_Inference\llms\lib\site-packages\langchain_core\prompts\base.py:94, in BasePromptTemplate.invoke(self, input, config)
     91 def invoke(
     92     self, input: Dict, config: Optional[RunnableConfig] = None
     93 ) -> PromptValue:
---> 94     return self._call_with_config(
     95         self._format_prompt_with_error_handling,
     96         input,
     97         config,
     98         run_type="prompt",
     99     )

File c:\Users\PLNAYAK\Documents\Local_LLM_Inference\llms\lib\site-packages\langchain_core\runnables\base.py:886, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
    879 run_manager = callback_manager.on_chain_start(
    880     dumpd(self),
    881     input,
    882     run_type=run_type,
    883     name=config.get("run_name"),
    884 )
    885 try:
--> 886     output = call_func_with_variable_args(
    887         func, input, config, run_manager, **kwargs
    888     )
    889 except BaseException as e:
    890     run_manager.on_chain_error(e)

File c:\Users\PLNAYAK\Documents\Local_LLM_Inference\llms\lib\site-packages\langchain_core\runnables\config.py:308, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
    306 if run_manager is not None and accepts_run_manager(func):
    307     kwargs["run_manager"] = run_manager
--> 308 return func(input, **kwargs)

File c:\Users\PLNAYAK\Documents\Local_LLM_Inference\llms\lib\site-packages\langchain_core\prompts\base.py:89, in BasePromptTemplate._format_prompt_with_error_handling(self, inner_input)
     83 except KeyError as e:
     84     raise KeyError(
     85         f"Input to {self.__class__.__name__} is missing variable {e}. "
     86         f" Expected: {self.input_variables}"
     87         f" Received: {list(inner_input.keys())}"
     88     ) from e
---> 89 return self.format_prompt(**input_dict)

File c:\Users\PLNAYAK\Documents\Local_LLM_Inference\llms\lib\site-packages\langchain_core\prompts\string.py:161, in StringPromptTemplate.format_prompt(self, **kwargs)
    159 def format_prompt(self, **kwargs: Any) -> PromptValue:
    160     """Create Chat Messages."""
--> 161     return StringPromptValue(text=self.format(**kwargs))

File c:\Users\PLNAYAK\Documents\Local_LLM_Inference\llms\lib\site-packages\langchain_core\prompts\prompt.py:132, in PromptTemplate.format(self, **kwargs)
    117 """Format the prompt with the inputs.
    118
    119 Args:
    (...)
    129     prompt.format(variable1="foo")
    130 """
    131 kwargs = self._merge_partial_and_user_variables(**kwargs)
--> 132 return DEFAULT_FORMATTER_MAPPING[self.template_format](self.template, **kwargs)

File C:\Program Files\Python39\lib\string.py:161, in Formatter.format(self, format_string, *args, **kwargs)
    160 def format(self, format_string, /, *args, **kwargs):
--> 161     return self.vformat(format_string, args, kwargs)

File c:\Users\PLNAYAK\Documents\Local_LLM_Inference\llms\lib\site-packages\langchain_core\utils\formatting.py:29, in StrictFormatter.vformat(self, format_string, args, kwargs)
     24 if len(args) > 0:
     25     raise ValueError(
     26         "No arguments should be provided, "
     27         "everything should be passed as keyword arguments."
     28     )
---> 29 return super().vformat(format_string, args, kwargs)

File C:\Program Files\Python39\lib\string.py:166, in Formatter.vformat(self, format_string, args, kwargs)
    164 used_args = set()
    165 result, _ = self._vformat(format_string, args, kwargs, used_args, 2)
--> 166 self.check_unused_args(used_args, args, kwargs)
    167 return result

File c:\Users\PLNAYAK\Documents\Local_LLM_Inference\llms\lib\site-packages\langchain_core\utils\formatting.py:18, in StrictFormatter.check_unused_args(self, used_args, args, kwargs)
     16 extra = set(kwargs).difference(used_args)
     17 if extra:
---> 18     raise KeyError(extra)

KeyError: {'format_instructions'}
```
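For what it's worth, the final frame is `StrictFormatter.check_unused_args`, which raises because the `format_instructions` partial variable is supplied but never referenced in the template string. A sketch of a template that does reference it (the wording is illustrative only):
```python
prompt = PromptTemplate(
    template=(
        "Use the following pieces of context to answer the question at the end. "
        "If you don't know the answer, just say that you don't know.\n\n"
        "{context}\n\nQuestion: {question}\n{format_instructions}\nHelpful Answer:"
    ),
    input_variables=["context", "question"],
    partial_variables={"format_instructions": format_instructions},
)
```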
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Running the sample code shown above (under "Who can help?") reproduces the error.
### Expected behavior
It should return output in a structured format | Encounter Error (KeyError: {'format_instructions'})while using StructuredOutputParser | https://api.github.com/repos/langchain-ai/langchain/issues/15801/comments | 2 | 2024-01-10T06:00:17Z | 2024-06-14T16:08:42Z | https://github.com/langchain-ai/langchain/issues/15801 | 2,073,649,435 | 15,801 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.352
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# Define LLM to generate response
llm = VertexAI(model_name='text-bison@001', max_output_tokens=512, temperature=0.2)

if not message:
    message = request.form.get('userInput')

template = """
appropriate custom prompt context...
{context}
Question: {question}
"""
qa_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", template),
        MessagesPlaceholder(variable_name="chat_history"),
        ("user", "{question}"),
    ]
)

rag_chain = (
    RunnablePassthrough.assign(
        context=contextualized_question | sub_retriever
    )
    | qa_prompt
    | llm
    | remove_prefix
)

response = rag_chain.invoke(({"question": message, "chat_history": memory.get_history()}))
memory.add_interaction(message, response)
```
### Expected behavior
I want to get the intermediate output of the contextualized_question chain in
```python
RunnablePassthrough.assign(
context=contextualized_question | sub_retriever
)
```
so that I can easily debug the whole process.
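For reference, a minimal sketch of one way to surface that intermediate value with a pass-through lambda; the chain pieces are the same placeholders as above, and the rest of the chain is abbreviated:
```python
from langchain_core.runnables import RunnableLambda, RunnablePassthrough

def tap(x):
    # Print the intermediate value and pass it through unchanged.
    print("contextualized question ->", x)
    return x

rag_chain = (
    RunnablePassthrough.assign(
        context=contextualized_question | RunnableLambda(tap) | sub_retriever
    )
    | qa_prompt
    | llm
)
```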
For now, I am just getting the final response from the chain, which is:
```python
response = rag_chain.invoke(({"question": message, "chat_history": memory.get_history()}))
``` | printing intermediate output from RAG chains | https://api.github.com/repos/langchain-ai/langchain/issues/15800/comments | 3 | 2024-01-10T05:52:27Z | 2024-01-11T01:16:50Z | https://github.com/langchain-ai/langchain/issues/15800 | 2,073,641,136 | 15,800 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Please look at the example below. I used ChatPromptTemplate to chat with GPT, and the output of the model always has an "AI: " prefix; how do I remove it?
```python
def chat(self, messages):
    history = [("system", SYSTEM)]
    for message in messages:
        if message["role"] == "user":
            history.append(("human", message["content"]))
        else:
            history.append(("ai", message["content"]))
    prompt = ChatPromptTemplate.from_messages(history)
    chat_chain = prompt | self.model
    res = chat_chain.stream({})
    return res
```

### Suggestion:
_No response_ | Issue: How to use ChatPromptTemplate? | https://api.github.com/repos/langchain-ai/langchain/issues/15797/comments | 5 | 2024-01-10T05:00:37Z | 2024-07-10T16:05:40Z | https://github.com/langchain-ai/langchain/issues/15797 | 2,073,590,016 | 15,797 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am developing in a Colab environment and I have a Typing_Extensions issue.
Package Version
-------------------------------- ---------------------
absl-py 1.4.0
aiohttp 3.9.1
aiosignal 1.3.1
alabaster 0.7.13
albumentations 1.3.1
altair 4.2.2
anyio 3.7.1
appdirs 1.4.4
argon2-cffi 23.1.0
argon2-cffi-bindings 21.2.0
array-record 0.5.0
arviz 0.15.1
astropy 5.3.4
astunparse 1.6.3
async-timeout 4.0.3
atpublic 4.0
attrs 23.2.0
audioread 3.0.1
autograd 1.6.2
Babel 2.14.0
backcall 0.2.0
backoff 2.2.1
beautifulsoup4 4.11.2
bidict 0.22.1
bigframes 0.18.0
bleach 6.1.0
blinker 1.4
blis 0.7.11
blosc2 2.0.0
bokeh 3.3.2
bqplot 0.12.42
branca 0.7.0
build 1.0.3
CacheControl 0.13.1
cachetools 5.3.2
catalogue 2.0.10
certifi 2023.11.17
cffi 1.16.0
chardet 5.2.0
charset-normalizer 3.3.2
chex 0.1.7
click 8.1.7
click-plugins 1.1.1
cligj 0.7.2
cloudpickle 2.2.1
cmake 3.27.9
cmdstanpy 1.2.0
cohere 4.41
colorcet 3.0.1
colorlover 0.3.0
colour 0.1.5
community 1.0.0b1
confection 0.1.4
cons 0.4.6
contextlib2 21.6.0
contourpy 1.2.0
cryptography 41.0.7
cufflinks 0.17.3
cupy-cuda12x 12.2.0
cvxopt 1.3.2
cvxpy 1.3.2
cycler 0.12.1
cymem 2.0.8
Cython 3.0.7
dask 2023.8.1
dataclasses-json 0.6.3
datascience 0.17.6
db-dtypes 1.2.0
dbus-python 1.2.18
debugpy 1.6.6
decorator 4.4.2
defusedxml 0.7.1
diskcache 5.6.3
distributed 2023.8.1
distro 1.7.0
dlib 19.24.2
dm-tree 0.1.8
docutils 0.18.1
dopamine-rl 4.0.6
duckdb 0.9.2
earthengine-api 0.1.384
easydict 1.11
ecos 2.0.12
editdistance 0.6.2
eerepr 0.0.4
en-core-web-sm 3.6.0
entrypoints 0.4
et-xmlfile 1.1.0
etils 1.6.0
etuples 0.3.9
exceptiongroup 1.2.0
fastai 2.7.13
fastavro 1.9.3
fastcore 1.5.29
fastdownload 0.0.7
fastjsonschema 2.19.1
fastprogress 1.0.3
fastrlock 0.8.2
filelock 3.13.1
fiona 1.9.5
firebase-admin 5.3.0
Flask 2.2.5
flatbuffers 23.5.26
flax 0.7.5
folium 0.14.0
fonttools 4.47.0
frozendict 2.4.0
frozenlist 1.4.1
fsspec 2023.6.0
future 0.18.3
gast 0.5.4
gcsfs 2023.6.0
GDAL 3.4.3
gdown 4.6.6
geemap 0.30.0
gensim 4.3.2
geocoder 1.38.1
geographiclib 2.0
geopandas 0.13.2
geopy 2.3.0
gin-config 0.5.0
glob2 0.7
google 2.0.3
google-ai-generativelanguage 0.4.0
google-api-core 2.11.1
google-api-python-client 2.84.0
google-auth 2.17.3
google-auth-httplib2 0.1.1
google-auth-oauthlib 1.2.0
google-cloud-aiplatform 1.38.1
google-cloud-bigquery 3.12.0
google-cloud-bigquery-connection 1.12.1
google-cloud-bigquery-storage 2.24.0
google-cloud-core 2.3.3
google-cloud-datastore 2.15.2
google-cloud-firestore 2.11.1
google-cloud-functions 1.13.3
google-cloud-iam 2.13.0
google-cloud-language 2.9.1
google-cloud-resource-manager 1.11.0
google-cloud-storage 2.8.0
google-cloud-translate 3.11.3
google-colab 1.0.0
google-crc32c 1.5.0
google-generativeai 0.3.2
google-pasta 0.2.0
google-resumable-media 2.7.0
googleapis-common-protos 1.62.0
googledrivedownloader 0.4
graphviz 0.20.1
greenlet 3.0.3
grpc-google-iam-v1 0.13.0
grpcio 1.60.0
grpcio-status 1.48.2
gspread 3.4.2
gspread-dataframe 3.3.1
gym 0.25.2
gym-notices 0.0.8
h11 0.14.0
h5netcdf 1.3.0
h5py 3.9.0
holidays 0.40
holoviews 1.17.1
html5lib 1.1
httpcore 1.0.2
httpimport 1.3.1
httplib2 0.22.0
httpx 0.26.0
huggingface-hub 0.20.2
humanize 4.7.0
hyperopt 0.2.7
ibis-framework 7.1.0
idna 3.6
imageio 2.31.6
imageio-ffmpeg 0.4.9
imagesize 1.4.1
imbalanced-learn 0.10.1
imgaug 0.4.0
importlib-metadata 6.11.0
importlib-resources 6.1.1
imutils 0.5.4
inflect 7.0.0
iniconfig 2.0.0
install 1.3.5
intel-openmp 2023.2.3
ipyevents 2.0.2
ipyfilechooser 0.6.0
ipykernel 5.5.6
ipyleaflet 0.18.1
ipython 7.34.0
ipython-genutils 0.2.0
ipython-sql 0.5.0
ipytree 0.2.2
ipywidgets 7.7.1
itsdangerous 2.1.2
jax 0.4.23
jaxlib 0.4.23+cuda12.cudnn89
jeepney 0.7.1
jieba 0.42.1
Jinja2 3.1.2
joblib 1.3.2
jsonpatch 1.33
jsonpickle 3.0.2
jsonpointer 2.4
jsonschema 4.19.2
jsonschema-specifications 2023.12.1
jupyter-client 6.1.12
jupyter-console 6.1.0
jupyter_core 5.7.0
jupyter-server 1.24.0
jupyterlab_pygments 0.3.0
jupyterlab-widgets 3.0.9
kaggle 1.5.16
kagglehub 0.1.4
keras 2.15.0
keyring 23.5.0
kiwisolver 1.4.5
langchain 0.1.0
langchain-community 0.0.11
langchain-core 0.1.8
langcodes 3.3.0
langsmith 0.0.79
launchpadlib 1.10.16
lazr.restfulclient 0.14.4
lazr.uri 1.0.6
lazy_loader 0.3
libclang 16.0.6
librosa 0.10.1
lida 0.0.10
lightgbm 4.1.0
linkify-it-py 2.0.2
llmx 0.0.15a0
llvmlite 0.41.1
locket 1.0.0
logical-unification 0.4.6
lxml 4.9.4
malloy 2023.1067
Markdown 3.5.1
markdown-it-py 3.0.0
MarkupSafe 2.1.3
marshmallow 3.20.2
matplotlib 3.7.1
matplotlib-inline 0.1.6
matplotlib-venn 0.11.9
mdit-py-plugins 0.4.0
mdurl 0.1.2
miniKanren 1.0.3
missingno 0.5.2
mistune 0.8.4
mizani 0.9.3
mkl 2023.2.0
ml-dtypes 0.2.0
mlxtend 0.22.0
more-itertools 10.1.0
moviepy 1.0.3
mpmath 1.3.0
msgpack 1.0.7
multidict 6.0.4
multipledispatch 1.0.0
multitasking 0.0.11
murmurhash 1.0.10
music21 9.1.0
mypy-extensions 1.0.0
natsort 8.4.0
nbclassic 1.0.0
nbclient 0.9.0
nbconvert 6.5.4
nbformat 5.9.2
nest-asyncio 1.5.8
networkx 3.2.1
nibabel 4.0.2
nltk 3.8.1
notebook 6.5.5
notebook_shim 0.2.3
numba 0.58.1
numexpr 2.8.8
numpy 1.23.5
oauth2client 4.1.3
oauthlib 3.2.2
openai 1.7.0
opencv-contrib-python 4.8.0.76
opencv-python 4.8.0.76
opencv-python-headless 4.9.0.80
openpyxl 3.1.2
opt-einsum 3.3.0
optax 0.1.7
orbax-checkpoint 0.4.4
osqp 0.6.2.post8
packaging 23.2
pandas 1.5.3
pandas-datareader 0.10.0
pandas-gbq 0.19.2
pandas-stubs 1.5.3.230304
pandocfilters 1.5.0
panel 1.3.6
param 2.0.1
parso 0.8.3
parsy 2.1
partd 1.4.1
pathlib 1.0.1
pathy 0.10.3
patsy 0.5.6
peewee 3.17.0
pexpect 4.9.0
pickleshare 0.7.5
Pillow 9.4.0
pins 0.8.4
pip 23.3.2
pip-tools 6.13.0
platformdirs 4.1.0
plotly 5.15.0
plotnine 0.12.4
pluggy 1.3.0
polars 0.17.3
pooch 1.8.0
portpicker 1.5.2
prefetch-generator 1.0.3
preshed 3.0.9
prettytable 3.9.0
proglog 0.1.10
progressbar2 4.2.0
prometheus-client 0.19.0
promise 2.3
prompt-toolkit 3.0.43
prophet 1.1.5
proto-plus 1.23.0
protobuf 3.20.3
psutil 5.9.5
psycopg2 2.9.9
ptyprocess 0.7.0
py-cpuinfo 9.0.0
py4j 0.10.9.7
pyarrow 10.0.1
pyarrow-hotfix 0.6
pyasn1 0.5.1
pyasn1-modules 0.3.0
pycocotools 2.0.7
pycparser 2.21
pyct 0.5.0
pydantic 1.10.13
pydata-google-auth 1.8.2
pydot 1.4.2
pydot-ng 2.0.0
pydotplus 2.0.2
PyDrive 1.3.1
PyDrive2 1.6.3
pyerfa 2.0.1.1
pygame 2.5.2
Pygments 2.16.1
PyGObject 3.42.1
PyJWT 2.3.0
pymc 5.7.2
pymystem3 0.2.0
PyOpenGL 3.1.7
pyOpenSSL 23.3.0
pyparsing 3.1.1
pyperclip 1.8.2
pyproj 3.6.1
pyproject_hooks 1.0.0
pyshp 2.3.1
PySocks 1.7.1
pytensor 2.14.2
pytest 7.4.4
python-apt 0.0.0
python-box 7.1.1
python-dateutil 2.8.2
python-louvain 0.16
python-slugify 8.0.1
python-utils 3.8.1
pytz 2023.3.post1
pyviz_comms 3.0.0
PyWavelets 1.5.0
PyYAML 6.0.1
pyzmq 23.2.1
qdldl 0.1.7.post0
qudida 0.0.4
ratelim 0.1.6
referencing 0.32.0
regex 2023.6.3
requests 2.31.0
requests-oauthlib 1.3.1
requirements-parser 0.5.0
rich 13.7.0
rpds-py 0.16.2
rpy2 3.4.2
rsa 4.9
safetensors 0.4.1
scikit-image 0.19.3
scikit-learn 1.2.2
scipy 1.11.4
scooby 0.9.2
scs 3.2.4.post1
seaborn 0.12.2
SecretStorage 3.3.1
Send2Trash 1.8.2
setuptools 67.7.2
shapely 2.0.2
six 1.16.0
sklearn-pandas 2.2.0
smart-open 6.4.0
sniffio 1.3.0
snowballstemmer 2.2.0
sortedcontainers 2.4.0
soundfile 0.12.1
soupsieve 2.5
soxr 0.3.7
spacy 3.6.1
spacy-legacy 3.0.12
spacy-loggers 1.0.5
Sphinx 5.0.2
sphinxcontrib-applehelp 1.0.7
sphinxcontrib-devhelp 1.0.5
sphinxcontrib-htmlhelp 2.0.4
sphinxcontrib-jsmath 1.0.1
sphinxcontrib-qthelp 1.0.6
sphinxcontrib-serializinghtml 1.1.9
SQLAlchemy 2.0.24
sqlglot 19.9.0
sqlparse 0.4.4
srsly 2.4.8
stanio 0.3.0
statsmodels 0.14.1
sympy 1.12
tables 3.8.0
tabulate 0.9.0
tbb 2021.11.0
tblib 3.0.0
tenacity 8.2.3
tensorboard 2.15.1
tensorboard-data-server 0.7.2
tensorflow 2.15.0
tensorflow-datasets 4.9.4
tensorflow-estimator 2.15.0
tensorflow-gcs-config 2.15.0
tensorflow-hub 0.15.0
tensorflow-io-gcs-filesystem 0.35.0
tensorflow-metadata 1.14.0
tensorflow-probability 0.23.0
tensorstore 0.1.45
termcolor 2.4.0
terminado 0.18.0
text-unidecode 1.3
textblob 0.17.1
tf-slim 1.1.0
thinc 8.1.12
threadpoolctl 3.2.0
tifffile 2023.12.9
tiktoken 0.5.2
tinycss2 1.2.1
tokenizers 0.15.0
toml 0.10.2
tomli 2.0.1
toolz 0.12.0
torch 2.1.0+cu121
torchaudio 2.1.0+cu121
torchdata 0.7.0
torchsummary 1.5.1
torchtext 0.16.0
torchvision 0.16.0+cu121
tornado 6.3.2
tqdm 4.66.1
traitlets 5.7.1
traittypes 0.2.1
transformers 4.35.2
triton 2.1.0
tweepy 4.14.0
typer 0.9.0
types-pytz 2023.3.1.1
types-setuptools 69.0.0.20240106
typing_extensions 4.7.0
typing-inspect 0.9.0
tzlocal 5.2
uc-micro-py 1.0.2
uritemplate 4.1.1
urllib3 2.0.7
vega-datasets 0.9.0
wadllib 1.3.6
wasabi 1.1.2
wcwidth 0.2.12
webcolors 1.13
webencodings 0.5.1
websocket-client 1.7.0
Werkzeug 3.0.1
wheel 0.42.0
widgetsnbextension 3.6.6
wordcloud 1.9.3
wrapt 1.14.1
xarray 2023.7.0
xarray-einstats 0.6.0
xgboost 2.0.3
xlrd 2.0.1
xxhash 3.4.1
xyzservices 2023.10.1
yarl 1.9.4
yellowbrick 1.5
yfinance 0.2.33
zict 3.0.0
zipp 3.17.0
------------------------------------------------------------------------------------------------------------------------------------------
from langchain_community.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.embeddings import OpenAIEmbeddings
from langchain.vectorstores.chroma import Chroma
**The error occurs on the following line:**
**embeddings = OpenAIEmbeddings()**
emb = embeddings.embed_query("beef dishes")
#print(emb)
text_splitter = CharacterTextSplitter(
    separator="\n",
    chunk_size=100,
    chunk_overlap=0
)
loader = TextLoader("/content/drive/MyDrive/food.txt", encoding='utf-8')
#loader = TextLoader("facts.txt")
docs = loader.load_and_split(
    text_splitter=text_splitter
)
db = Chroma(embedding_function=embeddings)
db.add_documents(docs, persist_directory="emb")
results = db.similarity_search_with_score("looking for beef dishes?")
for result in results:
    print("\n")
    print(result[1])
    print(result[0].page_content)
-----------------------------------------------------------------------------------------
**ImportError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/langchain_community/embeddings/openai.py](https://localhost:8080/#) in validate_environment(cls, values)
326 try:
--> 327 import openai
328 except ImportError:
10 frames
ImportError: cannot import name 'Iterator' from 'typing_extensions' (/usr/local/lib/python3.10/dist-packages/typing_extensions.py)
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/langchain_community/embeddings/openai.py](https://localhost:8080/#) in validate_environment(cls, values)
327 import openai
328 except ImportError:
--> 329 raise ImportError(
330 "Could not import openai python package. "
331 "Please install it with `pip install openai`."**
ImportError: Could not import openai python package. Please install it with `pip install openai`.
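For context: the real failure is the first `ImportError` above — `typing_extensions` on this Colab runtime is 4.7.0 (see the package list), and the installed `openai` 1.7.0 appears to need a newer `typing_extensions` that re-exports `Iterator`. That is my assumption, not something I have confirmed. A possible fix to try from a notebook cell, followed by a runtime restart:
```python
%pip install -U typing_extensions openai
# then: Runtime -> Restart runtime, and re-run the cells above
```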
### Suggestion:
_No response_ | Issue: ImportError in Langchain Community Library When Importing OpenAI Package Due to Typing_Extensions Issue | https://api.github.com/repos/langchain-ai/langchain/issues/15795/comments | 1 | 2024-01-10T03:30:46Z | 2024-04-17T16:33:20Z | https://github.com/langchain-ai/langchain/issues/15795 | 2,073,517,905 | 15,795 |
[
"hwchase17",
"langchain"
] | ### System Info
```
end_response = chain.run(
    input=input["input"],
    question=input["question"],
    callbacks=[StreamingHandler()],
    tags=tags,
)
```
`StreamingHandler` is a subclass of the langchain class `BaseCallbackHandler` and overrides its methods:
```
def on_llm_new_token(self, token: str, **kwargs) -> None:
    if token:
        self.queue_event(event_data=token)
```
With a regular `LLMChain`:
```
conv_chain = LLMChain(
    llm=llm,
    memory=memory,
    prompt=chain_prompt,
    verbose=True,
)
```
this `on_llm_new_token` method is invoked with each new token as it is generated.
However, with create_structured_output_chain, it seems to get invoked with empty tokens each time:
```
conv_chain = create_structured_output_chain(
    output_schema=APydanticClass,
    llm=llm,
    prompt=chain_prompt,
    verbose=True,
)
```
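My working hypothesis (not confirmed): `create_structured_output_chain` makes the model answer through an OpenAI function call, so the streamed text arrives in the `function_call` arguments rather than in the message content, which would explain why `token` is always empty. A sketch of how I'd inspect that, assuming a langchain version where `on_llm_new_token` also receives a `chunk` keyword argument:
```
def on_llm_new_token(self, token: str, **kwargs) -> None:
    if token:
        self.queue_event(event_data=token)
        return
    chunk = kwargs.get("chunk")  # ChatGenerationChunk on recent versions (assumption)
    message = getattr(chunk, "message", None)
    if message is not None:
        args_piece = message.additional_kwargs.get("function_call", {}).get("arguments", "")
        if args_piece:
            self.queue_event(event_data=args_piece)
```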
### Who can help?
@agola11 seems the right person to tag 🙏
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Use create_structured_output_chain with a pydantic schema
2. Attach a callback with on_llm_new_token overriden
3. on_llm_new_token gets invoked with empty tokens.
### Expected behavior
Tokens streamed back in the JSON format of the requested schema. E.g. if the schema is:
```
class XYZ(BaseModel):
    matches: Optional[List[str]] = Field(
        default=None, description="abc"
    )
    not_matches: Optional[List[str]] = Field(
        default=None,
        description="def",
    )
```
I'd expect it to be streamed back token by token or even category by category. | create_structured_output_chain doesn't invoke the given callback and on_llm_new_token with tokens | https://api.github.com/repos/langchain-ai/langchain/issues/15790/comments | 2 | 2024-01-10T02:43:26Z | 2024-04-18T16:21:24Z | https://github.com/langchain-ai/langchain/issues/15790 | 2,073,482,807 | 15,790 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.352
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.retrievers import (
    GoogleVertexAIMultiTurnSearchRetriever,
    GoogleVertexAISearchRetriever,
    GoogleCloudEnterpriseSearchRetriever
)
PROJECT_ID = "my_project_id"
SEARCH_ENGINE_ID = "I tried both for datastore_id and app_id at Vertex Search"
LOCATION_ID = "us"
retriever = GoogleCloudEnterpriseSearchRetriever(
    project_id=PROJECT_ID,
    search_engine_id=SEARCH_ENGINE_ID,
    location_id=LOCATION_ID,
    max_documents=3
)
while 1:
    message = input()
    result = retriever.get_relevant_documents(message)
    for doc in result:
        print(doc)
```
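For reference, the failing resource path in the error below ends in `dataStores/['datastore_id']`, which looks as if a Python list was interpolated into the datastore name; a sketch of the call with a plain-string ID (the value is a placeholder, not my real ID):
```python
SEARCH_ENGINE_ID = "my-datastore-id"   # plain string, not ["my-datastore-id"]

retriever = GoogleCloudEnterpriseSearchRetriever(
    project_id=PROJECT_ID,
    search_engine_id=SEARCH_ENGINE_ID,
    location_id=LOCATION_ID,
    max_documents=3,
)
```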
### Expected behavior
I expected it to work with the defined datastore, but it returned the following error:
```
google.api_core.exceptions.NotFound: 404 DataStore projects/500618827687/locations/us/collections/default_collection/dataStores/['datastore_id'] not found
``` | GoogleCloudEnterpriseSearchRetriever returned 'datastore not found' error even with the 'us' configurations | https://api.github.com/repos/langchain-ai/langchain/issues/15785/comments | 7 | 2024-01-10T00:05:52Z | 2024-01-22T23:17:32Z | https://github.com/langchain-ai/langchain/issues/15785 | 2,073,361,082 | 15,785 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am currently utilizing LangChain version 0.0.335 in my FastAPI Python application. In the main.py file, the following code snippet is implemented:
main.py
```
streaming_model = ChatOpenAI(
model_name="gpt-4",
temperature=0.1,
openai_api_key=os.getenv("OPENAI_API_KEY2"),
)
non_streaming_model = ChatOpenAI(
model_name="gpt-4",
temperature=0.1,
openai_api_key=os.getenv("OPENAI_API_KEY2"),
)
retriever = vector_store.as_retriever()
sales_persona_prompt = PromptTemplate.from_template(SALES_PERSONA_PROMPT)
condense_prompt = PromptTemplate.from_template(CONDENSE_PROMPT)
chain = ConversationalRetrievalChain.from_llm(
llm=streaming_model,
retriever=retriever,
condense_question_prompt=condense_prompt,
condense_question_llm=non_streaming_model,
combine_docs_chain_kwargs={"prompt": sales_persona_prompt},
verbose=True,
)
return chain(
{"question": sanitized_question, "chat_history": conversation_history}
)
except Exception as e:
return {"error": str(e)}
```
However, this implementation throws the following error:
` "error": "2 validation errors for LLMChain\nllm\n instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)\nllm\n instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)"`
Expected Behavior:
I expected the code to execute without errors. The issue seems to be related to the expected types for the llm parameter in the ConversationalRetrievalChain.from_llm method.
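A quick check I would run to narrow this down (a sketch; my assumption is that this error usually means the `ChatOpenAI` instance and the chain classes come from two different or conflicting langchain installations, or a pydantic mismatch):
```python
import langchain
from langchain.chat_models import ChatOpenAI
from langchain.schema.runnable import Runnable

print(langchain.__version__)
llm = ChatOpenAI(model_name="gpt-4", temperature=0.1)
print(isinstance(llm, Runnable))  # expected True on a consistent install
```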
Request for Assistance:
I kindly request assistance in understanding and resolving this issue. Any insights, recommendations, or specific steps to address the error would be highly appreciated.
Thank you for your time and support.
### Suggestion:
_No response_ | Issue with LangChain v0.0.335 - Error in ChatOpenAI Callbacks Expected Runnable Instances | https://api.github.com/repos/langchain-ai/langchain/issues/15779/comments | 4 | 2024-01-09T21:13:36Z | 2024-03-02T01:26:21Z | https://github.com/langchain-ai/langchain/issues/15779 | 2,073,178,041 | 15,779 |
[
"hwchase17",
"langchain"
] | ### System Info
**Platform**: Ubuntu 22.04
**Python**: 3.10
**Langchain**:
langchain 0.1.0
langchain-community 0.0.10
langchain-core 0.1.8
langchain-openai 0.0.2
langsmith 0.0.78
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I used this code:
```py
out = chain.batch(entries, config={"max_concurrency": 3})
```
I can see in LangSmith that more than 12 requests were made in parallel, causing a rate-limit failure with the OpenAI API (TPM).
```
RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-3.5-turbo-1106 in organization org-W83OoPhCAmgMx2r35aLyv9Tr on tokens per min (TPM): Limit 60000, Used 54134, Requested 6465. Please try again in 599ms. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}
```
### Expected behavior
I would expect max_concurrency to limit the amount of concurrency used, but actually that doesn't seem to be the case.
Batch doesn't seem to limit concurrency at all.
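To measure the in-flight request count without relying on LangSmith, a small probe I'd attach through the callbacks config (a sketch — the class and names are mine, not part of LangChain):
```py
import threading
from langchain_core.callbacks import BaseCallbackHandler

class ConcurrencyProbe(BaseCallbackHandler):
    """Tracks the peak number of model calls in flight."""
    def __init__(self):
        self._lock = threading.Lock()
        self._in_flight = 0
        self.peak = 0

    def _inc(self):
        with self._lock:
            self._in_flight += 1
            self.peak = max(self.peak, self._in_flight)

    def _dec(self):
        with self._lock:
            self._in_flight -= 1

    def on_llm_start(self, serialized, prompts, **kwargs):
        self._inc()

    def on_chat_model_start(self, serialized, messages, **kwargs):
        self._inc()

    def on_llm_end(self, response, **kwargs):
        self._dec()

    def on_llm_error(self, error, **kwargs):
        self._dec()

probe = ConcurrencyProbe()
out = chain.batch(entries, config={"max_concurrency": 3, "callbacks": [probe]})
print(probe.peak)  # I would expect this to stay <= 3
```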
This code works perfectly:
```py
from concurrent.futures import ThreadPoolExecutor
def batch_chain(inputs: list) -> list:
    with ThreadPoolExecutor(max_workers=3) as executor:
        return list(executor.map(chain.invoke, inputs))
out = batch_chain(entries)
``` | chain.batch() doesn't use config options properly (max concurrency) | https://api.github.com/repos/langchain-ai/langchain/issues/15767/comments | 9 | 2024-01-09T18:34:52Z | 2024-06-11T15:43:01Z | https://github.com/langchain-ai/langchain/issues/15767 | 2,072,940,890 | 15,767 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
How can EnsembleRetriever be called asynchronously? I have a dataset with ~1k questions and I wish to find the documents that can best answer each of them. However, calling it sequentially takes a lot of time. Can I run the retriever in parallel for all rows (or chunks of it)? Or is there a different way to optimise the run times?
I'm calling it like this now, but it crashes with a segmentation fault after getting stuck for an hour:
```
import asyncio
queries = [query1, query2, ...]
async def process_query(query):
    result = await ensemble_retriever.aget_relevant_documents(query)
    return result

async def process_all_queries():
    tasks = [process_query(query) for query in queries]
    results = await asyncio.gather(*tasks)
    return results

results = asyncio.run(process_all_queries())
```
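For reference, a sketch of what I would try instead — bounding the number of concurrent retrievals with a semaphore rather than launching all ~1k coroutines at once (the cap of 10 is an arbitrary starting point, and this assumes the underlying retrievers actually support async I/O):
```
import asyncio

semaphore = asyncio.Semaphore(10)  # arbitrary cap; tune for your machine

async def process_query(query):
    async with semaphore:
        return await ensemble_retriever.aget_relevant_documents(query)

async def process_all_queries():
    return await asyncio.gather(*(process_query(q) for q in queries))

results = asyncio.run(process_all_queries())
```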
### Suggestion:
_No response_ | Async with EnsembleRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/15764/comments | 6 | 2024-01-09T17:13:31Z | 2024-04-18T17:00:46Z | https://github.com/langchain-ai/langchain/issues/15764 | 2,072,810,448 | 15,764 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am developing a Streamlit application where I aim to stream the agent's responses to the UI. Previously, I was able to achieve this with chains via a simple call to `chain.stream()`. However, after switching to agents, I cannot stream their responses in the same way, given that they are implemented in LCEL.
I've tried to use `StreamingStdOutCallbackHandler`, but the response gets streamed to the terminal only and not to the UI.
Any insights, guidance, or fixes regarding this issue would be greatly appreciated
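For context, the closest I have found so far is the Streamlit callback handler, which renders the agent's intermediate output into a Streamlit container; a sketch of how I understand it is meant to be wired up (`agent_executor` and `prompt` are my own objects, and I may be using it wrong):
```python
import streamlit as st
from langchain_community.callbacks import StreamlitCallbackHandler

st_callback = StreamlitCallbackHandler(st.container())
response = agent_executor.invoke(
    {"input": prompt},
    {"callbacks": [st_callback]},
)
st.write(response["output"])
```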
### Suggestion:
_No response_ | Issue: Streaming agent's response to Streamlit UI | https://api.github.com/repos/langchain-ai/langchain/issues/15747/comments | 1 | 2024-01-09T13:06:25Z | 2024-01-09T14:42:49Z | https://github.com/langchain-ai/langchain/issues/15747 | 2,072,340,407 | 15,747 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Everything was working fine, but now I'm suddenly receiving all sorts of LangChain deprecation warnings.
I installed the langchain_openai and langchain_community packages and replaced all the imports with the ones suggested in the warnings. That went well, but now I'm stuck on this issue.
The error is:
`/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:115: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(`
In my code I've replaced every `run` call with `invoke`, but I don't know why this warning still comes up.
I'm also using a LangChain summarizer, and my usage matches the documentation exactly.
I don't know how to get rid of that deprecation warning now. I don't want to suppress the warning; I want to resolve it so it won't cause any issues in the future.
**This is the only code I have that is related to LangChain:**
```
# Langchain Libraries
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.docstore.document import Document
from langchain_community.callbacks import get_openai_callback
from langchain.text_splitter import TokenTextSplitter
from langchain.chains.summarize import load_summarize_chain
from langchain_core.output_parsers import StrOutputParser
# ------------------------------------------------------------
# General ChatGPT function that's required for all the Call-type Prompts
def chatgpt_function(prompt, transcript):
model_kwargs={"seed":235, "top_p":0.01}
llm = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0, model_kwargs=model_kwargs, max_tokens=tokens)
template = """
{prompt}
Call Transcript: ```{text}```
"""
prompt_main = PromptTemplate(
input_variables=["prompt", "text"],
template=template,)
with get_openai_callback() as cb:
# llm_chain = LLMChain(llm=llm, prompt=prompt_main)
output_parser = StrOutputParser()
llm_chain = prompt_main | llm | output_parser
all_text = str(template) + str(prompt) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + tokens)
# print("Total Tokens:",threshold)
if int(threshold) <= 4000:
chatgpt_output = llm_chain.invoke({"prompt":prompt, "text":transcript})
else:
transcript_ = token_limiter(transcript)
chatgpt_output = llm_chain.invoke({"prompt":prompt, "text":transcript_})
return chatgpt_output
# -------------------------------------------------------
# Function to get refined summary if Transcript is long
def token_limiter(transcript):
text_splitter = TokenTextSplitter(chunk_size=3000, chunk_overlap=200)
texts = text_splitter.split_text(transcript)
docs = [Document(page_content=text) for text in texts]
question_prompt_template = """
I'm providing you a refined summary of a call transcript, enclosed in triple backticks. Summarize it further.
Call Transcript: ```{text}```
Provide me a summary transcript. Do not add any title/heading like "summary" or anything else; just give the summary text.
"""
question_prompt = PromptTemplate(
template=question_prompt_template, input_variables=["text"]
)
refine_prompt_template = """
Write a summary of the following text enclosed in triple backticks (```).
```{text}```
"""
refine_prompt = PromptTemplate(
template=refine_prompt_template, input_variables=["text"]
)
llm = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0, max_tokens=800)
refine_chain = load_summarize_chain(
llm,
chain_type="refine",
question_prompt=question_prompt,
refine_prompt=refine_prompt,
return_intermediate_steps=True,
)
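# NOTE: the call below invokes the chain object directly (i.e. Chain.__call__), which is
# exactly what LangChain 0.1.0 deprecates in favour of .invoke(). My assumption is that
# rewriting it as
#   summary_refine = refine_chain.invoke({"input_documents": docs}, return_only_outputs=True)
# would silence the warning without changing behaviour.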
summary_refine = refine_chain({"input_documents": docs}, return_only_outputs=True)
return summary_refine['output_text']
```
### Suggestion:
Please let me know what I need to change in my code to get rid of that Deprecation warning. Thank you | Issue: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead. warn_deprecated( | https://api.github.com/repos/langchain-ai/langchain/issues/15741/comments | 11 | 2024-01-09T10:53:09Z | 2024-04-14T20:26:29Z | https://github.com/langchain-ai/langchain/issues/15741 | 2,072,124,775 | 15,741 |
[
"hwchase17",
"langchain"
] | ### System Info
From pyproject.toml:
python=3.11.5
crewai = "0.1.6"
langchain = '==0.0.335'
openai = '==0.28.1'
unstructured = '==0.10.25'
pyowm = '3.3.0'
tools = "^0.1.9"
wikipedia = "1.4.0"
yfinance = "0.2.33"
sec-api = "1.0.17"
tiktoken = "0.5.2"
faiss-cpu = "1.7.4"
python-dotenv = "1.0.0"
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Running any of the scripts in the crewAI repo (https://github.com/joaomdmoura/crewAI).
Running crewAI, I get the following:
Connection error caused failure to patch http://localhost:1984/runs/7fdd9cf2-4f50-4ee1-8fef-9202b07cc756 in LangSmith API. Please confirm your LANGCHAIN_ENDPOINT. ConnectionError(MaxRetryError("HTTPConnectionPool(host='localhost', port=1984): Max retries exceeded with url: /runs/7fdd9cf2-4f50-4ee1-8fef-9202b07cc756 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x130d51e10>: Failed to establish a new connection: [Errno 61] Connection refused'))"))
Connection error caused failure to post http://localhost:1984/runs in LangSmith API. Please confirm your LANGCHAIN_ENDPOINT. ConnectionError(MaxRetryError("HTTPConnectionPool(host='localhost', port=1984): Max retries exceeded with url: /runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x130d6b390>: Failed to establish a new connection: [Errno 61] Connection refused'))"))
I am not running LangSmith, nor do I have any access to it. I have tried setting the following in my .env, to no effect:
LANGCHAIN_TRACING=false
LANGCHAIN_TRACING_V2=false
LANGCHAIN_HANDLER=None
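For completeness, a variant I would also try (an assumption on my part: some of the older tracing code paths only check whether these variables are set at all, and the legacy tracer defaults to port 1984 once enabled) — removing the variables from the environment before crewAI/LangChain is imported, instead of setting them to false/None:
```python
import os

for var in ("LANGCHAIN_TRACING", "LANGCHAIN_TRACING_V2", "LANGCHAIN_HANDLER", "LANGCHAIN_ENDPOINT"):
    os.environ.pop(var, None)
```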
### Expected behavior
Dont expect to see the error reports. Note that not all users are seeing this error | Connection error caused failure to post http://localhost:1984/runs in LangSmith API. | https://api.github.com/repos/langchain-ai/langchain/issues/15739/comments | 2 | 2024-01-09T10:36:08Z | 2024-04-25T16:17:04Z | https://github.com/langchain-ai/langchain/issues/15739 | 2,072,094,761 | 15,739 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
import os
from urllib.parse import quote_plus
from langchain.vectorstores.pgvector import PGVector
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, MessagesPlaceholder, HumanMessagePromptTemplate
os.environ['OPENAI_API_KEY'] = "key"
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
CONNECTION_STRING = PGVector.connection_string_from_db_params(
driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
host="1x2.1x8.xx.xx",
port=5432,
database="Ai",
user="xxxxxxxxx",
password=quote_plus("xxxxxx@xx"),
)
vectordb = PGVector(embedding_function=embeddings,
collection_name="tmp04",
connection_string=CONNECTION_STRING,
)
prompt = ChatPromptTemplate(
messages=[
SystemMessagePromptTemplate.from_template(
"i am robot"
),
MessagesPlaceholder(variable_name="chat_history"),
HumanMessagePromptTemplate.from_template("{question}"),
]
)
retriever = vectordb.as_retriever()
llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
memory_token_limit = 1000
memory = ConversationTokenBufferMemory(
llm=llm,
prompt=prompt,
max_token_limit=int(memory_token_limit),
memory_key="chat_history",
return_messages=True,
)
qa = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=retriever,
memory=memory,
verbose=True,
)
chat_history = []
while True:
memory.load_memory_variables({})
question = input('ask:')
# question = retriever.get_relevant_documents(input('ask:'))
result = qa.run({'question': question, 'chat_history': chat_history})
print(result)
chat_history.append([f'User: {question}', f'Ai: {result}'])
print(chat_history)
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
st_history = ' '.join(map(str, chat_history))
res = embeddings.embed_query(st_history)
vectordb.add_vector(res)
select_vdb = vectordb.similarity_search(question, k=5)
print(select_vdb)
print(f'ok: {res[:4]}...')
if question.lower() == 'bye':
break
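For context on what I have understood so far (and where I may be wrong): PGVector keeps all vectors in its own `langchain_pg_embedding` table and uses `collection_name` as the logical "table" to search, so pointing the store at a specific collection would look like the sketch below. I also could not find an `add_vector` method in the docs; `add_documents` / `add_texts` appear to be the supported ways to insert data — please correct me if that is wrong.
```python
store = PGVector(
    embedding_function=embeddings,
    collection_name="tmp04",               # the logical "table" to search
    connection_string=CONNECTION_STRING,
)
store.add_documents(docs)                  # docs: list[Document]; embedded and stored
hits = store.similarity_search_with_score("my question", k=5)
```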
### Suggestion:
How can I specify my own designated table for vector search to retrieve vector data for comparison and provide a response to OpenAI for reference? Also, I noticed the prompt disappeared | How can I specify my own designated table for vector search to retrieve vector data for comparison and provide a response to OpenAI for reference? Also, I noticed the prompt disappeared | https://api.github.com/repos/langchain-ai/langchain/issues/15735/comments | 2 | 2024-01-09T08:59:07Z | 2024-04-16T16:20:31Z | https://github.com/langchain-ai/langchain/issues/15735 | 2,071,916,307 | 15,735 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
import os
from urllib.parse import quote_plus
from langchain.vectorstores.pgvector import PGVector
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, MessagesPlaceholder, HumanMessagePromptTemplate
os.environ['OPENAI_API_KEY'] = "key"
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
CONNECTION_STRING = PGVector.connection_string_from_db_params(
driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
host="1x2.1x8.xx.xx",
port=5432,
database="Ai",
user="xxxxxxxxx",
password=quote_plus("xxxxxx@xx"),
)
vectordb = PGVector(embedding_function=embeddings,
collection_name="tmp04",
connection_string=CONNECTION_STRING,
)
prompt = ChatPromptTemplate(
messages=[
SystemMessagePromptTemplate.from_template(
"i am robot"
),
MessagesPlaceholder(variable_name="chat_history"),
HumanMessagePromptTemplate.from_template("{question}"),
]
)
retriever = vectordb.as_retriever()
llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
memory_token_limit = 1000
memory = ConversationTokenBufferMemory(
llm=llm,
prompt=prompt,
max_token_limit=int(memory_token_limit),
memory_key="chat_history",
return_messages=True,
)
qa = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=retriever,
memory=memory,
verbose=True,
)
chat_history = []
while True:
memory.load_memory_variables({})
question = input('ask:')
# question = retriever.get_relevant_documents(input('提問:'))
result = qa.run({'question': question, 'chat_history': chat_history})
print(result)
chat_history.append([f'User: {question}', f'Ai: {result}'])
print(chat_history)
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
st_history = ' '.join(map(str, chat_history))
res = embeddings.embed_query(st_history)
vectordb.add_vector(res)
select_vdb = vectordb.similarity_search(question, k=5)
print(select_vdb)
print(f'ok: {res[:4]}...')
if question.lower() == 'bye':
break

### Idea or request for content:
How can I specify my own designated table for vector search to retrieve vector data for comparison and provide a response to OpenAI for reference? Also, I noticed the prompt disappeared. | How can I specify my own designated table for vector search to retrieve vector data for comparison and provide a response to OpenAI for reference? Also, I noticed the prompt disappeared. | https://api.github.com/repos/langchain-ai/langchain/issues/15734/comments | 4 | 2024-01-09T08:39:38Z | 2024-01-09T08:48:55Z | https://github.com/langchain-ai/langchain/issues/15734 | 2,071,885,054 | 15,734 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I just installed langchain 0.1.0 and according to the documentation
https://api.python.langchain.com/en/latest/_modules/langchain_openai/chat_models/azure.html#
AzureChatOpenAI should be in langchain_openai.chat_models but its instead in langchain_community.chat_models
### Idea or request for content:
_No response_ | DOC: AzureChatOpenAI in documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15733/comments | 1 | 2024-01-09T08:21:31Z | 2024-04-16T16:07:23Z | https://github.com/langchain-ai/langchain/issues/15733 | 2,071,858,324 | 15,733 |
[
"hwchase17",
"langchain"
] | ### System Info
This is a random occurrence. Maybe after I ask many questions
when it happen, Only clear the memory can recover.
the code to ask:
async for chunk in runnable.astream( #or call astream_log
question,
config
):
await res.stream_token(chunk)
error information:
2024-01-09 13:32:02 - Error in LangchainTracer.on_llm_error callback: IndexError('list index out of range')
2024-01-09 13:32:02 -
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/chainlit/utils.py", line 39, in wrapper
return await user_function(**params_values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/rag-app/main.py", line 164, in onMessage
await app.question_anwsering(message.content, False)
File "/rag-app/app.py", line 367, in question_anwsering
async for chunk in runnable.astream_log(
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 752, in astream_log
await task
File "/usr/local/lib/python3.11/asyncio/futures.py", line 290, in __await__
return self.result() # May raise too.
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/futures.py", line 203, in result
raise self._exception.with_traceback(self._exception_tb)
File "/usr/local/lib/python3.11/asyncio/tasks.py", line 267, in __step
result = coro.send(None)
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 706, in consume_astream
async for chunk in self.astream(input, config, **kwargs):
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2158, in astream
async for chunk in self.atransform(input_aiter(), config, **kwargs):
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2141, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1308, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/futures.py", line 287, in __await__
yield self # This tells Task to wait for completion.
^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/tasks.py", line 339, in __wakeup
future.result()
File "/usr/local/lib/python3.11/asyncio/futures.py", line 203, in result
raise self._exception.with_traceback(self._exception_tb)
File "/usr/local/lib/python3.11/asyncio/tasks.py", line 267, in __step
result = coro.send(None)
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2111, in _atransform
async for output in final_pipeline:
File "/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/transform.py", line 60, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1283, in _atransform_stream_with_config
final_input: Optional[Input] = await py_anext(input_for_tracing, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 62, in anext_impl
return await __anext__(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer
item = await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 806, in atransform
async for output in self.astream(final, config, **kwargs):
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 307, in astream
raise e
File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 299, in astream
assert generation is not None
^^^^^^^^^^^^^^^^^^^^^^
AssertionError
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
This is a random occurrence. Maybe after I ask many questions
when it happen, Only clear the memory can recover.
### Expected behavior
fix it | LangchainTracer.on_llm_error callback: IndexError('list index out of range') | https://api.github.com/repos/langchain-ai/langchain/issues/15732/comments | 3 | 2024-01-09T07:20:53Z | 2024-04-17T16:32:32Z | https://github.com/langchain-ai/langchain/issues/15732 | 2,071,773,809 | 15,732 |
[
"hwchase17",
"langchain"
] | ### System Info
- Langchain 0.1.0
- PHP 6 (a.k.a. Python 3.11.7)
- Windows 9 (a.k.a. Fedora 39)
<details><summary>requirements.txt</summary>
- aiohttp==3.9.1
- aiosignal==1.3.1
- annotated-types==0.6.0
- anyio==4.2.0
- argon2-cffi==23.1.0
- argon2-cffi-bindings==21.2.0
- arrow==1.3.0
- asgiref==3.7.2
- asttokens==2.4.1
- async-lru==2.0.4
- attrs==23.2.0
- Babel==2.14.0
- backoff==2.2.1
- bcrypt==4.1.2
- beautifulsoup4==4.12.2
- bleach==6.1.0
- build==1.0.3
- cachetools==5.3.2
- certifi==2023.11.17
- cffi==1.16.0
- chardet==5.2.0
- charset-normalizer==3.3.2
- chroma-hnswlib==0.7.3
- chromadb==0.4.22
- click==8.1.7
- coloredlogs==15.0.1
- comm==0.2.1
- dataclasses-json==0.6.3
- debugpy==1.8.0
- decorator==5.1.1
- defusedxml==0.7.1
- Deprecated==1.2.14
- distro==1.9.0
- docarray==0.40.0
- emoji==2.9.0
- executing==2.0.1
- fastapi==0.108.0
- fastjsonschema==2.19.1
- filelock==3.13.1
- filetype==1.2.0
- flatbuffers==23.5.26
- fqdn==1.5.1
- frozenlist==1.4.1
- fsspec==2023.12.2
- gitdb==4.0.11
- GitPython==3.1.40
- google-auth==2.26.1
- googleapis-common-protos==1.62.0
- greenlet==3.0.3
- grpcio==1.60.0
- h11==0.14.0
- httpcore==1.0.2
- httptools==0.6.1
- httpx==0.26.0
- huggingface-hub==0.20.2
- humanfriendly==10.0
- idna==3.6
- importlib-metadata==6.11.0
- importlib-resources==6.1.1
- ipykernel==6.28.0
- ipython==8.19.0
- isoduration==20.11.0
- jedi==0.19.1
- Jinja2==3.1.2
- joblib==1.3.2
- json5==0.9.14
- jsonpatch==1.33
- jsonpath-python==1.0.6
- jsonpointer==2.4
- jsonschema==4.20.0
- jsonschema-specifications==2023.12.1
- jupyter-events==0.9.0
- jupyter-lsp==2.2.1
- jupyter_client==8.6.0
- jupyter_core==5.7.0
- jupyter_server==2.12.2
- jupyter_server_terminals==0.5.1
- jupyterlab==4.0.10
- jupyterlab_pygments==0.3.0
- jupyterlab_server==2.25.2
- kubernetes==28.1.0
- langchain==0.1.0
- langchain-community==0.0.9
- langchain-core==0.1.7
- langchain-openai==0.0.2
- langdetect==1.0.9
- langsmith==0.0.77
- lxml==5.1.0
- Markdown==3.5.1
- markdown-it-py==3.0.0
- MarkupSafe==2.1.3
- marshmallow==3.20.1
- matplotlib-inline==0.1.6
- mdurl==0.1.2
- mistune==3.0.2
- mmh3==4.0.1
- monotonic==1.6
- mpmath==1.3.0
- multidict==6.0.4
- mypy-extensions==1.0.0
- nbclient==0.9.0
- nbconvert==7.14.0
- nbformat==5.9.2
- nest-asyncio==1.5.8
- nltk==3.8.1
- notebook_shim==0.2.3
- numpy==1.26.3
- oauthlib==3.2.2
- onnxruntime==1.16.3
- openai==1.6.1
- opentelemetry-api==1.22.0
- opentelemetry-exporter-otlp-proto-common==1.22.0
- opentelemetry-exporter-otlp-proto-grpc==1.22.0
- opentelemetry-instrumentation==0.43b0
- opentelemetry-instrumentation-asgi==0.43b0
- opentelemetry-instrumentation-fastapi==0.43b0
- opentelemetry-proto==1.22.0
- opentelemetry-sdk==1.22.0
- opentelemetry-semantic-conventions==0.43b0
- opentelemetry-util-http==0.43b0
- orjson==3.9.10
- overrides==7.4.0
- packaging==23.2
- pandocfilters==1.5.0
- parso==0.8.3
- pexpect==4.9.0
- platformdirs==4.1.0
- posthog==3.1.0
- prometheus-client==0.19.0
- prompt-toolkit==3.0.43
- protobuf==4.25.1
- psutil==5.9.7
- ptyprocess==0.7.0
- pulsar-client==3.4.0
- pure-eval==0.2.2
- pyasn1==0.5.1
- pyasn1-modules==0.3.0
- pycparser==2.21
- pydantic==2.5.3
- pydantic_core==2.14.6
- Pygments==2.17.2
- PyPika==0.48.9
- pyproject_hooks==1.0.0
- python-dateutil==2.8.2
- python-dotenv==1.0.0
- python-iso639==2024.1.2
- python-json-logger==2.0.7
- python-magic==0.4.27
- PyYAML==6.0.1
- pyzmq==25.1.2
- rapidfuzz==3.6.1
- referencing==0.32.1
- regex==2023.12.25
- requests==2.31.0
- requests-oauthlib==1.3.1
- rfc3339-validator==0.1.4
- rfc3986-validator==0.1.1
- rich==13.7.0
- rpds-py==0.16.2
- rsa==4.9
- Send2Trash==1.8.2
- six==1.16.0
- smmap==5.0.1
- sniffio==1.3.0
- soupsieve==2.5
- SQLAlchemy==2.0.25
- stack-data==0.6.3
- starlette==0.32.0.post1
- sympy==1.12
- tabulate==0.9.0
- tenacity==8.2.3
- terminado==0.18.0
- tiktoken==0.5.2
- tinycss2==1.2.1
- tokenizers==0.15.0
- tornado==6.4
- tqdm==4.66.1
- traitlets==5.14.1
- typer==0.9.0
- types-python-dateutil==2.8.19.20240106
- types-requests==2.31.0.20240106
- typing-inspect==0.9.0
- typing_extensions==4.9.0
- unstructured==0.11.8
- unstructured-client==0.15.2
- uri-template==1.3.0
- urllib3==1.26.18
- uvicorn==0.25.0
- uvloop==0.19.0
- watchfiles==0.21.0
- wcwidth==0.2.13
- webcolors==1.13
- webencodings==0.5.1
- websocket-client==1.7.0
- websockets==12.0
- wrapt==1.16.0
- yarl==1.9.4
- zipp==3.17.0
</details>
### Who can help?
@ey
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code sample to reproduce where `my-codebase` is a directory with a heterogeneous collection of files (.tsx, .json, .ts, .js, .md)
```
# Document loading: Load codebase from local directory
from langchain_community.document_loaders import DirectoryLoader
project_path = "my-codebase"
loader = DirectoryLoader(project_path, use_multithreading=False)
my_codebase_data = loader.load()
```
This creates the following error:
```
{
"name": "ValueError",
"message": "Detected a JSON file that does not conform to the Unstructured schema. partition_json currently only processes serialized Unstructured output.",
"stack": "---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[16], line 8
4 project_path = \"my-codebase\"
6 loader = DirectoryLoader(project_path, use_multithreading=False)
----> 8 my_codebase_data = loader.load()
File ~/Repos/chain-repo/.venv/lib64/python3.11/site-packages/langchain_community/document_loaders/directory.py:157, in DirectoryLoader.load(self)
155 else:
156 for i in items:
--> 157 self.load_file(i, p, docs, pbar)
159 if pbar:
160 pbar.close()
File ~/Repos/chain-repo/.venv/lib64/python3.11/site-packages/langchain_community/document_loaders/directory.py:106, in DirectoryLoader.load_file(self, item, path, docs, pbar)
104 logger.warning(f\"Error loading file {str(item)}: {e}\")
105 else:
--> 106 raise e
107 finally:
108 if pbar:
File ~/Repos/chain-repo/.venv/lib64/python3.11/site-packages/langchain_community/document_loaders/directory.py:100, in DirectoryLoader.load_file(self, item, path, docs, pbar)
98 try:
99 logger.debug(f\"Processing file: {str(item)}\")
--> 100 sub_docs = self.loader_cls(str(item), **self.loader_kwargs).load()
101 docs.extend(sub_docs)
102 except Exception as e:
File ~/Repos/chain-repo/.venv/lib64/python3.11/site-packages/langchain_community/document_loaders/unstructured.py:87, in UnstructuredBaseLoader.load(self)
85 def load(self) -> List[Document]:
86 \"\"\"Load file.\"\"\"
---> 87 elements = self._get_elements()
88 self._post_process_elements(elements)
89 if self.mode == \"elements\":
File ~/Repos/chain-repo/.venv/lib64/python3.11/site-packages/langchain_community/document_loaders/unstructured.py:173, in UnstructuredFileLoader._get_elements(self)
170 def _get_elements(self) -> List:
171 from unstructured.partition.auto import partition
--> 173 return partition(filename=self.file_path, **self.unstructured_kwargs)
File ~/Repos/chain-repo/.venv/lib64/python3.11/site-packages/unstructured/partition/auto.py:480, in partition(filename, content_type, file, file_filename, url, include_page_breaks, strategy, encoding, paragraph_grouper, headers, skip_infer_table_types, ssl_verify, ocr_languages, languages, detect_language_per_element, pdf_infer_table_structure, pdf_extract_images, pdf_extract_element_types, pdf_image_output_dir_path, pdf_extract_to_payload, xml_keep_tags, data_source_metadata, metadata_filename, request_timeout, hi_res_model_name, model_name, **kwargs)
478 elif filetype == FileType.JSON:
479 if not is_json_processable(filename=filename, file=file):
--> 480 raise ValueError(
481 \"Detected a JSON file that does not conform to the Unstructured schema. \"
482 \"partition_json currently only processes serialized Unstructured output.\",
483 )
484 elements = partition_json(filename=filename, file=file, **kwargs)
485 elif (filetype == FileType.XLSX) or (filetype == FileType.XLS):
ValueError: Detected a JSON file that does not conform to the Unstructured schema. partition_json currently only processes serialized Unstructured output."
}
```
### Expected behavior
To get the expected behavior, set `use_multithreading` to True:
```
loader = DirectoryLoader(project_path, use_multithreading=True)
```
Doing this loads the files without error.
Curiously, I get the same loader if I just set `silent_errors` to True:
```
loader = DirectoryLoader(project_path, use_multithreading=False, silent_errors=True)
```
In this case, the error is printed, but the execution is not halted.
Curiously, if I set `use_multithreading` to True and have `silent_errors` set to True, I get the same behaviour as for `use_multithreading=False`. This time it acknowledges that there are errors, where as if it is silent, it just ignores them and doesn't even print them.
```
loader = DirectoryLoader(project_path, use_multithreading=True, silent_errors=True)
```
### Additional thoughts
- This might need to be broken up into different issues
- I am also noticing that the `recursive` parameter is set to False by default, but it still recursively goes through each subdirectory in the directory, is this expected?. | DirectoryLoader use_multithreading inconsistent behavior between true and false (and issue with UnstructuredFileLoader and .json files) | https://api.github.com/repos/langchain-ai/langchain/issues/15731/comments | 2 | 2024-01-09T06:38:32Z | 2024-07-23T16:07:11Z | https://github.com/langchain-ai/langchain/issues/15731 | 2,071,722,976 | 15,731 |
[
"hwchase17",
"langchain"
] | ### System Info
Ubuntu 20.04
I got this while reading a book pdf with extract_images=True.
[113](https://file+.vscode-resource.vscode-cdn.net/home/karan/kj_workspace/kj_argentelm/risk_assessment/backend/~/anaconda3/envs/python39/lib/python3.9/site-packages/langchain_community/document_loaders/parsers/pdf.py:113) if xObject[obj]["[/Filter](https://file+.vscode-resource.vscode-cdn.net/Filter)"][1:] in _PDF_FILTER_WITHOUT_LOSS:
[114](https://file+.vscode-resource.vscode-cdn.net/home/karan/kj_workspace/kj_argentelm/risk_assessment/backend/~/anaconda3/envs/python39/lib/python3.9/site-packages/langchain_community/document_loaders/parsers/pdf.py:114) height, width = xObject[obj]["[/Height](https://file+.vscode-resource.vscode-cdn.net/Height)"], xObject[obj]["[/Width](https://file+.vscode-resource.vscode-cdn.net/Width)"]
[116](https://file+.vscode-resource.vscode-cdn.net/home/karan/kj_workspace/kj_argentelm/risk_assessment/backend/~/anaconda3/envs/python39/lib/python3.9/site-packages/langchain_community/document_loaders/parsers/pdf.py:116) images.append(
--> [117](https://file+.vscode-resource.vscode-cdn.net/home/karan/kj_workspace/kj_argentelm/risk_assessment/backend/~/anaconda3/envs/python39/lib/python3.9/site-packages/langchain_community/document_loaders/parsers/pdf.py:117) np.frombuffer(xObject[obj].get_data(), dtype=np.uint8).reshape(
[118](https://file+.vscode-resource.vscode-cdn.net/home/karan/kj_workspace/kj_argentelm/risk_assessment/backend/~/anaconda3/envs/python39/lib/python3.9/site-packages/langchain_community/document_loaders/parsers/pdf.py:118) height, width, -1
[119](https://file+.vscode-resource.vscode-cdn.net/home/karan/kj_workspace/kj_argentelm/risk_assessment/backend/~/anaconda3/envs/python39/lib/python3.9/site-packages/langchain_community/document_loaders/parsers/pdf.py:119) )
[120](https://file+.vscode-resource.vscode-cdn.net/home/karan/kj_workspace/kj_argentelm/risk_assessment/backend/~/anaconda3/envs/python39/lib/python3.9/site-packages/langchain_community/document_loaders/parsers/pdf.py:120) )
[121](https://file+.vscode-resource.vscode-cdn.net/home/karan/kj_workspace/kj_argentelm/risk_assessment/backend/~/anaconda3/envs/python39/lib/python3.9/site-packages/langchain_community/document_loaders/parsers/pdf.py:121) elif xObject[obj]["[/Filter](https://file+.vscode-resource.vscode-cdn.net/Filter)"][1:] in _PDF_FILTER_WITH_LOSS:
[122](https://file+.vscode-resource.vscode-cdn.net/home/karan/kj_workspace/kj_argentelm/risk_assessment/backend/~/anaconda3/envs/python39/lib/python3.9/site-packages/langchain_community/document_loaders/parsers/pdf.py:122) images.append(xObject[obj].get_data())
ValueError: cannot reshape array of size 293 into shape (193,121,newaxis)

### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain_community.document_loaders import PyPDFLoader
loader = PyPDFLoader("./book.pdf", extract_images=True)
```
### Expected behavior
It should load the pdf and extract info from images also. When I set extract_images=False it works fine. | ValueError: cannot reshape array of size 293 into shape (193,121,newaxis) | https://api.github.com/repos/langchain-ai/langchain/issues/15730/comments | 1 | 2024-01-09T05:52:40Z | 2024-01-09T14:40:30Z | https://github.com/langchain-ai/langchain/issues/15730 | 2,071,675,818 | 15,730 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I've seen in the langchain documentation code for vector search in Neo4j which take `OpenAIEmbeddings()` as an object parameter in order to make an embedding for input query
```python
index_name = "vector" # default index name
store = Neo4jVector.from_existing_index(
OpenAIEmbeddings(),
url=url,
username=username,
password=password,
index_name=index_name,
)
```
What i wonder is that, can we pass another embedding model e.g. huggingface model into that parameter instead of openai itself because it many cases, there exist an incompatible dimension when we already have an existing index that is embedded by another off-the-shelf model rather than a embedding model from OpenAI ?
Moreover, i took a look at the source code in case that there has no way to add huggingface model
https://github.com/langchain-ai/langchain/blob/04caf07dee2e2843ab720e5b8f0c0e83d0b86a3e/libs/community/langchain_community/vectorstores/neo4j_vector.py#L111-L147
what i've found is that for the `embedding` parameters of `Neo4jVector` object, it should be Any embedding function implementing`langchain.embeddings.base.Embeddings` interface. Here is the code described that class
https://github.com/langchain-ai/langchain/blob/04caf07dee2e2843ab720e5b8f0c0e83d0b86a3e/libs/core/langchain_core/embeddings.py#L7-L24
Does it mean that we must construct a class that inherits from it in order to implement it effectively, if yes, please provide an example to do it.
### Suggestion:
Any suggested way from this case, if it is currently not supported huggingface model with Neo4j VectorStore, i will help contributing it and make a PR then. | Issue: mechanism of embedding parameters in Neo4j Vector object | https://api.github.com/repos/langchain-ai/langchain/issues/15729/comments | 1 | 2024-01-09T04:39:49Z | 2024-01-10T02:55:08Z | https://github.com/langchain-ai/langchain/issues/15729 | 2,071,616,292 | 15,729 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
import os
from urllib.parse import quote_plus
from langchain.vectorstores.pgvector import PGVector
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, MessagesPlaceholder,
HumanMessagePromptTemplate
os.environ['OPENAI_API_KEY'] = "mykey"
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
CONNECTION_STRING = PGVector.connection_string_from_db_params(
driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
host="192.xxx.xx.xxx",
port=5432,
database="xxx",
user="xxx",
password=quote_plus("xx@xxxxxr"),
)
vectordb = PGVector(embedding_function=embeddings,
collection_name="tmp06",
connection_string=CONNECTION_STRING,
)
prompt = ChatPromptTemplate(
messages=[
SystemMessagePromptTemplate.from_template(
"請把使用者的對話紀錄當作參考作為回覆,回答只能使用繁體中文字"
),
MessagesPlaceholder(variable_name="chat_history"),
HumanMessagePromptTemplate.from_template("{question}"),
]
)
llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
retriever = vectordb.as_retriever()
memory = ConversationTokenBufferMemory(
llm=llm,
prompt=prompt,
memory_key="chat_history",
return_messages=True,
)
qa = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=retriever,
memory=memory,
verbose=True,
)
chat_history = []
while True:
memory.load_memory_variables({})
question = input('提問:')
result = qa.run({'question': question, 'chat_history': chat_history})
print(result)
chat_history.append([f'User: {question}', f'Ai: {result}'])
print(chat_history)
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
st_history = ' '.join(map(str, chat_history))
res = embeddings.embed_query(st_history)
vectordb.add_vector(res)
select_vdb = vectordb.nearest(res, n=1)
print(f'ok: {res[:4]}...')
if question.lower() == 'bye':
break
### Suggestion:
translates to "Unable to retrieve my prompt when starting the conversation" in English | Unable to retrieve my prompt when starting the conversation | https://api.github.com/repos/langchain-ai/langchain/issues/15728/comments | 2 | 2024-01-09T03:13:25Z | 2024-01-09T14:39:02Z | https://github.com/langchain-ai/langchain/issues/15728 | 2,071,551,848 | 15,728 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
i have question, is it possible to get two different type answer from one prompt ? i want to change my question from nlp to queries sql or it will return common answer from chatgpt, for example "show data purchasing" the answer will queries and if the question "show me rate usd today" it will result from the internet
### Suggestion:
_No response_ | promt result | https://api.github.com/repos/langchain-ai/langchain/issues/15719/comments | 1 | 2024-01-08T19:53:03Z | 2024-01-08T19:53:28Z | https://github.com/langchain-ai/langchain/issues/15719 | 2,071,122,353 | 15,719 |
[
"hwchase17",
"langchain"
] | ### System Info
```➜ ~ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.3 LTS
Release: 22.04
Codename: jammy
```
```
In [2]: langchain.__version__
Out[2]: '0.0.354'
```
```
In [4]: from langchain_core import __version__
In [5]: __version__
Out[5]: '0.1.8'
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Description
Create extraction chain pydantic does not work for valid pydantic schemas.
```python
from typing import Optional, List
from langchain.chains import create_extraction_chain_pydantic
from langchain_openai import ChatOpenAI
class Person(BaseModel):
"""Identifying information about a person in a text."""
person_name: str
person_height: Optional[int]
person_hair_color: Optional[str]
dog_breed: Optional[str]
dog_name: Optional[str]
# Chain for an extraction approach based on OpenAI Functions
extraction_chain = create_extraction_chain_pydantic(Person, ChatOpenAI(temperature=0))
extraction_chain.invoke("My name is tom and i'm 6 feet tall")
```
However, more complex pydantic definitions fail:
```
class People(BaseModel):
"""Identifying information about all people in a text."""
__root__: List[Person]
# Chain for an extraction approach based on OpenAI Functions
extraction_chain = create_extraction_chain_pydantic(People, ChatOpenAI(temperature=0))
extraction_chain.invoke("My name is tom and i'm 6 feet tall")
```

```
class NestedPeople(BaseModel):
"""Identifying information about all people in a text."""
people: List[Person]
# Chain for an extraction approach based on OpenAI Functions
extraction_chain = create_extraction_chain_pydantic(NestedPeople, ChatOpenAI(temperature=0))
extraction_chain.invoke("My name is tom and i'm 6 feet tall")
```

---
## Acceptance criteria
1. Code does not affect backwards compatibility if possible If must be a breaking change, perhaps we should create a new function for this purpose.
2. Should we replace LLMChain with an LCEL chain and determine what is the correct output interface for extractions? User may want error information to be returned rather than raised.
3. Unit-tests must cover above cases
### Expected behavior
All shown cases should work properly and not fail during initialization time. | Extraction: create_extraction_chain_pydantic | https://api.github.com/repos/langchain-ai/langchain/issues/15715/comments | 3 | 2024-01-08T19:11:32Z | 2024-03-08T16:39:50Z | https://github.com/langchain-ai/langchain/issues/15715 | 2,071,064,930 | 15,715 |
[
"hwchase17",
"langchain"
] | ### Feature request
I'm trying to extend AgentExecutor with custom logic and I want to override how the agent perform actions.
What i'd really need is only to override the __aperform_agent_action_ function; however this function is defined in the __aiter_next_step_ function, making it necessary to override the whole function.
This obviously comes with the drawbacks of more code and having to reconciliate future updates.
In my opinion, the __aiter_next_step_ function could be extracted into an instance or static method, allowing to override only the relevant parts.
Also, for the synchronous version _iter_next_step_ a similar problem arises, as the __perform_agent_action_ is not defined at all.
The relevant code can be extracted in a method making it easier to override it.
### Motivation
This update would allow for better extendibility of the AgentExecutor class
### Your contribution
I can submit a PR to address the issue | Extract _aperform_agent_action from _aiter_next_step from AgentExecutor | https://api.github.com/repos/langchain-ai/langchain/issues/15706/comments | 1 | 2024-01-08T14:12:40Z | 2024-01-24T02:22:10Z | https://github.com/langchain-ai/langchain/issues/15706 | 2,070,544,706 | 15,706 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Dear all
I have this pipeline
```python
translation_cache = ToJSON(key=key, out_dir=Path("results/sabadel/translation"))
translation_prompt = Prompt.from_yaml(Path("prompts/translate.yml"))
translation_chain = (
    {
        "transcription": lambda data: format_transcription_for_prompt(
            data["transcription"]
        )
    }
    | translation_prompt.template
    | model
    | {
        "transcription": RunnableLambda(
            lambda res: translation_output_parser(res, transcription)
        )
    }
    | {"transcription": lambda x: translation_cache(x)}
)
translation_chain = (
    RunnableLambda(lambda data: {**data, "transcription": translation_cache.load()})
    if translation_cache.exists
    else translation_chain
)

# evaluation
evaluation_prompt = Prompt.from_yaml(Path("prompts/sabadell/evaluation.yml"))
evaluation_cache = ToJSON(key=key, out_dir=Path("results/sabadel/evaluation"))
evaluation_chain = (
    evaluation_prompt.template | model | evaluation_output_parser | evaluation_cache
)
evaluation_chain = (
    RunnableLambda(lambda data: {**data, "evaluations": evaluation_cache.load()})
    if evaluation_cache.exists
    else evaluation_chain
)

# retention
retention_prompt = Prompt.from_yaml(Path("prompts/sabadell/evaluation.retention.yml"))
retention_cache = ToJSON(key=key, out_dir=Path("results/sabadel/retention"))
retention_chain = (
    retention_prompt.template | model | retention_output_parser | retention_cache
)
retention_chain = RunnableLambda(
    lambda data: {
        **data,
        "retention": retention_cache.load()
        if retention_cache.exists
        else retention_chain(**data),
    }
)

# final chain
# chain = translation_chain | retention_chain
# print(translation_chain.invoke({"transcription": transcription}))
chain = translation_chain | evaluation_chain | retention_chain
print(chain.get_graph().print_ascii())
print(
    chain.invoke({"transcription": transcription, "retention_script": retention_script})
)
```
Now, what I'd like is for `evaluation_chain` to put its output under an `evaluations` key and pass the original data dict plus that key along to `retention_chain`; `retention_chain` should then put its output under a `retention` key and pass along the original dict plus all the outputs.
How can I do this?
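For illustration, a minimal sketch of the accumulate-and-pass-along pattern with `RunnablePassthrough.assign` (the two inner chains are stand-ins for the real prompt | model | parser pipelines above):
```python
from langchain_core.runnables import RunnableLambda, RunnablePassthrough

# Stand-ins for the real sub-chains; each one receives the full data dict.
evaluation_chain = RunnableLambda(lambda data: f"evaluated: {data['transcription']}")
retention_chain = RunnableLambda(lambda data: f"retention for: {data['evaluations']}")

# .assign() adds a key to the dict and passes every existing key through untouched.
chain = (
    RunnablePassthrough.assign(evaluations=evaluation_chain)
    | RunnablePassthrough.assign(retention=retention_chain)
)

result = chain.invoke({"transcription": "...", "retention_script": "..."})
# result keeps the original keys and gains "evaluations" and "retention".
```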
### Idea or request for content:
_No response_ | DOC: Data Pipeline for humans | https://api.github.com/repos/langchain-ai/langchain/issues/15705/comments | 3 | 2024-01-08T14:10:39Z | 2024-01-09T14:42:08Z | https://github.com/langchain-ai/langchain/issues/15705 | 2,070,541,205 | 15,705 |
[
"hwchase17",
"langchain"
] | ### Feature request
Mistral AI now offers an embedding endpoint; they provide a [python client](https://docs.mistral.ai/platform/endpoints/) to access the embedding model.
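A rough sketch of what the integration could look like, following LangChain's `Embeddings` interface (the Mistral client call and response shape are assumptions based on their endpoint docs and would need to be checked against the actual SDK):
```python
from typing import List

from langchain_core.embeddings import Embeddings


class MistralAIEmbeddings(Embeddings):
    """Sketch of a wrapper around Mistral AI's embedding endpoint."""

    def __init__(self, client, model: str = "mistral-embed"):
        self.client = client  # assumed: an already-instantiated Mistral SDK client
        self.model = model

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        # Assumed client API; the real method name and response shape must match the SDK.
        response = self.client.embeddings(model=self.model, input=texts)
        return [item.embedding for item in response.data]

    def embed_query(self, text: str) -> List[float]:
        return self.embed_documents([text])[0]
```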
### Motivation
It would be great if we added the new embedding service from Mistral!
### Your contribution
I can work on this and submit a PR | Add support for the Mistral AI Embedding Model | https://api.github.com/repos/langchain-ai/langchain/issues/15702/comments | 2 | 2024-01-08T12:35:54Z | 2024-04-16T16:15:00Z | https://github.com/langchain-ai/langchain/issues/15702 | 2,070,370,106 | 15,702 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Hi.
I am a newcomer to Langchain following the Quickstart tutorial in a Jupyter Notebook, using the setup recommended by the installation guide. I am following the OpenAI tutorial, rather than the local LLM version.
I followed the exact code in the docs by pasting the cells into my notebook. All code works perfectly without a single error or warning. However, the code fails at this point:
```python
response = retrieval_chain.invoke({"input": "how can langsmith help with testing?"})
print(response["answer"])
# LangSmith offers several features that can help with testing:...
```
When I attempt to run this code, I get the following output in my notebook:
```
{
"name": "ValidationError",
"message": "2 validation errors for DocArrayDoc
text
Field required [type=missing, input_value={'embedding': [-0.0144587... -0.015377209573652503]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.5/v/missing
metadata
Field required [type=missing, input_value={'embedding': [-0.0144587... -0.015377209573652503]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.5/v/missing",
"stack": "---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[18], line 1
----> 1 response = retrieval_chain.invoke({\"input\": \"how can langsmith help with testing?\"})
2 print(response[\"answer\"])
4 # LangSmith offers several features that can help with testing:...
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\base.py:3590, in RunnableBindingBase.invoke(self, input, config, **kwargs)
3584 def invoke(
3585 self,
3586 input: Input,
3587 config: Optional[RunnableConfig] = None,
3588 **kwargs: Optional[Any],
3589 ) -> Output:
-> 3590 return self.bound.invoke(
3591 input,
3592 self._merge_configs(config),
3593 **{**self.kwargs, **kwargs},
3594 )
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\base.py:1762, in RunnableSequence.invoke(self, input, config)
1760 try:
1761 for i, step in enumerate(self.steps):
-> 1762 input = step.invoke(
1763 input,
1764 # mark each step as a child run
1765 patch_config(
1766 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")
1767 ),
1768 )
1769 # finish the root run
1770 except BaseException as e:
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\passthrough.py:415, in RunnableAssign.invoke(self, input, config, **kwargs)
409 def invoke(
410 self,
411 input: Dict[str, Any],
412 config: Optional[RunnableConfig] = None,
413 **kwargs: Any,
414 ) -> Dict[str, Any]:
--> 415 return self._call_with_config(self._invoke, input, config, **kwargs)
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\base.py:975, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
971 context = copy_context()
972 context.run(var_child_runnable_config.set, child_config)
973 output = cast(
974 Output,
--> 975 context.run(
976 call_func_with_variable_args,
977 func, # type: ignore[arg-type]
978 input, # type: ignore[arg-type]
979 config,
980 run_manager,
981 **kwargs,
982 ),
983 )
984 except BaseException as e:
985 run_manager.on_chain_error(e)
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\config.py:323, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
321 if run_manager is not None and accepts_run_manager(func):
322 kwargs[\"run_manager\"] = run_manager
--> 323 return func(input, **kwargs)
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\passthrough.py:402, in RunnableAssign._invoke(self, input, run_manager, config, **kwargs)
389 def _invoke(
390 self,
391 input: Dict[str, Any],
(...)
394 **kwargs: Any,
395 ) -> Dict[str, Any]:
396 assert isinstance(
397 input, dict
398 ), \"The input to RunnablePassthrough.assign() must be a dict.\"
400 return {
401 **input,
--> 402 **self.mapper.invoke(
403 input,
404 patch_config(config, callbacks=run_manager.get_child()),
405 **kwargs,
406 ),
407 }
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\base.py:2327, in RunnableParallel.invoke(self, input, config)
2314 with get_executor_for_config(config) as executor:
2315 futures = [
2316 executor.submit(
2317 step.invoke,
(...)
2325 for key, step in steps.items()
2326 ]
-> 2327 output = {key: future.result() for key, future in zip(steps, futures)}
2328 # finish the root run
2329 except BaseException as e:
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\base.py:2327, in <dictcomp>(.0)
2314 with get_executor_for_config(config) as executor:
2315 futures = [
2316 executor.submit(
2317 step.invoke,
(...)
2325 for key, step in steps.items()
2326 ]
-> 2327 output = {key: future.result() for key, future in zip(steps, futures)}
2328 # finish the root run
2329 except BaseException as e:
File C:\\ProgramData\\miniconda3\\Lib\\concurrent\\futures\\_base.py:456, in Future.result(self, timeout)
454 raise CancelledError()
455 elif self._state == FINISHED:
--> 456 return self.__get_result()
457 else:
458 raise TimeoutError()
File C:\\ProgramData\\miniconda3\\Lib\\concurrent\\futures\\_base.py:401, in Future.__get_result(self)
399 if self._exception:
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
404 self = None
File C:\\ProgramData\\miniconda3\\Lib\\concurrent\\futures\\thread.py:58, in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\base.py:3590, in RunnableBindingBase.invoke(self, input, config, **kwargs)
3584 def invoke(
3585 self,
3586 input: Input,
3587 config: Optional[RunnableConfig] = None,
3588 **kwargs: Optional[Any],
3589 ) -> Output:
-> 3590 return self.bound.invoke(
3591 input,
3592 self._merge_configs(config),
3593 **{**self.kwargs, **kwargs},
3594 )
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\runnables\\base.py:1762, in RunnableSequence.invoke(self, input, config)
1760 try:
1761 for i, step in enumerate(self.steps):
-> 1762 input = step.invoke(
1763 input,
1764 # mark each step as a child run
1765 patch_config(
1766 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")
1767 ),
1768 )
1769 # finish the root run
1770 except BaseException as e:
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\retrievers.py:121, in BaseRetriever.invoke(self, input, config)
117 def invoke(
118 self, input: str, config: Optional[RunnableConfig] = None
119 ) -> List[Document]:
120 config = ensure_config(config)
--> 121 return self.get_relevant_documents(
122 input,
123 callbacks=config.get(\"callbacks\"),
124 tags=config.get(\"tags\"),
125 metadata=config.get(\"metadata\"),
126 run_name=config.get(\"run_name\"),
127 )
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\retrievers.py:223, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
221 except Exception as e:
222 run_manager.on_retriever_error(e)
--> 223 raise e
224 else:
225 run_manager.on_retriever_end(
226 result,
227 **kwargs,
228 )
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\retrievers.py:216, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
214 _kwargs = kwargs if self._expects_other_args else {}
215 if self._new_arg_supported:
--> 216 result = self._get_relevant_documents(
217 query, run_manager=run_manager, **_kwargs
218 )
219 else:
220 result = self._get_relevant_documents(query, **_kwargs)
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_core\\vectorstores.py:654, in VectorStoreRetriever._get_relevant_documents(self, query, run_manager)
650 def _get_relevant_documents(
651 self, query: str, *, run_manager: CallbackManagerForRetrieverRun
652 ) -> List[Document]:
653 if self.search_type == \"similarity\":
--> 654 docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
655 elif self.search_type == \"similarity_score_threshold\":
656 docs_and_similarities = (
657 self.vectorstore.similarity_search_with_relevance_scores(
658 query, **self.search_kwargs
659 )
660 )
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_community\\vectorstores\\docarray\\base.py:127, in DocArrayIndex.similarity_search(self, query, k, **kwargs)
115 def similarity_search(
116 self, query: str, k: int = 4, **kwargs: Any
117 ) -> List[Document]:
118 \"\"\"Return docs most similar to query.
119
120 Args:
(...)
125 List of Documents most similar to the query.
126 \"\"\"
--> 127 results = self.similarity_search_with_score(query, k=k, **kwargs)
128 return [doc for doc, _ in results]
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\langchain_community\\vectorstores\\docarray\\base.py:106, in DocArrayIndex.similarity_search_with_score(self, query, k, **kwargs)
94 \"\"\"Return docs most similar to query.
95
96 Args:
(...)
103 Lower score represents more similarity.
104 \"\"\"
105 query_embedding = self.embedding.embed_query(query)
--> 106 query_doc = self.doc_cls(embedding=query_embedding) # type: ignore
107 docs, scores = self.doc_index.find(query_doc, search_field=\"embedding\", limit=k)
109 result = [
110 (Document(page_content=doc.text, metadata=doc.metadata), score)
111 for doc, score in zip(docs, scores)
112 ]
File e:\\Repos\\Practice Projects\\Langchain\\Quickstart\\quickstart\\Lib\\site-packages\\pydantic\\main.py:164, in BaseModel.__init__(__pydantic_self__, **data)
162 # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
163 __tracebackhide__ = True
--> 164 __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
ValidationError: 2 validation errors for DocArrayDoc
text
Field required [type=missing, input_value={'embedding': [-0.0144587... -0.015377209573652503]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.5/v/missing
metadata
Field required [type=missing, input_value={'embedding': [-0.0144587... -0.015377209573652503]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.5/v/missing"
}
```
### Idea or request for content:
As I am a newcomer, I do not understand exactly what the issue is. Thus, I would like to request that the documentation be updated so that the code works correctly. In the meantime, I would appreciate any assistance so I can continue to learn Langchain through the quickstart and work my way through the rest of the docs . | DOC: Quickstart Code Fails for Retrieval Chain | https://api.github.com/repos/langchain-ai/langchain/issues/15700/comments | 5 | 2024-01-08T10:23:26Z | 2024-01-08T15:54:43Z | https://github.com/langchain-ai/langchain/issues/15700 | 2,070,146,142 | 15,700 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.10.12
langchain 0.0.354
### Who can help?
@hwch
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.agents.agent_toolkits.slack.toolkit import SlackToolkit
# import added so the snippet runs as-is
from langchain_experimental.openai_assistant import OpenAIAssistantRunnable

stoolkit = SlackToolkit()
tools = stoolkit.get_tools()

agent = OpenAIAssistantRunnable.create_assistant(
    name="Sales assistant",
    instructions="""You are a admin agent, tasked with the following jobs:
2. Read and post messages on Slack""",
    tools=tools,
    model="gpt-4-1106-preview",
    as_agent=True,
)

from langchain.agents.agent import AgentExecutor

agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools)
agent_executor.invoke({"content": "list all messages in #budget-decisions"})
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[11], line 13
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools)
---> agent_executor.invoke({"content":"list all messages in #budget-decisions"})
File ~/smith/lib/python3.10/site-packages/langchain/chains/base.py:93, in Chain.invoke(self, input, config, **kwargs)
86 def invoke(
87 self,
88 input: Dict[str, Any],
89 config: Optional[RunnableConfig] = None,
90 **kwargs: Any,
91 ) -> Dict[str, Any]:
92 config = ensure_config(config)
---> 93 return self(
94 input,
95 callbacks=config.get("callbacks"),
96 tags=config.get("tags"),
97 metadata=config.get("metadata"),
98 run_name=config.get("run_name"),
99 **kwargs,
100 )
File ~/smith/lib/python3.10/site-packages/langchain/chains/base.py:316, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
314 except BaseException as e:
315 run_manager.on_chain_error(e)
--> 316 raise e
317 run_manager.on_chain_end(outputs)
318 final_outputs: Dict[str, Any] = self.prep_outputs(
319 inputs, outputs, return_only_outputs
320 )
File ~/smith/lib/python3.10/site-packages/langchain/chains/base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
303 run_manager = callback_manager.on_chain_start(
304 dumpd(self),
305 inputs,
306 name=run_name,
307 )
308 try:
309 outputs = (
--> 310 self._call(inputs, run_manager=run_manager)
311 if new_arg_supported
312 else self._call(inputs)
313 )
314 except BaseException as e:
315 run_manager.on_chain_error(e)
File ~/smith/lib/python3.10/site-packages/langchain/agents/agent.py:1312, in AgentExecutor._call(self, inputs, run_manager)
1310 # We now enter the agent loop (until it returns something).
1311 while self._should_continue(iterations, time_elapsed):
-> 1312 next_step_output = self._take_next_step(
1313 name_to_tool_map,
1314 color_mapping,
1315 inputs,
1316 intermediate_steps,
1317 run_manager=run_manager,
1318 )
1319 if isinstance(next_step_output, AgentFinish):
1320 return self._return(
1321 next_step_output, intermediate_steps, run_manager=run_manager
1322 )
File ~/smith/lib/python3.10/site-packages/langchain/agents/agent.py:1038, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1029 def _take_next_step(
1030 self,
1031 name_to_tool_map: Dict[str, BaseTool],
(...)
1035 run_manager: Optional[CallbackManagerForChainRun] = None,
1036 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1037 return self._consume_next_step(
-> 1038 [
1039 a
1040 for a in self._iter_next_step(
1041 name_to_tool_map,
1042 color_mapping,
1043 inputs,
1044 intermediate_steps,
1045 run_manager,
1046 )
1047 ]
1048 )
File ~/smith/lib/python3.10/site-packages/langchain/agents/agent.py:1038, in <listcomp>(.0)
1029 def _take_next_step(
1030 self,
1031 name_to_tool_map: Dict[str, BaseTool],
(...)
1035 run_manager: Optional[CallbackManagerForChainRun] = None,
1036 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1037 return self._consume_next_step(
-> 1038 [
1039 a
1040 for a in self._iter_next_step(
1041 name_to_tool_map,
1042 color_mapping,
1043 inputs,
1044 intermediate_steps,
1045 run_manager,
1046 )
1047 ]
1048 )
File ~/smith/lib/python3.10/site-packages/langchain/agents/agent.py:1134, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1132 tool_run_kwargs["llm_prefix"] = ""
1133 # We then call the tool on the tool input to get an observation
-> 1134 observation = tool.run(
1135 agent_action.tool_input,
1136 verbose=self.verbose,
1137 color=color,
1138 callbacks=run_manager.get_child() if run_manager else None,
1139 **tool_run_kwargs,
1140 )
1141 else:
1142 tool_run_kwargs = self.agent.tool_run_logging_kwargs()
File ~/smith/lib/python3.10/site-packages/langchain_core/tools.py:365, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
363 except (Exception, KeyboardInterrupt) as e:
364 run_manager.on_tool_error(e)
--> 365 raise e
366 else:
367 run_manager.on_tool_end(
368 str(observation), color=color, name=self.name, **kwargs
369 )
File ~/smith/lib/python3.10/site-packages/langchain_core/tools.py:337, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
334 try:
335 tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
336 observation = (
--> 337 self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
338 if new_arg_supported
339 else self._run(*tool_args, **tool_kwargs)
340 )
341 except ToolException as e:
342 if not self.handle_tool_error:
TypeError: SlackGetChannel._run() got multiple values for argument 'run_manager'
### Expected behavior
The slack agent should send a message on the said channel. | TypeError: SlackGetChannel._run() got multiple values for argument 'run_manager' | https://api.github.com/repos/langchain-ai/langchain/issues/15698/comments | 2 | 2024-01-08T09:58:38Z | 2024-04-15T16:25:31Z | https://github.com/langchain-ai/langchain/issues/15698 | 2,070,099,650 | 15,698 |
[
"hwchase17",
"langchain"
] | ### System Info
Chroma 0.4.22
Langchain 0.0.354
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a SelfQueryRetriever
2. Create AttributeInfo metadata list in preparation for filtering based off metadata.
```python
self_query_retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
"Information about when document was published and where it originated from",
metadata_field_info
)
# retriever = MergerRetriever(retrievers=[parent_retriever, self_query_retriever])
retriever = self_query_retriever
template = """
### Instruction: You're an assistant who knows the following information:
### {context}
If you don't know the answer, then say you don't know and refer the user to the respective department for extra information.
Absolutely do not mention you are an AI language model. Use only the chat history and the following information.
### {chat_history}
### Input: {question}
### Response:
""".strip()
prompt = PromptTemplate(input_variables=["context", "chat_history", "question"], template=template)
chain = ConversationalRetrievalChain.from_llm(
llm,
chain_type="stuff",
retriever=retriever,
combine_docs_chain_kwargs={"prompt": prompt},#, "metadata_weights": metadata_weights},
return_source_documents=True,
verbose=False,
rephrase_question=True,
max_tokens_limit=16000,
response_if_no_docs_found="""I'm sorry, but I was not able to find the answer to your question based on the information I know. You may have to reach out to the respective internal department for more details regarding your inquiry."""
)
return chain
def score_unstructured(model, data, query, **kwargs) -> str:
"""Custom model hook for making completions with our knowledge base.
When requesting predictions from the deployment, pass a dictionary
with the following keys:
- 'question' the question to be passed to the retrieval chain
- 'chat_history' (optional) a list of two-element lists corresponding to
preceding dialogue between the Human and AI, respectively
datarobot-user-models (DRUM) handles loading the model and calling
this function with the appropriate parameters.
Returns:
--------
rv : str
Json dictionary with keys:
- 'question' user's original question
- 'chat_history' chat history that was provided with the original question
- 'answer' the generated answer to the question
- 'references' list of references that were used to generate the answer
- 'error' - error message if exception in handling request
"""
import json
try:
chain = model
data_dict = json.loads(data)
if 'chat_history' in data_dict:
chat_history = [(human, ai,) for human, ai in data_dict['chat_history']]
else:
chat_history = []# model.chat_history
rv = chain(
inputs={
'question': data_dict['question'],
'chat_history': chat_history,
},
)
source_docs = rv.pop('source_documents')
rv['references'] = [doc.metadata['source'] for doc in source_docs]
if len(source_docs) > 0:
rv["top_reference_text"] = [doc.page_content for doc in source_docs]
else:
rv["top_reference_text"] = ""
except Exception as e:
rv = {'error': f"{e.__class__.__name__}: {str(e)}"}
return json.dumps(rv)
model = load_model(".")
```
I asked the following question:
```python
questions = ["What is the minimum opening deposit for each account as of January 2023?"]
os.environ["TOKENIZERS_PARALLELISM"] = "false"

for question in questions:
    rv = score_unstructured(
        model,
        json.dumps(
            {
                "question": question
                # "chat_history": []
            }
        ),
        None,
    )
    print(rv)
    print(question.upper())
    print(json.loads(rv)["answer"])
    print(json.loads(rv))
    print("------------------------------------------------")
```
The issue I got was ```ValueError: Expected where operand value to be a str, int, float, or list of those type, got {'date': '2023-01-01', 'type': 'date'}```
It looks like the SelfQueryRetriever converted the "January 2023" part of my question into a structured date object, which Chroma's `where` filter does not accept. I'm not sure how to resolve this issue on my end.
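A possible workaround to experiment with (a sketch, not a confirmed fix — the attribute names and descriptions below are assumptions about the metadata in use): describe the date field as a plain string in `AttributeInfo`, so the query constructor is nudged toward emitting a string comparison instead of a structured date object.
```python
from langchain.chains.query_constructor.base import AttributeInfo

metadata_field_info = [
    AttributeInfo(
        name="publish_date",
        description=(
            "Date the document was published, stored as a string "
            "in YYYY-MM-DD format (e.g. '2023-01-01')"
        ),
        type="string",
    ),
    AttributeInfo(
        name="source",
        description="Where the document originated from",
        type="string",
    ),
]
```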
### Expected behavior
Query with a date and receive an answer from the SelfQueryRetriever. | SelfQueryRetriever, ValueError: Expected where operand value to be a str, int, float, or list of those type | https://api.github.com/repos/langchain-ai/langchain/issues/15696/comments | 11 | 2024-01-08T09:48:39Z | 2024-06-10T14:52:24Z | https://github.com/langchain-ai/langchain/issues/15696 | 2,070,080,675 | 15,696 |
[
"hwchase17",
"langchain"
] | ### Feature request
- I want the local LLM (LlamaCpp) to maintain its context, which would significantly improve the efficiency of follow-up questions.
- Currently, the context of LlamaCpp is lost after the first call, necessitating the reprocessing of all tokens for any subsequent question.
- **Proposed Solution:** Utilize the internal KV cache of LlamaCpp to retain context and avoid reprocessing the same tokens repeatedly.
### Motivation
- My motivation is to address the inefficiency in the current process where the context is not preserved between queries.
- There seems to be no existing solution for this specific issue as per my research, for example, [LangChain Caching Documentation](https://python.langchain.com/docs/modules/model_io/chat/chat_model_caching#in-memory-cache).
A minimized example which shows my current workaround:
```python
from langchain.llms import LlamaCpp
from langchain.chains import LLMChain
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from pathlib import Path


class LlmRunner:
    def __init__(self, path_to_model: Path) -> None:
        self.llm_instance = LlamaCpp(
            model_path=str(path_to_model),
            n_ctx=16384,
            max_tokens=-1,
            temperature=0,
            repeat_penalty=1.15,
            n_gpu_layers=1,
            n_threads=8,
            verbose=False,
        )

    def run(self, invoice_text: str):
        # Initial processing filling up kv-cache and context
        initial_prompt = """
        Extract and format keys from the invoice text into JSON.
        <context>
        {input}
        </context>
        """
        chain1 = LLMChain(
            llm=self.llm_instance,
            prompt=ChatPromptTemplate.from_messages(
                [HumanMessagePromptTemplate.from_template(initial_prompt)]
            ),
        )
        response1 = chain1.invoke({'input': invoice_text})

        # Follow-up processing which COULD reuse the context, but doesn't
        follow_up_prompt = """
        Review your results and normalize the dates to YYYY-MM-DD.
        <input>
        {input}
        </input>
        <context>
        {invoice_text}
        </context>"""
        chain2 = LLMChain(
            llm=self.llm_instance,
            prompt=ChatPromptTemplate.from_messages(
                [HumanMessagePromptTemplate.from_template(follow_up_prompt)]
            ),
        )
        response2 = chain2.invoke({'invoice_text': invoice_text, 'input': response1['text']})
        return response2['text']


# Example usage
path_to_model = Path("path_to_your_model")
runner = LlmRunner(path_to_model)
invoice_text = "Your invoice text here"
result = runner.run(invoice_text)
print(result)
```
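For reference, a rough sketch of the direction I have in mind, using llama-cpp-python's prompt cache through the wrapper's underlying client (the `client` attribute and the effectiveness of `LlamaCache` for this use case are assumptions I have not verified):
```python
from langchain.llms import LlamaCpp
from llama_cpp import LlamaCache

llm = LlamaCpp(model_path="path_to_your_model", n_ctx=16384, verbose=False)
# Assumption: the LangChain wrapper keeps the llama.cpp model in `client`.
llm.client.set_cache(LlamaCache())

# llama.cpp reuses the cached KV state for the longest matching prompt prefix,
# so putting the shared invoice text first lets the follow-up skip those tokens.
shared_prefix = "<context>\nYour invoice text here\n</context>\n"
first = llm.invoke(shared_prefix + "Extract and format keys from the invoice into JSON.")
second = llm.invoke(shared_prefix + "Review your results and normalize the dates to YYYY-MM-DD.")
```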
### Your contribution
- if something like this already exists, I am willing to provide an example and update the documentation
- if you point me in the right directions and it's just a few 100s LOC I am willing to submit a PR | Reuse KV-Cache with local LLM (IlamaCpp) instead of expensive reprocessing of all history tokens | https://api.github.com/repos/langchain-ai/langchain/issues/15695/comments | 3 | 2024-01-08T09:47:45Z | 2024-03-23T22:37:54Z | https://github.com/langchain-ai/langchain/issues/15695 | 2,070,079,179 | 15,695 |
[
"hwchase17",
"langchain"
] | ### Feature request
Every time i create a milvus object, i load the collection, but there is no way to dynamically know the replica_number of the currently loaded collection, so there is a disadvantage that i have to hand over the different replica_number for each collection as an argument. Therefore, when creating a milvus object, I would like to add a flag that can determine whether to load or not.
### Motivation
Always loading a collection can cause an unexpected error.
### Your contribution
https://github.com/langchain-ai/langchain/pull/15693 | feat: add a flag that determines whether to load the milvus collection | https://api.github.com/repos/langchain-ai/langchain/issues/15694/comments | 1 | 2024-01-08T09:14:35Z | 2024-01-15T19:25:25Z | https://github.com/langchain-ai/langchain/issues/15694 | 2,070,024,246 | 15,694 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
````python
def generate_custom_prompt(query=None, name=None, not_uuid=None, chroma_db_path=None):
    check = query.lower()
    embedding = OpenAIEmbeddings()
    vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
    retriever = vectordb.as_retriever(search_kwargs={"k": 2})
    relevant_document = retriever.get_relevant_documents(query)
    print(relevant_document, "*****************************************")
    context_text = "\n\n---\n\n".join([doc.page_content for doc in relevant_document])
    user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
    greetings = ['hi', 'hello', 'hey', 'hui', 'hiiii', 'hii', 'hiii', 'heyyy']

    if check in greetings:
        custom_prompt_template = f"""
Just simply reply with "Hello {name}! How can I assist you today?"
"""
    elif check not in greetings and user_experience_inst.custom_prompt:
        custom_prompt_template = f"""Answer the question based only on following context: ```{context_text} ```
You are a chatbot designed to provide answers to User's Questions:```{check}```, delimited by triple backticks.
Generate your answer to match the user's requirements: {user_experience_inst.custom_prompt}
If you encounter a question for which you don't know the answer, please respond with 'I don't know' and refrain from making up an answer.
- Before saying 'I don't know,' please re-verify your vector store to ensure the answer is not present in the database.
Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, feel free to ask for clarification.
User's Question: ```{check}```
AI Answer:"""
    else:
        custom_prompt_template = f"""Generate your response exclusively from the provided context: {{context_text}}. You function as a chatbot specializing in delivering detailed answers to the User's Question: ```{{check}} ```, enclosed within triple backticks.
Generate your answer in points in the following format:
1. Point no 1
1.1 Its subpoint in details
1.2 More information if needed.
2. Point no 2
2.1 Its subpoint in details
2.2 More information if needed.
…
N. Another main point.
If you encounter a question for which you don't know the answer based on the predefined points, please respond with 'I don't know' and refrain from making up an answer.
However, if the answer is not present in the predefined points, then Provide comprehensive information related to the user's query.
Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, you can ask for clarification.
User's Question: ```{{check}} ```
AI Answer:"""

    custom_prompt = ChatPromptTemplate.from_template(template=custom_prompt_template)
    formatted_prompt = custom_prompt.format(context_text=context_text, check=check)
    llm = ChatOpenAI(temperature=0.1)
    memory = ConversationBufferMemory(llm=llm, output_key='answer', memory_key='chat_history', return_messages=True)
    qa = ConversationalRetrievalChain.from_llm(
        llm=llm, memory=memory, chain_type="stuff", retriever=retriever,
        return_source_documents=True, get_chat_history=lambda h: h, verbose=True,
        combine_docs_chain_kwargs={"prompt": PromptTemplate(
            template=custom_prompt_template, input_variables=["context_text", "check"])})
    return qa
````
How can I add the chat history to the prompt template in the above function as well?
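One possible approach (a sketch, not a verified fix — it assumes `ConversationalRetrievalChain` forwards `chat_history` to the combine-docs prompt, and it uses the chain's standard `context`/`question` variable names; `llm` and `retriever` are the ones built above): declare `chat_history` as an input variable of the combine-docs prompt and let the memory fill it.
```python
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

qa_template = """Answer the question based only on the following context:
{context}

Previous conversation:
{chat_history}

User's Question: {question}
AI Answer:"""

memory = ConversationBufferMemory(
    output_key="answer",
    memory_key="chat_history",
    return_messages=False,  # keep history as plain text so it can be stuffed into the template
)

qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    memory=memory,
    return_source_documents=True,
    combine_docs_chain_kwargs={
        "prompt": PromptTemplate(
            template=qa_template,
            input_variables=["context", "chat_history", "question"],
        )
    },
)
```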
### Suggestion:
_No response_ | Issue: How to add chat history in prompt template | https://api.github.com/repos/langchain-ai/langchain/issues/15692/comments | 5 | 2024-01-08T08:54:42Z | 2024-04-15T16:20:34Z | https://github.com/langchain-ai/langchain/issues/15692 | 2,069,993,594 | 15,692 |
[
"hwchase17",
"langchain"
] | ### Feature request
I suggest supporting the Milvus vector database's new [Dynamic Schema](https://milvus.io/docs/dynamic_schema.md) feature.
### Motivation
According to Milvus:
> Dynamic schema enables users to insert entities with new fields into a Milvus collection without modifying the existing schema. This means that users can insert data without knowing the full schema of a collection and can include fields that are not yet defined.
I think it is good to allow Langchain to have this feature when multiple types or schema of documents are added to the database.
### Your contribution
I propose to add a "dynamic_schema" flag to the `__init__` and `from_texts` method of the Milvus class:
`__init__` method:
https://github.com/langchain-ai/langchain/blob/4c47f39fcb539fdeff6dd6d9b1f483cd9a1af69b/libs/community/langchain_community/vectorstores/milvus.py#L107-L125
Change to:
```python
def __init__(
self,
embedding_function: Embeddings,
collection_name: str = "LangChainCollection",
collection_description: str = "",
connection_args: Optional[dict[str, Any]] = None,
consistency_level: str = "Session",
index_params: Optional[dict] = None,
search_params: Optional[dict] = None,
drop_old: Optional[bool] = False,
*,
primary_field: str = "pk",
text_field: str = "text",
vector_field: str = "vector",
metadata_field: Optional[str] = None,
partition_names: Optional[list] = None,
replica_number: int = 1,
timeout: Optional[float] = None,
dynamic_schema = False,
):
```
`from_texts` method:
https://github.com/langchain-ai/langchain/blob/4c47f39fcb539fdeff6dd6d9b1f483cd9a1af69b/libs/community/langchain_community/vectorstores/milvus.py#L839-L887
Change to:
```python
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
collection_name: str = "LangChainCollection",
connection_args: dict[str, Any] = DEFAULT_MILVUS_CONNECTION,
consistency_level: str = "Session",
index_params: Optional[dict] = None,
search_params: Optional[dict] = None,
drop_old: bool = False,
dynamic_schema = False,
**kwargs: Any,
) -> Milvus:
```
I may later submit a PR for this suggestion. | Add Dynamic Schema support for the Milvus vector store | https://api.github.com/repos/langchain-ai/langchain/issues/15690/comments | 3 | 2024-01-08T08:06:51Z | 2024-08-07T16:06:24Z | https://github.com/langchain-ai/langchain/issues/15690 | 2,069,926,013 | 15,690 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```python
import os
from urllib.parse import quote_plus

from langchain.vectorstores.pgvector import PGVector
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores.pgvector import DistanceStrategy

os.environ['OPENAI_API_KEY'] = "mykey"

embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")

CONNECTION_STRING = PGVector.connection_string_from_db_params(
    driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
    host="192.168.xxx.xx",
    port=5432,
    database="xxxxx",
    user="xxxxxxxxx",
    password=quote_plus("xxxx@r"),
)

vectordb = PGVector(
    embedding_function=embeddings,
    collection_name="tmp06",
    connection_string=CONNECTION_STRING,
    distance_strategy=DistanceStrategy.COSINE,
)

llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
memory_token_limit = 100
retriever = vectordb.as_retriever()

memory = ConversationTokenBufferMemory(
    llm=llm,
    max_token_limit=int(memory_token_limit),
    memory_key="chat_history",
    return_messages=True,
)

qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    memory=memory,
    verbose=True,
)

chat_history = []
while True:
    memory.load_memory_variables({})
    question = input('ask:')
    result = qa.run({'question': question, 'chat_history': chat_history})
    print(result)
    chat_history.append([f'User: {question}', f'Ai: {result}'])
    print(chat_history)
    embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
    st_history = ' '.join(map(str, chat_history))
    res = embeddings.embed_query(st_history)
    vectordb.add_vector(res)
    best_solution = vectordb.nearest(res, n=1)
    print(f'ok: {res[:4]}...')
    if question.lower() == 'bye':
        break
```
Traceback (most recent call last):
File "C:\Users\syz\Downloads\ChatBotgpt-3.5-turbo-main\models\1227.py", line 53, in
vectordb.add_vector(res)
^^^^^^^^^^^^^^^^^^^
AttributeError: 'PGVector' object has no attribute 'add_vector'
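For reference, a minimal sketch of the write-and-retrieve loop using the methods `PGVector` does expose (`add_texts` and `similarity_search` are the standard LangChain vector-store API; the metadata key shown is an assumption):
```python
# Store the latest exchange as a document; PGVector computes the embedding itself.
vectordb.add_texts(
    texts=[st_history],
    metadatas=[{"type": "chat_history"}],
)

# Later, retrieve the most similar stored exchange for the new question.
best_matches = vectordb.similarity_search(question, k=1)
if best_matches:
    print(best_matches[0].page_content)
```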
### Suggestion:
How can I extract vector data from pgvector for use as a reference in the next conversation to enable long-term memory functionality for my chatbot? | Issue: <How can I extract vector data from pgvector for use as a reference in the next conversation to enable long-term memory functionality for my chatbot?> | https://api.github.com/repos/langchain-ai/langchain/issues/15689/comments | 1 | 2024-01-08T07:57:56Z | 2024-04-15T16:24:00Z | https://github.com/langchain-ai/langchain/issues/15689 | 2,069,914,553 | 15,689 |
[
"hwchase17",
"langchain"
] | ### Feature request
from langchain_experimental.sql import SQLDatabaseChain
from langchain.sql_database import SQLDatabase
I'm using the above packages to connect the databricks database(SQLDatabse)and passing it to the model chain(SQLDatabaseChain) to generate the SQLQuery. But I want to close the connection of the database after each response. I couldn't find anything to close the database connection using this SQLDatabase package. Even in the SQLDatabase documentation I couldn't find anything. So I need some close() function to close the connection of the database.
### Motivation
Because of this close() functionality not available in the SQLDatabase package. I'm getting (sqlalchemy.exc.OperationalError) so I need to reboot the server to tackle this issue but that was not the feasible solution. And one more thing I can't use other different packages to connect my database because the model chain accept only the SQLDatabase in the parameter.
### Your contribution
Try to add the close() functionality in the SQLDatabase.py file so the database connection can be closed. So that I'll not be facing any issues in the future.
Thanks in advance. | No close() functionality in langchain.sql_database import SQLDatabase package | https://api.github.com/repos/langchain-ai/langchain/issues/15687/comments | 1 | 2024-01-08T07:38:59Z | 2024-04-15T16:15:25Z | https://github.com/langchain-ai/langchain/issues/15687 | 2,069,891,752 | 15,687 |
[
"hwchase17",
"langchain"
] | Hi,
I have built a RAG app with RetrievalQA and now wanted to try out a new approach. I am using an English LLM but the responses should be in German. E.g. if the user asks something in German "Hallo, wer bist du?", the user query should be translated to "Hello, who are you?" before feeding it into the RAG pipeline.
After the model made its response in English "I am an helpful assistant" the output should be translated back to German "Ich bin ein hilfreicher Assistent".
As translator I am using `googletrans==3.1.0a0`
Here is my RetrievalQA Chain:
```
from langchain.chains import RetrievalQA
from langchain.memory import ConversationBufferWindowMemory
import box
import yaml
from src.utils import set_prompt, setup_dbqa, build_retrieval_qa
from src.llm import build_llm
from src.prompts import mistral_prompt
from langchain.vectorstores import FAISS
with open('config/config.yml', 'r', encoding='utf8') as ymlfile:
cfg = box.Box(yaml.safe_load(ymlfile))
def build_retrieval_qaa(llm, prompt, vectordb):
chain_type_kwargs={ "prompt": prompt,
"memory": ConversationBufferWindowMemory(
memory_key="chat_history",
input_key="question",
#output_key="answer",
k=8,
return_messages=True),
"verbose": False
}
dbqa = RetrievalQA.from_chain_type(llm=llm,
chain_type='stuff',
retriever=vectordb,
return_source_documents=cfg.RETURN_SOURCE_DOCUMENTS,
chain_type_kwargs=chain_type_kwargs,
verbose=False
)
return dbqa
llm = build_llm(ANY LLM)
qa_prompt = set_prompt(mistral_prompt)
vectordb = FAISS.load_local(cfg.DB_FAISS_PATH, bge_embeddings)
vectordb = vectordb.as_retriever(search_kwargs={'k': cfg.VECTOR_COUNT, 'score_treshold': cfg.SCORE_TRESHOLD}, search_type="similarity")
dbqa = build_retrieval_qaa(llm, qa_prompt, vectordb)
dbqa("Was bedeutet IPv6 für die Software-Entwicklung?") # Gives me a response
```
The prompt looks like this:
```
mistral_prompt = """
<s> [INST] Du bist RagBot, ein hilfsbereiter Assistent. Antworte nur auf Deutsch. Verwende die folgenden Kontextinformationen, um die Frage am Ende zu beantworten. Wenn du die Antwort nicht kennst, sag einfach, dass du es nicht weisst. Versuche nicht eine Antwort zu erfinden.
###Chat History###: {chat_history}
###Kontext###: {context}
###Frage###: {question}
Antwort: [/INST]
"""
```
So what do I have to change here, to first translate the user query and the prompt from DE to EN, and afterwards the Model response from EN to DE? Specifically I have problems to translate the provided context, chat history and question. | Translate User Query and Model Response in RetrievalQA Chain | https://api.github.com/repos/langchain-ai/langchain/issues/15686/comments | 1 | 2024-01-08T07:34:14Z | 2024-04-15T16:37:21Z | https://github.com/langchain-ai/langchain/issues/15686 | 2,069,885,942 | 15,686 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```python
import os
from urllib.parse import quote_plus

from langchain.vectorstores.pgvector import PGVector
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores.pgvector import DistanceStrategy

os.environ['OPENAI_API_KEY'] = "mykey"

embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")

CONNECTION_STRING = PGVector.connection_string_from_db_params(
    driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
    host="192.168.xxx.xx",
    port=5432,
    database="xxxxx",
    user="xxxxxxxxx",
    password=quote_plus("xxxxxr"),
)

vectordb = PGVector(
    embedding_function=embeddings,
    collection_name="tmp06",
    connection_string=CONNECTION_STRING,
    distance_strategy=DistanceStrategy.COSINE,
)

llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
memory_token_limit = 100
retriever = vectordb.as_retriever()

memory = ConversationTokenBufferMemory(
    llm=llm,
    max_token_limit=int(memory_token_limit),
    memory_key="chat_history",
    return_messages=True,
)

qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    memory=memory,
    verbose=True,
)

chat_history = []
while True:
    memory.load_memory_variables({})
    question = input('ask:')
    result = qa.run({'question': question, 'chat_history': chat_history})
    print(result)
    chat_history.append([f'User: {question}', f'Ai: {result}'])
    print(chat_history)
    embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
    st_history = ' '.join(map(str, chat_history))
    res = embeddings.embed_query(st_history)  ########################
    print(f'ok: {res[:4]}...')
    if question.lower() == 'bye':
        break
```
How can I store `res` in a vector database, and run a vector retrieval query for the best match on every input, so that the OpenAI responses have long-term memory? Please also help me check whether there are any errors in the code above.
### Suggestion:
How can I store 'res' in a vector database, and have a vector retrieval query for the best solution every time there's an input, to achieve long-term memory for OpenAI responses? Please help me modify this string translates to "Please help me see if there are any errors" in English | Issue: <How can I store 'res' in a vector database, and have a vector retrieval query for the best solution every time there's an input, to achieve long-term memory for OpenAI responses? Please help me modify this string: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/15685/comments | 2 | 2024-01-08T07:33:59Z | 2024-04-15T16:20:22Z | https://github.com/langchain-ai/langchain/issues/15685 | 2,069,885,649 | 15,685 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
User: Help me reset my password
Agent: Please provide your account number
User: My account is Axxx
Agent: SMS verification code has been sent, please provide SMS verification code
User: 091839
Agent: Account password has been reset to 123456
The Agent is responsible for resetting the password for the user. In this example, the Agent needs to communicate back and forth with the user, such as providing an account to send a verification code, and providing a verification code to reset the password.
I added 4 tools, but when I asked any questions, all the tools were used once, which was not what I expected. I wanted to use the corresponding tool according to the specific situation.
```
> Entering new AgentExecutor chain...
{
"action": "ResetPasswordAskTool",
"action_input": "Axxx"
}
Observation: 请提供下您的账号:
Thought:/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
{
"action": "ReceiveUserAccountTool",
"action_input": "Axxx"
}
Observation: 已经接收到您的账号,您提供的账号为:Axxx
Thought:/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
{
"action": "SendSmsTool",
"action_input": "Axxx"
}
Observation: 短信验证码已发出,请查看手机收到的重置密码的短信验证码,并提供给我。
Thought:/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
{
"action": "ResetPasswordTool",
"action_input": "123456"
}
Observation: 密码已经重置为:123321
Thought:/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
{
"action": "Final Answer",
"action_input": "密码已经重置为:123321"
}
> Finished chain.
intermediate_steps= [(AgentAction(tool='ResetPasswordAskTool', tool_input='Axxx', log='{\n "action": "ResetPasswordAskTool",\n "action_input": "Axxx"\n}'), '请提供下您的账号:'), (AgentAction(tool='ReceiveUserAccountTool', tool_input='Axxx', log='{\n "action": "ReceiveUserAccountTool",\n "action_input": "Axxx"\n}'), '已经接收到您的账号,您提供的账号为:Axxx'), (AgentAction(tool='SendSmsTool', tool_input='Axxx', log='{\n "action": "SendSmsTool",\n "action_input": "Axxx"\n}'), '短信验证码已发出,请查看手机收到的重置密码的短信验证码,并提供给我。'), (AgentAction(tool='ResetPasswordTool', tool_input='123456', log='{\n "action": "ResetPasswordTool",\n "action_input": "123456"\n}'), '密码已经重置为:123321')]
response output= 密码已经重置为:123321
```
### Suggestion:
_No response_ | How to use tools for tasks that are dependent on each other | https://api.github.com/repos/langchain-ai/langchain/issues/15684/comments | 1 | 2024-01-08T07:14:13Z | 2024-04-15T16:15:21Z | https://github.com/langchain-ai/langchain/issues/15684 | 2,069,861,793 | 15,684 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain 0.1.0
Python 3.10.12
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.agents.agent import AgentExecutor
from langchain.agents.agent_toolkits import O365Toolkit
from langchain_experimental.openai_assistant import OpenAIAssistantRunnable

tools = []  # NOTE: added for completeness; the original snippet did not show how `tools` was initialised

otoolkit = O365Toolkit()
o365_tools = otoolkit.get_tools()
tools.append(o365_tools)

agent = OpenAIAssistantRunnable.create_assistant(
    name="My assistant",
    instructions="""You are a admin agent, tasked with the following jobs:
1. Read and post messages on Microsoft 365 Outlook""",
    tools=tools,
    model="gpt-4-1106-preview",
    as_agent=True,
)

agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools)
```
-------------------------------------------------------------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[13], line 3
1 from langchain.agents.agent import AgentExecutor
----> 3 agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools)
File ~/smith/lib/python3.10/site-packages/langchain/agents/agent.py:891, in AgentExecutor.from_agent_and_tools(cls, agent, tools, callbacks, **kwargs)
882 @classmethod
883 def from_agent_and_tools(
884 cls,
(...)
888 **kwargs: Any,
889 ) -> AgentExecutor:
890 """Create from agent and tools."""
--> 891 return cls(
892 agent=agent,
893 tools=tools,
894 callbacks=callbacks,
895 **kwargs,
896 )
File ~/smith/lib/python3.10/site-packages/langchain_core/load/serializable.py:107, in Serializable.__init__(self, **kwargs)
106 def __init__(self, **kwargs: Any) -> None:
--> 107 super().__init__(**kwargs)
108 self._lc_kwargs = kwargs
File ~/smith/lib/python3.10/site-packages/pydantic/main.py:339, in pydantic.main.BaseModel.__init__()
File ~/smith/lib/python3.10/site-packages/pydantic/main.py:1102, in pydantic.main.validate_model()
File ~/smith/lib/python3.10/site-packages/langchain/agents/agent.py:916, in AgentExecutor.validate_return_direct_tool(cls, values)
914 """Validate that tools are compatible with agent."""
915 agent = values["agent"]
--> 916 tools = values["tools"]
917 if isinstance(agent, BaseMultiActionAgent):
918 for tool in tools:
KeyError: 'tools'
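One thing worth double-checking before digging into the executor itself (an observation about the snippet above, not a confirmed root cause): `get_tools()` returns a list, so `tools.append(o365_tools)` nests a whole list inside `tools`, and pydantic validation of the `tools` field can fail on that. Flattening it would look like:
```python
otoolkit = O365Toolkit()
o365_tools = otoolkit.get_tools()

tools.extend(o365_tools)  # add the toolkit's tools one by one
# or, equivalently: tools = tools + o365_tools
```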
### Expected behavior
agent_executor should get created properly, this was working a week ago. | AgentExecutor.from_agent_and_tools(agent=agent, tools=tools) -> throws KeyError. | https://api.github.com/repos/langchain-ai/langchain/issues/15679/comments | 4 | 2024-01-08T05:19:09Z | 2024-01-08T05:43:30Z | https://github.com/langchain-ai/langchain/issues/15679 | 2,069,692,507 | 15,679 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I created an app using AzureOpenAI, and initially, the import statement worked fine:
```
from langchain.chat_models import AzureChatOpenAI
```
My original version details were:
```
langchain==0.0.352
langchain-community==0.0.6
langchain-core==0.1.3
openai==1.6.1
```
Later, I upgraded to:
```
langchain==0.0.354
langchain-community==0.0.9
langchain-core==0.1.7
langchain-experimental==0.0.47
langchain-openai==0.0.2
openai==1.6.1
```
The upgrade led to a deprecation warning for `AzureChatOpenAI`. The suggestion was to use `langchain_openai.AzureChatOpenAI`, but trying to import it gave a `ModuleNotFoundError`. After some trial and error, I found that installing `langchain_openai` separately fixed the issue. Now, I can import `AzureOpenAI`, `AzureOpenAIEmbeddings`, and `AzureChatOpenAI`.
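For anyone hitting the same warning, this is the combination that works for me (note that the PyPI package name uses a dash while the import path uses an underscore):
```python
# pip install -U langchain-openai
from langchain_openai import AzureChatOpenAI, AzureOpenAI, AzureOpenAIEmbeddings

# Endpoint, API key, API version and deployment are configured the same way as before
# (constructor arguments or the AZURE_OPENAI_* environment variables).
llm = AzureChatOpenAI(azure_deployment="my-deployment-name")  # deployment name is a placeholder
```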
### Idea or request for content:
Despite my research, I couldn't find documentation mentioning the need to install `langchain_openai` separately, which wasted a lot of time and created unnecessary confusion. Sharing this issue here, hope it helps others facing a similar problem. Please add this to the documentation | class `AzureChatOpenAI` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use langchain_openai.AzureChatOpenAI instead. | https://api.github.com/repos/langchain-ai/langchain/issues/15674/comments | 2 | 2024-01-08T03:59:37Z | 2024-04-16T16:14:59Z | https://github.com/langchain-ai/langchain/issues/15674 | 2,069,592,782 | 15,674 |
[
"hwchase17",
"langchain"
] | ### Feature request
Hi, I am trying to use ConversationalRetrievalChain with Azure Cognitive Search as the retriever, with streaming enabled. The code does not produce its output in a streaming manner. I would like to know whether LangChain supports this when combining Azure Cognitive Search with an LLM.
The code snippet I used is below.
# Code Snippet
```python
def search_docs_chain_with_memory_streaming(
    search_index_name=os.getenv("AZURE_COGNITIVE_SEARCH_INDEX_NAME"),
    question_list=[],
    answer_list=[],
):
    code = detect(question)
    language_name = map_language_code_to_name(code)
    embeddings = OpenAIEmbeddings(
        deployment=oaienvs.OPENAI_EMBEDDING_DEPLOYMENT_NAME,
        model=oaienvs.OPENAI_EMBEDDING_MODEL_NAME,
        openai_api_base=os.environ["OPENAI_API_BASE"],
        openai_api_type=os.environ["OPENAI_API_TYPE"],
    )
    memory = ConversationBufferMemory(memory_key="chat_history", output_key="answer")
    acs = AzureSearch(
        azure_search_endpoint=os.getenv("AZURE_SEARCH_SERVICE_ENDPOINT"),
        azure_search_key=os.getenv("AZURE_COGNITIVE_SEARCH_API_KEY"),
        index_name=search_index_name,
        search_type="similarity",
        semantic_configuration_name="default",
        embedding_function=embeddings.embed_query,
    )
    retriever = acs.as_retriever()
    retriever.search_kwargs = {"score_threshold": 0.8}  # {'k':1}
    print("language_name-----", language_name)
    hcp_conv_template = (
        get_prompt(workflows, "retrievalchain_hcp_conv_template1", "system_prompt", "v0")
        + language_name +
        get_prompt(workflows, "retrievalchain_hcp_conv_template2", "system_prompt", "v0")
    )
    CONDENSE_QUESTION_PROMPT = get_prompt(workflows, "retrievalchain_condense_question_prompt", "system_prompt", "v0")
    prompt = PromptTemplate(
        input_variables=["question"], template=CONDENSE_QUESTION_PROMPT
    )
    SYSTEM_MSG2 = get_prompt(workflows, "retrievalchain_system_msg_template", "system_prompt", "v0")
    messages = [
        SystemMessagePromptTemplate.from_template(SYSTEM_MSG2),
        HumanMessagePromptTemplate.from_template(hcp_conv_template),
    ]
    qa_prompt = ChatPromptTemplate.from_messages(messages)
    llm = AzureChatOpenAI(
        deployment_name=oaienvs.OPENAI_CHAT_MODEL_DEPLOYMENT_NAME, temperature=0.7, max_retries=4,
        #callbacks=[streaming_cb],
        streaming=True
        #callback_manager=CallbackManager([MyCustomHandler()])
    )
    qa_chain = ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=retriever,
        get_chat_history=lambda o: o,
        memory=memory,
        condense_question_prompt=prompt,
        return_source_documents=True,
        verbose=True,
        #callback_manager=convo_cb_manager,
        #condense_question_llm = llm_condense_ques,
        combine_docs_chain_kwargs={"prompt": qa_prompt},
    )
    if len(question_list) == 0:
        question = question + ". Give the answer only in " + language_name + "."
    for i in range(len(question_list)):
        qa_chain.memory.save_context(
            inputs={"question": question_list[i]}, outputs={"answer": answer_list[i]}
        )
    #return qa_chain.stream({"question": question, "chat_history": []})
    return qa_chain
```
I have also tried different callback handlers and invoke methods, as described in https://gist.github.com/jvelezmagic/03ddf4c452d011aae36b2a0f73d72f68.
Kindly suggest whether there is any workaround for this.
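For context, this is the kind of pattern I have been attempting, adapted from the gist above (a sketch, assuming an async entry point; `AsyncIteratorCallbackHandler` and `acall` are standard LangChain APIs, but I have not managed to get tokens streamed with this chain yet):
```python
import asyncio

from langchain.callbacks import AsyncIteratorCallbackHandler


async def stream_answer(qa_chain, llm, question: str):
    handler = AsyncIteratorCallbackHandler()
    llm.callbacks = [handler]  # the streaming LLM pushes each token into the handler

    task = asyncio.create_task(
        qa_chain.acall({"question": question, "chat_history": []})
    )
    async for token in handler.aiter():
        yield token  # forward each token to the caller as it arrives
    await task
```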
### Motivation
The motivation is to stream the LLM response using LangChain and Azure Cognitive Search for a RAG use case.
### Your contribution
I have attached the code and the support links in the description. | Support for ConversationalRetrievalChain with Azure Cognitive Search as retriever and Azure Open AI as LLM for Streaming Output | https://api.github.com/repos/langchain-ai/langchain/issues/15673/comments | 2 | 2024-01-08T03:42:19Z | 2024-04-15T16:44:18Z | https://github.com/langchain-ai/langchain/issues/15673 | 2,069,572,435 | 15,673 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.


When I deploy the server, it reports the error shown in the traceback below.
How can I fix it? Thank you!
Here's my code:
```python
PROMPT_TEST = """Answer the question based only on the following context:
Based on the previous {history}
Question: {question}
Following: {affection}
Totally, you could select one of the above strategies or mix them.
{format_instructions}
"""

chain_with_history_stream = RunnableWithMessageHistory(
    {
        "question": itemgetter("question"),
        "affection": RunnablePassthrough()
    }
    | PROMPT_TEST | llm,
    lambda session_id: MyRedisChatMessageHistory(session_id, url=REDIS_URL),
    input_messages_key="question",
    history_messages_key="history",
    verbose=True,
    max_message_history=30,
)
```
## Error feedback:
```
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/root/miniconda3/envs/xdan-chat/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/root/miniconda3/envs/xdan-chat/lib/python3.10/multiprocessing/spawn.py", line 125, in _main
    prepare(preparation_data)
  File "/root/miniconda3/envs/xdan-chat/lib/python3.10/multiprocessing/spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "/root/miniconda3/envs/xdan-chat/lib/python3.10/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "/root/miniconda3/envs/xdan-chat/lib/python3.10/runpy.py", line 289, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/root/miniconda3/envs/xdan-chat/lib/python3.10/runpy.py", line 96, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/root/miniconda3/envs/xdan-chat/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/workspace/xDAN-Dreamy-Chat/app/server.py", line 254, in <module>
    {
TypeError: unsupported operand type(s) for |: 'dict' and 'str'
```
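From the traceback, the `|` is being applied to a plain `dict` and a plain `str`, neither of which is a Runnable. A sketch of what I believe the intended composition looks like (wrapping the raw template string in a `PromptTemplate`; `format_instructions` stands for whatever value is meant to fill that slot, it is not defined in my snippet above, and I have not verified this end to end):
```python
from operator import itemgetter

from langchain_core.prompts import PromptTemplate

# turn the raw string into a Runnable; pre-fill the variable that never changes
prompt = PromptTemplate.from_template(PROMPT_TEST).partial(
    format_instructions=format_instructions
)

chain_with_history_stream = RunnableWithMessageHistory(
    {
        "question": itemgetter("question"),
        "affection": itemgetter("affection"),
        "history": itemgetter("history"),
    }
    | prompt
    | llm,
    lambda session_id: MyRedisChatMessageHistory(session_id, url=REDIS_URL),
    input_messages_key="question",
    history_messages_key="history",
)
```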
### Suggestion:
_No response_ | How to manager the new Variables:TypeError: unsupported operand type(s) for |: 'dict' and 'str' | https://api.github.com/repos/langchain-ai/langchain/issues/15672/comments | 5 | 2024-01-08T01:57:00Z | 2024-04-15T16:25:16Z | https://github.com/langchain-ai/langchain/issues/15672 | 2,069,449,221 | 15,672 |
[
"hwchase17",
"langchain"
] | ### System Info
Hi, I am encountering this error when trying to import anything from `langchain.embeddings` on an Amazon Linux AMI with Python 3.9 and `langchain==0.0.350`.
```python
Traceback (most recent call last):
File "/home/ec2-user/app/search/./app.py", line 9, in <module>
from search import make_chain, postprocess
File "/home/ec2-user/app/search/search.py", line 6, in <module>
from langchain.embeddings import HuggingFaceInstructEmbeddings
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain/embeddings/__init__.py", line 62, in <module>
from langchain.embeddings.openai import OpenAIEmbeddings
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 1, in <module>
from langchain_community.embeddings.openai import (
ImportError: cannot import name '_is_openai_v1' from 'langchain_community.embeddings.openai' (/home/ec2-user/.local/lib/python3.9/site-packages/langchain_community/embeddings/openai.py)
```
The error currently occurs when calling
```python
from langchain.embeddings import HuggingFaceInstructEmbeddings
```
My requirements.txt file looks like this:
```
fastapi==0.105.0
lancedb==0.3.4
langchain==0.0.350
langserve==0.0.36
numpy==1.26.2
pandas==2.1.4
Requests==2.31.0
uvicorn==0.24.0.post1
```
I should note that I've tried reinstalling langchain, openai and transformers. I've also tried Python 3.10 and got the same error.
I should also note that none of my modules are called openai.
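For completeness, this is the check I am running to confirm which package versions the failing interpreter actually sees (my unverified guess is a mismatch between `langchain` and `langchain-community`, which is pulled in implicitly and is not pinned in the requirements above):
```python
from importlib.metadata import version

# confirm what the running interpreter actually has installed
for pkg in ("langchain", "langchain-community", "langchain-core"):
    print(pkg, version(pkg))
```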
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Call `from langchain.embeddings import HuggingFaceInstructEmbeddings` or any of the embeddings modules
### Expected behavior
Should be able to import without errors. | ImportError: cannot import name '_is_openai_v1' | https://api.github.com/repos/langchain-ai/langchain/issues/15671/comments | 3 | 2024-01-08T01:46:22Z | 2024-01-08T15:49:42Z | https://github.com/langchain-ai/langchain/issues/15671 | 2,069,437,208 | 15,671 |
[
"hwchase17",
"langchain"
] | ### System Info
google-cloud-aiplatform==1.35.0,
langchain-0.0.354
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.output_parsers import ResponseSchema, StructuredOutputParser
from langchain.prompts import PromptTemplate
from langchain.llms import VertexAI
import vertexai
class bcolors:
    SERVER = '\033[92m'
    CLIENT = '\033[93m'
    ENDC = '\033[0m'
source_response_schemas = [
ResponseSchema(name="answer", description="answer to the user's question"),
ResponseSchema(
name="source",
description="source used to answer the user's question, should be a website.",
),
]
found_response_schemas = [
ResponseSchema(name="answer", description="answer to the user's question"),
ResponseSchema(
name="found",
description="whether the model could find the proper answers or not.",
),
]
source_output_parser = StructuredOutputParser.from_response_schemas(source_response_schemas)
found_output_parser = StructuredOutputParser.from_response_schemas(found_response_schemas)
format_instructions = source_output_parser.get_format_instructions()
found_checker = found_output_parser.get_format_instructions()
prompt = PromptTemplate(
template="answer the users question as best as possible.\n{found_cheker}\n{format_instructions}\n{question}",
input_variables=["question"],
partial_variables={"found_checker": found_checker, "format_instructions": format_instructions},
)
vertexai.init(project="my_project_id", location="us-central1")
model = VertexAI(model_name='text-bison@001', max_output_tokens=512, temperature=0.2)
chain = prompt | model | found_output_parser | source_output_parser
while 1:
    message = input(bcolors.CLIENT + "Ask to the Cooking Assistant --->> " + bcolors.ENDC)
    for s in chain.stream({"question": message}):
        print(bcolors.SERVER + "<<<<<<< Cooking Assistant >>>>>>", str(s) + bcolors.ENDC)
```
This code returns the following error:
```
KeyError: "Input to PromptTemplate is missing variable 'found_cheker'. Expected: ['found_cheker', 'question'] Received: ['question']"
```
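My own (unverified) reading is that the error comes from the spelling mismatch between `{found_cheker}` in the template and the `found_checker` key in `partial_variables`, and that piping two `StructuredOutputParser`s back to back would not work anyway, since the second parser would receive a dict instead of text. A sketch of what I am trying to achieve with a single merged parser:
```python
combined_schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(name="found", description="whether the model could find the proper answer or not."),
    ResponseSchema(name="source", description="source used to answer the user's question, should be a website."),
]
combined_parser = StructuredOutputParser.from_response_schemas(combined_schemas)

prompt = PromptTemplate(
    template="answer the users question as best as possible.\n{format_instructions}\n{question}",
    input_variables=["question"],
    partial_variables={"format_instructions": combined_parser.get_format_instructions()},
)
chain = prompt | model | combined_parser
```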
### Expected behavior
I expect the model's response to be something like:
```
{'answer': 'proper answer', 'found': True, 'source': 'the source found.'}
``` | multiple ResponseSchema | https://api.github.com/repos/langchain-ai/langchain/issues/15670/comments | 3 | 2024-01-08T01:02:36Z | 2024-01-16T00:48:55Z | https://github.com/langchain-ai/langchain/issues/15670 | 2,069,393,611 | 15,670 |
[
"hwchase17",
"langchain"
] | ### System Info
google-cloud-aiplatform==1.35.0,
langchain-0.0.354
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
response_schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(
        name="found check",
        description="boolean value (True or False) whether the data found from the reference or not.",
    ),
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
format_instructions = output_parser.get_format_instructions()
qa_prompt = PromptTemplate(
    input_variables=[
        ("system", template),
        MessagesPlaceholder(variable_name="chat_history"),
        ("user", "{question}"),
    ],
    partial_variables={"format_instructions": format_instructions},
)
rag_chain = (
    RunnablePassthrough.assign(
        context=contextualized_question | temp_retriever | format_docs
    )
    | qa_prompt
    | llm
    | remove_prefix
)
```
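For reference, a sketch of what I imagine the combined usage could look like (unverified; `template`, `llm`, the retriever pieces and `output_parser` are the same names as above):
```python
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder

qa_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", template + "\n{format_instructions}"),
        MessagesPlaceholder(variable_name="chat_history"),
        ("user", "{question}"),
    ]
).partial(format_instructions=format_instructions)

rag_chain = (
    RunnablePassthrough.assign(
        context=contextualized_question | temp_retriever | format_docs
    )
    | qa_prompt
    | llm
    | output_parser
)
```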
### Expected behavior
I expect that I could use something like ChatPromptTemplate.from_messages and response_schemas at a same time to return specific value with the conversation history based prompting. | Adding response_schemas to ChatPromptTemplate.from_messages prompt design | https://api.github.com/repos/langchain-ai/langchain/issues/15669/comments | 2 | 2024-01-07T23:58:01Z | 2024-01-08T00:59:54Z | https://github.com/langchain-ai/langchain/issues/15669 | 2,069,357,139 | 15,669 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain v0.0.354, Python v3.11, Chroma v0.4.22, Lark v1.1.8
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
def load_self_query_retriever(persist_dir: str, docs: list, metadata_field_info: list, document_content_description="Information about various documents, the date they are up to date with and where they were sourced from."):
    llm = ChatOpenAI(temperature=0)
    vectorstore = None
    try:
        vectorstore = Chroma(persist_directory=persist_dir, embedding_function=get_embedding_function())
    except:
        vectorstore = Chroma.from_documents(docs, get_embedding_function(), persist_directory=persist_dir)
    return SelfQueryRetriever.from_llm(
        llm,
        vectorstore,
        document_content_description,
        metadata_field_info,
    )
metadata_field_info = [
AttributeInfo(name="source",description="The document this chunk is from.",type="string",),
AttributeInfo(name="origin",description="The origin the document came from. Bancworks is the higher priority.",type="string",),
AttributeInfo(name="date_day",description="The day the document was uploaded.",type="integer",),
AttributeInfo(name="date_uploaded",description="The month year the document is current to.",type="integer",)
]
self_query_retriever = load_self_query_retriever("storage/deploy/chroma-db-self-query", bancworks_docs, metadata_field_info)
```
The following error is thrown:
```python
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[1], line 110
76 return SelfQueryRetriever.from_llm(
77 llm,
78 vectorstore,
79 document_content_description,
80 metadata_field_info,
81 )
83 metadata_field_info = [
84 AttributeInfo(name="source",description="The document this chunk is from.",type="string",),
85 AttributeInfo(name="origin",description="The origin the document came from. Comes from either scraped websites like TheKinection.org, Kinecta.org or database files like Bancworks. Bancworks is the higher priority.",type="string",),
(...)
107 # ),
108 ]
--> 110 self_query_retriever = load_self_query_retriever("storage/deploy/chroma-db-self-query", bancworks_docs, metadata_field_info)
113 # parent_retriever = load_parent_retriever("full_docs", "storage/deploy/chroma-db-parent")
114
115 # current_place = 0
(...)
127 # retriever.add_documents(bancworks_docs)
128 # retriever.add_documents(bancworks_docs)
Cell In[1], line 76, in load_self_query_retriever(persist_dir, docs, metadata_field_info, document_content_description)
73 llm = ChatOpenAI(temperature=0)
74 vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings(), persist_directory=persist_dir)
---> 76 return SelfQueryRetriever.from_llm(
77 llm,
78 vectorstore,
79 document_content_description,
80 metadata_field_info,
81 )
File /etc/system/kernel/.venv/lib64/python3.11/site-packages/langchain/retrievers/self_query/base.py:225, in SelfQueryRetriever.from_llm(cls, llm, vectorstore, document_contents, metadata_field_info, structured_query_translator, chain_kwargs, enable_limit, use_original_query, **kwargs)
218 if (
219 "allowed_operators" not in chain_kwargs
220 and structured_query_translator.allowed_operators is not None
221 ):
222 chain_kwargs[
223 "allowed_operators"
224 ] = structured_query_translator.allowed_operators
--> 225 query_constructor = load_query_constructor_runnable(
226 llm,
227 document_contents,
228 metadata_field_info,
229 enable_limit=enable_limit,
230 **chain_kwargs,
231 )
232 return cls(
233 query_constructor=query_constructor,
234 vectorstore=vectorstore,
(...)
237 **kwargs,
238 )
File /etc/system/kernel/.venv/lib64/python3.11/site-packages/langchain/chains/query_constructor/base.py:357, in load_query_constructor_runnable(llm, document_contents, attribute_info, examples, allowed_comparators, allowed_operators, enable_limit, schema_prompt, fix_invalid, **kwargs)
353 for ainfo in attribute_info:
354 allowed_attributes.append(
355 ainfo.name if isinstance(ainfo, AttributeInfo) else ainfo["name"]
356 )
--> 357 output_parser = StructuredQueryOutputParser.from_components(
358 allowed_comparators=allowed_comparators,
359 allowed_operators=allowed_operators,
360 allowed_attributes=allowed_attributes,
361 fix_invalid=fix_invalid,
362 )
363 return prompt | llm | output_parser
File /etc/system/kernel/.venv/lib64/python3.11/site-packages/langchain/chains/query_constructor/base.py:99, in StructuredQueryOutputParser.from_components(cls, allowed_comparators, allowed_operators, allowed_attributes, fix_invalid)
96 return fixed
98 else:
---> 99 ast_parse = get_parser(
100 allowed_comparators=allowed_comparators,
101 allowed_operators=allowed_operators,
102 allowed_attributes=allowed_attributes,
103 ).parse
104 return cls(ast_parse=ast_parse)
File /etc/system/kernel/.venv/lib64/python3.11/site-packages/langchain/chains/query_constructor/parser.py:174, in get_parser(allowed_comparators, allowed_operators, allowed_attributes)
172 # QueryTransformer is None when Lark cannot be imported.
173 if QueryTransformer is None:
--> 174 raise ImportError(
175 "Cannot import lark, please install it with 'pip install lark'."
176 )
177 transformer = QueryTransformer(
178 allowed_comparators=allowed_comparators,
179 allowed_operators=allowed_operators,
180 allowed_attributes=allowed_attributes,
181 )
182 return Lark(GRAMMAR, parser="lalr", transformer=transformer, start="program")
ImportError: Cannot import lark, please install it with 'pip install lark'.
```
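For debugging, this is the quick check I am running in the same kernel to confirm which interpreter is active and whether `lark` is importable there (my assumption, not confirmed, is that the kernel and the environment where `lark` was installed may differ):
```python
import sys
print(sys.executable)    # which interpreter the kernel is actually using

import lark
print(lark.__version__)  # should print 1.1.8 if the expected environment is active
```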
### Expected behavior
Be able to instantiate SelfQueryRetriever.from_llm successfully | SelfQueryRetriever.from_llm raises following issue: ImportError: Cannot import lark, please install it with 'pip install lark'. | https://api.github.com/repos/langchain-ai/langchain/issues/15668/comments | 8 | 2024-01-07T23:44:54Z | 2024-05-15T04:41:38Z | https://github.com/langchain-ai/langchain/issues/15668 | 2,069,348,971 | 15,668 |
[
"hwchase17",
"langchain"
] | ### Feature request
It would be helpful if I could make a RAG chain output, as a boolean value, whether it was able to find the answer in the reference or not.
### Motivation
From personal ideation.
### Your contribution
N/A | found checker for RAG chain | https://api.github.com/repos/langchain-ai/langchain/issues/15667/comments | 2 | 2024-01-07T23:32:22Z | 2024-07-12T16:03:13Z | https://github.com/langchain-ai/langchain/issues/15667 | 2,069,343,504 | 15,667 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
After upgrading to LangChain 0.1.0, I received deprecation warnings and updated my imports to `langchain_community`, which cleared that error, but then received deprecation warnings about moving from `__call__` to `invoke`:
The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
I switched to the `invoke` method on the call but still get some of the same deprecation warnings. Not sure how to fix this or whether it's a bug. Code:
```python
qachain = RetrievalQA.from_chain_type(ollama, retriever=vectorstore.as_retriever(search_kwargs={"k": args.top_matches}))
res = (qachain.invoke({"query": args.question}))
```
How do I fix this?
### Suggestion:
_No response_ | Issue: __call__ was deprecated use invoke instead warning persists after switching to invoke | https://api.github.com/repos/langchain-ai/langchain/issues/15665/comments | 2 | 2024-01-07T21:49:55Z | 2024-05-31T15:02:56Z | https://github.com/langchain-ai/langchain/issues/15665 | 2,069,304,783 | 15,665 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hello LangChain community,
We're always happy to see more folks getting involved in contributing to the LangChain codebase.
This is a good first issue if you want to learn more about how to set up
for development in the LangChain codebase.
## Goal
Your contribution will make it easier for users to use integrations with the newest LangChain syntax
## Context
As you may have noticed, we’ve recently gone to LangChain 0.1. As part of this, we want to update integration pages to be consistent with new methods. These largely include: (a) new methods for invoking integrations and chains (`invoke`, `stream`), (b) new methods for creating chains (LCEL, `create_xyz_..`).
There are a lot of integrations, so we’d love community help! This is a great way to get started contributing to the library as it will make you familiar with best practices and various integrations.
## Set up for development
There are lots of integration notebooks in https://github.com/langchain-ai/langchain/tree/master/docs/docs/integrations. After making changes there, you should run `make format` from the root LangChain directory to run our formatter.
## Shall you accept
Shall you accept this challenge, please claim one (and only one) of the modules from the list
below as one that you will be working on, and respond to this issue.
Once you've made the required code changes, open a PR and link to this issue.
## Acceptance Criteria
- Uses new methods for calling chains (`invoke`, `stream`, etc)
- Uses LCEL where appropriate
- Follows the format outlined below
## Examples
We've gotten started with some examples to show how we imagine these integration pages should look like. The exact format may look different for each type of integration, so make sure to look at the type you are working on:
- LLMs:
- https://python.langchain.com/docs/integrations/llms/cohere
- Chat Models:
- https://python.langchain.com/docs/integrations/chat/cohere
- Vectorstores:
- https://python.langchain.com/docs/integrations/vectorstores/faiss
- Retrievers:
- https://python.langchain.com/docs/integrations/retrievers/tavily
- https://python.langchain.com/docs/integrations/retrievers/ragatouille
- Tools:
- https://python.langchain.com/docs/integrations/tools/tavily_search
- Toolkits:
- https://python.langchain.com/docs/integrations/toolkits/gmail
- Memory:
- https://python.langchain.com/docs/integrations/memory/sql_chat_message_history
## Your contribution
Please sign up by responding to this issue and including the name of the module.
### Suggestion:
_No response_ | For New Contributors: Update Integration Documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15664/comments | 30 | 2024-01-07T21:22:46Z | 2024-02-12T05:19:32Z | https://github.com/langchain-ai/langchain/issues/15664 | 2,069,295,306 | 15,664 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain Version: 0.0.354 (also tried with 0.1.0)
Python version: 3.9.18
yfinance version: 0.2.35
OS: Windows 10
### Who can help?
@hwchase17 , @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just executing the example at the bottom of this page (the tool by itself): https://python.langchain.com/docs/integrations/tools/yahoo_finance_news
**Running the following returns the error (`KeyError: 'description'`, as in the title):**
```python
from langchain.tools.yahoo_finance_news import YahooFinanceNewsTool

tool = YahooFinanceNewsTool()
res = tool.run("AAPL")
print(res)
```
Updating langchain to the newest version didn't change anything for me.
I'm also using a Poetry-installed setup with a clean, fresh environment; same error.
### Expected behavior
To do exactly whats written in the docs and to not drop an error | using YahooFinanceNewsTool() results to KeyError: 'description' | https://api.github.com/repos/langchain-ai/langchain/issues/15656/comments | 1 | 2024-01-07T13:52:58Z | 2024-04-14T16:16:15Z | https://github.com/langchain-ai/langchain/issues/15656 | 2,069,139,043 | 15,656 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I want to use ChatOpenAI as a tool. I need to pass the agent's chat_history or other context into the tool, but a tool generally only accepts a single string input, so how do I pass in other parameters?
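What I have in mind is something along these lines (all names below are made up for illustration, and I am not sure this is the recommended approach): a tool with a multi-field `args_schema` so the agent can pass the history alongside the question.
```python
from langchain.chat_models import ChatOpenAI
from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import StructuredTool

llm = ChatOpenAI(temperature=0)


class AskLLMInput(BaseModel):
    query: str = Field(description="the question to ask the model")
    chat_history: str = Field(default="", description="prior conversation, serialized as plain text")


def ask_llm(query: str, chat_history: str = "") -> str:
    # forward both fields to the chat model in a single prompt
    return llm.invoke(f"{chat_history}\n\nQuestion: {query}").content


ask_llm_tool = StructuredTool.from_function(
    func=ask_llm,
    name="ask_llm",
    description="Ask ChatOpenAI a question, optionally with conversation context",
    args_schema=AskLLMInput,
)
```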
### Suggestion:
none | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/15654/comments | 1 | 2024-01-07T10:35:43Z | 2024-01-07T10:55:01Z | https://github.com/langchain-ai/langchain/issues/15654 | 2,069,076,913 | 15,654 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
When clicking on Redis on the [integrations page](https://integrations.langchain.com/memory) and following the redirect to GitHub to see the implementation, the page is not found.
*(screenshot of the Redis entry on the integrations page omitted)*
Error:
*(screenshot of the GitHub "page not found" error omitted)*
### Idea or request for content:
I would like to know where this has been implemented so that I can fix the issue and raise a PR.
Happy to help the community | DOC: Integration re-direct to github page not found | https://api.github.com/repos/langchain-ai/langchain/issues/15651/comments | 4 | 2024-01-07T07:32:23Z | 2024-04-14T16:16:47Z | https://github.com/langchain-ai/langchain/issues/15651 | 2,069,024,318 | 15,651 |
[
"hwchase17",
"langchain"
] | I'm trying to create a simple test that can:
- use Ollama as the model
- use the agent with my custom tools to enrich the output
- use history to store the conversation
Based on examples, the code should look like this:
```
const llm = new ChatOllama(...);
const tools = [...];
const executor = await initializeAgentExecutorWithOptions(tools, llm, ...);
```
the compiler does not like `llm` parameter because
```
Argument of type 'ChatOllama' is not assignable to parameter of type 'BaseLanguageModelInterface<any, BaseLanguageModelCallOptions>'
```
and this is the same for OpenAI llm as well.
I don't see this `BaseLanguageModelCallOptions` interface being used anywhere in the code. Is this the right way to use it? | Creating a conversation agent with tools and history for Ollama | https://api.github.com/repos/langchain-ai/langchain/issues/15650/comments | 2 | 2024-01-07T06:16:24Z | 2024-01-07T15:12:27Z | https://github.com/langchain-ai/langchain/issues/15650 | 2,069,007,389 | 15,650 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
When I run my LangChain code I get the following warning. How can I resolve it?
`D:\anaconda3\envs\py311\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing verbose from langchain root module is no longer supported. Please use langchain.globals.set_verbose() / langchain.globals.get_verbose() instead. warnings.warn(`
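For reference, my understanding of the replacement the warning points to (the `langchain.globals` helpers it names; sketch only, untested beyond that):
```python
from langchain.globals import set_verbose, get_verbose

set_verbose(True)     # instead of setting `langchain.verbose = True`
print(get_verbose())  # instead of reading `langchain.verbose`
```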
### Suggestion:
_No response_ | Issue: how to resolve this warning "My code has a warning "D:\anaconda3\envs\py311\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing verbose from langchain root module is no longer supported. Please use langchain.globals.set_verbose() / langchain.globals.get_verbose() instead. warnings.warn("" | https://api.github.com/repos/langchain-ai/langchain/issues/15647/comments | 3 | 2024-01-07T00:49:44Z | 2024-06-17T11:24:20Z | https://github.com/langchain-ai/langchain/issues/15647 | 2,068,909,302 | 15,647 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I had no issues running the langchain code before, but when I moved the callback_handler position, this warning appeared: "D:\anaconda3\envs\py311\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing verbose from langchain root module is no longer supported. Please use langchain.globals.set_verbose() / langchain.globals.get_verbose() instead. warnings.warn("
### Suggestion:
_No response_ | Issue: My code has a warning "D:\anaconda3\envs\py311\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing verbose from langchain root module is no longer supported. Please use langchain.globals.set_verbose() / langchain.globals.get_verbose() instead. warnings.warn(" | https://api.github.com/repos/langchain-ai/langchain/issues/15646/comments | 1 | 2024-01-07T00:42:11Z | 2024-01-07T00:48:07Z | https://github.com/langchain-ai/langchain/issues/15646 | 2,068,907,422 | 15,646 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain 0.0.354, Python 3.11
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Embedded 1,000 or so documents and ran a vector similarity search, which came back with a lot of good results. Then ran the `get_relevant_documents` call, but it returned nothing, so the LLM had nothing to work with either.
My retriever is:
- ParentDocumentRetriever with a parent_splitter and child_splitter
- Parent splits at 2000 tokens. Child splits at 400.
```python
def load_chroma_db(collection_name: str):
    parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
    child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
    vectorstore = None
    try:
        vectorstore = Chroma(persist_directory="storage/deploy/chroma-db", embedding_function=get_embedding_function())
        print("Loaded existing vector store")
    except:
        print("Creating new vector store")
        vectorstore = Chroma(
            collection_name=collection_name,
            embedding_function=get_embedding_function(),
            persist_directory="storage/deploy/chroma-db"
        )
    store = InMemoryStore()
    retriever = ParentDocumentRetriever(
        vectorstore=vectorstore,
        docstore=store,
        child_splitter=child_splitter,
        parent_splitter=parent_splitter
    )
    return retriever
retriever = load_chroma_db("full_docs")
retriever.vectorstore.similarity_search_with_score("How do I make a zelle transaction?")
```
This returns
```python
[(Document(page_content='Basic Introduction to Zelle® What is Zelle®? Zelle® is a fast, safe, and easy way for members to send money directly between most bank or credit union accounts in the U.S. These Person- to- Person transactions typically occur within minutes. With just an email address or U.S. mobile phone number, members can send money to friends, family, and people they know and trust, regardless of where they', metadata={'date_day': 27, 'date_month': 4, 'date_year': 2023, 'doc_id': '22bfa535-8bc7-4a97-8270-d84b77ba81b0', 'source': 'storage/Bancworks/Zelle & CST FAQs - Internal Use.pdf.txt'}),
0.24197007715702057),
(Document(page_content='Basic Introduction to Zelle® ......................................... 2 Zelle® Transaction Limits / Tiers ............................. 2 Enrollment / Eligible Accounts ................................ 3 Sending / Receiving Transactions ........................... 4 Disputes / Fraud / Scams ......................................... 5 Customer Service Tool (CST)', metadata={'date_day': 27, 'date_month': 4, 'date_year': 2023, 'doc_id': 'eb27b502-c37c-4462-9d8b-488b53c3aa11', 'source': 'storage/Bancworks/Zelle & CST FAQs - Internal Use.pdf.txt'}),
0.24453413486480713),
(Document(page_content='Step 1: Find Zelle in the main menu of the Kinecta mobile banking app. Step 2: Enroll with a U.S. mobile number or email address and select a checking account. Step 3: Start using Zelle. Talking Points: • Zelle is a fast, safe and easy way to send money directly between almost any checking or savings accounts in the U.S., typically within minutes. • With just an email address or U.S. mobile phone', metadata={'date_day': 28, 'date_month': 4, 'date_year': 2023, 'doc_id': '161170fc-8871-412d-a0e7-47e4b1b3d889', 'source': 'storage/Bancworks/Zelle - MarketGram - 20230502.pdf.txt'}),
0.2447606921195984),
(Document(page_content='• Zelle is a fast, safe and easy way to send money directly between almost any checking or savings accounts in the U.S., typically within minutes. • With just an email address or U.S. mobile phone number, send money to people you trust, regardless of where they bank. • Transactions between enrolled consumers typically occur in minutes and generally do not incur transaction fees. • Send, split or', metadata={'date_day': 28, 'date_month': 4, 'date_year': 2023, 'doc_id': '161170fc-8871-412d-a0e7-47e4b1b3d889', 'source': 'storage/Bancworks/Zelle - MarketGram - 20230502.pdf.txt'}),
0.2502959370613098)]
```
If I do the following call:
```python
retriever.get_relevant_documents("How do I make a zelle transaction?", k=4)
```
I get nothing returned.
```python
[]
```
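One thing I still need to rule out (this is my own assumption about the cause, not confirmed): `InMemoryStore` is not persisted alongside the Chroma directory, so after reloading, the parent docstore may be empty, in which case the retriever maps the matched child chunks to nothing. A quick check:
```python
# if this prints 0, the vectorstore has child embeddings but the docstore holds no parent
# documents, so get_relevant_documents() will return an empty list
print(len(list(retriever.docstore.yield_keys())))
```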
### Expected behavior
Parent documents should be returned based on the child embeddings found. | ChromaDB ParentDocumentRetriever.get_relevant_documents not returning docs despite similarity_search returning matching docs | https://api.github.com/repos/langchain-ai/langchain/issues/15644/comments | 4 | 2024-01-06T22:51:01Z | 2024-01-07T00:56:13Z | https://github.com/langchain-ai/langchain/issues/15644 | 2,068,873,967 | 15,644 |
[
"hwchase17",
"langchain"
] | ### System Info
Using...
langchain==0.0.353
langchain-core==0.1.4
Seems to have broken from yesterday's merges?
```
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
--
2319 | File "/root/.local/lib/python3.9/site-packages/langchain/chains/__init__.py", line 56, in <module>
2320 | from langchain.chains.openai_functions import (
2321 | File "/root/.local/lib/python3.9/site-packages/langchain/chains/openai_functions/__init__.py", line 1, in <module>
2322 | from langchain.chains.openai_functions.base import (
2323 | File "/root/.local/lib/python3.9/site-packages/langchain/chains/openai_functions/base.py", line 32, in <module>
2324 | from langchain.utils.openai_functions import convert_pydantic_to_openai_function
2325 | File "/root/.local/lib/python3.9/site-packages/langchain/utils/openai_functions.py", line 1, in <module>
2326 | from langchain_community.utils.openai_functions import (
2327 | File "/root/.local/lib/python3.9/site-packages/langchain_community/utils/openai_functions.py", line 3, in <module>
2328 | from langchain_core.utils.function_calling import (
2329 | ModuleNotFoundError: No module named 'langchain_core.utils.function_calling'
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
It's triggered, in our case, when we import `StuffDocumentsChain`
`from langchain.chains.combine_documents.stuff import StuffDocumentsChain`
### Expected behavior
No error! | Broken imports | https://api.github.com/repos/langchain-ai/langchain/issues/15643/comments | 2 | 2024-01-06T21:23:27Z | 2024-01-06T21:45:16Z | https://github.com/langchain-ai/langchain/issues/15643 | 2,068,840,009 | 15,643 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain
### Who can help?
LangChain with Gemini Pro
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
stuff_chain = load_qa_chain(model, chain_type="stuff", prompt=prompt)
question = "content pls?"
stuff_answer = stuff_chain(
    {"input_documents": pages[1:], "question": question}, return_only_outputs=True
)
```
### Expected behavior
ReadTimeout: HTTPConnectionPool(host='localhost', port=36027): Read timed out. (read timeout=60.0) | ReadTimeout with Arabic pdf files | https://api.github.com/repos/langchain-ai/langchain/issues/15639/comments | 3 | 2024-01-06T19:35:55Z | 2024-04-13T16:12:05Z | https://github.com/langchain-ai/langchain/issues/15639 | 2,068,795,849 | 15,639 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I have the following ChromaDB setup:
```python
# excerpt from the function that builds the retriever; `collection_name` is its argument
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
vectorstore = None
try:
    vectorstore = Chroma(persist_directory="storage/deploy/chroma-db", embedding_function=get_embedding_function())
    print("Loaded existing vector store")
except:
    print("Creating new vector store")
    vectorstore = Chroma(
        collection_name=collection_name,
        embedding_function=get_embedding_function(),
        persist_directory="storage/deploy/chroma-db"
    )
store = InMemoryStore()
retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter
)
return retriever
```
The issue is that if I add a bunch of documents to the retriever, memory can eventually run out and crash the system. Is there a way to keep the docstore out of RAM instead? Or am I misunderstanding how this is meant to be used? A sketch of what I am looking for is below.
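To clarify, something along these lines (a sketch based on my reading of `langchain.storage`; I have not verified that `create_kv_docstore` is the intended way to back `ParentDocumentRetriever` with a file store):
```python
from langchain.storage import LocalFileStore, create_kv_docstore

fs = LocalFileStore("storage/deploy/parent-docstore")  # bytes on disk instead of RAM
store = create_kv_docstore(fs)                          # wrap it so it can hold Document objects

retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
)
```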
### Suggestion:
Is there a non-inmemory docstore that can be used in the ParentDocumentRetriever or does it not make sense in the use case. | Issue: What docstore to use in ChromaDB that isn't in memory? | https://api.github.com/repos/langchain-ai/langchain/issues/15633/comments | 5 | 2024-01-06T10:53:10Z | 2024-03-07T10:29:16Z | https://github.com/langchain-ai/langchain/issues/15633 | 2,068,532,355 | 15,633 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/integrations/chat/fireworks
Hi, I'm new to LangChain with Fireworks.
I ran this code from the 'ChatFireworks' documentation page and hit an issue.
Environment: Python 3.11, Windows 10
```python
# Create a simple chain with memory
chain = (
    RunnablePassthrough.assign(
        history=memory.load_memory_variables | (lambda x: x["history"])
    )
    | prompt
    | llm.bind(stop=["\n\n"])
)
```
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[17], line 3
      1 chain = (
      2     RunnablePassthrough.assign(
----> 3         history=memory.load_memory_variables | (lambda x: x["history"])
      4     )
      5     | prompt
      6     | llm.bind(stop=["\n\n"])
      7 )

TypeError: unsupported operand type(s) for |: 'method' and 'function'
```
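From the error, `memory.load_memory_variables` is a plain bound method, so `|` cannot compose it directly. Wrapping it in `RunnableLambda` is the fix I am currently assuming (a sketch; I have not confirmed this is what the documentation intends):
```python
from langchain_core.runnables import RunnableLambda, RunnablePassthrough

chain = (
    RunnablePassthrough.assign(
        history=RunnableLambda(memory.load_memory_variables) | (lambda x: x["history"])
    )
    | prompt
    | llm.bind(stop=["\n\n"])
)
```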
### Idea or request for content:
TypeError: unsupported operand type(s) for |: 'method' and 'function | DOC: langchain with Fireworks ai | https://api.github.com/repos/langchain-ai/langchain/issues/15632/comments | 4 | 2024-01-06T10:45:41Z | 2024-04-13T16:16:17Z | https://github.com/langchain-ai/langchain/issues/15632 | 2,068,529,844 | 15,632 |
[
"hwchase17",
"langchain"
] | ### System Info
`langchain==0.1.0`
`langchain-community==0.0.9`
`langchain-core==0.1.7`
`linux 20.04`
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Following the documentation https://python.langchain.com/docs/modules/agents/how_to/custom_agent
### Expected behavior
Should output something similar to this
```
{'input': 'How many letters in the word educa',
'output': 'There are 5 letters in the word "educa".'}
```
Instead, got an error when running `agent_executor.invoke({"input": "How many letters in the word educa"})`:
```
NotFoundError: Error code: 404 - {'error': {'message': 'Unrecognized request argument supplied: functions', 'type': 'invalid_request_error', 'param': None, 'code': None}}
``` | 'Unrecognized request argument supplied: functions' error when executing agent | following documentation | https://api.github.com/repos/langchain-ai/langchain/issues/15628/comments | 2 | 2024-01-06T06:53:43Z | 2024-01-06T07:02:47Z | https://github.com/langchain-ai/langchain/issues/15628 | 2,068,438,255 | 15,628 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.11, Langchain 0.0.354, ChromaDB v0.4.22
### Who can help?
@agola11
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.text_splitter import MarkdownTextSplitter, RecursiveCharacterTextSplitter
from langchain.document_loaders import DirectoryLoader
from langchain.storage import InMemoryStore
from langchain.retrievers import ParentDocumentRetriever
from langchain.vectorstores import Chroma
import chromadb
def load_chroma_db(collection_name: str):
    parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
    child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
    vectorstore = Chroma(
        collection_name=collection_name,
        embedding_function=get_embedding_function(),
        persist_directory="storage/deploy/chroma-db"
    )
    store = InMemoryStore()
    retriever = ParentDocumentRetriever(
        vectorstore=vectorstore,
        docstore=store,
        child_splitter=child_splitter,
        parent_splitter=parent_splitter
    )
    return retriever


retriever = load_chroma_db("full_docs")
retriever.add_documents(bancworks_docs)
```
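A sanity check I plan to run (my assumption, not verified, is that either an old `chromadb` release or a local file/folder shadowing the package is being picked up):
```python
import chromadb

print(chromadb.__file__)     # should point into site-packages, not a local chromadb.py
print(chromadb.__version__)  # expecting 0.4.22 here
```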
### Expected behavior
Should be able to load ChromaDB and persist it. | AttributeError: module 'chromadb' has no attribute 'config' | https://api.github.com/repos/langchain-ai/langchain/issues/15616/comments | 9 | 2024-01-06T00:06:53Z | 2024-02-23T13:36:44Z | https://github.com/langchain-ai/langchain/issues/15616 | 2,068,219,804 | 15,616 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain version: 0.0.354
LangChain Community version: 0.0.8
Platform: Apple M3 Pro chip on MacOS Sonoma (MacOS 14.2.1)
Python version: 3.11.7
### Who can help?
@baskaryan has the most recent commits on this section of the code, but it was for moving items to the `langchain_community` package. I'm not sure who is the original author
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a list of file paths
2. Attempt to load using `UnstructuredFileLoader`
```python
from langchain_community.document_loaders import UnstructuredFileLoader
files = [
"file_1.txt",
"file_2.txt"
]
loader = UnstructuredFileLoader(file_path=files)
documents = loader.load() # Error occurs on this line
```
Contents of `file_1.txt`
```txt
some stuff
```
Contents of `file_2.txt`
```txt
some more stuff
```
Stack trace
```
Traceback (most recent call last):
File "/Users/joshl/Library/Application Support/JetBrains/PyCharm2023.3/scratches/scratch.py", line 9, in <module>
documents = loader.load() # Error occurs on this line
^^^^^^^^^^^^^
File "/Users/joshl/miniforge3/envs/MechanisticLLM/lib/python3.11/site-packages/langchain_community/document_loaders/unstructured.py", line 87, in load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "/Users/joshl/miniforge3/envs/MechanisticLLM/lib/python3.11/site-packages/langchain_community/document_loaders/unstructured.py", line 173, in _get_elements
return partition(filename=self.file_path, **self.unstructured_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/joshl/miniforge3/envs/MechanisticLLM/lib/python3.11/site-packages/unstructured/partition/auto.py", line 278, in partition
filetype = detect_filetype(
^^^^^^^^^^^^^^^^
File "/Users/joshl/miniforge3/envs/MechanisticLLM/lib/python3.11/site-packages/unstructured/file_utils/filetype.py", line 248, in detect_filetype
_, extension = os.path.splitext(_filename)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen posixpath>", line 118, in splitext
TypeError: expected str, bytes or os.PathLike object, not list
Process finished with exit code 1
```
### Expected behavior
A list of strings should be able to be passed to the `UnstructuredFileLoader` class based on the `__init__` arguments
```python
class UnstructuredFileLoader(UnstructuredBaseLoader):
    def __init__(
        self,
        file_path: Union[str, List[str]],
        mode: str = "single",
        **unstructured_kwargs: Any,
    ):
``` | `UnstructuredFileLoader` shows `TypeError: expected str, bytes or os.PathLike object, not list` when a list of files is passed in | https://api.github.com/repos/langchain-ai/langchain/issues/15607/comments | 4 | 2024-01-05T22:14:50Z | 2024-01-24T03:37:38Z | https://github.com/langchain-ai/langchain/issues/15607 | 2,068,110,708 | 15,607 |
[
"hwchase17",
"langchain"
] | ### System Info
## System Info
**LangChain Version:** 0.0.354
**Platform:** MacOS Sonoma 14.2.1
**Python Version:** 3.11.6
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code sample that utilizes the Playwright Toolkit:** [https://python.langchain.com/docs/integrations/toolkits/playwright](https://python.langchain.com/docs/integrations/toolkits/playwright)
For instance, when you attempt to run:
```Python
await navigate_tool.arun(
{"url": "https://web.archive.org/web/20230428131116/https://www.cnn.com/world"}
)
```
Nothing happens after 20 minutes.
The only way I have been able to get a response was by waiting a few seconds and then turning off my computer's wifi, which would return the expected:
```Bash
'Navigating to https://web.archive.org/web/20230428131116/https://www.cnn.com/world returned status code 200'
```
I am utilizing the browser in an agent, and here is the code:
```Python
import asyncio
from typing import Type, Optional
from langchain.agents import AgentExecutor
from langchain.schema import SystemMessage, HumanMessage, AIMessage
from langchain.agents.format_scratchpad.openai_functions import format_to_openai_function_messages
from langchain.agents.output_parsers.openai_functions import OpenAIFunctionsAgentOutputParser
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.tools.convert_to_openai import format_tool_to_openai_function
from langchain_community.tools.playwright.utils import create_async_playwright_browser
from langchain.tools.tavily_search import TavilySearchResults
from langchain.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.agent_toolkits.playwright import PlayWrightBrowserToolkit
from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import BaseTool
from auto_sentry.src.chat_models.enums.OpenAI import OpenAI as openai_enums
from auto_sentry.src.chat_models.Memory import Memory
from auto_sentry.src.chat_models.OpenAI import GPT
class AIQuestionInput(BaseModel):
    query: str = Field(description="should contain your question for the user including specific aspects or details from the user's input that require further clarification or elaboration")


class AIQuestionTool(BaseTool):
    name = "ai_question"
    description = "useful for when you need to clarify user requests or when you need to ask them a question"
    args_schema: Type[BaseModel] = AIQuestionInput

    def _run(
        self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool."""
        return query
llm = GPT(model=openai_enums.GPT_4_1106_PREVIEW)
memory = Memory("Test_Conversation")
memory.clear()
search = [TavilySearchResults(api_wrapper=TavilySearchAPIWrapper(), handle_tool_error=True, verbose=True)]
playwright_async_browser = create_async_playwright_browser(headless=True)
toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=playwright_async_browser)
playwright_tools = toolkit.get_tools()
tools:list = search + playwright_tools
MEMORY_KEY = "chat_history"
prompt = ChatPromptTemplate.from_messages(
[
SystemMessage(
content="You are a helpful assistant."
),
MessagesPlaceholder(variable_name=MEMORY_KEY),
HumanMessage(content="{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
memory.add(prompt.messages[0])
functions=[format_tool_to_openai_function(tool) for tool in tools]
#"""
llm_with_tools = llm.llm.bind(
functions=functions
)
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_to_openai_function_messages(
x["intermediate_steps"]
),
"chat_history": lambda x: x["chat_history"],
}
| prompt
| llm_with_tools
| OpenAIFunctionsAgentOutputParser()
)
agent_executer = AgentExecutor(
agent=agent, tools=tools, verbose=True, return_intermediate_steps=True
)
query = HumanMessage(content=input("> "))
memory.add(query)
async def run_agent():
    print(memory.memory())
    response = await agent_executer.ainvoke(
        {"input": query, MEMORY_KEY: memory.memory()}
    )
    return response
response = asyncio.run(run_agent())
agent_messages = format_to_openai_function_messages(response["intermediate_steps"])
staged_messages = []
staged_messages.extend(agent_messages)
staged_messages.append(AIMessage(content=response["output"]))
memory.add(staged_messages)
print(memory.memory())
```
And when it executes a Playwright browser-related command, it just freezes and does nothing.
Here is an example that runs a Playwright browser-related command:
```Bash
> Summarize today's financial news from Google Finance.
[SystemMessage(content='You are a helpful assistant.'), HumanMessage(content="Summarize today's financial news from Google Finance.")]
> Entering new AgentExecutor chain...
Invoking: `navigate_browser` with `{'url': 'https://www.google.com/finance'}`
```
But when it utilizes any other tool, such as Tavily Search, it works successfully:
```Bash
> What is the weather supposed to be like in Miami tomorrow?
[SystemMessage(content='You are a helpful assistant.'), HumanMessage(content='What is the weather supposed to be like in Miami tomorrow?')]
> Entering new AgentExecutor chain...
Invoking: `tavily_search_results_json` with `{'query': 'Miami weather forecast for tomorrow'}`
[{'url': 'https://www.weather.gov/mfl/\xa0', 'content': 'Read More > Privacy Policy Miami - South Florida Weather Forecast Office NWS Forecast Of
fice Miami - South Florida Aviation Weather International Weather RADAR IMAGERY National Miami Radar Key West Radar Across Florida CLIMATE Miami - South Florida11691 SW 17th StreetMiami, FL 33165305-229-4522Comments? Questions? Please Contact Us. Last Map Update: Fri, Jan. 5, 2024 at 3:54:05 pm EST Text Product Selector: CURRENT HAZARDS OutlooksMiami - South Florida. Weather Forecast Office. NWS Forecast Office Miami - South Florida. Weather.gov > Miami ... Fri, Jan. 5, 2024 at 2:20:05 am EST. Watches, Warnings & Advisories. Zoom Out: Gale Warning: Small Craft Advisory: Rip Current Statement: ... National Weather Service Miami - South Florida 11691 SW 17th Street Miami, FL 33165 305 ...'}]The weather forecast for Miami tomorrow can be found on the National Weather Service's website for the Miami - South Florida region. You can visit [this link](https://www.weather.gov/mfl/) for the most up-to-date information on the weather forecast, including any watches, warnings, or advisories that may be in effect.
> Finished chain.
[SystemMessage(content='You are a helpful assistant.'), HumanMessage(content='What is the weather supposed to be like in Miami tomorrow?'), AIMes
sage(content='', additional_kwargs={'function_call': {'arguments': '{"query":"Miami weather forecast for tomorrow"}', 'name': 'tavily_search_results_json'}}), FunctionMessage(content='[{"url": "https://www.weather.gov/mfl/\xa0", "content": "Read More > Privacy Policy Miami - South Florida Weather Forecast Office NWS Forecast Office Miami - South Florida Aviation Weather International Weather RADAR IMAGERY National Miami Radar Key West Radar Across Florida CLIMATE Miami - South Florida11691 SW 17th StreetMiami, FL 33165305-229-4522Comments? Questions? Please Contact Us. Last Map Update: Fri, Jan. 5, 2024 at 3:54:05 pm EST Text Product Selector: CURRENT HAZARDS OutlooksMiami - South Florida. Weather Forecast Office. NWS Forecast Office Miami - South Florida. Weather.gov > Miami ... Fri, Jan. 5, 2024 at 2:20:05 am EST. Watches, Warnings & Advisories. Zoom Out: Gale Warning: Small Craft Advisory: Rip Current Statement: ... National Weather Service Miami - South Florida 11691 SW 17th Street Miami, FL 33165 305 ..."}]', name='tavily_search_results_json'), AIMessage(content="The weather forecast for Miami tomorrow can be found on the National Weather Service's website for the Miami - South Florida region. You can visit [this link](https://www.weather.gov/mfl/) for the most up-to-date information on the weather forecast, including any watches, warnings, or advisories that may be in effect.")]
```
### Expected behavior
The behavior of the Playwright Browser should return results to an input in a relatively prompt manner, but it currently freezes and returns. | Playwright Browser Freezing | https://api.github.com/repos/langchain-ai/langchain/issues/15605/comments | 6 | 2024-01-05T20:59:22Z | 2024-07-06T11:44:10Z | https://github.com/langchain-ai/langchain/issues/15605 | 2,068,033,254 | 15,605 |
[
"hwchase17",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/15598
<div type='discussions-op-text'>
<sup>Originally posted by **MahdiJafari1** January 5, 2024</sup>
OpenAI deprecated its `text-davinci-003` completion model. I've updated the model to `gpt-3.5-turbo-instruct`. I am encountering an issue with the LangChain where it incorrectly classifies the `gpt-3.5-turbo-instruct` model as a chat model. This is causing initialization problems in my code.
**Environment:**
```
python = "^3.10"
langchain = "^0.0.130"
```
OS: Ubuntu
**Expected Behavior:**
The expected behavior is that the gpt-3.5-turbo-instruct model should be recognized as a completion model by LangChain and initialized appropriately without warnings or errors.
**Actual Behavior:**
When attempting to initialize the gpt-3.5-turbo-instruct model, I receive warnings suggesting that this model is being misclassified as a chat model. The specific warnings are:
```shell
/home/mahdi/.cache/pypoetry/virtualenvs/backend-bRqVKcMN-py3.11/lib/python3.11/site-packages/langchain/llms/openai.py:169: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
/home/mahdi/.cache/pypoetry/virtualenvs/backend-bRqVKcMN-py3.11/lib/python3.11/site-packages/langchain/llms/openai.py:608: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
```
**My simplified code:**
```python
from langchain import OpenAI
llm = OpenAI({
"model_name": "gpt-3.5-turbo-instruct",
"temperature": 0.0,
"top_p": 1,
"openai_api_key": "API_KEY",
})
print(llm)
```
Output:
```shell
OpenAIChat[Params: {'model_name': 'gpt-3.5-turbo-instruct', 'temperature': 0.0, 'top_p': 1}
```</div> | Issue with LangChain Misclassifying gpt-3.5-turbo-instruct as Chat Model | https://api.github.com/repos/langchain-ai/langchain/issues/15604/comments | 3 | 2024-01-05T20:55:57Z | 2024-01-06T18:29:34Z | https://github.com/langchain-ai/langchain/issues/15604 | 2,068,029,209 | 15,604 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.354, Windows 10,Python 3.11.5
### Who can help?
@hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
input_data = {
    "chat_history": chat_history,
    "question": question
}

chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(model=current_model),
    retriever=retriever,
    memory=memory,
    return_source_documents=True,
    get_chat_history=lambda h: h
)

result = chain(input_data)
```
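For reference, the calling convention I believe is expected when a memory object is attached: pass only the question and let the memory supply the history, with `output_key="answer"` set on the memory because `return_source_documents=True` makes the chain multi-output. This is my understanding only, not verified:
```python
memory = ConversationBufferMemory(
    memory_key="chat_history", return_messages=True, output_key="answer"
)
chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(model=current_model),
    retriever=retriever,
    memory=memory,
    return_source_documents=True,
)
result = chain({"question": question})
```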
### Expected behavior
I am writing to seek assistance with an issue I've encountered while working with the ConversationalRetrievalChain in LangChain. I have been developing a Discord bot using LangChain and have run into a consistent error that I'm struggling to resolve.
I am trying to use ConversationalRetrievalChain for a chatbot application. However, I keep encountering a ValueError related to input keys. The error message states: "ValueError: Missing some input keys: {'chat_history', 'question'}". This error occurs when I attempt to pass a dictionary containing 'chat_history' and 'question' as separate keys to the ConversationalRetrievalChain.
I have tried various approaches to format the input data correctly, including combining 'chat_history' and 'question' into a single string and passing them as separate keys. Despite these efforts, the error persists. I have also searched for solutions on platforms like Stack Overflow and GitHub but haven't found a resolution that addresses this specific issue.
Could you please provide guidance on how to correctly structure the input data for ConversationalRetrievalChain, or suggest any alternative approaches to resolve this issue? Any insights or recommendations would be greatly appreciated.
Thank you for your time and assistance. I look forward to your response. | Issue with ConversationalRetrievalChain in LangChain - ValueError: Missing Input Keys | https://api.github.com/repos/langchain-ai/langchain/issues/15601/comments | 3 | 2024-01-05T20:09:10Z | 2024-04-15T16:25:10Z | https://github.com/langchain-ai/langchain/issues/15601 | 2,067,974,482 | 15,601 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
There is no way to view the old documentation on [the official site](https://python.langchain.com/). This makes it extremely difficult to develop. It seems as though every week there is another feature that is deleted, thus another page being deleted.
How is this acceptable? It is becoming almost unusable in the Enterprise world due to the constant changes and lack of documentation.
### Idea or request for content:
Implement a selector that lets you choose which langchain version you're on so that you can view the documentation for that specific version. | DOC: Lack of Documentation Versioning on Langchain Website | https://api.github.com/repos/langchain-ai/langchain/issues/15597/comments | 2 | 2024-01-05T19:17:59Z | 2024-04-13T16:11:50Z | https://github.com/langchain-ai/langchain/issues/15597 | 2,067,913,011 | 15,597 |
[
"hwchase17",
"langchain"
] | ### System Info
Hello Langchain team,
I have encountered an error while using `AgentTokenBufferMemory` and `RedisChatMessageHistory`. The problem occurs because the buffer is not removing old messages when new ones are added. This causes an issue with OpenAI as the context window exceeds the token limit. Upon investigation, I found that the issue is in the `save_context()` method of the `AgentTokenBufferMemory` class.
```python
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, Any]) -> None:
"""Save context from this conversation to buffer. Pruned."""
input_str, output_str = self._get_input_output(inputs, outputs)
self.chat_memory.add_user_message(input_str)
steps = format_to_openai_function_messages(outputs[self.intermediate_steps_key])
for msg in steps:
self.chat_memory.add_message(msg)
self.chat_memory.add_ai_message(output_str)
# Prune buffer if it exceeds max token limit
buffer = self.chat_memory.messages
curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
if curr_buffer_length > self.max_token_limit:
while curr_buffer_length > self.max_token_limit:
buffer.pop(0)
curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
```
In the `save_context()` method, the input and output messages are retrieved and added to the chat memory. The intermediate steps are then converted to OpenAI function messages and added to the chat memory as well. If the maximum token limit is exceeded, the buffer is pruned by removing the oldest messages until the current buffer length is below the limit.
However, the problem arises from the line `buffer.pop(0)`, which removes a message from a local variable rather than removing it from the Redis list. Since the list is not pruned, the agent becomes stuck and fails permanently.
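A workaround I am experimenting with (a rough sketch, not validated; the class name and the rewrite-the-whole-list approach are illustrative, and rewriting the history on every turn is obviously not free) is to persist the pruned buffer back to the chat history store:

```python
from typing import Any, Dict

from langchain.agents.openai_functions_agent.agent_token_buffer_memory import (
    AgentTokenBufferMemory,
)


class RedisPrunedAgentTokenBufferMemory(AgentTokenBufferMemory):
    """Illustrative workaround: write the pruned buffer back to the backing store."""

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, Any]) -> None:
        # Let the parent store the new messages (its pruning only touches a local list).
        super().save_context(inputs, outputs)
        buffer = list(self.chat_memory.messages)
        curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
        if curr_buffer_length > self.max_token_limit:
            while curr_buffer_length > self.max_token_limit and buffer:
                buffer.pop(0)
                curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
            # Rewrite the Redis-backed history so old messages are actually dropped.
            self.chat_memory.clear()
            for message in buffer:
                self.chat_memory.add_message(message)
```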
Here is the error I am receiving:
```json
{'message': "This model's maximum context length is 16385 tokens. However, your messages resulted in 16602 tokens (16529 in the messages, 73 in the functions). Please reduce the length of the messages or functions.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}
```
Here is the information from LangSmith -> Metadata -> Runtime:
<img width="422" alt="image" src="https://github.com/langchain-ai/langchain/assets/101429097/eb3105f2-fd18-4527-b4b2-f89e2e87bdb7">
### Who can help?
@baskarya
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import uuid
from langchain.memory.chat_message_histories import RedisChatMessageHistory
from langchain.agents.openai_functions_agent.agent_token_buffer_memory import AgentTokenBufferMemory
from langchain.schema import SystemMessage
from langchain.agents import OpenAIFunctionsAgent, AgentExecutor
from langchain.prompts import MessagesPlaceholder
prompt: SystemMessage = None # my prompt
chat_message_history = RedisChatMessageHistory(
session_id=uuid.uuid4(),
url="redis://localhost:6379",
key_prefix="my_feature_",
ttl=3600,
)
memory = AgentTokenBufferMemory(
chat_memory=chat_message_history,
memory_key="chat_history",
return_messages=True,
llm=llm,
max_token_limit=10000,
)
prompt = OpenAIFunctionsAgent.create_prompt(
system_message=prompt, extra_prompt_messages=[MessagesPlaceholder(variable_name="chat_history")]
)
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory, verbose=True, return_intermediate_steps=True)
agent_response = agent_executor.invoke({"input": user_message})
```
### Expected behavior
Once the current messages have been saved, if the tokens exceed the maximum token limit, the `AgentTokenBufferMemory` should remove old messages from the Redis list (RPOP) until they are below the maximum token limit. | AgentTokenBufferMemory does not remove old messages, leading to the "context_length_exceeded" error from OpenAI. | https://api.github.com/repos/langchain-ai/langchain/issues/15593/comments | 2 | 2024-01-05T18:25:18Z | 2024-05-01T16:06:03Z | https://github.com/langchain-ai/langchain/issues/15593 | 2,067,842,408 | 15,593 |
[
"hwchase17",
"langchain"
] | ### Feature request
It would be really great to enhance the VectorStoreRetriever class by allowing additional (optional) search kwargs to be passed directly to the invoke method. Right now the input type of the invoke method is str; it would be interesting to be able to receive a custom object with "query" and "filters".
Ideally one would be able to do:
```python
chain = (
vector_store.as_retriever()
| parse_documents_to_str
| llm
)
chain.invoke({'query': some_question, 'filter': filter_expression})
```
### Motivation
This change would enable dynamic querying capabilities, such as metadata filtering, which are not currently possible because search_kwargs must be defined at the constructor level or altered on the instantiated object. I know I can work around this with a custom retriever class and some overridden methods, but metadata filtering is a very powerful option for enhancing search, and it would be really helpful for developers to have this built in as default behaviour.
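For completeness, the workaround I have in mind looks roughly like this (an illustrative sketch reusing the names from the example above; whether `similarity_search` accepts a `filter` kwarg depends on the underlying vector store):

```python
from langchain_core.runnables import RunnableLambda


def retrieve_with_filter(inputs: dict):
    # inputs is expected to look like {"query": "...", "filter": {...}} (illustrative)
    return vector_store.similarity_search(inputs["query"], filter=inputs.get("filter"))


chain = (
    RunnableLambda(retrieve_with_filter)
    | parse_documents_to_str
    | llm
)
chain.invoke({'query': some_question, 'filter': filter_expression})
```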
### Your contribution
I can submit a PR if someone confirms this makes sense | Enhance Flexibility in VectorStoreRetriever by Allowing Dynamic search args in invoke Method | https://api.github.com/repos/langchain-ai/langchain/issues/15590/comments | 3 | 2024-01-05T17:29:34Z | 2024-04-12T16:11:30Z | https://github.com/langchain-ai/langchain/issues/15590 | 2,067,752,188 | 15,590 |
[
"hwchase17",
"langchain"
] | ### System Info
python: 3.11
langchain: latest
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
In my chatbot, which interacts with a SQL database, typing "hi" returns the value of the first row and first column instead of answering with nothing or "invalid question". Sometimes the response is generated in ascending order and, when the query is re-run, in descending order. How can I validate the answer before it is given out?
And how can I make my chatbot user friendly? For example, when the user says "hi" it should greet back, and when there is an informal question it should simply reply conversationally.
```python
import pandas as pd
import sqlalchemy as sal
import os, sys, openai
import constants
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain_experimental.sql import SQLDatabaseChain
from sqlalchemy import create_engine
from langchain.prompts import ChatPromptTemplate
# import ChatOpenAI
from langchain.memory import ConversationBufferMemory
# import sql_cmd
from langchain.chat_models import ChatOpenAI
from typing import List, Optional
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
PlanAndExecute,
load_agent_executor,
load_chat_planner,
)
from langchain.sql_database import SQLDatabase
from langchain.text_splitter import TokenTextSplitter
from langchain.tools import BaseTool
from langchain.tools.sql_database.tool import QuerySQLDataBaseTool
from secret_key import openapi_key
# from langchain import PromptTemplate
from langchain.prompts import PromptTemplate
# from langchain.prompts.PromptTemplate import PromptTemplate
# from langchain.models import ChatGPTClient
# from langchain.utils import save_conversation
os.environ['OPENAI_API_KEY'] = openapi_key
from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"
# custom_suffix = """""
# If the SQLResult is empty, the Answer should be "No results found". DO NOT hallucinate an answer if there is no result."""
engine = create_engine(connection_uri)
def chat(question,sql_format):
model_name="gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
# db2 = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
# db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments'])
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
answer = None
if sql_format==False:
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
answer = db_chain.run(question)
else:
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True , return_sql =True)
sql_query = db_chain.run(question)
print("SQLQuery: "+str(sql_query))
# result = engine.execute(sql_query)
result_df = pd.read_sql(sql_query, engine)
answer = result_df.to_dict()
from langchain.prompts import PromptTemplate
PROMPT = """
Given an input question, first create a syntactically correct mssql query to run,
then look at the results of the query and return the SQLResult as answer.
The question: {db_chain.run}
"""
prompt_template = """" Use the following pieces of context to answer the question at the end.
If you don't know the answer, please think rationally and answer from your own knowledge base.
Don't consider the table which are not mentioned, if no result is matching with the keyword Please return the answer as invalid question
{context}
Question: {questions}
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "questions"]
)
def split_text(text, chunk_size, chunk_overlap=0):
text_splitter = TokenTextSplitter(
chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
yield from text_splitter.split_text(text)
class QuerySQLDatabaseTool2(QuerySQLDataBaseTool):
def _run(
self,
query: str,
run_manager: Optional[CallbackManagerForToolRun] = None,
) -> str:
result = self.db.run_no_throw(query)
return next(split_text(result, chunk_size=14_000))
class SQLDatabaseToolkit2(SQLDatabaseToolkit):
def get_tools(self) -> List[BaseTool]:
tools = super().get_tools()
original_query_tool_description = tools[0].description
new_query_tool = QuerySQLDatabaseTool2(
db=self.db, description=original_query_tool_description
)
tools[0] = new_query_tool
return tools
return answer
def chain1(question):
text = chat(question,False)
return text
def chain2(question):
query = chat(question,True)
    return query
```
It uses the following files:
- `\Lib\site-packages\langchain_experimental\sql\base.py`
- `\Lib\site-packages\langchain_experimental\sql\vector_sql.py`
- `\Lib\site-packages\langchain_experimental\sql\prompt.py`
### Expected behavior
When the user types "hi" or another informal message, the chatbot should greet back or reply conversationally instead of returning the first row and first column of a table. The order of query results should also be consistent between runs, and I would like a way to validate the answer before it is given out.
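One direction I am considering (an untested sketch built around the `chain1` helper above) is to short-circuit greetings and small talk before the SQL chain is ever invoked:

```python
GREETINGS = {"hi", "hello", "hey"}


def answer(question: str) -> str:
    # Handle small talk locally instead of sending it to the SQL chain.
    if question.strip().lower() in GREETINGS:
        return "Hello! How can I help you with the employee data today?"
    return chain1(question)
```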
| In a Chatbot to chat with SQL using openai and langchain, how to integrate the chatbot to make simple conversations | https://api.github.com/repos/langchain-ai/langchain/issues/15587/comments | 7 | 2024-01-05T14:26:07Z | 2024-04-15T16:19:09Z | https://github.com/langchain-ai/langchain/issues/15587 | 2,067,440,671 | 15,587 |
[
"hwchase17",
"langchain"
] | ### System Info
google-cloud-aiplatform==1.35.0,
langchain-0.0.354
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
template = """
**Role:**
You are a helpful assistant.
**Context:**
You have to only use a reference stored document to generate a response.
CONTEXT: {context}
**Task:**
1. task 1
- some important requirements for task 1
2. task 2
- some important requirements for task 2
3. task 3
- some important requirements for task 3
Question: {question}
Helpful Answer:
**Resource:** [reference source name]
"""
qa_prompt = ChatPromptTemplate.from_messages(
[
("system", template),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{question}"),
]
)
rag_chain = (
RunnablePassthrough.assign(
context=contextualized_question | retriever | format_docs
)
| qa_prompt
| llm
)
response = rag_chain.invoke({"question": message, "chat_history": memory.get_history()})
print(response)
```
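As a stopgap I post-process the output before using it (a sketch only; it hides the symptom rather than fixing it):

```python
def clean_response(text: str) -> str:
    # Strip a leading "AI:" marker if the model echoes the chat format.
    text = text.strip()
    if text.startswith("AI:"):
        text = text[len("AI:"):].strip()
    return text


print(clean_response(response))
```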
### Expected behavior
I expect the response to contain only the answer text, and sometimes that is exactly what I get. Frequently, however, the response has "\n AI:" in front of the actual answer, which it is not supposed to do.
For example,
```
"\n AI: The reference document says blah blah.",
``` | RAG chain response often includes "\n AI:" in front of actual response | https://api.github.com/repos/langchain-ai/langchain/issues/15586/comments | 4 | 2024-01-05T14:12:29Z | 2024-01-16T00:49:32Z | https://github.com/langchain-ai/langchain/issues/15586 | 2,067,418,916 | 15,586 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain = 0.0.354
This problem appears since commit 62d32bd
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Commit 62d32bd allowed kwargs to be passed through to ChromaDB. This is really nice, but in my case it leads to an error.
I don't know whether this is intended or whether a workaround exists, so I am raising the issue.
Here is how to reproduce it:
``` python
from langchain.vectorstores import Chroma
# load a simple Chroma DB
vectordb = Chroma(persist_directory=my_chroma_db_directory,
embedding_function=my_embedding)
# init a retriever function
retriever = vectordb.as_retriever(search_kwargs={"k": retriever_output_number, "fetch_k": retriever_documents_used, "lambda_mult": retriever_diversity})
# call the retriever in a qa
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=False,
verbose=True)
# ask something
answer = qa_chain.run("find something in doc ...")
```
### Expected behavior
kwargs are needed by the `as_retriever` function; in the example above, the argument `fetch_k` is mandatory
https://github.com/langchain-ai/langchain/blob/fd5fbb507dd3b1c189aa1e4b601b8669217b0f78/libs/core/langchain_core/vectorstores.py#L553
Since commit 62d32bd, kwargs are passed through to Chroma's `similarity_search_with_score` function, which calls `__query_collection`, leading to the error
https://github.com/langchain-ai/langchain/blob/fd5fbb507dd3b1c189aa1e4b601b8669217b0f78/libs/community/langchain_community/vectorstores/chroma.py#L408
TypeError: Collection.query() got an unexpected keyword argument 'fetch_k'
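One possible workaround (an untested sketch, and not what I actually want, since I need plain similarity search) would be to select the MMR search type explicitly so that `fetch_k` and `lambda_mult` are consumed by the retriever instead of being forwarded to Chroma:

```python
retriever = vectordb.as_retriever(
    search_type="mmr",  # MMR consumes fetch_k / lambda_mult itself
    search_kwargs={
        "k": retriever_output_number,
        "fetch_k": retriever_documents_used,
        "lambda_mult": retriever_diversity,
    },
)
```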
Is there a way to avoid that ? | Chroma as_retriever function with kwargs leads to unexpected keyword argument | https://api.github.com/repos/langchain-ai/langchain/issues/15585/comments | 7 | 2024-01-05T13:35:08Z | 2024-06-26T20:11:46Z | https://github.com/langchain-ai/langchain/issues/15585 | 2,067,361,958 | 15,585 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.10.11
Mac M1
Langchain Version: 0.0.353
openai Version: 0.28.0
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.agents.agent_types import AgentType
from langchain.llms.openai import OpenAI
from langchain.sql_database import SQLDatabase
db = SQLDatabase.from_uri("clickhouse+http://clickhouse_admin:P%21%40ssword42%21@localhost:8123/idfyDB")
toolkit = SQLDatabaseToolkit(db=db, llm=OpenAI(temperature=0))
agent_executor = create_sql_agent(
llm=OpenAI(model_name='gpt-4', temperature=0),
toolkit=toolkit,
verbose=True,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
agent_executor.run("Describe table sample_tbl")
```
### Expected behavior
The SQLDatabase toolkit should be able to query the ClickHouse database: I have installed and provided the right dialect, and connecting directly via SQLAlchemy works fine. The issue is that LangChain's SQL database toolkit is somehow unable to query the ClickHouse database, while it works fine for the Chinook SQLite file. Logs for reference:
```
> Entering new AgentExecutor chain...
Action: sql_db_list_tables
Action Input:
Observation:
Thought:I need to check the tables in the database to see if there is an "eve_tasks_executed" table.
Action: sql_db_list_tables
Action Input:
Observation:
Thought:Traceback (most recent call last):
File "/Users/vivekkalyanarangan/opt/anaconda3/envs/streaml/lib/python3.10/site-packages/langchain/agents/agent.py", line 1066, in _iter_next_step
output = self.agent.plan(
File "/Users/vivekkalyanarangan/opt/anaconda3/envs/streaml/lib/python3.10/site-packages/langchain/agents/agent.py", line 636, in plan
return self.output_parser.parse(full_output)
File "/Users/vivekkalyanarangan/opt/anaconda3/envs/streaml/lib/python3.10/site-packages/langchain/agents/mrkl/output_parser.py", line 63, in parse
raise OutputParserException(
langchain_core.exceptions.OutputParserException: Could not parse LLM output: `I don't know the answer to the question because I don't have access to the list of tables in the database.`
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
agent_executor.run("Run select * query on eve_tasks_executed table")
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
raise e
self._call(inputs, run_manager=run_manager)
next_step_output = self._take_next_step(
[
[
raise ValueError(
ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: `I don't know the answer to the question because I don't have access to the list of tables in the database.` [hidden file names]
```
| Clickhouse SQL Database Agent | https://api.github.com/repos/langchain-ai/langchain/issues/15584/comments | 2 | 2024-01-05T13:31:32Z | 2024-01-06T07:38:01Z | https://github.com/langchain-ai/langchain/issues/15584 | 2,067,357,036 | 15,584 |
[
"hwchase17",
"langchain"
] | Hi,
I am using langchain and llama-cpp-python to do some QA on a text file. When using the llama-2-13b-chat quantized model from [HuggingFace](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF/blob/main/llama-2-13b-chat.Q5_K_M.gguf). I am able to create a RetrievalQA chain passing the vectorstore and prompt, but when I use the chain.run(query), it crashes the anaconda kernel.
I tried using the [7b variant](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/blob/main/llama-2-7b-chat.Q5_K_M.gguf) and this works fine without any issue.
### Package Version
langchain==0.0.350
langchain-community==0.0.3
langchain-core==0.1.1
langsmith==0.0.70
llama_cpp_python==0.2.19
### Code Snippet
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter, SentenceTransformersTokenTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings, OpenAIEmbeddings, SentenceTransformerEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import LlamaCpp
from langchain import PromptTemplate
from langchain.chains import RetrievalQA
model_path="Model Path/llama-2-13b-chat.Q5_K_M.gguf"
llm = LlamaCpp(
model_path = model_path,
max_tokens=700,f16_kv=True,model_kwargs={'context_length':4000,'temparature':0.3}
)
embeddings = HuggingFaceEmbeddings(model_name='sentence-transformers/all-MiniLM-L6-v2', model_kwargs={'device':'cpu'})
vectorstore = FAISS.load_local(f'Text File', embeddings=embeddings)
template="""
Answer the question using only the context provided to you.
Context: {context}
Question:{question}
"""
qa_prompt=PromptTemplate(template=template, input_variables=["context","question"])
chain = RetrievalQA.from_chain_type(llm=llm,
chain_type='stuff',
retriever=vectorstore.as_retriever(),
return_source_documents=False,
chain_type_kwargs={'prompt': qa_prompt})
chain.run("Who are the people in the conversation?")
```
### Suggestion:
_No response_ | Issue: ipykernel kernel crashes when using llama-2-13b model | https://api.github.com/repos/langchain-ai/langchain/issues/15583/comments | 1 | 2024-01-05T13:08:21Z | 2024-04-12T16:12:43Z | https://github.com/langchain-ai/langchain/issues/15583 | 2,067,325,836 | 15,583 |
[
"hwchase17",
"langchain"
] | ### System Info
In `langchain_community/vectorstores/azuresearch.py`, on line 656, the field name is used explicitly, which leads to an error if the index does not have that field. The suggestion is to replace
`json.loads(result["metadata"]).get("key", ""),`
with `json.loads(result[FIELDS_METADATA]).get("key", "") if FIELDS_METADATA in result else "",`
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Configure Azure Search index without metadata field.
2. Execute the following code:
```python
docs = vector_store.semantic_hybrid_search_with_score_and_rerank(
    query=query, k=3
)
```
### Expected behavior
successful execution of the above mentioned code | metadata is not properly processed when the field does not exists | https://api.github.com/repos/langchain-ai/langchain/issues/15581/comments | 1 | 2024-01-05T11:39:32Z | 2024-01-07T01:05:01Z | https://github.com/langchain-ai/langchain/issues/15581 | 2,067,198,112 | 15,581 |
[
"hwchase17",
"langchain"
] | ### System Info
google-cloud-aiplatform==1.35.0,
langchain-0.0.354
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
class ChatbotHistory:
def __init__(self, max_size=5):
self.buffer = deque(maxlen=max_size)
def add_interaction(self, user_message, bot_response):
# Assuming HumanMessage is a class that stores a message content
self.buffer.append(HumanMessage(content=user_message))
self.buffer.append(bot_response)
def get_history(self):
return list(self.buffer)
def get_history_as_string(self):
history_string = ""
for message in self.buffer:
if isinstance(message, HumanMessage):
history_string += f"User: {message.content}\n"
else: # Assuming bot responses are strings
history_string += f"Bot: {message}\n"
return history_string.strip()
def clear_history(self):
self.buffer.clear()
def format_docs(docs):
return "\n\n".join(doc.page_content for doc in docs)
def contextualized_question(input: dict):
llm = VertexAI(model_name='text-bison@001', max_output_tokens=512, temperature=0.2)
contextualize_q_system_prompt = """Given a chat history and the latest user question \
which might reference context in the chat history, formulate a standalone question \
which can be understood without the chat history. Do NOT answer the question, \
just reformulate it if needed and otherwise return it as is."""
contextualize_q_prompt = ChatPromptTemplate.from_messages(
[
("system", contextualize_q_system_prompt),
MessagesPlaceholder(variable_name="chat_history"),
("human", "{question}"),
]
)
contextualize_q_chain = contextualize_q_prompt | llm | StrOutputParser()
if input.get("chat_history"):
return contextualize_q_chain
else:
return input["question"]
if not message:
message = request.form.get('userInput')
template = """
CONTEXT: {context}
Question: {question}
Helpful Answer:
**Resource:** [reference source name]
"""
# rag_prompt_custom = PromptTemplate.from_template(template)
qa_prompt = ChatPromptTemplate.from_messages(
[
("system", template),
MessagesPlaceholder(variable_name="chat_history"),
("human", "{question}"),
]
)
rag_chain = (
RunnablePassthrough.assign(
context=contextualized_question | temp_retriever | format_docs
)
| qa_prompt
| llm
)
response = rag_chain.invoke({"question": message, "chat_history": memory.get_history()})
memory.add_interaction(message, response)
```
### Expected behavior
I expected `invoke()` to return a plain string or JSON-formatted response as its output,
but instead it raised the following error:
```
ValueError: variable chat_history should be a list of base messages, got [HumanMessage(content='input message'), "output response"]
```
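My current guess (a sketch, not verified) is that the error comes from `add_interaction()` storing the raw response string in the buffer; wrapping it in an `AIMessage` would keep `chat_history` a list of base messages:

```python
from langchain_core.messages import AIMessage, HumanMessage


def add_interaction(self, user_message, bot_response):
    self.buffer.append(HumanMessage(content=user_message))
    # Store the model output as a message object, not a raw string.
    self.buffer.append(AIMessage(content=bot_response))
```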
| ValueError: variable chat_history should be a list of base messages, got [HumanMessage(content='input message'), "output response"] | https://api.github.com/repos/langchain-ai/langchain/issues/15580/comments | 1 | 2024-01-05T11:34:20Z | 2024-01-05T13:53:05Z | https://github.com/langchain-ai/langchain/issues/15580 | 2,067,191,288 | 15,580 |
[
"hwchase17",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/5701
<div type='discussions-op-text'>
<sup>Originally posted by **rdhillbb** June 5, 2023</sup>
Newbie here.
I found an issue while importing 'VectorstoreIndexCreator'
ImportError: cannot import name 'URL' from 'sqlalchemy' (/Users/tst/anaconda3/lib/python3.10/site-packages/sqlalchemy/__init__.py)
Error Log:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[88], line 2
1 from langchain.docstore.document import Document
----> 2 from langchain.indexes import VectorstoreIndexCreator
3 from langchain_community.utilities import ApifyWrapper
5 apify = ApifyWrapper()
File ~/anaconda3/lib/python3.10/site-packages/langchain/indexes/__init__.py:17
1 """Code to support various indexing workflows.
2
3 Provides code to:
(...)
14 documents that were derived from parent documents by chunking.)
15 """
16 from langchain.indexes._api import IndexingResult, aindex, index
---> 17 from langchain.indexes._sql_record_manager import SQLRecordManager
18 from langchain.indexes.graph import GraphIndexCreator
19 from langchain.indexes.vectorstore import VectorstoreIndexCreator
File ~/anaconda3/lib/python3.10/site-packages/langchain/indexes/_sql_record_manager.py:21
18 import uuid
19 from typing import Any, AsyncGenerator, Dict, Generator, List, Optional, Sequence, Union
---> 21 from sqlalchemy import (
22 URL,
23 Column,
24 Engine,
25 Float,
26 Index,
27 String,
28 UniqueConstraint,
29 and_,
30 create_engine,
31 delete,
32 select,
33 text,
34 )
35 from sqlalchemy.ext.asyncio import (
36 AsyncEngine,
37 AsyncSession,
38 async_sessionmaker,
39 create_async_engine,
40 )
 41 from sqlalchemy.ext.declarative import declarative_base
```
Thank You
Vj | Cannot import name 'URL' from 'sqlalchemy' | https://api.github.com/repos/langchain-ai/langchain/issues/15579/comments | 5 | 2024-01-05T11:32:28Z | 2024-05-13T16:09:17Z | https://github.com/langchain-ai/langchain/issues/15579 | 2,067,188,914 | 15,579 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Below is my code:
```python
def generate_custom_prompt(query=None,name=None,not_uuid=None,chroma_db_path=None):
check = query.lower()
embedding = OpenAIEmbeddings()
vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
retriever = vectordb.as_retriever(search_kwargs={"k": 2})
relevant_document = retriever.get_relevant_documents(query)
print(relevant_document,"*****************************************")
context_text="\n\n---\n\n".join([doc.page_content for doc in relevant_document])
# print(context_text,"context_text")
user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
greetings = ['hi', 'hello', 'hey', 'hui', 'hiiii', 'hii', 'hiii', 'heyyy']
if check in greetings:
custom_prompt_template = f"""
Just simply reply with "Hello {name}! How can I assist you today?"
"""
elif check not in greetings and user_experience_inst.custom_prompt:
custom_prompt_template = f"""Answer the question based only on following context: ```{context_text} ```
You are a chatbot designed to provide answers to User's Questions:```{check}```, delimited by triple backticks.
Generate your answer to match the user's requirements: {user_experience_inst.custom_prompt}
If you encounter a question for which you don't know the answer, please respond with 'I don't know' and refrain from making up an answer.
- Before saying 'I don't know,' please re-verify your vector store to ensure the answer is not present in the database.
Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, feel free to ask for clarification.
User's Question: ```{check}```
AI Answer:"""
else:
# Create the custom prompt template
custom_prompt_template = f"""Generate your response exclusively from the provided context: {{context_text}}. You function as a chatbot specializing in delivering detailed answers to the User's Question: ```{{check}} ```, enclosed within triple backticks.
Generate your answer in points in the following format: {{chat_history}}
1. Point no 1
1.1 Its subpoint in details
1.2 More information if needed.
2. Point no 2
2.1 Its subpoint in details
2.2 More information if needed.
…
N. Another main point.
If you encounter a question for which you don't know the answer based on the predefined points, please respond with 'I don't know' and refrain from making up an answer.
However, if the answer is not present in the predefined points, then Provide comprehensive information related to the user's query.
Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, you can ask for clarification.
User's Question: ```{{check}} ```
AI Answer:"""
# Create the PromptTemplate
custom_prompt = PromptTemplate(input_variables=["context_text","check","chat_history"],template=custom_prompt_template)
formatted_prompt = custom_prompt.format(context_text=context_text, check=check)
# QA_CHAIN_PROMPT = PromptTemplate(
# input_variables=["context_text","check"],template=custom_prompt_template,)
llm = ChatOpenAI(temperature=0.1)
memory = ConversationBufferMemory(memory_key='chat_history', output_key='answer', return_messages=True)
# qa = ConversationalRetrievalChain.from_llm(llm=llm,memory=memory,chain_type="stuff",retriever=retriever,return_source_documents=True,get_chat_history=lambda h : h,verbose=True)
qa = ConversationalRetrievalChain.from_llm(llm=llm,memory=memory,chain_type="stuff",retriever=retriever,return_source_documents=True,get_chat_history=lambda h : h,verbose=True, combine_docs_chain_kwargs={"prompt": custom_prompt})
# prompt_qa={"qa": qa, "formatted_prompt": formatted_prompt}
    return qa
```
The error I am getting is:
```
File "/usr/lib/python3.8/string.py", line 272, in get_field
obj = self.get_value(first, args, kwargs)
File "/usr/lib/python3.8/string.py", line 229, in get_value
return kwargs[key]
KeyError: 'chat_history'
```
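If it helps, my reading (which may be wrong) is that the KeyError comes from this call, because `chat_history` is declared in `input_variables` but never passed to `format()`:

```python
# The else-branch template keeps a literal {chat_history} placeholder, but format()
# only receives context_text and check, so str.format raises KeyError: 'chat_history'.
formatted_prompt = custom_prompt.format(context_text=context_text, check=check)

# Hypothetical fix: supply it explicitly (or drop it from input_variables)
formatted_prompt = custom_prompt.format(context_text=context_text, check=check, chat_history="")
```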
_No response_
### Suggestion:
_No response_ | Issue:How can I resolve memory with conversation retreival chain error? | https://api.github.com/repos/langchain-ai/langchain/issues/15577/comments | 1 | 2024-01-05T10:33:22Z | 2024-04-12T16:18:52Z | https://github.com/langchain-ai/langchain/issues/15577 | 2,067,106,521 | 15,577 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hello Team,
We are trying to use pypdf to extract the text from a PDF and use the chunks for embedding (details are in the attached code snippet). I have installed all the required packages, and it works fine on my local machine (Windows 10). With the same code snippet and requirements.txt on Docker, which uses an Ubuntu OS (our production setup), I get the error below:
**Error while chunking the file: Error while chunking the file, Errored while loading the document: `rapidocr-onnxruntime` package not found, please install it with `pip install rapidocr-onnxruntime`**
The strange part is that the **rapidocr-onnxruntime** package is already installed on the Ubuntu system (I re-verified this in the GitHub Actions runner logs, where it installs all the packages from requirements.txt).
I am not able to understand why, in production, **pypdf with extract_images=True** throws the above error.
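For reference, the loading call that seems to trigger this looks roughly like the following (a sketch from memory; the path and splitter settings are illustrative):

```python
from langchain_community.document_loaders import PyPDFLoader

# extract_images=True makes the parser OCR embedded images, which requires rapidocr-onnxruntime.
loader = PyPDFLoader("/path/to/file.pdf", extract_images=True)
documents = loader.load()
```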
It will be helpful, if you can provide any insight or workaround to this issue.

### Suggestion:
_No response_ | Issue: Pypdf extract_image=True is not working on docker(production) | https://api.github.com/repos/langchain-ai/langchain/issues/15576/comments | 8 | 2024-01-05T09:42:50Z | 2024-06-12T07:02:00Z | https://github.com/langchain-ai/langchain/issues/15576 | 2,067,029,408 | 15,576 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```python
if file_path.lower().endswith(".xlsx") or file_path.lower().endswith(".xls"):
    loader = UnstructuredExcelLoader(file_path, mode="elements")
    document = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=10)
    texts = text_splitter.split_documents(documents=document)
```
What can I modify so that I get the answer from the corresponding column?
### Suggestion:
_No response_ | Issue: Not able to get the expected answers when asking answer of other column corresponding to other column | https://api.github.com/repos/langchain-ai/langchain/issues/15573/comments | 4 | 2024-01-05T08:49:04Z | 2024-04-12T16:16:29Z | https://github.com/langchain-ai/langchain/issues/15573 | 2,066,955,284 | 15,573 |
[
"hwchase17",
"langchain"
] | ### System Info
I want to use the news-api tool, and I have these settings for the API key:
```
os.environ["NEWS_API_KEY"] = "9ed***"
tools = load_tools(["news-api"], llm=llm, news_api_key="9ed****", memory=memory)
```
But when the action is activated, the error is:
```
Action: Search
https://newsapi.org/v2/top-headlines?country=us&category=sports&pageSize=2&apiKey=YOUR_API_KEY
Replace `YOUR_API_KEY` with your actual API key from NewsAPI.org to authenticate the request.
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. load news-api tool
2. run the tool in the agent
### Expected behavior
the news-api tool can run successfully | How to set api key for news-api? | https://api.github.com/repos/langchain-ai/langchain/issues/15572/comments | 2 | 2024-01-05T08:21:08Z | 2024-04-12T22:37:11Z | https://github.com/langchain-ai/langchain/issues/15572 | 2,066,921,236 | 15,572 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.11
Langchain 0.0.354
Windows 11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Getting the exception `httpcore.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.` while executing SQLDatabaseToolkit.
Execute the code below in the described environment:
```python
import os
from dotenv import load_dotenv
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.chat_models import ChatOpenAI
llm_model = "gpt-3.5-turbo"
#load secrets and keys from .env
load_dotenv()
database_user_local = os.getenv("DATABASE_USERNAME_LOCAL")
database_password_local = os.getenv("DATABASE_PASSWORD_LOCAL")
database_server_local = os.getenv("DATABASE_SERVER_LOCAL")
database_db_local = os.getenv("DATABASE_DB_LOCAL")
llm = ChatOpenAI(temperature = 0.0, model_name=llm_model)
connection_string = f"mssql+pymssql://{database_user_local}:{database_password_local}@{database_server_local}/{database_db_local}"
db = SQLDatabase.from_uri(connection_string)
user_id = 118
query = "Select top 5 * from dbo.Users where Id = " + str(user_id)
toolkit = SQLDatabaseToolkit(db=db, llm=llm, reduce_k_below_max_tokens=True)
agent_executor = create_sql_agent(
llm=llm,
toolkit=toolkit,
verbose=True,
input_variables=["query"]
)
agent_executor.run("Get lastlogin from dbo.Users for user_id 118")
```
Error:
```
> Entering new AgentExecutor chain...
Traceback (most recent call last):
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 67, in map_httpcore_exceptions
yield
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 231, in handle_request
resp = self._pool.handle_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpcore\_sync\connection_pool.py", line 215, in handle_request
raise UnsupportedProtocol(
httpcore.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\Program\Python\Python311\Lib\site-packages\openai\_base_client.py", line 866, in _request
response = self._client.send(request, auth=self.custom_auth, stream=stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_client.py", line 915, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_client.py", line 943, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_client.py", line 980, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_client.py", line 1016, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 230, in handle_request
with map_httpcore_exceptions():
File "D:\Program\Python\Python311\Lib\contextlib.py", line 155, in __exit__
self.gen.throw(typ, value, traceback)
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 84, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 67, in map_httpcore_exceptions
yield
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 231, in handle_request
resp = self._pool.handle_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpcore\_sync\connection_pool.py", line 215, in handle_request
raise UnsupportedProtocol(
httpcore.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\Program\Python\Python311\Lib\site-packages\openai\_base_client.py", line 866, in _request
response = self._client.send(request, auth=self.custom_auth, stream=stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_client.py", line 915, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_client.py", line 943, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_client.py", line 980, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_client.py", line 1016, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 230, in handle_request
with map_httpcore_exceptions():
File "D:\Program\Python\Python311\Lib\contextlib.py", line 155, in __exit__
self.gen.throw(typ, value, traceback)
File "D:\Program\Python\Python311\Lib\site-packages\httpx\_transports\default.py", line 84, in map_httpcore_exceptions
```
### Expected behavior
It should execute sql agent and return the result | SQLDatabaseToolkit - httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol | https://api.github.com/repos/langchain-ai/langchain/issues/15567/comments | 1 | 2024-01-05T04:54:04Z | 2024-04-12T16:19:05Z | https://github.com/langchain-ai/langchain/issues/15567 | 2,066,727,007 | 15,567 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain:0.0.353
platform:windows10
python:3.10
I am a beginner with LangChain. I want to use ConversationTokenBufferMemory to manually save the context, but an error occurred. My code is as follows:
```python
import os
from lc.api_key import OPENAI_API_KEY
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationTokenBufferMemory
llm = ChatOpenAI(temperature=0.0)
memory = ConversationTokenBufferMemory(
llm=llm,
max_token_limit=100
)
memory.save_context(inputs={"input": "how about AI?"}, outputs={"output": "Amazing!"})
memory.save_context({"input": "what is space?"}, {"output": "just like a stage."})
memory.save_context({"input": "what can i do?"}, {"output": "workers of the world, unite!"})
print(memory.load_memory_variables())
chain = ConversationChain(
llm=llm,
memory=memory,
verbose=True
)
print(chain.predict(input="what is 1+1?"))
print(chain.predict(input="what is my name?"))
print(chain.predict(input="what can i do?"))
```
---------------------------------
---------------------------------
The problem is that when execution reaches `memory.save_context`, an error is reported. The stack information is as follows:
```
memory.save_context(inputs={"input": "how about AI?"}, outputs={"output": "Amazing!"})
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\langchain\memory\token_buffer.py", line 54, in save_context
curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\langchain_community\chat_models\openai.py", line 610, in get_num_tokens_from_messages
model, encoding = self._get_encoding_model()
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\langchain_community\chat_models\openai.py", line 588, in _get_encoding_model
encoding = tiktoken_.encoding_for_model(model)
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken\model.py", line 97, in encoding_for_model
return get_encoding(encoding_name_for_model(model_name))
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken\registry.py", line 73, in get_encoding
enc = Encoding(**constructor())
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken_ext\openai_public.py", line 64, in cl100k_base
mergeable_ranks = load_tiktoken_bpe(
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken\load.py", line 117, in load_tiktoken_bpe
return {
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken\load.py", line 117, in <dictcomp>
return {
ValueError: too many values to unpack (expected 2)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
from lc.api_key import OPENAI_API_KEY
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationTokenBufferMemory
llm = ChatOpenAI(temperature=0.0)
memory = ConversationTokenBufferMemory(
llm=llm,
max_token_limit=100
)
memory.save_context(inputs={"input": "how about AI?"}, outputs={"output": "Amazing!"})
memory.save_context({"input": "what is space?"}, {"output": "just like a stage."})
memory.save_context({"input": "what can i do?"}, {"output": "workers of the world, unite!"})
print(memory.load_memory_variables())
chain = ConversationChain(
llm=llm,
memory=memory,
verbose=True
)
print(chain.predict(input="what is 1+1?"))
print(chain.predict(input="what is my name?"))
print(chain.predict(input="what can i do?"))
```
### Expected behavior
```
Traceback (most recent call last):
File "E:\Project\pythonProject\langChain\lc\memory\ConversationTokenBufferMemoryUseCaseScript.py", line 14, in <module>
memory.save_context(inputs={"input": "how about AI?"}, outputs={"output": "Amazing!"})
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\langchain\memory\token_buffer.py", line 54, in save_context
curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\langchain_community\chat_models\openai.py", line 610, in get_num_tokens_from_messages
model, encoding = self._get_encoding_model()
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\langchain_community\chat_models\openai.py", line 588, in _get_encoding_model
encoding = tiktoken_.encoding_for_model(model)
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken\model.py", line 97, in encoding_for_model
return get_encoding(encoding_name_for_model(model_name))
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken\registry.py", line 73, in get_encoding
enc = Encoding(**constructor())
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken_ext\openai_public.py", line 64, in cl100k_base
mergeable_ranks = load_tiktoken_bpe(
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken\load.py", line 124, in load_tiktoken_bpe
return {
File "E:\Project\pythonProject\langChain\venv\lib\site-packages\tiktoken\load.py", line 124, in <dictcomp>
return {
ValueError: too many values to unpack (expected 2)
Process finished with exit code 1
```
| ValueError: too many values to unpack (expected 2) | https://api.github.com/repos/langchain-ai/langchain/issues/15564/comments | 1 | 2024-01-05T03:05:15Z | 2024-04-12T16:16:44Z | https://github.com/langchain-ai/langchain/issues/15564 | 2,066,656,658 | 15,564 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm using LangChain 0.0.354 and ChatOpenAI. I want to use the parameter "n" of the OpenAI API to return "n" completions. However, ChatOpenAI always returns a single output. Ultimately, I would like to build my chain using LCEL as follows: `chain = prompt | ChatOpenAI(n=10) | MyCustomParser`. Can someone help me achieve this?
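For context, the closest I have found is dropping down to `generate()` outside of LCEL (a sketch assuming `prompt` is a ChatPromptTemplate; I would prefer to keep everything in the chain):

```python
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(n=10)
messages = prompt.format_messages(question="...")  # "question" is an illustrative variable
result = llm.generate([messages])
texts = [generation.text for generation in result.generations[0]]  # all 10 completions
```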
### Suggestion:
I think it would be nice to return a list of strings by default if n > 1. | Issue: How to use "n" completions with LCEL | https://api.github.com/repos/langchain-ai/langchain/issues/15560/comments | 1 | 2024-01-04T21:31:34Z | 2024-04-11T16:14:15Z | https://github.com/langchain-ai/langchain/issues/15560 | 2,066,364,259 | 15,560 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am trying to run the below:
> import requests
> import json
> from langchain.agents import AgentType, initialize_agent, load_tools
> from langchain_community.llms import OpenAI
> from langchain.chat_models import ChatOpenAI
>
>
> llm = ChatOpenAI(temperature=0,model= 'gpt-3.5-turbo', openai_api_key="...")
>
> token='...'
> tools = load_tools(
> ["graphql"],
> custom_headers={"Authorization": token, "Content-Type": "application/json","API-Version" : "2024-01"},
> graphql_endpoint="https://api.monday.com/v2",
> llm=llm, fetch_schema_from_transport=False
> )
>
>
> agent = initialize_agent(
> tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
> )
>
>
>
> graphql_fields = """
>
> mutation{
> create_item (board_id: 3573920662, item_name: "New ItemX"){
> id
> name
> }
> }
>
> """
>
> suffix = "Create the item specified"
>
>
> print(suffix + graphql_fields)
>
> agent.run(suffix + graphql_fields)
But I keep getting the error:
>
> TransportQueryError: Error while fetching schema: Not Authenticated
> If you don't need the schema, you can try with: "fetch_schema_from_transport=False"
The authorisation is correct (the token and API key are censored here) and so is the endpoint. How can I fix this?
### Suggestion:
_No response_ | Authentican error | https://api.github.com/repos/langchain-ai/langchain/issues/15555/comments | 1 | 2024-01-04T19:48:30Z | 2024-04-11T16:22:16Z | https://github.com/langchain-ai/langchain/issues/15555 | 2,066,241,880 | 15,555 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```
from langchain.vectorstores.pgvector import PGVector
db = PGVector.from_documents(
documents= docs,
embedding = embeddings,
collection_name= "blog_posts",
distance_strategy = DistanceStrategy.COSINE,
connection_string=CONNECTION_STRING
)
```
This code will create a table in the public schema. How can I specify a custom schema other than public?
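The only workaround I can think of (an untested sketch that relies on PostgreSQL's search_path rather than anything PGVector-specific) is to point the connection itself at the schema:

```python
# Hypothetical: set search_path through the libpq "options" parameter so that
# tables are created in my_schema instead of public.
CONNECTION_STRING = (
    "postgresql+psycopg2://user:password@host:5432/dbname"
    "?options=-csearch_path%3Dmy_schema"
)
```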
### Suggestion:
_No response_ | How to specify a custom schema in PGVector.from_documents? | https://api.github.com/repos/langchain-ai/langchain/issues/15553/comments | 2 | 2024-01-04T19:08:46Z | 2024-06-16T16:07:39Z | https://github.com/langchain-ai/langchain/issues/15553 | 2,066,194,527 | 15,553 |
[
"hwchase17",
"langchain"
] | ### System Info
Error stack
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File <command-3066972537097411>, line 1
----> 1 issue_recommendation(
2 review_title="Terrible",
3 review_text="This baking sheet is terrible. It stains so easily and i've tried everything to get it clean. I've maybe used it 5 times and it looks like it's 20 years old. The side of the pan also hold water, so when you pick it up off the drying rack, water runs out. I would never purchase these again.",
4 product_num="8888999"
5
6 )
File <command-3066972537097410>, line 44, in issue_recommendation(review_title, review_text, product_num)
36 retriever = vectordb.as_retriever(search_type="similarity", search_kwargs={'filter': {'product_num': product_num}})
38 retrieval_chain = (
39 {"context": retriever | format_docs, "review_text": RunnablePassthrough()}
40 | rag_prompt
41 | llm
42 | StrOutputParser()
43 )
---> 44 return retrieval_chain.invoke({"review_title":review_title, "review_text": review_text})
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/runnables/base.py:1762, in RunnableSequence.invoke(self, input, config)
1760 try:
1761 for i, step in enumerate(self.steps):
-> 1762 input = step.invoke(
1763 input,
1764 # mark each step as a child run
1765 patch_config(
1766 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
1767 ),
1768 )
1769 # finish the root run
1770 except BaseException as e:
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/runnables/base.py:2327, in RunnableParallel.invoke(self, input, config)
2314 with get_executor_for_config(config) as executor:
2315 futures = [
2316 executor.submit(
2317 step.invoke,
(...)
2325 for key, step in steps.items()
2326 ]
-> 2327 output = {key: future.result() for key, future in zip(steps, futures)}
2328 # finish the root run
2329 except BaseException as e:
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/runnables/base.py:2327, in <dictcomp>(.0)
2314 with get_executor_for_config(config) as executor:
2315 futures = [
2316 executor.submit(
2317 step.invoke,
(...)
2325 for key, step in steps.items()
2326 ]
-> 2327 output = {key: future.result() for key, future in zip(steps, futures)}
2328 # finish the root run
2329 except BaseException as e:
File /usr/lib/python3.10/concurrent/futures/_base.py:451, in Future.result(self, timeout)
449 raise CancelledError()
450 elif self._state == FINISHED:
--> 451 return self.__get_result()
453 self._condition.wait(timeout)
455 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File /usr/lib/python3.10/concurrent/futures/_base.py:403, in Future.__get_result(self)
401 if self._exception:
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
406 self = None
File /usr/lib/python3.10/concurrent/futures/thread.py:58, in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/runnables/base.py:1762, in RunnableSequence.invoke(self, input, config)
1760 try:
1761 for i, step in enumerate(self.steps):
-> 1762 input = step.invoke(
1763 input,
1764 # mark each step as a child run
1765 patch_config(
1766 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
1767 ),
1768 )
1769 # finish the root run
1770 except BaseException as e:
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/retrievers.py:121, in BaseRetriever.invoke(self, input, config)
117 def invoke(
118 self, input: str, config: Optional[RunnableConfig] = None
119 ) -> List[Document]:
120 config = ensure_config(config)
--> 121 return self.get_relevant_documents(
122 input,
123 callbacks=config.get("callbacks"),
124 tags=config.get("tags"),
125 metadata=config.get("metadata"),
126 run_name=config.get("run_name"),
127 )
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/retrievers.py:223, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
221 except Exception as e:
222 run_manager.on_retriever_error(e)
--> 223 raise e
224 else:
225 run_manager.on_retriever_end(
226 result,
227 **kwargs,
228 )
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/retrievers.py:216, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
214 _kwargs = kwargs if self._expects_other_args else {}
215 if self._new_arg_supported:
--> 216 result = self._get_relevant_documents(
217 query, run_manager=run_manager, **_kwargs
218 )
219 else:
220 result = self._get_relevant_documents(query, **_kwargs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_core/vectorstores.py:654, in VectorStoreRetriever._get_relevant_documents(self, query, run_manager)
650 def _get_relevant_documents(
651 self, query: str, *, run_manager: CallbackManagerForRetrieverRun
652 ) -> List[Document]:
653 if self.search_type == "similarity":
--> 654 docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
655 elif self.search_type == "similarity_score_threshold":
656 docs_and_similarities = (
657 self.vectorstore.similarity_search_with_relevance_scores(
658 query, **self.search_kwargs
659 )
660 )
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py:348, in Chroma.similarity_search(self, query, k, filter, **kwargs)
331 def similarity_search(
332 self,
333 query: str,
(...)
336 **kwargs: Any,
337 ) -> List[Document]:
338 """Run similarity search with Chroma.
339
340 Args:
(...)
346 List[Document]: List of documents most similar to the query text.
347 """
--> 348 docs_and_scores = self.similarity_search_with_score(
349 query, k, filter=filter, **kwargs
350 )
351 return [doc for doc, _ in docs_and_scores]
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py:437, in Chroma.similarity_search_with_score(self, query, k, filter, where_document, **kwargs)
429 results = self.__query_collection(
430 query_texts=[query],
431 n_results=k,
(...)
434 **kwargs,
435 )
436 else:
--> 437 query_embedding = self._embedding_function.embed_query(query)
438 results = self.__query_collection(
439 query_embeddings=[query_embedding],
440 n_results=k,
(...)
443 **kwargs,
444 )
446 return _results_to_docs_and_scores(results)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_community/embeddings/openai.py:691, in OpenAIEmbeddings.embed_query(self, text)
682 def embed_query(self, text: str) -> List[float]:
683 """Call out to OpenAI's embedding endpoint for embedding query text.
684
685 Args:
(...)
689 Embedding for the text.
690 """
--> 691 return self.embed_documents([text])[0]
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_community/embeddings/openai.py:662, in OpenAIEmbeddings.embed_documents(self, texts, chunk_size)
659 # NOTE: to keep things simple, we assume the list may contain texts longer
660 # than the maximum context and use length-safe embedding function.
661 engine = cast(str, self.deployment)
--> 662 return self._get_len_safe_embeddings(texts, engine=engine)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-65a09d8c-062d-4f4f-9c52-1bf534f6511e/lib/python3.10/site-packages/langchain_community/embeddings/openai.py:465, in OpenAIEmbeddings._get_len_safe_embeddings(self, texts, engine, chunk_size)
459 if self.model.endswith("001"):
460 # See: https://github.com/openai/openai-python/
461 # issues/418#issuecomment-1525939500
462 # replace newlines, which can negatively affect performance.
463 text = text.replace("\n", " ")
--> 465 token = encoding.encode(
466 text=text,
467 allowed_special=self.allowed_special,
468 disallowed_special=self.disallowed_special,
469 )
471 # Split tokens into chunks respecting the embedding_ctx_length
472 for j in range(0, len(token), self.embedding_ctx_length):
File /databricks/python/lib/python3.10/site-packages/tiktoken/core.py:116, in Encoding.encode(self, text, allowed_special, disallowed_special)
114 if not isinstance(disallowed_special, frozenset):
115 disallowed_special = frozenset(disallowed_special)
--> 116 if match := _special_token_regex(disallowed_special).search(text):
117 raise_disallowed_special_token(match.group())
119 try:
TypeError: expected string or buffer
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to build a RAG pattern over product manuals in PDF, which are split, indexed, and stored in a Chroma vector store persisted on disk. When I call the function below, which classifies reviews using the document context, I get the error shown above:
```
from langchain import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain.embeddings import AzureOpenAIEmbeddings
from langchain.chat_models import AzureChatOpenAI
from langchain.vectorstores import Chroma
llm = AzureChatOpenAI(
azure_deployment="ChatGPT-16K",
openai_api_version="2023-05-15",
azure_endpoint=endpoint,
api_key=result["access_token"],
temperature=0,
seed = 100
)
embedding_model = AzureOpenAIEmbeddings(
api_version="2023-05-15",
azure_endpoint=endpoint,
api_key=result["access_token"],
azure_deployment="ada002",
)
vectordb = Chroma(
persist_directory=vector_db_path,
embedding_function=embedding_model,
collection_name="product_manuals",
)
def format_docs(docs):
return "\n\n".join(doc.page_content for doc in docs)
def classify (review_title, review_text, product_num):
template = """
You are a customer service AI Assistant that handles responses to negative product reviews.
Use the context below and categorize {review_title} and {review_text} into defect, misuse or poor quality categories based only on provided context. If you don't know, say that you do not know, don't try to make up an answer. Respond back with an answer in the following format:
poor quality
misuse
defect
{context}
Category:
"""
rag_prompt = PromptTemplate.from_template(template)
retriever = vectordb.as_retriever(search_type="similarity", search_kwargs={'filter': {'product_num': product_num}})
retrieval_chain = (
        {"context": retriever | format_docs, "review_title": RunnablePassthrough(), "review_text": RunnablePassthrough()}
| rag_prompt
| llm
| StrOutputParser()
)
return retrieval_chain.invoke({"review_title": review_title, "review_text": review_text})
classify(review_title="Terrible", review_text ="This baking sheet is terrible. It stains so easily and i've tried everything to get it clean", product_num ="8888999")
```
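A note on what I suspect is happening (my assumption from reading the trace, not something I have confirmed): in the `{"context": retriever | format_docs, ...}` map, the retriever receives the entire input dict passed to `invoke()`, so `embed_query()` ends up handing a dict to tiktoken, which would raise exactly this `TypeError`. A minimal sketch of routing only a string into the retriever with `itemgetter` (hypothetical restructuring, untested):

```python
from operator import itemgetter

# Pull a single string field out of the chain input before it reaches the
# retriever, so the embedding function receives a str rather than the whole dict.
retrieval_chain = (
    {
        "context": itemgetter("review_text") | retriever | format_docs,
        "review_title": itemgetter("review_title"),
        "review_text": itemgetter("review_text"),
    }
    | rag_prompt
    | llm
    | StrOutputParser()
)
```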
### Expected behavior
Embeddings seem to work fine when I test them directly, and the chain also works when I remove the context and retriever, so the issue seems to be related to the embeddings. The examples on the LangChain [website](https://python.langchain.com/docs/use_cases/question_answering/sources) instantiate the retriever from Chroma.from_documents(), whereas I load the Chroma vector store from a persisted path. I also tried invoking with review_text only (instead of review title and review text), but the error persists. I'm not sure why this is happening. These are the package versions I'm working with:
Name: openai Version: 1.6.1
Name: langchain Version: 0.0.354 | TypeError: expected string or buffer | https://api.github.com/repos/langchain-ai/langchain/issues/15552/comments | 2 | 2024-01-04T19:02:22Z | 2024-06-08T16:08:45Z | https://github.com/langchain-ai/langchain/issues/15552 | 2,066,185,557 | 15,552 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I have a large Agent with lots of memory/observations, but initializing it takes too much time. Is there a way to save the memory and load it again? What's the best way to achieve this? Ideally I would like to reuse the vector store for memory and then, for each new user, save/load the memory/conversations specific to that agent/user connection.
I'm using the examples in the cookbook here: https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb
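To make the ask concrete, this is roughly the shape of what I'm hoping for (just a sketch of the idea, assuming a FAISS-backed `TimeWeightedVectorStoreRetriever` as in the cookbook — I don't know whether this is the supported way): persist the vector store and the retriever's `memory_stream`, then rebuild the retriever per user on load.

```python
import pickle

from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain_community.vectorstores import FAISS


def save_agent_memory(retriever: TimeWeightedVectorStoreRetriever, path: str) -> None:
    # Persist the FAISS index and the time-weighted document list side by side.
    retriever.vectorstore.save_local(path)
    with open(f"{path}/memory_stream.pkl", "wb") as f:
        pickle.dump(retriever.memory_stream, f)


def load_agent_memory(path: str, embeddings) -> TimeWeightedVectorStoreRetriever:
    # Rebuild the retriever from the saved index and document list.
    vectorstore = FAISS.load_local(path, embeddings)
    retriever = TimeWeightedVectorStoreRetriever(
        vectorstore=vectorstore, other_score_keys=["importance"], k=15
    )
    with open(f"{path}/memory_stream.pkl", "rb") as f:
        retriever.memory_stream = pickle.load(f)
    return retriever
```

Each user's `GenerativeAgentMemory` could then be recreated with `memory_retriever=load_agent_memory(...)`, but I'd like to know if there is an official or cleaner mechanism for this.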
### Suggestion:
_No response_ | Question: Using TimeWeightedVectorStoreRetriever and GenerativeAgentMemory is there a way to save the memory and load it again? | https://api.github.com/repos/langchain-ai/langchain/issues/15549/comments | 3 | 2024-01-04T16:39:30Z | 2024-01-04T16:59:31Z | https://github.com/langchain-ai/langchain/issues/15549 | 2,065,981,904 | 15,549 |
[
"hwchase17",
"langchain"
] | ### System Info
I try to load a PDF with `PyPDFDirectoryLoader` from `langchain.document_loaders` and get this warning: `WARNING:pypdf._reader:incorrect startxref pointer(3)`.
from langchain.document_loaders import PyPDFDirectoryLoader
from langchain_community.document_loaders import PyPDFLoader
loader = PyPDFDirectoryLoader("/content/pdfs/Carina Lueschen Masterarbeit Ryan Trecartin (1).pdf")
pages = loader.load_and_split()
This returns an empty array along with the warning above.
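One thing I noticed while testing (an observation, not a confirmed cause): `PyPDFDirectoryLoader` expects a directory path and globs for PDF files inside it, while `PyPDFLoader` takes a single file path, so pointing the directory loader at a file may explain the empty list independently of the warning. For reference, a sketch of both usages:

```python
from langchain_community.document_loaders import PyPDFDirectoryLoader, PyPDFLoader

# Load every PDF found inside a directory
dir_loader = PyPDFDirectoryLoader("/content/pdfs/")
dir_pages = dir_loader.load_and_split()

# Load a single PDF file
file_loader = PyPDFLoader("/content/pdfs/Carina Lueschen Masterarbeit Ryan Trecartin (1).pdf")
file_pages = file_loader.load_and_split()
```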
### Who can help?
@hwchase17 @agola11 @sbusso
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just load a PDF which contains images.
### Expected behavior
The output should be an array of the PDF's page documents. | WARNING:pypdf._reader:incorrect startxref pointer(3) | https://api.github.com/repos/langchain-ai/langchain/issues/15548/comments | 4 | 2024-01-04T16:25:57Z | 2024-04-12T16:12:41Z | https://github.com/langchain-ai/langchain/issues/15548 | 2,065,959,358 | 15,548
[
"hwchase17",
"langchain"
] | ### System Info
python==3.10
langchain==0.0.326
langdetect==1.0.9
langsmith==0.0.54
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Define a `GraphCypherQAChain` that queries a Neo4j graph DB, constructs a Cypher query, and returns an answer to the user's question, with `return_source_documents` set to `True`:
```python
graph = Neo4jGraph(
    url=NEO4J_URL, username=NEO4J_USERNAME, password=NEO4J_PASSWORD
)

EXAMPLES_PROMPT_TEMPLATE = """
Input: {db_question},
Output: {db_query}
"""
example_prompt = PromptTemplate(input_variables=["db_question", "db_query"], template=EXAMPLES_PROMPT_TEMPLATE)

example_selector = MaxMarginalRelevanceExampleSelector.from_examples(
    # This is the list of examples available to select from.
    query_examples,
    # This is the embedding class used to produce embeddings which are used to measure semantic similarity.
    OpenAIEmbeddings(),
    # This is the VectorStore class that is used to store the embeddings and do a similarity search over.
    Chroma,
    # This is the number of examples to produce.
    k=2
)

prompt_cypher = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    suffix="The question is:\n{question}",
    prefix="""Task: Generate a Cypher query to query a graph database based on the user's question.
Instructions:
Use the provided schema for node types, relationship types, and properties in the graph database. Only incorporate these defined elements.
Avoid utilizing any other node types, relationship types, or properties not present in the provided schema. Here's the schema definition:
{schema}
if the question matches one of the sample questions in the knowledge base then just return the query used to answer it.
if the user asks to retrieve a piece of information about a document or section given their name, then use a `WHERE` statement
and a cypher regular expression matching without case sensitivity like in the queries in your knowledge base when filtering by the name.
Use the statement `(t:transaction)-[:CONTAINS*]->(s)` in the cypher query
with the `*` sign next to the relationship label `CONTAINS` and where s is the section node you are looking for.
Ensure the generated query captures relevant information from the graph database without reducing the retrieved data due to variations in user wording.
Note: do not include any explanations or apologies in your responses.
Do not respond to inquiries seeking information other than the construction of a Cypher statement.
Do not include any text except the generated Cypher statement.
""",
    input_variables=["schema", "question"]
)

QA_GENERATION_TEMPLATE = """
Task: answer the question you are given based on the context provided.
Instructions:
You are an assistant that helps to form nice and human understandable answers.
Use the context information provided to generate a well organized and comprehensive answer to the user's question.
When the provided information contains multiple elements, structure your answer as a bulleted or numbered list to enhance clarity and readability.
You must use the information to construct your answer.
The provided information is authoritative; do not doubt it or try to use your internal knowledge to correct it.
Make the answer sound like a response to the question without mentioning that you based the result on the given information.
Here's the information:
{context}
Question: {question}
Answer:
"""
prompt_qa = PromptTemplate(input_variables=["context", "question"], template=QA_GENERATION_TEMPLATE)

chain = GraphCypherQAChain.from_llm(
    cypher_llm=ChatOpenAI(temperature=0, model="gpt-4"),
    qa_llm=ChatOpenAI(temperature=0, model="gpt-4"),
    graph=graph,
    verbose=True,
    return_intermediate_steps=True,
    return_source_documents=True,
    validate_cypher=True,
    cypher_prompt=prompt_cypher,
    qa_prompt=prompt_qa
)
```
2. Run the chain and try to retrieve the source documents used to answer the question:
```python
res = chain({"query": user_question})  # chatbot response; `user_question` is a stand-in for the user's question string
answer = res["result"]
print("source_documents" in res)
print(res.get("source_documents"))
```
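For completeness, the closest I can get today (an assumption based on `return_intermediate_steps=True`, not a documented source-documents feature): the intermediate steps expose the generated Cypher statement and the raw graph context, which can be logged as a stand-in for sources.

```python
# Workaround sketch: inspect intermediate steps instead of source documents.
# Assumes the chain above was built with return_intermediate_steps=True;
# `user_question` is again a stand-in for the user's question string.
res = chain({"query": user_question})
steps = res.get("intermediate_steps", [])
generated_cypher = steps[0]["query"] if steps else None          # Cypher sent to Neo4j
graph_context = steps[1]["context"] if len(steps) > 1 else None  # rows returned by the query
print(generated_cypher)
print(graph_context)
```

This still isn't the same as `return_source_documents`, hence the feature request.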
### Expected behavior
The output is expected to include the source files that were queried, or a similar indication of the high-level graph elements used to construct the context passed in the prompt. | GraphCypherQAChain doesn't support returning source documents with `return_source_documents` param like the `BaseQAWithSourcesChain` chains | https://api.github.com/repos/langchain-ai/langchain/issues/15543/comments | 3 | 2024-01-04T14:32:14Z | 2024-04-17T16:33:13Z | https://github.com/langchain-ai/langchain/issues/15543 | 2,065,766,207 | 15,543