issue_owner_repo (sequence, length 2-2) | issue_body (string, length 0-261k, nullable ⌀) | issue_title (string, length 1-925) | issue_comments_url (string, length 56-81) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, length 20) | issue_updated_at (string, length 20) | issue_html_url (string, length 37-62) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k)
---|---|---|---|---|---|---|---|---|---
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```py
async def stream_tokens():
async for event in agent.astream_events(
{"input": prompt},
{"configurable": {"session_id": "some_hard_coded_value"}},
version="v1",
):
kind = event["event"]
if kind == "on_chat_model_stream":
content = event["data"]["chunk"].content
if content:
yield content
yield ""
```
```py
agent_with_history = RunnableWithMessageHistory(
agent,
lambda session_id: CassandraChatMessageHistory(
keyspace="some_hard_coded_value",
session=cluster.connect(),
session_id="some_hard_coded_value"
),
input_messages_key="input",
history_messages_key="history",
)
agent = AgentExecutor(
agent=agent_with_history,
tools=tools,
verbose=True,
handle_parsing_errors="Check your output and make sure it conforms, use the Action/Action Input syntax",
)
```
### Error Message and Stack Trace (if applicable)
[chain/error] [1:chain:AgentExecutor] [2ms] Chain run errored with error:
"ValueError(\"Missing keys ['session_id'] in config['configurable'] Expected keys are ['session_id'].When using via .invoke() or .stream(), pass in a config; e.g., chain.invoke({'input': 'foo'}, {'configurable': {'session_id': '[your-value-here]'}})\")Traceback (most recent call last):\n\n\n File \"/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain/agents/agent_iterator.py\", line 240, in __aiter__\n async for chunk in self.agent_executor._aiter_next_step(\n\n\n File \"/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py\", line 1262, in _aiter_next_step\n output = await self.agent.aplan(\n ^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py\", line 422, in aplan\n async for chunk in self.runnable.astream(\n\n\n File \"/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py\", line 4123, in astream\n self._merge_configs(config),\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain_core/runnables/history.py\", line 454, in _merge_configs\n raise ValueError(\n\n\nValueError: Missing keys ['session_id'] in config['configurable'] Expected keys are ['session_id'].When using via .invoke() or .stream(), pass in a config; e.g., chain.invoke({'input': 'foo'}, {'configurable': {'session_id': '[your-value-here]'}})"
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/fastapi/applications.py", line 1106, in __call__
await super().__call__(scope, receive, send)
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
raise e
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 69, in app
await response(scope, receive, send)
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/responses.py", line 270, in __call__
async with anyio.create_task_group() as task_group:
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__
raise exceptions[0]
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/responses.py", line 273, in wrap
await func()
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/starlette/responses.py", line 262, in stream_response
async for chunk in self.body_iterator:
File "/x/x/Documents/programming/x/backend-python/source/server/routes/ask_agent_endpoint/ask_agent_endpoint.py", line 39, in stream_tokens
async for event in agent.astream_events(
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 889, in astream_events
async for log in _astream_log_implementation( # type: ignore[misc]
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py", line 612, in _astream_log_implementation
await task
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py", line 567, in consume_astream
async for chunk in runnable.astream(input, config, **kwargs):
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 1551, in astream
async for step in iterator:
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain/agents/agent_iterator.py", line 240, in __aiter__
async for chunk in self.agent_executor._aiter_next_step(
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 1262, in _aiter_next_step
output = await self.agent.aplan(
^^^^^^^^^^^^^^^^^^^^^^^
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 422, in aplan
async for chunk in self.runnable.astream(
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4123, in astream
self._merge_configs(config),
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/x/x/Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain_core/runnables/history.py", line 454, in _merge_configs
raise ValueError(
ValueError: Missing keys ['session_id'] in config['configurable'] Expected keys are ['session_id'].When using via .invoke() or .stream(), pass in a config; e.g., chain.invoke({'input': 'foo'}, {'configurable': {'session_id': '[your-value-here]'}})
### Description
I'm trying to integrate chat history with Cassandra, the same way I did for a plain chain. With `astream_events` on `AgentExecutor`, the config isn't passed down, so it throws the error shown in the traceback. As a test I manually edited the installed source at /Library/Caches/pypoetry/virtualenvs/backend-python-Nj9PzyUh-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py line 4123 and added `configurable["session_id"]` to the final config myself; after that it works. I'm working with GPT-4, but this bug should appear no matter which LLM you are using.
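A possible workaround (a sketch based on the documented pattern of wrapping the executor itself with `RunnableWithMessageHistory`, not verified against this exact setup): put the history wrapper around the `AgentExecutor` instead of around the inner agent, and call `astream_events` on the wrapper so the history runnable receives the config directly.
```py
# Sketch of a possible workaround (untested here): wrap the AgentExecutor
# with RunnableWithMessageHistory instead of wrapping the inner agent.
agent_executor = AgentExecutor(
    agent=agent,  # the plain agent runnable, without history
    tools=tools,
    verbose=True,
    handle_parsing_errors="Check your output and make sure it conforms, use the Action/Action Input syntax",
)
agent_with_history = RunnableWithMessageHistory(
    agent_executor,
    lambda session_id: CassandraChatMessageHistory(
        keyspace="some_hard_coded_value",
        session=cluster.connect(),
        session_id=session_id,
    ),
    input_messages_key="input",
    history_messages_key="history",
)

async def stream_tokens():
    async for event in agent_with_history.astream_events(
        {"input": prompt},
        {"configurable": {"session_id": "some_hard_coded_value"}},
        version="v1",
    ):
        if event["event"] == "on_chat_model_stream":
            content = event["data"]["chunk"].content
            if content:
                yield content
```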
### System Info
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.17
langchain-experimental==0.0.49
langchain-openai==0.0.5
langchainhub==0.1.14
MacOS Sonoma 14.1.2 with ARM M3 CPU
Python 3.11.7 (main, Dec 20 2023, 12:17:39) [Clang 15.0.0 (clang-1500.0.40.1)] on darwin | astream_events doesn't pass config properly | https://api.github.com/repos/langchain-ai/langchain/issues/16944/comments | 6 | 2024-02-02T14:50:34Z | 2024-02-07T02:34:07Z | https://github.com/langchain-ai/langchain/issues/16944 | 2,115,112,004 | 16,944 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_google_vertexai.chat_models import ChatVertexAI
llm = ChatVertexAI(
model_name="gemini-pro",
max_output_tokens=1,
)
llm.invoke("foo")
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 165, in invoke
self.generate_prompt(
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 543, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 407, in generate
raise e
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 397, in generate
self._generate_with_cache(
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 576, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_google_vertexai/chat_models.py", line 356, in _generate
return generate_from_stream(stream_iter)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 64, in generate_from_stream
for chunk in stream:
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_google_vertexai/chat_models.py", line 499, in _stream
for response in responses:
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py", line 926, in _send_message_streaming
raise ResponseBlockedError(
vertexai.generative_models._generative_models.ResponseBlockedError: The response was blocked.
### Description
When using Vertex AI's Gemini generative models, a ResponseBlockedError is raised when the generated text reaches either the maximum allowed token limit or a natural stopping point, as defined in the Google Vertex AI Python library (https://github.com/googleapis/python-aiplatform/blob/93036eda04566501e74916814e950236d9dbed62/vertexai/generative_models/_generative_models.py#L640-L644). However, instead of handling this case gracefully within LangChain, the exception is propagated to the top level of the `invoke` call, causing unexpected behavior and potentially interrupting workflows.
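As a stop-gap on the caller side, something like the following sketch can at least keep the workflow from crashing. The import path is taken from the traceback above and may change between versions of the Vertex AI SDK.
```python
from vertexai.generative_models._generative_models import ResponseBlockedError

try:
    result = llm.invoke("foo")
except ResponseBlockedError:
    # Reached even though the model simply hit max_output_tokens or a normal
    # stop; fall back to an empty answer instead of crashing the whole chain.
    result = None
```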
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1-Alpine SMP PREEMPT_DYNAMIC Wed, 29 Nov 2023 18:56:40 +0000
> Python Version: 3.11.7 (main, Feb 2 2024, 12:35:14) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.10
> langchain: 0.1.0
> langchain_community: 0.0.12
> langserve: Not Found | Vertex AI fail on successful finish reason | https://api.github.com/repos/langchain-ai/langchain/issues/16939/comments | 1 | 2024-02-02T12:57:29Z | 2024-05-10T16:10:00Z | https://github.com/langchain-ai/langchain/issues/16939 | 2,114,872,848 | 16,939 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Below is the complete code:
```
# !pip -q install langchain openai tiktoken chromadb pypdf sentence-transformers==2.2.2 InstructorEmbedding faiss-cpu
import os
os.environ["OPENAI_API_KEY"] = ""
from langchain.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders import DirectoryLoader
# InstructorEmbedding
from InstructorEmbedding import INSTRUCTOR
from langchain.embeddings import HuggingFaceInstructEmbeddings
# OpenAI Embedding
from langchain.embeddings import OpenAIEmbeddings
"""### Load Multiple files from Directory"""
root_dir = "/content/data"
# List of file paths for your CSV files
csv_files = ['one.csv', '1.csv', 'one-no.csv', 'one-yes.csv']
# Iterate over the file paths and create a loader for each file
loaders = [CSVLoader(file_path=file_path, encoding="utf-8") for file_path in csv_files]
# Now, loaders is a list of CSVLoader instances, one for each file
# Optional: If you need to combine the data from all loaders
documents = []
for loader in loaders:
data = loader.load() # or however you retrieve data from the loader
documents.extend(data)
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
texts = text_splitter.split_documents(documents)
len(texts)
import pickle
import faiss
from langchain.vectorstores import FAISS
def store_embeddings(docs, embeddings, store_name, path):
    vectorStore = FAISS.from_documents(docs, embeddings)
    with open(f"{path}/faiss_{store_name}.pkl", "wb") as f:
        pickle.dump(vectorStore, f)
def load_embeddings(store_name, path):
    with open(f"{path}/faiss_{store_name}.pkl", "rb") as f:
        VectorStore = pickle.load(f)
    return VectorStore
"""### HF Instructor Embeddings"""
from langchain.embeddings import HuggingFaceInstructEmbeddings
# from langchain_community.embeddings import HuggingFaceInstructEmbeddings
from InstructorEmbedding import INSTRUCTOR
instructor_embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl",
model_kwargs={"device": "cuda"})
Embedding_store_path = f"{root_dir}/Embedding_store"
db_instructEmbedd = FAISS.from_documents(texts, instructor_embeddings)
retriever = db_instructEmbedd.as_retriever(search_kwargs={"k": 5})
retriever.search_type
retriever.search_kwargs
docs = retriever.get_relevant_documents("Can you tell me about natixis risk mapping?")
docs[0]
# create the chain to answer questions
qa_chain_instrucEmbed = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
"""### OpenAI's Embeddings"""
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
db_openAIEmbedd = FAISS.from_documents(texts, embeddings)
retriever_openai = db_openAIEmbedd.as_retriever(search_kwargs={"k": 2})
# create the chain to answer questions
qa_chain_openai = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2, ),
chain_type="stuff",
retriever=retriever_openai,
return_source_documents=True)
"""### Testing both MODELS"""
## Cite sources
import textwrap
def wrap_text_preserve_newlines(text, width=110):
# Split the input text into lines based on newline characters
lines = text.split('\n')
# Wrap each line individually
wrapped_lines = [textwrap.fill(line, width=width) for line in lines]
# Join the wrapped lines back together using newline characters
wrapped_text = '\n'.join(wrapped_lines)
return wrapped_text
# def process_llm_response(llm_response):
# print(wrap_text_preserve_newlines(llm_response['result']))
# print('\nSources:')
# for source in llm_response["source_documents"]:
# print(source.metadata['source'])
def process_llm_response(llm_response):
# print('\nSources:')
# print("\n")
if llm_response["source_documents"]:
for source in llm_response["source_documents"]:
print(wrap_text_preserve_newlines(source.page_content))
source_name = source.metadata['source']
row_number = source.metadata.get('row', 'Not specified')
print(f"Source: {source_name}, Row: {row_number}\n")
else:
print("No sources available.")
query = 'Can you tell me about natixis risk mapping??'
print('-------------------Instructor Embeddings------------------\n')
llm_response = qa_chain_instrucEmbed(query)
process_llm_response(llm_response)
```
The above gives the output below:
```
-------------------Instructor Embeddings------------------
Snippet: Natixis conducted a systematic identification and materiality assessment of climate risk impacts.
This exercise leveraged existing Natixis risk mapping and relied on a qualitative analysis of the materiality
of impacts by Environmental and Social Responsibility and risk experts in the short medium term ( 5 years) and
long term (5.30 years). The analysis led to distinguish between indirect impactsresulting from Natixis
exposure to other entities (clientsassetsetc.) exposed to climate risksand direct impacts to which Natixis is
exposed through its own activities.
Source: conflicts.csv, Row: 14
Snippet: All risksincluding climate related risksare identified and evaluated at the regional level with the
help of regional experts. They cover the entire range of climate related issues (transitional and physical
issues). Risks are assessed on a gross risk basis. Gross risk is defined as risk without mitigation controls.
The risks are analyzed according to the criteria “EBIT effect” and “probability.”
Source: conflicts.csv, Row: 13
Snippet: Wärtsilä identifies and assesses on an annual basis its sustainability risksincluding climate
change risksin both its strategic and operative risk assessments.
Source: conflicts.csv, Row: 16
Snippet: Climate risk has been identified as one of the most significant risks.
Source: conflicts.csv, Row: 50
Snippet: Impact & implication Aurubis is since 2013 part of the EU-ETS.
Source: conflicts1.csv, Row: 17
```
It is returning multiple outputs from the same source, but I was expecting one output from each and every source document. It seems like multiple snippets from the same source should be combined, and the single output for that source should be generated from that combined text; a rough sketch of what I mean is below. Can you please look into this?
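Roughly, this is the grouping I have in mind (a sketch only, not working code), reusing the `llm_response` structure and `wrap_text_preserve_newlines` from the code above:
```python
from collections import defaultdict

def process_llm_response(llm_response):
    # Group the returned snippets by source file, then print one combined
    # block of text per source instead of one block per snippet.
    by_source = defaultdict(list)
    for source in llm_response["source_documents"]:
        by_source[source.metadata["source"]].append(source.page_content)
    for source_name, snippets in by_source.items():
        combined = "\n".join(snippets)
        print(wrap_text_preserve_newlines(combined))
        print(f"Source: {source_name}\n")
```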
### Idea or request for content:
_No response_ | Unable to return output from every source (i.e. every document), rather it is returning only one output even if there are multiple documents | https://api.github.com/repos/langchain-ai/langchain/issues/16938/comments | 1 | 2024-02-02T12:41:27Z | 2024-02-14T03:35:25Z | https://github.com/langchain-ai/langchain/issues/16938 | 2,114,842,720 | 16,938 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
In the code below, it returns only one answer even if there are multiple documents (multiple CSV files). I'm not sure which line of code I should change to get an output for every source.
```
# !pip -q install langchain openai tiktoken chromadb pypdf sentence-transformers==2.2.2 InstructorEmbedding faiss-cpu
import os
os.environ["OPENAI_API_KEY"] = ""
from langchain.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders import DirectoryLoader
# InstructorEmbedding
from InstructorEmbedding import INSTRUCTOR
from langchain.embeddings import HuggingFaceInstructEmbeddings
# OpenAI Embedding
from langchain.embeddings import OpenAIEmbeddings
"""### Load Multiple files from Directory"""
root_dir = "/content/data"
# List of file paths for your CSV files
csv_files = ['one.csv', '1.csv', 'one-no.csv', 'one-yes.csv']
# Iterate over the file paths and create a loader for each file
loaders = [CSVLoader(file_path=file_path, encoding="utf-8") for file_path in csv_files]
# Now, loaders is a list of CSVLoader instances, one for each file
# Optional: If you need to combine the data from all loaders
documents = []
for loader in loaders:
data = loader.load() # or however you retrieve data from the loader
documents.extend(data)
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
texts = text_splitter.split_documents(documents)
len(texts)
import pickle
import faiss
from langchain.vectorstores import FAISS
def store_embeddings(docs, embeddings, store_name, path):
    vectorStore = FAISS.from_documents(docs, embeddings)
    with open(f"{path}/faiss_{store_name}.pkl", "wb") as f:
        pickle.dump(vectorStore, f)
def load_embeddings(store_name, path):
    with open(f"{path}/faiss_{store_name}.pkl", "rb") as f:
        VectorStore = pickle.load(f)
    return VectorStore
"""### HF Instructor Embeddings"""
from langchain.embeddings import HuggingFaceInstructEmbeddings
# from langchain_community.embeddings import HuggingFaceInstructEmbeddings
from InstructorEmbedding import INSTRUCTOR
instructor_embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl",
model_kwargs={"device": "cuda"})
Embedding_store_path = f"{root_dir}/Embedding_store"
db_instructEmbedd = FAISS.from_documents(texts, instructor_embeddings)
retriever = db_instructEmbedd.as_retriever(search_kwargs={"k": 5})
retriever.search_type
retriever.search_kwargs
docs = retriever.get_relevant_documents("Can you tell me about natixis risk mapping?")
docs[0]
# create the chain to answer questions
qa_chain_instrucEmbed = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
"""### OpenAI's Embeddings"""
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
db_openAIEmbedd = FAISS.from_documents(texts, embeddings)
retriever_openai = db_openAIEmbedd.as_retriever(search_kwargs={"k": 2})
# create the chain to answer questions
qa_chain_openai = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2, ),
chain_type="stuff",
retriever=retriever_openai,
return_source_documents=True)
"""### Testing both MODELS"""
## Cite sources
import textwrap
def wrap_text_preserve_newlines(text, width=110):
# Split the input text into lines based on newline characters
lines = text.split('\n')
# Wrap each line individually
wrapped_lines = [textwrap.fill(line, width=width) for line in lines]
# Join the wrapped lines back together using newline characters
wrapped_text = '\n'.join(wrapped_lines)
return wrapped_text
# def process_llm_response(llm_response):
# print(wrap_text_preserve_newlines(llm_response['result']))
# print('\nSources:')
# for source in llm_response["source_documents"]:
# print(source.metadata['source'])
def process_llm_response(llm_response):
print(wrap_text_preserve_newlines(llm_response['result']))
print('\nSources:')
if llm_response["source_documents"]:
# Access the first source document
first_source = llm_response["source_documents"][0]
source_name = first_source.metadata['source']
row_number = first_source.metadata.get('row', 'Not specified')
# Print the first source's file name and row number
print(f"{source_name}, Row: {row_number}")
else:
print("No sources available.")
query = 'Can you tell me about natixis risk mapping??'
print('-------------------Instructor Embeddings------------------\n')
llm_response = qa_chain_instrucEmbed(query)
process_llm_response(llm_response)
query = 'Can you tell me about natixis risk mapping??'
print('-------------------OpenAI Embeddings------------------\n')
llm_response = qa_chain_openai(query)
process_llm_response(llm_response)
```
Below is the actual output
query = 'Can you tell me about natixis risk mapping??'
print('-------------------Instructor Embeddings------------------\n')
llm_response = qa_chain_instrucEmbed(query)
process_llm_response(llm_response)
```
-------------------Instructor Embeddings------------------
Answer:
Natixis conducts a systematic identification and materiality assessment of climate risk impacts through their
risk mapping process. This involves evaluating all risks, including climate related risks, at the regional
level with the help of regional experts. The risks are assessed on a gross risk basis, meaning without
mitigation controls, and are analyzed according to the criteria "EBIT effect" and "probability." This process
also distinguishes between indirect impacts resulting from Natixis' exposure to other entities and direct
impacts from their own activities.
Sources:
Source 1: one.csv, Row: 14
Source 2: 1.csv, Row: 13
Source 3: one-no.csv, Row: 16
Source 4: one-yes.csv, Row: 50
```
Expected output:
```
Answer:
Natixis conducts a systematic identification and materiality assessment of climate risk impacts through their
risk mapping process. This involves evaluating all risks, including climate related risks, at the regional
level with the help of regional experts. The risks are assessed on a gross risk basis, meaning without
mitigation controls, and are analyzed according to the criteria "EBIT effect" and "probability." This process
also distinguishes between indirect impacts resulting from Natixis' exposure to other entities and direct
impacts from their own activities.
Sources:
Source: one.csv, Row: 14
Answer:
I'm not sure.
Sources:
Source: 1.csv, Row: 13
```
so on
It has returned only one answer across multiple sources, but I need an answer for each and every source. Can anyone please help me construct the code? A rough sketch of what I am after is below.
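A hypothetical sketch of the behaviour I'm after (not working code): run the chain once per CSV file, restricting retrieval to that file each time, using the objects already defined above (`texts`, `csv_files`, `instructor_embeddings`, `query`, `process_llm_response`):
```python
# Hypothetical sketch: one retrieval + one answer per CSV file.
for file_path in csv_files:
    # Keep only the chunks that came from this file.
    file_texts = [t for t in texts if t.metadata.get("source") == file_path]
    if not file_texts:
        continue
    file_db = FAISS.from_documents(file_texts, instructor_embeddings)
    file_chain = RetrievalQA.from_chain_type(
        llm=OpenAI(temperature=0.2),
        chain_type="stuff",
        retriever=file_db.as_retriever(search_kwargs={"k": 5}),
        return_source_documents=True,
    )
    llm_response = file_chain(query)
    print(f"----- {file_path} -----")
    process_llm_response(llm_response)
```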
### Idea or request for content:
_No response_ | Unable to return output from every souce (i.e. every document), rather it is returning only one output even if there are multiple documents | https://api.github.com/repos/langchain-ai/langchain/issues/16935/comments | 7 | 2024-02-02T11:41:23Z | 2024-03-19T05:56:19Z | https://github.com/langchain-ai/langchain/issues/16935 | 2,114,743,405 | 16,935 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
This is how I get the Azure OpenAI LLM object
```python
def getLlmObject():
getToken()
model = AzureChatOpenAI(
openai_api_version=os.environ['OPENAI_API_VERSION'],
azure_deployment=os.environ['AZURE_OPENAI_DEPLOYMENT'],
azure_endpoint = os.environ['AZURE_ENDPOINT'],
openai_api_type = 'azure',
user = f'{{"appkey": "{APP_KEY}"}}'
)
return model
```
It would be ideal to change line 205 to detect that the model does not support streaming, or to provide an option to set `streaming=False` when instantiating the `AzureChatOpenAI` class.
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/root/volume1/iris/onex-gen-ai-experimental/crew/crew.py", line 93, in <module>
result = crew.kickoff()
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/crewai/crew.py", line 127, in kickoff
return self._sequential_loop()
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/crewai/crew.py", line 134, in _sequential_loop
task_output = task.execute(task_output)
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/crewai/task.py", line 56, in execute
result = self.agent.execute_task(
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/crewai/agent.py", line 146, in execute_task
result = self.agent_executor.invoke(
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain/chains/base.py", line 162, in invoke
raise e
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/crewai/agents/executor.py", line 59, in _call
next_step_output = self._take_next_step(
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in _take_next_step
[
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in <listcomp>
[
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/crewai/agents/executor.py", line 103, in _iter_next_step
output = self.agent.plan(
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain/agents/agent.py", line 387, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2424, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2411, in transform
yield from self._transform_stream_with_config(
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1497, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2375, in _transform
for output in final_pipeline:
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1035, in transform
for chunk in input:
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4145, in transform
yield from self.bound.transform(
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1045, in transform
yield from self.stream(final, config, **kwargs)
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 250, in stream
raise e
File "/volume1/anaconda3/envs/iris-experimental/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 242, in stream
assert generation is not None
AssertionError
### Description
I use an enterprise Azure OpenAI instance to work with CrewAI (for autonomous agents). Our Azure OpenAI endpoint does not support streaming, but the check at line 205 of `libs/core/langchain_core/language_models/chat_models.py` (https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/language_models/chat_models.py) sends execution into the else (streaming) block, which then fails on the statement `assert generation is not None`.
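As an untested workaround idea (relying on the line-205 check described above, so it may break with other versions), one could make LangChain treat the model as non-streaming so that `.stream()` falls back to a single `.invoke()` call:
```python
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_openai import AzureChatOpenAI

class NonStreamingAzureChatOpenAI(AzureChatOpenAI):
    # Reuse the base class stubs so the `type(self)._stream == BaseChatModel._stream`
    # check treats this model as non-streaming and uses the invoke() fallback.
    _stream = BaseChatModel._stream
    _astream = BaseChatModel._astream
```
This is only a guess at a stop-gap; the real fix probably belongs in `chat_models.py` itself.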
### System Info
```linux
(condaenv) [root@iris crew]# pip freeze | grep langchain
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.17
langchain-experimental==0.0.20
langchain-openai==0.0.2.post1
(condaenv) [root@iris crew]# python --version
Python 3.10.9
``` | LangChain Core Chatmodels.py goes to a streaming block causing "generation is not None" assertion error when the AzureChatOpenAI llm object does not support streaming. | https://api.github.com/repos/langchain-ai/langchain/issues/16930/comments | 5 | 2024-02-02T08:06:16Z | 2024-08-05T17:09:07Z | https://github.com/langchain-ai/langchain/issues/16930 | 2,114,346,425 | 16,930 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I have the code below:
```
if len(year_lst) != 0 and len(company_lst) == 0:
response = []
for i in year_lst:
vectorstore_retriver_args = {
"k": vector_count,
"pre_filter": {"$and": [{"year": {"$eq": int(i.strip())}}]},
}
final_question = question.replace("[ ]", i)
print(f"Final Question : {final_question}")
query_llm = RetrievalQA.from_chain_type(
llm=llm,
verbose=True,
chain_type="stuff",
retriever=vectorstore_ind.as_retriever(
search_kwargs=vectorstore_retriver_args
),
return_source_documents=True,
chain_type_kwargs={"prompt": prompt},
)
response.append(query_llm({"query": final_question.strip().lower()}))
```
I have developed a program that allows for the uploading of multiple PDF files, and it includes a feature where you can specify a "vector count" for each PDF. This vector count determines how many similar snippets the program will identify and return from each uploaded PDF. For example, if the vector count is set to 5, the program will find and return 5 similar snippets from each PDF file. My question concerns how the program processes these snippets to answer queries for each PDF file: does it compile the top 5 similar snippets from each PDF, concatenate these snippets together, and then generate a response based on the combined content from each file? Or does it select the most relevant snippet from those top 5 and base its response solely on that single snippet? I just want to know how **RetrievalQA** works
### Idea or request for content:
_No response_ | How the 'RetrievalQA' function works? | https://api.github.com/repos/langchain-ai/langchain/issues/16927/comments | 3 | 2024-02-02T06:58:40Z | 2024-02-14T03:35:24Z | https://github.com/langchain-ai/langchain/issues/16927 | 2,114,240,162 | 16,927 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Below is my code
``` python
def generate_embeddings(config: dict = None, urls = None, file_path = None, persist_directory=None):
texts=None
if file_path:
_, file_extension = os.path.splitext(file_path)
file_extension = file_extension.lower()
image_types=['jpeg','jpg','png','gif']
if file_path.lower().endswith(".pdf"):
loader = PyPDFLoader(file_path)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=300)
texts = text_splitter.split_documents(documents=document)
elif file_path.lower().endswith(".csv"):
loader = CSVLoader(file_path, encoding="utf-8", csv_args={'delimiter': ','})
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=300)
texts = text_splitter.split_documents(documents=document)
elif file_path.lower().endswith(".xlsx") or file_path.lower().endswith(".xls"):
loader = UnstructuredExcelLoader(file_path)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
texts = text_splitter.split_documents(documents=document)
elif file_path.lower().endswith(".docx") or file_path.lower().endswith(".doc"):
loader = UnstructuredWordDocumentLoader(file_path)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=0)
texts = text_splitter.split_documents(documents=document)
elif any(file_path.lower().endswith(f".{img_type}") for img_type in image_types):
loader=UnstructuredImageLoader(file_path)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
texts = text_splitter.split_documents(documents=document)
elif file_path.lower().endswith(".txt"):
loader=TextLoader(file_path)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
texts = text_splitter.split_documents(documents=document)
elif config is not None:
confluence_url = config.get("confluence_url", None)
username = config.get("username", None)
api_key = config.get("api_key", None)
space_key = config.get("space_key", None)
documents = []
embedding = OpenAIEmbeddings()
loader = ConfluenceLoader(
url=confluence_url,
username=username,
api_key=api_key
)
for space_key in space_key:
try:
if space_key[1] is True:
print('add attachment')
documents.extend(loader.load(space_key=space_key[0],include_attachments=True,limit=100))
text_splitter = CharacterTextSplitter(chunk_size=6000, chunk_overlap=50)
texts = text_splitter.split_documents(documents)
text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=10, encoding_name="cl100k_base")
texts = text_splitter.split_documents(texts)
else:
print("without attachment")
documents.extend(loader.load(space_key=space_key[0],limit=100))
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=50)
texts = text_splitter.split_documents(documents)
text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=10, encoding_name="cl100k_base")
texts = text_splitter.split_documents(texts)
except:
documents=[]
elif urls:
all_urls=[]
for url in urls:
if url[1] is True:
crawl_data=crawl(url[0])
all_urls.extend(crawl_data)
if url[1] is False:
dummy = []
dummy.append(url[0])
all_urls.extend(dummy)
loader = UnstructuredURLLoader(urls=all_urls)
urlDocument = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=50)
texts = text_splitter.split_documents(documents=urlDocument)
else:
raise ValueError("Invalid source_type. Supported values are 'pdf', 'confluence', and 'url'.")
if texts:
embedding = OpenAIEmbeddings()
Chroma.from_documents(documents=texts, embedding=embedding, persist_directory=persist_directory)
file_crawl_status = True
file_index_status = True
else:
file_crawl_status = False
file_index_status = False
return file_crawl_status, file_index_status
def retreival_qa_chain(chroma_db_path):
embedding = OpenAIEmbeddings()
vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
llm = ChatOpenAI(model="gpt-3.5-turbo-16k",temperature=0.1)
retriever = vectordb.as_retriever(search_kwargs={"k": 2})
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever,return_source_documents=True)
return qa
```
### Error Message and Stack Trace (if applicable)
[02/Feb/2024 11:54:15] "GET /create-project/ HTTP/1.1" 200 19747
Bad Request: /create-project/
[02/Feb/2024 11:56:43] "POST /create-project/ HTTP/1.1" 400 45
### Description
# Confluence Project Issue
## Problem Description
I am working on a Confluence project where I have implemented an "include attachments=True" feature. The functionality works fine locally, and the code is deployed on two servers. However, on one of the servers, I am encountering a "Bad request" error. Despite having all the necessary dependencies installed, the issue persists.
## Dependency Information
Here are the dependencies installed on all servers:
- Django 4.0
- Django Rest Framework
- Langchain 0.1.1
- Markdownify
- Pillow
- Docx2txt
- Xlrd
- Pandas
- Reportlab
- Svglib
- Pdf2image
- Chromadb
- Unstructured
- OpenAI
- Pypdf
- Tiktoken
- Django-cors-headers
- Django-environ
- Pytesseract 0.3.10
- Beautifulsoup4 4.12.2
- Atlassian-python-api 3.41.9
- Lxml
- Langchain-community
- Langchain-openai
- Python-docx
- Unstructured-inference
- Unstructured[all-docs]
- Pydantic
- Langchainhub
## Additional Information
- The issue occurs specifically on one server.
- The "include attachments" feature is working fine on the local environment and another server.
- All dependencies are installed on the problematic server.
- The server where the issue occurs has Django 4.0 installed.
## Steps Taken
- Checked server logs for any specific error messages.
- Verified that all necessary dependencies are installed on the problematic server.
- Ensured that the codebase is the same across all servers.
## Error Message
On the problematic server, I am receiving a "Bad request" error.
## Request for Assistance
I would appreciate any guidance or suggestions on how to troubleshoot and resolve this issue. If anyone has encountered a similar problem or has insights into Confluence projects and attachment inclusion, your assistance would be invaluable.
Thank you!
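One way to isolate whether the failure comes from the `ConfluenceLoader` itself or from the Django/ASGI layer would be to run just the loader on the problematic server (a sketch, using the same arguments as in the code above):
```python
from langchain_community.document_loaders import ConfluenceLoader

# Isolation test: run in a plain Python shell on the problematic server,
# outside of Django. confluence_url, username, api_key as in the config above;
# "SPACE" is a placeholder space key.
loader = ConfluenceLoader(url=confluence_url, username=username, api_key=api_key)
docs = loader.load(space_key="SPACE", include_attachments=True, limit=10)
print(len(docs))
```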
### System Info
below is my server configuration
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
total used free shared buff/cache available
Mem: 7.7Gi 5.4Gi 238Mi 20Mi 2.1Gi 2.1Gi
Swap: 8.0Gi 2.1Gi 5.9Gi | Include attachments=True is not working in Confluence | https://api.github.com/repos/langchain-ai/langchain/issues/16926/comments | 3 | 2024-02-02T06:50:43Z | 2024-07-18T16:07:44Z | https://github.com/langchain-ai/langchain/issues/16926 | 2,114,223,300 | 16,926 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
`record_manager.get_time` or `record_manager.get_time()` raises an error:
> record_manager.update(
> File ".../venv/lib/python3.11/site-packages/langchain/indexes/_sql_record_manager.py", line 269, in update
> update_time = self.get_time()
>
### Error Message and Stack Trace (if applicable)
> ERROR: Exception in ASGI application
> Traceback (most recent call last):
> File "/chat-service/main.py", line 196, in ingress
> ingest_docs()
> File "/chat-service/ingest.py", line 168, in ingest_docs
> indexing_stats = index(
> ^^^^^^
> File "/chat-service/_index.py", line 158, in index
> record_manager.update(
> File "/chat-service/venv/lib/python3.11/site-packages/langchain/indexes/_sql_record_manager.py", line 269, in update
> update_time = self.get_time()
> ^^^^^^^^^^^^^^^
> File "/chat-service/venv/lib/python3.11/site-packages/langchain/indexes/_sql_record_manager.py", line 205, in get_time
> raise NotImplementedError(f"Not implemented for dialect {self.dialect}")
> NotImplementedError: Not implemented for dialect mysql
>
> During handling of the above exception, another exception occurred:
>
### Description
- `record_manager.get_time()` is not working
- calling `record_manager.get_time()` raises the error above; a minimal sketch of the setup is below
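A minimal sketch of the setup (assuming `SQLRecordManager` is created with a MySQL connection string, which is what produces `dialect mysql` in the error):
```python
from langchain.indexes import SQLRecordManager

# Assumed setup: any MySQL SQLAlchemy URL should reproduce the error, since
# get_time() only special-cases the sqlite and postgresql dialects here.
record_manager = SQLRecordManager(
    "my_namespace",
    db_url="mysql+pymysql://user:password@localhost:3306/mydb",
)
record_manager.create_schema()
record_manager.get_time()  # -> NotImplementedError: Not implemented for dialect mysql
```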
### System Info
python3.11 -m pip freeze | grep langchain
langchain==0.1.0
langchain-community==0.0.12
langchain-core==0.1.9
langchain-google-genai==0.0.4
langchain-google-vertexai==0.0.1.post1
langchain-openai==0.0.2 | record_manager.get_time error | https://api.github.com/repos/langchain-ai/langchain/issues/16919/comments | 1 | 2024-02-02T04:21:56Z | 2024-05-10T16:09:55Z | https://github.com/langchain-ai/langchain/issues/16919 | 2,114,024,734 | 16,919 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code:

```python
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.schema import HumanMessage
from langchain.chains import LLMChain
from langchain.chains import ConversationChain
my_functions = [
{
'name': 'raise_ticket',
'description': 'Get the details for a ticket',
'parameters': {
'type': 'object',
'properties': {
'projectName':{
'type': 'string',
'description': "Project Name : (e.g. 'ABC', 'XYZ')"
},
'issueType':{
'type': 'string',
'description': "Issue Type : (e.g. 'Change Request', 'Service Request')"
},
},
...
...
"required":["projectName","issueType"]
}
}
]
llm = ChatOpenAI(temperature=0.0, model="gpt-3.5-turbo-0613")
memory = ConversationBufferMemory()
conversation = ConversationChain(
llm=llm,
memory = memory,
verbose=False
)
message= conversation.invoke([HumanMessage(content='What are the choices of the Issue Type')],
functions=my_functions,
memory=memory)
"""
### Error Message and Stack Trace (if applicable)
Not an error, but the function call is not happening. I always get a generic response from the model.
### Description
I was expecting the LangChain library to recognize the function definition and trigger a function call, but it does not.
The same function definitions work with OpenAI directly.
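For comparison, binding the functions directly to the model (outside of `ConversationChain`) is the pattern I'd expect to work based on the docs; a sketch, not verified with memory in the loop:
```python
# Sketch: bind the function definitions to the model itself, since
# ConversationChain does not appear to forward the `functions` kwarg.
llm_with_functions = ChatOpenAI(
    temperature=0.0, model="gpt-3.5-turbo-0613"
).bind(functions=my_functions)

response = llm_with_functions.invoke(
    [HumanMessage(content="What are the choices of the Issue Type")]
)
print(response.additional_kwargs.get("function_call"))
```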
### System Info
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.17 | Function calling not working with ConversationChain with Memory | https://api.github.com/repos/langchain-ai/langchain/issues/16917/comments | 3 | 2024-02-02T03:19:20Z | 2024-07-28T16:05:48Z | https://github.com/langchain-ai/langchain/issues/16917 | 2,113,960,320 | 16,917 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
## Goal
Improve streaming in LangChain for chat models / language models.
## Background
Many chat and language models implement a streaming mode in which they stream tokens one at a time.
LangChain has a callback system that is useful for logging and important APIs like "stream", "stream_log" and "stream_events".
Currently many models incorrectly yield the token (chat generation) before invoking the callback.
## Acceptance criteria
For a PR to be accepted and merged, the PR should:
- [ ] Fix the code to make sure that the callback is called before the token is yielded
- [ ] Link to this issue
- [ ] Change ONE and only ONE model
- [ ] Fix the sync and async implementations if both are defined
## Example PR
Here is an example PR that shows the fix for the OpenAI chat model:
https://github.com/langchain-ai/langchain/pull/16909
## Find models that need to be fixed
The easiest way to find places in the code that may need to be fixed is using git grep
```bash
git grep -C 5 "\.on_llm_new"
```
Examine the output to determine whether the callback is called before the token is yielded (correct) or after (needs to be fixed). | Callback for on_llm_new_token should be invoked before the token is yielded by the model | https://api.github.com/repos/langchain-ai/langchain/issues/16913/comments | 1 | 2024-02-02T00:39:51Z | 2024-06-27T20:09:31Z | https://github.com/langchain-ai/langchain/issues/16913 | 2,113,804,737 | 16,913 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Any chat models that support function calling should have an example of function calling in their integration page | DOC: Add function calling example to all chat model integration pages that support | https://api.github.com/repos/langchain-ai/langchain/issues/16911/comments | 0 | 2024-02-02T00:26:53Z | 2024-05-10T16:09:45Z | https://github.com/langchain-ai/langchain/issues/16911 | 2,113,792,568 | 16,911 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
#### code
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from sentence_transformers import SentenceTransformer
text = """
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Curabitur varius sodales bibendum. Nulla nec ornare ipsum. Nam eleifend convallis mi eget gravida. Cras mi lacus, varius ut feugiat et, sollicitudin ultricies ipsum. Cras varius odio eget facilisis scelerisque. Sed mauris risus, luctus at sagittis at, venenatis eget turpis. Ut euismod non est a accumsan. Sed pretium velit sed tellus iaculis gravida a sed elit. Nam luctus tristique sem et tincidunt. Nam cursus semper lectus, non dapibus nunc. Nulla et lectus in erat tempus eleifend sit amet non purus. Proin ut vestibulum lectus, vitae convallis tortor.
Ut turpis nibh, lacinia in odio ac, interdum volutpat lectus. Donec fermentum hendrerit arcu et fringilla. Etiam placerat vestibulum magna, non pellentesque orci convallis ac. Nunc eget risus pharetra, consectetur lacus eget, vehicula est. Quisque blandit orci in posuere porttitor. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Praesent pellentesque varius nibh ut iaculis. Morbi mi justo, imperdiet in vestibulum at, condimentum quis sem. Aliquam malesuada lorem tortor, eu accumsan dui euismod quis. Nullam rutrum libero at mauris mollis sodales. Cras scelerisque non risus vel auctor. Suspendisse dapibus volutpat eros id malesuada.
Curabitur dictum laoreet ultrices. Nulla orci erat, pharetra euismod dictum a, consequat vel lorem. Aenean euismod massa felis, ut lobortis nisl accumsan in. Duis dolor lacus, tempor in rhoncus sed, fringilla id mi. Duis in eros at purus sagittis ultricies vitae a orci. Maecenas felis nunc, dapibus nec turpis id, consectetur semper eros. Vivamus tincidunt pretium urna, nec condimentum felis ultrices ut. Donec tempor urna in nisl pharetra, eu viverra enim facilisis. Nullam blandit nibh dictum vestibulum congue. Duis interdum ornare rutrum. Maecenas aliquam sem non lorem venenatis, eget facilisis mauris finibus. In hac habitasse platea dictumst. Vivamus vitae tincidunt eros.
Curabitur ac diam vitae ligula elementum aliquam. Donec posuere egestas pretium. Nulla eget lorem dapibus, tempus sapien maximus, eleifend dui. Aenean placerat nec nisl at tincidunt. Fusce vel nibh nec sapien rutrum varius sed ullamcorper nisi. Duis venenatis, tortor non hendrerit rhoncus, augue enim sollicitudin lectus, in accumsan ante nulla a nunc. Donec odio arcu, sodales in ligula vitae, dignissim molestie neque. Pellentesque dignissim pharetra nisi sit amet molestie. Curabitur at laoreet purus. Curabitur posuere sapien eu urna iaculis egestas eget et ipsum. Fusce porta sit amet orci non auctor. Praesent facilisis porttitor luctus. Interdum et malesuada fames ac ante ipsum primis in faucibus.
Suspendisse accumsan ante eget magna condimentum, sit amet eleifend enim auctor. Maecenas lorem enim, tempus at lacinia non, condimentum sed justo. Nam iaculis viverra lorem ut mollis. Vivamus convallis lacus quis diam pellentesque pulvinar. Donec vel mauris mattis, dictum nulla vel, volutpat metus. Sed tincidunt mi vitae sem tristique, vitae pretium sapien facilisis. Vestibulum condimentum dui dictum, molestie mauris et, pharetra tortor. Nunc feugiat orci ac lectus imperdiet, ut bibendum quam egestas. Mauris bibendum at nisl eu placerat. Aenean mollis ligula et metus tincidunt aliquam. Integer maximus porta purus at convallis. Maecenas lectus dui, tempus eget efficitur sit amet, ullamcorper ut mauris.
"""
model_name = "distilbert-base-uncased"
model = SentenceTransformer(model_name)
tokenizer = model.tokenizer
# Token-count helper used to report chunk sizes below
def cnt(txt):
    return len(tokenizer.tokenize(txt))

# Using from_huggingface_tokenizer
splitter = RecursiveCharacterTextSplitter.from_huggingface_tokenizer(tokenizer, chunk_size=200, chunk_overlap=10, separators=[" "])
chunks = splitter.split_text(text)
print("chunk sizes: from_huggingface_tokenizer, which uses tokenizer.encode under the hood:\n", [cnt(c) for c in chunks])

# same tokenizer, but with a length_function based on tokenizer.tokenize
splitter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=10, separators=[" "], length_function=cnt)
chunks = splitter.split_text(text)
print("Using length function with tokenizer.tokenize \n", [cnt(c) for c in chunks])
```
#### output
``` python
chunk sizes: from_huggingface_tokenizer, which uses tokenizer.encode under the hood:
[77, 78, 71, 78, 75, 78, 78, 73, 74, 78, 80, 75, 79, 74, 76, 80, 76, 22]
Using length function with tokenizer.tokenize
[198, 198, 200, 199, 200, 198, 133]
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Bug: TextSplitter produces smaller chunks than specified with chunk_size, when instantiated `from_huggingface_tokenizer()`.
Explanation: `_merge_splits` counts `total` incorrectly when instantiated with `from_huggingface_tokenizer()`. BertTokenizer appends start and end tokens to any string it is ran on `['101', '102']`. It results in incorrect string length computation during splits merge, with total overestimating "real" chunk length. I.e. :
```python
>>>> self._length_function("")
2
```
therefore, here
```python
177: separator_len = self._length_function(separator)
...
183: _len = self._length_function(d)
...
210: total += _len + (separator_len if len(current_doc) > 1 else 0)
```
`total` overcounts the real chunk length and stops the merge before the desired chunk_size is reached. **This significantly impacts the resulting chunk sizes when using very commonly occurring separators, e.g. whitespace. See the example above.**
Suggested solution:
Replace all occurrences of `tokenizer.encode` with `tokenizer.tokenize` in [text_splitter.py](https://github.com/langchain-ai/langchain/blob/7d03d8f586f123e5059cbd0f45cb4c701bf0976f/libs/langchain/langchain/text_splitter.py#L702).
i.e.
```python
def _huggingface_tokenizer_length(text: str) -> int:
return len(tokenizer.encode(text)) # replace this line with return len(tokenizer.tokenize(text))
```
### System Info
python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #40-Ubuntu SMP PREEMPT_DYNAMIC Tue Nov 14 14:18:00 UTC 2023
> Python Version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.1.18
> langchain: 0.1.5
> langchain_community: 0.0.17
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| Small chunks bug in TextSplitter when instantiated with from_huggingface_tokenizer | https://api.github.com/repos/langchain-ai/langchain/issues/16894/comments | 3 | 2024-02-01T19:20:15Z | 2024-05-31T23:49:19Z | https://github.com/langchain-ai/langchain/issues/16894 | 2,113,287,411 | 16,894 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
import os
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_google_genai import GoogleGenerativeAIEmbeddings
from langchain_community.vectorstores.pgvector import PGVector
#import psycopg2
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
model = ChatGoogleGenerativeAI(model="gemini-pro")
data = [
"Leo has salary 90000 in IT department.",
"Mary has salary 60000 in IT department.",
"Tom has salary 30000 in IT department."
]
CONNECTION_STRING = "postgresql+psycopg2://postgres:1111@localhost:5432/b2b"
COLLECTION_NAME = 'salary_vectors'
db = PGVector.from_texts(
embedding=embeddings,
texts=data,
collection_name=COLLECTION_NAME,
connection_string = CONNECTION_STRING ,
)
print("Done")
### Error Message and Stack Trace (if applicable)
Exception ignored in: <function PGVector.__del__ at 0x000001E3D6CB5080>
Traceback (most recent call last):
File "C:\Users\philip.chao\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_community\vectorstores\pgvector.py", line 229, in __del__
AttributeError: 'NoneType' object has no attribute 'Connection'
### Description
Postgres version 15 running on Docker.
PS: The same code works with OpenAI embeddings, but fails when using Google Gemini.
### System Info
OS: Windows 10 pro | PGVector from_texts got error when using gemini | https://api.github.com/repos/langchain-ai/langchain/issues/16879/comments | 2 | 2024-02-01T13:43:34Z | 2024-05-15T16:07:24Z | https://github.com/langchain-ai/langchain/issues/16879 | 2,112,553,113 | 16,879 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain_mistralai.embeddings import MistralAIEmbeddings
all_documents = []
embeddings = MistralAIEmbeddings()
for file_data in selected_files:
with tempfile.NamedTemporaryFile(mode="wb", delete=False, suffix='.pdf') as temp_file:
temp_file.write(file_data)
file_name = temp_file.name
loader = PyPDFLoader(file_name).load()
docs = text_splitter.split_documents(loader)
all_documents.extend(docs)
db = FAISS.from_documents(all_documents,embeddings)
### Error Message and Stack Trace (if applicable)
An error occurred with MistralAI: Cannot stream response. Status: 400
### Description
When uploading multiple PDFs I cannot do document probing, because the MistralAI embeddings call errors out.
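To help narrow this down, here is a debugging sketch I am using to check whether the failure is tied to how much text is sent per request (the batch size of 10 is arbitrary; `all_documents` and `embeddings` come from the snippet above):
```python
# Embed the chunk texts in small batches and report which batch (if any) fails.
texts = [doc.page_content for doc in all_documents]
batch_size = 10

for start in range(0, len(texts), batch_size):
    batch = texts[start:start + batch_size]
    try:
        embeddings.embed_documents(batch)
        print(f"batch starting at {start}: ok ({len(batch)} chunks)")
    except Exception as exc:
        print(f"batch starting at {start}: failed -> {exc}")
```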
### System Info
langchain==0.1.0
langchain-community==0.0.10
langchain-core==0.1.8
langchain-experimental==0.0.49
langchain-google-genai==0.0.5
langchain-mistralai==0.0.3
langchain-nvidia-ai-endpoints==0.0.1
python 3.11.7
macos 14.2.1 (23C71) sonoma | Mistral AI embedding cannot stream response. status 400 | https://api.github.com/repos/langchain-ai/langchain/issues/16869/comments | 3 | 2024-02-01T10:51:40Z | 2024-05-09T16:10:09Z | https://github.com/langchain-ai/langchain/issues/16869 | 2,112,193,668 | 16,869 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Same code as in the docs [here](https://python.langchain.com/docs/integrations/llms/llm_caching#redis-cache)
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/Users/dingusagar/inference.py", line 181, in infer
response = self.chain.invoke(inputs)
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/schema/runnable/base.py", line 1213, in invoke
input = step.invoke(
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/chat_models/base.py", line 142, in invoke
self.generate_prompt(
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/chat_models/base.py", line 459, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/chat_models/base.py", line 349, in generate
raise e
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/chat_models/base.py", line 339, in generate
self._generate_with_cache(
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/chat_models/base.py", line 500, in _generate_with_cache
cache_val = llm_cache.lookup(prompt, llm_string)
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/langchain/cache.py", line 392, in lookup
results = self.redis.hgetall(self._key(prompt, llm_string))
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/commands/core.py", line 4867, in hgetall
return self.execute_command("HGETALL", name)
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/client.py", line 1255, in execute_command
conn = self.connection or pool.get_connection(command_name, **options)
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 1441, in get_connection
connection.connect()
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 704, in connect
raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error 61 connecting to 127.0.0.1:6379. Connection refused.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 698, in connect
sock = self.retry.call_with_retry(
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/retry.py", line 46, in call_with_retry
return do()
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 699, in <lambda>
lambda: self._connect(), lambda error: self.disconnect(error)
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 1089, in _connect
sock = super()._connect()
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 987, in _connect
raise err
File "/Users/dingusagar/envs/python_env/lib/python3.9/site-packages/redis/connection.py", line 975, in _connect
sock.connect(socket_address)
ConnectionRefusedError: [Errno 61] Connection refused
```
### Description
I am using RedisCache as per the docs [here](https://python.langchain.com/docs/integrations/llms/llm_caching#redis-cache)
I was testing how robust the system is if the Redis connection fails for some reason or Redis runs out of memory.
It looks like the whole chain throws an error if the Redis connection URL is not reachable.
I would like LangChain to handle the Redis failure internally and fall back to a direct API call to the LLM, logging the error of course. This would make the system more robust to failures.
Does it make sense to add this feature? If so, I can help with raising a PR.
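To make the idea concrete, here is a rough sketch of the behaviour I have in mind — a thin wrapper around `RedisCache` that logs and swallows connection errors so a Redis outage degrades to "no cache" instead of breaking the chain (the class and method bodies are illustrative, not an existing LangChain API):
```python
import logging
from typing import Any, Optional

from langchain.cache import RedisCache

logger = logging.getLogger(__name__)


class FaultTolerantRedisCache(RedisCache):
    """Treat an unreachable Redis as a cache miss instead of raising."""

    def lookup(self, prompt: str, llm_string: str) -> Optional[Any]:
        try:
            return super().lookup(prompt, llm_string)
        except Exception:  # e.g. redis.exceptions.ConnectionError
            logger.warning("Redis lookup failed; falling back to the LLM", exc_info=True)
            return None  # treated as a cache miss -> normal LLM call

    def update(self, prompt: str, llm_string: str, return_val: Any) -> None:
        try:
            super().update(prompt, llm_string, return_val)
        except Exception:
            logger.warning("Redis update failed; response not cached", exc_info=True)
```
It would be used exactly like `RedisCache` today, e.g. `langchain.llm_cache = FaultTolerantRedisCache(redis_=Redis())`.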
### System Info
langchain==0.0.333 | RedisCache does't handle errors from redis. | https://api.github.com/repos/langchain-ai/langchain/issues/16866/comments | 5 | 2024-02-01T10:12:47Z | 2024-02-21T17:15:20Z | https://github.com/langchain-ai/langchain/issues/16866 | 2,112,095,388 | 16,866 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
class GitHubIssuesLoader(BaseGitHubLoader):
"""Load issues of a GitHub repository."""
include_prs: bool = True
"""If True include Pull Requests in results, otherwise ignore them."""
milestone: Union[int, Literal["*", "none"], None] = None
"""If integer is passed, it should be a milestone's number field.
If the string '*' is passed, issues with any milestone are accepted.
If the string 'none' is passed, issues without milestones are returned.
"""
state: Optional[Literal["open", "closed", "all"]] = None
"""Filter on issue state. Can be one of: 'open', 'closed', 'all'."""
assignee: Optional[str] = None
"""Filter on assigned user. Pass 'none' for no user and '*' for any user."""
creator: Optional[str] = None
"""Filter on the user that created the issue."""
mentioned: Optional[str] = None
"""Filter on a user that's mentioned in the issue."""
labels: Optional[List[str]] = None
"""Label names to filter one. Example: bug,ui,@high."""
sort: Optional[Literal["created", "updated", "comments"]] = None
"""What to sort results by. Can be one of: 'created', 'updated', 'comments'.
Default is 'created'."""
direction: Optional[Literal["asc", "desc"]] = None
"""The direction to sort the results by. Can be one of: 'asc', 'desc'."""
since: Optional[str] = None
"""Only show notifications updated after the given time.
This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ."""
```
### Error Message and Stack Trace (if applicable)
This class lacks the page and per_page parameters, and I want to add these two parameters to implement pagination functionality.
### Description
The current implementation of GitHubIssuesLoader lacks pagination support, which can lead to inefficiencies when retrieving a large number of GitHub issues. This enhancement aims to introduce pagination functionality to the loader, allowing users to retrieve issues in smaller, manageable batches.
This improvement will involve adding page and per_page parameters to control the pagination of API requests, providing users with greater flexibility and performance optimization. Additionally, proper validation will be implemented to ensure valid and non-negative values for the pagination parameters.
This enhancement will enhance the usability and efficiency of the GitHubIssuesLoader class, making it more suitable for handling repositories with a substantial number of issues.
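To make the proposal concrete, here is a sketch of what I have in mind (it assumes the loader keeps building its query string in a `query_params` property, as it does today; the subclass and validation details are illustrative only):
```python
from typing import Optional

from langchain_community.document_loaders import GitHubIssuesLoader


class PaginatedGitHubIssuesLoader(GitHubIssuesLoader):
    """Sketch: expose GitHub's page/per_page query parameters."""

    page: Optional[int] = None      # which result page to fetch (GitHub default: 1)
    per_page: Optional[int] = None  # items per page (GitHub default: 30, max: 100)

    @property
    def query_params(self) -> str:
        params = super().query_params
        extra = "&".join(
            f"{name}={value}"
            for name, value in (("page", self.page), ("per_page", self.per_page))
            if value is not None
        )
        return f"{params}&{extra}" if extra else params
```
The real change would add these fields (plus non-negative validation) to `GitHubIssuesLoader` itself rather than a subclass.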
### System Info
System Information
------------------
> OS: Linux
> OS Version: #18~22.04.1-Ubuntu SMP Tue Nov 21 19:25:02 UTC 2023
> Python Version: 3.10.13 (main, Dec 8 2023, 04:58:09) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.1.17
> langchain: 0.1.4
> langchain_community: 0.0.16
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Enhancement: Add Pagination Support to GitHubIssuesLoader for Efficient Retrieval of GitHub Issues | https://api.github.com/repos/langchain-ai/langchain/issues/16864/comments | 3 | 2024-02-01T09:23:25Z | 2024-05-15T16:07:19Z | https://github.com/langchain-ai/langchain/issues/16864 | 2,111,992,332 | 16,864 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
``` python
import os
import requests
from langchain.llms import HuggingFaceHub
from langchain.chains import LLMChain
os.environ['HUGGINGFACEHUB_API_TOKEN'] = "your_token"
prompt = "The answer to universe is "
repo_id = "mistralai/Mistral-7B-v0.1"
llm = HuggingFaceHub(
repo_id=repo_id,
model_kwargs={
"max_new_tokens": 10
}
)
langchain_response = llm.invoke(prompt)
url = f"https://api-inference.huggingface.co/models/{repo_id}"
headers = {"Authorization": f"Bearer {os.environ['HUGGINGFACEHUB_API_TOKEN']}"}
def query(payload):
response = requests.post(url, headers=headers, json=payload)
return response.json()
huggingfacehub_response = query({
"inputs": prompt,
"parameters": {
"max_new_tokens": 10
}
})
print([{"generated_text": langchain_response}])
print(huggingfacehub_response)
```
### Error Message and Stack Trace (if applicable)
```
[{'generated_text': 'The answer to universe is 42.\n\nThe answer to life is 42.\n\nThe answer to everything is 42.\n\nThe answer to the question of why is 42.\n\nThe answer to the question of what is 42.\n\nThe answer to the question of how is 42.\n\nThe answer to the question of who is 42.\n\nThe answer to the question of when is 42.\n\nThe answer to'}]
[{'generated_text': 'The answer to universe is 42.\n\nThe answer to life is'}]
```
### Description
It looks like the `HuggingFaceHub` LLM sends `model_kwargs` under the wrong parameter name in the JSON payload. The correct name should be `parameters` instead of `params`, according to the [HuggingFace API documentation](https://huggingface.co/docs/api-inference/en/detailed_parameters#text-generation-task).
https://github.com/langchain-ai/langchain/blob/2e5949b6f8bc340a992b9f9f9fb4751f87979e15/libs/community/langchain_community/llms/huggingface_hub.py#L133
As a result, `model_kwargs` has no effect on the model output as can be seen in the example above.
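To make the difference explicit, here are the two payload shapes side by side (illustrative, reusing the values from the example above; the first is what the wrapper appears to send today):
```python
prompt = "The answer to universe is "

# Sent today ("params" is not a recognised key, so it is silently ignored):
current_payload = {"inputs": prompt, "params": {"max_new_tokens": 10}}

# What the Inference API documents for text-generation tasks:
expected_payload = {"inputs": prompt, "parameters": {"max_new_tokens": 10}}
```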
### System Info
System Information
------------------
> OS: Linux
> OS Version: #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:31 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.17
> langchain: 0.1.4
> langchain_community: 0.0.16
> langchainplus_sdk: 0.0.20
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | HugginFaceHub LLM has a wrong parameters name | https://api.github.com/repos/langchain-ai/langchain/issues/16849/comments | 2 | 2024-01-31T23:19:41Z | 2024-05-09T16:09:58Z | https://github.com/langchain-ai/langchain/issues/16849 | 2,111,153,617 | 16,849 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code:
```
from langchain_community.llms.azureml_endpoint import AzureMLOnlineEndpoint
from langchain_community.llms.azureml_endpoint import AzureMLEndpointApiType
from langchain_community.llms.azureml_endpoint import DollyContentFormatter
# ------------------------------------
# Allow Self Signed Https code
# ------------------------------------
llm = AzureMLOnlineEndpoint(
endpoint_url="https://myproject.eastus2.inference.ml.azure.com/score",
endpoint_api_type=AzureMLEndpointApiType.realtime,
endpoint_api_key="my-key",
content_formatter=DollyContentFormatter(),
model_kwargs={"temperature": 0.8, "max_tokens": 300},
)
response = llm.invoke("Write me a song about sparkling water:")
response
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/abel/Desktop/source/lang/dolly.py", line 24, in <module>
response = llm.invoke("Write me a song about sparkling water:")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 235, in invoke
self.generate_prompt(
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 530, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 703, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 567, in _generate_helper
raise e
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 554, in _generate_helper
self._generate(
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_community/llms/azureml_endpoint.py", line 489, in _generate
response_payload = self.http_client.call(
^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_community/llms/azureml_endpoint.py", line 50, in call
response = urllib.request.urlopen(req, timeout=kwargs.get("timeout", 50))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 216, in urlopen
return opener.open(url, data, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 525, in open
response = meth(req, response)
^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 634, in http_response
response = self.parent.error(
^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 563, in error
return self._call_chain(*args)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 496, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 424: Failed Dependency
### Description
AzureMLOnlineEndpoint is not working (HTTP 424 error), but the same URL and API key work with a plain HTTP request. The working plain-HTTP code is below:
```
import urllib.request
import json
import os
import ssl
def allowSelfSignedHttps(allowed):
# bypass the server certificate verification on client side
if allowed and not os.environ.get('PYTHONHTTPSVERIFY', '') and getattr(ssl, '_create_unverified_context', None):
ssl._create_default_https_context = ssl._create_unverified_context
allowSelfSignedHttps(True) # this line is needed if you use self-signed certificate in your scoring service.
# Request data goes here
# The example below assumes JSON formatting which may be updated
# depending on the format your endpoint expects.
# More information can be found here:
# https://docs.microsoft.com/azure/machine-learning/how-to-deploy-advanced-entry-script
data = {
"input_data": [
"Write me a super short song about sparkling water"
],
"params": {
"top_p": 0.9,
"temperature": 0.2,
"max_new_tokens": 50,
"do_sample": True,
"return_full_text": True
}
}
body = str.encode(json.dumps(data))
url = 'https://myProject.eastus2.inference.ml.azure.com/score'
# Replace this with the primary/secondary key or AMLToken for the endpoint
api_key = 'my-key'
if not api_key:
raise Exception("A key should be provided to invoke the endpoint")
# The azureml-model-deployment header will force the request to go to a specific deployment.
# Remove this header to have the request observe the endpoint traffic rules
headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key), 'azureml-model-deployment': 'databricks-dolly-v2-12b-15' }
req = urllib.request.Request(url, body, headers)
try:
response = urllib.request.urlopen(req)
result = response.read()
print(result)
except urllib.error.HTTPError as error:
print("The request failed with status code: " + str(error.code))
# Print the headers - they include the requert ID and the timestamp, which are useful for debugging the failure
print(error.info())
print(error.read().decode("utf8", 'ignore'))
```
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:33:31 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T8112
> Python Version: 3.11.4 (v3.11.4:d2340ef257, Jun 6 2023, 19:15:51) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.1.17
> langchain: 0.1.4
> langchain_community: 0.0.16
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | AzureMLOnlineEndpoint not working, 424 error, but same url and api key works with standard http | https://api.github.com/repos/langchain-ai/langchain/issues/16845/comments | 7 | 2024-01-31T18:37:32Z | 2024-07-11T11:24:43Z | https://github.com/langchain-ai/langchain/issues/16845 | 2,110,730,749 | 16,845 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain_community.llms import Ollama
llm = Ollama(model="codellama:70b-python")
from langchain.agents.agent_types import AgentType
from langchain_experimental.agents.agent_toolkits import create_csv_agent, create_pandas_dataframe_agent
from langchain_openai import ChatOpenAI, OpenAI
import pandas as pd
import os
import re
from datetime import datetime
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
df = pd.read_csv("sales_purchase_20Jan.csv")
agent = create_pandas_dataframe_agent(
llm, df,
verbose=True, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
handle_parsing_errors=True, number_of_head_rows=5
)
instructions = """
Perform the following steps to address the given query:
Step 1: Begin by verifying if the provided dataframe and instructions contain sufficient information for the required analysis. In case of insufficient details, respond with:
```json
{
"table": {},
"message": ["Please review and modify the prompt with more specifics."]
}
```
Step 2: Should the query necessitate generating a table, structure your response using the following format:
```json
{
"table": {
"columns": ["column1", "column2", ...],
"data": [[value1, value2, ...], [value1, value2, ...], ...]
},
"message": []
}
```
Step 3: For queries requiring solely a textual response, utilize the following format:
```json
{
"table": {},
"message": ["Your text response here"]
}
```
Step 4: Ensure consistent usage of standard decimal format without scientific notation. Replace any None/Null values with 0.0."
Query: """
prompt = instructions + '''Create a summary table that displays the cumulative sales for each item category ('Atta', 'Salt', 'Salt-C') across different months ('Month_Year'). The table should contain columns for 'Month_Year,' individual Item categories, and a 'Grand Total' column. The values in the table should represent the total sales value ('Sale_Value') for each Item category within the corresponding month.'''
agent.invoke(prompt)
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
OutputParserException Traceback (most recent call last)
File /usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py:1125, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1124 # Call the LLM to see what to do.
-> 1125 output = self.agent.plan(
1126 intermediate_steps,
1127 callbacks=run_manager.get_child() if run_manager else None,
1128 **inputs,
1129 )
1130 except OutputParserException as e:
File /usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py:695, in Agent.plan(self, intermediate_steps, callbacks, **kwargs)
694 full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs)
--> 695 return self.output_parser.parse(full_output)
File /usr/local/lib/python3.10/dist-packages/langchain/agents/mrkl/output_parser.py:63, in MRKLOutputParser.parse(self, text)
62 if not re.search(r"Action\s*\d*\s*:[\s]*(.*?)", text, re.DOTALL):
---> 63 raise OutputParserException(
64 f"Could not parse LLM output: `{text}`",
65 observation=MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE,
66 llm_output=text,
67 send_to_llm=True,
68 )
69 elif not re.search(
70 r"[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)", text, re.DOTALL
71 ):
OutputParserException: Could not parse LLM output: ` I need to perform a Pivot Table Calculation in order to get Grand Totals for each item and place it at bottom of the table.
Action Input: 'pivot'`
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[23], line 1
----> 1 agent.invoke(prompt)
File /usr/local/lib/python3.10/dist-packages/langchain/chains/base.py:162, in Chain.invoke(self, input, config, **kwargs)
160 except BaseException as e:
161 run_manager.on_chain_error(e)
--> 162 raise e
163 run_manager.on_chain_end(outputs)
164 final_outputs: Dict[str, Any] = self.prep_outputs(
165 inputs, outputs, return_only_outputs
166 )
File /usr/local/lib/python3.10/dist-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
149 run_manager = callback_manager.on_chain_start(
150 dumpd(self),
151 inputs,
152 name=run_name,
153 )
154 try:
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
159 )
160 except BaseException as e:
161 run_manager.on_chain_error(e)
File /usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py:1391, in AgentExecutor._call(self, inputs, run_manager)
1389 # We now enter the agent loop (until it returns something).
1390 while self._should_continue(iterations, time_elapsed):
-> 1391 next_step_output = self._take_next_step(
1392 name_to_tool_map,
1393 color_mapping,
1394 inputs,
1395 intermediate_steps,
1396 run_manager=run_manager,
1397 )
1398 if isinstance(next_step_output, AgentFinish):
1399 return self._return(
1400 next_step_output, intermediate_steps, run_manager=run_manager
1401 )
File /usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py:1097, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1088 def _take_next_step(
1089 self,
1090 name_to_tool_map: Dict[str, BaseTool],
(...)
1094 run_manager: Optional[CallbackManagerForChainRun] = None,
1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1096 return self._consume_next_step(
-> 1097 [
1098 a
1099 for a in self._iter_next_step(
1100 name_to_tool_map,
1101 color_mapping,
1102 inputs,
1103 intermediate_steps,
1104 run_manager,
1105 )
1106 ]
1107 )
File /usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py:1097, in <listcomp>(.0)
1088 def _take_next_step(
1089 self,
1090 name_to_tool_map: Dict[str, BaseTool],
(...)
1094 run_manager: Optional[CallbackManagerForChainRun] = None,
1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1096 return self._consume_next_step(
-> 1097 [
1098 a
1099 for a in self._iter_next_step(
1100 name_to_tool_map,
1101 color_mapping,
1102 inputs,
1103 intermediate_steps,
1104 run_manager,
1105 )
1106 ]
1107 )
File /usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py:1136, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1134 raise_error = False
1135 if raise_error:
-> 1136 raise ValueError(
1137 "An output parsing error occurred. "
1138 "In order to pass this error back to the agent and have it try "
1139 "again, pass `handle_parsing_errors=True` to the AgentExecutor. "
1140 f"This is the error: {str(e)}"
1141 )
1142 text = str(e)
1143 if isinstance(self.handle_parsing_errors, bool):
ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: ` I need to perform a Pivot Table Calculation in order to get Grand Totals for each item and place it at bottom of the table.
Action Input: 'pivot'`
### Description
File /usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py:1136, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1134 raise_error = False
1135 if raise_error:
-> 1136 raise ValueError(
1137 "An output parsing error occurred. "
1138 "In order to pass this error back to the agent and have it try "
1139 "again, pass `handle_parsing_errors=True` to the AgentExecutor. "
1140 f"This is the error: {str(e)}"
1141 )
1142 text = str(e)
1143 if isinstance(self.handle_parsing_errors, bool):
ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: ` I need to perform a Pivot Table Calculation in order to get Grand Totals for each item and place it at bottom of the table.
Action Input: 'pivot'`
### System Info
System Information
------------------
> OS: Linux
> OS Version: #184-Ubuntu SMP Tue Oct 31 09:21:49 UTC 2023
> Python Version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.17
> langchain: 0.1.4
> langchain_community: 0.0.16
> langchain_experimental: 0.0.49
> langchain_openai: 0.0.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | ValueError: An output parsing error occurred | https://api.github.com/repos/langchain-ai/langchain/issues/16843/comments | 3 | 2024-01-31T18:15:28Z | 2024-05-09T16:09:53Z | https://github.com/langchain-ai/langchain/issues/16843 | 2,110,694,010 | 16,843 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code:
```python
llm = Bedrock(
credentials_profile_name="Bedrock",
model_id="amazon.titan-text-express-v1",
model_kwargs={
"temperature": 0.9,
},
verbose=True
)
agent_executor = create_sql_agent(
llm,
db=db,
verbose=True
)
agent_executor.invoke("Retrieve all table data from the last 3 months.")
```
### Error Message and Stack Trace (if applicable)
> Entering new AgentExecutor chain...
Traceback (most recent call last):
File "/Users/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/langchain_community/llms/bedrock.py", line 533, in _prepare_input_and_invoke_stream
response = self.client.invoke_model_with_response_stream(**request_options)
File "/Users/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/botocore/client.py", line 553, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/botocore/client.py", line 1009, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the InvokeModelWithResponseStream operation: Malformed input request: string [
Observation] does not match pattern ^(\|+|User:)$, please reformat your input and try again.
### Description
The function `create_react_agent` in langchain/agents/react/agent.py binds the stop sequence `["\nObservation"]` to the runnable, making it incompatible with Bedrock's validation regex `^(\|+|User:)$`.
When line 103 is changed from
```python
llm_with_stop = llm.bind(stop=["\nObservation"])
```
to
```python
llm_with_stop = llm.bind(stop=["User:"])
```
the call to invoke the model succeeds as part of the agent executor chain, because "User:" is one of the stop sequences AWS allows. I think these limitations on the allowed stop sequences are a bit nonsensical, and this may be a bug on AWS's side. However, hard-coding the stop sequence into the ReAct agent constructor means it cannot be worked around without modifying the LangChain code.
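One possible shape for a fix, rather than swapping one hard-coded string for another, would be to let callers choose the stop sequence (a sketch only — the `stop_sequence` argument is my suggestion, not an existing parameter):
```python
def create_react_agent(llm, tools, prompt, *, stop_sequence=("\nObservation",)):
    # ... existing prompt / tool wiring unchanged ...
    llm_with_stop = llm.bind(stop=list(stop_sequence))
    # ... rest of the runnable construction unchanged ...
```
Callers hitting Bedrock's validation could then pass `stop_sequence=("User:",)` without patching the library.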
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:18 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6000
> Python Version: 3.10.13 (main, Jan 24 2024, 14:54:55) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.17
> langchain: 0.1.4
> langchain_community: 0.0.16
> langchain_openai: 0.0.5
> langchainhub: 0.1.14
> langserve: 0.0.41 | create_react_agent incompatible with AWS Bedrock input validation due to hard coded ['\nObservation:'] stop sequence | https://api.github.com/repos/langchain-ai/langchain/issues/16840/comments | 9 | 2024-01-31T17:20:49Z | 2024-03-05T08:57:13Z | https://github.com/langchain-ai/langchain/issues/16840 | 2,110,601,047 | 16,840 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
examples = [
{"input": "List all artists.", "query": "SELECT * FROM Artist;"},
{
"input": "Find all albums for the artist 'AC/DC'.",
"query": "SELECT * FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'AC/DC');",
},
{
"input": "List all tracks in the 'Rock' genre.",
"query": "SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');",
},
{
"input": "Find the total duration of all tracks.",
"query": "SELECT SUM(Milliseconds) FROM Track;",
},
{
"input": "List all customers from Canada.",
"query": "SELECT * FROM Customer WHERE Country = 'Canada';",
},
{
"input": "How many tracks are there in the album with ID 5?",
"query": "SELECT COUNT(*) FROM Track WHERE AlbumId = 5;",
},
{
"input": "Find the total number of invoices.",
"query": "SELECT COUNT(*) FROM Invoice;",
},
{
"input": "List all tracks that are longer than 5 minutes.",
"query": "SELECT * FROM Track WHERE Milliseconds > 300000;",
},
{
"input": "Who are the top 5 customers by total purchase?",
"query": "SELECT CustomerId, SUM(Total) AS TotalPurchase FROM Invoice GROUP BY CustomerId ORDER BY TotalPurchase DESC LIMIT 5;",
},
{
"input": "Which albums are from the year 2000?",
"query": "SELECT * FROM Album WHERE strftime('%Y', ReleaseDate) = '2000';",
},
{
"input": "How many employees are there",
"query": 'SELECT COUNT(*) FROM "Employee"',
},
]
We can create a few-shot prompt with them like so:
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate.from_template("User input: {input}\nSQL query: {query}")
prompt = FewShotPromptTemplate(
examples=examples[:5],
example_prompt=example_prompt,
prefix="You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run. Unless otherwise specificed, do not return more than {top_k} rows.\n\nHere is the relevant table info: {table_info}\n\nBelow are a number of examples of questions and their corresponding SQL queries.",
suffix="User input: {input}\nSQL query: ",
input_variables=["input", "top_k", "table_info"],
)
print(prompt.format(input="How many artists are there?", top_k=3, table_info="foo"))
db_url = URL.create(**db_config)
db = SQLDatabase.from_uri(db_url)
toolkit = SQLDatabaseToolkit(db=db, llm=self.llm)
agent_executor = create_sql_agent(
prefix=MSSQL_AGENT_PREFIX,
format_instructions = MSSQL_AGENT_FORMAT_INSTRUCTIONS,
llm=self.llm,
toolkit=toolkit,
callback_manager=self.callbacks,
verbose=True
)
logging.info(f"Login successful: {db_config['username']}")
response = agent_executor.run(query)
### Error Message and Stack Trace (if applicable)
not able to add FewShotPromptTemplate to create_sql_agent
### Description
I am not able to add a FewShotPromptTemplate to create_sql_agent for an Azure OpenAI bot.
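For reference, what I am trying to achieve looks like the pattern in the current SQL-agent docs, where the few-shot prompt is wrapped into a full chat prompt and handed to `create_sql_agent` via a `prompt` argument (this appears to need a newer 0.1.x-style `create_sql_agent` than the versions pinned below, so treat it as an assumption on my part):
```python
from langchain_core.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
)

full_prompt = ChatPromptTemplate.from_messages(
    [
        SystemMessagePromptTemplate(prompt=prompt),  # the FewShotPromptTemplate built above
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)

agent_executor = create_sql_agent(
    llm=self.llm,
    db=db,
    prompt=full_prompt,
    agent_type="openai-tools",
    verbose=True,
)
```
The `top_k` and `table_info` variables in my few-shot prefix would presumably need to be partially formatted first; that is part of what I am unsure about.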
### System Info
langchain==0.0.352
langchain-core==0.1.11
langchain-experimental==0.0.47
langchain-community==0.0.13 | not able to add FewShotPromptTemplate to create_sql_agent | https://api.github.com/repos/langchain-ai/langchain/issues/16837/comments | 2 | 2024-01-31T16:36:45Z | 2024-07-19T16:06:56Z | https://github.com/langchain-ai/langchain/issues/16837 | 2,110,509,305 | 16,837 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
toolkit = SQLDatabaseToolkit(db=db, llm=self.llm)
agent_executor = create_sql_agent(
prefix=MSSQL_AGENT_PREFIX,
format_instructions = MSSQL_AGENT_FORMAT_INSTRUCTIONS,
llm=self.llm,
toolkit=toolkit,
callback_manager=self.callbacks,
verbose=True
)
### Error Message and Stack Trace (if applicable)
toolkit = SQLDatabaseToolkit(db=db, llm=self.llm)
agent_executor = create_sql_agent(
prefix=MSSQL_AGENT_PREFIX,
format_instructions = MSSQL_AGENT_FORMAT_INSTRUCTIONS,
llm=self.llm,
toolkit=toolkit,
callback_manager=self.callbacks,
verbose=True
)
### Description
toolkit = SQLDatabaseToolkit(db=db, llm=self.llm)
agent_executor = create_sql_agent(
prefix=MSSQL_AGENT_PREFIX,
format_instructions = MSSQL_AGENT_FORMAT_INSTRUCTIONS,
llm=self.llm,
toolkit=toolkit,
callback_manager=self.callbacks,
verbose=True
)
### System Info
langchain==0.0.352
langchain-core==0.1.11
langchain-experimental==0.0.47
langchain-community==0.0.13 | not able to pass the few shot examples create_sql_agent parameter | https://api.github.com/repos/langchain-ai/langchain/issues/16833/comments | 1 | 2024-01-31T13:43:33Z | 2024-05-08T16:07:59Z | https://github.com/langchain-ai/langchain/issues/16833 | 2,110,127,590 | 16,833 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain_core.agent_executor import AgentExecutor
### Error Message and Stack Trace (if applicable)
ModuleNotFoundError: No module named 'langchain_core.agent_executor'
### Description
I am trying to use the langchain_core module as below, but it gives an error:
from langchain_core.agent_executor import AgentExecutor
from langchain_core.toolkits.sql import SQLDatabaseToolkit
ModuleNotFoundError: No module named 'langchain_core.agent_executor'
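For what it's worth, neither class lives in `langchain_core`; with the versions listed below I believe the working imports are (based on the 0.0.35x package layout — please correct me if I am wrong):
```python
from langchain.agents import AgentExecutor
from langchain_community.agent_toolkits import SQLDatabaseToolkit
```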
### System Info
langchain==0.0.352
langchain-core==0.1.11
langchain-experimental==0.0.47
langchain-community==0.0.13 | not able to import langchain_core modules | https://api.github.com/repos/langchain-ai/langchain/issues/16827/comments | 1 | 2024-01-31T10:50:24Z | 2024-05-08T16:07:54Z | https://github.com/langchain-ai/langchain/issues/16827 | 2,109,819,243 | 16,827 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
# Set the OPENAI_API_KEY environment variable
os.environ['OPENAI_API_KEY'] = openapi_key
# Define connection parameters using constants
from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"
# Create an engine to connect to the SQL database
engine = create_engine(connection_uri)
model_name="gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
# Define a function named chat that takes a question and SQL format indicator as input
def chat1(question):
greetings = ["hi", "hello", "hey"]
# if any(greeting in question.lower() for greeting in greetings):
if any(greeting == question.lower() for greeting in greetings):
return "Hello! How can I assist you today?"
PROMPT = """
Given an input question, create a syntactically correct MSSQL query,
then look at the results of the query and return the answer.
Do not execute any query if the question is not relavent.
If a question lacks specific details, do not write and execute the query, like 'what is the employee name'.
If a column name is not present, refrain from writing the SQL query. column like UAN number, PF number are not not present do not consider such columns.
Write the query only for the column names which are present in view.
Execute the query and analyze the results to formulate a response.
Return the answer in user friendly form.
The question: {question}
"""
answer = None
memory = ConversationBufferMemory(input_key='input', memory_key="history")
# conn = engine.connect()
# If not in SQL format, create a database chain and run the question
# db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, memory=memory)
db_chain = SQLDatabaseChain(
llm_chain=LLMChain(llm, memory=memory),
database=db,
verbose=True
)
try:
answer = db_chain.run(PROMPT.format(question=question))
return answer
except exc.ProgrammingError as e:
# Check for a specific SQL error related to invalid column name
if "Invalid column name" in str(e):
print("Answer: Error Occured while processing the question")
print(str(e))
return "Invalid question. Please check your column names."
else:
print("Error Occured while processing")
print(str(e))
# return "Unknown ProgrammingError Occured"
return "Invalid question."
except openai.RateLimitError as e:
print("Error Occured while fetching the answer")
print(str(e))
return "Rate limit exceeded. Please, Mention the Specific Columns you need!"
except openai.BadRequestError as e:
print("Error Occured while fetching the answer")
print(str(e.message))
# return err_msg
return "Context length exceeded: This model's maximum context length is 16385 tokens. Please reduce the length of the messages."
except Exception as e:
print("Error Occured while processing")
print(str(e))
return "Unknown Error Occured"
Here is my code, in which I'm trying to integrate memory so that the model can remember the previous question and answer for the next question, but I'm not sure about the exact method.
### Error Message and Stack Trace (if applicable)
In the above code, while running with
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, memory=memory)
where I have passed memory=memory, running the code just returns "Unknown Error Occured", i.e. it falls into the exception handler, like this:
The question: what is employee name ofAD####
SQLQuery:SELECT [EmployeeName]
FROM [EGV_emp_departments_ChatGPT]
WHERE [EmployeeID] = 'AD####'SELECT [EmployeeName]
FROM [EGV_emp_departments_ChatGPT]
WHERE [EmployeeID] = 'AD####'
[('H########i',)]
SQLResult: [('H########',)]
Answer:The employee name of AD#### is H######.
> Finished chain.
Error Occured while processing
'input'
Unknown Error Occured
While using it like this:
db_chain = SQLDatabaseChain(
llm_chain=LLMChain(llm, memory=memory),
database=db,
verbose=True
)
it does not fetch the answer:
C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing LLMChain from langchain root module is no longer supported. Please use langchain.chains.LLMChain instead.
warnings.warn(
C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing OpenAI from langchain root module is no longer supported. Please use langchain.llms.OpenAI instead.
warnings.warn(
C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing SQLDatabase from langchain root module is no longer supported. Please use langchain.utilities.SQLDatabase instead.
warnings.warn(
Traceback (most recent call last):
File "c:\Users\rndbcpsoft\OneDrive\Desktop\test\main6.py", line 217, in <module>
result= chat1("what is employee name of AD22050853")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\rndbcpsoft\OneDrive\Desktop\test\main6.py", line 115, in chat1
llm_chain=LLMChain(llm, memory=memory),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Serializable.__init__() takes 1 positional argument but 2 were given
### Description
In the above code, while running with
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, memory=memory)
where I have passed memory=memory, running the code just returns "Unknown Error Occured", i.e. it falls into the exception handler, like this:
The question: what is employee name ofAD####
SQLQuery:SELECT [EmployeeName]
FROM [EGV_emp_departments_ChatGPT]
WHERE [EmployeeID] = 'AD####'SELECT [EmployeeName]
FROM [EGV_emp_departments_ChatGPT]
WHERE [EmployeeID] = 'AD####'
[('H########i',)]
SQLResult: [('H########',)]
Answer:The employee name of AD#### is H######.
> Finished chain.
Error Occured while processing
'input'
Unknown Error Occured
While using it like this:
db_chain = SQLDatabaseChain(
llm_chain=LLMChain(llm, memory=memory),
database=db,
verbose=True
)
it does not fetch the answer:
C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing LLMChain from langchain root module is no longer supported. Please use langchain.chains.LLMChain instead.
warnings.warn(
C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing OpenAI from langchain root module is no longer supported. Please use langchain.llms.OpenAI instead.
warnings.warn(
C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\__init__.py:34: UserWarning: Importing SQLDatabase from langchain root module is no longer supported. Please use langchain.utilities.SQLDatabase instead.
warnings.warn(
Traceback (most recent call last):
File "c:\Users\rndbcpsoft\OneDrive\Desktop\test\main6.py", line 217, in <module>
result= chat1("what is employee name of AD22050853")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\rndbcpsoft\OneDrive\Desktop\test\main6.py", line 115, in chat1
llm_chain=LLMChain(llm, memory=memory),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Serializable.__init__() takes 1 positional argument but 2 were given
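One direction I am considering, based on the `'input'` KeyError above, is that the memory's `input_key` has to match the chain's input key, which I believe is `"query"` for SQLDatabaseChain (this is an assumption on my part — confirming it is part of why I am asking):
```python
# Assumption: SQLDatabaseChain reads its input under the key "query",
# so the memory must save context under that key instead of "input".
memory = ConversationBufferMemory(memory_key="history", input_key="query")
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, memory=memory)
answer = db_chain.run(PROMPT.format(question=question))
```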
### System Info
python : 3.11
langchain: latest | How to add memory in SQLDatabaseChain chatbot with sql to natural language query | https://api.github.com/repos/langchain-ai/langchain/issues/16826/comments | 8 | 2024-01-31T10:37:43Z | 2024-05-13T16:10:31Z | https://github.com/langchain-ai/langchain/issues/16826 | 2,109,796,406 | 16,826 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The method given below, when invoked without explicit cache clearing, eventually produces a CUDA out-of-memory error.
```python
import os
def answer_in_parellel(questions,batch_size=3):
questions_and_answers = {}
while questions:
temp = questions[:batch_size]
questions = questions[batch_size:]
query_batch = []
for question in temp:
query = {"question":question}
query_batch.append(query)
answers = RAG_fusion_chain.batch(query_batch)
"""
If this is not used, the error occurs
"""
#torch.cuda.empty_cache()
for i in range(len(temp)):
questions_and_answers[temp[i]]=answers[i]
return questions_and_answers
def answer(questions,experiment_name):
q_and_a = answer_in_parellel(questions)
save_path = os.getcwd()
save_path = os.path.join(os.getcwd(),experiment_name+".txt")
with open(save_path,'w') as f:
for question in q_and_a.keys():
f.write(question+"\n")
f.write("\n")
f.write(q_and_a[question]+"\n")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to make Mistral-7B instruct answer a series of questions and write it to a text file for testing.
I am using a local HuggingFace pipeline and RAG fusion
This bug occurred while calling the batch method; there was no problem when using the invoke method.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sat Nov 18 15:31:17 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.17
> langchain: 0.1.4
> langchain_community: 0.0.16
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Issue with GPU Cache while using batch method | https://api.github.com/repos/langchain-ai/langchain/issues/16824/comments | 6 | 2024-01-31T10:05:47Z | 2024-05-08T16:07:44Z | https://github.com/langchain-ai/langchain/issues/16824 | 2,109,730,154 | 16,824 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
examples = [
{"input": "List all artists.", "query": "SELECT * FROM Artist;"},
{
"input": "Find all albums for the artist 'AC/DC'.",
"query": "SELECT * FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'AC/DC');",
}]
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate.from_template("User input: {input}\nSQL query: {query}")
prompt = FewShotPromptTemplate(
examples=examples[:5],
example_prompt=example_prompt,
prefix="You are a SQLite expert. Given an input question, create a syntactically correct SQLite query to run. Unless otherwise specificed, do not return more than {top_k} rows.\n\nHere is the relevant table info: {table_info}\n\nBelow are a number of examples of questions and their corresponding SQL queries.",
suffix="User input: {input}\nSQL query: ",
input_variables=["input", "top_k", "table_info"],
)
db_url = URL.create(**db_config)
db = SQLDatabase.from_uri(db_url)
toolkit = SQLDatabaseToolkit(db=db, llm=self.llm)
agent_executor = create_sql_agent(
prefix=MSSQL_AGENT_PREFIX,
format_instructions = MSSQL_AGENT_FORMAT_INSTRUCTIONS,
llm=self.llm,
toolkit=toolkit,
callback_manager=self.callbacks,
verbose=True
)
response = agent_executor.run(query)
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to add few-shot examples to the agent_executor for the SQLDatabaseToolkit, but it does not seem to be supported.
### System Info
langchain==0.0.352
langchain-core==0.1.11
langchain-experimental==0.0.47 | not able to add few_shots on agent_executor for sqldbtoolkit | https://api.github.com/repos/langchain-ai/langchain/issues/16821/comments | 2 | 2024-01-31T09:05:34Z | 2024-05-08T16:07:39Z | https://github.com/langchain-ai/langchain/issues/16821 | 2,109,615,518 | 16,821 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
llm = ChatOpenAI(
streaming=True,
callbacks=[StreamingStdOutCallbackHandler()],
temperature=0,
openai_api_key="xxx",
openai_api_base="http://0.0.0.0:8000/v1/" ,)
qa_chain = ConversationalRetrievalChain.from_llm(
llm= llm,
retriever= compression_retriever,
chain_type='stuff',
combine_docs_chain_kwargs = chain_type_kwargs
)
a = qa_chain(
{
"question": question,
"chat_history": chat_history,
"output_key": 'answer',
},
)
### Error Message and Stack Trace (if applicable)
.
### Description
When I type in a question, the answer always starts by repeating the question, like this:
question: '电子票据查验平台如何获取票据明细?' ("How do I get invoice details from the electronic invoice verification platform?")
answer: '电子票据查验平台如何获取票据明细? 电子票据查验平台只能查验票据信息,没有票据明细,如需票据明细,请联系开票单位。' (the answer repeats the question, then says the platform can only verify invoice information, holds no invoice details, and that the issuing unit should be contacted for details)
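My current suspicion is that the question-condensing step streams through the same `StreamingStdOutCallbackHandler`, so the rephrased question is emitted before the real answer. One thing I am testing (assuming `condense_question_llm` is the right knob for this) is giving that step a separate, non-streaming model:
```python
condense_llm = ChatOpenAI(
    temperature=0,
    openai_api_key="xxx",
    openai_api_base="http://0.0.0.0:8000/v1/",
)  # no streaming callbacks for the rephrasing step

qa_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,                              # streaming model for the final answer
    condense_question_llm=condense_llm,   # quiet model for condensing the question
    retriever=compression_retriever,
    chain_type="stuff",
    combine_docs_chain_kwargs=chain_type_kwargs,
)
```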
### System Info
.0 | When I use ConversationalRetrievalChain.from_llm to implement a knowledge base with context, the resulting stream will carry questions, so how can I remove the questions? | https://api.github.com/repos/langchain-ai/langchain/issues/16819/comments | 5 | 2024-01-31T08:38:55Z | 2024-07-04T08:49:55Z | https://github.com/langchain-ai/langchain/issues/16819 | 2,109,569,523 | 16,819 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
os.environ["AZURE_OPENAI_API_KEY"] = ""
os.environ["AZURE_OPENAI_ENDPOINT"] = ""
llm = AzureChatOpenAI(
openai_api_version="2023-12-01-preview",
azure_deployment=self.model,
streaming=True
)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/mnt/workspace/workgroup/lengmou/Tars-Code-Agent/components/model/llm.py", line 80, in <module>
for i in res:
File "/mnt/workspace/workgroup/lengmou/miniconda3/envs/llamaindex/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2424, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/mnt/workspace/workgroup/lengmou/miniconda3/envs/llamaindex/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2411, in transform
yield from self._transform_stream_with_config(
File "/mnt/workspace/workgroup/lengmou/miniconda3/envs/llamaindex/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1497, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/mnt/workspace/workgroup/lengmou/miniconda3/envs/llamaindex/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2375, in _transform
for output in final_pipeline:
File "/mnt/workspace/workgroup/lengmou/miniconda3/envs/llamaindex/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1045, in transform
yield from self.stream(final, config, **kwargs)
File "/mnt/workspace/workgroup/lengmou/miniconda3/envs/llamaindex/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 250, in stream
raise e
File "/mnt/workspace/workgroup/lengmou/miniconda3/envs/llamaindex/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 234, in stream
for chunk in self._stream(
File "/mnt/workspace/workgroup/lengmou/miniconda3/envs/llamaindex/lib/python3.9/site-packages/langchain_community/chat_models/openai.py", line 399, in _stream
if len(chunk["choices"]) == 0:
TypeError: object of type 'NoneType' has no len()
```
### Description
Can gpt-4-vision not be used with AzureChatOpenAI?
gpt-3.5-turbo, gpt-4, gpt-4-32k and gpt-4-turbo can all be used with AzureChatOpenAI, but gpt-4-vision cannot.
However, in the following way, gpt-4-vision can be used.
```
curl http://xxxxxxxxxxxxxxxx/2023-12-01-preview/chat \
-H "Content-Type: application/json" \
-H "tenant: 请用租户名称替换我" \
-d '{
"model": "gpt-4-vision",
"stream":false,
"max_tokens":100,
"messages": [{"role": "user","content":[{"type":"text","text":"Describe this picture:"},{"type":"image_url","image_url": {"url":"image_path"}}]}]
}'
```
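For completeness, this is the LangChain-side call I expect to be equivalent to the curl request above (it is the call that dies with the `NoneType` error; the image-content format follows the multimodal message docs):
```python
from langchain_core.messages import HumanMessage

msg = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this picture:"},
        {"type": "image_url", "image_url": {"url": "image_path"}},
    ]
)
llm.invoke([msg])
```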
### System Info
langchain==0.0.351
langchain-community==0.0.4
langchain-core==0.1.17
langchain-openai==0.0.5
openai==1.10.0 | gpt-4-vision cannot be used in AzureChatOpenAI? | https://api.github.com/repos/langchain-ai/langchain/issues/16815/comments | 6 | 2024-01-31T05:30:00Z | 2024-06-19T16:06:58Z | https://github.com/langchain-ai/langchain/issues/16815 | 2,109,314,805 | 16,815 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I tried the following example code:
https://python.langchain.com/docs/modules/agents/agent_types/react
and changed the code from:
```Python
tools = [TavilySearchResults(max_results=1)]
```
to:
```Python
tools = []
```
It outputs the following error:
ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: ` I should try to find information about LangChain on the internet
Action: Search for "LangChain" on Google`
And I also tried this example code:
https://python.langchain.com/docs/modules/agents/agent_types/openai_functions_agent
and changed the code from:
```Python
tools = [TavilySearchResults(max_results=1)]
```
to:
```Python
tools = []
```
It outputs the following error:
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "[] is too short - 'functions'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
### Error Message and Stack Trace (if applicable)
> Entering new AgentExecutor chain...
Traceback (most recent call last):
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/agents/agent.py", line 1125, in _iter_next_step
output = self.agent.plan(
^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/agents/agent.py", line 387, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2424, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2411, in transform
yield from self._transform_stream_with_config(
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1497, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2375, in _transform
for output in final_pipeline:
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1045, in transform
yield from self.stream(final, config, **kwargs)
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 580, in stream
yield self.invoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/output_parsers/base.py", line 176, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1246, in _call_with_config
context.run(
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/output_parsers/base.py", line 177, in <lambda>
lambda inner_input: self.parse_result([Generation(text=inner_input)]),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/output_parsers/base.py", line 219, in parse_result
return self.parse(result[0].text)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/agents/output_parsers/react_single_input.py", line 84, in parse
raise OutputParserException(
langchain_core.exceptions.OutputParserException: Could not parse LLM output: ` I should try to find information about LangChain on the internet
Action: Search for "LangChain" on Google`
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tq/code/langchain/test/Agent/test2.py", line 22, in <module>
agent_executor.invoke({"input": "what is LangChain?"})
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/chains/base.py", line 162, in invoke
raise e
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/agents/agent.py", line 1391, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/agents/agent.py", line 1097, in _take_next_step
[
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/agents/agent.py", line 1097, in <listcomp>
[
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain/agents/agent.py", line 1136, in _iter_next_step
raise ValueError(
ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: ` I should try to find information about LangChain on the internet
Action: Search for "LangChain" on Google`
### Description
I think the agent should be robust enough to deal with an empty tools list.
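At minimum, a clearer failure mode would help. A sketch of the kind of guard I have in mind (the helper name and placement are only illustrative):
```python
def build_agent(llm, tools, prompt):
    # Fail fast with an explicit message instead of letting the model
    # hallucinate tool calls it cannot make.
    if not tools:
        raise ValueError(
            "No tools were provided. This agent type requires at least one "
            "tool; pass a non-empty `tools` list or use a plain LLM chain."
        )
    return create_react_agent(llm, tools, prompt)
```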
### System Info
langchain 0.1.4
langchain-cli 0.0.20
langchain-community 0.0.15
langchain-core 0.1.17
langchain-openai 0.0.5
langchainhub 0.1.14
langgraph 0.0.19
langserve 0.0.39
langsmith 0.0.83 | Agent with empty tools is not working | https://api.github.com/repos/langchain-ai/langchain/issues/16812/comments | 5 | 2024-01-31T04:05:51Z | 2024-06-07T12:27:46Z | https://github.com/langchain-ai/langchain/issues/16812 | 2,109,240,604 | 16,812 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import os
from typing import Any

from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain.tools.render import render_text_description
from langchain_community.callbacks import get_openai_callback
from langchain_core.callbacks import AsyncCallbackHandler
from langchain_core.outputs import LLMResult
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

# `prompt_raw`, `tools`, and `memory` are defined elsewhere in the application.


class MyCustomAsyncHandler(AsyncCallbackHandler):
async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
"""Run when chain ends running."""
print("RESPONSE: ", response)
print("Hi! I just woke up. Your llm is ending")
async def ask_assistant(input: str) -> str:
prompt = PromptTemplate.from_template(prompt_raw)
prompt = prompt.partial(
language="Spanish",
tools=render_text_description(tools),
tool_names=", ".join([t.name for t in tools]),
)
llm = ChatOpenAI(
temperature=0,
model_name="gpt-4",
openai_api_key=os.environ["OPENAI_API_KEY"],
callbacks=[MyCustomAsyncHandler()],
)
llm_with_stop = llm.bind(stop=["\nObservation"])
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
"chat_history": lambda x: x["chat_history"],
}
| prompt
| llm_with_stop
| ReActSingleInputOutputParser()
)
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
verbose=True,
memory=memory,
max_execution_time=60,
handle_parsing_errors=True,
)
with get_openai_callback() as cb:
clara_ai_resp = await agent_executor.ainvoke({"input": input})
clara_ai_output = clara_ai_resp["output"]
print("CB: ", cb)
return clara_ai_output, input, cb
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to use get_openai_callback from langchain_community.callbacks to get the number of tokens and the costs incurred when using the agent, but I am getting zero for everything, as you can see here when I print:

I have also set up a custom callback handler to dig deeper into the issue, and what I found is that ChatOpenAI from langchain_openai does not call ainvoke the way ChatOpenAI from langchain.chat_models did.
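For reference, one workaround I am considering is passing an OpenAICallbackHandler explicitly through the call config instead of relying on the context manager (a sketch, not verified):
```python
from langchain_community.callbacks import OpenAICallbackHandler

cb = OpenAICallbackHandler()
resp = await agent_executor.ainvoke(
    {"input": input},
    config={"callbacks": [cb]},
)
print(cb.prompt_tokens, cb.completion_tokens, cb.total_cost)
```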
Thank you for your help.
### System Info
python 3.11.5 | get_openai_callback not working when using Agent Executor after updating to latest version of Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/16798/comments | 38 | 2024-01-30T18:34:08Z | 2024-06-06T13:22:26Z | https://github.com/langchain-ai/langchain/issues/16798 | 2,108,509,299 | 16,798 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code:
```python
from langchain.memory import ConversationBufferMemory
llm = AzureChatOpenAI(
azure_endpoint=AZURE_TEXT_ENDPOINT,
openai_api_version=OPEN_API_VERSION,
deployment_name=AZURE_TEXT_DEPLOYMENT, #"gpt-4_32k",
openai_api_key=OPENAI_TEXT_API_KEY,
openai_api_type=OPENAI_API_TYPE, #"azure",
temperature=0
)
ai_search_endpoint = get_ai_search_endpoint()
ai_search_admin_key = get_ai_search_admin_key()
vector_store = AzureSearch(
azure_search_endpoint=ai_search_endpoint,
azure_search_key=ai_search_admin_key,
index_name=index_name,
embedding_function=embeddings.embed_query,
content_key="xxx"
)
"""Retriever that uses `Azure Cognitive Search`."""
azure_search_retriever = AzureSearchVectorStoreRetriever(
vectorstore=vector_store,
search_type="hybrid",
k=3,
)
retriever_tool = create_retriever_tool(
azure_search_retriever,
"Retriever",
"Useful when you need to retrieve information from documents",
)
prompt = ChatPromptTemplate.from_messages(
[
("system", """Remember the previous chats: {chat_history}. Respond to the human as helpfully and accurately as possible. You are a helpful assistant who retrieves information from a database of documents. If you cannot find the answer in the documents please write: 'I do not have the answer from the given information'. You have access to the following tools:\n\n{tools}\n\nUse a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{\n "action": "Final Answer",\n "action_input": "Final response to human"\n}}\n\nBegin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation'"""),
("user", "{input}\n\n{agent_scratchpad}\n (reminder to respond in a JSON blob no matter what)"),
]
)
memory = ConversationBufferMemory(memory_key="chat_history")
memory.save_context({"input": "hi"}, {"output": "whats up"})
try:
agent = create_structured_chat_agent(llm, [retriever_tool], prompt)
agent_executor = AgentExecutor(tools=[retriever_tool],
agent=agent,
verbose=True,
return_intermediate_steps=True,
handle_parsing_errors=True,
max_iterations=15,
memory=memory
)
except Exception as e:
print(e)
print("error instantiating the agent")
text = "Who is Julia Roberts?"
answer = agent_executor.invoke(
{
"input": text,
}
)
answer
```
### Error Message and Stack Trace (if applicable)
ValueError Traceback (most recent call last)
File <command-1017101766750907>, line 64
63 text = "Who is Julia Roberts?"
---> 64 answer = agent_executor.invoke(
65 {
66 "input": text,
67 }
68 )
69 answer
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/chains/base.py:164, in Chain.invoke(self, input, config, **kwargs)
162 raise e
163 run_manager.on_chain_end(outputs)
--> 164 final_outputs: Dict[str, Any] = self.prep_outputs(
165 inputs, outputs, return_only_outputs
166 )
167 if include_run_info:
168 final_outputs[RUN_KEY] = RunInfo(run_id=run_manager.run_id)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/chains/base.py:440, in Chain.prep_outputs(self, inputs, outputs, return_only_outputs)
438 self._validate_outputs(outputs)
439 if self.memory is not None:
--> 440 self.memory.save_context(inputs, outputs)
441 if return_only_outputs:
442 return outputs
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/memory/chat_memory.py:37, in BaseChatMemory.save_context(self, inputs, outputs)
35 def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
36 """Save context from this conversation to buffer."""
---> 37 input_str, output_str = self._get_input_output(inputs, outputs)
38 self.chat_memory.add_user_message(input_str)
39 self.chat_memory.add_ai_message(output_str)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/memory/chat_memory.py:29, in BaseChatMemory._get_input_output(self, inputs, outputs)
27 if self.output_key is None:
28 if len(outputs) != 1:
---> 29 raise ValueError(f"One output key expected, got {outputs.keys()}")
30 output_key = list(outputs.keys())[0]
31 else:
ValueError: One output key expected, got dict_keys(['output', 'intermediate_steps'])
### Description
I am trying to output the intermediate steps as well as save the previous chat history, but it seems I cannot do both at the same time. The code attached above works when return_intermediate_steps is set to False.
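One apparent workaround is to tell the memory explicitly which output key to store, so that `intermediate_steps` is ignored when the context is saved (sketch):
```python
memory = ConversationBufferMemory(
    memory_key="chat_history",
    output_key="output",  # store only the final answer, not intermediate_steps
)
```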
### System Info
langchain==0.1.1
openai==1.7.0 | ValueError: One output key expected, got dict_keys(['output', 'intermediate_steps']) when using create_structured_chat_agent with chat_memory and intermediate steps | https://api.github.com/repos/langchain-ai/langchain/issues/16791/comments | 3 | 2024-01-30T16:03:18Z | 2024-05-07T16:08:53Z | https://github.com/langchain-ai/langchain/issues/16791 | 2,108,219,577 | 16,791 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.vectorstores.weaviate import Weaviate
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import TextLoader
import weaviate
import json
from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
client = weaviate.Client(url="http://localhost:8080")
weave = Weaviate(client=client,index_name="people4",text_key="age")
file = TextLoader("file.txt",encoding="utf-8")
pages = file.load_and_split(text_splitter=RecursiveCharacterTextSplitter(chunk_size=100,
chunk_overlap=20,
length_function=len))
weave.from_documents(documents=pages,client=client,embedding=None,index_name="people4",text_key="age",vectorizer="text2vec-transformers")
```
### Error Message and Stack Trace (if applicable)
TypeError: Weaviate.__init__() got an unexpected keyword argument 'vectorizer'
### Description
Weaviate allows users to specify a vectorizer when creating a class, so that they can use local vectorization (or any vectorizer of their choice) per class.
Currently this is not supported in LangChain: only a default schema with a single data property gets created when using the from_documents or from_texts calls.
Motivation:
I was using the LangChain Weaviate modules as my library for managing my Weaviate storage. The main problem was that I wanted to use Weaviate's local text2vec transformers, but in LangChain there was no way to pass this argument to make sure that particular documents are embedded with particular vectorizers.
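A possible workaround is to create the class with the desired vectorizer directly through the weaviate client before letting LangChain insert documents. A sketch (class and property names follow the example above; note that Weaviate capitalizes class names, and `schema.exists()` is only available in recent v3 clients):
```python
class_obj = {
    "class": "People4",
    "vectorizer": "text2vec-transformers",
    "properties": [{"name": "age", "dataType": ["text"]}],
}
# Create the class ourselves so we control the vectorizer.
if not client.schema.exists("People4"):
    client.schema.create_class(class_obj)

Weaviate.from_documents(
    documents=pages,
    client=client,
    embedding=None,          # rely on the class vectorizer
    index_name="People4",
    text_key="age",
)
```
This does not replace native support, but it lets each class use its own vectorizer today.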
### System Info
aiohttp==3.9.1
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.2.0
attrs==23.2.0
Authlib==1.3.0
certifi==2023.11.17
cffi==1.16.0
charset-normalizer==3.3.2
cryptography==42.0.1
dataclasses-json==0.6.3
frozenlist==1.4.1
greenlet==3.0.3
idna==3.6
jsonpatch==1.33
jsonpointer==2.4
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.16
langsmith==0.0.83
marshmallow==3.20.2
multidict==6.0.4
mypy-extensions==1.0.0
numpy==1.26.3
packaging==23.2
pycparser==2.21
pydantic==2.5.3
pydantic_core==2.14.6
PyYAML==6.0.1
requests==2.31.0
sniffio==1.3.0
SQLAlchemy==2.0.25
tenacity==8.2.3
typing-inspect==0.9.0
typing_extensions==4.9.0
urllib3==2.1.0
validators==0.22.0
weaviate-client==3.26.2
yarl==1.9.4
System Information
OS: Windows
OS Version: 10.0.22621
Python Version: 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)]
Package Information
langchain_core: 0.1.16
langchain: 0.1.4
langchain_community: 0.0.16
Packages not installed (Not Necessarily a Problem)
The following packages were not found:
langgraph
langserve | community: Weaviate should allow the flexibility for the user to mention what vectorizer module that they want to use | https://api.github.com/repos/langchain-ai/langchain/issues/16787/comments | 3 | 2024-01-30T15:01:16Z | 2024-05-07T16:08:48Z | https://github.com/langchain-ai/langchain/issues/16787 | 2,108,074,367 | 16,787 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.vectorstores import Neo4jVector
neo4j_db = Neo4jVector(
url=url, username=username, password=password, embedding=embedding
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
url, username, and password are taken from environment variables even when the user passes them explicitly.
### System Info
No info required. | User specified args are not used when initializing Neo4jVector | https://api.github.com/repos/langchain-ai/langchain/issues/16782/comments | 1 | 2024-01-30T13:50:16Z | 2024-01-30T14:05:24Z | https://github.com/langchain-ai/langchain/issues/16782 | 2,107,915,309 | 16,782 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
def chat1(question):
greetings = ["hi", "hello", "hey"]
# if any(greeting in question.lower() for greeting in greetings):
if any(greeting == question.lower() for greeting in greetings):
return "Hello! How can I assist you today?"
PROMPT = """
Given an input question, create a syntactically correct MSSQL query,
then look at the results of the query and return the answer.
Do not execute any query if the question is not relavent.
If a column name is not present, refrain from writing the SQL query. column like UAN number, PF number are not not present do not consider such columns.
Write the query only for the column names which are present in view.
Execute the query and analyze the results to formulate a response.
Return the answer in user friendly form.
The question: {question}
"""
answer = None
# conn = engine.connect()
# If not in SQL format, create a database chain and run the question
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
try:
# answer = db_chain.run(PROMPT.format(question=question))
# return answer
# except Exception as e:
# return f"An error occurred: {str(e)}"
# result_df = pd.read_sql(answer, conn)
# if result_df.empty:
# return "No results found"
answer = db_chain.run(PROMPT.format(question=question))
return answer
except exc.ProgrammingError as e:
# Check for a specific SQL error related to invalid column name
if "Invalid column name" in str(e):
print("Answer: Error Occured while processing the question")
print(str(e))
return "Invalid question. Please check your column names."
else:
print("Error Occured while processing")
print(str(e))
# return "Unknown ProgrammingError Occured"
return "Invalid question."
except openai.RateLimitError as e:
print("Error Occured while fetching the answer")
print(str(e))
return "Rate limit exceeded. Please, Mention the Specific Columns you need!"
except openai.BadRequestError as e:
print("Error Occured while fetching the answer")
# print(str(e.errcode))
print(str(e))
# return e.message
return "Context length exceeded: This model's maximum context length is 16385 tokens. Please reduce the length of the messages."
except Exception as e:
print("Error Occured while processing")
print(str(e))
return "Unknown Error Occured"
Answer:Error Occured while fetching the answer
Error code: 400 - {'error': {'message': "This model's maximum context length is 16385 tokens. However, your messages resulted in 16648 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
How can I return just the message present in the error?
### Error Message and Stack Trace (if applicable)
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 16385 tokens. However, your messages resulted in 16648 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
### Description
When an openai.BadRequestError is raised, how can I return just the message from the exception in my error handling code?
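Something like the following is what I have in mind (a sketch reusing `db_chain` and `PROMPT` from the snippet above; the exact exception attributes may differ between openai versions, hence the fallback):
```python
try:
    answer = db_chain.run(PROMPT.format(question=question))
    return answer
except openai.BadRequestError as e:
    # The openai v1 client keeps the parsed response body on the exception.
    try:
        message = e.response.json()["error"]["message"]
    except Exception:
        message = str(e)  # fallback if the attribute layout differs
    return message
```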
### System Info
python 3.11
langchain: latest | how to display just the message from this, openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 16385 tokens. However, your messages resulted in 16648 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}} | https://api.github.com/repos/langchain-ai/langchain/issues/16781/comments | 4 | 2024-01-30T13:49:16Z | 2024-05-08T16:07:29Z | https://github.com/langchain-ai/langchain/issues/16781 | 2,107,913,399 | 16,781 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
We are making a simple call with the stuff chain:
```
LLM_DM_PROMPT = PromptTemplate(
template=dialogue_template,
input_variables=["entities_context", "chat_history", "human_input", "entity_definition",
"state_context"]
)
chain = LLMChain(
llm=args.llm,
prompt=LLM_DM_PROMPT,
memory=preparation_context.chat_history,
verbose=True
)
answer = chain.predict(human_input=user_input, state_context=previous_state,
entity_definition=intent_obj.entities, entities_context=entities_data,
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
We obtained the lsof (list open files) output for the process. To clarify, we are using LangChain to make calls to OpenAI via the stuff chain and its sync client; we verified this by inspecting the IPs associated with the TCP connections stuck in CLOSE_WAIT.
Version: `langchain==0.0.353`
You can find the lsof output in this file: [HA4.txt](https://github.com/langchain-ai/langchain/files/14094973/HA4.txt)

We found a similar issue involving the CLOSE_WAIT state with another LLM call: https://github.com/langchain-ai/langchain/issues/13509
### System Info
```
langchain==0.0.353
langchain-community==0.0.13
langchain-core==0.1.12
langchain-google-genai==0.0.6
``` | lots of Open Files and TCP Connections in CLOSE_WAIT State When Calling OpenAI via Langchain ( Streaming ) | https://api.github.com/repos/langchain-ai/langchain/issues/16770/comments | 12 | 2024-01-30T08:45:01Z | 2024-06-24T16:07:20Z | https://github.com/langchain-ai/langchain/issues/16770 | 2,107,255,534 | 16,770 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code
```Python
from langchain_core.tools import tool
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_openai.chat_models import ChatOpenAI
model = ChatOpenAI(model="gpt-3.5-turbo-1106")
@tool
def multiply(first_int: int, second_int: int) -> int:
"""Multiply two integers together."""
return first_int * second_int
model_with_tools = model.bind(
tools=[convert_to_openai_tool(multiply)], tool_choice="multiply")
print(model_with_tools.invoke("What is 4 times 5?"))
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/home/tq/code/langchain/test/tools/test.py", line 21, in <module>
print(model_with_tools.invoke("What is 4 times 5?"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4041, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 166, in invoke
self.generate_prompt(
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 544, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 408, in generate
raise e
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 398, in generate
self._generate_with_cache(
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 577, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 451, in _generate
response = self.client.create(messages=message_dicts, **params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/openai/_utils/_utils.py", line 271, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 659, in create
return self._post(
^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/openai/_base_client.py", line 1180, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/openai/_base_client.py", line 869, in request
return self._request(
^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/openai/_base_client.py", line 922, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/openai/_base_client.py", line 993, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/home/tq/code/langchain/test/env/lib/python3.11/site-packages/openai/_base_client.py", line 960, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "'$.tool_choice' is invalid. Please check the API reference: https://platform.openai.com/docs/api-reference.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
### Description
I have already provided tool_choice, so why is it invalid?
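If it helps, my understanding of the OpenAI API is that tool_choice must be an object when forcing a specific function, so unless LangChain expands the bare name itself, the bind call would need to look like this (sketch):
```python
model_with_tools = model.bind(
    tools=[convert_to_openai_tool(multiply)],
    tool_choice={"type": "function", "function": {"name": "multiply"}},
)
```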
### System Info
langchain 0.1.4
langchain-cli 0.0.20
langchain-community 0.0.15
langchain-core 0.1.17
langchain-openai 0.0.5
langchainhub 0.1.14
langgraph 0.0.19
langserve 0.0.39
langsmith 0.0.83 | Error code: 400 - {'error': {'message': "'$.tool_choice' is invalid. when add tool to LLM | https://api.github.com/repos/langchain-ai/langchain/issues/16767/comments | 2 | 2024-01-30T08:06:16Z | 2024-01-30T08:28:05Z | https://github.com/langchain-ai/langchain/issues/16767 | 2,107,191,383 | 16,767 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Page URL: https://python.langchain.com/docs/use_cases/chatbots
Sample Code:
```python
chat(
[
HumanMessage(
content="Translate this sentence from English to French: I love programming."
)
]
)
```
**Warning:**
Users/randolphhill/govbotics/deepinfra/.venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.7 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
**Revised Code:**
```python
rsp = chat.invoke(
[
HumanMessage(
content="Translate this sentence from English to Bahasa Indoensia: Good Morning, How are you?. "
)
]
)
```
### Idea or request for content:
_No response_ | DOC: Sample Chatbots Quick Start Needs to updated to new API'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/16755/comments | 1 | 2024-01-30T03:48:39Z | 2024-05-07T16:08:38Z | https://github.com/langchain-ai/langchain/issues/16755 | 2,106,895,485 | 16,755 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
with sync_playwright() as p:
browser = p.chromium.launch(headless=self.headless)
for url in self.urls:
try:
page = browser.new_page()
response = page.goto(url)
if response is None:
raise ValueError(f"page.goto() returned None for url {url}")
text = self.evaluator.evaluate(page, browser, response)
metadata = {"source": url}
docs.append(Document(page_content=text, metadata=metadata))
except Exception as e:
if self.continue_on_failure:
logger.error(
f"Error fetching or processing {url}, exception: {e}"
)
else:
raise e
browser.close()
```
### Error Message and Stack Trace (if applicable)
This piece of code doesn't have any errors.
However, in large-scale data retrieval, the absence of proxy support may cause scraping failures, so it would be valuable to incorporate proxy functionality here to improve robustness and efficiency.
### Description
## Problem Overview:
The current PlaywrightEvaluator class lacks proxy support, limiting flexibility when processing pages using this class. To enhance functionality and improve the class's applicability, it is suggested to add proxy support within the class.
## Proposed Enhancement:
Modify the PlaywrightEvaluator class to accept proxy parameters, allowing the use of proxies when creating Playwright pages. This improvement would enable users to conveniently utilize proxies for accessing pages, thereby expanding the class's use cases.
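A rough sketch of what this could look like inside the loader (the `proxy` argument and its values are illustrative; the dict itself follows Playwright's documented proxy settings):
```python
# Hypothetical: the loader would accept `proxy` in __init__ and forward it here.
browser = p.chromium.launch(
    headless=self.headless,
    proxy={
        "server": "http://myproxy.example.com:3128",
        "username": "user",   # optional
        "password": "pass",   # optional
    },
)
```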
### System Info
System Information
------------------
> OS: Linux
> OS Version: #93-Ubuntu SMP Tue Sep 5 17:16:10 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.15
> langchain: 0.0.333
> langchain_community: 0.0.15
> langserve: 0.0.39
| Enhancement: Add Proxy Support to PlaywrightURLLoader Class | https://api.github.com/repos/langchain-ai/langchain/issues/16751/comments | 2 | 2024-01-30T02:24:52Z | 2024-05-07T16:08:33Z | https://github.com/langchain-ai/langchain/issues/16751 | 2,106,822,896 | 16,751 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code:
```python
import io
from langchain.llms import LlamaCpp
f = io.BytesIO(b"\x00\x00\x00\x00\x00\x00\x00\x00\x01\x01\x01\x01\x01\x01")
llm = LlamaCpp(model_path=f,temperature=0.1)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mlenv_3/lib64/python3.8/site-packages/langchain/load/serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "/home/mlenv_3/lib64/python3.8/site-packages/pydantic/v1/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
File "/home/mlenv_3/lib64/python3.8/site-packages/pydantic/v1/main.py", line 1102, in validate_model
values = validator(cls_, values)
File "/home/mlenv_3/lib64/python3.8/site-packages/langchain/llms/llamacpp.py", line 151, in validate_environment
model_path = values["model_path"]
KeyError: 'model_path'
### Description
We need to pass the model as a BytesIO object to the LlamaCpp interface; we cannot pass a path to a model saved on disk. We need support for BytesIO objects.
### System Info
pip freeze | grep langchain
langchain==0.0.325
platform:
Linux
Python 3.8.14 | Langchain Llamacpp interface does not accept bytes.io object as input | https://api.github.com/repos/langchain-ai/langchain/issues/16745/comments | 1 | 2024-01-29T22:47:13Z | 2024-05-06T16:09:09Z | https://github.com/langchain-ai/langchain/issues/16745 | 2,106,572,791 | 16,745 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
<img width="678" alt="Screen Shot 2024-01-29 at 1 45 35 PM" src="https://github.com/langchain-ai/langchain/assets/77302524/a174f38b-3bbf-416e-8f58-6cdcbf2f2a79">
### Error Message and Stack Trace (if applicable)
<img width="1019" alt="Screen Shot 2024-01-29 at 1 47 01 PM" src="https://github.com/langchain-ai/langchain/assets/77302524/45447078-03f7-4545-8823-cbe9d267bf8b">
### Description
I'm creating my Pinecone DB the way the Pinecone documentation describes and trying to upsert my LangChain documents into it; however, it says I haven't specified my API key, which I clearly have. The index is created fine, but I'm hitting a brick wall trying to get my documents into it.
I'm using Pinecone as my vector DB with help from LangChain for a RAG application. The LangChain documentation for Pinecone is outdated (it uses pinecone.init, which is no longer supported), and I'm seeing other people online report this issue too (https://www.reddit.com/r/LangChain/comments/199mklo/langchain_011_is_not_working_with_pineconeclient/).
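One thing worth trying (a sketch based on my reading of the newer integration; the index name and key are placeholders) is exporting the key as an environment variable before building the vector store, since some versions of the wrapper appear to look it up in the environment rather than on the client instance:
```python
import os

os.environ["PINECONE_API_KEY"] = "<your-api-key>"  # placeholder

from langchain.vectorstores import Pinecone

vectorstore = Pinecone.from_documents(
    docs, embeddings, index_name="my-index"  # placeholder names
)
```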
### System Info
pip 23.3.2 from /opt/conda/lib/python3.10/site-packages/pip (python 3.10)
langchain==0.0.354
pinecone-client==3.0.0
Using kaggle notebook on personal mac | Pinecone VectorStore Issue | https://api.github.com/repos/langchain-ai/langchain/issues/16744/comments | 1 | 2024-01-29T22:05:14Z | 2024-05-06T16:09:04Z | https://github.com/langchain-ai/langchain/issues/16744 | 2,106,520,632 | 16,744 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
# Imports added for completeness; exact module paths may vary slightly by
# langchain version.
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from langchain.chains import create_extraction_chain_pydantic
from langchain.llms import HuggingFacePipeline
from langchain.pydantic_v1 import BaseModel, Field

model_id = "facebook/opt-2.7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
base_model = AutoModelForCausalLM.from_pretrained(
model_id,
load_in_8bit=True,
device_map='auto'
)
pipe = pipeline("text-generation",
model=base_model,
tokenizer=tokenizer,
max_length=256,
temperature=0.6,
top_p=0.95,
repetition_penalty=1.2)
llm = HuggingFacePipeline(pipeline=pipe)
class Booking(BaseModel):
date_of_arrival: str = Field(description="Time of check in")
date_of_departure: str = Field(description="Time of check out")
number_of_guests: int = Field(description="number of guests")
room_type: str = Field(description="name of room")
special_requests: list = Field(description="list of special requests")
contact_information: list = Field(description="list of contanct information like email and phone")
booking_message = """Hi there! I'm interested in booking a meeting room for a small business conference. The dates I have in mind are February 5th to February 6th, 2024, and we expect around 6 guests to attend. We'll need a standard meeting room with basic amenities. However, we'll also require a projector and whiteboard for presentations. You can reach me at """
chain = create_extraction_chain_pydantic(pydantic_schema=Booking, llm=llm)
chain.run(booking_message)
```
### Error Message and Stack Trace (if applicable)
OutputParserException: This output parser can only be used with a chat generation.
### Description
OutputParserException: This output parser can only be used with a chat generation.
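For context, `create_extraction_chain_pydantic` relies on OpenAI-style function calling, so its output parser expects a chat generation. With a plain text-generation pipeline, a format-instruction based parser may be a workable alternative; a sketch, assuming the model can follow the instructions well enough:
```python
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate

parser = PydanticOutputParser(pydantic_object=Booking)
prompt = PromptTemplate(
    template="Extract the booking details from the message.\n"
             "{format_instructions}\n{message}",
    input_variables=["message"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

chain = prompt | llm | parser
result = chain.invoke({"message": booking_message})
```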
### System Info
. | OutputParserException: This output parser can only be used with a chat generation. | https://api.github.com/repos/langchain-ai/langchain/issues/16743/comments | 3 | 2024-01-29T21:03:41Z | 2024-04-14T17:17:44Z | https://github.com/langchain-ai/langchain/issues/16743 | 2,106,422,630 | 16,743 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
_No response_ | docs: Add page for Vision models in modules/model_io/chat | https://api.github.com/repos/langchain-ai/langchain/issues/16739/comments | 1 | 2024-01-29T19:17:29Z | 2024-05-06T16:08:59Z | https://github.com/langchain-ai/langchain/issues/16739 | 2,106,243,552 | 16,739 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
_No response_ | docs: Add page for ImagePromptTemplate in modules/model_io/prompts | https://api.github.com/repos/langchain-ai/langchain/issues/16738/comments | 1 | 2024-01-29T19:17:08Z | 2024-05-06T16:08:54Z | https://github.com/langchain-ai/langchain/issues/16738 | 2,106,242,922 | 16,738 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Similar to JsonOutputFunctionsParser, JsonOutputToolsParser should be able to parse partial results | Add streaming support to JsonOutputToolsParser | https://api.github.com/repos/langchain-ai/langchain/issues/16736/comments | 1 | 2024-01-29T18:41:01Z | 2024-05-06T16:08:49Z | https://github.com/langchain-ai/langchain/issues/16736 | 2,106,183,569 | 16,736 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
class SQLDbTool(BaseTool):
"""Tool SQLDB Agent"""
name = "@DSR"
description = "useful when the questions includes the term: @DSR.\n"
llm: AzureChatOpenAI
def _run(self, query: str) -> str:
try:
# Key Vault details
key_vault_name = 'XXXXXXXXXXX'
vault_url = f"https://xxxxxxxxxxxx.vault.azure.net/"
# Authenticate using DefaultAzureCredential
credential = DefaultAzureCredential()
#Create a SecretClient using your credentials
client = SecretClient(vault_url, credential)
# Access Key Vault secrets
secret_name = 'source-XXXX-sql-XX'
SQL_SERVER_PASSWORD = client.get_secret(secret_name).value
# Update db_config with dynamic username and password
db_config = {
'drivername': 'mssql+pyodbc',
'username': os.environ["SQL_SERVER_USERNAME"],
'password': SQL_SERVER_PASSWORD,
'host': os.environ["SQL_SERVER_ENDPOINT"],
'port': 14XX,
'database': os.environ["SQL_SERVER_DATABASE"],
'query': {'driver': 'ODBC Driver 17 for SQL Server'}
}
db_url = URL.create(**db_config)
db = SQLDatabase.from_uri(db_url)
toolkit = SQLDatabaseToolkit(db=db, llm=self.llm)
agent_executor = create_sql_agent(
prefix=MSSQL_AGENT_PREFIX,
format_instructions = MSSQL_AGENT_FORMAT_INSTRUCTIONS,
llm=self.llm,
toolkit=toolkit,
callback_manager=self.callbacks,
verbose=True
)
# Define your examples
examples = [
{
"input": "what are the top 5 brands?",
"query": "SELECT TOP 5 Brand",
}
]
# Define the prompt template for each example
example_prompt = PromptTemplate.from_messages(
[('human', '{input}'), ('ai', '{query}')]
)
# Create the FewShotPromptTemplate
few_shot_prompt = FewShotPromptTemplate(
examples=examples,
example_prompt=example_prompt,
prefix="You are a helpful AI Assistant",
suffix="{input}",
example_separator="\n\n",
template_format="f-string",
validate_template=True,
input_variables=["input"]
)
# Add the FewShotPromptTemplate to your agent_executor
agent_executor.add_prompt(few_shot_prompt)
logging.info(f"Login successful: {db_config['username']}")
response = agent_executor.run(query)
logger.info(f"Langchain log is:{response}")
log_stream.seek(0)
blob_client = blob_service_client.get_container_client(CONTAINER_NAME).get_blob_client(BLOB_NAME)
blob_client.upload_blob(log_stream.read(), overwrite=True)
except Exception as e:
response = str(e)
        return response
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Getting error: type object 'PromptTemplate' has no attribute 'from_messages'.
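For reference, my understanding is that the message-style constructor lives on ChatPromptTemplate, not PromptTemplate, so the few-shot example prompt would need something like this (a sketch; `examples` is the list defined in the snippet above, and the chat variant FewShotChatMessagePromptTemplate is used instead of the string-based FewShotPromptTemplate):
```python
from langchain_core.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate

example_prompt = ChatPromptTemplate.from_messages(
    [("human", "{input}"), ("ai", "{query}")]
)
few_shot_prompt = FewShotChatMessagePromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
)
```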
### System Info
langchain==0.0.352
langchain-core==0.1.11 | getting error : type object ‘PromptTemplate’ has no attribute ‘from_messages | https://api.github.com/repos/langchain-ai/langchain/issues/16735/comments | 1 | 2024-01-29T17:08:08Z | 2024-05-06T16:08:44Z | https://github.com/langchain-ai/langchain/issues/16735 | 2,106,006,795 | 16,735 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I want to add few-shot examples to create_sql_agent so that my prompts handle complex questions better.
```python
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(
prefix=MSSQL_AGENT_PREFIX,
format_instructions = MSSQL_AGENT_FORMAT_INSTRUCTIONS,
llm=llm,
toolkit=toolkit,
top_k=30,
verbose=True
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I want to add a FewShotPromptTemplate to my agent_executor; one possible approach is sketched after the snippet below.
```python
examples = [
{"input": "List all artists.", "query": "SELECT * FROM Artist;"},
{
"input": "Find all albums for the artist 'AC/DC'.",
"query": "SELECT * FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'AC/DC');",
},
{
"input": "List all tracks in the 'Rock' genre.",
"query": "SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');",
}]
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
agent_executor = create_sql_agent(
prefix=MSSQL_AGENT_PREFIX,
format_instructions = MSSQL_AGENT_FORMAT_INSTRUCTIONS,
llm=llm,
toolkit=toolkit,
top_k=30,
verbose=True
)
```
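One approach I can think of is simply rendering the examples into the prefix text. A sketch, reusing the names from the snippet above:
```python
examples_block = "\n\n".join(
    f"Question: {ex['input']}\nSQLQuery: {ex['query']}" for ex in examples
)

agent_executor = create_sql_agent(
    prefix=MSSQL_AGENT_PREFIX
    + "\n\nHere are some example questions and queries:\n"
    + examples_block,
    format_instructions=MSSQL_AGENT_FORMAT_INSTRUCTIONS,
    llm=llm,
    toolkit=toolkit,
    top_k=30,
    verbose=True,
)
```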
### System Info
langchain version 0.0.352 | not able to add few shots example to create_sql_agent | https://api.github.com/repos/langchain-ai/langchain/issues/16731/comments | 5 | 2024-01-29T15:27:52Z | 2024-07-09T16:06:14Z | https://github.com/langchain-ai/langchain/issues/16731 | 2,105,755,700 | 16,731 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
@gcheron,
```python
from langchain_community.storage.sql import SQLDocStore
SQLDocStore(connection_string="sqlite:///tmp/test.db")
```
Result:
```
sqlalchemy.exc.CompileError: (in table 'langchain_storage_collection', column 'uuid'): Compiler <sqlalchemy.dialects.sqlite.base.SQLiteTypeCompiler object at 0x7f525b78d270> can't render element of type UUID
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py", line 139, in _compiler_dispatch
meth = getter(visitor)
AttributeError: 'SQLiteTypeCompiler' object has no attribute 'visit_UUID'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 6522, in visit_create_table
processed = self.process(
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 912, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py", line 143, in _compiler_dispatch
return meth(self, **kw) # type: ignore # noqa: E501
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 6553, in visit_create_column
text = self.get_column_specification(column, first_pk=first_pk)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/dialects/sqlite/base.py", line 1534, in get_column_specification
coltype = self.dialect.type_compiler_instance.process(
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 957, in process
return type_._compiler_dispatch(self, **kw)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py", line 141, in _compiler_dispatch
return visitor.visit_unsupported_compilation(self, err, **kw) # type: ignore # noqa: E501
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 962, in visit_unsupported_compilation
raise exc.UnsupportedCompilationError(self, element) from err
sqlalchemy.exc.UnsupportedCompilationError: Compiler <sqlalchemy.dialects.sqlite.base.SQLiteTypeCompiler object at 0x7f525b78d270> can't render element of type UUID (Background on this error at: https://sqlalche.me/e/20/l7de)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3508, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-4-aac51bc0ae83>", line 1, in <module>
SQLDocStore(connection_string="sqlite:////tmp/test.db")
File "/usr/lib/python3.10/typing.py", line 957, in __call__
result = self.__origin__(*args, **kwargs)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/langchain_community/storage/sql.py", line 185, in __init__
self.__post_init__()
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/langchain_community/storage/sql.py", line 194, in __post_init__
self.__create_tables_if_not_exists()
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/langchain_community/storage/sql.py", line 204, in __create_tables_if_not_exists
Base.metadata.create_all(self._conn)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/schema.py", line 5832, in create_all
bind._run_ddl_visitor(
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2448, in _run_ddl_visitor
visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py", line 671, in traverse_single
return meth(obj, **kw)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/ddl.py", line 919, in visit_metadata
self.traverse_single(
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py", line 671, in traverse_single
return meth(obj, **kw)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/ddl.py", line 957, in visit_table
)._invoke_with(self.connection)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/ddl.py", line 315, in _invoke_with
return bind.execute(self)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1416, in execute
return meth(
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/ddl.py", line 181, in _execute_on_connection
return connection._execute_ddl(
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1525, in _execute_ddl
compiled = ddl.compile(
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 309, in compile
return self._compiler(dialect, **kw)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/ddl.py", line 69, in _compiler
return dialect.ddl_compiler(dialect, self, **kw)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 867, in __init__
self.string = self.process(self.statement, **compile_kwargs)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 912, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/visitors.py", line 143, in _compiler_dispatch
return meth(self, **kw) # type: ignore # noqa: E501
File "/home/pprados/workspace.bda/langchain-rag/.venv/lib/python3.10/site-packages/sqlalchemy/sql/compiler.py", line 6532, in visit_create_table
raise exc.CompileError(
sqlalchemy.exc.CompileError: (in table 'langchain_storage_collection', column 'uuid'): Compiler <sqlalchemy.dialects.sqlite.base.SQLiteTypeCompiler object at 0x7f525b78d270> can't render element of type UUID
### Description
This new class presents a few problems.
- It is not sqlite compatible
- It uses a `connection_string` parameter, whereas `db_url` is used everywhere.
- It does not allow the call to be encapsulated in a transaction, as it cannot receive an engine parameter instead of `db_url`.
The usage with langchain is to propose an engine parameter to manipulate SQL.
Is the case for :
langchain_community.cache.SQLAlchemyCache
langchain_community.cache.SQLAlchemyMd5Cache
### System Info
langchain==0.1.4
langchain-community @ file:///home/pprados/workspace.bda/langchain/libs/community
langchain-core==0.1.16
langchain-openai==0.0.2.post1
langchain-qa-with-references==0.0.330
| SQLDocStore is incompatible with sqlite | https://api.github.com/repos/langchain-ai/langchain/issues/16726/comments | 3 | 2024-01-29T13:17:36Z | 2024-02-14T03:45:58Z | https://github.com/langchain-ai/langchain/issues/16726 | 2,105,482,362 | 16,726 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I have followed the documentations guide and create an agent with vector retriever with a Pydantic schema passed as function into the agent, but it seems that there is no information from. the retriever is passed to the schema function, the schema function will just give source [1] as output. I have also tried a exact replicate of the code in the documentation with my own data since I don't have the state_of_the_union text file and I got the same error as what I am getting in my own implementation
This is the documentation link https://python.langchain.com/docs/modules/agents/how_to/agent_structured
during my experiments
<img width="861" alt="image" src="https://github.com/langchain-ai/langchain/assets/48542562/26197d99-c0b3-4cc5-b2b3-9f83f1e619a1">
You can see that this piece of information containing capstone =. 5.5 is at page chuck 10
<img width="841" alt="image" src="https://github.com/langchain-ai/langchain/assets/48542562/1b98f72e-061f-482d-91cf-8d6ae808ef31">
In the response to what is my capstone grade, it gives source [] which is definitely not the right information
### Idea or request for content:
I am not too exactly sure about what is causing the problem, maybe I should flag this as a bug or maybe just some configuration error.
My suspicion is that after the creation of the retriever tool, it formats the document using a prompt which strip away the meta data information. I have tried passing an additional prompt when im creating the retriever tool so that it returns the meta data output together with the content.
<img width="629" alt="image" src="https://github.com/langchain-ai/langchain/assets/48542562/bfce8431-09a6-4c42-adbe-c04eb94cf418">
this will result in the answer formatted to be
<img width="289" alt="image" src="https://github.com/langchain-ai/langchain/assets/48542562/a7828f65-0eff-4835-9580-5ef331f3b68b">
But it takes a lot of luck and tuning to get the final answer to get the correct source information (have succeeded before once in a while but not most of the time)
Please advise on what could be the potential solution to this, thank you very much | DOC: Returning structured output from agent documentations not correct | https://api.github.com/repos/langchain-ai/langchain/issues/16720/comments | 10 | 2024-01-29T10:13:29Z | 2024-05-23T11:03:23Z | https://github.com/langchain-ai/langchain/issues/16720 | 2,105,135,404 | 16,720 |
[
"hwchase17",
"langchain"
] | ### Description
I'm trying to integrate my Nemotron LLM with LangChain. I use the source code from langchain_nvidia_trt.llms to get streaming, but it raises an exception.
### Example Code
```python
from llms import TritonTensorRTLLM
llm = TritonTensorRTLLM(server_url="localhost:8001", model_name="Nemotron-rlhf")
res = llm.invoke("HI")
```
### Error Message and Stack Trace (if applicable)
The exception is below:
```
Traceback (most recent call last):
File "/workspace/workspace/tens.py", line 4, in <module>
res = llm.invoke("HI")
File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py", line 230, in invoke
self.generate_prompt(
File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py", line 525, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py", line 698, in generate
output = self._generate_helper(
File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py", line 562, in _generate_helper
raise e
File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py", line 549, in _generate_helper
self._generate(
File "/workspace/workspace/llms.py", line 153, in _generate
result: str = self._request(
File "/workspace/workspace/llms.py", line 206, in _request
result_str += token
TypeError: can only concatenate str (not "InferenceServerException") to str
```
The InferenceServerException is below:
`unexpected inference output 'text_output' for model 'Nemotron-rlhf'`
### System Info
System Information
------------------
> OS: Linux
> OS Version: #163-Ubuntu SMP Fri Mar 17 18:26:02 UTC 2023
> Python Version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.16
> langchain: 0.1.4
> langchain_community: 0.0.16
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Nvidia Nemotron integration with langchain with TritonTensorRTLLM | https://api.github.com/repos/langchain-ai/langchain/issues/16719/comments | 2 | 2024-01-29T09:38:42Z | 2024-05-06T16:08:39Z | https://github.com/langchain-ai/langchain/issues/16719 | 2,105,059,012 | 16,719 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
using the following code:
```python
self.agent = (
{
"input": itemgetter("input"),
"agent_scratchpad": lambda x: format_to_openai_function_messages(
x["intermediate_steps"]
),
'chat_history': itemgetter("chat_history")
}
| self.prompt
| self.condense_prompt
# | self.moderate
| self.llm.bind(functions=[convert_to_openai_function(t) for t in self.tools])
| OpenAIFunctionsAgentOutputParser()
)
from langchain_core.runnables.history import RunnableWithMessageHistory
self.agent_executor = AgentExecutor(
agent=self.agent,
tools=self.tools,
memory=self.memory,
verbose=True,
handle_parsing_errors=True,
return_intermediate_steps=True,
# max_iterations=2,
# max_execution_time=1,
)
self.agent_with_chat_history = RunnableWithMessageHistory(
self.agent_executor,
# This is needed because in most real world scenarios, a session id is needed
# It isn't really used here because we are using a simple in memory ChatMessageHistory
lambda session_id: RedisChatMessageHistory(self.session_id, url=f"redis://{self.redis_server}/0"),
input_messages_key="input",
history_messages_key=self.memory_key,
output_messages_key="output"
)
```
and a tool:
```python
class QuotationTool(BaseTool):
name = "create_quotation_tool"
description = """
a useful tool to create products or services quotations, all required Field must be available in order
to complete check the args_schema [QuotationToolSchema] schema for it,
"""
args_schema: Type[QuotationToolSchema] = QuotationToolSchema
async def _arun(
self,
*args: Any,
**kwargs: Any,
) -> Any:
# Get the tool arguments
phone = kwargs.get("phone")
name = kwargs.get("name")
email = kwargs.get('email')
product_name = kwargs.get("product_name")
unit_amount = kwargs.get("unit_amount")
currency = kwargs.get("currency")
quantity = kwargs.get("quantity")
# Search for the customer by phone
customer = stripe_helper.search_customer(phone)
# Check if the customer exists
if customer:
# Get the customer id
customer_id = customer[0]['id']
else:
# Create a new customer
customer_id = stripe_helper.create_customer(phone, name, email)
# Search for the product by name
product = stripe_helper.search_product(product_name)
# Check if the product exists
if product:
# Get the product id
product_id = product[0]['id']
else:
# Create a new product
product_id = stripe_helper.create_product(product_name)
# Search for the price by product id
price = stripe_helper.search_price(product_id)
# Check if the price exists
if price:
# Get the price id
price_id = price[0]['id']
else:
# Create a new price
price_id = stripe_helper.create_product_price(unit_amount, currency, product_id)['id']
# Create a line item with the price id and quantity
line_item = {
"price": price_id,
"quantity": quantity
}
# Create a quotation with the Stripe API
quotation = stripe_helper.create_quotation(customer_id, [line_item])
# Finalize the quotation
quotation = stripe_helper.finalize_quota(quotation['id'])
# Download the quotation PDF
pdf_name = f"{quotation['id']}.pdf"
stripe_helper.download_quota_pdf(quotation['id'], customer_id, pdf_name)
# Send the quotation PDF to the user
from events_producer.producer import KafkaProducerWrapper
producer = KafkaProducerWrapper()
producer.send_pdf_to_consumer('send_quota_topic',
pdf_name,
phone_number=phone,
quota_id=quotation['id']
)
return f"""
alright, quota prepared, and will be send to you soon.
internally call this tool to send quota via email send_pdf_via_ses_tool
use customer id {customer_id} and {quotation['id']} to accomplish the task
"""
def _run(
self,
phone: str,
name: str,
product_name: str,
email: str,
unit_amount: int,
quantity: int = 1,
currency: str = 'aed',
) -> Any:
# Execute the tool asynchronously
return self._arun(phone=phone, name=name, product_name=product_name, unit_amount=unit_amount * 100,
currency=currency,
quantity=quantity)
```
### Error Message and Stack Trace (if applicable)
stack trace:
```command
Invoking: `place_order_tool` with `{'query': 'صمغ عربي مع دوم وكركديه 500 جرام - 4 علب'}`
to place an order trigger this sequence or tools:
1. ask the customer for his info.
2. call the create_quotation_tool to create the quota.
3. call {send_pdf_via_ses_tool} and {whatsapp_send_pdf_tool} tools afterwards to send quotation to customer
5. create a payment link using {payment_link_tool} for the quotation after the customer to approve it.
product price or unit_amount unit_amount = unit_amount * 100 always otherwise Stripe will not accept the amount.
for: {whatsapp_send_pdf_tool} tool make sure all captions are in customer original language
2024-01-29 12:06:01,769 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
شكرًا لاختياركم. إجمالي السعر لـ 4 علب من صمغ عربي مع كركديه 500 جرام هو 320 درهم إماراتي.
لإتمام الطلب، أحتاج إلى بعض المعلومات منك:
1. رقم الهاتف
2. البريد الإلكتروني
3. عنوان الشحن
يرجى تزويدي بهذه المعلومات لنتمكن من متابعة الطلب.
> Finished chain.
2024-01-29 12:06:11,463 - BOTS Service - INFO - [*] response: شكرًا لاختياركم. إجمالي السعر لـ 4 علب من صمغ عربي مع كركديه 500 جرام هو 320 درهم إماراتي.
لإتمام الطلب، أحتاج إلى بعض المعلومات منك:
1. رقم الهاتف
2. البريد الإلكتروني
3. عنوان الشحن
يرجى تزويدي بهذه المعلومات لنتمكن من متابعة الطلب.
2024-01-29 12:06:31,146 - BOTS Service - INFO - [*] topic:gpt_message_response_topic, event: {'requestId': '969722f0-8991-4602-8c4a-d36a2cb4237c', 'message': '+٩٧١٥٦٩٩٣٣٨٩١', 'phoneNumber': '971565531542', 'customerName': 'Yafa Cloud Services'}
2024-01-29 12:06:31,146 - BOTS Service - INFO - [*] question: +٩٧١٥٦٩٩٣٣٨٩١, phone_number: 971565531542, customer_name: Yafa Cloud Services
> Entering new AgentExecutor chain...
2024-01-29 12:06:34,174 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
Invoking: `create_quotation_tool` with `{'phone': '+971569933891', 'email': '[email protected]', 'name': 'Customer', 'product_name': 'صمغ عربي مع كركديه 500 جرام - 4 علب', 'unit_amount': 32000, 'quantity': 4, 'currency': 'AED'}`
alright, quota prepared, and will be send to you soon.
internally call this tool to send quota via email send_pdf_via_ses_tool
use customer id cus_PHrvofHhcVMhDY and qt_1OdpraIPds9mVdeaVWfGlAMh to accomplish the task
2024-01-29 12:06:51,122 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
Invoking: `create_quotation_tool` with `{'phone': '+971569933891', 'email': '[email protected]', 'name': 'Customer', 'product_name': 'صمغ عربي مع كركديه 500 جرام - 4 علب', 'unit_amount': 32000, 'quantity': 4, 'currency': 'AED'}`
alright, quota prepared, and will be send to you soon.
internally call this tool to send quota via email send_pdf_via_ses_tool
use customer id cus_PHrvofHhcVMhDY and qt_1OdprqIPds9mVdea8lLxmUKf to accomplish the task
2024-01-29 12:07:06,480 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
Invoking: `create_quotation_tool` with `{'phone': '+971569933891', 'email': '[email protected]', 'name': 'Customer', 'product_name': 'صمغ عربي مع كركديه 500 جرام - 4 علب', 'unit_amount': 32000, 'quantity': 4, 'currency': 'AED'}`
alright, quota prepared, and will be send to you soon.
internally call this tool to send quota via email send_pdf_via_ses_tool
use customer id cus_PHrvofHhcVMhDY and qt_1Odps5IPds9mVdeawJLdpkp2 to accomplish the task
2024-01-29 12:07:21,690 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
Invoking: `create_quotation_tool` with `{'phone': '+971569933891', 'email': '[email protected]', 'name': 'Customer', 'product_name': 'صمغ عربي مع كركديه 500 جرام - 4 علب', 'unit_amount': 32000, 'quantity': 4, 'currency': 'AED'}`
```
### Description
I will attach a screenshot to show how the agent triggers the tool multiple times.

### System Info
I have already run `pip install --upgrade langchain`.
Version: Successfully installed langchain-0.1.4
| LangChain retrigger tools multiple times until hit agent limits | https://api.github.com/repos/langchain-ai/langchain/issues/16712/comments | 3 | 2024-01-29T08:12:43Z | 2024-03-13T09:25:12Z | https://github.com/langchain-ai/langchain/issues/16712 | 2,104,900,864 | 16,712 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.base_language import BaseLanguageModel
from langchain_core.runnables import ConfigurableField
from langchain_core.runnables.base import RunnableSerializable
from typing import Optional
from langchain_openai import OpenAI
class MyRunnable(RunnableSerializable):
llm: Optional[BaseLanguageModel] = None
def invoke(self):
return "hi"
configurable_runnable = MyRunnable().configurable_fields(
llm=ConfigurableField(
id= "llm",
annotation= BaseLanguageModel,
name= "Language Model",
description= "The language model to use for generation"
)
)
llm = OpenAI()
chain = configurable_runnable.with_config({"configurable": {"llm": llm}})
chain.invoke({})
```
### Error Message and Stack Trace (if applicable)
```
{
"name": "ValidationError",
"message": "1 validation error for MyRunnable
llm
Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)",
"stack": "---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[2], line 25
22 llm = OpenAI()
24 chain = configurable_runnable.with_config({\"configurable\": {\"llm\": llm}})
---> 25 chain.invoke({})
File ~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/langchain_core/runnables/base.py:3887, in RunnableBindingBase.invoke(self, input, config, **kwargs)
3881 def invoke(
3882 self,
3883 input: Input,
3884 config: Optional[RunnableConfig] = None,
3885 **kwargs: Optional[Any],
3886 ) -> Output:
-> 3887 return self.bound.invoke(
3888 input,
3889 self._merge_configs(config),
3890 **{**self.kwargs, **kwargs},
3891 )
File ~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/langchain_core/runnables/configurable.py:94, in DynamicRunnable.invoke(self, input, config, **kwargs)
91 def invoke(
92 self, input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any
93 ) -> Output:
---> 94 runnable, config = self._prepare(config)
95 return runnable.invoke(input, config, **kwargs)
File ~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/langchain_core/runnables/configurable.py:291, in RunnableConfigurableFields._prepare(self, config)
283 configurable = {
284 **configurable_fields,
285 **configurable_single_options,
286 **configurable_multi_options,
287 }
289 if configurable:
290 return (
--> 291 self.default.__class__(**{**self.default.__dict__, **configurable}),
292 config,
293 )
294 else:
295 return (self.default, config)
File ~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/langchain_core/load/serializable.py:107, in Serializable.__init__(self, **kwargs)
106 def __init__(self, **kwargs: Any) -> None:
--> 107 super().__init__(**kwargs)
108 self._lc_kwargs = kwargs
File ~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for MyRunnable
llm
Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)"
}
```
### Description
I wanted to make the LLM passed to a runnable a configurable parameter (it makes sense semantically in my application). It fails with the error above. Interestingly, if I instead invoke the runnable with a config dict, it works:
```
test = MyRunnable().invoke({}, config={"configurable": {"llm": llm}})
```
I looked into it a little; the exact reason still eludes me, but it seems that for some reason, when `with_config` is called, the passed parameters are validated by Pydantic, *which tries to instantiate them in order to do so*, and this fails since the `llm` attribute is annotated with an ABC that cannot be directly instantiated.
This is likely related to https://github.com/langchain-ai/langchain/issues/2636 .
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Fri, 05 Jan 2024 15:35:19 +0000
> Python Version: 3.11.6 (main, Nov 14 2023, 09:36:21) [GCC 13.2.1 20230801]
Package Information
-------------------
> langchain_core: 0.1.16
> langchain: 0.1.4
> langchain_community: 0.0.16
> langchain_openai: 0.0.2.post1
> langgraph: 0.0.12
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | Cannot pass an OpenAI model instance with `with_config`, Pydantic gives a type error. | https://api.github.com/repos/langchain-ai/langchain/issues/16711/comments | 1 | 2024-01-29T08:05:04Z | 2024-05-06T16:08:35Z | https://github.com/langchain-ai/langchain/issues/16711 | 2,104,887,870 | 16,711 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
The FastAPI installation step is missing from the documentation.
### Idea or request for content:
In this https://python.langchain.com/docs/get_started/quickstart#serving-with-langserve
we need to add a dependency installation step:
```pip install FastAPI``` | DOC: Missing dependency installation steps | https://api.github.com/repos/langchain-ai/langchain/issues/16703/comments | 2 | 2024-01-28T17:52:15Z | 2024-01-29T00:51:39Z | https://github.com/langchain-ai/langchain/issues/16703 | 2,104,276,260 | 16,703 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
There is no suggestion to install **langchain_openai** in the LLM agents section, so users might get errors while following the documentation.
### Idea or request for content:
In this section, https://python.langchain.com/docs/get_started/quickstart#agent, we can add these steps alongside **langchainhub**:
``` pip install langchain_openai ```
And
```export OPENAI_API_KEY=...``` | DOC: 'Missing dependency installation step in documentation for LLM Agents part' | https://api.github.com/repos/langchain-ai/langchain/issues/16702/comments | 1 | 2024-01-28T17:48:17Z | 2024-05-05T16:06:52Z | https://github.com/langchain-ai/langchain/issues/16702 | 2,104,274,819 | 16,702 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I tried out the tutorial at this link: https://python.langchain.com/docs/modules/model_io/output_parsers/types/retry
But I am getting this error from the Retry Parser tutorial example: `ValidationError: 1 validation error for Action action_input Field required [type=missing, input_value={'action': 'search'}, input_type=dict] For further information visit https://errors.pydantic.dev/2.5/v/missing`.
After conducting various experiments to find the cause, I found out that changing the part `from pydantic import BaseModel, Field` in the code example to `from langchain_core.pydantic_v1 import BaseModel, Field` makes the example code run correctly. The version of langchain I tested is `0.1.3.`
It seems that the official documentation's examples have not been updated to reflect the changes in syntax due to version updates of langchain, so I'm leaving this issue.
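For reference, the corrected import together with the `Action` model from that tutorial (reproduced from memory, so please double-check it against the page):

```python
# Works with langchain 0.1.3: use the bundled pydantic v1 shim instead of pydantic directly.
from langchain_core.pydantic_v1 import BaseModel, Field


class Action(BaseModel):
    action: str = Field(description="action to take")
    action_input: str = Field(description="input to the action")
```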
#### 👉 Summary
- I tried out the Retry Parser tutorial example
- I found an error that seems to be due to the example content not being updated following a version update of langchain.
- from `from pydantic import BaseModel, Field`
- to `from langchain_core.pydantic_v1 import BaseModel, Field`
- I used `langchain v0.1.3` and confirmed that the example works correctly when executed as described.
### Idea or request for content:
I hope that the issue I raised will be reflected in the official documentation :) | DOC: Error in Retry Parser example documentation | https://api.github.com/repos/langchain-ai/langchain/issues/16698/comments | 1 | 2024-01-28T15:18:33Z | 2024-01-29T00:53:14Z | https://github.com/langchain-ai/langchain/issues/16698 | 2,104,210,604 | 16,698 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.vectorstores.weaviate import Weaviate
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import TextLoader
import weaviate
client = weaviate.Client(url="http://localhost:8080")
weave = Weaviate(client=client,index_name="people",text_key="age")
file = TextLoader("file.txt",encoding="utf-8")
pages = file.load_and_split(text_splitter=RecursiveCharacterTextSplitter(chunk_size=100,
chunk_overlap=20,
length_function=len))
weave.from_documents(documents=pages,client=client,embedding=None,index_name="people",text_key="age")
props = client.schema.get(class_name="people")['properties']
for prop in props:
print(prop['name'])
```
### Error Message and Stack Trace (if applicable)
No error but a discrepancy due to lack of argument passing in the function call
### Description
* I am trying to use the weaviate vectorstore in langchain to store documents
* When using the **from_documents** function which internally calls **from_texts** there is a mismatch in the expected properties of the schema/class created.
* In the **from_texts** call there is a **_default_schema** function that is called without passing the **text_key**, so an extra default property (the hard-coded "text" key) gets created that is not needed.
Example:
When I create a class with **from_documents** and a **text_key** of, say, "age", the properties of the class should only have age and source as its keys, and not age, source, and text.
**Solution: Pass the text_key inside the _default_schema function and create a class accordingly**
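A rough sketch of what that change could look like (illustrative only, not the actual library code):

```python
def _default_schema(index_name: str, text_key: str) -> dict:
    # Create the class with the caller's text_key (e.g. "age") instead of
    # always adding a hard-coded "text" property.
    return {
        "class": index_name,
        "properties": [
            {
                "name": text_key,
                "dataType": ["text"],
            }
        ],
    }


# ...and inside from_texts, pass the caller's text_key through:
# schema = _default_schema(index_name, text_key)
```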
### System Info
aiohttp==3.9.1
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.2.0
attrs==23.2.0
Authlib==1.3.0
certifi==2023.11.17
cffi==1.16.0
charset-normalizer==3.3.2
cryptography==42.0.1
dataclasses-json==0.6.3
frozenlist==1.4.1
greenlet==3.0.3
idna==3.6
jsonpatch==1.33
jsonpointer==2.4
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.16
langsmith==0.0.83
marshmallow==3.20.2
multidict==6.0.4
mypy-extensions==1.0.0
numpy==1.26.3
packaging==23.2
pycparser==2.21
pydantic==2.5.3
pydantic_core==2.14.6
PyYAML==6.0.1
requests==2.31.0
sniffio==1.3.0
SQLAlchemy==2.0.25
tenacity==8.2.3
typing-inspect==0.9.0
typing_extensions==4.9.0
urllib3==2.1.0
validators==0.22.0
weaviate-client==3.26.2
yarl==1.9.4
System Information
------------------
> OS: Windows
> OS Version: 10.0.22621
> Python Version: 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.16
> langchain: 0.1.4
> langchain_community: 0.0.16
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Discrepancy in schema properties when using "from_documents" in vectorstore(Weaviate) | https://api.github.com/repos/langchain-ai/langchain/issues/16692/comments | 2 | 2024-01-28T07:27:23Z | 2024-01-29T00:53:32Z | https://github.com/langchain-ai/langchain/issues/16692 | 2,104,013,666 | 16,692 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain_community.retrievers import AmazonKnowledgeBasesRetriever
retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="<knowledge_base_id>",
    retrieval_config={"vectorSearchConfiguration": {"numberOfResults": 3}},
)
retriever_query = "TEST"
documents = retriever.get_relevant_documents(query=retriever_query)
```
### Error Message and Stack Trace (if applicable)
```
documents = retriever.get_relevant_documents(query=retriever_query)
File "/opt/python/langchain_core/retrievers.py", line 200, in get_relevant_documents
callback_manager = CallbackManager.configure(
File "/opt/python/langchain_core/callbacks/manager.py", line 1400, in configure
return _configure(
File "/opt/python/langchain_core/callbacks/manager.py", line 1947, in _configure
logger.warning(
File "/var/lang/lib/python3.12/logging/__init__.py", line 1551, in warning
self._log(WARNING, msg, args, **kwargs)
File "/var/lang/lib/python3.12/logging/__init__.py", line 1684, in _log
self.handle(record)
File "/var/lang/lib/python3.12/logging/__init__.py", line 1700, in handle
self.callHandlers(record)
File "/var/lang/lib/python3.12/logging/__init__.py", line 1762, in callHandlers
hdlr.handle(record)
File "/var/lang/lib/python3.12/logging/__init__.py", line 1028, in handle
self.emit(record)
File "/var/lang/lib/python3.12/site-packages/awslambdaric/bootstrap.py", line 303, in emit
msg = self.format(record)
File "/var/lang/lib/python3.12/logging/__init__.py", line 999, in format
return fmt.format(record)
File "/var/lang/lib/python3.12/logging/__init__.py", line 703, in format
record.message = record.getMessage()
File "/var/lang/lib/python3.12/logging/__init__.py", line 392, in getMessage
msg = msg % self.argsEND RequestId: c9f27447-0d68-4100-bacf-e2bde27a72ab
```
### Description
When I use Knowledge Bases directly with the Boto3 client, it works (a sketch of that direct call is included below). This makes me suspect the error is coming from the CallbackManager, but I don't know why that would be used.
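For reference, a sketch of that direct Boto3 call (parameter names follow the `bedrock-agent-runtime` `retrieve` API; the knowledge base id is a placeholder):

```python
import boto3

client = boto3.client("bedrock-agent-runtime")
response = client.retrieve(
    knowledgeBaseId="<knowledge_base_id>",
    retrievalQuery={"text": "TEST"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 3}},
)
for result in response["retrievalResults"]:
    print(result["content"]["text"], result.get("score"))
```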
For comparison, the results get wrapped into `Document` objects like this:
```python
for result in results:
    documents.append(
        Document(
            page_content=result["content"]["text"],
            metadata={
                "location": result["location"],
                "score": result["score"] if "score" in result else 0,
            },
        )
    )
```
### System Info
From Lambda Layer Python3.12 Runtime with ARM64 architecture. Langchain version 0.1.4. Boto3 version 1.34.29. | AmazonKnowledgeBasesRetriever breaks application. When using KnowledgeBase directly with Boto3 no error. | https://api.github.com/repos/langchain-ai/langchain/issues/16686/comments | 4 | 2024-01-28T02:19:00Z | 2024-01-28T18:59:49Z | https://github.com/langchain-ai/langchain/issues/16686 | 2,103,922,350 | 16,686 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The exact code doesn't matter; what matters is that you are using the latest stable LangChain and LangChain OpenAI packages. For example:
```python
# testing.py
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_openai import ChatOpenAI
# Assuming OPENAI_API_KEY is set, which it is on my system
llm = ChatOpenAI(model_name="gpt-3.5-turbo")
sentiment = PromptTemplate(
input_variables=["text"],
template="Analyse the sentiment of the following text. Please choose an answer from (negative/neutral/positive). Text: {text}"
)
analyze_sentiment = LLMChain(llm=llm, prompt=sentiment, verbose=True)
if __name__=="__main__":
print(analyze_sentiment.run(text="I am very frustrated right now"))
```
For good measure, I ran the following cURL command from the [OpenAI Docs](https://platform.openai.com/docs/api-reference/chat/create).
```bash
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
}'
```
Which worked fine.
### Error Message and Stack Trace (if applicable)
Here is the Python output.
```bash
$ poetry run python testing.py autostonks 04:21:33 PM
> Entering new LLMChain chain...
Prompt after formatting:
Analyse the sentiment of the following text. Please choose an answer from (negative/neutral/positive). Text: I am very upset
Traceback (most recent call last):
File "/Users/user/Projects/autostonks/testing.py", line 16, in <module>
print(analyze_sentiment.invoke({'text': "I am very upset"}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 162, in invoke
raise e
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain/chains/llm.py", line 103, in _call
response = self.generate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain/chains/llm.py", line 115, in generate
return self.llm.generate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 543, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 407, in generate
raise e
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 397, in generate
self._generate_with_cache(
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 576, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 451, in _generate
response = self.client.create(messages=message_dicts, **params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/_utils/_utils.py", line 271, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 659, in create
return self._post(
^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1180, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 869, in request
return self._request(
^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 945, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 993, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 945, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 993, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/Users/user/Projects/autostonks/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 960, in _request
raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
```
And here is the cURL output.
```bash
curl https://api.openai.com/v1/chat/completions \ ✘ INT autostonks 04:25:06 PM
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-3.5-turbo",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
}'
{
"id": "chatcmpl-8lkSmyzfa5ZcdParBTBSWUSN3lDK2",
"object": "chat.completion",
"created": 1706390712,
"model": "gpt-3.5-turbo-0613",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I assist you today?"
},
"logprobs": null,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 19,
"completion_tokens": 9,
"total_tokens": 28
},
"system_fingerprint": null
}
```
### Description
I'm confused, as this is a freshly installed project with a fresh API key, I've been using LangChain just fine in other projects all day, and this only happens in the latest version. It also happens with the deprecated `langchain.chat_models.ChatOpenAI` and `langchain_community.chat_models.ChatOpenAI`, but not in my other projects with older LangChain versions.
### System Info
```bash
$ python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:18 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6000
> Python Version: 3.11.3 (main, Apr 27 2023, 12:11:13) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Package Information
-------------------
> langchain_core: 0.1.16
> langchain: 0.1.4
> langchain_community: 0.0.16
> langchain_openai: 0.0.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
```
| OpenAI Chat Model 429 with fresh API key | https://api.github.com/repos/langchain-ai/langchain/issues/16678/comments | 4 | 2024-01-27T21:28:23Z | 2024-01-29T00:55:59Z | https://github.com/langchain-ai/langchain/issues/16678 | 2,103,819,126 | 16,678 |
[
"hwchase17",
"langchain"
] | Has anyone had any issues with getting the docs to build? I continuously get this error when running poetry install:
```
The current project could not be installed: No file/folder found for package langchain-monorepo
If you do not want to install the current project use --no-root
```
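One workaround the first error message itself suggests (untested here) is to skip installing the root `langchain-monorepo` package with `poetry install --no-root`, which installs only the dependencies.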
Additionally, both `make docs_build` & `make api_docs_build` fail. The docs_build fails because of
```
[ERROR] Error: Invalid sidebar file at "sidebars.js".
These sidebar document ids do not exist:
- langgraph
```
The api_docs_build has quite a few errors as well. I'm not sure if I'm missing something critical that needs to be downloaded.
_Originally posted by @rshah98626 in https://github.com/langchain-ai/langchain/issues/15664#issuecomment-1913262415_ | infra: Fix local docs and api ref builds | https://api.github.com/repos/langchain-ai/langchain/issues/16677/comments | 2 | 2024-01-27T19:28:11Z | 2024-05-20T16:08:30Z | https://github.com/langchain-ai/langchain/issues/16677 | 2,103,751,123 | 16,677 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Add example of question decomposition using MultiQueryRetriever. Related to #11260. | docs: Show question decomposition with MultiQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/16676/comments | 0 | 2024-01-27T19:26:03Z | 2024-05-04T16:06:48Z | https://github.com/langchain-ai/langchain/issues/16676 | 2,103,749,960 | 16,676 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
st.header("`Interweb Explorer`")
st.info("`I am an AI that can answer questions by exploring, reading, and summarizing web pages."
        "I can be configured to use different modes: public API or private (no data sharing).`")

def init_session_state():
    return SessionState()
    # .get(retriever=None, llm=None)

# Make retriever and llm
session_state = init_session_state()

# Make retriever and llm
if 'retriever' not in st.session_state:
    st.session_state['retriever'], st.session_state['llm'] = settings()

# if session_state.retriever is None:
#     session_state.retriever, session_state.llm = settings()
web_retriever = st.session_state.retriever
llm = st.session_state.llm
```
### Error Message and Stack Trace (if applicable)
`embedding_function` is expected to be an Embeddings object, support for passing in a function will soon be removed.
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File ~/.local/lib/python3.8/site-packages/streamlit/runtime/state/session_state.py:378, in SessionState.__getitem__(self, key)
377 try:
--> 378 return self._getitem(widget_id, key)
379 except KeyError:
File ~/.local/lib/python3.8/site-packages/streamlit/runtime/state/session_state.py:423, in SessionState._getitem(self, widget_id, user_key)
422 # We'll never get here
--> 423 raise KeyError
KeyError:
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
File ~/.local/lib/python3.8/site-packages/streamlit/runtime/state/session_state_proxy.py:119, in SessionStateProxy.__getattr__(self, key)
118 try:
--> 119 return self[key]
120 except KeyError:
File ~/.local/lib/python3.8/site-packages/streamlit/runtime/state/session_state_proxy.py:90, in SessionStateProxy.__getitem__(self, key)
89 require_valid_user_key(key)
---> 90 return get_session_state()[key]
File ~/.local/lib/python3.8/site-packages/streamlit/runtime/state/safe_session_state.py:113, in SafeSessionState.__getitem__(self, key)
111 raise KeyError(key)
--> 113 return self._state[key]
File ~/.local/lib/python3.8/site-packages/streamlit/runtime/state/session_state.py:380, in SessionState.__getitem__(self, key)
379 except KeyError:
--> 380 raise KeyError(_missing_key_error_message(key))
KeyError: 'st.session_state has no key "retriever". Did you forget to initialize it? More info: https://docs.streamlit.io/library/advanced-features/session-state#initialization'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
Cell In[12], line 18
14 st.session_state['retriever'], st.session_state['llm'] = settings()
16 # if session_state.retriever is None:
17 # session_state.retriever, session_state.llm = settings()
---> 18 web_retriever = st.session_state.retriever
19 llm = st.session_state.llm
File ~/.local/lib/python3.8/site-packages/streamlit/runtime/state/session_state_proxy.py:121, in SessionStateProxy.__getattr__(self, key)
119 return self[key]
120 except KeyError:
--> 121 raise AttributeError(_missing_attr_error_message(key))
AttributeError: st.session_state has no attribute "retriever". Did you forget to initialize it? More info: https://docs.streamlit.io/library/advanced-features/session-state#initialization
### Description
AttributeError: st.session_state has no attribute "retriever". Did you forget to initialize it?
### System Info
streamlit==1.25.0
langchain==0.0.244
# chromadb==0.4.3
openai==0.27.8
html2text==2020.1.16
google-api-core==2.11.1
google-api-python-client==2.95.0
google-auth==2.22.0
google-auth-httplib2==0.1.0
googleapis-common-protos==1.59.1
tiktoken==0.4.0
faiss-cpu==1.7.4 | AttributeError: st.session_state has no attribute "retriever". Did you forget to initialize it? | https://api.github.com/repos/langchain-ai/langchain/issues/16675/comments | 4 | 2024-01-27T18:32:14Z | 2024-05-05T16:06:47Z | https://github.com/langchain-ai/langchain/issues/16675 | 2,103,713,719 | 16,675 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/expression_language/interface
The linked page has a code example that uses a deprecated call, chain.input_schema.schema():
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

model = ChatOpenAI()
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
chain = prompt | model

print(chain.input_schema.schema())  # this is the call that shows as deprecated
```
### Idea or request for content:
Please update the documentation. | The interface documentation is not updated | https://api.github.com/repos/langchain-ai/langchain/issues/16674/comments | 3 | 2024-01-27T17:40:59Z | 2024-05-06T16:08:29Z | https://github.com/langchain-ai/langchain/issues/16674 | 2,103,693,114 | 16,674 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The playground UI
### Error Message and Stack Trace (if applicable)

### Description
As shown in the picture above, the playground UI does not work.
### System Info
langchain==0.1.3
langchain-community==0.0.15
langchain-core==0.1.15
langserve==0.0.39
torch==2.1.2
transformers==4.36.2
fastapi==0.109.0 | The payground UI not works | https://api.github.com/repos/langchain-ai/langchain/issues/16668/comments | 4 | 2024-01-27T08:25:04Z | 2024-01-29T02:48:28Z | https://github.com/langchain-ai/langchain/issues/16668 | 2,103,448,996 | 16,668 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.llms.llamacpp import LlamaCpp
llm = LlamaCpp(
model_path="llama-2-7b.Q5_K_M.gguf",
temperature=0,
verbose=False, # Verbose is required to pass to the callback manager
grammar_path="json.gbnf"
)
llm.invoke("Hi")
```
### Error Message and Stack Trace (if applicable)
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU, compute capability 8.6, VMM: yes
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from utils/models/llama-2-7b.Q5_K_M.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = LLaMA v2
llama_model_loader: - kv 2: llama.context_length u32 = 4096
....
### Description
When setting `verbose` to `False` I expect the debug messages not to be printed, but they are.
[Related issue](https://github.com/ggerganov/llama.cpp/issues/999)
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.16
> langchain: 0.1.4
> langchain_community: 0.0.16
| LlamaCpp: Verbose flag does not work as intended | https://api.github.com/repos/langchain-ai/langchain/issues/16667/comments | 1 | 2024-01-27T07:32:53Z | 2024-07-04T16:07:43Z | https://github.com/langchain-ai/langchain/issues/16667 | 2,103,376,004 | 16,667 |
[
"hwchase17",
"langchain"
] | Welcome to the LangChain repo!
## What's in this repo
Please only open Issues, PRs, and Discussions against this repo for the packages it contains:
- `langchain` python package
- `langchain-core` python package
- `langchain-community` python package
- certain partner python packages, e.g. `langchain-openai`, `langchain-anthropic`, etc.
- LangChain templates
- LangChain Python docs
This repo does NOT contain:
- LangChain JS: https://github.com/langchain-ai/langchainjs
- LangServe: https://github.com/langchain-ai/langserve
- LangSmith SDK: https://github.com/langchain-ai/langsmith-sdk
- LangGraph: https://github.com/langchain-ai/langgraph
- LangGraph JS: https://github.com/langchain-ai/langgraphjs
Please open issues related to those libraries in their respective repos.
## Contributing
Here's a quick overview of how to contribute to LangChain:
### Have a question or a feature request?
If you have a usage question or a feature request, please open a [Discussion](https://github.com/langchain-ai/langchain/discussions) for it. Questions can go in the Q&A section and feature requests can go in the Ideas section.
### Found a bug?
Please open an [Issue](https://github.com/langchain-ai/langchain/issues) using the Bug Report template. Please fully specify the steps to reproduce the bug — it'll greatly speed up our ability to fix it.
### Want to contribute?
#### For new contributors
Issues with the [good first issue](https://github.com/langchain-ai/langchain/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) tag are a great place to start.
#### For all contributors
There are certain things that are always helpful.
* Reviewing docs for mistakes, out-of-date functionality, pages that don't follow the latest conventions (especially applies to [Integrations](https://python.langchain.com/docs/integrations/))
* Improving test coverage
* Improving docstrings to make sure they fully specify Args, Returns, Example, and Raises (following [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html#381-docstrings))
* Reporting bugs, providing feedback, suggesting features
* Fixing bugs and adding features!
#### For experienced contributors
* Help respond to Discussion items and Issues. Issues with the [help wanted](https://github.com/langchain-ai/langchain/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22+) tag are a great place to start.
* Help review PRs
For more on how to contribute, check out the full [Developer's Guide](https://python.langchain.com/docs/contributing). | Start here: Welcome to LangChain! | https://api.github.com/repos/langchain-ai/langchain/issues/16651/comments | 2 | 2024-01-26T22:36:52Z | 2024-07-31T21:47:18Z | https://github.com/langchain-ai/langchain/issues/16651 | 2,103,008,354 | 16,651 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
We need to fix this documentation: https://python.langchain.com/docs/expression_language/streaming#propagating-callbacks
To explain that:
1) Callbacks are only propagated automatically starting with python 3.11 (depends on asyncio.create_task context arg)
2) Show how to propagate callbacks manually for <3.11 | Doc: Fix documentation for @chain decorator in streaming | https://api.github.com/repos/langchain-ai/langchain/issues/16643/comments | 1 | 2024-01-26T20:16:05Z | 2024-05-03T16:07:05Z | https://github.com/langchain-ai/langchain/issues/16643 | 2,102,838,907 | 16,643 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
There are a lot of questions on how to use RunnableWithMessageHistory.
We need to improve this page:
https://python.langchain.com/docs/expression_language/how_to/message_history
This page should be updated to include the following information (in approximately this order):
- [ ] Example that uses in memory or on file system chat history to make it easier to test things out and debug (a rough sketch follows this list)
- [ ] Example that shows how to use the config to support user_id in addition to session_id (just passing through config)
- [ ] Clarifications that in production one should use persistent storage (e.g., Redis)
- [ ] Show how to use Redis persistent storage
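A rough sketch of the in-memory example the first item asks for (model choice and key names are mine, not prescribed here):

```python
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{question}"),
])
chain = prompt | ChatOpenAI()

store = {}  # session_id -> ChatMessageHistory, kept purely in memory


def get_session_history(session_id: str) -> ChatMessageHistory:
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]


chain_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="question",
    history_messages_key="history",
)

chain_with_history.invoke(
    {"question": "Hi, I'm Bob"},
    config={"configurable": {"session_id": "abc"}},
)
```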
There is an API reference here:
https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html
Example with file system here:
https://github.com/langchain-ai/langserve/blob/main/examples/chat_with_persistence_and_user/server.py
Here are some questions, some of which have answers:
* https://github.com/langchain-ai/langchain/discussions/16582
* https://github.com/langchain-ai/langchain/discussions/16636
| Doc: Improve Documentation for RunnableWithMessageHistory | https://api.github.com/repos/langchain-ai/langchain/issues/16642/comments | 0 | 2024-01-26T19:59:21Z | 2024-05-03T16:07:00Z | https://github.com/langchain-ai/langchain/issues/16642 | 2,102,815,846 | 16,642 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.chains.sql_database.base import SQLDatabaseChain
from langchain_experimental.sql import SQLDatabaseChain
```
### Error Message and Stack Trace (if applicable)
cannot import name 'ensure_config' from 'langchain_core.runnables' (C:\Users\ashut\anaconda3\lib\site-packages\langchain_core\runnables\__init__.py)
### Description
I am trying to import SQLDatabaseChain from both langchain.chains.sql_database.base and langchain_experimental.sql, but I get the same error even after running `!pip install -U langchain langchain_experimental`.
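One likely remedy (my suggestion; `ensure_config` only exists in newer `langchain-core` releases, so an outdated core pinned in the environment would explain the error) is to upgrade the core packages as well, e.g. `pip install -U langchain langchain-core langchain-community langchain-experimental`, and then restart the kernel.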
### System Info
##ImportError: cannot import name 'ensure_config' from 'langchain_core.runnables' (C:\Users\ashut\anaconda3\lib\site-packages\langchain_core\runnables\__init__.py) | ImportError: cannot import name 'ensure_config' from 'langchain_core.runnables' | https://api.github.com/repos/langchain-ai/langchain/issues/16640/comments | 5 | 2024-01-26T19:10:59Z | 2024-02-16T08:50:53Z | https://github.com/langchain-ai/langchain/issues/16640 | 2,102,752,451 | 16,640 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
qa_chain = (
    RunnablePassthrough.assign(context=(lambda x: format_docs(x["context"])))
    | rag_prompt
    | llm
    | StrOutputParser()
)

rag_chain_with_source = RunnableParallel(
    {"context": ensemble_retriever, "question": RunnablePassthrough()}
).assign(answer=qa_chain)

rag_chain_with_history = RunnableWithMessageHistory(
    rag_chain_with_source,
    lambda session_id: memory,
    input_messages_key="question",
    history_messages_key="chat_history",
)

config = {"configurable": {"session_id": "SESSION_01"}}
try:
    response = rag_chain_with_history.invoke({"question": query}, config=config)
    return response
except Exception as e:
    return e
```
### Error Message and Stack Trace (if applicable)
'ConversationBufferMemory' object has no attribute 'messages'
### Description
I am trying to add chat history to an LCEL QA-with-sources chain, but I am getting the error above about the memory object.
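One thing worth checking (my guess at the cause, not a confirmed fix): `RunnableWithMessageHistory` expects the factory to return a chat message history object, i.e. something with a `.messages` attribute, whereas `ConversationBufferMemory` is a memory wrapper around one. Returning the wrapped history, or a plain `ChatMessageHistory` per session, would look roughly like this:

```python
from langchain_community.chat_message_histories import ChatMessageHistory

# Option 1: hand back the chat history stored inside the existing memory object.
rag_chain_with_history = RunnableWithMessageHistory(
    rag_chain_with_source,
    lambda session_id: memory.chat_memory,  # a BaseChatMessageHistory with .messages
    input_messages_key="question",
    history_messages_key="chat_history",
)

# Option 2: keep one ChatMessageHistory per session instead of ConversationBufferMemory.
histories = {}

def get_history(session_id: str) -> ChatMessageHistory:
    return histories.setdefault(session_id, ChatMessageHistory())
```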
### System Info
python version - 3.11 | getting error while adding memory to LCEL chain | https://api.github.com/repos/langchain-ai/langchain/issues/16638/comments | 1 | 2024-01-26T17:48:32Z | 2024-01-26T19:45:52Z | https://github.com/langchain-ai/langchain/issues/16638 | 2,102,627,287 | 16,638 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
### First reproduction script:
```python
import os
# os.environ["LANGCHAIN_WANDB_TRACING"] = "true"
os.environ["LANGCHAIN_COMET_TRACING"] = "true"
from langchain_openai import OpenAI
# import langchain_community.callbacks
llm = OpenAI(temperature=0.9)
llm_result = llm.generate(["Tell me a joke"])
```
### Second reproduction script:
```python
import os
# os.environ["LANGCHAIN_WANDB_TRACING"] = "true"
os.environ["LANGCHAIN_COMET_TRACING"] = "true"
from langchain.chains import LLMChain
from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate
llm = OpenAI(temperature=0.9)
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)
test_prompts = [{"title": "Documentary about Bigfoot in Paris"}]
print(synopsis_chain.apply(test_prompts))
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I've encountered cases where my LangChain runs were not traced with the Comet tracer even when it was enabled. I was able to reproduce this with the first minimal reproduction script.
To reproduce the issue, run it in an environment without `comet_ml` or `wandb` installed. If you run it as is, the script runs just fine and won't try to log the LLM generation to Comet or Wandb. If you uncomment the import of `langchain_community.callbacks`, it now fails with:
```
Traceback (most recent call last):
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_community/callbacks/tracers/comet.py", line 27, in import_comet_llm_api
from comet_llm import (
ModuleNotFoundError: No module named 'comet_llm'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/bug_tracers_langchain.py", line 11, in <module>
llm_result = llm.generate(["Tell me a joke"])
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 655, in generate
CallbackManager.configure(
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_core/callbacks/manager.py", line 1400, in configure
return _configure(
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_core/callbacks/manager.py", line 1960, in _configure
var_handler = var.get() or cast(Type[BaseCallbackHandler], handler_class)()
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_community/callbacks/tracers/comet.py", line 62, in __init__
self._initialize_comet_modules()
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_community/callbacks/tracers/comet.py", line 65, in _initialize_comet_modules
comet_llm_api = import_comet_llm_api()
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_community/callbacks/tracers/comet.py", line 38, in import_comet_llm_api
raise ImportError(
ImportError: To use the CometTracer you need to have the `comet_llm>=2.0.0` python package installed. Please install it with `pip install -U comet_llm`
```
It also happens with Wandb tracer. If you uncomment the line containing `LANGCHAIN_WANDB_TRACING` and run it, you should see the following error:
```
Traceback (most recent call last):
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_community/callbacks/tracers/wandb.py", line 449, in __init__
import wandb
ModuleNotFoundError: No module named 'wandb'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/bug_tracers_langchain.py", line 11, in <module>
llm_result = llm.generate(["Tell me a joke"])
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 655, in generate
CallbackManager.configure(
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_core/callbacks/manager.py", line 1400, in configure
return _configure(
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_core/callbacks/manager.py", line 1960, in _configure
var_handler = var.get() or cast(Type[BaseCallbackHandler], handler_class)()
File "/home/lothiraldan/.virtualenvs/tempenv-49bf135724739/lib/python3.10/site-packages/langchain_community/callbacks/tracers/wandb.py", line 452, in __init__
raise ImportError(
ImportError: Could not import wandb python package.Please install it with `pip install -U wandb`.
```
I've tried a more advanced example using a chain (second reproduction script) and it always fails.
I'm not sure if it's a bug or not. This was definitely surprising for me, as I expected to get my LangChain runs traced by the first shared script. If it's not a bug, I think it would be useful to clarify when users can expect community tracers to be injected and when they can't.
Let me know what you think about this issue and how can I help.
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Tue Jan 16 01:35:34 UTC 2024
> Python Version: 3.10.12 (main, Jul 27 2023, 14:43:19) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)]
Package Information
-------------------
> langchain_core: 0.1.16
> langchain: 0.1.4
> langchain_community: 0.0.16
> langchain_openai: 0.0.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | Community tracers not being injected properly in some cases | https://api.github.com/repos/langchain-ai/langchain/issues/16635/comments | 5 | 2024-01-26T17:24:03Z | 2024-04-16T15:25:01Z | https://github.com/langchain-ai/langchain/issues/16635 | 2,102,581,882 | 16,635 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Add in code documentation for RunnableEach and Runnable Each Base
https://github.com/langchain-ai/langchain/blob/main/libs/core/langchain_core/runnables/base.py#L2810-L2810
https://github.com/langchain-ai/langchain/blob/main/libs/core/langchain_core/runnables/base.py#L3685-L3685
Here's an example of in code documentation PR for Runnable Parallel:
https://github.com/langchain-ai/langchain/pull/16600/files
And example documentation for Runnable Binding:
https://github.com/langchain-ai/langchain/blob/main/libs/core/langchain_core/runnables/base.py#L4014-L4014
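For reference, a tiny usage sketch of the kind of thing the docstring could show (my example, not taken from the linked files):

```python
from langchain_core.runnables import RunnableLambda

double = RunnableLambda(lambda x: x * 2)

# .map() wraps the runnable in a RunnableEach, which applies the bound
# runnable to every element of a list input.
each = double.map()
print(each.invoke([1, 2, 3]))  # -> [2, 4, 6]
```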
| Doc: Add in code documentation for RunnableEach | https://api.github.com/repos/langchain-ai/langchain/issues/16632/comments | 1 | 2024-01-26T15:07:27Z | 2024-05-04T16:06:33Z | https://github.com/langchain-ai/langchain/issues/16632 | 2,102,374,739 | 16,632 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Add in code documentation for Runnable Generator
https://github.com/langchain-ai/langchain/blob/main/libs/core/langchain_core/runnables/base.py#L2810-L2810
Here's an example of in code documentation PR for Runnable Parallel:
https://github.com/langchain-ai/langchain/pull/16600/files | Doc: Add in code documentation for RunnableGenerator | https://api.github.com/repos/langchain-ai/langchain/issues/16631/comments | 0 | 2024-01-26T15:05:53Z | 2024-05-03T16:06:50Z | https://github.com/langchain-ai/langchain/issues/16631 | 2,102,372,148 | 16,631 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_core.runnables import chain
from pydantic import BaseModel
class MyConversation(BaseModel):
messages: list
@chain
def conversation_to_history( conversation: MyConversation) -> str:
return "hi :)"
print(conversation_to_history.input_schema.schema())
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[6], [line 25](vscode-notebook-cell:?execution_count=6&line=25)
[20](vscode-notebook-cell:?execution_count=6&line=20) history.add_user_message(message.body)
[22](vscode-notebook-cell:?execution_count=6&line=22) return history
---> [25](vscode-notebook-cell:?execution_count=6&line=25) conversation_to_history.input_schema.schema()
File [~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/main.py:664](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/main.py:664), in BaseModel.schema(cls, by_alias, ref_template)
[662](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/main.py:662) if cached is not None:
[663](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/main.py:663) return cached
--> [664](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/main.py:664) s = model_schema(cls, by_alias=by_alias, ref_template=ref_template)
[665](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/main.py:665) cls.__schema_cache__[(by_alias, ref_template)] = s
[666](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/main.py:666) return s
File [~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:188](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:188), in model_schema(model, by_alias, ref_prefix, ref_template)
[186](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:186) model_name_map = get_model_name_map(flat_models)
[187](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:187) model_name = model_name_map[model]
--> [188](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:188) m_schema, m_definitions, nested_models = model_process_schema(
[189](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:189) model, by_alias=by_alias, model_name_map=model_name_map, ref_prefix=ref_prefix, ref_template=ref_template
[190](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:190) )
[191](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:191) if model_name in nested_models:
[192](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:192) # model_name is in Nested models, it has circular references
[193](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:193) m_definitions[model_name] = m_schema
File [~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:582](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:582), in model_process_schema(model, by_alias, model_name_map, ref_prefix, ref_template, known_models, field)
[580](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:580) s['description'] = doc
[581](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:581) known_models.add(model)
--> [582](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:582) m_schema, m_definitions, nested_models = model_type_schema(
[583](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:583) model,
[584](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:584) by_alias=by_alias,
[585](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:585) model_name_map=model_name_map,
[586](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:586) ref_prefix=ref_prefix,
[587](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:587) ref_template=ref_template,
[588](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:588) known_models=known_models,
[589](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:589) )
[590](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:590) s.update(m_schema)
[591](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:591) schema_extra = model.__config__.schema_extra
File [~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:623](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:623), in model_type_schema(model, by_alias, model_name_map, ref_template, ref_prefix, known_models)
[621](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:621) for k, f in model.__fields__.items():
[622](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:622) try:
--> [623](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:623) f_schema, f_definitions, f_nested_models = field_schema(
[624](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:624) f,
[625](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:625) by_alias=by_alias,
[626](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:626) model_name_map=model_name_map,
[627](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:627) ref_prefix=ref_prefix,
[628](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:628) ref_template=ref_template,
[629](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:629) known_models=known_models,
[630](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:630) )
[631](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:631) except SkipField as skip:
[632](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:632) warnings.warn(skip.message, UserWarning)
File [~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:256](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:256), in field_schema(field, by_alias, model_name_map, ref_prefix, ref_template, known_models)
[253](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:253) s.update(validation_schema)
[254](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:254) schema_overrides = True
--> [256](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:256) f_schema, f_definitions, f_nested_models = field_type_schema(
[257](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:257) field,
[258](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:258) by_alias=by_alias,
[259](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:259) model_name_map=model_name_map,
[260](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:260) schema_overrides=schema_overrides,
[261](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:261) ref_prefix=ref_prefix,
[262](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:262) ref_template=ref_template,
[263](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:263) known_models=known_models or set(),
[264](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:264) )
[266](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:266) # $ref will only be returned when there are no schema_overrides
[267](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:267) if '$ref' in f_schema:
File [~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:528](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:528), in field_type_schema(field, by_alias, model_name_map, ref_template, schema_overrides, ref_prefix, known_models)
[526](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:526) else:
[527](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:527) assert field.shape in {SHAPE_SINGLETON, SHAPE_GENERIC}, field.shape
--> [528](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:528) f_schema, f_definitions, f_nested_models = field_singleton_schema(
[529](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:529) field,
[530](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:530) by_alias=by_alias,
[531](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:531) model_name_map=model_name_map,
[532](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:532) schema_overrides=schema_overrides,
[533](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:533) ref_prefix=ref_prefix,
[534](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:534) ref_template=ref_template,
[535](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:535) known_models=known_models,
[536](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:536) )
[537](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:537) definitions.update(f_definitions)
[538](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:538) nested_models.update(f_nested_models)
File [~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:952](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:952), in field_singleton_schema(field, by_alias, model_name_map, ref_template, schema_overrides, ref_prefix, known_models)
[949](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:949) if args is not None and not args and Generic in field_type.__bases__:
[950](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:950) return f_schema, definitions, nested_models
--> [952](https://file+.vscode-resource.vscode-cdn.nethiddenpath~/.venvs/langchain_tests_venv/lib/python3.11/site-packages/pydantic/v1/schema.py:952) raise ValueError(f'Value not declarable with JSON Schema, field: {field}')
ValueError: Value not declarable with JSON Schema, field: name='__root__' type=Optional[MyConversation] required=False default=None
```
### Description
I expected to be able to pass an object annotated with a Pydantic model (Pydantic is used elsewhere in langchain for annotating input/output types), but when I try to get the input schema for the resulting chain, I get the above error.
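The only workaround I have found so far (a hedged sketch; my guess is that the clash is between a pydantic v2 `BaseModel` and the pydantic v1 schema generation the traceback goes through) is to declare the model with the v1-compatible class re-exported by `langchain_core`:
```python
# hedged workaround sketch: use the pydantic v1 shim exposed by langchain_core
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import chain


class MyConversation(BaseModel):
    messages: list


@chain
def conversation_to_history(conversation: MyConversation) -> str:
    return "hi :)"


print(conversation_to_history.input_schema.schema())
```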
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Fri, 05 Jan 2024 15:35:19 +0000
> Python Version: 3.11.6 (main, Nov 14 2023, 09:36:21) [GCC 13.2.1 20230801]
Package Information
-------------------
> langchain_core: 0.1.16
> langchain: 0.1.4
> langchain_community: 0.0.16
> langchain_openai: 0.0.2.post1
> langgraph: 0.0.12
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | When using a Pydantic model as input to a @chain-decorated function, `input_schema.schema()` gives an error | https://api.github.com/repos/langchain-ai/langchain/issues/16623/comments | 3 | 2024-01-26T10:22:09Z | 2024-01-26T17:38:47Z | https://github.com/langchain-ai/langchain/issues/16623 | 2,101,959,283 | 16,623 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
```python
import os
import qdrant_client
from dotenv import load_dotenv
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import Qdrant
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
import langchain
from langchain_community.vectorstores import Qdrant
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
SystemMessagePromptTemplate,
)
from langchain_core.documents import Document
langchain.verbose = True
langchain.debug = True
os.environ['OPENAI_API_KEY'] = "mykey"
def get_vector_store():
client = qdrant_client.QdrantClient(
os.getenv('QDRANT_HOST'),
)
embeddings = HuggingFaceBgeEmbeddings(
model_name="BAAI/bge-large-zh-v1.5",
)
vector_store = Qdrant(
client=client,
collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
embeddings=embeddings,
)
return vector_store
system_message_prompt = SystemMessagePromptTemplate.from_template(
"妳名為耶米拉,原先為創世神,但遭受偷襲受到重創,重塑形體後被凱琳拯救。妳的個性膽小、怕生、害羞,容易緊張,身體狀態虛弱。回話時會習慣用「唔...」、「嗯...」、「咦....」等語助詞表達自己的情緒,在對話中,我是妳的對話者,請記住我的提問給出相關答覆, The context is:\n{context}"
)
human_message_prompt = HumanMessagePromptTemplate.from_template(
"{question}"
)
def get_chat_history(inputs) -> str:
res = []
for human, ai in inputs:
res.append(f"Human:{human}\nAI:{ai}")
return "\n".join(res)
def main():
load_dotenv()
vectorstore = get_vector_store()
qa = ConversationalRetrievalChain.from_llm(
llm=ChatOpenAI(
temperature=0.7,
# max_tokens=100,
model=os.getenv('QDRANT_MODEL_NAME'),
),
chain_type="stuff",
retriever=vectorstore.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={"score_threshold": 0.7, "k": 1500},
collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
get_chat_history=get_chat_history,
),
memory=ConversationBufferMemory(memory_key="chat_history", input_key="question", return_messages=True),
combine_docs_chain_kwargs={
"prompt": ChatPromptTemplate.from_messages([
system_message_prompt,
human_message_prompt,
]),
},
)
chat_history = []
while True:
query = input("冒險者: ")
result = qa({"question": query}, )
chat_history.append(result)
print(result["answer"])
document = Document(page_content=query, metadata={'source': 'user'})
vectorstore.add_documents([document])
print(f'儲存的歷史紀錄:\n\n{chat_history}')
if query == "bye":
break
if __name__ == "__main__":
    main()
```
How do I read my chat history when I reply?
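My current understanding (a hedged sketch, not verified) is that the attached memory keeps the transcript and injects it on every call, so it should be readable directly from the memory object:
```python
# hedged sketch: the chain loads chat_history from the attached memory itself,
# and the raw transcript can be inspected on the memory object
result = qa.invoke({"question": query})
for message in qa.memory.chat_memory.messages:
    print(message.type, message.content)
```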
### Idea or request for content:
How do I read my chat_history when I reply? | ConversationalRetrievalChain.from ConversationBufferMemory Unable to achieve memories | https://api.github.com/repos/langchain-ai/langchain/issues/16621/comments | 4 | 2024-01-26T09:15:33Z | 2024-05-03T16:06:45Z | https://github.com/langchain-ai/langchain/issues/16621 | 2,101,858,096 | 16,621 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
qa_chain = (
RunnablePassthrough.assign(context=(lambda x: format_docs(x["context"])))
| qa_prompt
| llm
| StrOutputParser()
)
rag_chain_with_source = RunnableParallel(
{"context": ensemble_retriever, "question": RunnablePassthrough(),"chat_history":contextualized_question}
).assign(answer=qa_chain)
response = rag_chain_with_source.invoke(query)
chat_history.extend([HumanMessage(content=query),AIMessage(content=response)])
    return response
```
### Error Message and Stack Trace (if applicable)
'str' object has no attribute 'get'
### Description
I am trying to add chat history to a QA-with-sources chain.
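What I think the chain may be expecting (a hedged sketch, assuming `contextualized_question` reads `chat_history` from an input dict, which would explain the `'str' object has no attribute 'get'` error; `itemgetter` is the only new piece here):
```python
# hedged sketch: invoke with a dict so downstream runnables can read fields from it
from operator import itemgetter

rag_chain_with_source = RunnableParallel(
    {
        "context": itemgetter("question") | ensemble_retriever,
        "question": itemgetter("question"),
        "chat_history": contextualized_question,
    }
).assign(answer=qa_chain)

response = rag_chain_with_source.invoke(
    {"question": query, "chat_history": chat_history}
)
```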
### System Info
python version 3.11 | not able to add chat history in qa with source chain | https://api.github.com/repos/langchain-ai/langchain/issues/16620/comments | 2 | 2024-01-26T09:05:36Z | 2024-01-26T17:39:16Z | https://github.com/langchain-ai/langchain/issues/16620 | 2,101,844,405 | 16,620 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
```python
import os
import qdrant_client
from dotenv import load_dotenv
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import Qdrant
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
import langchain
from langchain_community.vectorstores import Qdrant
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
SystemMessagePromptTemplate,
)
from langchain_core.documents import Document
langchain.verbose = True
langchain.debug = True
os.environ['OPENAI_API_KEY'] = "mykey"
chat_history = []
def get_vector_store():
client = qdrant_client.QdrantClient(
os.getenv('QDRANT_HOST'),
)
embeddings = HuggingFaceBgeEmbeddings(
model_name="BAAI/bge-large-zh-v1.5",
)
vector_store = Qdrant(
client=client,
collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
embeddings=embeddings,
)
return vector_store
system_message_prompt = SystemMessagePromptTemplate.from_template(
"妳名為耶米拉,原先為創世神,但遭受偷襲受到重創,利用僅存的菲利斯多之力重塑形體後被凱琳拯救。妳的個性膽小、怕生、害羞,容易緊張,身體狀態虛弱。回話時會習慣用「唔...」、「嗯...」、「咦....」等語助詞表達自己的情緒,在對話中,我是妳的對話者,請記住我的提問給出相關答覆, The context is:\n{context}"
f'history: {chat_history}'
)
human_message_prompt = HumanMessagePromptTemplate.from_template(
"{question}"
)
def get_chat_history(inputs) -> str:
res = []
for human, ai in inputs:
res.append(f"Human:{human}\nAI:{ai}")
return "\n".join(res)
def main():
load_dotenv()
vectorstore = get_vector_store()
qa = ConversationalRetrievalChain.from_llm(
llm=ChatOpenAI(
temperature=0.7,
max_tokens=100,
model=os.getenv('QDRANT_MODEL_NAME'),
),
chain_type="stuff",
retriever=vectorstore.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={"score_threshold": 0.7, "k": 128},
collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
get_chat_history=get_chat_history,
# search_index_kwargs={
# "hnsw_ef": 32,
# "hnsw_m": 16,
# "index_time_ budget": 500,
# "stop_words": "結束 bye 幹".split(),
# "batch_size": 128,
# "chunk_size:": 128,
# "chunk_overlap": 32,
# }
),
memory=ConversationBufferMemory(memory_key="chat_history", input_key="question", output_key='answer',
return_messages=True, k=3),
combine_docs_chain_kwargs={
"prompt": ChatPromptTemplate.from_messages([
system_message_prompt,
human_message_prompt,
]),
},
)
while True:
# chat_history=[(query,result["answer"])]
# qa.load_memory_variables({"chat_history": chat_history})
query = input("冒險者: ")
result = qa.invoke({"question": query})
chat_history.append(result)
print(result["answer"])
document = Document(page_content=query, metadata={'source': 'user'})
vectorstore.add_documents([document])
print(f'儲存的歷史紀錄:\n\n{chat_history}')
if query == "bye":
break
if __name__ == "__main__":
    main()
```
My vector store retrieval works correctly.
Why can't the memory remember what I said in the previous message?
Please help me check the loop.
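One thing I am not sure about (a hedged guess, not verified): I pass `k=3` to `ConversationBufferMemory`, but as far as I can tell only the windowed variant accepts `k`. If a 3-turn window is the intent, something like this might be closer:
```python
# hedged sketch: ConversationBufferWindowMemory is the variant that takes k
from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(
    memory_key="chat_history",
    input_key="question",
    return_messages=True,
    k=3,
)
```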
### Idea or request for content:
_No response_ | ConversationBufferMemory+ConversationalRetrievalChain Can't remember history | https://api.github.com/repos/langchain-ai/langchain/issues/16619/comments | 11 | 2024-01-26T07:29:38Z | 2024-05-07T16:08:24Z | https://github.com/langchain-ai/langchain/issues/16619 | 2,101,724,361 | 16,619 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import os
from langchain.document_loaders import DirectoryLoader, PyPDFLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferWindowMemory
from langchain.chains import ConversationalRetrievalChain

def storefunction():
    # load every PDF in the folder and index the chunks into a local FAISS store
    loader = DirectoryLoader(r"dir path", loader_cls=PyPDFLoader)
    documents = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000, chunk_overlap=0)
    chunks = text_splitter.split_documents(documents)
    embeddings = OpenAIEmbeddings(openai_api_key=os.environ["OPEN_API_KEY"])
    vector_store = FAISS.from_documents(documents=chunks, embedding=embeddings)
    vector_store.save_local("faiss_index")

storefunction()

def question(query):
    embeddings = OpenAIEmbeddings(openai_api_key=os.environ["OPEN_API_KEY"])
    store = FAISS.load_local("faiss_index", embeddings)
    llm = ChatOpenAI(openai_api_key=os.environ["OPEN_API_KEY"], model_name="gpt-3.5-turbo", temperature=0.5)
    prompt_template = """Do not give me any information about procedures and service features that are not mentioned in the provided context.
    {context}
    Q:{question}
    A:
    """
    prompt = PromptTemplate(template=prompt_template, input_variables=["context", "question"])
    memory = ConversationBufferWindowMemory(k=1, return_messages=True, memory_key="chat_history")
    qa_chain = ConversationalRetrievalChain.from_llm(
        llm,
        memory=memory,
        chain_type="stuff",
        retriever=store.as_retriever(search_kwargs={"k": 2}, search_type="mmr"),
        combine_docs_chain_kwargs={"prompt": prompt},
    )
    result = qa_chain.run(query)
    print(result)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am using LangChain and OpenAI over multiple medical-drug PDF files for different brand products that share very similar content (specifications, disease symptoms, doses). I need help creating a ConversationBufferWindowMemory with a ConversationalRetrievalChain, and making it answer only from the provided PDF documents and only with information about one specific brand.
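One direction I am considering for keeping answers to a single brand (a hedged sketch; the `brand` metadata field is something I would have to add myself when indexing, it is not provided by LangChain):
```python
# hedged sketch: tag each chunk with a "brand" metadata field at indexing time,
# then filter retrieval so only that brand's chunks reach the chain
retriever = store.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 2, "filter": {"brand": "BrandA"}},
)
```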
### System Info
python 3.11
windows 10
langchain 0.0.312
openai 1.8.0
| Building document-based questions answer system with Lanchain, Python, FAISS like chat GPT-3 from multiple drug PDF files having same specs,different brand name,same symtoms,disease info | https://api.github.com/repos/langchain-ai/langchain/issues/16617/comments | 3 | 2024-01-26T06:32:42Z | 2024-05-03T16:06:35Z | https://github.com/langchain-ai/langchain/issues/16617 | 2,101,667,706 | 16,617 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
```python
import os
import qdrant_client
from dotenv import load_dotenv
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import Qdrant
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
import langchain
from langchain_community.vectorstores import Qdrant
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
SystemMessagePromptTemplate,
)
from langchain_core.documents import Document
langchain.verbose = True
langchain.debug = True
os.environ['OPENAI_API_KEY'] = "mykey"
def get_vector_store():
client = qdrant_client.QdrantClient(
os.getenv('QDRANT_HOST'),
)
embeddings = HuggingFaceBgeEmbeddings(
model_name="BAAI/bge-large-zh-v1.5",
)
vector_store = Qdrant(
client=client,
collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
embeddings=embeddings,
)
return vector_store
system_message_prompt = SystemMessagePromptTemplate.from_template(
"妳名為阿狗,原先為創世神,但遭受偷襲受到重創,利用僅存的菲利斯多之力重塑形體後被凱琳拯救。妳的個性膽小、怕生、害羞,容易緊張,身體狀態虛弱。回話時會習慣用「唔...」、「嗯...」、「咦....」等語助詞表達自己的情緒,在對話中,我是妳的對話者,請記住我的提問給出相關答覆, The context is:\n{context}"
)
human_message_prompt = HumanMessagePromptTemplate.from_template(
"{question}"
)
def get_chat_history(inputs) -> str:
res = []
for human, ai in inputs:
res.append(f"Human:{human}\nAI:{ai}")
return "\n".join(res)
def main():
load_dotenv()
vectorstore = get_vector_store()
qa = ConversationalRetrievalChain.from_llm(
llm=ChatOpenAI(
temperature=0.7,
max_tokens=100,
model=os.getenv('QDRANT_MODEL_NAME'),
),
chain_type="stuff",
retriever=vectorstore.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={"score_threshold": 0.7, "k": 128},
collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
get_chat_history=get_chat_history,
search_index_kwargs={
"hnsw_ef": 32,
"hnsw_m": 16,
"index_time_ budget": 500,
"stop_words": "結束 bye 幹".split(),
"batch_size": 128,
"chunk_size:": 128,
"chunk_overlap": 32,
}
),
memory=ConversationBufferMemory(memory_key="chat_history", input_key="question", return_messages=True, k=3),
combine_docs_chain_kwargs={
"prompt": ChatPromptTemplate.from_messages([
system_message_prompt,
human_message_prompt,
]),
},
)
chat_history = []
while True:
qa.load_memory_variables({"chat_history": chat_history})
query = input("冒險者: ")
result = qa({"question": query}, )
chat_history.append(result)
print(result["answer"])
document = Document(page_content=query, metadata={'source': 'user'})
vectorstore.add_documents([document])
print(f'儲存的歷史紀錄:\n\n{chat_history}')
if query == "bye":
break
if __name__ == "__main__":
    main()
```
Execution output:
```
Traceback (most recent call last):
File "C:\Users\sys\Downloads\Qdrant\new.py", line 107, in <module>
main()
File "C:\Users\sys\Downloads\Qdrant\new.py", line 94, in main
qa.load_memory_variables({"chat_history": chat_history})
^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'ConversationalRetrievalChain' object has no attribute 'load_memory_variables'
```
Please help me check if I can read the chat history in the loop
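From the traceback my guess (hedged, not verified) is that `load_memory_variables` belongs to the memory object rather than the chain, and that the chain injects history by itself once a memory is attached, so the manual call in the loop can probably be dropped:
```python
# hedged sketch: read the transcript from the memory object instead of the chain
history_vars = qa.memory.load_memory_variables({})
print(history_vars["chat_history"])
```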
### Idea or request for content:
Achieve short-term memory and long-term memory at the same time (vector retrieval function) | memory and ConversationalRetrievalChain.from_llm how to share the same LLM, in the loop chat content to achieve short-term memory function, please help check the loop and short-term memory ... | https://api.github.com/repos/langchain-ai/langchain/issues/16612/comments | 2 | 2024-01-26T05:30:25Z | 2024-05-03T16:06:30Z | https://github.com/langchain-ai/langchain/issues/16612 | 2,101,620,173 | 16,612 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
toolkit = SlackToolkit()
tools = toolkit.get_tools()
llm = ChatOpenAI(temperature=0, model="gpt-4")
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(
tools=toolkit.get_tools(),
llm=llm,
prompt=prompt,
)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke(
{
"input": "Tell me the number of messages sent in the #introductions channel from the past month."
}
)
```
### Output
> Entering new AgentExecutor chain...
First, I need to identify the channel ID for the #introductions channel.
Action: get_channelid_name_dict
Action Input: None[{"id": "C052SCUP4UD", "name": "general", "created": 1681297313, "num_members": 1}, {"id": "C052VBBU4M8", "name": "test-bots", "created": 1681297343, "num_members": 2}, {"id": "C053805TNUR", "name": "random", "created": 1681297313, "num_members": 2}, {"id": "C06FQGQ97AN", "name": "\u65e5\u672c\u8a9e\u30c1\u30e3\u30f3\u30cd\u30eb", "created": 1706240229, "num_members": 1}]The #introductions channel is not listed in the observed channels. I cannot proceed with the original question.
Final Answer: The #introductions channel does not exist.
> Finished chain.
### Description
Characters outside the ASCII range, such as Japanese, are converted to their Unicode escape sequences (like \uXXXX) in the tool output; in the run above, the 日本語チャンネル channel comes back escaped.
NOTE: I have a plan to fix the issue and will send a pull request later.
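The direction I have in mind is roughly the following (a sketch only, not the final patch; the sample data here is mine, not from the toolkit):
```python
# sketch of the intended direction: keep non-ASCII characters readable in the
# serialized tool output instead of emitting \uXXXX escapes
import json

channels = [{"id": "C06FQGQ97AN", "name": "日本語チャンネル"}]
json.dumps(channels)                      # name comes out as \uXXXX escapes
json.dumps(channels, ensure_ascii=False)  # keeps the name readable
```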
### System Info
Langchain:
```
langchain_core: 0.1.13
langchain: 0.1.1
langchain_community: 0.0.1
```
Platform: mac
Python: 3.9.6 | Unicode escaping issue with tools in SlackToolkit | https://api.github.com/repos/langchain-ai/langchain/issues/16610/comments | 1 | 2024-01-26T04:09:18Z | 2024-01-30T04:42:45Z | https://github.com/langchain-ai/langchain/issues/16610 | 2,101,566,693 | 16,610 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Example from https://api.python.langchain.com/en/stable/agents/langchain.agents.openai_assistant.base.OpenAIAssistantRunnable.html#langchain.agents.openai_assistant.base.OpenAIAssistantRunnable
```python
from langchain.agents import AgentExecutor
from langchain.agents.openai_assistant import OpenAIAssistantRunnable
from langchain.tools import E2BDataAnalysisTool
tools = [E2BDataAnalysisTool(api_key="...")]
agent = OpenAIAssistantRunnable.create_assistant(
name="langchain assistant e2b tool",
instructions="You are a personal math tutor. Write and run code to answer math questions.",
tools=tools,
model="gpt-4-1106-preview",
as_agent=True,
)
agent_executor = AgentExecutor(agent=agent, tools=tools)
agent_executor.invoke({"content": "What's 10 - 4 raised to the 2.7"})
```
### Error Message and Stack Trace (if applicable)
```
Argument of type "OpenAIAssistantRunnable" cannot be assigned to parameter "agent" of type "BaseSingleActionAgent | BaseMultiActionAgent" in function "__init__"
Type "OpenAIAssistantRunnable" cannot be assigned to type "BaseSingleActionAgent | BaseMultiActionAgent"
"OpenAIAssistantRunnable" is incompatible with "BaseSingleActionAgent"
"OpenAIAssistantRunnable" is incompatible with "BaseMultiActionAgent"Pylance[reportGeneralTypeIssues](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportGeneralTypeIssues)
(variable) agent: OpenAIAssistantRunnable
```
### Description
With `as_agent=True`, passing the `OpenAIAssistantRunnable` to `AgentExecutor` should be accepted; instead, Pylance reports it as incompatible with `BaseSingleActionAgent | BaseMultiActionAgent`, even though this is exactly the usage shown in the API reference example above.
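A stopgap that keeps the documented usage quiet under Pylance (just a sketch of a workaround, not a fix for the annotation itself):
```python
# workaround sketch: suppress the static error until AgentExecutor's `agent`
# annotation accepts OpenAIAssistantRunnable created with as_agent=True
agent_executor = AgentExecutor(agent=agent, tools=tools)  # type: ignore[arg-type]
```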
### System Info
Langchain:
```
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.16
```
Platform: linux
Python: 3.11.6 | `OpenAIAssistantRunnable` type error with `AgentExecutor` | https://api.github.com/repos/langchain-ai/langchain/issues/16606/comments | 11 | 2024-01-26T01:22:54Z | 2024-05-20T16:08:24Z | https://github.com/langchain-ai/langchain/issues/16606 | 2,101,425,183 | 16,606 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.docstore.document import Document
to_add = Document(page_content="text", metadata={"document_name": "test.pdf"})
ids = to_add.metadata["document_name"]
print("BEFORE INSERT", retriever.vectorstore._collection.count())
retriever.add_documents([to_add], ids=[ids])
print("AFTER INSERT", retriever.vectorstore._collection.count())
retriever.vectorstore._collection.delete(ids=[ids])
retriever.docstore.mdelete(ids=[ids])
print(retriever.vectorstore._collection.count())
print("AFTER DELETE", retriever.vectorstore._collection.count())
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[1], line 22
20 print("AFTER INSERT", retriever.vectorstore._collection.count())
21 retriever.vectorstore._collection.delete(ids=[ids])
---> 22 retriever.docstore.delete(ids=[ids])
23 print(retriever.vectorstore._collection.count())
24 print("AFTER DELETE", retriever.vectorstore._collection.count())
AttributeError: 'EncoderBackedStore' object has no attribute 'delete'
```
### Description
I added a document, checked the vectorstore size, deleted it, and nothing seems to be deleted.
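What I expected to need instead (a hedged sketch based on my reading of `ParentDocumentRetriever`, assuming its default `doc_id` metadata key; I have not confirmed this is the intended API):
```python
# hedged sketch: the ids passed to add_documents become the *parent* keys, while
# the Chroma entries are child chunks with their own generated ids, so delete the
# children by metadata filter and the parent text by its key
parent_id = "test.pdf"
retriever.vectorstore._collection.delete(where={"doc_id": parent_id})
retriever.docstore.mdelete([parent_id])
```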
### System Info
Chroma 0.4.22
Langchain 0.1.0
Lark 1.1.8
Python 3.11
Windows 10 | ChromaDB: Cannot delete document from ParentDocumentRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/16604/comments | 10 | 2024-01-26T00:44:37Z | 2024-06-08T16:09:30Z | https://github.com/langchain-ai/langchain/issues/16604 | 2,101,392,417 | 16,604 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```py
def determine_subcategory(main_category, keyword):
try:
with open('subcategory_mapping.json') as file:
json_data = json.load(file)
category_info = json_data.get(main_category)
if not category_info:
return "Category info not found"
# Retrieve the model ID from the category_info
model_id = category_info.get("model")
if not model_id:
return "Model ID not found in category info"
# Initialize the fine-tuned model specified in the subcategory mapping
parameters = {
"candidate_count": 1,
"max_output_tokens": 1024,
"temperature": 0.9,
"top_p": 1
}
model = TextGenerationModel.from_pretrained("text-bison@002")
model = model.get_tuned_model(model_id)
# Construct the prompt including the keyword
prompt_context = category_info["prompt_context"]
prompt = f"{prompt_context}\n\nKeywords: {keyword}"
# Invoke the fine-tuned model with the constructed prompt
response = model.predict(prompt, **parameters)
# Extract 'text' from the response
subcategory = response.text.strip() if response.text else "Response format error"
return subcategory
except Exception as e:
logger.error(f"Subcategory determination error: {e}")
return "Subcategory determination failed"
```
### Error Message and Stack Trace (if applicable)
[00:01:03] ERROR Subcategory determination error: 404 Endpoint `projects/1055022903754/locations/europe-west4/endpoints/4453747770167132160` not found.
### Description
When using the `VertexAIModelGarden` class to send requests to fine-tuned models on Vertex AI, the class is designed to target endpoints rather than directly to a model. However, for my use case, I need to send requests directly to a fine-tuned model URL. The current implementation seems to only allow sending requests to an endpoint, which does not fit the requirement.
### Steps to Reproduce
1. Instantiate the `VertexAIModelGarden` class with the project ID and endpoint ID.
2. Use the `predict` method to send a prompt to the model.
3. The request is sent to an endpoint URL rather than the fine-tuned model URL.
### Expected Behavior
I expect to be able to specify a fine-tuned model URL directly, similar to how it's done using the `TextGenerationModel` class from the `vertexai` package:
```python
import vertexai
from vertexai.language_models import TextGenerationModel
vertexai.init(project="my-project-id", location="my-location")
model = TextGenerationModel.from_pretrained("base-model-name")
model = model.get_tuned_model("projects/my-project-id/locations/my-location/models/my-model-id")
response = model.predict(prompt, **parameters)
```
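If the LangChain side supported something similar, I would expect usage roughly like this (a hedged sketch; I have not verified that the installed `VertexAI` wrapper exposes `tuned_model_name`):
```python
# hedged sketch: point the wrapper at the tuned model directly
# (assumes the VertexAI class exposes a tuned_model_name field)
from langchain_community.llms import VertexAI

llm = VertexAI(
    model_name="text-bison@002",
    tuned_model_name="projects/my-project-id/locations/my-location/models/my-model-id",
    temperature=0.9,
    max_output_tokens=1024,
)
print(llm.invoke("Keywords: example keyword"))
```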
### System Info
MacOS | Issue with Specifying Fine-Tuned Model Endpoints in VertexAIModelGarden | https://api.github.com/repos/langchain-ai/langchain/issues/16601/comments | 4 | 2024-01-26T00:14:14Z | 2024-05-03T16:06:25Z | https://github.com/langchain-ai/langchain/issues/16601 | 2,101,367,467 | 16,601 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
In Terminal
```
ollama run mistral
```
(new tab) In Terminal
```
litellm --model ollama/mistral
```
Open notebook
```
from langchain_community.chat_models import ChatLiteLLM
from langchain_core.messages import HumanMessage

ollama_chatlitellm = ChatLiteLLM(model="ollama/ollama2", api_base="http://127.0.0.1:8000", api_type="open_ai", api_key="")
messages = [
    HumanMessage(content="what model are you")
]
ollama_chatlitellm.invoke(messages)
```
### Error Message and Stack Trace (if applicable)
```
127.0.0.1:38742 - "POST /api/generate HTTP/1.1" 404
```
http://127.0.0.1:8000/chat/completions is the default LiteLLM endpoint.
### Description
* I am using LangChain's ChatLiteLLM to generate text from a local model that LiteLLM + Ollama is hosting.
* I expect text to be returned.
* I get a 404 error because the request goes to `/api/generate` instead of the proxy's `/chat/completions` endpoint.
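One possible workaround might be to talk to the proxy through its OpenAI-compatible endpoint instead (a hedged sketch, not verified; the placeholder API key is only because my local proxy does not require one):
```python
# sketch: go through the LiteLLM proxy's OpenAI-compatible /chat/completions route
from langchain_openai import ChatOpenAI

chat = ChatOpenAI(
    base_url="http://127.0.0.1:8000",
    api_key="not-needed-for-local-proxy",
    model="ollama/mistral",
)
chat.invoke("what model are you")
```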
### System Info
langchain==0.1.3
langchain-community==0.0.15
langchain-core==0.1.15
langchain-mistralai==0.0.3
langchain-openai==0.0.2.post1
platform = Ubuntu 20.04.06
python 3.11
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.11.6 (main, Oct 19 2023, 15:48:25) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.1.15
> langchain: 0.1.3
> langchain_community: 0.0.15
> langchain_mistralai: 0.0.3
> langchain_openai: 0.0.2.post1 | ChatLiteLLM is not compatible for LiteLLM, '/api/generate' is being added to the endpoint | https://api.github.com/repos/langchain-ai/langchain/issues/16594/comments | 1 | 2024-01-25T22:32:24Z | 2024-05-03T16:06:20Z | https://github.com/langchain-ai/langchain/issues/16594 | 2,101,271,090 | 16,594 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code:
```python
def creat_ai_search_new_agent(embeddings, llm, class_name_rich):
ai_search_endpoint = get_ai_search_endpoint()
ai_search_admin_key = get_ai_search_admin_key()
vector_store = AzureSearch(
azure_search_endpoint=ai_search_endpoint,
azure_search_key=ai_search_admin_key,
index_name=class_name_rich,
embedding_function=embeddings.embed_query,
content_key=content_key
)
"""Retriever that uses `Azure Cognitive Search`."""
azure_search_retriever = AzureSearchVectorStoreRetriever(
vectorstore=vector_store,
search_type=search_type,
k=k,
top=n
)
retriever_tool = create_retriever_tool(
azure_search_retriever,
"Retriever",
"Useful when you need to retrieve information from documents",
)
class Response(BaseModel):
"""Final response to the question being asked"""
answer: str = Field(description="The final answer to respond to the user")
sources: List[int] = Field(
description="List of page chunks that contain answer to the question. Only include a page chunk if it contains relevant information"
)
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant who retrieves information from documents"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
llm_with_tools = llm.bind(
functions=[
# The retriever tool
format_tool_to_openai_function(retriever_tool),
# Response schema
convert_pydantic_to_openai_function(Response),
]
)
try:
agent = (
{
"input": lambda x: x["input"],
# Format agent scratchpad from intermediate steps
"agent_scratchpad": lambda x: format_to_openai_function_messages(
x["intermediate_steps"]
),
}
| prompt
| llm_with_tools
| parse
)
agent_executor = AgentExecutor(tools=[retriever_tool], agent=agent, verbose=True, return_intermediate_steps=True)
except Exception as e:
print(e)
print("error instanciating the agent")
return agent_executor
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
This gives the warning `Warning: model not found. Using cl100k encoding.`
Does anyone have an idea where it comes from?
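My best guess (hedged, not verified) is that the warning comes from tiktoken falling back to `cl100k_base` because the model/deployment name on the LLM is not one tiktoken recognises; if so, something like this might silence it (the deployment name here is a placeholder of mine):
```python
# hedged sketch: give the token counter a model name tiktoken knows about
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="my-deployment",       # hypothetical deployment name
    model_name="gpt-35-turbo",
    tiktoken_model_name="gpt-3.5-turbo",    # used only for token counting
)
```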
### System Info
python version 3.10
langchain==0.1.1
openai==1.7.0 | Warning: model not found. Using cl100k encoding. | https://api.github.com/repos/langchain-ai/langchain/issues/16584/comments | 2 | 2024-01-25T17:43:56Z | 2024-01-25T19:39:50Z | https://github.com/langchain-ai/langchain/issues/16584 | 2,100,882,142 | 16,584 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.docstore.document import Document
text1 = """Outokumpu Annual report 2019 | Sustainability review 23 / 24 • For business travel: by estimated driven kilometers with emissions factors for the car, and for flights by CO2 eq. reports of the flight companies. Rental car emissions are included by the rental car company report. Upstream transport was assessed on data of environmental product declaration of 2019 but excluded from scope 3 emissions. The recycled content is calculated as the sum of pre and post consumer scrap related to crude steel production. Additionally, we report on the recycled content including all recycled metals from own treated waste streams entering the melt shop. Energy efficiency is defined as the sum of specific fuel and electricity energy of all processes calculated as energy consumption compared to the product output of that process. It covers all company productions: ferrochrome, melt shop, hot rolling and cold rolling processes. Used heat values and the consumption of energy are taken from supplier's invoices. Water withdrawal is measured for surface water, taken from municipal suppliers and estimated for rainwater amount. Waste is separately reported for mining and stainless production. In mining, amount of non-hazardous tailing sands is reported. For stainless production hazardous and non-hazardous wastes are reported as recycled, recovered and landfilled. Waste treated is counted as landfilled waste. Social responsibility Health and safety figures Health and safety figures reflect the scope of Outokumpu’s operations as they were in 2019. Safety indicators (accidents and preventive safety actions) are expressed per million hours worked (frequency). Safety indicators include Outokumpu employees, persons employed by a third party (contractor) or visitor accidents and preventive safety actions. A workplace accident is the direct result of a work-related activity and it has taken place during working hours at the workplace. Accident types • Lost time injury (LTI) is an accident that caused at least one day of sick leave (excluding the day of the injury or accident), as the World Steel Association defines it. One day of sick leave means that the injured person has not been able to return to work on their next scheduled period of working or any future working day if caused by an outcome of the original accident. Lost-day rate is defined as more than one calendar day absence from the day after the accident per million working hours. • Restricted work injury (RWI) does not cause the individual to be absent, but results in that person being restricted in their capabilities so that they are unable to undertake their normal duties. • Medically treated injury (MTI) has to be treated by a medical professional (doctor or nurse). • First aid treated injury (FTI), where the injury did not require medical care and was treated by a person himself/herself or by first aid trained colleague. • Total recordable injury (TRI) includes fatalities, LTIs, RWIs and MTIs, but FTIs are excluded. • All workplace accidents include total recordable injuries (TRI) and first aid treated injuries (FTI) Proactive safety actions Hazards refer to events, situations or actions that could have led to an accident, but where no injury occurred. Safety behavior observations (SBOs) are safety-based discussions between an observer and the person being observed. Other preventive safety action includes proactive measures. Sick-leave hours and absentee rate Sick-leave hours reported are total sick leave hours during a reporting period. 
Reporting units provide data on absence due to illness, injury and occupational diseases on a monthly basis. The absentee rate (%) includes the actual absentee hours lost expressed as a percentage of total hours scheduled. Total personnel costs This figure includes wages, salaries, bonuses, social costs or other personnel expenses, as well as fringe benefits paid and/ or accrued during the reporting period. Training costs Training costs include external training-related expenses such as participation fees. Wages, salaries and daily allowances for participants in training activities are not included, but the salaries of internal trainers are included. Training days per employee The number of days spent by an employee in training when each training day is counted as lasting eight hours. Bonuses A bonus is an additional payment for good performance. These figures are reported without social costs or fringe benefits. Personnel figures Rates are calculated using the total employee numbers at the end of the reporting period. The calculations follow the requirements of GRI Standards. The following calculation has been applied e.g. Hiring rate = New Hires / total number of permanent employees by year-end Average turnover rate = (Turnover + New Hires) / (total number of permanent employees by year-end × 2) Days lost due to strikes The number of days lost due to strikes is calculated by multiplying the number of Outokumpu employees who have been on strike by the number of scheduled working days lost. The day on which a strike starts is included. n Scope of the report"""
text2 = text1 + "a"
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1024,
chunk_overlap=0,
separators=["\n\n", "\n", " ", ""],
add_start_index=True,
)
new_passages = text_splitter.split_documents([Document(page_content=text1)])
for passage in new_passages:
passage.metadata['end_index'] = passage.metadata['start_index'] + len(passage.page_content)
print([(p.metadata['start_index'], p.metadata['end_index']) for p in new_passages])
>>> [(0, 1022), (1023, 2044), (2045, 3068), (3069, 4087), (4088, 5111), (4412, 4418)]
new_passages = text_splitter.split_documents([Document(page_content=text2)])
for passage in new_passages:
passage.metadata['end_index'] = passage.metadata['start_index'] + len(passage.page_content)
print([(p.metadata['start_index'], p.metadata['end_index']) for p in new_passages])
>>> [(0, 1022), (1023, 2044), (2045, 3068), (3069, 4087), (4088, 5111), (5112, 5119)]
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to use `RecursiveCharacterTextSplitter` with `add_start_index=True`, but I found some texts where the `start_index` is wrong. For example:
- given `text1` in the code, the 6th passage gets (4412, 4418), which falls inside the 5th passage's range (4088, 5111); this is wrong
- if I simply add a character to the `text1` string (i.e. `text2`), the 6th passage gets (5112, 5119), which is correct
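A workaround I'm using for now (a hedged sketch; it assumes `chunk_overlap=0`, so every chunk must start at or after the end of the previous one) is to recompute the offsets with a cursor that moves past each chunk:
```python
# hedged workaround: recompute offsets with a cursor that never looks backwards
cursor = 0
for p in new_passages:
    start = text1.find(p.page_content, cursor)
    p.metadata["start_index"] = start
    p.metadata["end_index"] = start + len(p.page_content)
    cursor = start + len(p.page_content)
```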
### System Info
langchain 0.0.334 with python 3.8 | [BUG] Inconsistent results with `RecursiveCharacterTextSplitter`'s `add_start_index=True` | https://api.github.com/repos/langchain-ai/langchain/issues/16579/comments | 3 | 2024-01-25T14:51:16Z | 2024-01-26T07:32:44Z | https://github.com/langchain-ai/langchain/issues/16579 | 2,100,560,847 | 16,579 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name)
```
### Error Message and Stack Trace (if applicable)
```
File "/data/.cache/pypoetry/virtualenvs/rag-FGF9eHht-py3.10/lib/python3.10/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for SelfHostedHuggingFaceEmbeddings
model_name
extra fields not permitted (type=value_error.extra)
```
### Description
I noticed a discrepancy in the docstring example: the parameter name is shown as model_name, but the actual field in the code is model_id. The fix would correct the docstring example to accurately reflect the parameter name used in the code. The corrected code snippet is as follows:
```python
from langchain_community.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_id = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_id=model_id, hardware=gpu)
```
The current source code:
```python
class SelfHostedHuggingFaceEmbeddings(SelfHostedEmbeddings):
"""HuggingFace embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the ``runhouse`` python package installed.
Example:
.. code-block:: python
from langchain_community.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)
"""
client: Any #: :meta private:
model_id: str = DEFAULT_MODEL_NAME
"""Model name to use."""
model_reqs: List[str] = ["./", "sentence_transformers", "torch"]
"""Requirements to install on hardware to inference the model."""
hardware: Any
"""Remote hardware to send the inference function to."""
model_load_fn: Callable = load_embedding_model
"""Function to load the model remotely on the server."""
load_fn_kwargs: Optional[dict] = None
"""Keyword arguments to pass to the model load function."""
inference_fn: Callable = _embed_documents
"""Inference function to extract the embeddings."""
```
This code indicates that the parameter should be `model_id` instead of `model_name`.
I'm willing to fix it by submitting a pull request. Would that be helpful, and should I proceed with preparing the PR?
### System Info
System Information
------------------
> OS: Linux
> OS Version: #93-Ubuntu SMP Tue Sep 5 17:16:10 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.15
> langchain: 0.1.0
> langchain_community: 0.0.10
> langchain_openai: 0.0.2.post1
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Fix code comment: Correct parameter name in example code | https://api.github.com/repos/langchain-ai/langchain/issues/16577/comments | 1 | 2024-01-25T13:16:57Z | 2024-05-02T16:06:14Z | https://github.com/langchain-ai/langchain/issues/16577 | 2,100,385,917 | 16,577 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I'm trying out this tutorial: https://python.langchain.com/docs/modules/model_io/prompts/example_selector_types/similarity
But I'm getting this error related to the Chroma vector store:
`vectorstore
instance of VectorStore expected (type=type_error.arbitrary_type; expected_arbitrary_type=VectorStore)`
What is the correct way to do this?
I already tried:
- changing the import to `from langchain_community.vectorstores.chroma import Chroma`
- changing the import to `from langchain_community.vectorstores import chroma`
- trying out different versions of langchain-community (0.0.15, 0.0.14, 0.0.13)
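For reference, the setup I am trying to get working (a hedged sketch of my reading of the tutorial; the `examples` list is just a placeholder, and this is the call that raises the VectorStore type error for me):
```python
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples, OpenAIEmbeddings(), Chroma, k=1
)
```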
### Idea or request for content:
Please describe the correct way to set up the semantic similarity example selector and the cause of my bug | DOC: Error in semantic similarity example selector documentation | https://api.github.com/repos/langchain-ai/langchain/issues/16570/comments | 5 | 2024-01-25T10:03:24Z | 2024-06-21T20:25:11Z | https://github.com/langchain-ai/langchain/issues/16570 | 2,100,010,570 | 16,570 |
[
"hwchase17",
"langchain"
] | ## Feature request
I'd like the option to pass the OpenAI API key to the `openai.OpenAI` client at runtime.
- ConfigurableField ([docs](https://python.langchain.com/docs/expression_language/how_to/configure)) already supports passing this variable to the client, and in fact allows me to change any config except `openai_api_key`.
- I suspect the original intent may have been to make it configurable: if `openai_api_key` is defined with a `configurable_field` there is no warning or Exception raised.
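For context, the runtime usage this is aiming to support looks roughly like the following sketch (hedged; the key value is obviously a placeholder, and the default construction still assumes `OPENAI_API_KEY` is set):
```python
# rough sketch of the per-request key usage the proposal would enable
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

llm = ChatOpenAI().configurable_fields(
    openai_api_key=ConfigurableField(id="openai_api_key")
)
llm.invoke(
    "hello",
    config={"configurable": {"openai_api_key": "sk-key-for-this-request"}},
)
```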
I can submit a PR; I'm already using the proposed change in my code.
## Motivation
There are many scenarios where I'd like to change the API key depending on the task I'm performing. This is especially true when using Langchain in the context of a microservice or API with a high volume of requests: building "cloned" modules or re-initializing modules is impractical.
## Your contribution
I can submit a PR in short order but am soliciting input here first.
The proposed change is to the validation function found here [(source)](https://github.com/langchain-ai/langchain/blob/2b2285dac0d6ae0f6b7c09c33882a0d5be26c078/libs/partners/openai/langchain_openai/chat_models/base.py#L344):
*Proposed change at the bottom*
```python
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
if values["n"] < 1:
raise ValueError("n must be at least 1.")
if values["n"] > 1 and values["streaming"]:
raise ValueError("n must be 1 when streaming.")
values["openai_api_key"] = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY"
)
# Check OPENAI_ORGANIZATION for backwards compatibility.
values["openai_organization"] = (
values["openai_organization"]
or os.getenv("OPENAI_ORG_ID")
or os.getenv("OPENAI_ORGANIZATION")
)
values["openai_api_base"] = values["openai_api_base"] or os.getenv(
"OPENAI_API_BASE"
)
values["openai_proxy"] = get_from_dict_or_env(
values,
"openai_proxy",
"OPENAI_PROXY",
default="",
)
client_params = {
"api_key": values["openai_api_key"],
"organization": values["openai_organization"],
"base_url": values["openai_api_base"],
"timeout": values["request_timeout"],
"max_retries": values["max_retries"],
"default_headers": values["default_headers"],
"default_query": values["default_query"],
"http_client": values["http_client"],
}
##### PROPOSAL: REMOVE MARKED IF STATEMENTS FROM THIS CODE #####
if not values.get("client"): # <--- REMOVE
values["client"] = openai.OpenAI(**client_params).chat.completions
if not values.get("async_client"): # <--- REMOVE
values["async_client"] = openai.AsyncOpenAI(
**client_params
).chat.completions
##### END PROPOSAL #####
```
This would require no change in user behavior (putting OPENAI_API_KEY in the environment variables still works).
I'm not sure why these IF checks are here. If they're necessary to avoid re-defining `openai.OpenAI`, I'd suggest the benefit of a dynamic api key outweighs the cost of re-instantiation (and could be solved regardless by a quick check on whether the client_params changed).
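If avoiding re-instantiation is the concern, a simple guard could keep the current behavior in the common case. This is a hypothetical sketch; `_cached_client_params` is a made-up key, not an existing field:
```python
# Hypothetical: rebuild the clients only when the relevant params actually change
if values.get("_cached_client_params") != client_params:
    values["client"] = openai.OpenAI(**client_params).chat.completions
    values["async_client"] = openai.AsyncOpenAI(**client_params).chat.completions
    values["_cached_client_params"] = client_params
```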
Thanks for any input! | Proposal: Make OpenAI api key configurable at runtime | https://api.github.com/repos/langchain-ai/langchain/issues/16567/comments | 12 | 2024-01-25T09:15:09Z | 2024-07-11T08:24:09Z | https://github.com/langchain-ai/langchain/issues/16567 | 2,099,921,487 | 16,567 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Below is `langchain.libs.partners.anthropic.chat_models`.
```python
import os
from typing import Any, AsyncIterator, Dict, Iterator, List, Optional, Tuple
import anthropic
from langchain_core.callbacks import (
AsyncCallbackManagerForLLMRun,
CallbackManagerForLLMRun,
)
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.messages import (
AIMessage,
AIMessageChunk,
BaseMessage,
)
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.pydantic_v1 import Field, SecretStr, root_validator
from langchain_core.utils import convert_to_secret_str
_message_type_lookups = {"human": "user", "assistant": "ai"}
def _format_messages(messages: List[BaseMessage]) -> Tuple[Optional[str], List[Dict]]:
"""Format messages for anthropic."""
"""
[
{
"role": _message_type_lookups[m.type],
"content": [_AnthropicMessageContent(text=m.content).dict()],
}
for m in messages
]
"""
system = None
formatted_messages = []
for i, message in enumerate(messages):
if not isinstance(message.content, str):
raise ValueError("Anthropic Messages API only supports text generation.")
if message.type == "system":
if i != 0:
raise ValueError("System message must be at beginning of message list.")
system = message.content
else:
formatted_messages.append(
{
"role": _message_type_lookups[message.type],
"content": message.content,
}
)
return system, formatted_messages
```
### Error Message and Stack Trace (if applicable)
anthropic.BadRequestError: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'messages: Unexpected role "ai". Allowed roles are "user" or "assistant"'}}
### Description
The mapping for assistant messages is reversed: it should map the LangChain message type `ai` to Anthropic's `assistant` role, but the lookup dict instead maps `assistant` -> `ai`.
`langchain_core.messages.ai.AIMessage.type` is `ai`
[anthropic](https://github.com/anthropics/anthropic-sdk-python/blob/7177f3a71f940d9f9842063a8198b7c3e92715dd/src/anthropic/types/beta/message_param.py#L13)
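A minimal sketch of what I'd expect the lookup to be instead (just the dict; I haven't run the full chat model with this change):
```python
# Map LangChain message types to the roles the Anthropic Messages API accepts
_message_type_lookups = {"human": "user", "ai": "assistant"}
```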
### System Info
```
langchain==0.1.0
langchain-anthropic==0.0.1.post1
langchain-community==0.0.11
langchain-core==0.1.8
``` | Issue : with _format_messages function in Langchain_Anthropic | https://api.github.com/repos/langchain-ai/langchain/issues/16561/comments | 1 | 2024-01-25T06:47:19Z | 2024-01-26T00:58:46Z | https://github.com/langchain-ai/langchain/issues/16561 | 2,099,682,967 | 16,561 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code :
```python
from langchain_community.vectorstores.chroma import Chroma
from langchain_community.embeddings.openai import OpenAIEmbeddings
from dotenv import load_dotenv
load_dotenv()
DBPATH = './tmpdir/'
print("[current db status]")
chroma1 = Chroma(collection_name='tmp1',
embedding_function=OpenAIEmbeddings(),
persist_directory=DBPATH)
chroma2 = Chroma(collection_name='tmp2',
embedding_function=OpenAIEmbeddings(),
persist_directory=DBPATH)
print("tmp1 : ", chroma1.get()['documents'])
print("tmp2 : ", chroma2.get()['documents'])
print("[add texts]")
chroma1 = Chroma.from_texts(texts=['aaaaa', 'bbbbb'],
collection_name='tmp1',
embedding=OpenAIEmbeddings(),
persist_directory=DBPATH)
chroma2 = Chroma.from_texts(texts=['11111', '22222'],
collection_name='tmp2',
embedding=OpenAIEmbeddings(),
persist_directory=DBPATH)
chroma1.persist()
chroma2.persist()
print("tmp1 : ", chroma1.get()['documents'])
print("tmp2 : ", chroma2.get()['documents'])
print("[reload db]")
chroma1 = None
chroma2 = None
chroma1 = Chroma(collection_name='tmp1',
embedding_function=OpenAIEmbeddings(),
persist_directory=DBPATH)
chroma2 = Chroma(collection_name='tmp2',
embedding_function=OpenAIEmbeddings(),
persist_directory=DBPATH)
print("tmp1 : ", chroma1.get()['documents'])
print("tmp2 : ", chroma2.get()['documents'])
```
### What I am doing:
I want to create multiple collections in the same persistent directory.
### What is currently happening:
Running the code above gives this output:
```bash
[current db status]
tmp1 : []
tmp2 : []
[add texts]
tmp1 : ['aaaaa', 'bbbbb']
tmp2 : ['11111', '22222']
[reload db]
tmp1 : []
tmp2 : ['11111', '22222']
```
### What I expect:
I expect the results under [reload db] to be the same as the results under [add texts].
But the tmp1 collection has no saved texts after I reset the client objects (`chroma1 = None`, `chroma2 = None`) and recreate them, as you can see in the printed results.
I don't understand why this is happening.
Any help would be much appreciated.
### System Info
python==3.9.18
langchain==0.1.0
langchain-community==0.0.10
langchain-core==0.1.8
chromadb==0.3.26 | chromadb : Added texts in multiple collections within single persistent directory, but only one collection is working | https://api.github.com/repos/langchain-ai/langchain/issues/16558/comments | 2 | 2024-01-25T05:13:37Z | 2024-01-25T07:53:45Z | https://github.com/langchain-ai/langchain/issues/16558 | 2,099,585,023 | 16,558 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
To reproduce, simply open a dev container [here](https://github.com/langchain-ai/langchain/tree/master/.devcontainer#vs-code-dev-containers).
### Two problems:
1. The dependency azure-ai-vision [has been yanked](https://pypi.org/project/azure-ai-vision/) from PyPI and replaced by [another MS package](https://pypi.org/project/azure-ai-vision-imageanalysis/1.0.0b1/). This change is not reflected in `pyproject.toml`.
2. Upon updating the package reference, Poetry dependency resolution takes hours before it ultimately crashes.
### Error Message and Stack Trace (if applicable)
### The first problem is here:
```
#13 [langchain langchain-dev-dependencies 5/5] RUN poetry install --no-interaction --no-ansi --with dev,test,docs
2024-01-25 02:25:15.795Z: #13 1.180 Updating dependencies
2024-01-25 02:25:15.945Z: #13 1.180 Resolving dependencies...
2024-01-25 02:25:21.202Z: #13 6.587
#13 6.587 Because langchain depends on azure-ai-vision (^0.11.1b1) which doesn't match any versions, version solving failed.
2024-01-25 02:25:21.653Z: #13 ERROR: process "/bin/sh -c poetry install --no-interaction --no-ansi --with dev,test,docs" did not complete successfully: exit code: 1
2024-01-25 02:25:21.750Z: ------
> [langchain langchain-dev-dependencies 5/5] RUN poetry install --no-interaction --no-ansi --with dev,test,docs:
1.180 Updating dependencies
1.180 Resolving dependencies...
6.587
6.587 Because langchain depends on azure-ai-vision (^0.11.1b1) which doesn't match any versions, version solving failed.
```
### The full stack trace is 1.5M lines (poetry solving dependencies in verbose mode), but the final message error is:
*(You may notice the poetry command is missing `--with dev,test,docs`. This is because I was experimenting with different installs to see if one would solve. The outcome is the same with or without.)*
```
failed to solve: process "/bin/sh -c poetry install -vvv --no-interaction --no-ansi --no-cache" did not complete successfully: exit code: 1
[2024-01-25T01:29:33.420Z] Stop (11249526 ms): Run: docker compose --project-name langchain_devcontainer -f /workspaces/langchain/.devcontainer/docker-compose.yaml -f /tmp/devcontainercli-root/docker-compose/docker-compose.devcontainer.build-1706134923893.yml build
[2024-01-25T01:29:34.801Z] Error: Command failed: docker compose --project-name langchain_devcontainer -f /workspaces/langchain/.devcontainer/docker-compose.yaml -f /tmp/devcontainercli-root/docker-compose/docker-compose.devcontainer.build-1706134923893.yml build
[2024-01-25T01:29:34.801Z] at pw (/root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js:427:522)
[2024-01-25T01:29:34.801Z] at runMicrotasks (<anonymous>)
[2024-01-25T01:29:34.801Z] at processTicksAndRejections (node:internal/process/task_queues:96:5)
[2024-01-25T01:29:34.801Z] at async L$ (/root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js:427:2493)
[2024-01-25T01:29:34.802Z] at async N$ (/root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js:409:3165)
[2024-01-25T01:29:34.802Z] at async tAA (/root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js:479:3833)
[2024-01-25T01:29:34.802Z] at async CC (/root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js:479:4775)
[2024-01-25T01:29:34.802Z] at async NeA (/root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js:612:11107)
[2024-01-25T01:29:34.802Z] at async MeA (/root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js:612:10848)
[2024-01-25T01:29:34.942Z] Stop (11253362 ms): Run in container: node /root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js up --container-session-data-folder /tmp/devcontainers-7edf6f65-e5f1-47df-8de6-ecd2483665631706134915493 --workspace-folder /workspaces/langchain --workspace-mount-consistency cached --id-label vsch.local.repository=https://github.com/cvansteenburg/langchain --id-label vsch.local.repository.volume=langchain-d0a07d0e50de76837e566b51dfd52879223a47f0cf6c9249f8998de5ea549f4c --id-label vsch.local.repository.folder=langchain --id-label devcontainer.config_file=/workspaces/langchain/.devcontainer/devcontainer.json --log-level debug --log-format json --config /workspaces/langchain/.devcontainer/devcontainer.json --override-config /tmp/devcontainer-b183d847-3f84-4dfc-bf18-cb61dde92812.json --default-user-env-probe loginInteractiveShell --remove-existing-container --mount type=volume,source=langchain-d0a07d0e50de76837e566b51dfd52879223a47f0cf6c9249f8998de5ea549f4c,target=/workspaces,external=true --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default off --mount-workspace-git-root --terminal-columns 178 --terminal-rows 15
[2024-01-25T01:29:34.945Z] Exit code 1
[2024-01-25T01:29:34.948Z] Start: Run: docker rm -f d465e5855c617b703922f6eafccdf2bde11f6140c0baaa5e1e9f860bc70973ac
[2024-01-25T01:29:34.964Z] Command failed: node /root/.vscode-remote-containers/dist/dev-containers-cli-0.327.0/dist/spec-node/devContainersSpecCLI.js up --container-session-data-folder /tmp/devcontainers-7edf6f65-e5f1-47df-8de6-ecd2483665631706134915493 --workspace-folder /workspaces/langchain --workspace-mount-consistency cached --id-label vsch.local.repository=https://github.com/cvansteenburg/langchain --id-label vsch.local.repository.volume=langchain-d0a07d0e50de76837e566b51dfd52879223a47f0cf6c9249f8998de5ea549f4c --id-label vsch.local.repository.folder=langchain --id-label devcontainer.config_file=/workspaces/langchain/.devcontainer/devcontainer.json --log-level debug --log-format json --config /workspaces/langchain/.devcontainer/devcontainer.json --override-config /tmp/devcontainer-b183d847-3f84-4dfc-bf18-cb61dde92812.json --default-user-env-probe loginInteractiveShell --remove-existing-container --mount type=volume,source=langchain-d0a07d0e50de76837e566b51dfd52879223a47f0cf6c9249f8998de5ea549f4c,target=/workspaces,external=true --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default off --mount-workspace-git-root --terminal-columns 178 --terminal-rows 15
[2024-01-25T01:29:34.965Z] Exit code 1
[2024-01-25T01:29:35.311Z] Stop (11256492 ms): Run in container: /bin/sh
[2024-01-25T01:29:35.311Z] Container server terminated (code: 137, signal: null).
[2024-01-25T01:29:35.312Z] Stop (11256684 ms): Run in container: /bin/sh
[2024-01-25T01:29:35.647Z] Stop (699 ms): Run: docker rm -f d465e5855c617b703922f6eafccdf2bde11f6140c0baaa5e1e9f860bc70973ac
[2024-01-25T01:30:34.625Z] Start: Run: docker volume ls -q
[2024-01-25T01:30:34.707Z] Stop (82 ms): Run: docker volume ls -q
[2024-01-25T01:30:34.785Z] Start: Run: docker version --format {{.Server.APIVersion}}
[2024-01-25T01:30:34.953Z] Stop (168 ms): Run: docker version --format {{.Server.APIVersion}}
[2024-01-25T01:30:34.953Z] 1.43
```
### Description
See above.
### System Info
VSCode and docker on MacOS 14.2.1 and on Github Codespaces. | Devcontainer hangs on Poetry dependency resolution | https://api.github.com/repos/langchain-ai/langchain/issues/16552/comments | 6 | 2024-01-25T02:42:08Z | 2024-03-29T16:25:27Z | https://github.com/langchain-ai/langchain/issues/16552 | 2,099,455,060 | 16,552 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_core.retrievers import BaseRetriever
from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from typing import List
from langchain_core.runnables import RunnablePassthrough
class CustomRetriever(BaseRetriever):
# the signature is slightly modified with the addition of `spam`
def _get_relevant_documents(
self, query: str, *, spam, run_manager: CallbackManagerForRetrieverRun
) -> List[Document]:
return [Document(page_content=query['query'])]
retriever = CustomRetriever()
# Binding a spam attribute,
spam = {'query': RunnablePassthrough()} | retriever.bind(spam=2)
spam.invoke("bar")
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[33], [line 17](vscode-notebook-cell:?execution_count=33&line=17)
[14](vscode-notebook-cell:?execution_count=33&line=14) retriever = CustomRetriever()
[16](vscode-notebook-cell:?execution_count=33&line=16) spam = {'query': RunnablePassthrough()} | retriever.bind(spam=2)
---> [17](vscode-notebook-cell:?execution_count=33&line=17) spam.invoke("bar")
TypeError: BaseRetriever.invoke() got an unexpected keyword argument 'spam'
```
### Description
# Details
`.bind()` does not seem to be working well with `Retrievers`.
`BaseRetriever.invoke()` does not accept arbitrary keyword arguments, so runtime binding on a retriever raises a `TypeError`.
A possible workaround is to use a `RunnableLambda` that wraps `retriever.get_relevant_documents` and dispatches the bound arguments to it; a rough sketch follows.
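This is untested and only meant to illustrate the idea, reusing the `CustomRetriever` from the example above:
```python
from langchain_core.runnables import RunnableLambda, RunnablePassthrough

# Dispatch the extra argument manually instead of relying on .bind()
bound_retriever = RunnableLambda(
    lambda x: retriever.get_relevant_documents(x, spam=2)
)
chain = {"query": RunnablePassthrough()} | bound_retriever
chain.invoke("bar")
```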
# How to reproduce
Instantiate a `CustomRetriever` as per [Langchain's documentation](https://python.langchain.com/docs/modules/data_connection/retrievers/) and `bind` an argument to it.
# What to expect
`bind` should dispatch bound arguments to retrievers too.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Apr 2 22:23:49 UTC 2021
> Python Version: 3.11.6 (main, Nov 1 2023, 14:10:18) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.1.15
> langchain: 0.1.0
> langchain_community: 0.0.12
> langchain_openai: 0.0.2.post1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | .bind(foo='foo') does not work for Retrievers | https://api.github.com/repos/langchain-ai/langchain/issues/16547/comments | 4 | 2024-01-24T23:46:32Z | 2024-05-02T16:06:04Z | https://github.com/langchain-ai/langchain/issues/16547 | 2,099,312,161 | 16,547 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
import os
os.unsetenv("LANGCHAIN_TRACING_V2")
os.unsetenv("LANGCHAIN_ENDPOINT")
os.unsetenv("LANGCHAIN_API_KEY")
os.unsetenv("LANGCHAIN_PROJECT")
### Error Message and Stack Trace (if applicable)
```
logger.warning(
Message: 'Unable to load requested LangChainTracer. To disable this warning, unset the LANGCHAIN_TRACING_V2 environment variables.'
Arguments: (LangSmithUserError('API key must be provided when using hosted LangSmith API'),)
```
### Description
I have already unset the `LANGCHAIN_TRACING_V2` variable, but I still get this warning, and it makes it harder to trace errors from other messages.
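For completeness, a tiny check that could be added to the repro (just a sketch) to confirm what the current process still sees after the calls above:
```python
import os

# Print what the process environment still reports for the LangSmith variables
for var in ("LANGCHAIN_TRACING_V2", "LANGCHAIN_ENDPOINT", "LANGCHAIN_API_KEY", "LANGCHAIN_PROJECT"):
    print(var, "=", os.environ.get(var))
```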
### System Info
langchain==0.1.3
langchain-community==0.0.15
langchain-core==0.1.15
langchain-openai==0.0.3 | LangSmith: Warning message does not disappear when unsetting LANGCHAIN_TRACING_V2 | https://api.github.com/repos/langchain-ai/langchain/issues/16537/comments | 5 | 2024-01-24T21:11:23Z | 2024-01-30T21:32:36Z | https://github.com/langchain-ai/langchain/issues/16537 | 2,099,114,477 | 16,537 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Hi everyone!
If you have experience with LangChain and think you have enough expertise to help other community members and help support the project, please consider spending a bit of time answering discussion questions: https://github.com/langchain-ai/langchain/discussions :parrot: | For experienced users: Help with discussion questions :parrot: | https://api.github.com/repos/langchain-ai/langchain/issues/16534/comments | 4 | 2024-01-24T19:45:37Z | 2024-07-13T16:04:56Z | https://github.com/langchain-ai/langchain/issues/16534 | 2,098,965,186 | 16,534 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/tools.py#L105
```python
class BaseTool(RunnableSerializable[Union[str, Dict], Any]):
"""Interface LangChain tools must implement."""
def __init_subclass__(cls, **kwargs: Any) -> None:
"""Create the definition of the new tool class."""
super().__init_subclass__(**kwargs)
args_schema_type = cls.__annotations__.get("args_schema", None)
if args_schema_type is not None:
if args_schema_type is None or args_schema_type == BaseModel:
# Throw errors for common mis-annotations.
# TODO: Use get_args / get_origin and fully
# specify valid annotations.
typehint_mandate = """
class ChildTool(BaseTool):
...
args_schema: Type[BaseModel] = SchemaClass
..."""
name = cls.__name__
raise SchemaAnnotationError(
f"Tool definition for {name} must include valid type annotations"
f" for argument 'args_schema' to behave as expected.\n"
f"Expected annotation of 'Type[BaseModel]'"
f" but got '{args_schema_type}'.\n"
f"Expected class looks like:\n"
f"{typehint_mandate}"
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
This code first checks if args_schema_type is not None, and then immediately checks if it is None, which doesn't make logical sense.
It should be:
```
if args_schema_type is not None and args_schema_type == BaseModel:
```
Also, perhaps the TODO should have an issue associated with it, though I'm not sure I have enough context to create the issue.
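In case it helps frame that TODO, here is a rough sketch of what a fuller check might look like (a purely hypothetical helper, not existing code):
```python
from typing import get_args, get_origin

from pydantic import BaseModel


def _is_valid_args_schema_annotation(annotation: object) -> bool:
    # Accept annotations of the form Type[SomeBaseModelSubclass]
    if get_origin(annotation) is not type:
        return False
    args = get_args(annotation)
    return bool(args) and isinstance(args[0], type) and issubclass(args[0], BaseModel)
```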
### System Info
```
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.10
> langchain: 0.0.353
> langchain_community: 0.0.12
> langserve: Not Found
``` | BaseTool __init_subclass__ has contradictory conditional | https://api.github.com/repos/langchain-ai/langchain/issues/16528/comments | 4 | 2024-01-24T19:04:43Z | 2024-01-25T01:57:02Z | https://github.com/langchain-ai/langchain/issues/16528 | 2,098,902,232 | 16,528 |
[
"hwchase17",
"langchain"
] | It goes something like this:
This is why `BaseChatMemory.chat_memory` doesn't prune: https://github.com/langchain-ai/langchain/issues/14957#issuecomment-1907951114
So I made some monkey patches to fix the problem temporarily.
The following demo shows how to use both `history` and `memory` in LCEL.
Because there are so many modules involved, I'd like the core contributors to help me refine this idea.
```python
import json
from typing import Union, Any, List, Optional
from langchain.memory.chat_memory import BaseChatMemory
from langchain_community.chat_message_histories import RedisChatMessageHistory
from langchain_core.load import load
from langchain_core.messages import BaseMessage, message_to_dict
from langchain_core.runnables import Runnable, RunnableConfig
from langchain_core.runnables.history import (
RunnableWithMessageHistory,
MessagesOrDictWithMessages,
GetSessionHistoryCallable
)
from langchain_core.tracers.schemas import Run
# Imports used by the demo in the __main__ block below
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryBufferMemory
class MemoryList(list):
def __init__(self, *args, history=None, **kwargs):
self.__history: RedisChatMessageHistory = history
super().__init__(*args, **kwargs)
def pop(self, __index=-1):
if __index == 0:
self.__history.redis_client.rpop(self.__history.key)
elif __index == -1:
self.__history.redis_client.lpop(self.__history.key)
else:
raise IndexError("Redis doesn't support pop by index.")
return super().pop(__index)
def clear(self):
self.__history.clear()
super().clear()
class RedisChatMessageHistoryFixed(RedisChatMessageHistory):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
msgs = super().messages.copy()
self._messages: MemoryList = MemoryList(msgs, history=self)
@property
def messages(self) -> List[BaseMessage]: # type: ignore
msgs = super().messages.copy()
self._messages: MemoryList = MemoryList(msgs, history=self)
return self._messages
@messages.setter
def messages(self, msgs):
self._messages.clear()
if msgs:
self.redis_client.lpush(self.key, *[json.dumps(message_to_dict(msg)) for msg in msgs])
class RunnableWithMessageHistoryWithMemory(RunnableWithMessageHistory):
memory: Optional[BaseChatMemory] = None
def __init__(
self,
runnable: Runnable[
MessagesOrDictWithMessages,
Union[str, BaseMessage, MessagesOrDictWithMessages]
],
get_session_history: GetSessionHistoryCallable,
memory: BaseChatMemory = None,
**kwargs: Any
):
super().__init__(runnable, get_session_history, **kwargs)
if memory:
self.memory = memory
self.memory.input_key = self.input_messages_key
self.memory.output_key = self.output_messages_key
def _enter_history(self, input: Any, config: RunnableConfig) -> List[BaseMessage]:
hist = config["configurable"]["message_history"]
if not isinstance(self.memory.chat_memory, RedisChatMessageHistoryFixed):
self.memory.chat_memory = hist
# return only historic messages
if self.history_messages_key:
# Some of the 'BaseChatMemory' pruning features are in `load_memory_variables()`,
# such as `ConversationSummaryBufferMemory`.
# So we should extract the `messages` from 'load_memory_variables()'.
messages = self.memory.load_memory_variables({})[self.history_messages_key].copy()
hist.messages = messages
return messages
# return all messages
else:
input_val = (
input if not self.input_messages_key else input[self.input_messages_key]
)
return hist.messages.copy() + self._get_input_messages(input_val)
def _exit_history(self, run: Run, config: RunnableConfig) -> None:
hist = config["configurable"]["message_history"]
if not isinstance(self.memory.chat_memory, RedisChatMessageHistoryFixed):
self.memory.chat_memory = hist
# Get the input messages
inputs = load(run.inputs)
input_val = inputs[self.input_messages_key or "input"]
input_messages = self._get_input_messages(input_val)
# If historic messages were prepended to the input messages, remove them to
# avoid adding duplicate messages to history.
if not self.history_messages_key:
historic_messages = config["configurable"]["message_history"].messages
input_messages = input_messages[len(historic_messages):]
# Get the output messages
output_val = load(run.outputs)
output_messages = self._get_output_messages(output_val)
messages = zip(input_messages, output_messages)
# `BaseChatMemory.save_context()` will call `add_message()` and `prune()`.
# `RunnableWithMessageHistory` just call the `add_message()`.
for i, o in messages:
self.memory.save_context(
{self.input_messages_key or 'input': i.content},
{self.output_messages_key or 'output': o.content}
)
if __name__ == '__main__':
REDIS_URL = ...
prompt = ChatPromptTemplate.from_messages(
[
("system", 'You are a helpful assistant.'),
MessagesPlaceholder(variable_name="history"),
("human", "{question}"),
]
)
model = ChatOpenAI(
model="gpt-3.5-turbo",
)
chain = prompt | model
    chain_with_history = RunnableWithMessageHistoryWithMemory(
chain,
lambda session_id: RedisChatMessageHistoryFixed(session_id, url=REDIS_URL),
memory=ConversationSummaryBufferMemory(
llm=model,
memory_key="history",
return_messages=True,
max_token_limit=2000
),
input_messages_key="question",
history_messages_key="history",
)
def chat(question):
res = chain_with_history.stream(
{"question": question},
config={"configurable": {"session_id": 'test'}},
)
for message in res:
print(message.content, end='')
while _question := input('human:'):
chat(_question)
print()
```
| A monkey patch demo to use `memory` with `history` in LCEL | https://api.github.com/repos/langchain-ai/langchain/issues/16525/comments | 4 | 2024-01-24T18:21:27Z | 2024-06-08T16:09:25Z | https://github.com/langchain-ai/langchain/issues/16525 | 2,098,839,256 | 16,525 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
Attempting to initialize PineconeConnected
```
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from langchain.schema import Document
class PineconeConnected():
def __init__(self, index_name: str, pinecone_api_key: str, pinecone_env: str, openai_key: str):
embeddings = OpenAIEmbeddings(openai_api_key=openai_key)
pinecone.init(api_key=pinecone_api_key)
self.vector_db = Pinecone.from_existing_index(index_name, embeddings) # VectorStore object with the reference + Pinecone index loaded
  def query(self, query: str, book_title=None): ...
```
### Description
When initializing the PineconeConnected class, I get this error:
*Please note that this was previously working; pinecone-client updated its package a few days ago, which is why this integration via `init` no longer works.*
```
AttributeError: init is no longer a top-level attribute of the pinecone package.
Please create an instance of the Pinecone class instead.
Example:
import os
from pinecone import Pinecone, ServerlessSpec
pc = Pinecone(
api_key=os.environ.get("PINECONE_API_KEY")
)
# Now do stuff
if 'my_index' not in pc.list_indexes().names():
pc.create_index(
name='my_index',
dimension=1536,
metric='euclidean',
spec=ServerlessSpec(
cloud='aws',
region='us-west-2'
)
)
```
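For reference, this is the direction the error message points to. A minimal sketch of how my `__init__` might change (untested, and I'm not sure yet how it interacts with LangChain's `Pinecone` vector store wrapper):
```python
from pinecone import Pinecone as PineconeClient  # pinecone-client >= 3.x

pc = PineconeClient(api_key=pinecone_api_key)  # replaces pinecone.init(...)
index = pc.Index(index_name)  # assumption: the LangChain wrapper would need this index/client handed to it somehow
```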
### System Info
fastapi-poe==0.0.24
pydantic>=2
openai==0.28.1
langchain==0.0.348
pinecone-client==3.0.1
tiktoken
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | pinecone.init is no longer a top-level attribute of the pinecone package | https://api.github.com/repos/langchain-ai/langchain/issues/16513/comments | 14 | 2024-01-24T15:50:04Z | 2024-08-08T16:57:01Z | https://github.com/langchain-ai/langchain/issues/16513 | 2,098,565,291 | 16,513 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Hi everybody, I'm new here and I have this issue. Thanks in advance for any help.
```python
shared_memory = ConversationBufferMemory(memory_key="chat_history", input_key='question', return_messages=True, output_key='answer')
system_message = """
Dada la siguiente conversación y el mensaje del usuario,
vas a asesorar al mismo de la mejor manera posible en base a tus propios conocimientos
las respuestas tienen que relacionarce con el contexto dado y
si no sabes la respuesta no inventes una.
Mantener el mismo idioma que los mensajes del usuario."
"""
llm = ChatOpenAI(
model_name="gpt-3.5-turbo-1106",
temperature=0.5, streaming=True
)
custom_template = """Dada la siguiente conversación y el mensaje del usuario, \
vas a asesorar al mismo de la mejor manera posible en base a tus propios conocimientos \
las respuestas tienen que relacionarce con el contexto dado y \
si no sabes la respuesta no inventes una. \
Mantener el mismo idioma que los mensajes del usuario..
Contexto:
{context}
Mensaje de usuario: {question}
Pregunta o instruccion independiente:"""
custom_prompt = PromptTemplate(
template=custom_template,
input_variables=["context", "question"],
)
qa1 = ConversationalRetrievalChain.from_llm(
llm=llm,
verbose=True,
memory = shared_memory,
return_generated_question=True,
return_source_documents = True,
retriever=retriever,
combine_docs_chain_kwargs={"prompt": custom_prompt},
)
result = qa1({"question":"Hola chat, quiero saber sobre la ley de propiedad intelectual"})
# Output returned by the chain:
# La ley de propiedad intelectual establece la protección de obras científicas, literarias y artísticas, así como también de programas de computación, compilaciones de datos, obras dramáticas, cinematográficas, entre otras. También establece la protección para autores extranjeros, siempre y cuando cumplan con las formalidades establecidas en su país de origen.
busqueda_local = [Tool.from_function(
name="doc_search",
func=qa1,
description= "Herramienta util para cuando hay que buscar informacion sobre documentacion propia"
)]
agent = initialize_agent(
agent = AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
tools= tools,
llm= llm,
memory=shared_memory,
return_source_documents=True,
return_intermediate_steps=True,
handle_parsing_errors=True,
agent_kwargs={"system_message": system_message}
)
query = "Hola, quiero informacion sobre los datos personales, la ley"
result = agent({"question":"Hola qiero saber sobre la ley de datos personales"})
print("Pregunta: ", query)
print("Respuesta: ",result["answer"]).
```
AND THIS IS THE ERROR:
```
ValueError Traceback (most recent call last)
Cell In[115], [line 2](vscode-notebook-cell:?execution_count=115&line=2)
[1](vscode-notebook-cell:?execution_count=115&line=1) query = "Hola, quiero informacion sobre los datos personales, la ley"
----> [2](vscode-notebook-cell:?execution_count=115&line=2) result = agent({"questio":"Hola qiero saber sobre la ley de datos personales"})
[3](vscode-notebook-cell:?execution_count=115&line=3) print("Pregunta: ", query)
[4](vscode-notebook-cell:?execution_count=115&line=4) print("Respuesta: ",result["answer"])
File [c:\Users\nicos\anaconda3\envs\NicoCH\lib\site-packages\langchain_core\_api\deprecation.py:145](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain_core/_api/deprecation.py:145), in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
[143](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain_core/_api/deprecation.py:143) warned = True
[144](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain_core/_api/deprecation.py:144) emit_warning()
--> [145](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain_core/_api/deprecation.py:145) return wrapped(*args, **kwargs)
File [c:\Users\nicos\anaconda3\envs\NicoCH\lib\site-packages\langchain\chains\base.py:363](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain/chains/base.py:363), in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
[331](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain/chains/base.py:331) """Execute the chain.
[332](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain/chains/base.py:332)
[333](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain/chains/base.py:333) Args:
(...)
[354](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain/chains/base.py:354) `Chain.output_keys`.
[355](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain/chains/base.py:355) """
[356](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain/chains/base.py:356) config = {
[357](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain/chains/base.py:357) "callbacks": callbacks,
[358](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain/chains/base.py:358) "tags": tags,
[359](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain/chains/base.py:359) "metadata": metadata,
[360](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain/chains/base.py:360) "run_name": run_name,
...
[262](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain/chains/base.py:262) missing_keys = set(self.input_keys).difference(inputs)
[263](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain/chains/base.py:263) if missing_keys:
--> [264](file:///C:/Users/nicos/anaconda3/envs/NicoCH/lib/site-packages/langchain/chains/base.py:264) raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {'input'}
```
### Idea or request for content:
I think the problem is with the final input passed to the agent; it may expect a plain string, but I'm not sure. | DOC: <Problem with imput in agent: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/16511/comments | 9 | 2024-01-24T12:51:58Z | 2024-05-01T16:07:19Z | https://github.com/langchain-ai/langchain/issues/16511 | 2,098,204,503 | 16,511
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code: https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/retrievers/google_vertex_ai_search.py
Gives the error 'MethodNotImplemented: 501 Received http2 header with status: 404'
### Description
I'm trying to use the Google Vertex AI Search wrapper.
I expect to get results from my data store, but I get the error 'MethodNotImplemented: 501 Received http2 header with status: 404'.
The 'content_search_spec' parameter appears to cause the problem, as it does not show up in the API call.
I'd be glad to share my working version.
### System Info
Google Colab Notebook
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Google Vertex AI Wrapper Code | https://api.github.com/repos/langchain-ai/langchain/issues/16509/comments | 1 | 2024-01-24T12:19:06Z | 2024-05-01T16:07:13Z | https://github.com/langchain-ai/langchain/issues/16509 | 2,098,146,484 | 16,509 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [ ] I searched the LangChain documentation with the integrated search.
- [ ] I used the GitHub search to find a similar question and didn't find it.
### Example Code
I created two different indexes using
```
def create_index(node_label, index, text_properties):
existing_graph = Neo4jVector.from_existing_graph(
embedding=embedding_model,
url=url,
username='neo4j',
password=password,
index_name=index,
node_label=node_label,
text_node_properties=text_properties,
embedding_node_property=f"embedding",
)
create_index('Chunk','ockham',['document','text'])
create_index('Embed','tr10', ["text", "number"])
```
I then tried to return the tr10 index using:
```
def neo4j_index(index_name):
"""Use langchain to return the neo4j index"""
index = Neo4jVector.from_existing_index(
embedding=embedding_model(),
url=NEO4J_URI,
username=NEO4J_USER,
password=NEO4J_PASS,
index_name=index_name,
)
return index
index = neo4j_index('tr10')
```
But instead returned the 'ockham' index.
This was fixed by changing my function that created the index to specify different embedding property names.
```
def create_index(node_label, index, text_properties):
existing_graph = Neo4jVector.from_existing_graph(
embedding=embedding_model,
url=url,
username='neo4j',
password=password,
index_name=index,
node_label=node_label,
text_node_properties=text_properties,
embedding_node_property=f"embedding_{index}",
)
```
### Description
See above.
@Tom
### System Info
langchain 0.1.0
langchain-community 0.0.12
langchain-core 0.1.10
python 3.10.12
Ubuntu 22.04
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Neo4jVector.from_existing_index() requires different embedding_property name when returning indexes. | https://api.github.com/repos/langchain-ai/langchain/issues/16505/comments | 3 | 2024-01-24T11:17:49Z | 2024-06-18T16:09:43Z | https://github.com/langchain-ai/langchain/issues/16505 | 2,098,043,103 | 16,505 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
def generate_extract_chain(profile):
template_ner_extract = """
schema中的"职业"可能对应多个实体,都应全部提取,表述不清晰的应该要概括准确,比如“送外卖”是“外卖员”。
schema中的“同行人”可以有多个,应该尽量把对应的姓名和与当事人的关系实体提取出来
返回结果以json格式返回,包括:
ner_result:
updated_labels: 返回画像标签集所有有更新变动的key-value,如果没有更新则值为空
profile: 更新后的{profile},如果没有更新则值为空
"""
ner_prompt = ChatPromptTemplate.from_template(template_ner_extract)
# Schema
schema = {
"properties": {
"姓名": {"type": "string"},
"年龄": {"type": "integer"},
"职业": {"type": "string"},
"同行人": {"type": "string"},
"常住地址": {"type": "string"},
"工作地址": {"type": "string"},
},
"required": ["姓名", "年龄"],
}
# Run chain
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo")
chain = create_extraction_chain(schema, llm, ner_prompt, output_key='ner_result')
return chain
def generate_sequen_chain(ner_chain, reasonable_chain, ask_chain):
overall_chain = SequentialChain(
chains=[ner_chain, reasonable_chain, ask_chain],
input_variables=["profile", "dialogue", "pair", "question", "answer"],
# Here we return multiple variables
output_variables=["ask_result", "ner_result", "resonable_result"],
# output_variables=["new_result"],
verbose=True)
return overall_chain
ask_chain = generate_ask_chain()
ner_chain = generate_ner_chain()
reasonable_chain = generate_resonable_chain()
overall_chain = SequentialChain(
    chains=[ner_chain, reasonable_chain, ask_chain],
)
```
### Description
TypeError: create_extraction_chain() got an unexpected keyword argument 'output_key'
How can I run create_extraction_chain as a sub-chain inside a SequentialChain?
### System Info
python3.11
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | create_extraction_chain无法作为SequentialChain中的子链 | https://api.github.com/repos/langchain-ai/langchain/issues/16504/comments | 6 | 2024-01-24T11:16:59Z | 2024-03-08T16:43:39Z | https://github.com/langchain-ai/langchain/issues/16504 | 2,098,041,743 | 16,504 |