issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have the following piece of code:
```python
openai_llm = ChatOpenAI(model_name='gpt-3.5-turbo-1106', streaming=True, callbacks=[StreamingStdOutCallbackHandler()],
                        temperature=0.5, max_retries=0)
```
I still keep getting this error:
```
urllib3.util.retry:Converted retries value: 2 -> Retry(total=2, connect=None, read=None, redirect=None, status=None)
```
I tried everything; nothing is working at this point. I don't want the retries to happen. I have fallback models, but they aren't being utilized.
### Suggestion:
_No response_ | URL Lib error doesn't get resolved. | https://api.github.com/repos/langchain-ai/langchain/issues/13816/comments | 1 | 2023-11-24T14:18:22Z | 2024-03-13T20:00:26Z | https://github.com/langchain-ai/langchain/issues/13816 | 2,009,801,860 | 13,816 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I've written this code to make sure that the max_retries count is 0. Even though I set that value in my LLM instance, it doesn't work. So I've decided to create a custom WebBaseLoader, but I'm stuck on how to mount it and make sure that it works.
I would appreciate any help!
```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
from langchain.document_loaders import WebBaseLoader

class CustomWebBaseLoader(WebBaseLoader):
    def __init__(
        self,
        # ...
    ) -> None:
        # ...
        if session:
            self.session = session
        else:
            session = requests.Session()
            # ...
            # Set the retry configuration for the session
            retries = Retry(total=0, backoff_factor=1, status_forcelist=[500, 502, 503, 504])
            session.mount('http://', HTTPAdapter(max_retries=retries))
            session.mount('https://', HTTPAdapter(max_retries=retries))
            # ...
            self.session = session
        # ...
```
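For reference, a lighter-weight sketch that avoids subclassing: mount a zero-retry adapter on the loader's existing session after construction. This assumes `WebBaseLoader` exposes a `session` attribute (it does in current releases); the URL is a placeholder.
```python
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://example.com")  # placeholder URL
adapter = HTTPAdapter(max_retries=Retry(total=0))  # disable HTTP-level retries
loader.session.mount("http://", adapter)
loader.session.mount("https://", adapter)
docs = loader.load()
```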
### Suggestion:
_No response_ | How to mount the custom Retry class with 0 retries to improve Fallbacks? | https://api.github.com/repos/langchain-ai/langchain/issues/13814/comments | 1 | 2023-11-24T13:58:05Z | 2023-11-24T15:15:47Z | https://github.com/langchain-ai/langchain/issues/13814 | 2,009,773,400 | 13,814 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi, I was recently trying to implement fallbacks into my system, but even though there are other models that can be hit on failure, they are never used. I tried to debug it, and this is the error I see. I keep getting it even though the max retries value is 0:
```python
openai_llm = ChatOpenAI(model_name='gpt-3.5-turbo-1106', streaming=True, callbacks=[StreamingStdOutCallbackHandler()],
                        temperature=0.5, max_retries=0)
```
Error:
```
DEBUG:urllib3.util.retry:Converted retries value: 2 -> Retry(total=2, connect=None, read=None, redirect=None, status=None)
```
The model takes forever to respond. Is there a fix for this?
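For context, a minimal fallback wiring sketch (the backup model and timeout value are illustrative; `with_fallbacks` is available on runnables in recent 0.0.3xx releases):
```python
from langchain.chat_models import ChatAnthropic, ChatOpenAI

primary = ChatOpenAI(model_name="gpt-3.5-turbo-1106", max_retries=0, request_timeout=30)
backup = ChatAnthropic(model="claude-2")  # illustrative fallback model

# If the primary call raises (timeout, rate limit, ...), the backup is tried instead.
llm_with_fallbacks = primary.with_fallbacks([backup])
print(llm_with_fallbacks.invoke("Say hello").content)
```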
### Suggestion:
_No response_ | Urllib3 retry error fix with fallbacks. | https://api.github.com/repos/langchain-ai/langchain/issues/13811/comments | 3 | 2023-11-24T11:20:31Z | 2023-11-24T13:57:24Z | https://github.com/langchain-ai/langchain/issues/13811 | 2,009,558,007 | 13,811 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi, I was recently trying to implement fallbacks into my system, but even though there are other models that can be hit on failure, they are never used. I tried to debug it, and this is the error I see. I keep getting it even though the max retries value is 0:
```python
openai_llm = ChatOpenAI(model_name='gpt-3.5-turbo-1106', streaming=True, callbacks=[StreamingStdOutCallbackHandler()],
                        temperature=0.5, max_retries=0)
```
Error:
```
DEBUG:urllib3.util.retry:Converted retries value: 2 -> Retry(total=2, connect=None, read=None, redirect=None, status=None)
```
The model takes forever to respond. Is there a fix for this?
### Suggestion:
_No response_ | Urllib Retry error help. | https://api.github.com/repos/langchain-ai/langchain/issues/13809/comments | 1 | 2023-11-24T10:21:56Z | 2023-11-24T11:19:25Z | https://github.com/langchain-ai/langchain/issues/13809 | 2,009,473,024 | 13,809 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi, sorry if I missed it, but I couldn't find an answer to this question in the documentation, here in the issues, or through Google: how can one compute a perplexity score (for each generated token and/or a mean perplexity score for the whole output) during inference?
In our use case, we use LLMs only for inference, but we have to be able to give some kind of confidence score along with the models' answers. We use various integration backends in our stack: HF transformers, vLLM, and llama.cpp, to name a few.
Any help would be greatly appreciated. Thanks!
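This is not a LangChain API, but as a sketch of the underlying computation with a HuggingFace causal LM (the model name and helper function are illustrative): perplexity is the exponential of the mean negative log-likelihood of the completion tokens.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative model
model = AutoModelForCausalLM.from_pretrained("gpt2")

def perplexity(prompt: str, completion: str) -> float:
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + completion, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # ignore prompt tokens, score only the completion
    with torch.no_grad():
        loss = model(full_ids, labels=labels).loss  # mean NLL per scored token
    return torch.exp(loss).item()

print(perplexity("The capital of France is", " Paris."))
```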
### Suggestion:
_No response_ | Issue: how to compute perplexity score during inference? | https://api.github.com/repos/langchain-ai/langchain/issues/13808/comments | 3 | 2023-11-24T09:56:16Z | 2024-03-17T16:06:26Z | https://github.com/langchain-ai/langchain/issues/13808 | 2,009,434,445 | 13,808 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
_No response_
### Suggestion:
_No response_ | It’s so damn hard to use. | https://api.github.com/repos/langchain-ai/langchain/issues/13807/comments | 3 | 2023-11-24T09:19:01Z | 2024-03-13T19:55:50Z | https://github.com/langchain-ai/langchain/issues/13807 | 2,009,376,766 | 13,807 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello
I have to configure LangChain with PDF data, and the PDF contains a lot of unstructured tables.
The documents mix plain text and tables, so how do you recommend handling them?
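One hedged approach (a sketch, assuming the `unstructured` extras are installed; "example.pdf" is a placeholder): load the PDF in element mode so table elements can be routed differently from narrative text.
```python
from langchain.document_loaders import UnstructuredPDFLoader

loader = UnstructuredPDFLoader("example.pdf", mode="elements")
docs = loader.load()

# Element-level metadata lets you treat tables separately from prose.
tables = [d for d in docs if d.metadata.get("category") == "Table"]
text = [d for d in docs if d.metadata.get("category") != "Table"]
```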
### Suggestion:
_No response_ | how to handle a PDF(including tables..) | https://api.github.com/repos/langchain-ai/langchain/issues/13805/comments | 7 | 2023-11-24T08:54:28Z | 2024-04-18T16:34:48Z | https://github.com/langchain-ai/langchain/issues/13805 | 2,009,341,776 | 13,805 |
[
"hwchase17",
"langchain"
]
| CypherQueryCorrector does not handle some query types:
If there is a query like this (with a comma between clauses in the MATCH):
```
MATCH (a:APPLE {apple_id: 123})-[:IN]->(b:BUCKET), (ba:BANANA {name: banana1})
```
Corresponding code section:
- It extracts a relation between BUCKET and BANANA; however, there is none.
- The ELSE case should be split into sub-cases (INCOMING relation, OUTGOING relation, BIDIRECTIONAL relation, no relation, etc.).
- If there is no relation and only a comma between clauses, it should not attempt validation.
https://github.com/langchain-ai/langchain/blob/751226e067bc54a70910763c0eebb34544aaf47c/libs/langchain/langchain/chains/graph_qa/cypher_utils.py#L228 | CypherQueryCorrector cannot validate a correct cypher, some query types are not handled | https://api.github.com/repos/langchain-ai/langchain/issues/13803/comments | 4 | 2023-11-24T08:39:00Z | 2023-11-27T03:30:12Z | https://github.com/langchain-ai/langchain/issues/13803 | 2,009,321,641 | 13,803 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```
Entering new SQLDatabaseChain chain...
what is Gesh D desigation
SQLQuery:SELECT [Designation]
FROM [EGV_emp_departments_ChatGPT]
WHERE [EmployeeName] = 'Gesh D'
SQLResult:
Answer:Final answer here
> Finished chain.
Final answer here
```
Below is my code:
```python
# import os
# import re
# from langchain.llms import OpenAI
# from langchain_experimental.sql import SQLDatabaseChain
# from langchain.sql_database import SQLDatabase
# # from secret_key import openapi_key
# openapi_key = "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"
# os.environ['OPENAI_API_KEY'] = openapi_key
# def chat(question):
# # llm = OpenAI(temperature=0)
# # tools = load_tools(["llm-math"], llm=llm)
# # agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
# driver = 'ODBC Driver 17 for SQL Server'
# from urllib.parse import quote_plus
# driver = 'ODBC Driver 17 for SQL Server'
# host = '####'
# user = '###'
# database = '#####'
# password = '#####'
# encoded_password = quote_plus(password)
# db = SQLDatabase.from_uri(f"mssql+pyodbc://{user}:{encoded_password}@{host}/{database}?driver={quote_plus(driver)}", include_tables = ['HRMSGPTAutomation'], sample_rows_in_table_info=2)
# llm = OpenAI(temperature=0, verbose=True)
# token_limit = 16_000
# model_name="gpt-3.5-turbo-16k"
# db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
# # agent_executor = create_sql_agent(
# # llm=llm,
# # toolkit=toolkit,
# # verbose=True,
# # reduce_k_below_max_tokens=True,
# # )
# # mrkl = initialize_agent(
# # tools,
# # ChatOpenAI(temperature=0),
# # agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
# # verbose=True,
# # handle_parsing_errors=True,
# # )
# return db_chain.run(question)
# # print(chat("what is Vijayalakshmi B department"))
import sqlalchemy as sal
import os, sys, openai
import constants
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain_experimental.sql import SQLDatabaseChain
from sqlalchemy import create_engine
from langchain.chat_models import ChatOpenAI
from typing import List, Optional
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
PlanAndExecute,
load_agent_executor,
load_chat_planner,
)
from langchain.sql_database import SQLDatabase
from langchain.text_splitter import TokenTextSplitter
from langchain.tools import BaseTool
from langchain.tools.sql_database.tool import QuerySQLDataBaseTool
from secret_key import openapi_key
os.environ['OPENAI_API_KEY'] = openapi_key
def chat(question):
from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"
engine = create_engine(connection_uri)
model_name="gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
PROMPT = """
Given an input question, first create a syntactically correct mssql query to run,
then look at the results of the query and return the answer.
The question: {db_chain.run}
"""
return db_chain.run(question)
answer=chat("what is Gesh D desigation")
print(answer)
```
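Note that the PROMPT string defined above is never actually passed to the chain. A hedged sketch of wiring a custom prompt in (the input variables mirror the default SQLDatabaseChain prompt; whether the model honors the instruction still depends on the LLM):
```python
from langchain.prompts import PromptTemplate

_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run,
then look at the results of the query and return the answer.
If the SQLResult is empty, the Answer must be exactly "No results found". Do not make up an answer.

Use the following format:

Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here

Only use the following tables:
{table_info}

Question: {input}"""

NO_RESULT_PROMPT = PromptTemplate(
    input_variables=["input", "table_info", "dialect"],
    template=_TEMPLATE,
)
db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=NO_RESULT_PROMPT, verbose=True)
```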
### Suggestion:
_No response_ | How to modify the code, If the SQLResult is empty, the Answer should be "No results found". DO NOT hallucinate an answer if there is no result. | https://api.github.com/repos/langchain-ai/langchain/issues/13802/comments | 9 | 2023-11-24T08:18:06Z | 2024-04-22T16:30:35Z | https://github.com/langchain-ai/langchain/issues/13802 | 2,009,296,023 | 13,802 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I encountered an IndexError due to an attempt to access a list index that is out of range. The issue occurs when no documents are available (for example, after all embeddings have been deleted). To handle this case gracefully, consider implementing a mechanism that displays a default value or a meaningful message when there are no documents present. The specific line causing the error is:
```python
source = relevant_document[0].metadata['source']
```
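A minimal guard sketch (the retriever and query names are illustrative):
```python
relevant_document = retriever.get_relevant_documents(query)  # illustrative retrieval call
if relevant_document:
    source = relevant_document[0].metadata['source']
else:
    source = "No documents found"  # default value instead of raising IndexError
```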
### Suggestion:
_No response_ | Issue:Handle IndexError for Empty Document Lists - Display Default Value or Message | https://api.github.com/repos/langchain-ai/langchain/issues/13799/comments | 3 | 2023-11-24T06:41:19Z | 2024-03-13T19:56:37Z | https://github.com/langchain-ai/langchain/issues/13799 | 2,009,189,632 | 13,799 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Currently, when initializing the `MlflowCallbackHandler`, we can pass in name, experiment, and tracking_uri, but we can't pass in the `nested` param.
### Motivation
It would be nice if we could do that, so that when multiple chains are running we can nest those runs under one parent run, making it easier to group them for later monitoring/debugging.
### Your contribution
I tried to edit the mlflow_callback.py and add the nested option, but it doesn't seem to honor the value.
Please let me know if you know how to make this work and I'm happy to put up a PR. Thanks! | Feat: MLFlow callback allow passing `nested` param | https://api.github.com/repos/langchain-ai/langchain/issues/13795/comments | 1 | 2023-11-24T05:05:03Z | 2024-03-13T20:04:39Z | https://github.com/langchain-ai/langchain/issues/13795 | 2,009,104,256 | 13,795 |
[
"hwchase17",
"langchain"
]
| ### System Info
master branch
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://github.com/langchain-ai/langchain/blob/751226e067bc54a70910763c0eebb34544aaf47c/libs/core/langchain_core/prompts/chat.py#L653-L659
ChatPromptTemplate overrides the save method to raise NotImplementedError, while the base class `BasePromptTemplate` has a default implementation of the `save` method: https://github.com/langchain-ai/langchain/blob/751226e067bc54a70910763c0eebb34544aaf47c/libs/core/langchain_core/prompts/base.py#L157-L192. That default only depends on the `_prompt_type` property, which ChatPromptTemplate already implements here: https://github.com/langchain-ai/langchain/blob/751226e067bc54a70910763c0eebb34544aaf47c/libs/core/langchain_core/prompts/chat.py#L648-L651.
If the overriding save method is removed, ChatPromptTemplate can be saved correctly.
### Expected behavior
We should be able to save ChatPromptTemplate object into a file. | ChatPromptTemplate save method not implemented | https://api.github.com/repos/langchain-ai/langchain/issues/13794/comments | 1 | 2023-11-24T04:52:26Z | 2024-03-13T20:04:59Z | https://github.com/langchain-ai/langchain/issues/13794 | 2,009,093,017 | 13,794 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
It seems my Discord account was hacked, and it automatically sent advertisements like the one below. I thought they were official ads (just like on Telegram) that only showed to me, so I didn't pay attention to them.
<img width="503" alt="image" src="https://github.com/langchain-ai/langchain/assets/1664952/8a048285-bd18-408e-9a31-0d2c57e1ef17">
Now I can't join the Discord server as expected.
I have enabled 2FA and changed my password, so could you please assist me in removing myself from the Discord blacklist? My account ID is "h3l1221". Thanks a lot.
### Suggestion:
_No response_ | Issue: please assist me in removing myself from the Discord blacklist. | https://api.github.com/repos/langchain-ai/langchain/issues/13793/comments | 1 | 2023-11-24T03:13:29Z | 2024-03-16T16:06:51Z | https://github.com/langchain-ai/langchain/issues/13793 | 2,009,030,550 | 13,793 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Can I define an agent using an LLMChain or a ConversationChain as a tool?
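Yes, any chain's `run` method can be wrapped as a tool. A hedged sketch (the chain, tool name, and description are placeholders):
```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)
conversation_chain = ConversationChain(llm=llm)

chat_tool = Tool(
    name="general_chat",
    func=conversation_chain.run,
    description="Useful for general conversation and follow-up questions.",
)
agent = initialize_agent([chat_tool], llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Say hi and introduce yourself.")
```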
### Suggestion:
_No response_ | Issue: can i define an agent using a llm chain or a conversationcain as a too? | https://api.github.com/repos/langchain-ai/langchain/issues/13792/comments | 9 | 2023-11-24T02:15:35Z | 2024-03-13T20:03:36Z | https://github.com/langchain-ai/langchain/issues/13792 | 2,008,978,562 | 13,792 |
[
"hwchase17",
"langchain"
]
| ### System Info
$ python3 --version
Python 3.11.6
$ pip show openai | grep Version
Version: 1.3.5
$ pip show langchain | grep Version
Version: 0.0.340
### Who can help?
Anyone who wants to use Azure OpenAI deployments with LangChain on the latest openai package version 1.x.x.
### Information
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
### Reproduction
Hi @hwchase17, @agola11, all!
Thanks in advance for your huge work.
So far I have successfully used LangChain with the openai module 0.28.x.
Today I upgraded the openai package to the latest version (1.x.x) and also installed the latest langchain package version.
I configured the environment variables (as described above) and ran the following simple program, as described in the LangChain documentation: https://python.langchain.com/docs/integrations/chat/azure_chat_openai
```
$ source path/.langchain_azure.env
$ cat path/.langchain_azure.env
# https://python.langchain.com/docs/integrations/llms/azure_openai
export OPENAI_API_TYPE=azure
export OPENAI_API_VERSION=2023-09-01-preview
export AZURE_OPENAI_ENDPOINT="https://xxxxxxxxxxx.openai.azure.com/"
export AZURE_OPENAI_API_KEY="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
```
Program that generates a run-time exception:
```python
# langchain_simple.py
# this example follows the langchain documentation example at: https://python.langchain.com/docs/integrations/chat/azure_chat_openai
import os
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage
model = AzureChatOpenAI(
openai_api_version="2023-05-15",
azure_deployment="gpt-35-turbo" # my existing deployment name
)
message = HumanMessage(content="Translate this sentence from English to French. I love programming.")
model([message])
```
I got the following exception:
```
$ py langchain_simple.py
/home/giorgio/.local/lib/python3.11/site-packages/langchain/chat_models/azure_openai.py:162: UserWarning: As of openai>=1.0.0, if `deployment_name` (or alias `azure_deployment`) is specified then `openai_api_base` (or alias `base_url`) should not be. Instead use `deployment_name` (or alias `azure_deployment`) and `azure_endpoint`.
warnings.warn(
/home/giorgio/.local/lib/python3.11/site-packages/langchain/chat_models/azure_openai.py:170: UserWarning: As of openai>=1.0.0, if `openai_api_base` (or alias `base_url`) is specified it is expected to be of the form https://example-resource.azure.openai.com/openai/deployments/example-deployment. Updating https://openai-convai.openai.azure.com/ to https://openai-convai.openai.azure.com/.
warnings.warn(
Traceback (most recent call last):
File "/home/giorgio/gpt/langchain/langchain_simple.py", line 9, in <module>
model = AzureChatOpenAI(
^^^^^^^^^^^^^^^^
File "/home/giorgio/.local/lib/python3.11/site-packages/langchain/load/serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "/home/giorgio/.local/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for AzureChatOpenAI
__root__
base_url and azure_endpoint are mutually exclusive (type=value_error)
```
BTW, I do not understand exactly the error sentence:
```base_url and azure_endpoint are mutually exclusive (type=value_error)```
Where am I wrong?
Thanks
Giorgio
### Expected behavior
I didn't expect to have run-time errors | Azure OpenAI (with openai module 1.x.x) seems not working anymore | https://api.github.com/repos/langchain-ai/langchain/issues/13785/comments | 11 | 2023-11-23T18:06:59Z | 2024-07-01T16:04:09Z | https://github.com/langchain-ai/langchain/issues/13785 | 2,008,648,479 | 13,785 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
How can I add the new parameter **JSON MODE** ( https://platform.openai.com/docs/guides/text-generation/json-mode )
to this snippet of code?
```python
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(
model_name="gpt-3.5-turbo-1106",
temperature=1,
max_tokens=None
)
```
I see in openai it should be used in this way:
```python
response = client.chat.completions.create(
model="gpt-3.5-turbo-1106",
response_format={ "type": "json_object" },
messages=[
{"role": "system", "content": "You are a helpful assistant designed to output JSON."},
{"role": "user", "content": "Who won the world series in 2020?"}
]
)
```
Thanks!!
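For reference, a hedged sketch: extra OpenAI parameters that ChatOpenAI does not expose directly can usually be forwarded through `model_kwargs` (note the API also requires the word "JSON" to appear somewhere in the messages):
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

llm = ChatOpenAI(
    model_name="gpt-3.5-turbo-1106",
    temperature=1,
    model_kwargs={"response_format": {"type": "json_object"}},
)
messages = [
    SystemMessage(content="You are a helpful assistant designed to output JSON."),
    HumanMessage(content="Who won the world series in 2020?"),
]
print(llm(messages).content)
```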
### Suggestion:
_No response_ | How to add json_object to ChatOpenAI class? | https://api.github.com/repos/langchain-ai/langchain/issues/13783/comments | 6 | 2023-11-23T17:26:32Z | 2024-05-16T16:07:54Z | https://github.com/langchain-ai/langchain/issues/13783 | 2,008,609,445 | 13,783 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Support auth token headers in langchain, when underlying model supports it.
### Motivation
I'd like to use ChatAnthropic, but I'm using a custom proxy backend where I need to provide an Authorization header.
This is simple when using the Anthropic client directly, as it accepts an auth_token parameter.
The problem is that the LangChain ChatAnthropic abstraction doesn't accept this param.
It would be great to have an option to pass auth_token to the model when the model supports it.
### Your contribution
I don't feel comfortable in this codebase to create a PR | Support auth header, when underlying client support it. | https://api.github.com/repos/langchain-ai/langchain/issues/13782/comments | 1 | 2023-11-23T16:46:27Z | 2024-03-13T20:02:24Z | https://github.com/langchain-ai/langchain/issues/13782 | 2,008,561,158 | 13,782 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
InMemoryCache works, but it doesn't print to stderr like all other LLM responses. I checked the logs and the response is stored. I don't know how to access these stored values...
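For what it's worth, a sketch of inspecting the cache directly: InMemoryCache keeps entries in an internal dict keyed by (prompt, llm_string), and also exposes a `lookup()` method. The `_cache` attribute is an implementation detail, so this may change between versions.
```python
import langchain
from langchain.cache import InMemoryCache

langchain.llm_cache = InMemoryCache()
# ... run some LLM calls here so the cache gets populated ...

for (prompt, llm_string), generations in langchain.llm_cache._cache.items():
    print("Q:", prompt)
    print("A:", generations[0].text)
```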
### Suggestion:
option to view object or option to return values. | Issue: how to access to cached question and answer in InMemoryCache | https://api.github.com/repos/langchain-ai/langchain/issues/13778/comments | 8 | 2023-11-23T13:14:19Z | 2024-02-12T14:22:48Z | https://github.com/langchain-ai/langchain/issues/13778 | 2,008,213,482 | 13,778 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
if "our goal is to have the simplest developer setup possible", we shouldn't encourage mixing poetry with the use of conda and pip in [CONTRIBUTING.md](https://github.com/langchain-ai/langchain/blob/5ae51a8a85d1a37ea98afeaf639a72ac74a50523/.github/CONTRIBUTING.md). Mixing different tools for the same job is error prone and leads to confusion for new contributors.
Specifically, the following lines should be reconsidered:
❗Note: Before installing Poetry, if you use Conda, create and activate a new Conda env (e.g. conda create -n langchain python=3.9) ([link](https://github.com/langchain-ai/langchain/blob/5ae51a8a85d1a37ea98afeaf639a72ac74a50523/.github/CONTRIBUTING.md#:~:text=%E2%9D%97Note%3A%20Before%20installing%20Poetry%2C%20if%20you%20use%20Conda%2C%20create%20and%20activate%20a%20new%20Conda%20env%20(e.g.%20conda%20create%20%2Dn%20langchain%20python%3D3.9)))
❗Note: If you use Conda or Pyenv as your environment/package manager, after installing Poetry, tell Poetry to use the virtualenv python environment (poetry config virtualenvs.prefer-active-python true) ([link](https://github.com/langchain-ai/langchain/blob/5ae51a8a85d1a37ea98afeaf639a72ac74a50523/.github/CONTRIBUTING.md#:~:text=%E2%9D%97Note%3A%20If%20you%20use%20Conda%20or%20Pyenv%20as%20your%20environment/package%20manager%2C%20after%20installing%20Poetry%2C%20tell%20Poetry%20to%20use%20the%20virtualenv%20python%20environment%20(poetry%20config%20virtualenvs.prefer%2Dactive%2Dpython%20true)))
If the tests don't pass, you may need to pip install additional dependencies, such as numexpr and openapi_schema_pydantic. ([link](https://github.com/langchain-ai/langchain/blob/5ae51a8a85d1a37ea98afeaf639a72ac74a50523/.github/CONTRIBUTING.md#:~:text=If%20the%20tests%20don%27t%20pass%2C%20you%20may%20need%20to%20pip%20install%20additional%20dependencies%2C%20such%20as%20numexpr%20and%20openapi_schema_pydantic.))
### Idea or request for content:
_No response_ | DOC: Simplify CONTRIBUTING.md by removing conda and pip references | https://api.github.com/repos/langchain-ai/langchain/issues/13776/comments | 1 | 2023-11-23T11:09:26Z | 2024-03-13T19:56:04Z | https://github.com/langchain-ai/langchain/issues/13776 | 2,008,002,786 | 13,776 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
```python
import pandas as pd
import json
from IPython.display import Markdown, display
from langchain.agents import create_csv_agent
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
import os
os.environ["OPENAI_API_KEY"] = ""
# Load the dataset
df = pd.read_csv('Loan Collections - Sheet1.csv')
def convert_columns_to_float_by_keywords(df):
# Convert date and time columns to float
for column in df.select_dtypes(include=['object']).columns:
# Check if the column name contains 'date' or 'time'
if 'date' in column.lower() or 'time' in column.lower() or 'dob' in column.lower() or 'date of birth' in column.lower():
try:
# Convert the column to datetime
df[column] = pd.to_datetime(df[column], errors='coerce')
# Convert datetime to numerical representation (e.g., days since a reference date)
reference_date = pd.to_datetime('1900-01-01')
df[column] = (df[column] - reference_date).dt.total_seconds() / (24 * 60 * 60)
except ValueError:
# Handle errors during conversion
print(f"Error converting column '{column}' to float.")
# Convert columns with specific keywords to float
keywords_to_convert = ["unique id", "reference id", "account id"]
for column in df.columns:
# Check if the column name contains any of the specified keywords
if any(keyword in column.lower() for keyword in keywords_to_convert):
try:
# Convert the column to float
df[column] = pd.to_numeric(df[column], errors='coerce')
except ValueError:
# Handle errors during conversion
print(f"Error converting column '{column}' to float.")
# Convert 'date' and 'time' columns to float
convert_columns_to_float_by_keywords(df)
# Extract unique values for each column
unique_values_per_column = {}
for column in df.select_dtypes(include=['object']).columns:
unique_values_per_column[column] = df[column].unique().tolist()
# Convert the dictionary to JSON
json_data_train = json.dumps(unique_values_per_column, indent=4)
testData_fname = "Mutual Funds 2023 - Mutual Funds Data Final File (1).csv"
# Load the dataset
df2 = pd.read_csv(testData_fname)
convert_columns_to_float_by_keywords(df2)
# Extract unique values for each column
unique_values_per_column = {}
for column in df2.select_dtypes(include=['object']).columns:
unique_values_per_column[column] = df2[column].unique().tolist()
# Convert the dictionary to JSON
json_data_test = json.dumps(unique_values_per_column, indent=4)
# Define user's question
user_question = "monthly growth analysis of Broker Commission ?"
# Define the prompt template
prompt_template = f'''If the dataset has the following columns: {json_data_train}'''+''' Understand user questions with different column names and convert them to a JSON format.
Question might not even mentioned column name at all, it would probably mention value of the column. so it has to figure it out columnn name based on that value.
Example1:
User Question1: top zone in the year 2019 with Loan Amt between 10k and 20k and tenure > 12 excluding Texas region?
{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": [],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": ["Zone"],
"filters": {},
"not_in": {"Region": ["Texas"]},
"num_filter": {
"gt": [
["Loan Tenure", 12],
["Loan Amount", 10000]
],
"lt": [
["Loan Amount", 20000]
]
},
"percent": "false",
"top": "1",
"bottom": "null"
}
Note the following in the above example
- The word "top" in the User Question made the "top" key have the value as "1". If "highest" is mentioned in the User Question, even then "top" would have the value as "1". If "top" is not mentioned or not implied in the User Question, then it takes on the value "null". Similarly for "bottom" key in the System Response.
- The word "zone" in the User Question refers to a column "Zone" in the dataset and since it is a non-numeric column and we have to group by that column, the system response has it as one of the values of the list of the key "variables_grpby"
- The key "start_date" and "end_date" Since it is mentioned 2019 in the User Question as the timeframe, the "start_date" assumes the beginning of the year 2019 and "end_date" assumes the end of the year 2019. If no date related words are mentioned in the question, "start_date" would be "null" and "end_date" would be "null".
- The key "time_stamp_col" in the System Response should mention the relevant time related column name from the dataset according to the question if the question mentions a time related word.
- The key "agg_columns" in the System Response is a list of columns to be aggregated which should mention the numeric column names on which the question wants us to aggregate on.
- The key "trend" in the System Response, "trend" is set to "null" since the user question doesn't imply any trend analysis . If the question were about trends over time, this key would contain information about the trend, such as "upward," "downward," or "null" if no trend is specified.
- The key "filters" An empty dictionary in this case, as there are no explicit filters mentioned in the user question. If the user asked to filter data based on certain conditions (e.g. excluding a specific region), this key would contain the relevant filters.
- The key "to_start_date" and "to_end_date" Both set to "null" in this example because the user question specifies a single timeframe (2019). If the question mentioned a range (e.g. "from January 2019 to March 2019"), these keys would capture the specified range.
- The key "growth" Set to "null" in this example as there is no mention of growth in the user question. If the user inquired about growth or change over time, this key would provide information about the type of growth (e.g."monthly","yearly"," "absolute") or be set to "null" if not applicable.
- The key "not_in" Contains information about exclusion criteria based on the user's question. In this example, it excludes the "Texas" region. If the user question doesn't involve exclusions, this key would be an empty dictionary.
- The key "num_filter" Specifies numerical filters based on conditions in the user question. In this example, it filters loans with a tenure greater than 12 and loan amounts between 10k and 20k. If the user question doesn't involve numerical filters, this key would be an empty dictionary.
- The key "percent" Set to "false" in this example as there is no mention of percentage in the user question. If the user inquired about percentages, this key would contain information about the use of percentages in the response.
Similarly, below are more examples of user questions and their corresponding expected System Responses.
Example 2:
User Question: What is the Highest Loan Amount and Loan Outstanding by RM Name James in January 2020
{
"start_date": "01-01-2020",
"end_date": "31-01-2020",
"time_stamp_col": "Due Date",
"agg_columns": ["Loan Amount", "Loan Outstanding"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": [],
"filters": {"RM Name": ["James"]},
"not_in": {},
"num_filter": {},
"percent": "false",
"top": "1",
"bottom": "null"
}
Example 3:
User Question: Which RM Name with respect to Region has the Highest Interest Outstanding and Principal Outstanding in the year 2019
{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Interest Outstanding", "Principal Outstanding"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": ["RM Name", "Region"],
"filters": {},
"not_in": {},
"num_filter": {},
"percent": "false",
"top": "1",
"bottom": "null"
}
Example 4:
User Question: Which Branch in North Carolina with respect to Cibil Score Bucket has the Highest Cibil Score in 2019
{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Cibil Score", "DPD Bucket"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": ["Branch"],
"filters": {"Region": ["North Carolina"]},
"not_in": {},
"num_filter": {},
"percent": "false",
"top": "1",
"bottom": "null"
}
''
Example 5:
User Question: With respect to Zone, Region, Branch, RM Name what is the Highest Loan Amount, Loan Tenure, Loan Outstanding, EMI Pending, Principal Outstanding
{
"start_date": "null",
"end_date": "null",
"time_stamp_col": "null",
"agg_columns": ["Loan Amount", "Loan Tenure", "Loan Outstanding", "EMI Pending", "Principal Outstanding"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": ["Zone", "Region", "Branch", "RM Name"],
"filters": {},
"not_in": {},
"num_filter": {},
"percent": "false",
"top": "1",
"bottom": "null"
}
Example 6:
User Question: Top 2 zones by Housing Loan in the year 2019
{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Housing Loan"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": ["Zone"],
"filters": {"Product": ["Home Loan"]},
"not_in": {},
"num_filter": {},
"percent": "false",
"top": "2",
"bottom": "null"
}
Example 7:
User Question: yearly growth analysis by Due Date of Loan Amount?
{
"start_date": "null",
"end_date": "null",
"time_stamp_col": "Due Date",
"agg_columns": ["Loan Amount"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "yearly",
"variables_grpby": ["Due Date"],
"filters": {},
"not_in": {},
"num_filter": {},
"percent": "false",
"top": "null",
"bottom": "null"
}
'''+ f'''Our test dataset has the following columns: {json_data_test}
User Question (to be converted): {user_question}'''
# Set the context length
context_length = 3 # Set your desired context length here
# Load the agent with GPT-4 and the specified context length
gpt4_agent = create_csv_agent(ChatOpenAI(temperature=0, model_name="gpt-4"), testData_fname, context_length=context_length)
# Use the formatted question as the input to your agent
response = gpt4_agent.run(prompt_template)
# Print the response
print(user_question)
print(response)
```
The error is: `RateLimitError: Rate limit reached for gpt-4 in organization org-bJurmVX4HBor6BJfUF9Q6miB on tokens per min (TPM): Limit 10000, Used 1269, Requested 9015. Please try again in 1.704s. Visit https://platform.openai.com/account/rate-limits to learn more.` So I want to use the gpt-4 model, but I need to limit or split the prompt context so it stays under the token limit. How can I do that?
### Idea or request for content:
_No response_ | how to chain the lenght of context if i wanted use gpt-4 model because of increase token i can't use gpt 4 so how to chain the context | https://api.github.com/repos/langchain-ai/langchain/issues/13772/comments | 1 | 2023-11-23T08:57:22Z | 2024-03-13T20:02:21Z | https://github.com/langchain-ai/langchain/issues/13772 | 2,007,748,476 | 13,772 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The docs state that there's a parameter called response_if_no_docs_found which, if specified, makes the chain return a fixed response when no docs are found for the question.
I have tried it, and another person I asked says they tried it too and it doesn't work.
How do I get it to work? It's a very important aspect for my use case.
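For context, `response_if_no_docs_found` only triggers when the retriever returns zero documents, which a plain similarity search almost never does. A hedged sketch pairing it with a score-thresholded retriever (assumes `llm` and `vectordb` are already defined; the threshold value is illustrative):
```python
from langchain.chains import ConversationalRetrievalChain

qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=vectordb.as_retriever(
        search_type="similarity_score_threshold",
        search_kwargs={"score_threshold": 0.8, "k": 4},
    ),
    response_if_no_docs_found="Sorry, I could not find anything relevant.",
)
```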
### Suggestion:
_No response_ | Issue: Conversational retrieval chain, response_if_no_docs_found not working | https://api.github.com/repos/langchain-ai/langchain/issues/13771/comments | 2 | 2023-11-23T08:17:41Z | 2024-03-13T19:57:37Z | https://github.com/langchain-ai/langchain/issues/13771 | 2,007,681,270 | 13,771 |
[
"hwchase17",
"langchain"
]
| ### Feature request
JS LangChain (https://js.langchain.com/docs/integrations/chat/ollama_functions) supports Ollama Functions and allows returning JSON output from Ollama. It would be good to have the same functionality in Python LangChain.
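As a stopgap sketch, the Ollama REST API itself already accepts `format: "json"`, so it can be called directly until the Python wrapper exposes the option (host, model, and prompt are placeholders):
```python
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",
        "prompt": "List three colors as JSON under the key 'colors'.",
        "format": "json",
        "stream": False,
    },
)
print(json.loads(resp.json()["response"]))
```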
### Motivation
A JSON response from LangChain's Ollama integration would be very useful when integrating LangChain into API pipelines running in local environments.
### Your contribution
I could test the implementation. | JSON response support for Ollama | https://api.github.com/repos/langchain-ai/langchain/issues/13770/comments | 1 | 2023-11-23T08:13:02Z | 2024-03-13T19:55:41Z | https://github.com/langchain-ai/langchain/issues/13770 | 2,007,675,246 | 13,770 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am trying to create a Vertex AI and LangChain based entity-extraction program. I have stored my documents as vector embeddings in ChromaDB. I have been trying to extract attributes/features like name, price, etc., but whenever I run chains I'm getting this error.
### Suggestion:
_No response_ | _TextGenerationModel.predict() got an unexpected keyword argument 'functions'Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/13766/comments | 4 | 2023-11-23T06:51:29Z | 2024-03-17T16:06:22Z | https://github.com/langchain-ai/langchain/issues/13766 | 2,007,583,055 | 13,766 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Introduce a more flexible method to specify model parameters directly within the LLMChain or VLLM method, allowing users to modify these parameters without creating a new model instance each time.
### Motivation
Currently, when utilizing the Langchain library's VLLM class, modifying model parameters (such as temperature, top_k, top_p, etc.) requires creating a new instance of the VLLM class. This process becomes cumbersome, especially when frequent adjustments to model parameters are necessary.
Current Situation:
```python
from langchain.llms import VLLM
llm = VLLM(
model="mosaicml/mpt-7b",
trust_remote_code=True, # mandatory for hf models
max_new_tokens=128,
top_k=10,
top_p=0.95,
temperature=0.8,
)
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "Who was the US president in the year the first Pokemon game was released?"
print(llm_chain.run(question))
```
It is very tedious to keep creating a new model whenever I want to change model params like temperature, etc.
In native vLLM, this can be done easily:
```python
llm = LLM(model="qwen/Qwen-7B-Chat", revision="v1.1.8", trust_remote_code=True)
outputs = llm.generate(prompts, model_params)
outputs = llm.generate(prompts, model_params)
```
### Your contribution
NA | Changing Model Param after Initialization VLLM Model | https://api.github.com/repos/langchain-ai/langchain/issues/13762/comments | 2 | 2023-11-23T03:45:30Z | 2024-04-22T16:52:00Z | https://github.com/langchain-ai/langchain/issues/13762 | 2,007,439,169 | 13,762 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I got 2 functions and 2 related tools:
```python
def Func_Tool_Extractor(parameters):
print("\n", parameters)
print("触发Func_Tool_Extractor插件")
pdf_id, extractor_id_position = parameters.split(",")
print(f"获取到pdf id为{pdf_id},这篇文档用户想要查询第{extractor_id_position}个分子")
test_smiles = "C1C2=C3CCC3=C2CC1"
extractor_id = 1876
print(f"查询到第{extractor_id_position}个分子的smiles为{test_smiles}, extractor_id为{extractor_id}")
answer = f"查询到分子的SMILES为{test_smiles}, extractor_id为{extractor_id}, pdf_id为{pdf_id}"
return answer
def Func_Tool_ADMET(parameters):
print("\n", parameters)
print("触发Func_Tool_ADMET插件")
print("........正在解析ADMET属性..........")
data = {
"id": "2567",
"smiles": "C1=CC=CC=C1C1=C(C2=CC=CC=C2)C=CC=C1",
"humanIntestinalAbsorption": "HIA+|0.73",
"caco2Permeability": "None",
"caco2PermeabilityIi": "Caco2+|0.70",
"pGlycoproteinInhibitorI": "Pgp_nonInhibitor|0.51",
"pGlycoproteinInhibitorIi": "Pgp_Inhibitor|0.68",
"pGlycoproteinSubstrate": "substrate|0.56",
"bloodBrainBarrier": "BBB+|0.73",
"cyp4501a2Inhibitor": "Inhibitor|0.73",
"cyp4502c19Inhibitor": "Inhibitor|0.68",
"cyp4502c9Inhibitor": "Non_Inhibitor|0.53",
"cyp4502c9Substrate": "non-substrate|0.59",
"cyp4502d6Inhibitor": "Non_Inhibitor|0.65",
"cyp4502d6Substrate": "substrate|0.55",
"cyp4503a4Inhibitor": "Non_Inhibitor|0.71",
"cyp4503a4Substrate": "non_substrate|0.52",
"cypInhibitorPromiscuity": "High CYP Inhibitory Promiscuity|0.61",
"biodegradation": "Not ready biodegradable|0.64",
"renalOrganicCationTransporter": "inhibitor|0.64",
"amesToxicity": "Non AMES toxic|0.66",
"carcinogens": "non_carcinogens|0.71",
"humanEtherAGoGoRelatedGeneInhibitionI": "Weak inhibitor|0.60",
"humanEtherAGoGoRelatedGeneInhibitionIi": "Weak inhibitor|0.53",
"honeyBeeToxicity": "highAT|0.56",
"tetrahymenaPyriformisToxicity": "None",
"tetrahymenaPyriformisToxicityIi": "non-TPT|0.73",
"fishToxicity": "None",
"fishToxicityIi": "High FHMT|0.73",
"aqueousSolubility": "None",
"savePath": "/profile/chemicalAppsResult/admetResult/2023/11/10/2567",
"status": "1",
"favoriteFlag": "0"
}
return data
# Define the tools
tool1 = Tool(
name="Tool_Extractor",
func=Func_Tool_Extractor,
description="""
useful when you want to get a molecule from a document.
like: get the 3rd molecule of the document.
The input of this tool should be comma separated string of two, representing the pdf_id and the number of the molecule to be gotten.
"""
)
tool2 = Tool(
name="Tool_ADMET",
func=Func_Tool_ADMET,
description="""
useful when you want to obtain the ADMET data for a molecule.
like: get the ADMET data for molecule X
The input to this tool should be a string, representing the SMILES of the molecule.
"""
)
```
Give me a CONVERSATIONAL_REACT_DESCRIPTION code example that uses these two tools and meets the following requirements (a rough sketch follows the list):
1. Use initialize_agent method to initialize the agent as needed;
2. The output of tool2 should not be modified by LLM or further processed by the agent chain, avoiding data elimination caused by the thoughts made by LLM models;
3. Use memory to keep history chat messages;
4. Use prompt templates to customize the outputs, especially for the tool2.
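A rough sketch toward these requirements (hedged; the prompt, memory settings, and question are illustrative, and `return_direct=True` addresses point 2 by returning the ADMET tool's output verbatim instead of letting the LLM rewrite it):
```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

tool2_direct = Tool(
    name="Tool_ADMET",
    func=Func_Tool_ADMET,
    description="Obtain the ADMET data for a molecule given its SMILES string.",
    return_direct=True,  # hand the raw tool output back without further LLM processing
)

agent = initialize_agent(
    [tool1, tool2_direct],
    ChatOpenAI(temperature=0),
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=ConversationBufferMemory(memory_key="chat_history"),
    verbose=True,
)
agent.run("Get the 3rd molecule of pdf 42 and then its ADMET data")
```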
### Suggestion:
_No response_ | Combine Tools Together in a Chain | https://api.github.com/repos/langchain-ai/langchain/issues/13760/comments | 1 | 2023-11-23T03:12:58Z | 2024-03-13T19:55:50Z | https://github.com/langchain-ai/langchain/issues/13760 | 2,007,415,465 | 13,760 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi there, I am trying to use the MultiQueryRetriever on a Pinecone vector DB with multiple filters. I specifically need to use an OR operator.
For example, I would like to retrieve documents whose metadata field "category" is 'value1' OR 'value2' OR 'value3'.
This is my current implementation:
```
import pinecone
from langchain.llms import Cohere
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.vectorstores import Pinecone
pinecone.init(api_key=pine_api_key, environment=pinecone_env)
index_name = "index_name"
index = pinecone.Index(index_name)
vectorstore = Pinecone(index, embeddings, "text")
llm = Cohere(cohere_api_key=cohere_api_key)
retriever_from_llm = MultiQueryRetriever.from_llm(
retriever=vectorstore.as_retriever(search_kwargs={"k": 10,
'filter': {'user_id': '42',
'category': "c1"}}), llm=llm
)
```
At this point I am able to successfully filter by user_id and category when it is only one value, but it does not work when I add more values, such as
'category': ["c1", "c2", "c3"],
when retrieving documents as follows:
```
question = "what is foo?"
unique_docs = retriever_from_llm.get_relevant_documents(query=question)
```
Error:
```
ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'date': 'Thu, 23 Nov 2023 00:24:58 GMT', 'x-envoy-upstream-service-time': '0', 'content-length': '108', 'server': 'envoy'})
HTTP response body: {"code":3,"message":"illegal condition for field category, got ["c1","c2"]","details":[]}.
```
How can I add a filter that returns documents with category c1 OR c2 OR c3, using Pinecone and MultiQueryRetriever?
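A hedged sketch: Pinecone metadata filters support the `$in` operator, so the OR over several category values can be expressed in a single filter (the values are placeholders):
```python
retriever_from_llm = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(
        search_kwargs={
            "k": 10,
            "filter": {
                "user_id": "42",
                "category": {"$in": ["c1", "c2", "c3"]},
            },
        }
    ),
    llm=llm,
)
```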
### Suggestion:
_No response_ | Pinecone MultiqueryRetriver with multiple OR metadata filters | https://api.github.com/repos/langchain-ai/langchain/issues/13758/comments | 2 | 2023-11-23T00:55:10Z | 2024-03-13T19:56:26Z | https://github.com/langchain-ai/langchain/issues/13758 | 2,007,320,637 | 13,758 |
[
"hwchase17",
"langchain"
]
| ### Feature request
When I receive empty text from the Azure content filter, I need to temporarily switch to OpenAI and try again to get an answer.
How can I achieve this functionality?
### Motivation
Error handling for prod. High availability
### Your contribution
pr | How to use a different OpenAI endpoint when Azure content filter applys ? | https://api.github.com/repos/langchain-ai/langchain/issues/13757/comments | 3 | 2023-11-23T00:47:25Z | 2024-03-13T19:56:29Z | https://github.com/langchain-ai/langchain/issues/13757 | 2,007,316,049 | 13,757 |
[
"hwchase17",
"langchain"
]
| ```python
from langchain.memory import RedisChatMessageHistory
history = RedisChatMessageHistory("foo")
history.add_user_message("hi!")
history.add_ai_message("whats up?")
```
{**<ins>"type": "human"**</ins>, "data": {"content": "hi!", "additional_kwargs": {}, <ins>**"type": "human"**</ins>, "example": false}}
I suspect it's a bug in the serialization or does it serve any purpose being in that structure with a duplicate k/v?
https://github.com/langchain-ai/langchain/blob/163bf165ed2d6ae453aad1bca4ed56814d81bf5b/libs/core/langchain_core/messages/base.py#L113C1-L115C1 | MessageHistory type field serialized twice twice | https://api.github.com/repos/langchain-ai/langchain/issues/13755/comments | 1 | 2023-11-22T23:55:16Z | 2024-02-28T16:06:45Z | https://github.com/langchain-ai/langchain/issues/13755 | 2,007,263,917 | 13,755 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.9.6, Langchain 0.0.334
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm experimenting with some simple code to load a local repository to test CodeLlama, but the "exclude" in GenericLoader.from_filesystem doesn't seem to be working:
```python
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import LanguageParser
from langchain.text_splitter import Language
repo_path = "../../my/laravel/project/"
# Load
loader = GenericLoader.from_filesystem(
repo_path,
glob="**/*",
suffixes=[".php"],
parser=LanguageParser(
parser_threshold=2000,
),
exclude=["../../my/laravel/project/vendor/", "../../my/laravel/project/node_modules/", "../../my/laravel/project/storage/", "../../my/laravel/project/public/", "../../my/laravel/project/tests/", "../../my/laravel/project/resources/"]
)
documents = loader.load()
len(documents)
```
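For reference, a hedged sketch: the `exclude` argument is documented as taking glob patterns rather than absolute directory paths, so directory subtrees are usually excluded like this (the patterns are illustrative):
```python
loader = GenericLoader.from_filesystem(
    repo_path,
    glob="**/*",
    suffixes=[".php"],
    exclude=["**/vendor/**", "**/node_modules/**", "**/storage/**",
             "**/public/**", "**/tests/**", "**/resources/**"],
    parser=LanguageParser(parser_threshold=2000),
)
documents = loader.load()
```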
Am I missing something obvious? I cannot find any example...with or without the exclude, the length of docs is the same (and if I just print "documents" I see files in the folders I excluded).
### Expected behavior
I would expect that listing subpaths from the main path then these would be excluded. | GenericLoader.from_filesystem "exclude" not working | https://api.github.com/repos/langchain-ai/langchain/issues/13751/comments | 4 | 2023-11-22T23:08:00Z | 2024-03-13T19:55:44Z | https://github.com/langchain-ai/langchain/issues/13751 | 2,007,226,855 | 13,751 |
[
"hwchase17",
"langchain"
]
| ### System Info
**Operating system/architecture:**
Linux/X86_64
**CPU | Memory**
8 vCPU | 16 GB
**Platform version**
1.4.0
**Launch type**
FARGATE
**Project libraries:**
snowflake-sqlalchemy==1.4.6
python-dotenv==0.21.0
openai==0.27.2
langchain==0.0.336
pandas==2.0.2
boto3==1.26.144
colorama==0.4.6
fastapi==0.100.1
pydantic~=1.10.8
pytest~=7.1.2
uvicorn~=0.17.6
cassio==0.1.3
sentry-sdk==1.29.2
langsmith==0.0.66
numpy==1.24.3
SQLAlchemy==1.4.46
psycopg2-binary==2.9.7
tiktoken==0.4.0
httpx==0.24.1
unidecode==1.3.7
transformers==4.28.0
transformers[torch]
tensorflow==2.12.1
keras==2.12.0
**Python version of the project:**
python:3.10-slim-bullseye
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
It's quite challenging to replicate the error as it appears to be rather random. After a few requests, FastAPI stops responding following the OPTIONS query of the endpoint. This issue seems to be attributable to one of the libraries in use. I observed this error after a code refactoring in the project, moving from legacy chains to chains with LCEL. Since this refactoring, the ECS system has exhibited peculiar behavior. Extensive debugging has been conducted throughout the codebase, yet there are no indications of the error's origin. It's worth noting that everything functions flawlessly in local emulation, with no occurrence of any unusual errors. The problem arises when the code is deployed to the ECS Fargate instance, and I want to emphasize that this issue did not exist before the aforementioned changes were made.
<img width="996" alt="Captura de pantalla 2023-11-22 a la(s) 5 51 24 p m" src="https://github.com/langchain-ai/langchain/assets/122487744/15c12aeb-6016-468d-81de-b31a4204ca78">
### Expected behavior
I need someone to help me with new ways to debug this extremely rare bug, to give me ideas on what to do, what to show from my machine, what can be done, or if it's some incompatibility between the libraries. I haven't been able to pinpoint the specific point where the program stops, and it's proving to be very challenging. | Random Application Lockdown on ECS Fargate with Langchain and FastAPI | https://api.github.com/repos/langchain-ai/langchain/issues/13750/comments | 5 | 2023-11-22T22:56:07Z | 2024-05-14T16:07:15Z | https://github.com/langchain-ai/langchain/issues/13750 | 2,007,218,514 | 13,750 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.339
Python version: 3.11.5
Running on Ubuntu 22.04.3 LTS via WSL 2
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Running the steps described in the guide for RecursiveUrlLoader: https://python.langchain.com/docs/integrations/document_loaders/recursive_url
2. Loading some content with UTF-8 encoding, the Python docs for example: https://docs.python.org/3/
Exact code used:
```python
from pprint import pprint
from langchain.document_loaders import RecursiveUrlLoader
from bs4 import BeautifulSoup
def load_python_docs():
url = "https://docs.python.org/3/"
loader = RecursiveUrlLoader(
url=url, max_depth=2, extractor=lambda x: BeautifulSoup(x, "html.parser").text
)
return loader.load()
pprint([doc.metadata for doc in load_python_docs()])
```
3. If you print the loaded documents, you should be able to see this kind of encoding issue:
```
{'description': 'Editor, Adam Turner,. This article explains the new features '
'in Python 3.12, compared to 3.11. Python 3.12 was released '
'on October 2, 2023. For full details, see the changelog. '
'Summary â\x80\x93 Release hi...',
'language': None,
'source': 'https://docs.python.org/3/whatsnew/3.12.html',
'title': 'Whatâ\x80\x99s New In Python 3.12 — Python 3.12.0 documentation'}
```
### Expected behavior
Should load content correctly, using the right encoding to parse the document. I suppose the issue is due to the fact that the `_get_child_links_recursive` method is calling `requests.get` and not specifying the encoding for the response. A quick fix that worked for me was to include the following line, just after the GET request: `response.encoding = response.apparent_encoding` | UTF-8 content is not loaded correctly with RecursiveUrlLoader | https://api.github.com/repos/langchain-ai/langchain/issues/13749/comments | 1 | 2023-11-22T22:47:26Z | 2024-02-28T16:06:55Z | https://github.com/langchain-ai/langchain/issues/13749 | 2,007,212,254 | 13,749 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.339
Laangserve version: 0.0.30
Langchain-cli version: 0.0.19
Running on apple silicone: MacBook Pro M3 max using a zsh shell.
### Who can help?
@erick
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [x] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Running the installation steps in the guide with
`pip3 install -U langchain-cli`
And then trying to run `langchain app` just results in zsh responding with `zsh: command not found: langchain`.
### Expected behavior
Running the installation steps in the guide with
`pip3 install -U langchain-cli`
Should allow me to run
`langchain app new my-app` | Langchain-cli Does not install a usable binary. | https://api.github.com/repos/langchain-ai/langchain/issues/13743/comments | 6 | 2023-11-22T19:54:42Z | 2023-12-04T02:56:26Z | https://github.com/langchain-ai/langchain/issues/13743 | 2,007,028,867 | 13,743 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Some langchain objects are used but not imported in [Types of MessagePromptTemplate](https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/msg_prompt_templates) documentation.
Specifically, it lacks the following:
```python
from langchain.prompts import HumanMessagePromptTemplate
from langchain.prompts import ChatPromptTemplate
```
and
```python
from langchain.schema.messages import HumanMessage, AIMessage
```
### Idea or request for content:
_No response_ | DOC: missing imports in "Types of MessagePromptTemplate" | https://api.github.com/repos/langchain-ai/langchain/issues/13736/comments | 1 | 2023-11-22T18:08:15Z | 2024-02-28T16:07:00Z | https://github.com/langchain-ai/langchain/issues/13736 | 2,006,867,499 | 13,736 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
**Description:**
Hi, sir. I am currently developing an Agent for scientific tasks using Langchain. The first tool I intend to integrate is a function for generating a mesh, implemented through the external API `dolfinx`. However, I've encountered a compatibility issue between my Langchain packages and the `dolfinx` package. This seems to be a broader problem related to designing an Agent that leverages diverse external APIs.
The concern here is that as I continue to expand the Agent's capabilities with more tools utilizing various external APIs, I might face additional conflicts. This issue could be a common challenge in the development process of creating Agents with multiple, diverse external integrations. Here's a snippet of the code for reference:
```python
from langchain.agents import tool  # assumed import for the @tool decorator used below
from mpi4py import MPI
from dolfinx import mesh, io
@tool("create_mesh")
def create_mesh_tool(text: str):
    """Returns a dolfinx.mesh.Mesh object, useful for generating meshes for PDEs.
    The input should always be a string."""
    domain = mesh.create_unit_square(MPI.COMM_WORLD, 8, 8, mesh.CellType.quadrilateral)
    return domain
tools = [make_random_num, create_mesh_tool]
```
I am seeking advice or solutions on how to effectively manage and resolve these package conflicts within the Langchain framework, particularly when integrating tools that depend on external APIs like dolfinx.
### Suggestion:
_No response_ | Issue: <Handling Python Package Conflicts in Langchain When Integrating External APIs for Agent Tools> | https://api.github.com/repos/langchain-ai/langchain/issues/13734/comments | 1 | 2023-11-22T17:59:25Z | 2024-02-28T16:07:06Z | https://github.com/langchain-ai/langchain/issues/13734 | 2,006,847,641 | 13,734 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.339
windows11
python==3.10
### Who can help?
Simple chain-with-tool code, but the full input is not passed to the tool because of a '\n'.
```python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
# PythonREPLTool lives in langchain_experimental in recent releases
# (older versions: from langchain.tools.python.tool import PythonREPLTool)
from langchain_experimental.tools import PythonREPLTool

model = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, verbose=False, streaming=True)
custom_tool_list = [PythonREPLTool()]  # CustomPythonExec() omitted here
memory = ConversationBufferMemory(memory_key="chat_history")
agent_executor = initialize_agent(
    custom_tool_list,
    llm=model,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    max_iterations=1,
    memory=memory,
    early_stopping_method="generate",
    agent_kwargs={"prefix": custom_prefix},  # custom_prefix defined elsewhere
    handle_parsing_errors="Check your output and make sure it conforms",
)
agent_executor.run("some prompt")
```
logs:
```
[chain/end] [1:chain:AgentExecutor > 2:chain:LLMChain] [4.94s] Exiting Chain run with output:
{
"text": "Thought: Do I need to use a tool? Yes\nAction: Python_REPL\nAction Input: import pandas as pd\ndf = pd.read_csv('./statics/20231123_000614_test.csv')\nline_count = len(df
)\nline_count"
}
[tool/start] [1:chain:AgentExecutor > 4:tool:Python_REPL] Entering Tool run with input:
"import pandas as pd"
2023-11-23 00:06:19 - Python REPL can execute arbitrary code. Use with caution.
[tool/end] [1:chain:AgentExecutor > 4:tool:Python_REPL] [4ms] Exiting Tool run with output:
""
```
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
[chain/end] [1:chain:AgentExecutor > 2:chain:LLMChain] [4.94s] Exiting Chain run with output:
{
"text": "Thought: Do I need to use a tool? Yes\nAction: Python_REPL\nAction Input: import pandas as pd\ndf = pd.read_csv('./statics/20231123_000614_test.csv')\nline_count = len(df
)\nline_count"
}
[tool/start] [1:chain:AgentExecutor > 4:tool:Python_REPL] Entering Tool run with input:
"import pandas as pd"
2023-11-23 00:06:19 - Python REPL can execute arbitrary code. Use with caution.
[tool/end] [1:chain:AgentExecutor > 4:tool:Python_REPL] [4ms] Exiting Tool run with output:
""
```
### Expected behavior
pass full input string to tool | chain input not pass into Python_REPL tool input by '\n' | https://api.github.com/repos/langchain-ai/langchain/issues/13730/comments | 2 | 2023-11-22T16:15:52Z | 2024-03-13T19:57:11Z | https://github.com/langchain-ai/langchain/issues/13730 | 2,006,686,650 | 13,730 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I followed the lecture by Harrison on LangChain on deeplearning.ai, and when I follow it I get the error `Got invalid JSON object. Error: Extra data:` when using `SelfQueryRetriever` to `get_relevant_documents`.
Using the following code:
```python
document_content_description = "Lecture notes"
retriever = SelfQueryRetriever.from_llm(
llm=model,
vectorstore=vectordb,
metadata_field_info=metadata_field_info,
document_contents=document_content_description,
verbose=True,
handle_parsing_errors=True,
)
docs = retriever.get_relevant_documents(question)
```
### Suggestion:
_No response_ | Issue: Error `Got invalid JSON object. Error: Extra data:`when using `SelfQueryRetriever` | https://api.github.com/repos/langchain-ai/langchain/issues/13728/comments | 3 | 2023-11-22T16:07:12Z | 2023-11-24T17:09:20Z | https://github.com/langchain-ai/langchain/issues/13728 | 2,006,669,669 | 13,728 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain Version: 0.0.339
Platform: Windows 10
Python Version: 3.10.10
### Who can help?
@agol
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
I've implemented this `RunnableBranch` example: https://python.langchain.com/docs/expression_language/how_to/routing
using this `MyCallbackHandler` example: https://python.langchain.com/docs/modules/agents/how_to/streaming_stdout_final_only
For streaming I've used this `streaming` example: https://python.langchain.com/docs/modules/model_io/chat/streaming
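For reference, here is a condensed sketch of that combination (the prompts and the branch condition are illustrative, not my exact code):
```python
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnableBranch

class MyCallbackHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(f"#{token}#")  # expected to fire once per streamed token

llm = ChatOpenAI(streaming=True, callbacks=[MyCallbackHandler()])

song_chain = ChatPromptTemplate.from_template("Write a song about {topic}") | llm | StrOutputParser()
general_chain = ChatPromptTemplate.from_template("Answer briefly: {topic}") | llm | StrOutputParser()

branch = RunnableBranch(
    (lambda x: "song" in x["topic"].lower(), song_chain),
    general_chain,
)
branch.invoke({"topic": "a song about sparkling water"})
```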
the result becomes something like this:
```
##
#Other#
##
(Verse 1)... <the output of the question as a whole, not separated by tokens, generated all at once>
```
### Expected behavior
I expect every single token to be surrounded by ##, like this:
```
#token1#
#token2#
...
```
| RunnableBranch doesnt stream correctly | https://api.github.com/repos/langchain-ai/langchain/issues/13723/comments | 7 | 2023-11-22T14:57:38Z | 2024-03-23T16:06:11Z | https://github.com/langchain-ai/langchain/issues/13723 | 2,006,525,842 | 13,723 |
[
"hwchase17",
"langchain"
]
| ### Feature request
It would be great if I could add a callback to get the total token spend for LLMs other than OpenAI.
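For example, something along these lines (a rough sketch of a provider-agnostic handler; it assumes the integration reports usage in `llm_output`, which not every model does today):
```python
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import LLMResult

class TokenUsageHandler(BaseCallbackHandler):
    def __init__(self) -> None:
        self.total_tokens = 0

    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        usage = (response.llm_output or {}).get("token_usage", {})
        self.total_tokens += usage.get("total_tokens", 0)

handler = TokenUsageHandler()
# llm = SomeNonOpenAIModel(callbacks=[handler])  # hypothetical model class
```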
### Motivation
Since there is a callback for OpenAI, why don't we have one for other models too, just to see how many tokens are consumed?
### Your contribution
Happy to contribute to the issue with PR later | get token count for other LLM Models | https://api.github.com/repos/langchain-ai/langchain/issues/13719/comments | 5 | 2023-11-22T12:50:02Z | 2024-05-13T16:08:52Z | https://github.com/langchain-ai/langchain/issues/13719 | 2,006,274,368 | 13,719 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi I am running the create_pandas_dataframe_agent agent, and the issue I'm facing is with the call back FinalStreamingStdOutCallbackHandler for some reason when I only have one civ it works fine as soon as I pass through multiple then it doesn't stream the data it just prints it all in one go.
here is my code:
```python
import pandas as pd

from langchain.agents import AgentType
from langchain.callbacks.streaming_stdout_final_only import FinalStreamingStdOutCallbackHandler
from langchain.llms import OpenAI
# the pandas agent lives in langchain_experimental in recent releases
from langchain_experimental.agents import create_pandas_dataframe_agent

def _get_llm(callback):
    return OpenAI(
        callbacks=[callback],
        streaming=True,
        temperature=0,
    )

file_path = ['people-100.csv', 'organizations-100.csv']
dataframes = [pd.read_csv(path) for path in file_path]

agent = create_pandas_dataframe_agent(
    llm=_get_llm(FinalStreamingStdOutCallbackHandler()),
    df=dataframes,
    # agent_executor_kwargs={"memory": memory},
    verbose=False,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)

print(agent.run('go in detail on the explanation of the data and give me a statistical list as well'))
```
### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/13717/comments | 3 | 2023-11-22T11:54:27Z | 2024-03-13T19:55:38Z | https://github.com/langchain-ai/langchain/issues/13717 | 2,006,180,319 | 13,717 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm using
```python
from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI
pd_agent = create_pandas_dataframe_agent(OpenAI(temperature=0),
df,
verbose=True)
pd_agent.run('some question on dataframe')
```
```python
english_tools = [
Tool(name="SomeNAME_1",
func=lambda q: app.finance_chain.run(q),
description=" Some app related description ",
return_direct=True,
coroutine=lambda q: app.finance_chain.arun(q),
),
Tool(name="SomeNAME_2",
func=lambda q: app.rqa(q),
description=" Some app related description ",
coroutine=lambda q: app.rqa_english.arun(q),
return_direct=True
),
Tool.from_function(
name="SomeNAME_3",
func=lambda q: app.pd_agent(q),
description=" Some app related description",
coroutine=lambda q: app.pd_agent.arun(q),
)
]
```
When we call this agent separately it gives me a good result, but when I use this agent as a tool along with other tools it does not give consistent results; the answers are mostly wrong.
### Suggestion:
_No response_ | Agent as tool gives wrong result ( pandas agent) | https://api.github.com/repos/langchain-ai/langchain/issues/13711/comments | 4 | 2023-11-22T08:56:25Z | 2024-03-24T11:39:43Z | https://github.com/langchain-ai/langchain/issues/13711 | 2,005,863,753 | 13,711 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
Traceback (most recent call last):
  File "/Users/xxx/anaconda3/envs/kaoyan-chat/lib/python3.10/site-packages/langchain/output_parsers/xml.py", line 2, in <module>
    import xml.etree.ElementTree as ET
  File "/Users/xxx/anaconda3/envs/kaoyan-chat/lib/python3.10/site-packages/langchain/output_parsers/xml.py", line 2, in <module>
    import xml.etree.ElementTree as ET
ModuleNotFoundError: No module named 'xml.etree'; 'xml' is not a package
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.agents import load_tools
from langchain.chat_models import AzureChatOpenAI

llm = AzureChatOpenAI(model_name="gpt-35-turbo",
                      deployment_name="gpt-35-turbo", temperature=0.3)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
```
### Expected behavior
I run this code and expect it to run normally, but it throws the exception that 'xml' is not a package. I think this is because the Python file is named xml.py, which conflicts with `import xml.etree.ElementTree as ET`.
| ModuleNotFoundError: No module named 'xml.etree'; 'xml' is not a package | https://api.github.com/repos/langchain-ai/langchain/issues/13709/comments | 6 | 2023-11-22T08:36:24Z | 2024-02-28T16:07:20Z | https://github.com/langchain-ai/langchain/issues/13709 | 2,005,831,934 | 13,709 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Given that the seed parameter is already in beta, the system_fingerprint would also be highly useful for tracking deterministic responses. I suggest adding `system_fingerprint` to the data exposed through the callbacks. https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html#langchain.callbacks.openai_info.OpenAICallbackHandler
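A rough sketch of what I have in mind (it assumes the chat model surfaces the fingerprint in `llm_output`, which would be part of the change):
```python
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import LLMResult

class FingerprintHandler(BaseCallbackHandler):
    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        fingerprint = (response.llm_output or {}).get("system_fingerprint")
        print("system_fingerprint:", fingerprint)
```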
### Motivation
Given that the seed is already in beta phase, the system_fingerprint would also be highly useful
### Your contribution
Proposed suggestion | Add system_fingerprint in OpenAI callbacks | https://api.github.com/repos/langchain-ai/langchain/issues/13707/comments | 4 | 2023-11-22T08:24:18Z | 2024-03-13T19:58:46Z | https://github.com/langchain-ai/langchain/issues/13707 | 2,005,812,732 | 13,707 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
This question may be stupid but I find nothing to refer to.
I'm using langchain.chains.LLMChain in a Flask API, but I found that the chain is reloaded and reconstructed on every API call. So the first question I want to ask is how to keep my LLMChain around so that I can use it directly in the next call.
I tried to use the Flask session, but I get `Object of type LLMChain is not JSON serializable`.
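One pattern I'm considering (a minimal sketch, assuming a single-process Flask app) is to build the chain once at module level and reuse it across requests instead of storing it in the session:
```python
from flask import Flask, jsonify, request
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

app = Flask(__name__)

# constructed once when the module is imported, then reused by every request
chain = LLMChain(
    llm=OpenAI(temperature=0),
    prompt=PromptTemplate.from_template("Answer the question: {question}"),
)

@app.route("/ask", methods=["POST"])
def ask():
    return jsonify({"answer": chain.run(question=request.json["question"])})
```
Is this the recommended approach, or is there a supported way to serialize the chain itself?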
Thanks.
### Suggestion:
_No response_ | Issue: how to remain the chain in different api calls? | https://api.github.com/repos/langchain-ai/langchain/issues/13697/comments | 12 | 2023-11-22T03:33:38Z | 2024-04-19T10:14:36Z | https://github.com/langchain-ai/langchain/issues/13697 | 2,005,499,791 | 13,697 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
There is a lack of comprehensive documentation on how to use load_qa_chain with memory.
I've found this: https://cheatsheet.md/langchain-tutorials/load-qa-chain-langchain.en but it does not cover other memory classes, for example ConversationBufferWindowMemory.
I have the following memory:
```
memory = {ConversationBufferWindowMemory} chat_memory=ChatMessageHistory(messages=[HumanMessage(content='What is the main material? Keept he answer short'), AIMessage(content='The main material is R 2 Fe 14 B.'), HumanMessage(content='What are its properties? '), AIMessage(content='The properties
Config = {type} <class 'langchain.schema.memory.BaseMemory.Config'>
ai_prefix = {str} 'AI'
buffer = {str} 'Human: What is the main material? Keept he answer short\nAI: The main material is R 2 Fe 14 B.\nHuman: What are its properties? \nAI: The properties of the material being discussed are not explicitly mentioned in the given context. However, some information
buffer_as_messages = {list: 4} [content='What is the main material? Keept he answer short', content='The main material is R 2 Fe 14 B.', content='What are its properties? ', content='The properties of the material being discussed are not explicitly mentioned in the given context. Howeve
buffer_as_str = {str} 'Human: What is the main material? Keept he answer short\nAI: The main material is R 2 Fe 14 B.\nHuman: What are its properties? \nAI: The properties of the material being discussed are not explicitly mentioned in the given context. However, some information
chat_memory = {ChatMessageHistory} messages=[HumanMessage(content='What is the main material? Keept he answer short'), AIMessage(content='The main material is R 2 Fe 14 B.'), HumanMessage(content='What are its properties? '), AIMessage(content='The properties of the material being discussed
human_prefix = {str} 'Human'
input_key = {NoneType} None
k = {int} 4
lc_attributes = {dict: 0} {}
lc_secrets = {dict: 0} {}
memory_key = {str} 'history'
memory_variables = {list: 1} ['history']
output_key = {NoneType} None
return_messages = {bool} False
```
And the messages are the following:
```
0 = {HumanMessage} content='What is the main material? Keept he answer short'
1 = {AIMessage} content='The main material is R 2 Fe 14 B.'
2 = {HumanMessage} content='What are its properties? '
3 = {AIMessage} content='The properties of the material being discussed are not explicitly mentioned in the given context. However, some information about the magnetic properties and microstructure of the material is provided. It is mentioned that the magnetic properties of the specimens were measured using a BH tracer or a vibrating sample magnetometer (VSM). The microstructure and crystal structure were analyzed using various techniques such as focused ion beam-scanning electron microscopy (FIB-SEM), electron probe microanalyses (EPMA), scanning transmission electron microscope-energy dispersive spectroscopy (STEM-EDS), and X-ray diffraction (XRD). The area ratio of each phase and the coverage ratio of the grains by the grain-boundary phase were calculated. Additionally, it is mentioned that the material contains rare-earth elements (Nd, Y, and Ce) and Fe, and that the coercivity can be improved by controlling the grain boundaries using certain phases (R6Fe13Ga and RFe2).'
```
And then I pass it to a load_qa_chain which is instantiated as follows (I'm summarising):
```
self.chain = load_qa_chain(llm, chain_type=qa_chain_type)
[...]
self.chain.run(input_documents=relevant_documents,
question=query,
memory=memory)
```
However, in the prompt there seems to be no memory information.
I would like to avoid having to rewrite the prompt with a custom one. Maybe the memory does not follow the required conventions (e.g. how the human/assistant messages should be characterized), which are not clear to me; I haven't found any documentation in that regard.
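For reference, the closest pattern I've pieced together from other examples (a sketch, and exactly the custom-prompt route I was hoping to avoid) passes the memory at construction time and exposes its keys in the prompt:
```python
from langchain.chains.question_answering import load_qa_chain
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts import PromptTemplate

template = """Use the context to answer the question.

{context}

{chat_history}
Human: {human_input}
Assistant:"""

prompt = PromptTemplate(
    input_variables=["context", "chat_history", "human_input"], template=template
)
memory = ConversationBufferWindowMemory(
    k=4, memory_key="chat_history", input_key="human_input"
)
chain = load_qa_chain(llm, chain_type="stuff", memory=memory, prompt=prompt)
chain({"input_documents": relevant_documents, "human_input": query})
```
If this is indeed the required convention, it would be great to have it documented; if memory can be attached without a custom prompt, even better.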
### Idea or request for content:
_No response_ | DOC: how to use load_qa_chain with memory | https://api.github.com/repos/langchain-ai/langchain/issues/13696/comments | 6 | 2023-11-22T03:29:40Z | 2023-11-22T06:30:33Z | https://github.com/langchain-ai/langchain/issues/13696 | 2,005,494,274 | 13,696 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Hey guys! Below is the code which I'm using to get the output:
```
import pandas as pd
from IPython.display import Markdown, display
from langchain.agents import create_csv_agent
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
import os
os.environ["OPENAI_API_KEY"] = ""
# Load the dataset
df = pd.read_csv('https://gist.githubusercontent.com/armgilles/194bcff35001e7eb53a2a8b441e8b2c6/raw/92200bc0a673d5ce2110aaad4544ed6c4010f687/pokemon.csv')
df.to_csv('pokemon.csv', index=False)
# Extract columns for the prompt template
columns = list(df.columns)
json_data = ', '.join(columns)
# Define the prompt template
prompt_template = f'''If the dataset has the following columns: {json_data}
Understand user questions with different column names and convert them to a JSON format. Here's an example:
Example 1:
User Question: Top 2 zones by Housing Loan in the West and South in the year 2019 excluding small loans and with Discount > 5 and Term Deposit between 10k and 15k
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Month",
"agg_columns": ["Housing Loan"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone"],
"filters": {{"Zone": ["West", "South"]}},
"not_in": {{"Loan Type": ["Small"]}},
"num_filter": {{
"gt": [
["Discount", 5],
["Term Deposit", 10000]
],
"lt": [
["Term Deposit", 15000]
]
}},
"percent": false,
"top": "2",
"bottom": null
}}
Example 2:
User Question: What is the Highest Loan Amount and Loan Outstanding by RM Name James in January 2020
{{
"start_date": "01-01-2020",
"end_date": "31-01-2020",
"time_stamp_col": "Due Date",
"agg_columns": ["Loan Amount", "Loan Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": [],
"filters": {{"RM Name": ["James"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 3:
User Question: Which RM Name with respect to Region has the Highest Interest Outstanding and Principal Outstanding in the year 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Interest Outstanding", "Principal Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["RM Name", "Region"],
"filters": {{}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 4:
User Question: Which Branch in North Carolina with respect to Cibil Score Bucket has the Highest Cibil Score in 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Cibil Score", "DPD Bucket"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Branch"],
"filters": {{"Region": ["North Carolina"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 5:
User Question: With respect to Zone, Region, Branch, RM Name what is the Highest Loan Amount, Loan Tenure, Loan Outstanding, EMI Pending, Principal Outstanding
{{
"start_date": null,
"end_date": null,
"time_stamp_col": null,
"agg_columns": ["Loan Amount", "Loan Tenure", "Loan Outstanding", "EMI Pending", "Principal Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone", "Region", "Branch", "RM Name"],
"filters": {{}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 6:
User Question: Top 2 zones by Housing Loan in the year 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Housing Loan"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone"],
"filters": {{"Product": ["Home Loan"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "2",
"bottom": null
}}
Our test dataset has the following columns: {json_data}
User Question (to be converted): {{{{user_question}}}}'''
prompt_template = prompt_template.replace("{", "{{").replace("}", "}}").replace("{{{{", "{").replace("}}}}", "}")
# Define user's question
user_question = "Which pokemon has the highest attack and which has lowest defense? I need the output in the json format"
# Format the user's question within the template
formatted_question = prompt_template.format(user_question=user_question)
# Load the agent
agent = create_csv_agent(OpenAI(temperature=0), "pokemon.csv", verbose=True)
gpt4_agent = create_csv_agent(ChatOpenAI(temperature=0, model_name="gpt-4"), "pokemon.csv", verbose=True)
# Use the formatted question as the input to your agent
response = agent.run(formatted_question)
# Print the response
print(response)
```
Below is the output which I got:
```
> Entering new chain...
Thought: I need to find the highest attack and lowest defense values in the dataset
Action: python_repl_ast
Action Input: df[['Name', 'Attack', 'Defense']].sort_values(by=['Attack', 'Defense'], ascending=[False, True]).head(1).to_json()
Observation: {"Name":{"163":"MewtwoMega Mewtwo X"},"Attack":{"163":190},"Defense":{"163":100}}
Thought: I now know the final answer
Final Answer: The pokemon with the highest attack is MewtwoMega Mewtwo X with an attack of 190 and the pokemon with the lowest defense is MewtwoMega Mewtwo X with a defense of 100.
> Finished chain.
The pokemon with the highest attack is MewtwoMega Mewtwo X with an attack of 190 and the pokemon with the lowest defense is MewtwoMega Mewtwo X with a defense of 100.
```
But the output I need is in the JSON format which I've mentioned in the prompt_template. Can anyone assist me?
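Side note: since the question-to-JSON translation doesn't actually need the dataframe, one thing I may try (a sketch, not necessarily the right approach) is to skip the agent and send the formatted prompt straight to the LLM so its JSON answer comes back untouched:
```python
llm = OpenAI(temperature=0)
json_answer = llm(formatted_question)  # raw completion, i.e. the JSON block itself
print(json_answer)
```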
### Idea or request for content:
_No response_ | While utlizing the create_csv_agent function, the output is not returned in the JSON format | https://api.github.com/repos/langchain-ai/langchain/issues/13686/comments | 1 | 2023-11-21T21:51:45Z | 2024-02-14T03:35:23Z | https://github.com/langchain-ai/langchain/issues/13686 | 2,005,201,744 | 13,686 |
[
"hwchase17",
"langchain"
]
| ### System Info
It appears that OpenAI's SDK v1.0.0 update introduced some needed migrations.
Running the Langchain OpenAIModerationChain with OpenAI SDK >= v1.0.0 provides the following error:
```
You tried to access openai.Moderation, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
```
After briefly reading the migration steps I believe the suggested migration is from `openai.Moderation.create()` -> `client.moderations.create()`.
I believe the `validate_environment` of `OpenAIModerationChain` will want updating from `values["client"] = openai.Moderation` to the recommended `client.moderations` syntax. (https://api.python.langchain.com/en/latest/_modules/langchain/chains/moderation.html#OpenAIModerationChain)
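For reference, a sketch of the new-style call (just the shape of the v1 API, not the actual patch):
```python
from openai import OpenAI

client = OpenAI()
result = client.moderations.create(input="some text to check")
print(result.results[0].flagged)
```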
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
With OpenAI SDK >= v1.0.0 try to use `OpenAIModerationChain` to moderate a piece of content.
Error appears:
```
You tried to access openai.Moderation, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
```
### Expected behavior
When using `OpenAIModerationChain` with OpenAI SDK >= v1.0.0 I expect the chain to properly moderate content and not fail with an error. | OpenAIModerationChain with OpenAI SDK >= v1.0.0 Broken | https://api.github.com/repos/langchain-ai/langchain/issues/13685/comments | 9 | 2023-11-21T21:45:08Z | 2024-05-10T22:20:32Z | https://github.com/langchain-ai/langchain/issues/13685 | 2,005,192,238 | 13,685 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version: 0.0.339
python: 3.9.17
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When creating a Bedrock model without `region_name`
```py
llm_bedrock = Bedrock(model_id='anthropic.claude-instant-v1')
```
It errors out
```py
ValidationError: 1 validation error for Bedrock
__root__
Could not load credentials to authenticate with AWS client. Please check that credentials in the specified profile name are valid. (type=value_error)
```
This works
```py
llm_bedrock = Bedrock(model_id='anthropic.claude-instant-v1', region_name='us-west-2')
```
The problem is from https://github.com/langchain-ai/langchain/blob/bfb980b96800020d90c9362aaad40d8817636711/libs/langchain/langchain/llms/bedrock.py#L203-L207
where `get_from_dict_or_env` raises an exception if region is neither specified nor in env and default is not set.
To fix it, change default from `None` to `session.region_name`.
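i.e. something along these lines inside `validate_environment` (a sketch of the idea, not a tested patch):
```python
import boto3
from langchain.utils import get_from_dict_or_env

session = boto3.Session()  # picks up the default profile / environment configuration
values["region_name"] = get_from_dict_or_env(
    values,
    "region_name",
    "AWS_DEFAULT_REGION",
    default=session.region_name,
)
```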
### Expected behavior
No error is raised. | [Issue] Error with Bedrock when AWS region is not specified or not in environment variable | https://api.github.com/repos/langchain-ai/langchain/issues/13683/comments | 2 | 2023-11-21T21:28:42Z | 2023-11-30T03:55:47Z | https://github.com/langchain-ai/langchain/issues/13683 | 2,005,167,112 | 13,683 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Hey guys! Below is the code which I'm working on:
```
import pandas as pd
from IPython.display import Markdown, display
from langchain.agents import create_csv_agent
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
import os
os.environ["OPENAI_API_KEY"] = ""
df = pd.read_csv('https://gist.githubusercontent.com/armgilles/194bcff35001e7eb53a2a8b441e8b2c6/raw/92200bc0a673d5ce2110aaad4544ed6c4010f687/pokemon.csv')
df.to_csv('pokemon.csv', index=False)
# Load the agent
agent = create_csv_agent(OpenAI(temperature=0), "pokemon.csv", verbose=True)
gpt4_agent = create_csv_agent(
ChatOpenAI(temperature=0, model_name="gpt-4"), "pokemon.csv", verbose=True
)
agent.run("Which pokemon has the highest attack and which has lowest defense?")
```
In the above code, you can see the query is just the plain question. How do I add the prompt template below as the question?
```
# Now you can use the json_data in your prompt
prompt_template = f'''If the dataset has the following columns: {json_data}
Understand user questions with different column names and convert them to a JSON format. Here's an example:
Example 1:
User Question: Top 2 zones by Housing Loan in the West and South in the year 2019 excluding small loans and with Discount > 5 and Term Deposit between 10k and 15k
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Month",
"agg_columns": ["Housing Loan"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone"],
"filters": {{"Zone": ["West", "South"]}},
"not_in": {{"Loan Type": ["Small"]}},
"num_filter": {{
"gt": [
["Discount", 5],
["Term Deposit", 10000]
],
"lt": [
["Term Deposit", 15000]
]
}},
"percent": false,
"top": "2",
"bottom": null
}}
Example 2:
User Question: What is the Highest Loan Amount and Loan Outstanding by RM Name James in January 2020
{{
"start_date": "01-01-2020",
"end_date": "31-01-2020",
"time_stamp_col": "Due Date",
"agg_columns": ["Loan Amount", "Loan Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": [],
"filters": {{"RM Name": ["James"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 3:
User Question: Which RM Name with respect to Region has the Highest Interest Outstanding and Principal Outstanding in the year 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Interest Outstanding", "Principal Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["RM Name", "Region"],
"filters": {{}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 4:
User Question: Which Branch in North Carolina with respect to Cibil Score Bucket has the Highest Cibil Score in 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Cibil Score", "DPD Bucket"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Branch"],
"filters": {{"Region": ["North Carolina"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 5:
User Question: With respect to Zone, Region, Branch, RM Name what is the Highest Loan Amount, Loan Tenure, Loan Outstanding, EMI Pending, Principal Outstanding
{{
"start_date": null,
"end_date": null,
"time_stamp_col": null,
"agg_columns": ["Loan Amount", "Loan Tenure", "Loan Outstanding", "EMI Pending", "Principal Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone", "Region", "Branch", "RM Name"],
"filters": {{}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 6:
User Question: Top 2 zones by Housing Loan in the year 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Housing Loan"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone"],
"filters": {{"Product": ["Home Loan"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "2",
"bottom": null
}}
Our test dataset has the following columns: {json_data}
User Question (to be converted): {{user_question}}'''
user_question = "Which pokemon has the highest attack and which has lowest defense?"
```
Can anyone assist with this?
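In case it helps frame the question, the naive approach I'm considering (a sketch, untested) is to fill the placeholder myself and hand the resulting string to the agent:
```python
# the evaluated f-string leaves a literal "{user_question}" placeholder plus literal
# JSON braces, so a targeted replace is safer here than str.format()
formatted_question = prompt_template.replace("{user_question}", user_question)
response = agent.run(formatted_question)
print(response)
```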
### Idea or request for content:
_No response_ | How to add prompt template to create_csv_agent? | https://api.github.com/repos/langchain-ai/langchain/issues/13682/comments | 4 | 2023-11-21T21:26:28Z | 2024-02-14T03:35:23Z | https://github.com/langchain-ai/langchain/issues/13682 | 2,005,164,418 | 13,682 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I have run the below code, which will try to go through the CSV file and return the answer:
```
import pandas as pd
import json
from langchain.llms import OpenAI
from langchain.chains import LLMChain
openai_api_key = ''
# Read CSV file into a DataFrame
df = pd.read_csv('https://gist.githubusercontent.com/armgilles/194bcff35001e7eb53a2a8b441e8b2c6/raw/92200bc0a673d5ce2110aaad4544ed6c4010f687/pokemon.csv')
# del df['Due Date']
# del df['Closing Date']
# Create a dictionary of unique values per column
unique_values_per_column = {}
for column in df.select_dtypes(include=['object']).columns:
    # Convert ndarray to list
    unique_values_per_column[column] = df[column].unique().tolist()
# Convert the dictionary to JSON
json_data = json.dumps(unique_values_per_column, indent=4)
# Now you can use the json_data in your prompt
prompt_template = f'''If the dataset has the following columns: {json_data}
Understand user questions with different column names and convert them to a JSON format. Here's an example:
Example 1:
User Question: Top 2 zones by Housing Loan in the West and South in the year 2019 excluding small loans and with Discount > 5 and Term Deposit between 10k and 15k
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Month",
"agg_columns": ["Housing Loan"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone"],
"filters": {{"Zone": ["West", "South"]}},
"not_in": {{"Loan Type": ["Small"]}},
"num_filter": {{
"gt": [
["Discount", 5],
["Term Deposit", 10000]
],
"lt": [
["Term Deposit", 15000]
]
}},
"percent": false,
"top": "2",
"bottom": null
}}
Example 2:
User Question: What is the Highest Loan Amount and Loan Outstanding by RM Name James in January 2020
{{
"start_date": "01-01-2020",
"end_date": "31-01-2020",
"time_stamp_col": "Due Date",
"agg_columns": ["Loan Amount", "Loan Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": [],
"filters": {{"RM Name": ["James"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 3:
User Question: Which RM Name with respect to Region has the Highest Interest Outstanding and Principal Outstanding in the year 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Interest Outstanding", "Principal Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["RM Name", "Region"],
"filters": {{}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 4:
User Question: Which Branch in North Carolina with respect to Cibil Score Bucket has the Highest Cibil Score in 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Cibil Score", "DPD Bucket"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Branch"],
"filters": {{"Region": ["North Carolina"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 5:
User Question: With respect to Zone, Region, Branch, RM Name what is the Highest Loan Amount, Loan Tenure, Loan Outstanding, EMI Pending, Principal Outstanding
{{
"start_date": null,
"end_date": null,
"time_stamp_col": null,
"agg_columns": ["Loan Amount", "Loan Tenure", "Loan Outstanding", "EMI Pending", "Principal Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone", "Region", "Branch", "RM Name"],
"filters": {{}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 6:
User Question: Top 2 zones by Housing Loan in the year 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Housing Loan"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone"],
"filters": {{"Product": ["Home Loan"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "2",
"bottom": null
}}
Our test dataset has the following columns: {json_data}
User Question (to be converted): {{user_question}}'''
# Use langchain to process the prompt
llm = OpenAI(api_key=openai_api_key, temperature=0.9)
chain = LLMChain(llm=llm, prompt=prompt_template)
user_question = "What is the Highest Loan Amount and Loan Outstanding by RM Name James in January 2020"
response = chain.run(user_question)
print(response)
```
Below is the error it's returning
```
WARNING! api_key is not default parameter.
api_key was transferred to model_kwargs.
Please confirm that api_key is what you intended.
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_22224\1265770286.py in <cell line: 169>()
167 # Use langchain to process the prompt
168 llm = OpenAI(api_key=openai_api_key, temperature=0.9)
--> 169 chain = LLMChain(llm=llm, prompt=prompt_template)
170
171
~\anaconda3\lib\site-packages\langchain\load\serializable.py in __init__(self, **kwargs)
72
73 def __init__(self, **kwargs: Any) -> None:
---> 74 super().__init__(**kwargs)
75 self._lc_kwargs = kwargs
76
~\anaconda3\lib\site-packages\pydantic\main.cp310-win_amd64.pyd in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for LLMChain
prompt
value is not a valid dict (type=type_error.dict)
```
Can anyone explain what exactly this error means?
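For what it's worth, my reading of the traceback is that `LLMChain` expects a `PromptTemplate` object rather than a plain string, which is why Pydantic complains the prompt "value is not a valid dict". A sketch of the simplest workaround I can think of (untested):
```python
# the assembled string already contains literal '{' and '}' from the JSON examples,
# which would confuse PromptTemplate's f-string parsing, so fill the placeholder
# by hand and call the model directly instead of building an LLMChain
final_prompt = prompt_template.replace("{user_question}", user_question)
response = llm(final_prompt)
print(response)
```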
### Idea or request for content:
_No response_ | Returning "ValidationError: 1 validation error for LLMChain prompt value is not a valid dict (type=type_error.dict)" while trying to run the LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/13681/comments | 1 | 2023-11-21T21:15:55Z | 2024-02-14T03:35:23Z | https://github.com/langchain-ai/langchain/issues/13681 | 2,005,150,371 | 13,681 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using ConversationalRetrievalChain.from_llm(). When I print the final prompt that is sent to the LLM, I see the LLM is called 2 times in total. I will paste the prompt from both calls.
Below is the code I am using:
```python
qa_chain = ConversationalRetrievalChain.from_llm(llm=llm, chain_type=chain_type,
                                                 retriever=vector_database.as_retriever(),
                                                 return_source_documents=True)
qa_chain({'question': 'summarize this document', "chat_history": [('what is this', 'this is something'), ('who you are', 'i am nothing')]})
```
Below I can see the prompt sent to the LLM in the 2 calls.
**1st call to LLM:**
Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Chat History:
human: what is this
assistant: this is something
human: who you are
assistant: i am nothing
Follow Up Input: summarize this document
Standalone question:"""
**2nd call to LLM:**
Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
Title: Details
Title:Details
Tile:Details
Question: what is the summary of the document?
Helpful Answers:
In the above-mentioned 2 calls I can clearly see that whatever I passed in **chat_history** as question/answer tuple pairs is used to frame my question in the **1st call prompt**, and I can clearly see my original question **summarize this document** has been rewritten to **what is the summary of the document?**
My question, or what I am trying to understand, is: in place of the context I see Title:Details 3 times. What is this? Can you please explain? Is this the output of vectordb.as_retriever()? If yes, why am I seeing Title:Details 3 times? Please help me understand what **Title:Details** is in the **2nd prompt call**.
Let me know if you need additional context to understand my questions.
### Suggestion:
_No response_ | Issue: Queries on conversational retreival chain prompt | https://api.github.com/repos/langchain-ai/langchain/issues/13675/comments | 4 | 2023-11-21T20:28:07Z | 2024-02-27T16:05:54Z | https://github.com/langchain-ai/langchain/issues/13675 | 2,005,073,074 | 13,675 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version: `0.0.339`
Python version: `3.10`
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When serializing the `ChatPromptTemplate`, it saves it in a JSON/YAML format like this:
```
{'input_variables': ['question', 'context'],
'output_parser': None,
'partial_variables': {},
'messages': [{'prompt': {'input_variables': ['context', 'question'],
'output_parser': None,
'partial_variables': {},
'template': "...",
'template_format': 'f-string',
'validate_template': True,
'_type': 'prompt'},
'additional_kwargs': {}}],
'_type': 'chat'}
```
Note that the `_type` is "chat".
However, LangChain's `load_prompt_from_config` [does not recognize "chat" as the supported prompt type](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/prompts/loading.py#L19).
Here is a minimal example to reproduce the issue:
```python
from langchain.prompts import ChatPromptTemplate
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains.loading import load_chain
TEMPLATE = """Answer the question based on the context:
{context}
Question: {question}
Answer:
"""
chat_prompt = ChatPromptTemplate.from_template(TEMPLATE)
llm = OpenAI()
def get_retriever(persist_dir=None):
    vectorstore = FAISS.from_texts(
        ["harrison worked at kensho"], embedding=OpenAIEmbeddings()
    )
    return vectorstore.as_retriever()
chain_with_chat_prompt = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=get_retriever(),
chain_type_kwargs={"prompt": chat_prompt},
)
chain_with_prompt_saved_path = "./chain_with_prompt.yaml"
chain_with_chat_prompt.save(chain_with_prompt_saved_path)
loaded_chain = load_chain(chain_with_prompt_saved_path, retriever=get_retriever())
```
The above script failed with the error:
`ValueError: Loading chat prompt not supported`
### Expected behavior
Load a chain that contains `ChatPromptTemplate` should work. | Can not load chain with ChatPromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/13667/comments | 1 | 2023-11-21T17:40:19Z | 2023-11-27T16:39:51Z | https://github.com/langchain-ai/langchain/issues/13667 | 2,004,820,202 | 13,667 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am creating a tool that takes multiple input arguments (say input1, input2). I would like to validate their types and also make sure that the tool only receives input1 and input2. How do I validate this without breaking the LLM chain? I would instead like to return a warning to the LLM agent, something like, "The inputs passed were incorrect, please try again".
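For context, what I'm hoping for is something like raising a tool-level error that gets fed back to the agent instead of aborting the run. A sketch of what I've pieced together so far (not sure it's the intended pattern):
```python
from langchain.tools import StructuredTool
from langchain.tools.base import ToolException

def my_tool(input1: str, input2: int) -> str:
    if not isinstance(input2, int):
        raise ToolException("The inputs passed were incorrect, please try again")
    return f"processed {input1} with {input2}"

tool = StructuredTool.from_function(
    func=my_tool,
    name="my_tool",
    description="Processes input1 (str) and input2 (int).",
    # return the ToolException message to the agent instead of raising
    handle_tool_error=True,
)
```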
### Suggestion:
_No response_ | Issue: How to validate Tool input arguments without raising ValidationError | https://api.github.com/repos/langchain-ai/langchain/issues/13662/comments | 19 | 2023-11-21T16:23:53Z | 2024-07-29T16:06:18Z | https://github.com/langchain-ai/langchain/issues/13662 | 2,004,693,724 | 13,662 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain Version: 0.0.339
Python Version: 3.10.12
### Who can help?
@hwchase17
When trying to execute the example from the GraphSparqlQAChain docs a ValueError arises, in my understanding because it cannot parse the query correctly and execute it against the graph.
```
> Entering new GraphSparqlQAChain chain...
Llama.generate: prefix-match hit
The URI of Tim Berners-Lee's work homepage is <https://www.w3.org/People/Berners-Lee/>.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-13-432f483fbf1b>](https://localhost:8080/#) in <cell line: 2>()
1 llm_chain = GraphSparqlQAChain.from_llm(llm=llm, graph=graph, verbose=True)
----> 2 llm_chain.run("What is Tim Berners-Lee's work homepage?")
3 frames
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in run(self, callbacks, tags, metadata, *args, **kwargs)
503 if len(args) != 1:
504 raise ValueError("`run` supports only one positional argument.")
--> 505 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
506 _output_key
507 ]
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in __call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
308 except BaseException as e:
309 run_manager.on_chain_error(e)
--> 310 raise e
311 run_manager.on_chain_end(outputs)
312 final_outputs: Dict[str, Any] = self.prep_outputs(
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in __call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
302 try:
303 outputs = (
--> 304 self._call(inputs, run_manager=run_manager)
305 if new_arg_supported
306 else self._call(inputs)
[/usr/local/lib/python3.10/dist-packages/langchain/chains/graph_qa/sparql.py](https://localhost:8080/#) in _call(self, inputs, run_manager)
101 intent = "UPDATE"
102 else:
--> 103 raise ValueError(
104 "I am sorry, but this prompt seems to fit none of the currently "
105 "supported SPARQL query types, i.e., SELECT and UPDATE."
ValueError: I am sorry, but this prompt seems to fit none of the currently supported SPARQL query types, i.e., SELECT and UPDATE.
```
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behaviour:
1. Follow the example as per the documentation: https://python.langchain.com/docs/use_cases/graph/graph_sparql_qa
2. Instead of the OpenAI API, use a custom llamacpp deployment, i.e.
```
llm = LlamaCpp(
model_path=model_path,
n_gpu_layers=32,
n_batch=512,
callback_manager=callback_manager,
n_ctx=2048,
verbose=True, # Verbose is required to pass to the callback manager
)
```
3. Run the chain:
```
llm_chain = GraphSparqlQAChain.from_llm(llm=llm, graph=graph, verbose=True)
llm_chain.run("What is Tim Berners-Lee's work homepage?")
```
### Expected behavior
Apart from giving the right result (which it does), the SPARQL query should be shown and run against the graph; as per the documentation this should be the reply:
```
> Entering new GraphSparqlQAChain chain...
Identified intent:
SELECT
Generated SPARQL:
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?homepage
WHERE {
?person foaf:name "Tim Berners-Lee" .
?person foaf:workplaceHomepage ?homepage .
}
Full Context:
[]
> Finished chain.
"Tim Berners-Lee's work homepage is http://www.w3.org/People/Berners-Lee/."
``` | Error in executing SPARQL query with GraphSparqlQAChain and custom LLM (llamacpp) | https://api.github.com/repos/langchain-ai/langchain/issues/13656/comments | 5 | 2023-11-21T14:51:15Z | 2024-03-13T19:55:45Z | https://github.com/langchain-ai/langchain/issues/13656 | 2,004,456,391 | 13,656 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.326
Windows
Python 3.11.5
Nvidia P6-16Q GPU
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Loading up all available GPU layers appears to cause LlamaCppEmbeddings to fail.
- Edit: this was my original submission because I was using -1 for n_gpu_layers, but I did further testing and noted that it only fails if you use 34 or 35; for 33 it works.
For example this fails:
`llama = LlamaCppEmbeddings(model_path=r"mistral-7b-instruct-v0.1.Q5_K_M.gguf", n_ctx=512, n_gpu_layers=34)`
```
llm_load_tensors: ggml ctx size = 0.09 MB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: mem required = 86.05 MB (+ 128.00 MB per state)
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloading v cache to GPU
llm_load_tensors: offloaded 34/35 layers to GPU
llm_load_tensors: VRAM used: 4840 MB
```
With this error message:
```
Traceback (most recent call last):
File "c:\Users\x\Desktop\Embedding\EmbeddingScript.py", line 11, in <module>
query_result = llama.embed_query("Hello World")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\embeddings\llamacpp.py", line 125, in embed_query
embedding = self.client.embed(text)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_cpp\llama.py", line 860, in embed
return list(map(float, self.create_embedding(input)["data"][0]["embedding"]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_cpp\llama.py", line 824, in create_embedding
self.eval(tokens)
File "C:\Users\x\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_cpp\llama.py", line 491, in eval
return_code = llama_cpp.llama_eval(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_cpp\llama_cpp.py", line 808, in llama_eval
return _lib.llama_eval(ctx, tokens, n_tokens, n_past, n_threads)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: exception: access violation writing 0x0000000000000000
```
However, this does not fail:
`llama = LlamaCppEmbeddings(model_path=r"mistral-7b-instruct-v0.1.Q5_K_M.gguf", n_ctx=512, n_gpu_layers=33)`
```
lm_load_tensors: ggml ctx size = 0.09 MB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: mem required = 86.05 MB (+ 128.00 MB per state)
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/35 layers to GPU
llm_load_tensors: VRAM used: 4808 MB
```
It may not be a langchain problem; I am not knowledgeable enough to know, but I am not sure why you can't load up all the available layers of the GPU.
### Expected behavior
I would have thought that using all available layers of the GPU would not cause it to fail. | LlamaCppEmbeddings - fails if all available GPU layers are used. | https://api.github.com/repos/langchain-ai/langchain/issues/13655/comments | 3 | 2023-11-21T14:42:36Z | 2024-06-01T00:16:36Z | https://github.com/langchain-ai/langchain/issues/13655 | 2,004,439,132 | 13,655 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
To develop a scalable solution, I hoped to store the AgentExecutor object in centralized memory, such as Redis. However, due to its non-serializable nature, this has not been possible. Is there a workaround to achieve this?
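The workaround I'm evaluating (a sketch; it assumes only the conversation state needs to be shared, not the executor itself) is to persist the chat history in Redis and rebuild the lightweight executor on every request:
```python
from langchain.agents import AgentType, initialize_agent
from langchain.memory import ConversationBufferMemory
from langchain.memory.chat_message_histories import RedisChatMessageHistory

def get_agent_executor(session_id: str):
    # tools and llm are assumed to be defined elsewhere in the application
    history = RedisChatMessageHistory(session_id=session_id, url="redis://localhost:6379/0")
    memory = ConversationBufferMemory(memory_key="chat_history", chat_memory=history)
    # the executor itself is cheap to rebuild; only the chat state lives in Redis
    return initialize_agent(
        tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, memory=memory
    )
```
Is there a better-supported way to share the executor itself across workers?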
### Suggestion:
_No response_ | Issue: Unable to Serialize AgentExecutor Object for Centralized Storage | https://api.github.com/repos/langchain-ai/langchain/issues/13653/comments | 6 | 2023-11-21T14:03:40Z | 2024-08-07T22:38:49Z | https://github.com/langchain-ai/langchain/issues/13653 | 2,004,360,284 | 13,653 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```python
import chromadb

def delete_embeddings(file_path, persist_directory):
    chroma_db = chromadb.PersistentClient(path=persist_directory)
    collection = chroma_db.get_or_create_collection(name="langchain")
    ids = collection.get(where={"source": file_path})['ids']
    collection.delete(where={"source": file_path}, ids=ids)
    print("delete successfully")
```
Below is the error I am getting:
```
File "/home/aaditya/DBChat/CustomBot/user_projects/views.py", line 1028, in project_pages
delete_embeddings(names, persist_directory)
File "/home/aaditya/DBChat/CustomBot/accounts/common_langcain_qa.py", line 134, in delete_embeddings
chroma_db = chromadb.PersistentClient(path=persist_directory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aaditya/DBChat/chat/lib/python3.11/site-packages/chromadb/__init__.py", line 106, in PersistentClient
return Client(settings)
^^^^^^^^^^^^^^^^
File "/home/aaditya/DBChat/chat/lib/python3.11/site-packages/chromadb/__init__.py", line 143, in Client
api = system.instance(API)
^^^^^^^^^^^^^^^^^^^^
File "/home/aaditya/DBChat/chat/lib/python3.11/site-packages/chromadb/config.py", line 248, in instance
impl = type(self)
^^^^^^^^^^
File "/home/aaditya/DBChat/chat/lib/python3.11/site-packages/chromadb/api/segment.py", line 81, in __init__
self._sysdb = self.require(SysDB)
^^^^^^^^^^^^^^^^^^^
File "/home/aaditya/DBChat/chat/lib/python3.11/site-packages/chromadb/config.py", line 189, in require
inst = self._system.instance(type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aaditya/DBChat/chat/lib/python3.11/site-packages/chromadb/config.py", line 248, in instance
impl = type(self)
```
### Suggestion:
_No response_ | Issue: getting error in deletion of embeddings for a file | https://api.github.com/repos/langchain-ai/langchain/issues/13651/comments | 2 | 2023-11-21T13:25:26Z | 2024-02-27T16:06:04Z | https://github.com/langchain-ai/langchain/issues/13651 | 2,004,285,505 | 13,651 |
[
"hwchase17",
"langchain"
]
| ### System Info
Ubuntu 22
python 3.10.12
Langchain==0.0.339
duckduckgo-search==3.9.6
### Who can help?
@timonpalm who created [the PR](https://github.com/langchain-ai/langchain/pull/8292)
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps are straight from [this help page](https://python.langchain.com/docs/integrations/tools/ddg) which is [this notebook](https://github.com/langchain-ai/langchain/blob/611e1e0ca45343b86debc0d24db45703ee63643b/docs/docs/integrations/tools/ddg.ipynb#L69)
```
from langchain.tools import DuckDuckGoSearchResults
search = DuckDuckGoSearchResults(backend="news")
search.run("Obama")
```
This results in error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/USER/.local/lib/python3.10/site-packages/langchain/tools/base.py", line 365, in run
raise e
File "/home/USER/.local/lib/python3.10/site-packages/langchain/tools/base.py", line 337, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/home/USER/.local/lib/python3.10/site-packages/langchain/tools/ddg_search/tool.py", line 61, in _run
res = self.api_wrapper.results(query, self.num_results, backend=self.backend)
File "/home/USER/.local/lib/python3.10/site-packages/langchain/utilities/duckduckgo_search.py", line 108, in results
for i, res in enumerate(results, 1):
File "/home/USER/.local/lib/python3.10/site-packages/duckduckgo_search/duckduckgo_search.py", line 96, in text
for i, result in enumerate(results, start=1):
UnboundLocalError: local variable 'results' referenced before assignment
```
Looking at the source code of duckduckgo-search package I see this function:
```
def text(
    self,
    keywords: str,
    region: str = "wt-wt",
    safesearch: str = "moderate",
    timelimit: Optional[str] = None,
    backend: str = "api",
    max_results: Optional[int] = None,
) -> Iterator[Dict[str, Optional[str]]]:
    """DuckDuckGo text search generator. Query params: https://duckduckgo.com/params

    Args:
        keywords: keywords for query.
        region: wt-wt, us-en, uk-en, ru-ru, etc. Defaults to "wt-wt".
        safesearch: on, moderate, off. Defaults to "moderate".
        timelimit: d, w, m, y. Defaults to None.
        backend: api, html, lite. Defaults to api.
            api - collect data from https://duckduckgo.com,
            html - collect data from https://html.duckduckgo.com,
            lite - collect data from https://lite.duckduckgo.com.
        max_results: max number of results. If None, returns results only from the first response. Defaults to None.

    Yields:
        dict with search results.
    """
    if backend == "api":
        results = self._text_api(keywords, region, safesearch, timelimit, max_results)
    elif backend == "html":
        results = self._text_html(keywords, region, safesearch, timelimit, max_results)
    elif backend == "lite":
        results = self._text_lite(keywords, region, timelimit, max_results)

    for i, result in enumerate(results, start=1):
        yield result
        if max_results and i >= max_results:
            break
```
My assessment is that langchain should raise an exception if backend is not part of ["api", "html", "lite"] and the notebook should not mention this "news" feature anymore.
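i.e. something like this guard in the wrapper (a sketch of the idea, not the actual patch):
```python
ALLOWED_BACKENDS = {"api", "html", "lite"}

def _validate_backend(backend: str) -> str:
    if backend not in ALLOWED_BACKENDS:
        raise ValueError(
            f"Invalid backend {backend!r}; expected one of {sorted(ALLOWED_BACKENDS)}"
        )
    return backend
```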
### Expected behavior
Crashing when creating instance of ` DuckDuckGoSearchResults` with invalid `backend` argument and updating the documentation. | ddg error: backend "news" is obsolete so should raise an error and example should be updated | https://api.github.com/repos/langchain-ai/langchain/issues/13648/comments | 6 | 2023-11-21T12:10:21Z | 2024-03-16T15:41:57Z | https://github.com/langchain-ai/langchain/issues/13648 | 2,004,139,873 | 13,648 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm trying to develop an SQL agent chatbot that will generate and execute SQL queries based on the user's free-text questions.
So far I have developed a generic and basic agent with ChatGPT-4 and it worked pretty well (generated complex SQL and returned an answer).
When I tried to switch the LLM to Claude v2 (hosted on AWS Bedrock) things started to break.
sometimes the agent get stuck in a loop of the same input (for example):
```
> Entering new AgentExecutor chain...
Here is my thought process and actions to answer the question "how many alerts?":
Thought: I need to first see what tables are available in the database.
Action: sql_db_list_tables
Action Input:
Observation: alerts, detectors
Thought: Here is my response to the question "how many alerts?":
Thought: I need to first see what tables are available in the database.
Action: sql_db_list_tables
Action Input:
Observation: alerts, detectors
Thought: Here is my response to the question "how many alerts?":
Thought: I need to first see what tables are available in the database.
Action: sql_db_list_tables
Action Input:
Observation: alerts, detectors
Thought: Here is my response to the question "how many alerts?":
Thought: I need to first see what tables are available in the database.
Action: sql_db_list_tables
Action Input:
Observation: alerts, detectors
Thought: Here is my response to the question "how many alerts?":
```
Sometimes it gets to the point where it actually queries the DB and gets the result, but it doesn't stop the run and return the answer.
Here is how I configured the LLM and agent:
```
def aws_bedrock():
config = AwsBedrockConfig()
client = boto3.client(
'bedrock-runtime',
region_name=config.region_name,
aws_access_key_id=config.aws_access_key_id,
aws_secret_access_key=config.aws_secret_access_key,
)
model_kwargs = {
"prompt": "\n\nHuman: Hello world\n\nAssistant:",
"max_tokens_to_sample": 100000,
"temperature": 0,
"top_k": 10,
"top_p": 1,
"stop_sequences": [
"\n\nHuman:"
],
"anthropic_version": "bedrock-2023-05-31"
}
return Bedrock(
client=client,
model_id=config.model_id,
model_kwargs=model_kwargs
)
```
(I also tried changing the model arguments, with no success.)
```
memory = ConversationBufferMemory(memory_key="history", chat_memory=history)
toolkit = SQLDatabaseToolkit(db=db, llm=self.llm)
self.agent = create_sql_agent(
llm=self.llm,
toolkit=toolkit,
extra_tools=[TableCommentsTool(db=db)],
prefix=self.format_sql_prefix(filters),
suffix=HISTORY_SQL_SUFFIX,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
agent_executor_kwargs={"memory": memory},
verbose=True,
input_variables=["input", "history", "agent_scratchpad"]
)
```
### Suggestion:
_No response_ | SQL Agent not working well with Claude v2 model | https://api.github.com/repos/langchain-ai/langchain/issues/13647/comments | 3 | 2023-11-21T12:00:56Z | 2024-04-09T08:50:03Z | https://github.com/langchain-ai/langchain/issues/13647 | 2,004,123,075 | 13,647 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hello,
I'm not sure whether this is already supported; I couldn't find anything in the documentation.
Is there a way to make chains support streaming? It would be nice if we could get it working with something like `load_summarize_chain`.
Or something like this:
```
doc_prompt = PromptTemplate.from_template("{page_content}")
chain = (
{
"content": lambda docs: "\n\n".join(
format_document(doc, doc_prompt) for doc in docs
)
}
| PromptTemplate.from_template("Summarize the following content:\n\n{content}")
| OpenAI(
temperature=1,
model_name=llm_model,
stream=True,
)
| StrOutputParser()
)
docs = [
Document(
page_content=split,
metadata={"source": "https://en.wikipedia.org/wiki/Nuclear_power_in_space"},
)
for split in text.split()
]
for partial_result in chain.invoke(docs):
print(partial_result)
```
### Motivation
I have long documents to summarize, so I would like to show the partial results in streaming mode and not make the user wait so long to get the final result.
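For clarity, this is the consumption pattern I am hoping for — a sketch only, assuming the summarization pipeline exposes the generic runnable `.stream()` iterator and yields partial text chunks:
```python
# Hypothetical usage: print partial summary output as it is produced.
# `chain` and `docs` are the objects built in the snippet above.
for chunk in chain.stream(docs):
    print(chunk, end="", flush=True)
```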
### Your contribution
No. If it's not possible, I'm willing to implement the summarization chain from scratch and use the OpenAI lib. | Support for streaming in the langchain chains (eg., load_summarize_chain) | https://api.github.com/repos/langchain-ai/langchain/issues/13644/comments | 3 | 2023-11-21T11:20:41Z | 2024-03-13T19:55:39Z | https://github.com/langchain-ai/langchain/issues/13644 | 2,004,052,880 | 13,644 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Regarding:
https://github.com/langchain-ai/langchain/blob/master/templates/rag-gpt-crawler/README.md
I found 2 issues:
1. There is no such file as "server.py" (there is "rag_gpt_crawler.ipynb" instead).
2. The line linked in the docs,
`from rag_chroma import chain as rag_gpt_crawler`
is not the same as the one shown in the terminal:
`from rag_gpt_crawler import chain as rag_gpt_crawler_chain`
I think the first is incorrect.
### Idea or request for content:
_No response_ | DOC: Template "rag-gpt-crawler" doc is incorrect | https://api.github.com/repos/langchain-ai/langchain/issues/13640/comments | 2 | 2023-11-21T09:39:28Z | 2023-11-22T22:06:11Z | https://github.com/langchain-ai/langchain/issues/13640 | 2,003,853,667 | 13,640 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
The documentation on [creating documents](https://js.langchain.com/docs/modules/data_connection/document_loaders/how_to/creating_documents) covers optional document metadata but doesn't mention that it's possible to create text metadata in `page_content`. For example if only a filename is given to `CSVLoader` it will assume the header is metadata and [delimit each key-value pair with a newline](https://github.com/aws-samples/multi-tenant-chatbot-using-rag-with-amazon-bedrock/pull/14).
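For a concrete, made-up illustration of what I mean by text metadata ending up in `page_content`: a CSV row such as `name,role` / `Ada,engineer` can be loaded roughly as follows, with the header names written into the text rather than into `metadata` (column names and values are invented for this example):
```python
# Illustrative only — not taken from the linked docs.
from langchain.schema import Document

doc = Document(
    page_content="name: Ada\nrole: engineer",      # key-value pairs, newline-delimited
    metadata={"source": "people.csv", "row": 0},   # the "real" metadata fields
)
```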
Chat LangChain will talk about this source of metadata but I can't find any additional information in the provided references.
> In the context of the RAG model, if the CSV data is used as a source for retrieval or generation, the fieldnames can be utilized in several ways:
>
> • Retrieval: The fieldnames can be used as query terms to retrieve relevant documents from the CSV dataset. The RAG model can leverage the fieldnames to understand the user's query and retrieve documents that match the specified criteria.
>
> • Generation: The fieldnames can provide context and constraints for generating responses. The RAG model can use the fieldnames as prompts or conditioning information to generate responses that are specific to the content of the corresponding columns in the CSV file.
>
> By incorporating the CSV fieldnames into the retrieval and generation processes, the RAG model can produce more accurate and contextually relevant results based on the specific attributes and structure of the CSV dataset.
### Idea or request for content:
A description of the consequences of including text metadata for different scenarios, models, data stores.
Strategies for including text metadata. | DOC: Text metadata | https://api.github.com/repos/langchain-ai/langchain/issues/13639/comments | 2 | 2023-11-21T09:39:12Z | 2023-11-24T10:28:28Z | https://github.com/langchain-ai/langchain/issues/13639 | 2,003,853,198 | 13,639 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add a progress bar to `GooglePalmEmbeddings.embed_documents()` function. [tqdm](https://github.com/tqdm/tqdm) would work just fine.
In my opinion, all embedders should have a progress bar.
### Motivation
When processing embeddings, the user should have an idea of how much time it is going to take to embed the data. While using GooglePalmEmbeddings, which are not the fastest, I couldn't see a progress bar, and that was frustrating because I had no idea whether it was even accessing Google PaLM correctly or how long it was going to take.
### Your contribution
```python
from __future__ import annotations
import logging
from typing import Any, Callable, Dict, List, Optional
from tqdm import tqdm
from langchain_core.pydantic_v1 import BaseModel, root_validator
from langchain_core.schema.embeddings import Embeddings
from tenacity import (
before_sleep_log,
retry,
retry_if_exception_type,
stop_after_attempt,
wait_exponential,
)
from langchain.utils import get_from_dict_or_env
logger = logging.getLogger(__name__)
...
class GooglePalmEmbeddings(BaseModel, Embeddings):
"""Google's PaLM Embeddings APIs."""
client: Any
google_api_key: Optional[str]
model_name: str = "models/embedding-gecko-001"
"""Model name to use."""
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate api key, python package exists."""
google_api_key = get_from_dict_or_env(
values, "google_api_key", "GOOGLE_API_KEY"
)
try:
import google.generativeai as genai
genai.configure(api_key=google_api_key)
except ImportError:
raise ImportError("Could not import google.generativeai python package.")
values["client"] = genai
return values
def embed_documents(self, texts: List[str]) -> List[List[float]]:
return [self.embed_query(text) for text in tqdm(texts)]
def embed_query(self, text: str) -> List[float]:
"""Embed query text."""
embedding = embed_with_retry(self, self.model_name, text)
return embedding["embedding"]
``` | Add progress bar to GooglePalmEmbeddings | https://api.github.com/repos/langchain-ai/langchain/issues/13637/comments | 3 | 2023-11-21T09:16:16Z | 2024-02-27T16:06:19Z | https://github.com/langchain-ai/langchain/issues/13637 | 2,003,810,406 | 13,637 |
[
"hwchase17",
"langchain"
]
| ### System Info
When using the Jira wrapper in LangChain to parse data from Jira tickets, the application encounters a TypeError if the ticket information is empty. This issue occurs specifically when the priority field of a ticket is not set, leading to a 'NoneType' object is not subscriptable error.
### Environment Details
LangChain version: [specify version]
Jira Wrapper version: [specify version]
Python version: 3.10
Operating System: [specify OS]
### Error Logs/Stack Traces
```
Traceback (most recent call last):
...
File "/path/to/langchain/tools/jira/tool.py", line 53, in _run
return self.api_wrapper.run(self.mode, instructions)
...
File "/path/to/langchain/utilities/jira.py", line 72, in parse_issues
priority = issue["fields"]["priority"]["name"]
TypeError: 'NoneType' object is not subscriptable
```
### Proposed Solution
I propose adding a check before parsing the ticket information. If the information is empty, return an empty string instead of 'None'. This modification successfully prevented the application from breaking in my tests.
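A minimal sketch of the guard I tested (around line 72 of `langchain/utilities/jira.py`, in `parse_issues`; the variable names here are illustrative):
```python
# Fall back to an empty string when the priority field is not set (None).
priority_field = issue["fields"].get("priority")
priority = priority_field["name"] if priority_field else ""
```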
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to Reproduce
1- Execute a JQL query to fetch issues from a Jira project (e.g., project = SN and sprint = 'SN Sprint 8 + 9' and labels = 'fe').
2- Ensure that one of the fetched issues has an empty priority field.
3- Observe the application breaking with a TypeError.
### Expected behavior
The Jira wrapper should handle cases where ticket information, such as the priority field, is empty, without causing the application to break. | TypeError in Jira Wrapper When Parsing Empty Ticket Information | https://api.github.com/repos/langchain-ai/langchain/issues/13636/comments | 3 | 2023-11-21T08:52:06Z | 2024-02-27T16:06:24Z | https://github.com/langchain-ai/langchain/issues/13636 | 2,003,767,185 | 13,636 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.326
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chains import RetrievalQA
from langchain.memory import ConversationBufferMemory
from langchain.memory import ConversationSummaryMemory
from langchain.prompts import PromptTemplate
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.storage import InMemoryStore
from langchain.retrievers import ParentDocumentRetriever
# This text splitter is used to create the parent documents - The big chunks
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=400)
# This text splitter is used to create the child documents - The small chunks
# It should create documents smaller than the parent
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
# The vectorstore to use to index the child chunks
from chromadb.errors import InvalidDimensionException
try:
vectorstore = Chroma(collection_name="split_parents", embedding_function=bge_embeddings, persist_directory="chroma_db")
except InvalidDimensionException:
Chroma().delete_collection()
vectorstore = Chroma(collection_name="split_parents", embedding_function=bge_embeddings, persist_directory="chroma_db")
#vectorstore = Chroma(collection_name="split_parents", embedding_function=bge_embeddings)
# The storage layer for the parent documents
store = InMemoryStore()
big_chunks_retriever = ParentDocumentRetriever(
vectorstore=vectorstore,
docstore=store,
child_splitter=child_splitter,
parent_splitter=parent_splitter,
)
big_chunks_retriever.add_documents(documents)
qa_template = """
Nutze die folgenden Informationen aus dem Kontext (getrennt mit <ctx></ctx>), um die Frage zu beantworten.
Antworte nur auf Deutsch, weil der Nutzer kein Englisch versteht! \
Falls du die Antwort nicht weißt, antworte mit "Leider fehlen mir dazu die Informationen." \
Wenn du nicht genügend Informationen unten findest, antworte ebenfalls mit "Leider fehlen mir dazu die Informationen." \
------
<ctx>
{context}
</ctx>
------
{query}
Answer:
"""
prompt = PromptTemplate(template=qa_template,
input_variables=['context','history', 'question'])
chain_type_kwargs={
"verbose": True,
"prompt": prompt,
"memory": ConversationSummaryMemory(
llm=build_llm(),
memory_key="history",
input_key="question",
return_messages=True)}
refine = RetrievalQA.from_chain_type(llm=build_llm(),
chain_type="refine",
return_source_documents=True,
chain_type_kwargs=chain_type_kwargs,
retriever=big_chunks_retriever,
verbose=True)
query = "Hi, I am Max, can you help me??"
refine(query)
```
### Expected behavior
Hi,
in the code above you can see how I built my RAG model with the ParentDocumentRetriever from LangChain and with memory. At the moment I am using the RetrievalQA chain with the default `chain_type="stuff"`. However, I want to try different chain types like "map_reduce" or "refine". But when I replace it with `chain_type="refine"` and create the RetrievalQA chain, I get the following error:
```
ValidationError: 1 validation error for RefineDocumentsChain
prompt
extra fields not permitted (type=value_error.extra)
```
How can I solve this? | Chain Type Refine Error: 1 validation error for RefineDocumentsChain prompt extra fields not permitted | https://api.github.com/repos/langchain-ai/langchain/issues/13635/comments | 3 | 2023-11-21T08:36:32Z | 2024-02-27T16:06:29Z | https://github.com/langchain-ai/langchain/issues/13635 | 2,003,734,187 | 13,635 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain V: 0.339
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Following any example that uses a `langchain.schema.runnable` object. For example the "[Adding memory](https://python.langchain.com/docs/expression_language/cookbook/memory)" tutorial uses `RunnableLambda` and `RunnablePassthrough`.
- I no longer see `langchain.schema` in the API docs (see image below).
- Searching in the API docs also doesn't return any results when searching for `RunnablePassthrough`
<img width="363" alt="image" src="https://github.com/langchain-ai/langchain/assets/94480542/1b3ded17-c669-406d-8309-4f953f42c1f6">
### Expected behavior
- I don't see anything in the release notes about the `langchain.schema.runnables` being removed or relocated.
- I would have expected to see them in the API docs, or at least for them to still be returned when searching for them.
- Not sure if this is a documentation build issue and the modules are still importable as I have not updated my Langchain version yet. I was just using the docs as a reference and then starting getting 404 errors upon page refresh (e.g. [this](https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.AIMessageChunk.html#langchain.schema.messages.AIMessageChunk) page for `AIMessageChunk` also no longer exists) | Langchain.schema.runnable now missing from docs? | https://api.github.com/repos/langchain-ai/langchain/issues/13631/comments | 4 | 2023-11-20T23:53:15Z | 2024-03-13T19:55:48Z | https://github.com/langchain-ai/langchain/issues/13631 | 2,003,206,976 | 13,631 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain Version: 0.0.339
Python version: 3.10.8
Windows 10 Enterprise 21H2
When creating a ConversationalRetrievalChain as follows:
CONVERSATION_RAG_CHAIN_WITH_SUMMARY_BUFFER = ConversationalRetrievalChain(
combine_docs_chain=combine_docs_chain,
memory=summary_memory,
retriever=rag_retriever,
question_generator=question_generator_chain
)
With
LLM = AzureChatOpenAI(...)
The following error occurs:
"
Traceback (most recent call last):
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\om403f\Documents\Applied_Research\Deep_Learning\web_app\app.py", line 1456, in llm_task
history_rag_buffer_result = CONVERSATION_RAG_CHAIN_WITH_SUMMARY_BUFFER.invoke({'question':user_query, 'chat_history':summary_memory})
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\base.py", line 87, in invoke
return self(
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\base.py", line 310, in __call__
raise e
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 135, in _call
chat_history_str = get_chat_history(inputs["chat_history"])
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 44, in _get_chat_history
ai = "Assistant: " + dialogue_turn[1]
TypeError: can only concatenate str (not "AzureChatOpenAI") to str
"
Alternatively, when using CTransformers as follows:
LLM = CTransformers(model=llm_model, model_type="llama", config=config, streaming=True, callbacks=[StreamingStdOutCallbackHandler()])
The following error occurs:
"
Traceback (most recent call last):
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\om403f\Documents\Applied_Research\Deep_Learning\web_app\app.py", line 1456, in llm_task
history_rag_buffer_result = CONVERSATION_RAG_CHAIN_WITH_SUMMARY_BUFFER.invoke({'question':user_query, 'chat_history':summary_memory})
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\base.py", line 87, in invoke
return self(
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\base.py", line 310, in __call__
raise e
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 135, in _call
chat_history_str = get_chat_history(inputs["chat_history"])
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 44, in _get_chat_history
ai = "Assistant: " + dialogue_turn[1]
TypeError: can only concatenate str (not "CTransformers") to str
"
Hope this is an accurate bug report and it helps! Apologies if this is in fact a dumb report and actually an error at my end.
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema.vectorstore import VectorStoreRetriever
from langchain.memory import ConversationSummaryBufferMemory
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.prompts import PromptTemplate
from langchain.vectorstores import Chroma
from langchain.chains import LLMChain
from langchain.chains import StuffDocumentsChain
from langchain.chains import ConversationalRetrievalChain
from langchain.llms import CTransformers
VECTOR_STORE = Chroma(persist_directory=VECTORDB_SBERT_FOLDER, embedding_function=HuggingFaceEmbeddings())
LLM = CTransformers(model=llm_model, model_type="llama", config=config, streaming=True, callbacks=[StreamingStdOutCallbackHandler()])
document_prompt = PromptTemplate(
input_variables=["page_content"],
template="{page_content}"
)
document_variable_name = "context"
temp_StuffDocumentsChain_prompt = PromptTemplate.from_template(
"Summarize this content: {context}"
)
llm_chain_for_StuffDocumentsChain = LLMChain(llm=LLM, prompt=temp_StuffDocumentsChain_prompt)
combine_docs_chain = StuffDocumentsChain(
llm_chain=llm_chain_for_StuffDocumentsChain,
document_prompt=document_prompt,
document_variable_name=document_variable_name
)
summary_memory = ConversationSummaryBufferMemory(llm=LLM, max_token_limit=100)
retriever=VECTOR_STORE.as_retriever()
rag_retriever = VectorStoreRetriever(vectorstore=VECTOR_STORE)
temp_template = (
"""
Combine the chat history and question into a standalone question:
Chat history: {chat_history}
question: {user_query}
"""
)
temp_prompt = PromptTemplate.from_template(temp_template)
question_generator_chain = LLMChain(llm=LLM, prompt=temp_prompt)
CONVERSATION_RAG_CHAIN_WITH_SUMMARY_BUFFER = ConversationalRetrievalChain(
combine_docs_chain=combine_docs_chain,
memory=summary_memory,
retriever=rag_retriever,
question_generator=question_generator_chain
)
CONVERSATION_RAG_CHAIN_WITH_SUMMARY_BUFFER.invoke({'question':user_query, 'chat_history':summary_memory})
### Expected behavior
Should work according to code example and API specs as described in the official LangChain API docs:
https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain.html#langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain | Potential Bug in ConversationalRetrievalChain - TypeError: can only concatenate str (not "CTransformers") to str | TypeError: can only concatenate str (not "AzureChatOpenAI") to str | https://api.github.com/repos/langchain-ai/langchain/issues/13628/comments | 3 | 2023-11-20T23:14:44Z | 2024-03-17T16:06:11Z | https://github.com/langchain-ai/langchain/issues/13628 | 2,003,171,128 | 13,628 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain Version: 0.0.339
Python version: 3.10.8
Windows 10 Enterprise 21H2
When creating a ConversationalRetrievalChain as follows:
CONVERSATION_RAG_CHAIN_WITH_SUMMARY_BUFFER = ConversationalRetrievalChain(
combine_docs_chain=combine_docs_chain,
memory=summary_memory,
retriever=rag_retriever,
question_generator=question_generator_chain
)
With rag_retriever = VectorStoreRetrieverMemory(retriever=VECTOR_STORE.as_retriever())
The following error occurs:
"
Traceback (most recent call last):
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\om403f\Documents\Applied_Research\Deep_Learning\web_app\app.py", line 1438, in llm_task
CONVERSATION_RAG_CHAIN_WITH_SUMMARY_BUFFER = ConversationalRetrievalChain(
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\load\serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ConversationalRetrievalChain
retriever
Can't instantiate abstract class BaseRetriever with abstract method _get_relevant_documents (type=type_error)
"
Name mangling may be occurring as described here: https://stackoverflow.com/questions/31457855/cant-instantiate-abstract-class-with-abstract-methods
retriever.py implements the abstract method _get_relevant_documents: https://github.com/langchain-ai/langchain/blob/4eec47b19128fa168e58b9a218a9da049275f6ce/libs/langchain/langchain/schema/retriever.py#L136
Hope this is an accurate bug report and it helps! Apologies if this is in fact a dumb report and actually an error at my end.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.memory import ConversationSummaryBufferMemory
from langchain.memory import VectorStoreRetrieverMemory
from langchain.prompts import PromptTemplate
from langchain.vectorstores import Chroma
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.chat_models import AzureChatOpenAI
from langchain.chains import LLMChain
from langchain.chains import RetrievalQA
from langchain.chains import ConversationChain
from langchain.chains import StuffDocumentsChain
from langchain.chains import ConversationalRetrievalChain
VECTOR_STORE = Chroma(persist_directory=VECTORDB_SBERT_FOLDER, embedding_function=HuggingFaceEmbeddings())
LLM = AzureChatOpenAI()
document_prompt = PromptTemplate(
input_variables=["page_content"],
template="{page_content}"
)
document_variable_name = "context"
temp_StuffDocumentsChain_prompt = PromptTemplate.from_template(
"Summarize this content: {context}"
)
llm_chain_for_StuffDocumentsChain = LLMChain(llm=LLM, prompt=temp_StuffDocumentsChain_prompt)
combine_docs_chain = StuffDocumentsChain(
llm_chain=llm_chain_for_StuffDocumentsChain,
document_prompt=document_prompt,
document_variable_name=document_variable_name
)
summary_memory = ConversationSummaryBufferMemory(llm=LLM, max_token_limit=100)
retriever=VECTOR_STORE.as_retriever()
rag_retriever = VectorStoreRetrieverMemory(retriever=retriever)
temp_template = (
"""
Combine the chat history and qustion into a standalone question:
Chat history: {chat_history}
question: {user_query}
"""
)
temp_prompt = PromptTemplate.from_template(temp_template)
question_generator_chain = LLMChain(llm=LLM, prompt=temp_prompt)
CONVERSATION_RAG_CHAIN_WITH_SUMMARY_BUFFER = ConversationalRetrievalChain(
combine_docs_chain=combine_docs_chain,
memory=summary_memory,
retriever=rag_retriever,
question_generator=question_generator_chain
)
### Expected behavior
Example code here works: https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain.html#langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain | Potential Bug in Retriever.py: Can't instantiate abstract class BaseRetriever with abstract method _get_relevant_documents | https://api.github.com/repos/langchain-ai/langchain/issues/13624/comments | 9 | 2023-11-20T21:27:06Z | 2024-07-01T04:57:23Z | https://github.com/langchain-ai/langchain/issues/13624 | 2,003,033,612 | 13,624 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I have been using Ollama with LangChain for various tasks, but sometimes Ollama takes too long to respond, depending on my local hardware. Is it possible to add a configurable timeout to the Ollama base class so that I can adjust this setting and avoid timeouts when using agents? Currently, I am getting an `httpx` timeout error when using Ollama.
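Something along these lines is what I have in mind — note that the `timeout` parameter does not exist today; this is purely a sketch of the proposed API:
```python
from langchain.llms import Ollama

# Proposed (hypothetical) parameter: forward a request timeout, in seconds,
# to the underlying HTTP client instead of relying on its default.
llm = Ollama(model="llama2", timeout=600)
```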
### Motivation
This feature will help users leverage local LLMs on a variety of hardware and lets us experiment and build with local LLMs before relying on any third-party APIs.
### Your contribution
If this is something that would be considered as a feature I am happy to add in a PR for this feature. | Configurable timeout for Ollama | https://api.github.com/repos/langchain-ai/langchain/issues/13622/comments | 3 | 2023-11-20T21:10:54Z | 2023-11-20T21:36:40Z | https://github.com/langchain-ai/langchain/issues/13622 | 2,003,012,832 | 13,622 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The compatible versions for library `pgvector` listed in `poetry.lock` are really old: `"pgvector (>=0.1.6,<0.2.0)"`.
Are we able to update them to more recent ones?
### Suggestion:
Update versions to recent ones. | Issue: `pgvector` versions in `poetry.lock` are really old | https://api.github.com/repos/langchain-ai/langchain/issues/13617/comments | 3 | 2023-11-20T19:32:27Z | 2024-03-17T16:06:06Z | https://github.com/langchain-ai/langchain/issues/13617 | 2,002,864,501 | 13,617 |
[
"hwchase17",
"langchain"
]
| ### System Info
# Create and load Redis with documents
vectorstore = RedisVectorStore.from_texts(
texts=texts,
metadatas=metadatas,
embedding=embedding,
index_name=index_name,
redis_url=redis_url
)
The error I faced:
Redis cannot be used as a vector database without RediSearch >=2.4. Please head to https://redis.io/docs/stack/search/quick_start/ to know more about installing the RediSearch module within Redis Stack.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
.
### Expected behavior
. | ValueError: Redis failed to connect: | https://api.github.com/repos/langchain-ai/langchain/issues/13611/comments | 4 | 2023-11-20T16:10:48Z | 2024-06-19T11:39:45Z | https://github.com/langchain-ai/langchain/issues/13611 | 2,002,541,905 | 13,611 |
[
"hwchase17",
"langchain"
]
| ### System Info
I encountered an exception and a type checking notice in PyCharm while working with the following code snippet:
```
split_documents = text_splitter.split_documents(raw_documents)
cached_embedder.embed_documents(split_documents)
```
The type checking notice indicates that there is a mismatch in the expected type for a Document. According to the type definition, a Document should have properties such as page_content, type, and metadata. However, the function embed_documents seems to be designed to handle a list of strings instead of documents.
To align with the expected type for a Document, it is suggested to consider renaming the function from embed_documents to something like embed_strings or embed_texts. This change would accurately reflect the input type expected by the function and help avoid type-related issues during development.
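In the meantime, the call that matches the current `List[str]` signature would be to pass the raw text, for example (a sketch, assuming `split_documents` holds `Document` objects):
```python
# Workaround sketch: embed_documents expects a list of strings,
# so extract page_content from each Document first.
texts = [doc.page_content for doc in split_documents]
embeddings = cached_embedder.embed_documents(texts)
```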
Thank you for your attention to this matter.
```
cached_embedder.embed_documents(split_documents)
...
File "venv/lib/python3.11/site-packages/langchain/embeddings/cache.py", line 26, in _hash_string_to_uuid
hash_value = hashlib.sha1(input_string.encode("utf-8")).hexdigest()
^^^^^^^^^^^^^^^^^^^
AttributeError: 'Document' object has no attribute 'encode'
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import DirectoryLoader, UnstructuredMarkdownLoader
from langchain.text_splitter import CharacterTextSplitter

loader = DirectoryLoader(
"./data",
glob="**/*.md",
show_progress=True,
use_multithreading=True,
loader_cls=UnstructuredMarkdownLoader,
loader_kwargs={"mode": "elements"},
)
raw_documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
split_documents = text_splitter.split_documents(raw_documents)
cached_embedder.embed_documents(split_documents)
### Expected behavior
method should either accept a list of Documents or be renamed to embed_strings | Type Checking issue: CacheBackedEmbeddings.split_documents does not take a list of Documents | https://api.github.com/repos/langchain-ai/langchain/issues/13610/comments | 3 | 2023-11-20T15:27:25Z | 2024-02-26T16:05:48Z | https://github.com/langchain-ai/langchain/issues/13610 | 2,002,454,320 | 13,610 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello, I am working on a conversational chatbot; here is a snippet of the code:
```
general_system_template = """You are a chatbot...
---
{summaries}"""
general_user_template = "Question: {question}"
messages = [
SystemMessagePromptTemplate.from_template(general_system_template),
HumanMessagePromptTemplate.from_template(general_user_template)
]
qa_prompt = ChatPromptTemplate(
messages=messages,
input_variables=['question', 'summaries']
)
q = Queue()
llm_chat = ChatVertexAI(
temperature=0,
model_name="chat-bison",
streaming=True,
callbacks=[QueueCallback(q)],
verbose=False
)
retriever = docsearch.as_retriever(
search_type="similarity",
search_kwargs={
'k': 2,
'filter': {'source': {'$in': sources}}
}
)
llm_text = VertexAI(
temperature=0,
model_name="text-bison"
)
combine_docs_chain = load_qa_with_sources_chain(
llm=llm_chat,
chain_type="stuff",
prompt=qa_prompt
)
condense_question_template = (
"""Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Chat History: {chat_history}
Follow Up Input: {question}"""
)
condense_question_prompt = PromptTemplate.from_template(condense_question_template)
condense_chain = LLMChain(
llm=llm_text,
prompt=condense_question_prompt,
verbose=True
)
chain = ConversationalRetrievalChain(
combine_docs_chain_=combine_docs_chain,
retriever=retriever,
question_generator=condense_chain
)
```
When running the code I get the following error:
```
pydantic.error_wrappers.ValidationError: 2 validation errors for ConversationalRetrievalChain
combine_docs_chain
field required (type=value_error.missing)
combine_docs_chain_
extra fields not permitted (type=value_error.extra)
```
How could I solve this? Is there any way to get a more detailed error?
### Suggestion:
_No response_ | Issue: Validation errors for ConversationalRetrievalChain (combine_docs_chain) | https://api.github.com/repos/langchain-ai/langchain/issues/13607/comments | 3 | 2023-11-20T13:30:37Z | 2024-02-26T16:05:53Z | https://github.com/langchain-ai/langchain/issues/13607 | 2,002,214,770 | 13,607 |
[
"hwchase17",
"langchain"
]
| ### System Info
python = "^3.8.10"
langchain = "^0.0.336"
google-cloud-aiplatform = "^1.36.3"
### Who can help?
@hwchase17 @agol
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.llms.vertexai import VertexAI
model = VertexAI(
model_name="text-bison@001",
temperature=0.2,
max_output_tokens=1024,
top_k=40,
top_p=0.8
)
model.client
# <vertexai.preview.language_models._PreviewTextGenerationModel at ...>
# it should be <vertexai.language_models.TextGenerationModel at ...>
```
### Expected behavior
Code reference: https://github.com/langchain-ai/langchain/blob/78a1f4b264fbdca263a4f8873b980eaadb8912a7/libs/langchain/langchain/llms/vertexai.py#L255C77-L255C77
The VertexAI API is now using vertexai.language_models.TextGenerationModel.
Instead, here we are still importing it from from vertexai.preview.language_models. | Changed import of VertexAI | https://api.github.com/repos/langchain-ai/langchain/issues/13606/comments | 3 | 2023-11-20T12:55:25Z | 2024-02-26T16:05:58Z | https://github.com/langchain-ai/langchain/issues/13606 | 2,002,142,951 | 13,606 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I've created a multi-level directory vector store using Faiss. How can I retrieve all indices within one or multiple subdirectories?
### Suggestion:
_No response_ | Issue: retrieve multi index from vector store using Faiss in Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/13605/comments | 2 | 2023-11-20T12:47:43Z | 2023-11-21T14:58:27Z | https://github.com/langchain-ai/langchain/issues/13605 | 2,002,129,643 | 13,605 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain 0.0.338
Python 3.11.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I was trying to combine multiple structured `Tool`s, one that produces a `List` of values and another that consumes it, but couldn't get it to work. I asked the LangChain support bot whether it was possible and it said yes and produced the following example. But it does not work :)
```python
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, AgentType
from langchain.tools import BaseTool
from typing import List
# Define the first structured tool that returns a list of strings
class ListTool(BaseTool):
name = "List Tool"
description = "Generates a list of strings."
def _run(self) -> List[str]:
"""Return a list of strings."""
return ["apple", "banana", "cherry"]
tool1 = ListTool()
# Define the second structured tool that accepts a list of strings
class ProcessListTool(BaseTool):
name = "Process List Tool"
description = "Processes a list of strings."
def _run(self, input_list: List[str]) -> str:
"""Process the list of strings."""
# Perform the processing logic here
processed_list = [item.upper() for item in input_list]
return f"Processed list: {', '.join(processed_list)}"
tool2 = ProcessListTool()
llm = OpenAI(temperature=0)
agent_executor = initialize_agent(
[tool1, tool2],
llm,
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
)
output = agent_executor.run("Process the list")
print(output) # Output: 'Processed list: APPLE, BANANA, CHERRY'
```
Full output:
```
> Entering new AgentExecutor chain...
Action:
{
"action": "Process List Tool",
"action_input": {
"input_list": {
"title": "Input List",
"type": "array",
"items": {
"type": "string"
}
}
}
}
Observation: Processed list: TITLE, TYPE, ITEMS
Thought: I have the processed list
Action:
{
"action": "Final Answer",
"action_input": "I have processed the list and it contains the following: TITLE, TYPE, ITEMS"
}
> Finished chain.
```
### Expected behavior
Expected output:
```
Processed list: APPLE, BANANA, CHERRY'
``` | Structured tools not able to pass structured data to each other | https://api.github.com/repos/langchain-ai/langchain/issues/13602/comments | 12 | 2023-11-20T10:21:21Z | 2024-02-26T16:06:04Z | https://github.com/langchain-ai/langchain/issues/13602 | 2,001,851,127 | 13,602 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain==0.0.338
python==3.8.1
neo4j latest
This is the error:
---------------------------------------------------------------------------
ConfigurationError                        Traceback (most recent call last)
/Users/m1/Desktop/LangChain/Untitled.ipynb Cell 1 line 5
      2 import os
      4 uri, user, password = os.getenv("NEO4J_URI"), os.getenv("NEO4J_USERNAME"), os.getenv("NEO4J_PASSWORD")
----> 5 graph = Neo4jGraph(
      6     url=uri,
      7     username=user,
      8     password=password,
      9 )
File ~/Desktop/LangChain/KG_openai/lib/python3.8/site-packages/langchain/graphs/neo4j_graph.py:69, in Neo4jGraph.__init__(self, url, username, password, database)
     66 password = get_from_env("password", "NEO4J_PASSWORD", password)
     67 database = get_from_env("database", "NEO4J_DATABASE", database)
---> 69 self._driver = neo4j.GraphDatabase.driver(url, auth=(username, password))
     70 self._database = database
     71 self.schema: str = ""
File ~/Desktop/LangChain/KG_openai/lib/python3.8/site-packages/neo4j/_sync/driver.py:190, in GraphDatabase.driver(cls, uri, auth, **config)
    170 @classmethod
    171 def driver(
    172     cls, uri: str, *,
    (...)
    177     **config
    178 ) -> Driver:
...
--> 486 raise ConfigurationError("Username is not supported in the URI")
    488 if parsed.password:
    489     raise ConfigurationError("Password is not supported in the URI")
ConfigurationError: Username is not supported in the URI
(Output truncated.)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.graphs import Neo4jGraph
import os
uri, user, password = os.getenv("NEO4J_URI"), os.getenv("NEO4J_USERNAME"), os.getenv("NEO4J_PASSWORD")
graph = Neo4jGraph(
url= uri,
username=user,
password=password,
)
### Expected behavior
This driver construction runs fine in v0.0.264; however, it gives me this error in v0.0.338. At the end, in the driver stub, the URL is parsed and then the username from the parsed URL is checked; if it is present, the above configuration error is raised.
| Neo4j - ConfigurationError: username not supported in the URI | https://api.github.com/repos/langchain-ai/langchain/issues/13601/comments | 5 | 2023-11-20T10:21:02Z | 2024-02-26T16:06:08Z | https://github.com/langchain-ai/langchain/issues/13601 | 2,001,850,563 | 13,601 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'd like to have `ConversationSummaryMemory` filled with the previous questions and answers for a specific conversation from an SQLite database, so that my agent is already aware of the previous conversation with the user.
Here's my current code:
```py
import os
import sys
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.indexes import VectorstoreIndexCreator
from langchain.indexes.vectorstore import VectorStoreIndexWrapper
from langchain.vectorstores.chroma import Chroma
from langchain.memory import ConversationSummaryMemory
from langchain.tools import Tool
from langchain.agents.types import AgentType
from langchain.agents import initialize_agent
from dotenv import load_dotenv
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
query = " ".join(sys.argv[1:]) if len(sys.argv) > 1 else None
retriever = # retriever stuff here for the `local-docs` tool
llm = ChatOpenAI(temperature=0.7, model="gpt-3.5-turbo-1106")
memory = ConversationSummaryMemory(
llm=llm,
memory_key="chat_history",
return_messages=True,
)
chain = ConversationalRetrievalChain.from_llm(
llm=llm,
memory=memory,
chain_type="stuff",
retriever=index.vectorstore.as_retriever(search_kwargs={"k": 4}),
get_chat_history=lambda h: h,
verbose=False,
)
system_message = (
"Be helpful to your users".
)
tools = [
Tool(
name="local-docs",
func=chain,
description="Useful when you need to answer docs-related questions",
)
]
def ask(input: str) -> str:
result = ""
try:
result = executor({"input": input})
except Exception as e:
response = str(e)
if response.startswith("Could not parse LLM output: `"):
response = response.removeprefix(
"Could not parse LLM output: `"
).removesuffix("`")
return response
else:
raise Exception(str(e))
return result
chat_history = []
executor = initialize_agent(
agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
tools=tools,
llm=llm,
memory=memory,
agent_kwargs={"system_message": system_message},
verbose=True,
max_execution_time=30,
max_iterations=6,
handle_parsing_errors=True,
early_stopping_method="generate",
stop=["\nObservation:"],
)
result = ask(query)
print(result["output"])
``` | Issue: Filling `ConversationSummaryMemory` with existing conversation from an SQLite database | https://api.github.com/repos/langchain-ai/langchain/issues/13599/comments | 17 | 2023-11-20T08:52:20Z | 2023-11-30T03:27:24Z | https://github.com/langchain-ai/langchain/issues/13599 | 2,001,666,284 | 13,599 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add support for other multimodal models like LLaVA, Fuyu, BakLLaVA, etc. This would help with RAG, where documents contain non-text data.
### Motivation
I have a lot of tables and images to process in PDFs when doing RAG, and right now this is not ideal.
### Your contribution
no time :( | add multimodal support | https://api.github.com/repos/langchain-ai/langchain/issues/13597/comments | 3 | 2023-11-20T07:00:43Z | 2024-02-26T16:06:13Z | https://github.com/langchain-ai/langchain/issues/13597 | 2,001,501,651 | 13,597 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using Qdrant as my vector store, and every time I use 'max_marginal_relevance_search' with a fixed k parameter, it always returns the same documents. How can I add some randomness, so that it returns something different (still within the score_threshold) each time? Here is my sample code using 'max_marginal_relevance_search':
related_docs = vectorstore.max_marginal_relevance_search(target_place, k=fetch_amount, score_threshold=0.5, filter=rest.Filter(must=[rest.FieldCondition(
key='metadata.category',
match=rest.MatchValue(value=category),
),rest.FieldCondition(
key='metadata.related_words',
match=rest.MatchAny(any=related_words),
)]))
### Suggestion:
_No response_ | Issue: How to add randomness when using max_marginal_relevance_search with Qdrant | https://api.github.com/repos/langchain-ai/langchain/issues/13596/comments | 3 | 2023-11-20T06:38:24Z | 2024-02-26T16:06:18Z | https://github.com/langchain-ai/langchain/issues/13596 | 2,001,474,706 | 13,596 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.266
### Who can help?
@eyurtsev @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import datetime
import chainlit
from dotenv import load_dotenv
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.chat_models import ChatOpenAI
from langchain.docstore.document import Document # noqa
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.retrievers import SelfQueryRetriever
from langchain.vectorstores import Chroma
chainlit.debug = True
load_dotenv()
llm = ChatOpenAI()
docs = [
Document(
page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata={"released_at": 1700190868, "rating": 7.7, "genre": "science fiction"},
),
Document(
page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata={"released_at": 1700190868, "director": "Christopher Nolan", "rating": 8.2},
),
Document(
page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata={"released_at": 1700190868, "director": "Satoshi Kon", "rating": 8.6},
),
Document(
page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata={"released_at": 1700190868, "director": "Greta Gerwig", "rating": 8.3},
),
Document(
page_content="Toys come alive and have a blast doing so",
metadata={"released_at": 1700190868, "genre": "animated"},
),
Document(
page_content="Three men walk into the Zone, three men walk out of the Zone",
metadata={"released_at": 1700190868, "director": "Andrei Tarkovsky", "genre": "thriller", "rating": 9.9},
),
]
vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())
metadata_field_info = [
AttributeInfo(
name="released_at",
description="Time the movie was released. It's second timestamp.",
type="integer",
),
]
document_content_description = "Brief summary of a movie"
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
)
result = retriever.invoke(
f"What's a movie in this month that's all about toys, and preferably is animated. Current time is: {datetime.datetime.now().strftime('%m/%d/%Y, %H:%M:%S')}.",
)
print(result)
```
### Expected behavior
I declared the `metadata_field_info` which includes the `released_at` field with the data type `integer`.
## I expected the following:
When my query involves the time/timerange of the release time, the query should compare using `integer` instead of `date time`.
### Why I expected this:
- The data type declared in `metadata_field_info` should be utilized.
- In the implementations of `SelfQueryRetriever` (I tested `qdrant` and `chroma`), the accepted type in comparison operations (gte/lte) must be numeric, not a date.
### Identified Reason
I identified the problem due to the `"SCHEMA[s]"` in [langchain/chains/query_constructor/prompt.py](https://github.com/langchain-ai/langchain/blob/190952fe76d8f7bf1e661cbdaa2ba0a2dc0f5456/libs/langchain/langchain/chains/query_constructor/prompt.py#L117).
This line in the prompt led to the result:
```
Make sure that filters only use format `YYYY-MM-DD` when handling date data typed values
```
I guess that it works for some SQL databases such as `PostgreSQL`, which accept 'YYYY-MM-DD' as date query inputs.
However, since we are working with metadata in vector records, which are structured like JSON objects with key-value pairs, it may not work.
### Proof of reason
I tried modifying the prompts by defining my own classes and functions, such as `load_query_constructor_chain`, `_get_prompt`, and `SelfQueryRetriever`.
After replacing the above line with the following, it worked as expected:
```
Make sure that filters only use timestamp in second (integer) when handling timestamp data typed values.
```
### Proposals
- Review the above problem. If metadata fields do not support querying with the date format 'YYYY-MM-DD' as specified in the prompt, please update it.
- If this prompt is specified for some use cases, please allow overriding the prompts.
| [SelfQueryRetriever] Generated Query Mismatched Timestamp Type | https://api.github.com/repos/langchain-ai/langchain/issues/13593/comments | 3 | 2023-11-20T04:16:00Z | 2024-04-30T16:22:56Z | https://github.com/langchain-ai/langchain/issues/13593 | 2,001,330,836 | 13,593 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Ability to use guidance.
https://github.com/guidance-ai/guidance
### Motivation
Not related to a problem.
### Your contribution
Not sure yet but I can look into it if it is something the community considers. | Support for Guidance | https://api.github.com/repos/langchain-ai/langchain/issues/13590/comments | 3 | 2023-11-20T03:54:37Z | 2024-02-26T16:06:23Z | https://github.com/langchain-ai/langchain/issues/13590 | 2,001,313,070 | 13,590 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I know I can generate a Python dictionary output using StructuredOutputParser, like `{"a": 1, "b": 2, "c": 3}`. However, I would like to generate a nested dict like `{"a": 1, "b": 2, "c": {"d": 4, "e": 5}}`.
How can I do it?
### Suggestion:
_No response_ | Issue: can i generate a nested dic output | https://api.github.com/repos/langchain-ai/langchain/issues/13589/comments | 3 | 2023-11-20T03:10:22Z | 2024-02-26T16:06:27Z | https://github.com/langchain-ai/langchain/issues/13589 | 2,001,278,123 | 13,589 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi, I'm using a ConversationChain that contains memory. It is defined as:

from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

llm = ChatOpenAI(temperature=0.0, model=llm_model)
memory = ConversationBufferMemory()
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True,
)
I know I can access the current memory by using `memory.buffer`. However, I was wondering if there is a way to access the memory only through the ConversationChain instance `conversation`?
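A minimal sketch of the kind of access in question; the chain keeps a reference to its memory object, so presumably something like:
```python
# Assuming the chain defined above: the memory is reachable as an attribute of the chain.
print(conversation.memory.buffer)                # formatted history string
print(conversation.memory.chat_memory.messages)  # underlying HumanMessage/AIMessage objects
```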
### Suggestion:
_No response_ | Issue: can i access memory buffer through chain? | https://api.github.com/repos/langchain-ai/langchain/issues/13584/comments | 5 | 2023-11-19T21:31:23Z | 2024-02-25T16:05:02Z | https://github.com/langchain-ai/langchain/issues/13584 | 2,001,045,763 | 13,584 |
[
"hwchase17",
"langchain"
]
| ### System Info
Linux 20.04 LTS
Python 3.6
### Who can help?
@hwchase17, it seems like this got introduced on 2023-11-16.
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Attempt to use a tracer to trace an LLM error (a minimal probe is sketched below).
2. Note that the tracer hook `_on_chain_error` is called instead of `_on_llm_error`.
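
A minimal probe along these lines (a sketch, assuming the `BaseTracer` hook names; the LLM failure itself can be forced any convenient way):
```python
from langchain.callbacks.tracers.base import BaseTracer


class ProbeTracer(BaseTracer):
    def _persist_run(self, run) -> None:
        pass

    def _on_llm_error(self, run) -> None:
        print("llm error hook fired")

    def _on_chain_error(self, run) -> None:
        print("chain error hook fired")


# e.g. force an auth failure and watch which hook prints:
# llm = ChatOpenAI(openai_api_key="bad-key", max_retries=0, callbacks=[ProbeTracer()])
# llm.invoke("hi")
```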
### Expected behavior
The `_on_llm_error` hook should be called.
| The tracing on_llm_error() implementation calls _on_chain_error(), not _on_llm_error() | https://api.github.com/repos/langchain-ai/langchain/issues/13580/comments | 3 | 2023-11-19T19:21:07Z | 2024-02-28T16:07:56Z | https://github.com/langchain-ai/langchain/issues/13580 | 2,000,998,966 | 13,580 |
[
"hwchase17",
"langchain"
]
| ### System Info
Mac M1
### Who can help?
@eyurtsev
Here:
https://github.com/langchain-ai/langchain/blob/78a1f4b264fbdca263a4f8873b980eaadb8912a7/libs/langchain/langchain/document_loaders/confluence.py#L284
The loader starts by adding the first `max_pages` pages of the space to the `docs` list that `loader.load` returns.
So I can never retrieve only one specific `page_id`:
`loader.load(..., page_ids=['1234'], max_pages=N)`
will output X pages, where X is in [min(N, # pages in my Confluence), N + 1].
In other words, if I want only a specific page, I will always get at least 2 pages back (in the case max_pages = 1).
So `page_ids` alone does not work at all, because `space_key` is mandatory.
Adding `if space_key and not page_ids` fixes my problem but may lead to other problems (I did not check).
A dirty hack would be to collect the last F elements of the returned list of pages, where F is the number of pages found for the requested `page_ids`.
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
No time to put together a full script, but it is easy to see when reading the code; a rough sketch is below.
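Sketch of the call pattern (URL, token, space key and page id are placeholders for a reachable Confluence instance):
```python
from langchain.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(url="https://example.atlassian.net/wiki", token="<pat>")
docs = loader.load(space_key="SPACE", page_ids=["1234"], max_pages=1)
print(len(docs))  # expected 1 document, but the first max_pages pages of the space are included too
```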
### Expected behavior
I can retrieve only the page_ids specified | Confluence loader fails to retrieve specific pages when 'pages_ids' is given | https://api.github.com/repos/langchain-ai/langchain/issues/13579/comments | 5 | 2023-11-19T18:54:14Z | 2024-02-26T16:06:38Z | https://github.com/langchain-ai/langchain/issues/13579 | 2,000,989,464 | 13,579 |
[
"hwchase17",
"langchain"
]
| I am having a wonderful time with my code, but after changing my template it now fails before I even get to give my input. Baffling!
All the required imports are not shown here, nor is all the prompt text (which contains no special characters).
template = '''Your task is to extract the relationships between terms in the input text,
Format your output as a json list. '''
prompt = ChatPromptTemplate.from_messages([
SystemMessagePromptTemplate.from_template(template),
HumanMessagePromptTemplate.from_template("{input}"),
MessagesPlaceholder(variable_name="history "),
])
llm = ChatOpenAI(temperature=0.8, model_name='gpt-4-1106-preview')
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)
Traceback .........
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ConversationChain
__root__ Got unexpected prompt input variables. The prompt expects ['input', 'history '], but got ['history'] as inputs from memory, and input as the normal input key. (type=value_error) | ConversationChain failure after changing template text | https://api.github.com/repos/langchain-ai/langchain/issues/13578/comments | 6 | 2023-11-19T16:56:45Z | 2023-11-20T13:28:40Z | https://github.com/langchain-ai/langchain/issues/13578 | 2,000,941,147 | 13,578 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The feature request I am proposing involves the implementation of hybrid search, specifically using the Reciprocal Rank Fusion (RRF) method, in LangChain through the integration of OpenSearch's vector store.
This would enable the combination of keyword and similarity search. Currently, LangChain doesn't appear to support this functionality, even though OpenSearch has had this capability since its 2.10 release. The goal is to allow LangChain to call search pipelines using OpenSearch's vector implementation, enabling OpenSearch to handle the complexities of hybrid search.
**Relevant Links**:
https://opensearch.org/docs/latest/query-dsl/compound/hybrid
### Motivation
The motivation behind this request stems from the current limitation in LangChain regarding hybrid search capabilities. As someone working on a search project currently, I find it frustrating that despite OpenSearch supporting hybrid search since version 2.10, LangChain has not yet integrated this feature.
### Your contribution
I would gladly help as long as I get guidance.. | Implementing Hybrid Search (RRF) in LangChain Using OpenSearch Vector Store | https://api.github.com/repos/langchain-ai/langchain/issues/13574/comments | 13 | 2023-11-19T13:59:02Z | 2024-04-05T23:00:07Z | https://github.com/langchain-ai/langchain/issues/13574 | 2,000,862,839 | 13,574 |
[
"hwchase17",
"langchain"
]
| ### System Info
Using langchain 0.0.337 python, FastAPI.
When I use openai up through 0.28.1 it works fine. Upgrading to 1.0.0 or above results in the following error (when I try to use ChatOpenAI from langchain.chat_models):
"ImportError: Could not import openai python package. Please install it with `pip install openai`."
Trying to follow this notebook to integrate vision preview model:
https://github.com/langchain-ai/langchain/blob/master/cookbook/openai_v1_cookbook.ipynb
Any thoughts on what I might try? Thanks!
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. Install openai (1.0.0), langchain (0.0.337) & langchain-experimental (0.0.39).
2. In a FastAPI route, import ChatOpenAI from langchain.chat_models.
3. Use ChatOpenAI as usual (this works fine with openai <= 0.28.1).
` llm = ChatOpenAI(
temperature=temperature,
streaming=True,
verbose=True,
model_name=nameCode,
max_tokens=tokens,
callbacks=[callback],
openai_api_key=relevantAiKey,
)`
### Expected behavior
I would expect to not get a "failed import" error when the package is clearly installed. | Upgrading to OpenAI Python 1.0+ = ImportError: Could not import openai python package. | https://api.github.com/repos/langchain-ai/langchain/issues/13567/comments | 4 | 2023-11-18T22:04:33Z | 2023-11-21T00:39:08Z | https://github.com/langchain-ai/langchain/issues/13567 | 2,000,596,810 | 13,567 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
poetry show langchain
name : langchain
version : 0.0.259
description : Building applications with LLMs through composability
dependencies
- aiohttp >=3.8.3,<4.0.0
- async-timeout >=4.0.0,<5.0.0
- dataclasses-json >=0.5.7,<0.6.0
- langsmith >=0.0.11,<0.1.0
- numexpr >=2.8.4,<3.0.0
- numpy >=1,<2
- openapi-schema-pydantic >=1.2,<2.0
- pydantic >=1,<2
- PyYAML >=5.3
- requests >=2,<3
- SQLAlchemy >=1.4,<3
- tenacity >=8.1.0,<9.0.0
```
Python: v3.10.12
### Who can help?
@hwchase17 @agola11
With the current GPT-4 model, the invocation of `from_llm_and_api_docs` works as expected. However, when switching the model to the upcoming `gpt-4-1106-preview`, the function fails as the LLM, instead of returning the URL for the API call, returns a verbose response:
```
LLM response on_text: To generate the API URL for the user's question "basketball tip of the day", we need to include the `sport` parameter with the value "Basketball" since the user is asking about basketball. We also need to include the `event_start` parameter with today's date to get the tip of the day. Since the user is asking for a singular "tip", we should set the `limit` parameter to 1. The `order` parameter should be set to "popularity" if not specified, as per the documentation.
Given that today is 2023-11-18, the API URL would be:
http://<domain_name_hidden>/search/ai?date=2023-11-18&limit=1&order=popularity
```
The prompt should be refined or extra logic should be added to retrieve just the URL with the upcoming GPT-4 model.
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Try to get the upcoming GPT-4 model to return just the URL of the API call.
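A rough sketch of the setup (the API docs string and question refer to our own service; names are placeholders):
```python
from langchain.chains import APIChain
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-4-1106-preview", temperature=0)
chain = APIChain.from_llm_and_api_docs(llm, api_docs, verbose=True)  # api_docs: our endpoint's docs
chain.run("basketball tip of the day")
# with gpt-4-1106-preview the generated api_url step comes back as prose, which then fails:
```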
```
ERROR:root:No connection adapters were found for 'To generate the API URL for the user\'s question "<question edited>", we need to include the `sport` parameter with the value "Basketball" since the user is asking about basketball. We also need to include the `event_start` parameter with today\'s date to get the tip of the day. The `order` parameter should be set to "popularity" if not specified, as per the documentation.\n\nGiven that today is 2023-11-18, the API URL would be:\n\nhttp://<domain_removed>/search/ai?date=2023-11-18&limit=1&order=popularity'
```
### Expected behavior
The LLM to return just the URL and for Langchain to not error out. | from_llm_and_api_docs fails on gpt-4-1106-preview | https://api.github.com/repos/langchain-ai/langchain/issues/13566/comments | 3 | 2023-11-18T22:02:27Z | 2024-02-26T16:06:42Z | https://github.com/langchain-ai/langchain/issues/13566 | 2,000,596,258 | 13,566 |
[
"hwchase17",
"langchain"
]
| ### System Info
Facing this error while executing the langchain code.
```
pydantic.error_wrappers.ValidationError: 1 validation error for RetrievalQA
separators
extra fields not permitted (type=value_error.extra)
```
Code for RetrievalQA:
def retrieval_qa_chain(llm, prompt, retriever):
qa_chain = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever = retriever,
verbose=True,
callbacks=[handler],
chain_type_kwargs={"prompt": prompt},
return_source_documents=True
)
return qa_chain
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
def retrieval_qa_chain(llm, prompt, retriever):
qa_chain = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever = retriever,
verbose=True,
callbacks=[handler],
chain_type_kwargs={"prompt": prompt},
return_source_documents=True
)
return qa_chain
```
### Expected behavior
Need a fix for the above error | pydantic.error_wrappers.ValidationError: 1 validation error for RetrievalQA separators extra fields not permitted (type=value_error.extra) | https://api.github.com/repos/langchain-ai/langchain/issues/13565/comments | 3 | 2023-11-18T21:06:02Z | 2024-02-24T16:05:13Z | https://github.com/langchain-ai/langchain/issues/13565 | 2,000,580,109 | 13,565 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain Version: 0.0.337
Python: 3.10
### Who can help?
@hwchase17
Note: I am facing this issue with Weaviate; when I use the Chroma vector store it works fine.
I am trying to use "Weaviate Vector DB" with ParentDocumentRetriever and I am getting this error during the pipeline:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[13], line 1
----> 1 retriever.get_relevant_documents("realization")
File ~/miniconda3/envs/docs_qa/lib/python3.10/site-packages/langchain/schema/retriever.py:211, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
209 except Exception as e:
210 run_manager.on_retriever_error(e)
--> 211 raise e
212 else:
213 run_manager.on_retriever_end(
214 result,
215 **kwargs,
216 )
File ~/miniconda3/envs/docs_qa/lib/python3.10/site-packages/langchain/schema/retriever.py:204, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
202 _kwargs = kwargs if self._expects_other_args else {}
203 if self._new_arg_supported:
--> 204 result = self._get_relevant_documents(
205 query, run_manager=run_manager, **_kwargs
206 )
207 else:
208 result = self._get_relevant_documents(query, **_kwargs)
File ~/miniconda3/envs/docs_qa/lib/python3.10/site-packages/langchain/retrievers/multi_vector.py:36, in MultiVectorRetriever._get_relevant_documents(self, query, run_manager)
34 ids = []
35 for d in sub_docs:
---> 36 if d.metadata[self.id_key] not in ids:
37 ids.append(d.metadata[self.id_key])
38 docs = self.docstore.mget(ids)
KeyError: 'doc_id'
```
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import weaviate
from langchain.vectorstores.weaviate import Weaviate
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.retrievers import ParentDocumentRetriever
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.storage import RedisStore
from langchain.schema import Document
from langchain.storage._lc_store import create_kv_docstore
from langchain.storage import InMemoryStore
from langchain.vectorstores import Chroma
import redis
import os
os.environ["OPENAI_API_KEY"] = ""
client = weaviate.Client(url="https://test-n5.weaviate.network")
embeddings = OpenAIEmbeddings()
vectorstore = Weaviate(client=client, embedding=embeddings, index_name="test1".capitalize(), text_key="text", by_text=False)
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=50, chunk_overlap=1)
child_splitter = RecursiveCharacterTextSplitter(chunk_size=5, chunk_overlap=1)
store = InMemoryStore()
retriever = ParentDocumentRetriever(
vectorstore=vectorstore,
docstore=store,
child_splitter=child_splitter,
parent_splitter=parent_splitter,
id_key="doc_id"
)
docs = [
"The sun is shining brightly in the clear blue sky.",
"Roses are red, violets are blue, sugar is sweet, and so are you.",
"The quick brown fox jumps over the lazy dog.",
"Life is like a camera. Focus on what's important, capture the good times, develop from the negatives, and if things don't work out, take another shot.",
"A journey of a thousand miles begins with a single step.",
"The only limit to our realization of tomorrow will be our doubts of today.",
"Success is not final, failure is not fatal: It is the courage to continue that counts.",
"Happiness can be found even in the darkest of times if one only remembers to turn on the light."
]
docs = [Document(page_content=text) for en, text in enumerate(docs)]
retriever.add_documents(docs)
```
The output of the line below didn't contain an `id_key` for mapping the child to the parent.
`vectorstore.similarity_search("realization", k=4)`
So, when I tried `retriever.get_relevant_documents("realization")` this returned the KeyError I mentioned.
### Expected behavior
The output of `vectorstore.similarity_search("realization", k=2)` should have been:
```
[Document(page_content='real', metadata={"doc_id": "fdsfsdfsdfsdfsd"}),
 Document(page_content='real', metadata={"doc_id": "rewrwetet"})]
```
but the output I got was:
[Document(page_content='real'),
Document(page_content='real')]
| Bug: Weaviate raise doc_id error using with ParentDocumentRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/13563/comments | 2 | 2023-11-18T18:09:56Z | 2023-11-18T18:33:42Z | https://github.com/langchain-ai/langchain/issues/13563 | 2,000,522,960 | 13,563 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hi, the new Cohere embedding models are now available on Amazon Bedrock. How can we use them for their reranking capability (instead of just embedding via the BedrockEmbeddings class)?
### Motivation
These models perform well for reranking | BedrockRerank using newly available Cohere embedding model | https://api.github.com/repos/langchain-ai/langchain/issues/13562/comments | 10 | 2023-11-18T17:51:30Z | 2024-05-25T20:47:11Z | https://github.com/langchain-ai/langchain/issues/13562 | 2,000,516,549 | 13,562 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi there,
I have a LangChain app at https://huggingface.co/spaces/bstraehle/openai-llm-rag/blob/main/app.py. Using the latest release 0.0.337 produces the error below. Pinning the library to release 0.0.336 works as expected.
:blue_heart: LangChain, thanks!
Bernd
---
Traceback (most recent call last):
File "/home/user/app/app.py", line 129, in invoke
db = document_retrieval_mongodb(llm, prompt)
File "/home/user/app/app.py", line 91, in document_retrieval_mongodb
db = MongoDBAtlasVectorSearch.from_connection_string(MONGODB_URI,
File "/home/user/.local/lib/python3.10/site-packages/langchain/vectorstores/mongodb_atlas.py", line 109, in from_connection_string
raise ImportError(
ImportError: Could not import pymongo, please install it with `pip install pymongo`.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behavior:
1. In file https://huggingface.co/spaces/bstraehle/openai-llm-rag/blob/main/requirements.txt, unpin the langchain library (or pin it to release 0.0.337).
2. Use the app at https://huggingface.co/spaces/bstraehle/openai-llm-rag with MongoDB selected to invoke `MongoDBAtlasVectorSearch.from_connection_string`, which produces the error (a minimal equivalent call is sketched below).
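
A minimal equivalent call, independent of the app (URI, namespace and index name are placeholders; pymongo is installed in the environment):
```python
from langchain.vectorstores import MongoDBAtlasVectorSearch
from langchain.embeddings import OpenAIEmbeddings

db = MongoDBAtlasVectorSearch.from_connection_string(
    MONGODB_URI,                      # points at an Atlas cluster
    "langchain_db.test_collection",   # "<db>.<collection>" namespace
    OpenAIEmbeddings(),
    index_name="default",
)
```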
### Expected behavior
When using release 0.0.337 `MongoDBAtlasVectorSearch.from_connection_string`, error "ImportError: Could not import pymongo, please install it with `pip install pymongo`." should not happen. | Release 0.0.337 breaks MongoDBAtlasVectorSearch.from_connection_string? | https://api.github.com/repos/langchain-ai/langchain/issues/13560/comments | 7 | 2023-11-18T16:43:18Z | 2023-11-28T14:54:05Z | https://github.com/langchain-ai/langchain/issues/13560 | 2,000,493,292 | 13,560 |
[
"hwchase17",
"langchain"
]
I'm building an embedded chatbot using LangChain and OpenAI. It's working fine, but the issue is that responses take around 15-25 seconds, and I tried to use the time library to find out which line is taking this long.
`import os
import sys
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.indexes import VectorstoreIndexCreator
from langchain.indexes.vectorstore import VectorStoreIndexWrapper
from langchain.vectorstores import Chroma
from cachetools import TTLCache
import time
import constants
os.environ["OPENAI_API_KEY"] = constants.APIKEY
cache = TTLCache(maxsize=100, ttl=3600) # Example: Cache up to 100 items for 1 hour
PERSIST = False
template_prompt = "If the user greets you, greet back. If there is a link in the response return it as a clickable link as if it is an a tag '<a>'. If you don't know the answer, you can say, 'I don't have the information you need, I recommend contacting our support team for assistance.' Here is the user prompt: 'On the Hawsabah platform"
def initialize_chatbot():
query = None
if len(sys.argv) > 1:
query = sys.argv[1]
if PERSIST and os.path.exists("persist"):
print("Reusing index...\n")
vectorstore = Chroma(persist_directory="persist", embedding_function=OpenAIEmbeddings())
index = VectorStoreIndexWrapper(vectorstore=vectorstore)
else:
loader = TextLoader("data/data.txt")
if PERSIST:
index = VectorstoreIndexCreator(vectorstore_kwargs={"persist_directory":"persist"}).from_loaders([loader])
else:
index = VectorstoreIndexCreator().from_loaders([loader])
chat_chain = ConversationalRetrievalChain.from_llm(
llm=ChatOpenAI(model="gpt-3.5-turbo"),
retriever=index.vectorstore.as_retriever(search_kwargs={"k": 1}),
)
chat_history = []
return chat_chain, chat_history
MAX_CONVERSATION_HISTORY = 3 # Set the maximum number of interactions to keep in the buffer
def chatbot_response(user_prompt, chat_chain, chat_history):
# Check if the response is cached
cached_response = cache.get(user_prompt)
if cached_response:
return cached_response
# Check if the user's query is a greeting or unrelated
is_greeting = check_for_greeting(user_prompt)
# Conditionally clear the conversation history
if is_greeting:
chat_history.clear()
query_with_template = f"{template_prompt} {user_prompt}'"
s = time.time()
result = chat_chain({"question": query_with_template, "chat_history": chat_history})
e = time.time()
# Append the new interaction and limit the conversation buffer to the last MAX_CONVERSATION_HISTORY interactions
chat_history.append((user_prompt, result['answer']))
if len(chat_history) > MAX_CONVERSATION_HISTORY:
chat_history.pop(0) # Remove the oldest interaction
response = result['answer']
# Cache the response for future use
cache[user_prompt] = response
print("Time taken by chatbot_response:", (e - s) * 1000, "ms")
return response`
The line `result = chat_chain({"question": query_with_template, "chat_history": chat_history})` was the one taking this long. I tried to figure out how to fix this but I couldn't. I also tried to implement word streaming to help make it look faster, but it only worked for the davinci model. Is there a way or method to make responses faster? | Response taking way to long | https://api.github.com/repos/langchain-ai/langchain/issues/13558/comments | 4 | 2023-11-18T15:01:10Z | 2024-02-25T16:05:22Z | https://github.com/langchain-ai/langchain/issues/13558 | 2,000,456,203 | 13,558 |
[
"hwchase17",
"langchain"
]
| ### System Info
Bumped into HTTPError when using DuckDuckGo search wrapper in an agent, currently using `langchain==0.0.336`.
Here's a snippet of the traceback below.
```
File "/path/to/venv/lib/python3.10/site-packages/langchain/utilities/duckduckgo_search.py", line 64, in run
snippets = self.get_snippets(query)
File "/path/to/venv/lib/python3.10/site-packages/langchain/utilities/duckduckgo_search.py", line 55, in get_snippets
for i, res in enumerate(results, 1):
File "/path/to/venv/lib/python3.10/site-packages/duckduckgo_search/duckduckgo_search.py", line 96, in text
for i, result in enumerate(results, start=1):
File "/path/to/venv/lib/python3.10/site-packages/duckduckgo_search/duckduckgo_search.py", line 148, in _text_api
resp = self._get_url("GET", "https://links.duckduckgo.com/d.js", params=payload)
File "/path/to/venv/lib/python3.10/site-packages/duckduckgo_search/duckduckgo_search.py", line 55, in _get_url
raise ex
File "/path/to/venv/lib/python3.10/site-packages/duckduckgo_search/duckduckgo_search.py", line 48, in _get_url
raise httpx._exceptions.HTTPError("")
httpx.HTTPError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/path/to/src/single_host.py", line 179, in <module>
response = chain({"topic": "Why did Sam Altman got fired by OpenAI.",
File "/path/to/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/path/to/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 306, in __call__
else self._call(inputs)
File "/path/to/src/single_host.py", line 163, in _call
script = script_chain.run({"topic": inputs["topic"], "focuses": inputs["focuses"], "keypoints": keypoints})
File "/path/to/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 505, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/path/to/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/path/to/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 306, in __call__
else self._call(inputs)
File "/path/to/src/single_host.py", line 117, in _call
information = agent.run(background_info_search_formatted)
File "/path/to/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 505, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/path/to/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/path/to/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "/path/to/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1245, in _call
next_step_output = self._take_next_step(
File "/path/to/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1095, in _take_next_step
observation = tool.run(
File "/path/to/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 344, in run
raise e
File "/path/to/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 337, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/path/to/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 510, in _run
self.func(
File "/path/to/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 344, in run
raise e
File "/path/to/venv/lib/python3.10/site-packages/langchain/tools/base.py", line 337, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/path/to/venv/lib/python3.10/site-packages/langchain/tools/ddg_search/tool.py", line 36, in _run
return self.api_wrapper.run(query)
File "/path/to/venv/lib/python3.10/site-packages/langchain/utilities/duckduckgo_search.py", line 67, in run
raise ToolException("DuckDuckGo Search encountered HTTPError.")
```
I tried to add error handling in the `run()` method in `langchain/utilities/duckduckgo_search.py`, something like the code below:
```
def run(self, query: str) -> str:
try:
snippets = self.get_snippets(query)
return " ".join(snippets)
except httpx._exceptions.HTTPError as e:
raise ToolException("DuckDuckGo Search encountered HTTPError.")
```
I have also added `handle_tool_error`, copied from the LangChain [documentation](https://python.langchain.com/docs/modules/agents/tools/custom_tools):
```
def _handle_error(error: ToolException) -> str:
return (
"The following errors occurred during tool execution:"
+ error.args[0]
+ "Please try another tool."
)
```
However, these methods do not seem to stop the failure, and the error shown in the first code block above still occurs. Am I implementing this incorrectly, or should there be another mechanism to handle the error that occurred?
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Adding `handle_tool_errors` and passing the `_handle_error` function into it.
```
news_tool = Tool.from_function(name="News Search",
func=news_duckduckgo.run,
description="News search to help you look up latest news, which help you to understand latest current affair, and being up-to-date.",
handle_tool_errors=_handle_error)
```
2. It does not seem to work, so I tried to change the DuckDuckGo wrapper, as described above.
3. HTTPError still leads to an abrupt stop of agent actions (a sketch of the intended setup is below).
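
For reference, a sketch of the intended setup; note it uses the singular `handle_tool_error` field on the tool (whether the plural keyword used above is accepted is worth checking):
```python
from langchain.tools import Tool
from langchain.tools.base import ToolException


def _handle_error(error: ToolException) -> str:
    return (
        "The following errors occurred during tool execution:"
        + error.args[0]
        + " Please try another tool."
    )


news_tool = Tool.from_function(
    name="News Search",
    func=news_duckduckgo.run,  # wrapper instance from the snippets above
    description="News search to look up the latest news.",
    handle_tool_error=_handle_error,
)
```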
### Expected behavior
Expecting a proper error handling method, if tool fails, Agent moves on, or try n time before moving on to next step. | Adding DuckDuckGo search HTTPError handling | https://api.github.com/repos/langchain-ai/langchain/issues/13556/comments | 8 | 2023-11-18T13:58:54Z | 2024-02-24T16:05:22Z | https://github.com/langchain-ai/langchain/issues/13556 | 2,000,431,479 | 13,556 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
How to update the template and packages of an app created from a template?
I checked:
https://github.com/langchain-ai/langchain/tree/master/templates
and a couple of templates' README.mds, but this info is missing and it's not obvious for us citizen devs.
I assumed it should be done via langchain-cli, but there's no such option.
So please provide a solution and add it to the docs.
### Idea or request for content:
How to update the template and packages of an app created from a template? | DOC: add info about how to update the template and the packages of an app created from a template | https://api.github.com/repos/langchain-ai/langchain/issues/13551/comments | 5 | 2023-11-18T10:44:59Z | 2024-02-24T16:05:27Z | https://github.com/langchain-ai/langchain/issues/13551 | 2,000,367,453 | 13,551 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain: 0.0.336
Python: 3.11.6
OS: Microsoft Windows [Version 10.0.19045.3693]
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Should be very easy to reproduce. Just enable streaming and use function calling in chat. The info about `function_call` that is supposed to be in `additional_kwargs` is lost. I found this issue because I wanted to use the 'function call' feature.
This is the debug output from my console. As you can see, the output becomes an `AIMessage` with empty `content`, and `additional_kwargs` is empty.
```
[llm/end] [1:chain:AgentExecutor > 2:llm:QianfanChatEndpoint] [2.39s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "",
"generation_info": {
"finish_reason": "stop"
},
"type": "ChatGeneration",
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"AIMessage"
],
"kwargs": {
"content": "",
"additional_kwargs": {}
}
}
}
]
],
"llm_output": {
"token_usage": {},
"model_name": "ERNIE-Bot"
},
"run": null
}
[chain/end] [1:chain:AgentExecutor] [2.40s] Exiting Chain run with output:
{
"output": ""
}
```
A quick-and-dirty hack in `QianfanChatEndpoint` can fix the issue. Please read the following code related to `first_additional_kwargs` (which I added).
```python
async def _agenerate(
self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> ChatResult:
if self.streaming:
completion = ""
token_usage = {}
first_additional_kwargs = None
async for chunk in self._astream(messages, stop, run_manager, **kwargs):
if first_additional_kwargs is None:
first_additional_kwargs = chunk.message.additional_kwargs
completion += chunk.text
lc_msg = AIMessage(content=completion, additional_kwargs=first_additional_kwargs or {})
gen = ChatGeneration(
message=lc_msg,
generation_info=dict(finish_reason="stop"),
)
return ChatResult(
generations=[gen],
llm_output={"token_usage": {}, "model_name": self.model},
)
params = self._convert_prompt_msg_params(messages, **kwargs)
response_payload = await self.client.ado(**params)
lc_msg = _convert_dict_to_message(response_payload)
generations = []
gen = ChatGeneration(
message=lc_msg,
generation_info={
"finish_reason": "stop",
**response_payload.get("body", {}),
},
)
generations.append(gen)
token_usage = response_payload.get("usage", {})
llm_output = {"token_usage": token_usage, "model_name": self.model}
return ChatResult(generations=generations, llm_output=llm_output)
```
Similarly `_generate` probably contains the same bug.
The following is the new debug output in console. As you can see, 'function call' now works. `additional_kwargs` also contains non-empty `usage`. But `token_usage` in `llm_output` is still empty.
```
[llm/end] [1:chain:AgentExecutor > 2:llm:QianfanChatEndpointHacked] [2.21s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "",
"generation_info": {
"finish_reason": "stop"
},
"type": "ChatGeneration",
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain",
"schema",
"messages",
"AIMessage"
],
"kwargs": {
"content": "",
"additional_kwargs": {
"id": "as-zh6tasbjyb",
"object": "chat.completion",
"created": 1700274407,
"sentence_id": 0,
"is_end": true,
"is_truncated": false,
"result": "",
"need_clear_history": false,
"function_call": {
"name": "GetCurrentTime",
"arguments": "{}"
},
"search_info": {
"is_beset": 0,
"rewrite_query": "",
"search_results": null
},
"finish_reason": "function_call",
"usage": {
"prompt_tokens": 121,
"completion_tokens": 0,
"total_tokens": 121
}
}
}
}
}
]
],
"llm_output": {
"token_usage": {},
"model_name": "ERNIE-Bot"
},
"run": null
}
```
### Expected behavior
`additional_kwargs` should not be empty. | AIMessage in output of Qianfan with streaming enabled may lose info about 'additional_kwargs', which causes 'function_call', 'token_usage' info lost. | https://api.github.com/repos/langchain-ai/langchain/issues/13548/comments | 6 | 2023-11-18T03:19:52Z | 2024-02-25T16:05:27Z | https://github.com/langchain-ai/langchain/issues/13548 | 2,000,187,192 | 13,548 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Here is my code:
"""For basic init and call"""
import os
import qianfan
from langchain.chat_models import QianfanChatEndpoint
from langchain.chat_models.base import HumanMessage
os.environ["QIANFAN_AK"] = "myak"
os.environ["QIANFAN_SK"] = "mysk"
chat = QianfanChatEndpoint(
streaming=True,
)
res = chat.stream([HumanMessage(content="给我一篇100字的睡前故事")], streaming=True)
for r in res:
print("chat resp:", r)
And after it prints two sentences, it returns an error. The full error message is:
Traceback (most recent call last):
File "d:\work\qianfan_test.py", line 13, in <module>
for r in res:
File "C:\Users\a1383\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chat_models\base.py", line 220, in stream
raise e
File "C:\Users\a1383\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\chat_models\base.py", line 216, in stream
generation += chunk
File "C:\Users\a1383\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\schema\output.py", line 94, in __add__
message=self.message + other.message,
File "C:\Users\a1383\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\schema\messages.py", line 225, in __add__
additional_kwargs=self._merge_kwargs_dict(
File "C:\Users\a1383\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\schema\messages.py", line 138, in _merge_kwargs_dict
raise ValueError(
ValueError: Additional kwargs key created already exists in this message.
I am only following the official LangChain documentation: https://python.langchain.com/docs/integrations/chat/baidu_qianfan_endpoint
And it is not working. What have I done wrong?
### Suggestion:
_No response_ | Issue: When using Qianfan chat model and enabling streaming, get ValueError | https://api.github.com/repos/langchain-ai/langchain/issues/13546/comments | 4 | 2023-11-18T02:49:13Z | 2024-03-13T19:55:40Z | https://github.com/langchain-ai/langchain/issues/13546 | 2,000,175,679 | 13,546 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version: 0.0.337
Python version: 3.10.13
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
db = Chroma.from_documents(docs, AzureOpenAIEmbeddings())
### Expected behavior
This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error:
BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
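
A possible workaround sketch while this is sorted out (not a fix for the regression itself): cap the batch size so each embeddings request stays within Azure's 16-input limit; `chunk_size` is assumed to control that batching.
```python
from langchain.embeddings import AzureOpenAIEmbeddings
from langchain.vectorstores import Chroma

embeddings = AzureOpenAIEmbeddings(chunk_size=16)
db = Chroma.from_documents(docs, embeddings)
```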
| New update broke embeddings models | https://api.github.com/repos/langchain-ai/langchain/issues/13539/comments | 3 | 2023-11-17T21:47:33Z | 2023-11-18T20:07:42Z | https://github.com/langchain-ai/langchain/issues/13539 | 1,999,979,607 | 13,539 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Why does the code below complain that extra_instructions is a missing key, even though it's clearly included in input_variables=["context", "question", "extra_instructions"]?
Any help is greatly appreciated.
vectorstore = Chroma(
collection_name=collection_name,
persist_directory=chroma_db_directory,
embedding_function=embedding,
)
prompt_template = """
{extra_instructions}
{context}
{question}
Continuation:
"""
PROMPT = PromptTemplate(
template=prompt_template,
input_variables=["context", "question", "extra_instructions"],
)
qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=vectorstore.as_retriever(
search_kwargs={"k": 1}
),
chain_type_kwargs={"verbose": True, "prompt": PROMPT},
)
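
One direction that may explain and avoid this (a sketch, assuming the chain only fills `{context}` and `{question}` by itself): pre-fill the extra variable so the prompt no longer expects it at query time.
```python
PROMPT = PromptTemplate(
    template=prompt_template,
    input_variables=["context", "question"],
    partial_variables={"extra_instructions": extra_instructions_text},  # extra_instructions_text is a placeholder
)
```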
### Suggestion:
_No response_ | Issue: Missing some input keys in langchain even when it's present - unclear how prompt args are treated | https://api.github.com/repos/langchain-ai/langchain/issues/13536/comments | 3 | 2023-11-17T21:00:10Z | 2024-02-23T16:05:27Z | https://github.com/langchain-ai/langchain/issues/13536 | 1,999,921,377 | 13,536 |