issue_owner_repo
listlengths 2
2
| issue_body
stringlengths 0
261k
⌀ | issue_title
stringlengths 1
925
| issue_comments_url
stringlengths 56
81
| issue_comments_count
int64 0
2.5k
| issue_created_at
stringlengths 20
20
| issue_updated_at
stringlengths 20
20
| issue_html_url
stringlengths 37
62
| issue_github_id
int64 387k
2.46B
| issue_number
int64 1
127k
|
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain 0.0.317
atlassian-python-api 3.41.3
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When playing with documents from Confluence (Cloud hosting in my case), I have noticed that I am getting multiple copies of the same document in my vector store or from my Confluence retriever (custom implementation not in LangChain). In both cases, the vector store is populated with LangChain's Confluence document loader and my retriever is also using that same document loader. The problem happens in `ConfluenceLoader.paginate_request()` in the `while` loop until we reach `max_pages`. In summary, you will get copies of the same documents until you reach `max_pages`. For example, if your query (I'm using CQL in my use-case) returns 10 documents and `max_pages` is set to `200`, you will get 20 copies of each document. This also makes the search process much slower.
The previous pagination system of Confluence REST server was leveraging an index controlled by the `start` parameter. According to this [page](https://developer.atlassian.com/cloud/confluence/change-notice-moderize-search-rest-apis/), it has been deprecated in favor of a cursor system. It is now recommended to follow `_links.next` instead of relying on the `start` parameter and getting an empty result when documents to be returned have been exhausted.
This change is now in effect for the Cloud hosting of Confluence, not sure about private deployments. You can see for yourself by running a script that looks like this:
```python
from atlassian import Confluence
site = "templates" # Public site
confluence = Confluence(url=f"https://{site}.atlassian.net/")
cql='type=page AND text~"is"'
response_1 = confluence.cql(cql=cql, start=0)
print(response_1) # Easier to see in debug mode
next_start = len(response_1["results"])*1000
response_2 = confluence.cql(cql=cql, start=next_start)
print(response_2) # Notice returned documents are the same as in response_1
```
### Expected behavior
The Confluence document loader should handle properly pagination for both the old (using `start`) and new way (cursor) to maintain backward compatibility for private Confluence deployments. It should also properly stop when `max_pages` has been reached. | Confluence document loader not handling pagination correctly anymore (when using Confluence Cloud) | https://api.github.com/repos/langchain-ai/langchain/issues/12082/comments | 5 | 2023-10-20T15:41:10Z | 2024-05-20T07:16:20Z | https://github.com/langchain-ai/langchain/issues/12082 | 1,954,551,450 | 12,082 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello,
I am using langchain offline on a local machine. I'd like to split document over tokens user TokenTextSplitter.
Unfortunately, I can't get to make the class to user a local tokenizer.
I tried to do
`text_splitter = TokenTextSplitter(model_name='/my/path/to/my/tokenizer/', chunk_size = 50, chunk_overlap = 10`
like I did for the HuggingFaceEmbedding (and it worked pretty well).
But I get the following error:
Could not automaticcaly map '/my/path/to/my/tokenizer/' to a tokenizer. Please use 'tiktoken.get_encoding' to explicitly get the tokenizer you expect
Couldn't find any info in the documentation about setup an offline / local tokenizer.
### Suggestion:
_No response_ | Issue: TokenTextSplitter with local tokenizer ? | https://api.github.com/repos/langchain-ai/langchain/issues/12078/comments | 4 | 2023-10-20T13:46:35Z | 2023-10-20T14:30:20Z | https://github.com/langchain-ai/langchain/issues/12078 | 1,954,330,346 | 12,078 |
[
"hwchase17",
"langchain"
]
| ### System Info
Running SQLDatabaseChain with LangChain version 0.0.319 and Snowflake return SQL query which is to be executed on snowflake database in next step. But returned query contains the prefix "SQLQuery:\n" which will broke the whole chain when the query gets executed on snowflake. How to get rid of this "SQLQuery:\n" prefix from query.
Note: Using AWS Bedrock endpoint with Anthropic Claudev2 LLM.
Using default prompt provided in the langchain:
`Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for a the few relevant columns given the question.
Pay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.
Use the following format:
Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here
`
Getting following response in debug/verbose mode:
`"text": " SQLQuery:\nSELECT top 5 p.productname, sum(od.quantity) as total_sold\nFROM products p\nJOIN orderdetails od ON p.productid = od.productid \nGROUP BY p.productname\nORDER BY total_sold DESC\n"
}`
Any help would be appreciated.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just use SQLDatabaseChain with AWS Bedrock Anthropic Claude 2 to produce this one.
### Expected behavior
Should not prefix "SQLQuery:\n" in front returned SQL query. | Running SQLDatabaseChain adds prefix "SQLQuery:\n" infront of returned SQL by LLM, causing invalid query when ran on Database using chain | https://api.github.com/repos/langchain-ai/langchain/issues/12077/comments | 11 | 2023-10-20T13:14:25Z | 2024-04-14T16:17:56Z | https://github.com/langchain-ai/langchain/issues/12077 | 1,954,271,029 | 12,077 |
[
"hwchase17",
"langchain"
]
| ### System Info
MacOS Ventura 13.6
Python 3.10.13
langchain 0.0.306
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
llm = AzureOpenAI(temperature=0, model="gpt-35-turbo")
compressor = LLMChainExtractor.from_llm(llm)
base_retriever = vectorstores.as_retriever()
compression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=base_retriever)
compression_retriever.get_relevant_documents("Owner")
```
### Expected behavior
Expected to return the docs compressed from the vectorstore, but I'm getting the `AttributeError: 'str' object has no attribute 'get'` | Function get_relevant_docs() returning AttributeError | https://api.github.com/repos/langchain-ai/langchain/issues/12076/comments | 4 | 2023-10-20T12:49:04Z | 2024-02-21T16:08:04Z | https://github.com/langchain-ai/langchain/issues/12076 | 1,954,222,876 | 12,076 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
#### **Description**
In various places within the documentation, the import statement is being used:
```python
from langchain.llms import OpenAI
```
However, as hinted in https://github.com/langchain-ai/langchain/commit/779790167e37f49b3eec5d04dfd30b0447d4a32a, this statement has been deprecated. The correct and updated import statement should be:
```python
from langchain.chat_models import ChatOpenAI
```
#### **Problem**
The use of the deprecated import statement leads to problems, particularly when interfacing with Pydantic. Users following the outdated documentation might experience unexpected errors or issues due to this.
| DOC: Deprecated Import Statement in Documentation for OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/12074/comments | 1 | 2023-10-20T12:31:33Z | 2024-02-06T16:17:06Z | https://github.com/langchain-ai/langchain/issues/12074 | 1,954,194,603 | 12,074 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain Version: 0.0.316 - Python 3.8.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
@hwchase17 @agola11
I am using StructuredTool for multi input support. Below is my initialize_agent
```
sys_msg = "Assistant’s main duty is to decide which tool to use......"
system_message = SystemMessage(content=sys_msg)
agent_executor = initialize_agent(
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
tools=tools,
llm=llm,
verbose=True,
max_iterations=3,
early_stopping_method='generate',
handle_parsing_errors=True,
agent_kwargs = { "system_message": system_message }
)
```
I dont see the above (sys_msg) as the system_message in the openAI query. Instead I see this in the debug logs
```
2023-10-20 08:54:01 DEBUG api_version=None data='{"messages": [{"role": "system", "content": "Respond to the human as helpfully and accurately as possible. You have access to the following tools:\\n\\n
```
I believe this is the default system message. How do I change this to my custom system message in AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION.
I see a similar ticket for OpenAI functions AgentType : https://github.com/langchain-ai/langchain/issues/6334
I also tried specifying system_message=system_message as discussed in the above ticket. That did not help either
### Expected behavior
The custom system_message that I provide in agent_kwargs should replace the default one going out to openAI. | system_message in agent_kwargs not updating System Message in AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION | https://api.github.com/repos/langchain-ai/langchain/issues/12072/comments | 3 | 2023-10-20T11:21:19Z | 2023-11-06T08:27:49Z | https://github.com/langchain-ai/langchain/issues/12072 | 1,954,083,040 | 12,072 |
[
"hwchase17",
"langchain"
]
| ### System Info
Platform: local development on MacOS Ventura
Python version: 3.9.7
langchain.version: 0.0.315
faiss.version: 1.7.4
openai.version: 0.28.1
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is my code:
class TestGPT(object):
def __init__(self):
self.llm_model = ChatOpenAI(temperature=0.1, streaming=True, model="gpt-3.5-turbo")
self.store = self.load_store_data()
self.prompt = f"""Use 3 sentences at most."""
@staticmethod
def load_store_data():
with open(f"faiss_store.pkl", "rb") as f:
store = pickle.load(f)
store.index = faiss.read_index(f"docs.index")
return store
def ask(self, user_prompt):
content = self.prompt + "\n\n" + "Question: " + user_prompt + "\n\n"
chain = RetrievalQAWithSourcesChain.from_chain_type(llm=self.llm_model,
retriever=self.store.as_retriever())
gpt_response = chain(content)['answer']
return self.control_response_has_source(gpt_response)
TestGPT().ask("hello")
Error is **"AttributeError: 'OpenAIEmbeddings' object has no attribute 'skip_empty'"**
Traceback:
`Traceback (most recent call last):
File "/Users/md/Desktop/support_gpt.py", line 72, in <module>
res = SupportGPT().ask("uygulama nasıl kullanılır")
File "/Users/md/Desktop/support_gpt.py", line 45, in ask
gpt_response = chain(content)['answer']
File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/chains/qa_with_sources/base.py", line 151, in _call
docs = self._get_docs(inputs, run_manager=_run_manager)
File "/Users/md/Desktop/venv/venv/lib/python3.9/site-packages/langchain/chains/qa_with_sources/retrieval.py", line 50, in _get_docs
docs = self.retriever.get_relevant_documents(
File "/Users/md/Desktop/venv/venv/lib/python3.9/site-packages/langchain/schema/retriever.py", line 211, in get_relevant_documents
raise e
File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/schema/retriever.py", line 204, in get_relevant_documents
result = self._get_relevant_documents(
File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/schema/vectorstore.py", line 585, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/vectorstores/faiss.py", line 364, in similarity_search
docs_and_scores = self.similarity_search_with_score(
File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/vectorstores/faiss.py", line 305, in similarity_search_with_score
embedding = self._embed_query(query)
File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/vectorstores/faiss.py", line 138, in _embed_query
return self.embedding_function(text)
File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 518, in embed_query
return self.embed_documents([text])[0]
File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 490, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 374, in _get_len_safe_embeddings
response = embed_with_retry(
File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 107, in embed_with_retry
return _embed_with_retry(**kwargs)
File "/Users/md/Desktop/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "/Users/md/Desktop/venv/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "/Users/md/Desktop/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 438, in result
return self.__get_result()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
raise self._exception
File "/Users/md/Desktop/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 105, in _embed_with_retry
return _check_response(response, skip_empty=embeddings.skip_empty)
AttributeError: 'OpenAIEmbeddings' object has no attribute 'skip_empty'
`
### Expected behavior
This function should not throw an error and also I do not get any error when I downgrade the langchain to 0.0.251 | AttributeError: 'OpenAIEmbeddings' object has no attribute 'skip_empty' | https://api.github.com/repos/langchain-ai/langchain/issues/12071/comments | 2 | 2023-10-20T11:01:54Z | 2024-02-13T16:10:13Z | https://github.com/langchain-ai/langchain/issues/12071 | 1,954,054,770 | 12,071 |
[
"hwchase17",
"langchain"
]
| ### Feature request
At the moment huggingface_hub.py supports only sentence-transformers because there is a validation:
```
if not repo_id.startswith("sentence-transformers"):
raise ValueError(
"Currently only 'sentence-transformers' embedding models "
f"are supported. Got invalid 'repo_id' {repo_id}."
)
```
### Motivation
At the moment there are other higher-performing embedders on the hub, like e5 or bge family.
### Your contribution
I think you should relax the constraint by also including embedders supported by sentence-transformer
```
if not repo_id.startswith(("sentence-transformers", "intfloat", "BAAI")):
raise ValueError(
"Currently only 'sentence-transformers' embedding models "
f"are supported. Got invalid 'repo_id' {repo_id}."
)
``` | Support other embedder in Hugginface Hub | https://api.github.com/repos/langchain-ai/langchain/issues/12069/comments | 1 | 2023-10-20T08:39:34Z | 2024-01-30T05:53:04Z | https://github.com/langchain-ai/langchain/issues/12069 | 1,953,813,752 | 12,069 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Textract released the [LAYOUT](https://docs.aws.amazon.com/textract/latest/dg/layoutresponse.html) feature, which identifies different layout elements like tables, lists, figures, text-paragraphs and titles. This should be used by the AmazonTextractPDFParser to generate a linearized output to improve downstream LLMs accuracy with those hints.
Text output should render tables and key/value pairs and text in reading order for multi-column text and prefix lists with a *, when features like LAYOUT, TABLES, FORMS are passed to the textract call
### Motivation
Improve downstream LLM accuracy
### Your contribution
I'll submit a PR for this feature. | feat: Add Linearized output to Textract PDFLoader | https://api.github.com/repos/langchain-ai/langchain/issues/12068/comments | 1 | 2023-10-20T08:28:07Z | 2023-10-31T01:02:11Z | https://github.com/langchain-ai/langchain/issues/12068 | 1,953,794,419 | 12,068 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
The current documentation for rag does not focus on adding metadata to chunks to make sure before even doing similarity search on vector db only the relevant docs with correct metadata are retrieved from vector db.
The issue is with open source embedding models having only mostly 512 sequence length.
Lot of context is lost if correct metadata is not present in each chunk.
This improves ability of local model to improve results by factor of magnitude. We tested the same with local hosted embedded model and codellama to improve queries on mermaid and currently getting results onpar with gpt4.
### Idea or request for content:
This notebook covers the same.
https://github.com/unoplat/unoplat-lamp/blob/1-mermaid-expert/llm-rag/Mermaid%20Expert%20RAG.ipynb
The comparison is done with codellama without the KB and gpt4. | DOC: Improve MarkDown Splitting and Metadata as Part of RAG. | https://api.github.com/repos/langchain-ai/langchain/issues/12067/comments | 2 | 2023-10-20T08:02:52Z | 2024-03-16T16:05:11Z | https://github.com/langchain-ai/langchain/issues/12067 | 1,953,754,290 | 12,067 |
[
"hwchase17",
"langchain"
]
| ### System Info
accelerate==0.23.0
aiohttp==3.8.6
aiosignal==1.3.1
altair==5.1.2
annotated-types==0.6.0
anyio==3.7.1
appdirs==1.4.4
asgiref==3.7.2
asttokens==2.4.0
async-timeout==4.0.3
attrs==23.1.0
auto-gptq==0.4.2
backcall==0.2.0
bentoml==1.1.7
bitsandbytes==0.41.1
blinker==1.6.3
build==1.0.3
cachetools==5.3.1
cattrs==23.1.2
certifi==2023.7.22
cffi==1.16.0
chardet==5.2.0
charset-normalizer==3.3.0
circus==0.18.0
click==8.1.7
click-option-group==0.5.6
cloudpickle==3.0.0
cmake==3.27.7
colorama==0.4.6
coloredlogs==15.0.1
comm==0.1.4
contextlib2==21.6.0
contourpy==1.1.1
cryptography==41.0.4
cssselect==1.2.0
cuda-python==12.2.0
cycler==0.12.1
Cython==3.0.4
dataclasses-json==0.6.1
datasets==2.14.5
debugpy==1.8.0
decorator==5.1.1
deepmerge==1.1.0
Deprecated==1.2.14
dill==0.3.7
distro==1.8.0
et-xmlfile==1.1.0
executing==2.0.0
fastcore==1.5.29
filelock==3.12.4
filetype==1.2.0
fonttools==4.43.1
frozenlist==1.4.0
fs==2.4.16
fsspec==2023.6.0
ghapi==1.0.4
gitdb==4.0.10
GitPython==3.1.40
greenlet==3.0.0
grpcio==1.59.0
grpcio-health-checking==1.59.0
grpcio-tools==1.59.0
h11==0.14.0
h2==4.1.0
hpack==4.0.0
httpcore==0.18.0
httpx==0.25.0
huggingface-hub==0.17.3
humanfriendly==10.0
hyperframe==6.0.1
idna==3.4
importlib-metadata==6.8.0
inflection==0.5.1
InstructorEmbedding==1.0.1
ipykernel==6.25.2
ipython==8.16.1
ipywidgets==8.1.1
jedi==0.19.1
Jinja2==3.1.2
joblib==1.3.2
JPype1==1.4.1
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.19.1
jsonschema-specifications==2023.7.1
jupyter_client==8.4.0
jupyter_core==5.4.0
jupyterlab-widgets==3.0.9
kiwisolver==1.4.5
langchain==0.0.318
langsmith==0.0.47
lark==1.1.7
lxml==4.9.3
markdown-it-py==3.0.0
MarkupSafe==2.1.3
marshmallow==3.20.1
matplotlib==3.8.0
matplotlib-inline==0.1.6
mdurl==0.1.2
mpmath==1.3.0
multidict==6.0.4
multiprocess==0.70.15
mypy-extensions==1.0.0
nest-asyncio==1.5.8
networkx==3.2
ninja==1.11.1.1
nltk==3.8.1
numexpr==2.8.7
numpy==1.26.1
openai==0.28.1
openapi-schema-pydantic==1.2.4
openllm==0.3.9
openllm-client==0.3.9
openllm-core==0.3.9
openpyxl==3.1.2
opentelemetry-api==1.20.0
opentelemetry-instrumentation==0.41b0
opentelemetry-instrumentation-aiohttp-client==0.41b0
opentelemetry-instrumentation-asgi==0.41b0
opentelemetry-instrumentation-grpc==0.41b0
opentelemetry-sdk==1.20.0
opentelemetry-semantic-conventions==0.41b0
opentelemetry-util-http==0.41b0
optimum==1.13.2
orjson==3.9.9
packaging==23.2
pandas==2.1.1
parso==0.8.3
pathspec==0.11.2
pdfminer.six==20221105
pdfquery==0.4.3
peft==0.5.0
pickleshare==0.7.5
Pillow==10.1.0
pip-autoremove==0.10.0
pip-requirements-parser==32.0.1
pip-review==1.3.0
pip-tools==7.3.0
platformdirs==3.11.0
portalocker==2.8.2
prometheus-client==0.17.1
prompt-toolkit==3.0.39
protobuf==4.24.4
psutil==5.9.6
pure-eval==0.2.2
pyarrow==13.0.0
pycparser==2.21
pycryptodome==3.19.0
pydantic==2.4.2
pydantic_core==2.10.1
pydeck==0.8.0
Pygments==2.16.1
Pympler==1.0.1
PyMuPDF==1.23.5
pymupdf-fonts==1.0.5
PyMuPDFb==1.23.5
pynvml==11.5.0
pyparsing==3.1.1
pyproject_hooks==1.0.0
pyquery==2.0.0
pyreadline3==3.4.1
python-dateutil==2.8.2
python-dotenv==1.0.0
python-json-logger==2.0.7
python-multipart==0.0.6
pytz==2023.3.post1
pytz-deprecation-shim==0.1.0.post0
pywin32==306
PyYAML==6.0.1
pyzmq==25.1.1
qdrant-client==1.6.3
referencing==0.30.2
regex==2023.10.3
requests==2.31.0
rich==13.6.0
roman==4.1
rouge==1.0.1
rpds-py==0.10.6
safetensors==0.4.0
schema==0.7.5
scikit-learn==1.3.1
scipy==1.11.3
sentence-transformers==2.2.2
sentencepiece==0.1.99
sigfig==1.3.3
simple-di==0.1.5
six==1.16.0
smmap==5.0.1
sniffio==1.3.0
sortedcontainers==2.4.0
spyder-kernels==2.4.4
SQLAlchemy==2.0.22
stack-data==0.6.3
starlette==0.31.1
streamlit==1.27.2
streamlit-chat==0.1.1
sympy==1.12
tabula-py==2.8.2
tabulate==0.9.0
tenacity==8.2.3
threadpoolctl==3.2.0
tiktoken==0.5.1
tokenizers==0.14.1
toml==0.10.2
toolz==0.12.0
torch==2.1.0
torchaudio==2.1.0
torchvision==0.16.0
tornado==6.3.3
tqdm==4.66.1
traitlets==5.11.2
transformers @ git+https://github.com/huggingface/transformers@43bfd093e1817c0333a1e10fcbdd54f1032baad0
typing-inspect==0.9.0
typing_extensions==4.8.0
tzdata==2023.3
tzlocal==5.1
urllib3==1.26.18
uvicorn==0.23.2
validators==0.22.0
watchdog==3.0.0
watchfiles==0.21.0
wcwidth==0.2.8
widgetsnbextension==4.0.9
wrapt==1.15.0
xformers==0.0.22.post4
xlrd==2.0.1
xxhash==3.4.1
yarl==1.9.2
zipp==3.17.0
using python 3.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from dotenv import load_dotenv
import os
from langchain.chat_models import ChatOpenAI
from qdrant_client import QdrantClient as qcqc
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores import Qdrant
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
load_dotenv()
openai_key = os.getenv('OPENAI_API_KEY')
db_path = os.getenv('vectordb_local_path')
key = openai_key
llm = ChatOpenAI(
temperature = 0,
model = 'gpt-3.5-turbo',
streaming = True)
text_metadata = [AttributeInfo(name = 'book name',
description = "name of the book.",
type = "string"),
AttributeInfo(name = 'author',
description = 'Author of the book',
type = 'string'),
AttributeInfo(name = 'creation data',
description = 'the date the book was written',
type = 'list[int]'),
AttributeInfo(name = 'page',
description = "page number.",
type = "int"),
AttributeInfo(name = 'images',
description = "dictionary whoes keys are name and description of images on the page,\
and whoes contents are image references on pdfs",
type = "dict{string:string}"),
AttributeInfo(name = 'tables',
description = 'list of tables from the page',
type = 'list[dataframe]')
]
def retreive_conversation_construct(store,store_content_description, metadata_format=text_metadata,verbose=False):
''' this is the first part of this function, and is the first problem i ran into
'''
retriever = SelfQueryRetriever.from_llm(llm = llm,
vectorstore=store,
document_contents = store_content_description,
metadata_field_info = metadata_format,
enable_limit=True,
fix_invalid = True,
verbose=verbose)
return retriever
client = qcqc(path= db_path)
model_name = "hkunlp/instructor-xl"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True}
load_dotenv()
path = os.getenv('instructor_local_dir')
os.environ['CURL_CA_BUNDLE'] = ''
embed_instruction ='Represent the document for retrieval: '
embeddings = HuggingFaceInstructEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
cache_folder = path,
embed_instruction = embed_instruction)
vector_store = Qdrant(client= client, collection_name= 'my cluster', embeddings= embeddings)
store_content_description = 'this is a paper about generating training data for large language models.'
retreive_conversation_construct(vector_store,store_content_description)
### Expected behavior
retriever should get generated.
I found in self_query.py, the .from_llm() method eventually leads to _get_builtin_translator getting called, which returns QdrantTranslator(metadata_key=vectorstore.metadata_payload_key) as structured_query_translator.
but later when calling structured_query_translator.allowed_operators from qdrant.py, the QdrantTranslator doesn't have allowed_operators, thus returns a None object.
this results in the following error:
File d:\ai_dev\research_assistant\testing.py:74
retreive_conversation_construct(vector_store,store_content_description)
File d:\ai_dev\research_assistant\testing.py:50 in retreive_conversation_construct
retriever = SelfQueryRetriever.from_llm(llm = llm,
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\retrievers\self_query\base.py:214 in from_llm
query_constructor = load_query_constructor_runnable(
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\query_constructor\base.py:317 in load_query_constructor_runnable
prompt = get_query_constructor_prompt(
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\query_constructor\base.py:203 in get_query_constructor_prompt
allowed_operators=" | ".join(allowed_operators),
TypeError: can only join an iterable | qdrant.py doesn't contain any allowed_operators | https://api.github.com/repos/langchain-ai/langchain/issues/12061/comments | 3 | 2023-10-20T03:00:01Z | 2024-02-12T16:10:44Z | https://github.com/langchain-ai/langchain/issues/12061 | 1,953,416,596 | 12,061 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.319, mac, python 3.10
### Who can help?
@hwchase17 @agola11
I'm trying to use this exact example from: https://python.langchain.com/docs/expression_language/cookbook/memory
```
model = ChatOpenAI()
prompt = ChatPromptTemplate.from_messages([
("system", "You are a helpful chatbot"),
MessagesPlaceholder(variable_name="history"),
("human", "{input}")
])
memory = ConversationBufferMemory(return_messages=True)
memory.load_memory_variables({})
chain = RunnablePassthrough.assign(
memory=RunnableLambda(memory.load_memory_variables) | itemgetter("history")
) | prompt | model
inputs = {"input": "hi im bob"}
response = chain.invoke(inputs)
```
and getting:
```
File "/Users/name/.pyenv/versions/3.10.10/lib/python3.10/site-packages/langchain/schema/prompt_template.py", line 60, in <dictcomp>
**{key: inner_input[key] for key in self.input_variables}
KeyError: 'history'
```
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Please run the code above.
### Expected behavior
To have it work like the documentation. | The LCEL memory example returns KeyError | https://api.github.com/repos/langchain-ai/langchain/issues/12057/comments | 9 | 2023-10-20T00:04:37Z | 2024-05-29T07:56:57Z | https://github.com/langchain-ai/langchain/issues/12057 | 1,953,239,021 | 12,057 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Currently Bedrock and BedrockChat models do not sypport async calls and streaming.
it would be very useful to have working ChatOpenAI methods like _acall and _astream in Bedrock llms too so that we can use Claude2 and other Bedrock Models in production easily
### Motivation
without async functionalities it is hard to build production level chatbots using Bedrock models, including claude2 which is one of the most desired models
### Your contribution
I am trying myself, but I am having difficulties. If someone can help, it would be very appreciated by many, | Add Async _acall and _astream to Bedrock | https://api.github.com/repos/langchain-ai/langchain/issues/12054/comments | 4 | 2023-10-19T21:16:32Z | 2024-02-05T22:56:23Z | https://github.com/langchain-ai/langchain/issues/12054 | 1,953,068,062 | 12,054 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
In the [documentation](https://python.langchain.com/docs/modules/agents/tools/custom_tools), it's mentioned that expected input parameters could be defined though `args_schema` for the custom tool:
```
class CalculatorInput(BaseModel):
question: str = Field()
class CustomCalculatorTool(BaseTool):
name = "Calculator"
description = "useful for when you need to answer questions about math"
args_schema: Type[BaseModel] = CalculatorInput
```
The output of my custom function is a complex table with multiple columns and diverse data types. I would like to provide clearer descriptions for each column, including the possible values that can be found in each column. I assume that this way, the agent can utilize the data more effectively.
Can you please clarify if it is possible to describe the output from the custom tool in a similar manner as `args_schema` for the agent?
### Suggestion:
Would something like this work for the output description? (Of course, the need for such a solution would be for more complex outputs.)
```
class CalculatorInput(BaseModel):
question: str = Field()
class CalculatorOutput(BaseModel):
answer: str = Field()
class CustomCalculatorTool(BaseTool):
name = "Calculator"
description = "useful for when you need to answer questions about math"
args_schema: Type[BaseModel] = CalculatorInput
output_schema: Type[BaseModel] = CalculatorOutput
```
Thank you!`
| Issue: description for the custom tool's output | https://api.github.com/repos/langchain-ai/langchain/issues/12050/comments | 1 | 2023-10-19T20:56:28Z | 2024-02-06T16:17:16Z | https://github.com/langchain-ai/langchain/issues/12050 | 1,953,042,836 | 12,050 |
[
"hwchase17",
"langchain"
]
| ### System Info
I filed an issue with llama-cpp here https://github.com/ggerganov/llama.cpp/issues/3689
langchain
```Name: langchain
Version: 0.0.208
Summary: Building applications with LLMs through composability
Home-page: https://www.github.com/hwchase17/langchain
Author:
Author-email:
License: MIT
Location: Work\SHARK\shark.venv\Lib\site-packages
Requires: aiohttp, dataclasses-json, langchainplus-sdk, numexpr, numpy, openapi-schema-pydantic, pydantic, PyYAML, requests, SQLAlchemy, tenacity
```
llama-cpp-python
```Name: llama_cpp_python
Version: 0.2.11
Summary: Python bindings for the llama.cpp library
Home-page:
Author:
Author-email: Andrei Betlen <[email protected]>
License: MIT
Location: Work\SHARK\shark.venv\Lib\site-packages
Requires: diskcache, numpy, typing-extensions
Required-by:
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
the toy code is adopted from https://learn.activeloop.ai/courses/take/langchain/multimedia/46317643-langchain-101-from-zero-to-hero
It's the first toy vector db embedding example with "Napoleon"
Here is the code to reproduce the error:
```
import os
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores import DeepLake
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.llms import LlamaCpp
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
# instantiate the LLM and embeddings models
llm = LlamaCpp(model_path="llama-2-13b-chat.Q5_K_M.gguf",
temperature=0,
max_tokens=1000, # this was lowered from the original value of 2000, but did not fix it
top_p=1,
Verbose=True)
embeddings = LlamaCppEmbeddings(model_path="llama-2-13b-chat.Q5_K_M.gguf")
# create our documents
texts = [
"Napoleon Bonaparte was born in 15 August 1769",
"Louis XIV was born in 5 September 1638"
]
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.create_documents(texts)
# create Deep Lake dataset
# TODO: use your organization id here. (by default, org id is your username)
my_activeloop_org_id = "<SOME_ID>"
my_activeloop_dataset_name = "langchain_llama_00"
dataset_path = f"hub://{my_activeloop_org_id}/{my_activeloop_dataset_name}"
db = DeepLake(dataset_path=dataset_path, embedding_function=embeddings)
# add documents to our Deep Lake dataset
db.add_documents(docs)
retrieval_qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=db.as_retriever())
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
tools = [
Tool(
name="Retrieval QA System",
func=retrieval_qa.run,
description="Useful for answering questions."
),
]
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True
)
response = agent.run("When was Napoleone born?")
print(response)
```
in `agent.run(..)` line llama-cpp says it's running out of memory
```
ggml_allocr_alloc: not enough space in the buffer (needed 442368, largest block available 290848)
GGML_ASSERT: C:\Users\jason\AppData\Local\Temp\pip-install-4x0xr_93\llama-cpp-python_fec9a526add744f5b2436cab2e2c4c28\vendor\llama.cpp\ggml-alloc.c:173: !"not enough space in the buffer"
```
I don't know enough about how LlamaCppEmbeddings works to know if this is an error on my end, or a bug in llama-cpp.
Any guidance is appreciated.
Thank you
### Expected behavior
I expect it to work like the openai example!
| Toy vectordb embedding example adopted to llama-cpp-python causes failure | https://api.github.com/repos/langchain-ai/langchain/issues/12049/comments | 4 | 2023-10-19T20:43:27Z | 2024-02-12T16:10:49Z | https://github.com/langchain-ai/langchain/issues/12049 | 1,953,025,897 | 12,049 |
[
"hwchase17",
"langchain"
]
| ### Feature request
It would be nice to have agents that could access dictionary APIs such as the Merriam-Webster API or Urban Dictionary API (for slang).
### Motivation
It can be useful to be able to look up definitions for words using a dictionary to provide additional context. With no current dictionary tools available, it would be beneficial for there to be an implemented dictionary tool available at all.
### Your contribution
We will open a PR that adds a new tool for accessing the Merriam-Webster Collegiate Dictionary API (https://dictionaryapi.com/products/api-collegiate-dictionary[/](https://www.dictionaryapi.com/)), which provides definitions for English words, as soon as possible. In the future this could be extended to support other Merriam-Webster APIs such as their Medical Dictionary API (https://dictionaryapi.com/products/api-medical-dictionary) or Spanish-English Dictionary API (https://dictionaryapi.com/products/api-spanish-dictionary).
We may also open another PR for Urban Dictionary API integration. | Tools for Dictionary APIs | https://api.github.com/repos/langchain-ai/langchain/issues/12039/comments | 1 | 2023-10-19T18:31:45Z | 2023-11-30T01:28:30Z | https://github.com/langchain-ai/langchain/issues/12039 | 1,952,840,501 | 12,039 |
[
"hwchase17",
"langchain"
]
| ### System Info
Name: langchain
Version: 0.0.317
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
Hello @agola11
I got this runtime warning:
RuntimeWarning: coroutine 'AsyncCallbackManagerForLLMRun.on_llm_new_token' was never awaited
run_manager.on_llm_new_token(chunk.text, chunk=chunk)
I try to stream over a websocket the generated tokens. When I try to add a AsyncCallbackHandler to manage this streaming and run acall the warning occurs and nothing is streamed out.
class StreamingLLMCallbackHandler(AsyncCallbackHandler):
def __init__(self, websocket):
self.websocket = websocket
self.intermediate_result = []
async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
self.intermediate_result.append(token)
await self.websocket.send_text(token)
async def on_llm_end(self, token: str, **kwargs: Any) -> None:
await self.websocket.send_text("[END]")
stream_handler = StreamingLLMCallbackHandler()
model_kwargs = {
"max_tokens_to_sample": 8000,
"temperature": 0.7,
# "top_k": 250,
# "top_p": 1,
"stop_sequences" : ['STOP_LLM']
}
llm = Bedrock(
client=bedrock,
model_id="anthropic.claude-v2",
# provider_stop_sequence_key_name_map={'anthropic': 'stop_sequences'},
streaming=True,
callbacks=[stream_handler],
model_kwargs=model_kwargs
)
prompt_template = f"""
Human:
{{context}} {{question}}
Assistant:"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["context", "question"])
chain_type_kwargs = {"prompt": PROMPT}
pinecone.init(api_key=PINECONE_API_KEY, environment=PINECONE_API_ENV)
chat_vectorstore = Pinecone.from_existing_index(index_name='intelligencechat1', embedding= OpenAIEmbeddings(openai_api_key= OPENAI_API_KEY), namespace='146')
chain = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
return_source_documents=True,
chain_type_kwargs=chain_type_kwargs,
retriever=chat_vectorstore.as_retriever(search_kwargs={'k' : 4}),)
query = 'summarize'
result = await chain.acall(({"query": query}))
### Expected behavior
The expected behavior is that each token is streamed sequentially over the websocket.
the chain work in syncronous mode without 'acall' | Bedrock chain not working with AsyncCallbackHandler | https://api.github.com/repos/langchain-ai/langchain/issues/12035/comments | 11 | 2023-10-19T18:09:39Z | 2024-03-29T00:45:02Z | https://github.com/langchain-ai/langchain/issues/12035 | 1,952,806,485 | 12,035 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hi, I'd love if we could create conversational retrieval agents using BedrockChat LLMs!
### Motivation
This feature would be very useful for many users. | create_conversational_retrieval_agent with BedrockChat models | https://api.github.com/repos/langchain-ai/langchain/issues/12028/comments | 4 | 2023-10-19T15:57:59Z | 2024-05-07T16:06:13Z | https://github.com/langchain-ai/langchain/issues/12028 | 1,952,599,482 | 12,028 |
[
"hwchase17",
"langchain"
]
| ### Feature request
We want some output parser or feature that will restrict llm to generate specifi number of words.in json or any format.
some times user want 10 lines output,sometime only 2 words etc
so this is very helpful feature .thanks
### Motivation
i want to build the next word auto suggetion model using llm,but my llm is giving me crazy output every time
instead of giving only 1 words
example: my input is
input: what is
output : your name
input: good
output: morning
input: what is the meaninig
output: of
so it will give me only 1 word suggetion,insted of giving all other stuff. want to restrict llm for specific words.
### Your contribution
will help with promt.but few quantised llm ar not good with giving only one word | how can i get only 1 or 2 words output from my llm? | https://api.github.com/repos/langchain-ai/langchain/issues/12024/comments | 4 | 2023-10-19T13:06:06Z | 2024-02-11T16:10:01Z | https://github.com/langchain-ai/langchain/issues/12024 | 1,952,216,333 | 12,024 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```
import os
from langchain.llms import AzureOpenAI
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-07-01-preview"
os.environ["OPENAI_API_BASE"] = "https://myurlid.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "my key"
llm = AzureOpenAI(
deployment_name="gpt35",
openai_api_version="2023-07-01-preview",
)
# Run the LLM
llm("Tell me a joke")
print(llm)
```
this is my demo code, try run it show error info:
```
$ /bin/python /home/good/langchain/cookbook/test/azure_hello.py
Traceback (most recent call last):
File "/home/good/langchain/cookbook/test/azure_hello.py", line 22, in <module>
llm("Tell me a joke")
File "/home/good/.local/lib/python3.9/site-packages/langchain/llms/base.py", line 866, in __call__
self.generate(
File "/home/good/.local/lib/python3.9/site-packages/langchain/llms/base.py", line 646, in generate
output = self._generate_helper(
File "/home/good/.local/lib/python3.9/site-packages/langchain/llms/base.py", line 534, in _generate_helper
raise e
File "/home/good/.local/lib/python3.9/site-packages/langchain/llms/base.py", line 521, in _generate_helper
self._generate(
File "/home/good/.local/lib/python3.9/site-packages/langchain/llms/openai.py", line 401, in _generate
response = completion_with_retry(
File "/home/good/.local/lib/python3.9/site-packages/langchain/llms/openai.py", line 115, in completion_with_retry
return _completion_with_retry(**kwargs)
File "/home/good/.local/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "/home/good/.local/lib/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "/home/good/.local/lib/python3.9/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/home/good/.local/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "/home/good/.local/lib/python3.9/site-packages/langchain/llms/openai.py", line 113, in _completion_with_retry
return llm.client.create(**kwargs)
File "/home/good/.local/lib/python3.9/site-packages/openai/api_resources/completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/home/good/.local/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
response, _, api_key = requestor.request(
File "/home/good/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 299, in request
resp, got_stream = self._interpret_response(result, stream)
File "/home/good/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 710, in _interpret_response
self._interpret_response_line(
File "/home/good/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 775, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: The completion operation does not work with the specified model, gpt-35-turbo. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993.
```
$ python -V
Python 3.9.9
I'm sure my Azure key and other information are correct, because they work normally elsewhere.
### Suggestion:
_No response_ | Issue:The completion operation does not work with the specified model for azure openai api | https://api.github.com/repos/langchain-ai/langchain/issues/12019/comments | 6 | 2023-10-19T09:49:05Z | 2024-02-11T16:10:06Z | https://github.com/langchain-ai/langchain/issues/12019 | 1,951,757,935 | 12,019 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.312
Python 3.11.6
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from bs4 import SoupStrainer
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import CharacterTextSplitter
def load_html():
only_body = SoupStrainer('body')
loader = WebBaseLoader(['https://example.com/'], bs_kwargs={'parse_only': only_body})
docs = loader.load()
text_splitter = CharacterTextSplitter(
separator = "\n",
chunk_size = 300,
chunk_overlap = 50,
length_function = len,
)
print(text_splitter.split_documents(docs))
# -> get all texts in html, not filtered by bs_kwargs passed in WebBaseLoader
```
### Expected behavior
expected filtered texts passed by parse_only in bs_kwargs when instantiate WebBaseLoader.
https://github.com/langchain-ai/langchain/blob/12f8e87a0e89a8ff50fc7dbab612ac6770f3d258/libs/langchain/langchain/document_loaders/web_base.py#L245
in lazy_load method, self._scrape is called with path but not other parameters
| self._scrape in lazy_load method is not taken any parameters except path given instantiating WebBaseLoader | https://api.github.com/repos/langchain-ai/langchain/issues/12018/comments | 2 | 2023-10-19T08:23:38Z | 2024-02-06T16:17:36Z | https://github.com/langchain-ai/langchain/issues/12018 | 1,951,581,376 | 12,018 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm currently working on accessing a Confluence space using Langchain and performing question answering on its data. The embeddings of this data are stored in a Chromadb vector database once I provide user name,API keyand Space key.
However, I'm looking for a way to automatically generate embeddings for any documents that change in real-time within the Confluence space and enable real-time question answering on the updated data. Any suggestions or solutions on how to achieve this would be greatly appreciated!
### Suggestion:
_No response_ | Issue: How to Automatically Generate Embeddings for Updated Documents in a Confluence Space and Enable Real-Time Question Answering on the Updated Data? | https://api.github.com/repos/langchain-ai/langchain/issues/12013/comments | 2 | 2023-10-19T06:09:01Z | 2024-02-06T16:17:42Z | https://github.com/langchain-ai/langchain/issues/12013 | 1,951,360,189 | 12,013 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
**Below code is for generating embeddings from pdf**
loader = PyPDFLoader(f"{file_path}")
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents=document)
embedding = OpenAIEmbeddings()
**Below code is for generating embeddings from confluence**
embedding = OpenAIEmbeddings()
loader = ConfluenceLoader(
url=confluence_url,
username=username,
api_key=api_key
)
for space_key in space_key:
documents.extend(loader.load(space_key=space_key,limit=100))
# Split the texts
text_splitter = CharacterTextSplitter(chunk_size=6000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=10, encoding_name="cl100k_base")
texts = text_splitter.split_documents(texts)
I would like to inquire about the feasibility of generating embeddings for both PDF documents and Confluence content and storing them in a single 'embeddings' folder. This would allow us to have the flexibility to perform question answering from either Confluence or multiple PDF sources without switching between different folders. Can you provide guidance on how to achieve this integrated storage approach?
### Suggestion:
_No response_ | Issue:doubt about generating embeddings for both PDF documents and Confluence content and storing them in a single 'embeddings' folder | https://api.github.com/repos/langchain-ai/langchain/issues/12012/comments | 5 | 2023-10-19T06:05:13Z | 2024-02-11T16:10:11Z | https://github.com/langchain-ai/langchain/issues/12012 | 1,951,355,497 | 12,012 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi, I am facing an issue when attempting to run the "Semi_structured_multi_modal_RAG_LLaMA2.ipynb" notebook from the cookbook.

**Environment Details**
Langchain Version: 0.0.317
I would appreciate any assistance in resolving this issue. Thank you.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
TypeError: UnstructuredYoloXModel.initialize() got an unexpected keyword argument 'extract_images_in_pdf'
### Expected behavior
The notebook should run without any issues and produce the expected output as documented in the cookbook | TypeError: UnstructuredYoloXModel.initialize() got an unexpected keyword argument 'extract_images_in_pdf' while running Semi_structured_multi_modal_RAG_LLaMA2.ipynb | https://api.github.com/repos/langchain-ai/langchain/issues/12010/comments | 9 | 2023-10-19T05:14:22Z | 2024-02-14T16:09:13Z | https://github.com/langchain-ai/langchain/issues/12010 | 1,951,253,426 | 12,010 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hello! Even though [API](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.kay.KayAiRetriever.html) mentions a metadata param, it's not found in [code](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/retrievers/kay.py#L9).
Without metadata filtering, querying “Tell me about the returns of Palantir Technologies Inc.” returns docs with 'company_name': 'ETSY INC'.
Thank you
### Who can help?
@eyurtsev?
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.retrievers import KayAiRetriever
retriever = KayAiRetriever.create(dataset_id="company", data_types=["10-K"], num_contexts=1)
retriever.get_relevant_documents(“Tell me about the returns of Palantir Technologies Inc.?")
### Expected behavior
[Document(page_content="Company Name: ETSY INC \n Company Industry: SERVICES-BUSINESS SERVICES, NEC \n Form Title: 10-K 2020-FY \n Form Section: Risk Factors \n Text: Any events causing ... abroad.", metadata={'chunk_type': 'text', 'chunk_years_mentioned': [], 'company_name': 'PALANTIR TECHNOLOGIES INC', 'company_sic_code_description': 'SERVICES-BUSINESS SERVICES, NEC', 'data_source': '10-K', 'data_source_link': 'https://www.sec.gov/Archives/edgar/data/1370637/000137063721000012', 'data_source_publish_date': '2020-01-01T00:00:00Z', 'data_source_uid': '0001370637-21-000012', 'title': 'ETSY INC | 10-K 2020-FY '})] | KayAiRetriever: without metadata filtering, wrong results | https://api.github.com/repos/langchain-ai/langchain/issues/12008/comments | 8 | 2023-10-19T04:29:06Z | 2023-10-24T15:12:17Z | https://github.com/langchain-ai/langchain/issues/12008 | 1,951,178,914 | 12,008 |
[
"hwchase17",
"langchain"
]
| i have 1 folder called with data, in it there are 2 .txt files which are obama.txt and trump.txt, each file contains summary of each person from wikipedia and on the root of the folder, i have anthropic.py and below is the code
```
from langchain.document_loaders import DirectoryLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.embeddings import BedrockEmbeddings
from langchain.llms.bedrock import Bedrock
boto3_bedrock = boto3.client("bedrock-runtime")
llm = Bedrock(
model_id="anthropic.claude-v2",
client=boto3_bedrock,
model_kwargs={"max_tokens_to_sample": 200},
)
bedrock_embeddings = BedrockEmbeddings(
model_id="amazon.titan-embed-text-v1", client=boto3_bedrock
)
loader = DirectoryLoader("./data/", glob="*.txt")
index = VectorstoreIndexCreator(embedding=bedrock_embeddings).from_loaders([loader])
print(index.query("who is george washington", llm=llm))
```
as you can see i am using anthropic.claude-v2 LLM, and as for the query, i asked "who is george washington." during my first run, i got a response that describes who is george washington which shouldnt have happened because the code says to look for context provided only in obama.txt and trump.txt and george washington is not mentioned in either of them. However, when i re-run the code for the second time with **no changes** anywhere, i got a response saying that it does not know who is george washington which is what should have happened. Why is this the case? Why the answers are different drastically when i run the code with 0 change? I attached a screenshot of the terminal output

### Suggestion:
_No response_ | langchain answers change drastically | https://api.github.com/repos/langchain-ai/langchain/issues/12005/comments | 10 | 2023-10-19T03:18:24Z | 2024-02-14T16:09:18Z | https://github.com/langchain-ai/langchain/issues/12005 | 1,951,117,494 | 12,005 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have checked the documentation, although both are supported in langchain, but I could not find a way for streaming output.
### Suggestion:
_No response_ | Is there any way to stream output for VLLM and Together ai? | https://api.github.com/repos/langchain-ai/langchain/issues/12004/comments | 3 | 2023-10-19T02:37:53Z | 2024-02-13T12:41:33Z | https://github.com/langchain-ai/langchain/issues/12004 | 1,951,079,170 | 12,004 |
[
"hwchase17",
"langchain"
]
| hi team,
Can I return source documents when using MultiRetrievalQAChain?
I want to fetch the metadata of source.
thx | MultiRetrievalQAChain return source documents | https://api.github.com/repos/langchain-ai/langchain/issues/12002/comments | 3 | 2023-10-19T01:16:15Z | 2024-02-10T16:11:52Z | https://github.com/langchain-ai/langchain/issues/12002 | 1,951,006,205 | 12,002 |
[
"hwchase17",
"langchain"
]
| This is the default class of VectorstoreIndexCreator
```
class VectorstoreIndexCreator(
*,
vectorstore_cls: type[VectorStore] = Chroma,
embedding: Embeddings = OpenAIEmbeddings,
text_splitter: TextSplitter = _get_default_text_splitter,
vectorstore_kwargs: dict = dict
)
```
the default is to use OpenAIEmbeddings as its embedding. What i'm trying to do is to use BedrockEmbeddings, below is my code
```
loaders = TextLoader('data.txt')
index = VectorstoreIndexCreator(embedding=BedrockEmbeddings).from_loaders([loaders])
```
The error that i got
```
VectorstoreIndexCreator(embedding=BedrockEmbeddings).from_loaders([loaders])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for VectorstoreIndexCreator
embedding
instance of Embeddings expected (type=type_error.arbitrary_type; expected_arbitrary_type=Embeddings)
``` | Overriding VectorstoreIndexCreator() embedding | https://api.github.com/repos/langchain-ai/langchain/issues/12001/comments | 4 | 2023-10-19T01:04:25Z | 2023-10-19T03:07:29Z | https://github.com/langchain-ai/langchain/issues/12001 | 1,950,996,772 | 12,001 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain_version: "0.0.306"
library: "langchain"
library_version: "0.0.306"
platform: "Windows-10-10.0.22621-SP0"
py_implementation: "CPython"
runtime: "python"
runtime_version: "3.9.0rc2"
sdk_version: "0.0.41"
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Streaming the response of a simple route created among 2 Runnables doesn't work. When I stream each Runnable independelty, it streams perfectly.
When I create a route for those Runnables to choose which one of them should be the next step, it doesn't stream the result.
Here's the example:
```
# ----------------- Runnable 1 -----------------
DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template="{page_content}")
def _combine_documents(docs, document_prompt = DEFAULT_DOCUMENT_PROMPT, document_separator="\n\n"):
doc_strings = [format_document(doc, document_prompt) for doc in docs]
return document_separator.join(doc_strings)
def _format_chat_history(chat_history: List[Tuple]) -> str:
buffer = ""
for dialogue_turn in chat_history:
human = "Human: " + dialogue_turn[0]
ai = "Assistant: " + dialogue_turn[1]
buffer += "\n" + "\n".join([human, ai])
return buffer
_inputs = RunnableMap({
"standalone_question": {
"question": lambda x: x["question"],
"chat_history": lambda x: _format_chat_history(x['chat_history'])
} | PromptFactory.CONDENSE_QUESTION_PROMPT | ChatOpenAI(temperature=0, streaming=True) | StrOutputParser(),
})
_context = {
"context": itemgetter("standalone_question") | vector_retriever | _combine_documents,
"question": lambda x: x["standalone_question"]
}
Runnable1 = _inputs | _context | PromptFactory.PROMPT | ChatOpenAI(temperature=0, streaming=True)
# ----------------- Stream the response: -----------------
for s in Runnable1.stream({"question": "Hello!", "chat_history": []}):
print(s, end="", flush=True)
```
The above stream works
```
# ----------------- Runnable 2 -----------------
Runnable2 = RunnableMap({
"response": {
"question": lambda x: x["question"],
"chat_history": lambda x: _format_chat_history(x['chat_history']),
"context": lambda x: QueriesContext.get_result()
} | PromptTemplate.from_template(PromptFactory.followup_context_template) | ChatOpenAI(temperature=0, streaming=True) | StrOutputParser(),
})
# ----------------- Stream the response: -----------------
for s in Runnable2.stream({"question": "Hello!", "chat_history": [], "context": ['initialvalue']}):
print(s, end="", flush=True)
```
The above stream also works
```
def get_result(text):
return ["newvalue"]
router_chain = PromptTemplate.from_template(PromptFactory.router_template_test) | ChatOpenAI(temperature=0, streaming=True) | StrOutputParser()
def route(info):
if "runnable1" in info["topic"].lower():
return Runnable1
elif "runnable2" in info["topic"].lower():
return Runnable2
else:
raise Exception("Invalid topic")
full_runnable_router_chain = {
"topic": router_chain,
"question": lambda x: x["question"],
"chat_history": lambda x: x["chat_history"],
"context": lambda x: x["context"]
} | RunnableLambda(route)
# ----------------- Stream the response: -----------------
for s in full_runnable_router_chain.stream({"question": "Hello!", "chat_history": [], "context": ['initialvalue']}):
print(s.content, end="", flush=True)
```
The above stream doesn't stream
### Expected behavior
Stream from "full_runnable_router_chain" | Streaming not working when routing between Runnables in LCEL | https://api.github.com/repos/langchain-ai/langchain/issues/11998/comments | 10 | 2023-10-19T00:24:22Z | 2023-12-26T20:49:20Z | https://github.com/langchain-ai/langchain/issues/11998 | 1,950,952,387 | 11,998 |
[
"hwchase17",
"langchain"
]
| ### Feature request
We propose the integration of a new tool into Langchain that will provide comprehensive support for queries on the AlphaVantage Trading API. AlphaVantage offers a wide range of financial data and services, and this integration will enhance Langchain's capabilities for financial data analysis.
Here is the list of AlphaVantage APIs that will be integrated into the new tool:
- [TIME_SERIES_DAILY](https://www.alphavantage.co/documentation/#daily)
- [TIME_SERIES_WEEKLY](https://www.alphavantage.co/documentation/#weekly)
- [Quote Endpoint](https://www.alphavantage.co/documentation/#latestprice)
- [Search Endpoint](https://www.alphavantage.co/documentation/#symbolsearch)
- [Market News & Sentiment](https://www.alphavantage.co/documentation/#news-sentiment)
- [Top Gainers, Losers, and Most Actively Traded Tickers (US Market)](https://www.alphavantage.co/documentation/#gainer-loser)
### Motivation
The integration of AlphaVantage Trading API support in Langchain will provide users with access to a wealth of financial data, enabling them to perform in-depth analysis, develop trading strategies, and make informed financial decisions, all with real-time information
### Your contribution
I am a University of Toronto Student, working in a group and plan to submit a PR for this issue in November | Add Alpha Vantage API Tool | https://api.github.com/repos/langchain-ai/langchain/issues/11994/comments | 8 | 2023-10-18T20:15:47Z | 2024-03-13T19:58:04Z | https://github.com/langchain-ai/langchain/issues/11994 | 1,950,574,553 | 11,994 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I want to interact with my database so I'm using SQLDatabaseChain and SQL Agent to convert natural language query to sql query then execute on the database.
So I want to save the chat history here so that if a user asks anything related previous question/answer it should pick and then answer.
So my doubt is here what memory type I need to use.
Can you show me code example.
### Suggestion:
_No response_ | Issue: Which memory type i need to use for db-backed history | https://api.github.com/repos/langchain-ai/langchain/issues/11985/comments | 6 | 2023-10-18T16:15:42Z | 2024-02-13T16:10:27Z | https://github.com/langchain-ai/langchain/issues/11985 | 1,950,139,790 | 11,985 |
[
"hwchase17",
"langchain"
]
| ### System Info
**Description:**
It's not possible to use the ParentDocumentRetriever and MultiVectorRetriever at the same time.
But, it's a good idea to generate multiple vector for one fragment, then manage the life cycle of all vertions the ParentDocumentRetriever.
I think, the `MultiVectorRetriever` is not a good idea. I think it's better to create a MultiVectorStore to be compatible with all vectorstore interface. Then, it's possible to get a Retriever with vs.as_retriever()
I started to implement this scenario, but, because all my pull-request were never reads, I prefer not to submit any code and to maintain it endlessly.
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
MultiVectorRetriever need a VectorStore.
But MultiVectorRetriever IS NOT a VectorStore.
### Expected behavior
An API to create multiple vectors for the same fragment, and, at the same time, a solution to manage all the vectors with the lifecycle of the original document.
If I remove/update the original document, all the associated vectors must be removed/updated.
| ParentDocumentRetriever is incompatible with MultiVectorRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/11983/comments | 4 | 2023-10-18T15:48:45Z | 2024-03-13T19:58:53Z | https://github.com/langchain-ai/langchain/issues/11983 | 1,950,071,972 | 11,983 |
[
"hwchase17",
"langchain"
]
| ### System Info
**description**
With parent_splitter, it's not possible to know the number of IDs before the split.
So, it's not possible to know the ID of each fragment.
Then, it's not possible to manage the life cycle of the fragment because it's impossible to know the list of IDs associated with the original big document.
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import ParentDocumentRetriever
from langchain.schema import Document
from langchain.storage import InMemoryStore
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores.chroma import Chroma
vectorstore = Chroma(
collection_name="full_documents",
embedding_function=OpenAIEmbeddings()
)
store = InMemoryStore()
docs = [Document(page_content=txt, metadata={"id": id}) for txt, id in [("aaaaaa", 1), ("bbbbbb", 2)]]
ParentDocumentRetriever(
vectorstore=vectorstore,
docstore=store,
id_key="id",
parent_splitter=RecursiveCharacterTextSplitter(
chunk_size = 2,
chunk_overlap = 0,
length_function = len,
add_start_index = True,
),
child_splitter=RecursiveCharacterTextSplitter(
chunk_size = 1,
chunk_overlap = 0,
length_function = len,
add_start_index = True,
),
).add_documents(docs,ids=[doc.metadata["id"] for doc in docs])
```
Produce:
```
ValueError: Got uneven list of documents and ids. If `ids` is provided, should be same length as `documents`.
```
### Expected behavior
No error. | ParentDocumentRetriever: parent_splitter and ids are incompatible | https://api.github.com/repos/langchain-ai/langchain/issues/11982/comments | 4 | 2023-10-18T15:39:29Z | 2024-03-13T19:58:37Z | https://github.com/langchain-ai/langchain/issues/11982 | 1,950,052,502 | 11,982 |
[
"hwchase17",
"langchain"
]
| ### System Info
latest version of langchain. python=3.11.4
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
How to connect to SQL view in the database, instead of using the all the tables from the DB, as the DB is huge(6Gb), I'm unable to get any out a correct output, I'm getting token error and when it is performing any join its unable to fetch a correct answer, but when I using a selected table its working efficiently in terms of accuracy and time taking.
Can I get a appropriate method by which I can deal with the huge data without getting token or parser errors.
and how to work with the SQL views in the database.
here is the code for the connection string I have tried which includes the tables and its working fine,
from urllib.parse import quote_plus
driver = 'ODBC Driver 17 for SQL Server'
host = '########'
user = '#####'
database = 'HR_Git'
password = '#########'
encoded_password = quote_plus(password)
db = SQLDatabase.from_uri(f"mssql+pyodbc://{user}:{encoded_password}@{host}/{database}?driver={quote_plus(driver)}", include_tables = ['EGV_emp_departments'], sample_rows_in_table_info=2)
Thank you.
### Expected behavior
How to work with the SQL views inside the database and way to avoid token and parser error. | how to connect to sql view in database | https://api.github.com/repos/langchain-ai/langchain/issues/11980/comments | 9 | 2023-10-18T14:22:33Z | 2024-04-22T16:39:16Z | https://github.com/langchain-ai/langchain/issues/11980 | 1,949,883,986 | 11,980 |
[
"hwchase17",
"langchain"
]
| # Issue:
UnstructuredEmailLoader only returning first or no `element` of the mail irrespective of the `mode`.
### System Info
langchain version: 0.0.316
unstructured version: 0.10.18
### Who can help?
@eyurtsev
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is the [link](https://colab.research.google.com/drive/105l79buRBNsWelBIec0E1yUaowyfbVju?usp=sharing) to the code to regenerate the reponse
### Expected behavior
It should have returned the email's message text as the output with it's metadata | UnstructuredEmailLoader just returning the first element | https://api.github.com/repos/langchain-ai/langchain/issues/11978/comments | 4 | 2023-10-18T13:40:37Z | 2023-10-19T06:05:51Z | https://github.com/langchain-ai/langchain/issues/11978 | 1,949,786,869 | 11,978 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using the LLMChain with the ConversationBufferMemory and it works pretty well. There is a case where the chain throws this exception: ValueError: unexpected '{' in field name. This happens only when I use the word "field" in my question. The code I have written is down below:
FUNCTION TO RETRIEVE LLMCHAIN:
```python
def get_llm_chain(OPENAI_API_KEY, context):
global glob_llm_chain
if glob_llm_chain is None:
llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY, model_name="gpt-4")
sysprompttemplate = PromptTemplate.from_template("""Given the following extracted parts of a long document and a question, create a final answer in english with references SOURCES.
If you dont know the answer, just say that you don't know. Dont try to make up an answer. ALWAYS return a SOURCES part in your answer.
You also have the ability to remember the previous conversation you had with the human. Answer the question based on context provided.
============
{context}
============""").format(context=context)
messages = [ SystemMessagePromptTemplate.from_template(sysprompttemplate), MessagesPlaceholder(variable_name="chat_history"), HumanMessagePromptTemplate.from_template("{question}")]
prompt = ChatPromptTemplate.from_messages(messages=messages)
MEMORY = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm_chain = LLMChain(prompt=prompt,
llm=llm,
verbose=True,
memory=MEMORY)
glob_llm_chain = llm_chain
return glob_llm_chain
else:
return glob_llm_chain
```
FUNCTION THAT RETURNS THE ANSWER
```python
def get_answer(question):
embeddings = OpenAIEmbeddings()
milvus_connection_properties = get_milvus_connection_properties()
vector_store = retrieve_colection(embeddings, "langchain", milvus_connection_properties)
docs = vector_store.similarity_search(question)
context = get_full_context(docs)
llm_chain = get_llm_chain(OPENAI_API_KEY, context)
#result = llm_chain.predict(input={"question": question, "context":context})
result = llm_chain({"question":question})
return result
```
So, for example when I ask: What is a Field Dependency Map? I get:
```
ERROR:root:unexpected '{' in field name
Traceback (most recent call last):
File "/home/ardit/projects/evolutivoAI/langchain/app.py", line 29, in <module>
st.session_state.results = get_answer(question)
^^^^^^^^^^^^^^^^^^^^
File "/home/ardit/projects/evolutivoAI/langchain/scripts/retrieval.py", line 68, in get_answer
llm_chain = get_llm_chain(OPENAI_API_KEY, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ardit/projects/evolutivoAI/langchain/scripts/retrieval.py", line 41, in get_llm_chain
messages = [ SystemMessagePromptTemplate.from_template(sysprompttemplate), MessagesPlaceholder(variable_name="chat_history"), HumanMessagePromptTemplate.from_template("{question}")]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ardit/projects/evolutivoAI/langchain/.venv/lib/python3.11/site-packages/langchain/prompts/chat.py", line 151, in from_template
prompt = PromptTemplate.from_template(template, template_format=template_format)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ardit/projects/evolutivoAI/langchain/.venv/lib/python3.11/site-packages/langchain/prompts/prompt.py", line 204, in from_template
input_variables = {
^
File "/home/ardit/projects/evolutivoAI/langchain/.venv/lib/python3.11/site-packages/langchain/prompts/prompt.py", line 204, in <setcomp>
input_variables = {
^
ValueError: unexpected '{' in field name
```
Any suggestion why this happens?
| Specific word crashing LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/11977/comments | 2 | 2023-10-18T13:27:44Z | 2024-02-06T16:18:01Z | https://github.com/langchain-ai/langchain/issues/11977 | 1,949,759,853 | 11,977 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi!
I was taking a look to the Confluence integration at https://python.langchain.com/docs/integrations/document_loaders/confluence , and in our team we have the following doubt:
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="https://yoursite.atlassian.com/wiki",
username="me",
api_key="12345"
)
documents = loader.load(space_key="SPACE", include_attachments=True, limit=100, max_pages=800)
Can we fetch multiple space and generate embeddings at a time and do question answering from either of the space fetched by confluence loader?
### Suggestion:
_No response_ | Issue: Doubt about Confluence Loader | https://api.github.com/repos/langchain-ai/langchain/issues/11976/comments | 1 | 2023-10-18T10:32:57Z | 2023-10-18T13:00:26Z | https://github.com/langchain-ai/langchain/issues/11976 | 1,949,413,017 | 11,976 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: v0.0.316
I am following the langchain documentation to add memory to the chat with LLMChain: https://python.langchain.com/docs/modules/memory/adding_memory
```py
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
CYPHER_GENERATION_TEMPLATE = """Task: Generate Cypher statement to query a graph database.
Instructions: ...
Schema:
{schema}
The question is:
{question}
Chat History:
{chat_history}
"""
CYPHER_GENERATION_PROMPT = PromptTemplate(
input_variables=["schema", "question", "chat_history"],
template=CYPHER_GENERATION_TEMPLATE,
)
chain = GraphCypherQAChain.from_llm(
AzureChatOpenAI(
deployment_name=OPENAI_DEPLOYMENT_NAME,
model_name=OPENAI_MODEL_NAME,
openai_api_base=OPENAI_API_BASE,
openai_api_version=OPENAI_API_VERSION,
openai_api_key=OPENAI_API_KEY,
openai_api_type=OPENAI_API_TYPE,
temperature=0
),
graph=graph, verbose=True,
return_intermediate_steps=False,
cypher_prompt=CYPHER_GENERATION_PROMPT,
include_types=property_include_list,
memory=memory,
)
return chain.run(question)
```
So, when I call the `chain.run()` I get the error:
```bash
> Entering new GraphCypherQAChain chain...
Traceback (most recent call last):
File "/home/sudobhat/workspaces/llm/openai-chatgpt-sample-code/neo4j-use-case/neo4j_query_filtered_schema.py", line 171, in <module>
answer = get_openai_answer(question)
File "/home/sudobhat/workspaces/llm/openai-chatgpt-sample-code/neo4j-use-case/neo4j_query_filtered_schema.py", line 162, in get_openai_answer
return chain.run(question)
File "/home/sudobhat/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 501, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/sudobhat/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 306, in __call__
raise e
File "/home/sudobhat/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 300, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/sudobhat/.local/lib/python3.10/site-packages/langchain/chains/graph_qa/cypher.py", line 185, in _call
generated_cypher = self.cypher_generation_chain.run(
File "/home/sudobhat/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 501, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/sudobhat/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 282, in __call__
inputs = self.prep_inputs(inputs)
File "/home/sudobhat/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 439, in prep_inputs
self._validate_inputs(inputs)
File "/home/sudobhat/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 191, in _validate_inputs
raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {'chat_history'}
```
This is being done almost exactly as in the documentation. Is this a bug or am I missing something?
I also tried an approach with partial variables like this, by looking at the answer in this issue https://github.com/langchain-ai/langchain/issues/8746:
```
CYPHER_GENERATION_PROMPT = PromptTemplate(
input_variables=["schema", "question"],
template=CYPHER_GENERATION_TEMPLATE,
partial_variables={"chat_history": chat_history}
)
```
this does not throw any error, but when I print the final prompt, there is nothing in the chat history.
Also, it seems to work for normal LLmChain, but not GraphCypherQAChain
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. Run the code provided in the documentation: https://python.langchain.com/docs/modules/memory/adding_memory, but for GraphCypherQAChain
### Expected behavior
The expected behavior is no error thrown like: ValueError: Missing some input keys: {'chat_history'} when chat_history is passed in the prompt template. | ValueError: Missing some input keys: {'chat_history'} when adding memory to GraphCypherQAChain | https://api.github.com/repos/langchain-ai/langchain/issues/11975/comments | 7 | 2023-10-18T10:23:57Z | 2024-06-03T12:12:30Z | https://github.com/langchain-ai/langchain/issues/11975 | 1,949,395,838 | 11,975 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi developers, I am trying to run SeleniumURLLoader inside a Docker container.
When I tried to do a web-scraping on the URLs I am met with this error:
`ERROR MESSAGE: Message: Service /root/.cache/selenium/chromedriver/linux64/118.0.5993.70/chromedriver unexpectedly exited. Status code was: 127`
I tried running `update apt-get` and installed all the necessary dependencies, as well as running this command in the Dockerfile:
`RUN chmod +x /root/.cache/selenium/chromedriver/linux64/118.0.5993.70/chromedriver` but I was met with this error:
`chmod: cannot access '/root/.cache/selenium/chromedriver/linux64/118.0.5993.70/chromedriver': No such file or directory`
Any advice would be greatly appreciated.
### Suggestion:
_No response_ | SeleniumURLLoader in Docker Container | https://api.github.com/repos/langchain-ai/langchain/issues/11974/comments | 6 | 2023-10-18T09:52:41Z | 2024-05-08T10:53:28Z | https://github.com/langchain-ai/langchain/issues/11974 | 1,949,335,904 | 11,974 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Being able to set a SelfQueryRetriever's kwargs from the output of the Condese_question chain or any chain that runs just before.
### Motivation
The motivation behind this is enabling the ConversationalRetrievalChain based on a vectordb to tackle a big range of queries in a more specific way, for example:
- When asking about a specific topic, I want **k** kwarg to be no more than 10 and to have **lambda_mult** close to 0, meaning that documents taken from **fetch_k** should be similar.
- On the other hand when I am asking it to compare 2 entities I want **k** to be 20 for example and the **lambda_mult** to be really close to 1, meaning documents taken should have a little bit of variance
This will ensure that the passed context is good enough
### Your contribution
for now I am trying to output these variables from the condense_question sub chain, but I am not sure it's going to workout | ConversationalRetrievalChain : make the condense_question chain choose the SelfQueryRetriever kwargs | https://api.github.com/repos/langchain-ai/langchain/issues/11971/comments | 4 | 2023-10-18T08:55:23Z | 2024-02-10T16:12:03Z | https://github.com/langchain-ai/langchain/issues/11971 | 1,949,221,723 | 11,971 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version: 0.0.316
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.llms.base import LLM
class CustomLLM(LLM):
endpoint_name: str
token: str
@property
def _llm_type(self) -> str:
return "custom"
def _score_model(self, dataset, temperature = 0.75, max_new_tokens = 100):
url = f'https://XXXXXXXXXXX/{self.endpoint_name}/XXXXXX'
headers = {'Authorization': f'Bearer {self.token}', 'Content-Type': 'application/json'}
temp_dict = {'index': [0], 'columns': ['prompt', 'temperature', 'max_new_tokens'], 'data': [[dataset, temperature, max_new_tokens]]}
data_json = json.dumps({'dataframe_split': temp_dict}, allow_nan=True)
response = requests.request(method='POST', headers=headers, url=url, data=data_json)
return response.json()['predictions']['candidates'][0]
def _call(self, prompt: str, stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any):
if stop is not None: raise ValueError("stop kwargs are not permitted.")
return self._score_model(prompt)
@property
def _identifying_params(self) -> Mapping[str, Any]:
"""Get the identifying parameters."""
return {"endpoint_name": self.endpoint_name, "token":self.token}
class MLflowQABot(mlflow.pyfunc.PythonModel):
def __init__(self, llm, retriever, chat_prompt):
# QABot class just call the custom LLM class and customize the output
self.qabot = QABot(llm, retriever, chat_prompt)
def predict(self, context, inputs):
questions = list(inputs['question'])
return [self.qabot.get_answer(q) for q in questions]
system_message_prompt = SystemMessagePromptTemplate.from_template(config['system_message_template'])
human_message_prompt = HumanMessagePromptTemplate.from_template(config['human_message_template'])
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# Working fine: mean able to log model when I change LLM to ChatOpenAI
# llm = ChatOpenAI(model_name=config['openai_chat_model'], temperature=config['temperature'])
llm = CustomLLM("llama2-7b", token)
model = MLflowQABot(llm, retriever, chat_prompt)
input_columns = [{"type": "string", "name": input_key} for input_key in qa_chain.input_keys]
with mlflow.start_run() as run:
mlflow_result = mlflow.pyfunc.log_model(
python_model = model,
extra_pip_requirements = ['langchain', 'tiktoken', 'openai',
'faiss-gpu', 'typing-inspect', 'typing_extensions'],
artifact_path = 'model',
#registered_model_name=config['registered_model_name'],
signature = infer_signature(input_columns, "This is prediction"))
```
### Expected behavior
### This is expected result.
``` python
(1) MLflow run
Logged [1 run](XXXXXb7d8e14c08964c988339b7c8a7) to an [experiment](XXXXXX80369415271497) in MLflow. [Learn more](https://docs.microsoft.com/azure/databricks/applications/mlflow/tracking#tracking-machine-learning-training-runs)
```
### But getting this error:
```python
---------------------------------------------------------------------------
PicklingError Traceback (most recent call last)
File <command-1869824020018208>, line 3
1 # persist model to mlflow
2 with mlflow.start_run() as run:
----> 3 mlflow_result = mlflow.pyfunc.log_model(
4 python_model = model,
5 extra_pip_requirements = ['langchain', 'tiktoken', 'openai',
6 'faiss-gpu', 'typing-inspect', 'typing_extensions'],
7 artifact_path = 'model',
8 #registered_model_name=config['registered_model_name'],
9 signature = signature)
File /databricks/python/lib/python3.10/site-packages/mlflow/pyfunc/__init__.py:1931, in log_model(artifact_path, loader_module, data_path, code_path, conda_env, python_model, artifacts, registered_model_name, signature, input_example, await_registration_for, pip_requirements, extra_pip_requirements, metadata)
1773 @format_docstring(LOG_MODEL_PARAM_DOCS.format(package_name="scikit-learn"))
1774 def log_model(
1775 artifact_path,
(...)
1788 metadata=None,
1789 ):
1790 """
1791 Log a Pyfunc model with custom inference logic and optional data dependencies as an MLflow
1792 artifact for the current run.
(...)
1929 metadata of the logged model.
1930 """
-> 1931 return Model.log(
1932 artifact_path=artifact_path,
1933 flavor=mlflow.pyfunc,
1934 loader_module=loader_module,
1935 data_path=data_path,
1936 code_path=code_path,
1937 python_model=python_model,
1938 artifacts=artifacts,
1939 conda_env=conda_env,
1940 registered_model_name=registered_model_name,
1941 signature=signature,
1942 input_example=input_example,
1943 await_registration_for=await_registration_for,
1944 pip_requirements=pip_requirements,
1945 extra_pip_requirements=extra_pip_requirements,
1946 metadata=metadata,
1947 )
File /databricks/python/lib/python3.10/site-packages/mlflow/models/model.py:572, in Model.log(cls, artifact_path, flavor, registered_model_name, await_registration_for, metadata, **kwargs)
566 if (
567 (tracking_uri == "databricks" or get_uri_scheme(tracking_uri) == "databricks")
568 and kwargs.get("signature") is None
569 and kwargs.get("input_example") is None
570 ):
571 _logger.warning(_LOG_MODEL_MISSING_SIGNATURE_WARNING)
--> 572 flavor.save_model(path=local_path, mlflow_model=mlflow_model, **kwargs)
573 mlflow.tracking.fluent.log_artifacts(local_path, mlflow_model.artifact_path)
574 try:
File /databricks/python/lib/python3.10/site-packages/mlflow/pyfunc/__init__.py:1759, in save_model(path, loader_module, data_path, code_path, conda_env, mlflow_model, python_model, artifacts, signature, input_example, pip_requirements, extra_pip_requirements, metadata, **kwargs)
1748 return _save_model_with_loader_module_and_data_path(
1749 path=path,
1750 loader_module=loader_module,
(...)
1756 extra_pip_requirements=extra_pip_requirements,
1757 )
1758 elif second_argument_set_specified:
-> 1759 return mlflow.pyfunc.model._save_model_with_class_artifacts_params(
1760 path=path,
1761 signature=signature,
1762 hints=hints,
1763 python_model=python_model,
1764 artifacts=artifacts,
1765 conda_env=conda_env,
1766 code_paths=code_path,
1767 mlflow_model=mlflow_model,
1768 pip_requirements=pip_requirements,
1769 extra_pip_requirements=extra_pip_requirements,
1770 )
File /databricks/python/lib/python3.10/site-packages/mlflow/pyfunc/model.py:189, in _save_model_with_class_artifacts_params(path, python_model, signature, hints, artifacts, conda_env, code_paths, mlflow_model, pip_requirements, extra_pip_requirements)
187 saved_python_model_subpath = "python_model.pkl"
188 with open(os.path.join(path, saved_python_model_subpath), "wb") as out:
--> 189 cloudpickle.dump(python_model, out)
190 custom_model_config_kwargs[CONFIG_KEY_PYTHON_MODEL] = saved_python_model_subpath
192 if artifacts:
File /databricks/python/lib/python3.10/site-packages/cloudpickle/cloudpickle_fast.py:57, in dump(obj, file, protocol, buffer_callback)
45 def dump(obj, file, protocol=None, buffer_callback=None):
46 """Serialize obj as bytes streamed into file
47
48 protocol defaults to cloudpickle.DEFAULT_PROTOCOL which is an alias to
(...)
53 compatibility with older versions of Python.
54 """
55 CloudPickler(
56 file, protocol=protocol, buffer_callback=buffer_callback
---> 57 ).dump(obj)
File /databricks/python/lib/python3.10/site-packages/cloudpickle/cloudpickle_fast.py:602, in CloudPickler.dump(self, obj)
600 def dump(self, obj):
601 try:
--> 602 return Pickler.dump(self, obj)
603 except RuntimeError as e:
604 if "recursion" in e.args[0]:
PicklingError: Can't pickle <cyfunction bool_validator at 0x7f95076a4450>: it's not the same object as pydantic.validators.bool_validator
``` | ISSUE: Not able to log CustomLLM using mlflow.pyfunc.log_model | https://api.github.com/repos/langchain-ai/langchain/issues/11966/comments | 2 | 2023-10-18T08:11:09Z | 2024-02-08T16:16:56Z | https://github.com/langchain-ai/langchain/issues/11966 | 1,949,137,515 | 11,966 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I got this error when I built a ChatBot with Langchain using VertexAI. I'm seeing this error and couldn't find any details so far.
File /opt/conda/lib/python3.10/site-packages/langchain/llms/vertexai.py:100, in completion_with_retry.<locals>._completion_with_retry(*args, **kwargs)
98 @retry_decorator
99 def _completion_with_retry(*args: Any, **kwargs: Any) -> Any:
--> 100 return llm.client.predict(*args, **kwargs)
TypeError: TextGenerationModel.predict() got an unexpected keyword argument 'candidate_count'
### Suggestion:
_No response_ | Issue: TypeError: TextGenerationModel.predict() got an unexpected keyword argument 'candidate_count' | https://api.github.com/repos/langchain-ai/langchain/issues/11961/comments | 11 | 2023-10-18T07:06:38Z | 2024-02-15T16:08:25Z | https://github.com/langchain-ai/langchain/issues/11961 | 1,949,009,892 | 11,961 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Currently in the streaming documentation page, there is no guidance on how to interrupt the streaming once the model start generation. I want to implement the same "Stop generation" button functionality in chatGPT web, which should stopped the streaming generation. I tried to use try except but it is not working.
### Idea or request for content:
There is no documentation about how to interrupt the streaming generation once the model started generation. Even an error happens during new token generation, the program will not stop running but raise an error. How to stop the generation if an error happened? | DOC: There is no documentation about how to interrupt the streaming generation once the model started generation. | https://api.github.com/repos/langchain-ai/langchain/issues/11959/comments | 17 | 2023-10-18T06:27:34Z | 2024-06-21T20:54:46Z | https://github.com/langchain-ai/langchain/issues/11959 | 1,948,952,565 | 11,959 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm experimenting with natural language to SQL conversion using SQLDatabaseChain and SQLDatabaseAgent. In this experiment, I'm utilizing the ConversationBufferWindowMemory. However, I've encountered an issue where the memory is not functioning as expected. When I ask a question related to a previous question or answer, the chain/agent isn't handling memory and instead responds independently of the question asked based on memory.
How to fix this
@dosubot
### Suggestion:
_No response_ | Issue: <ConversationBufferWindowMemory doesn't work with db based chat history> | https://api.github.com/repos/langchain-ai/langchain/issues/11958/comments | 7 | 2023-10-18T03:44:58Z | 2024-04-24T16:37:15Z | https://github.com/langchain-ai/langchain/issues/11958 | 1,948,740,305 | 11,958 |
[
"hwchase17",
"langchain"
]
| ### System Info
It was unexpected that I had to provide the accss_token when using QianfanLLMEndpoint
Name: langchain
Version: 0.0.312
Name: qianfan
Version: 0.0.6
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
python
from langchain.llms import QianfanLLMEndpoint
llm = QianfanLLMEndpoint(qianfan_ak=client_id,qianfan_sk=client_secret,model="ERNIE-Bot-turbo")
res = llm("hi")
print(res)
```
error msg:
860 if not isinstance(prompt, str):
861 raise ValueError(
862 "Argument `prompt` is expected to be a string. Instead found "
863 f"{type(prompt)}. If you want to run the LLM on multiple prompts, use "
864 "`generate` instead."
865 )
866 return (
--> 867 self.generate(
868 [prompt],
869 stop=stop,
870 callbacks=callbacks,
871 tags=tags,
872 metadata=metadata,
873 **kwargs,
874 )
875 .generations[0][0]
...
166 )
167 AuthManager().register(self._ak, self._sk, self._access_token)
168 else:
InvalidArgumentError: both ak and sk must be provided, otherwise access_token should be provided
### Expected behavior
Normal operation | It was unexpected that I had to provide the accss_token when using QianfanLLMEndpoint | https://api.github.com/repos/langchain-ai/langchain/issues/11957/comments | 3 | 2023-10-18T03:27:29Z | 2024-05-07T16:06:08Z | https://github.com/langchain-ai/langchain/issues/11957 | 1,948,721,975 | 11,957 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.316
langserve 0.0.10
python 3.11.4 on darwin
### Who can help?
@agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
Langserve example here (https://github.com/langchain-ai/langserve-launch-example/blob/main/langserve_launch_example/chain.py) in which I want to use ConversationChain instead of ChatOpenAI.
server.py
#!/usr/bin/env python
"""A server for the chain above."""
from fastapi import FastAPI
from langserve import add_routes
from chain import conversation_chain
app = FastAPI(title="My App")
add_routes(app, conversation_chain)
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="localhost", port=8000)
chain.py
```
from langchain.llms import OpenAI`
from langchain.chains import ConversationChain
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory
template = """Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
{history}
Human: {input}
Assistant:"""
prompt = PromptTemplate(input_variables=["history", "input"], template=template)
conversation_chain = ConversationChain(
llm=OpenAI(temperature=0),
prompt=prompt,
verbose=True,
memory=ConversationBufferMemory(),
)
```
### Expected behavior
I was expecting to have a streaming response as the ChatOpenAI behavior. It seems to me that ConversationChain doesn't support streaming response. | Streaming support with ConversationChain | https://api.github.com/repos/langchain-ai/langchain/issues/11945/comments | 13 | 2023-10-17T20:36:50Z | 2024-06-21T13:11:39Z | https://github.com/langchain-ai/langchain/issues/11945 | 1,948,210,943 | 11,945 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
When looking at the documentation for agents, memory, and the Agent Executor, I noticed you pass in tools to both the agent and the executor. What's the purpose of passing the tools to both? Shouldn't you need to just pass it to one?
https://python.langchain.com/docs/use_cases/question_answering/conversational_retrieval_agents#the-agent
**The Agent:**
`agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)`
**The Agent Executor:**
`agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory, verbose=True,
return_intermediate_steps=True)`
Perhaps I don't understand what AgentExecutor is doing under the hood? If someone could explain, that would be great.
### Idea or request for content:
_No response_ | DOC: Agent & AgentExecutioner | https://api.github.com/repos/langchain-ai/langchain/issues/11937/comments | 2 | 2023-10-17T18:29:39Z | 2024-02-08T16:17:01Z | https://github.com/langchain-ai/langchain/issues/11937 | 1,948,013,473 | 11,937 |
[
"hwchase17",
"langchain"
]
| ### System Info
I compared the speed of indexing with and without using the Indexing API, and I notice a significant difference. Using Indexing API is 30-50% slower. Also for experiment, I tried to index the exact same data twice, and the Indexing API is taking extremely long time for the second time. Any help would be appreciated. Thank you.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
With Indexing API:
```
vdb = PGVector(
connection_string,
collection_name,
embedding_function
)
record_manager = SQLRecordManager(
namespace, engine=postgres_engine
)
record_manager.create_schema()
index(
doc_chunks,
record_manager,
vdb,
cleanup=None,
source_id_key="source",
)
```
Without Indexing API:
```
vdb = PGVector.from_texts(
texts
embedding
collection_name
connection_string
)
```
### Expected behavior
I expect the speeds are comparable with/without Indexing API. | Indexing API slow | https://api.github.com/repos/langchain-ai/langchain/issues/11935/comments | 10 | 2023-10-17T17:36:09Z | 2024-02-14T03:47:19Z | https://github.com/langchain-ai/langchain/issues/11935 | 1,947,924,672 | 11,935 |
[
"hwchase17",
"langchain"
]
| ### Feature request
My feature proposal involves the integration of both a greeting module and a gratitude module into the Langchain SQLDatabaseToolkit. The greeting module is designed to deliver an introductory message about the SQL-helpful bot, and the gratitude module aims to express appreciation when users interact with it.
### Motivation
The motivation behind this proposal is to enhance the user experience and make interactions with the SQLDatabaseToolkit more friendly and engaging. By adding these modules, we can create a welcoming and user-focused environment, improving user satisfaction and the overall utility of the toolkit.
I'm always frustrated when I encounter impersonal and uninspiring interactions with SQL database tools. By implementing these modules, we can address this issue and provide a more human-like and engaging experience for users.
### Your contribution
I am capable of improving and thoroughly testing the module currently | Enhancing Human-Level Interaction: Incorporating Greeting and Gratitude Modules into the Langchain SQLDatabaseToolkit | https://api.github.com/repos/langchain-ai/langchain/issues/11931/comments | 2 | 2023-10-17T17:07:58Z | 2024-02-06T16:18:21Z | https://github.com/langchain-ai/langchain/issues/11931 | 1,947,880,043 | 11,931 |
[
"hwchase17",
"langchain"
]
| ### System Info
latest version of all modules
### Who can help?
here's my PROMPT and code:
from langchain.prompts.chat import ChatPromptTemplate
updated_prompt = ChatPromptTemplate.from_messages(
[
("system",
"""
You are a knowledgeable AI assistant specializing in extracting information from the 'inquiry' table in the MySQL Database.
Your primary task is to perform a single query on the schema of the 'inquiry' table and table and retrieve the data using SQL.
When formulating SQL queries, keep the following context in mind:
- Filter records based on exact column value matches.
- If the user inquires about the Status of the inquiry fetch all these columns: status, name, and time values, and inform the user about these specific values.
- Limit query results to a maximum of 3 unless the user specifies otherwise.
- Only query necessary columns.
- Avoid querying for non-existent columns.
- Place the 'ORDER BY' clause after 'WHERE.'
- Do not add a semicolon at the end of the SQL.
If the query results in an empty set, respond with "information not found"
Use this format:
Question: The user's query
Thought: Your thought process
Action: SQL Query
Action Input: SQL query
Observation: Query results
... (repeat for multiple queries)
Thought: Summarize what you've learned
Final Answer: Provide the final answer
Begin!
"""),
("user", "{question}\n ai: "),
]
)
llm = ChatOpenAI(model=os.getenv("OPENAI_CHAT_MODEL"), temperature=0) # best result
sql_toolkit = SQLDatabaseToolkit(db=db, llm=llm)
sql_toolkit.get_tools()
sqldb_agent = create_sql_agent(
llm=llm,
toolkit=sql_toolkit,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
handle_parsing_errors=True,
)
sqldb_agent.run(updated_prompt.format(
question="What is the status of inquiry 123?"
))
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
here's my PROMPT and code:
from langchain.prompts.chat import ChatPromptTemplate
updated_prompt = ChatPromptTemplate.from_messages(
[
("system",
"""
You are a knowledgeable AI assistant specializing in extracting information from the 'inquiry' table in the MySQL Database.
Your primary task is to perform a single query on the schema of the 'inquiry' table and table and retrieve the data using SQL.
When formulating SQL queries, keep the following context in mind:
- Filter records based on exact column value matches.
- If the user inquires about the Status of the inquiry fetch all these columns: status, name, and time values, and inform the user about these specific values.
- Limit query results to a maximum of 3 unless the user specifies otherwise.
- Only query necessary columns.
- Avoid querying for non-existent columns.
- Place the 'ORDER BY' clause after 'WHERE.'
- Do not add a semicolon at the end of the SQL.
If the query results in an empty set, respond with "information not found"
Use this format:
Question: The user's query
Thought: Your thought process
Action: SQL Query
Action Input: SQL query
Observation: Query results
... (repeat for multiple queries)
Thought: Summarize what you've learned
Final Answer: Provide the final answer
Begin!
"""),
("user", "{question}\n ai: "),
]
)
llm = ChatOpenAI(model=os.getenv("OPENAI_CHAT_MODEL"), temperature=0)
sql_toolkit = SQLDatabaseToolkit(db=db, llm=llm)
sql_toolkit.get_tools()
sqldb_agent = create_sql_agent(
llm=llm,
toolkit=sql_toolkit,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
handle_parsing_errors=True,
)
sqldb_agent.run(updated_prompt.format(
question="What is the status of inquiry 123?"
))
### Expected behavior
t should be comprehensible and generate succinct results | Langchain prompt not working as expected , it's not consistence and not able to understand examples | https://api.github.com/repos/langchain-ai/langchain/issues/11929/comments | 3 | 2023-10-17T16:44:09Z | 2024-02-08T16:17:05Z | https://github.com/langchain-ai/langchain/issues/11929 | 1,947,841,518 | 11,929 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
It seems the default prompts do not support passing a JSON-Schema in create_json_agent to provide description to each field and nested fields. Wanted to understand if this hasn't been deemed necessary?
In addition, if there is a list of JSON objects, with some fields in each JSON object having nested objects or arrays, is the JSON agent the right fit, or should one try to use custom agents (perhaps Pandas) for such cases?
### Suggestion:
_No response_ | Passing a JSON-Schema in create_json_agent | https://api.github.com/repos/langchain-ai/langchain/issues/11927/comments | 3 | 2023-10-17T16:20:36Z | 2024-02-08T16:17:10Z | https://github.com/langchain-ai/langchain/issues/11927 | 1,947,798,052 | 11,927 |
[
"hwchase17",
"langchain"
]
| ### Feature request
A tool to allow agents to search and retrieve data from IMDb (https://www.imdb.com/).
### Motivation
IMDb is one of the largest movie databases available online. Adding this tool would allow agents to intelligently retrieve up-to-date movie information from IMDb, enhancing the experience of its users. For example, with this tool, users could utilize LangChain through easily accessible prompts to find movies based on favorite genres, relations to other movies, or other information such as actors and producers.
### Your contribution
If all goes well, a PR can be ready sometime in November with this feature implemented. | Adding an IMDb tool | https://api.github.com/repos/langchain-ai/langchain/issues/11926/comments | 2 | 2023-10-17T15:52:11Z | 2024-03-13T20:01:05Z | https://github.com/langchain-ai/langchain/issues/11926 | 1,947,748,350 | 11,926 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am calling LLM `ChatOpenAI` models through `load_qa_with_sources_chain` as shown in the following code snippet.
```python
llm=ChatOpenAI(
model_name=....,
temperature=0,
openai_api_key=...,
max_tokens=...,
)
chain = load_qa_with_sources_chain(
llm, chain_type=chain_type, verbose=verbose, prompt=prompt
)
response = chain(
{"input_documents": docs, "question": query},
)
```
Is there an easy way to obtain somehow the raw response of `OpenAI` for debugging purposes? I am especially interested in the `finish_reason` value (that is retrieved in the `LLMResult` object) to know if the OpenAI response is complete or not!
Thank you in advance for your help
### Suggestion:
_No response_ | Is there a way to obtain `finish_reason` value from OpenAI response when using `load_qa_with_sources_chain` | https://api.github.com/repos/langchain-ai/langchain/issues/11924/comments | 2 | 2023-10-17T15:32:03Z | 2024-02-06T16:18:36Z | https://github.com/langchain-ai/langchain/issues/11924 | 1,947,709,822 | 11,924 |
[
"hwchase17",
"langchain"
]
| ### System Info
Here chain example:
Thought:I can query the 'information_enquiry' table to find out who is assigned to a job.
Action: SQL Query
Action Input: SELECT name FROM nformation_enquiry WHERE job_id = '123'
Observation: SQL Query is not a valid tool, try one of [sql_db_query, sql_db_schema, sql_db_list_tables, sql_db_query_checker].
Thought:I made a mistake in the action. I should use the `sql_db_query` tool to execute the SQL query.
Action: sql_db_query
Action Input: SELECT name FROM customer_information_enquiry WHERE job_no = '123'
Observation:
Thought:I have the information about the person assigned to the job.
Final Answer: The job was assigned to Helena. --> this is wrong , it faked the first row
I have instructed prompt :
Begin!
Question: Who is assigned to 123?
Thought: I need to find the status of a specific job.
Action: SQL Query
Action Input: SELECT name FROM information_enquiry WHERE job_no ='123'
Observation:
Thought: I don't have the information about the job status.
Final Answer: information not found
but it's not working
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
llm = ChatOpenAI(model=os.getenv("OPENAI_CHAT_MODEL"), temperature=0)
sql_toolkit = SQLDatabaseToolkit(db=db, llm=llm)
sql_toolkit.get_tools()
sqldb_agent = create_sql_agent(
llm=llm,
toolkit=sql_toolkit,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
handle_parsing_errors=True
)
formatted_prompt = updated_prompt.format(question=query)
result = sqldb_agent.run(formatted_prompt)
sqldb_agent.run(updated_prompt.format(
question="Who is assigned to the job 123"))
### Expected behavior
it should return 'information not found' | Langchain SQLDatabaseToolkit providing incorrect results. it's faking the results using the top k rows | https://api.github.com/repos/langchain-ai/langchain/issues/11922/comments | 3 | 2023-10-17T15:02:23Z | 2024-02-09T16:14:38Z | https://github.com/langchain-ai/langchain/issues/11922 | 1,947,647,017 | 11,922 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I want to modify the prompt of the open ai function agent, so i can add more parameters that i can pass as inputs during execution
initialize_agent(tools=tool_items,
llm=llm,
agent=AgentType.OPENAI_FUNCTIONS)
and during execution such as
result = agent_chain.run({"input": "Whats the distance between Malmo and Stockholm"})
i want to be able to pass several input parameters so that for example it becomes like this
agent_chain.run({"input": "Whats the distance between Malmo and Stockholm","language":"French"}) etc
I have been able to do that successfully for other chains, but not for this one
How do i achieve this
### Suggestion:
_No response_ | Issue: How to modify the actual prompt to add in new input parameters of the open ai functions agent, so that during the running of the agent, we pass those parameters also | https://api.github.com/repos/langchain-ai/langchain/issues/11921/comments | 4 | 2023-10-17T15:00:56Z | 2024-02-12T16:11:14Z | https://github.com/langchain-ai/langchain/issues/11921 | 1,947,643,981 | 11,921 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.272
Python Version: 3.11.0
### Who can help?
@hwchase17
@ag
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The `save_context` method of the `ConversationTokenBufferMemory` does pop operation directly on `chat_memory.messages` [link to code](https://github.com/langchain-ai/langchain/blob/31f264169db4ab23689f2e179983f1cfdfd1a33a/libs/langchain/langchain/memory/token_buffer.py#L48-L49)
It works only with in-memory memory and not with db-backed memory like `PostgresChatMessageHistory` because the `buffer`( `self.chat_memory.messages` ) is immutable
```python
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer. Pruned."""
super().save_context(inputs, outputs)
# Prune buffer if it exceeds max token limit
buffer = self.chat_memory.messages
curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
if curr_buffer_length > self.max_token_limit:
pruned_memory = []
while curr_buffer_length > self.max_token_limit:
pruned_memory.append(buffer.pop(0))
curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
```
### Expected behavior
The `ChatMessageHistory` should work like an interface with`ConversationTokenBufferMemory`.
Step to resolve
- remove direct calls to `self.chat_memory.messages`
- add a private variable to hold chat history | ConversationTokenBufferMemory doesn't work with db based chat history | https://api.github.com/repos/langchain-ai/langchain/issues/11919/comments | 4 | 2023-10-17T14:28:09Z | 2024-02-11T16:10:42Z | https://github.com/langchain-ai/langchain/issues/11919 | 1,947,569,327 | 11,919 |
[
"hwchase17",
"langchain"
]
| we are trying to use langchain s3 loader to load files from the bucket using python, once we create any subfolder we are getting no such directory error. also we are not able to load the files from those sub folders.
How to fix these errors? | langchain s3 loader not able to load files from subfolders | https://api.github.com/repos/langchain-ai/langchain/issues/11917/comments | 4 | 2023-10-17T13:00:11Z | 2024-02-09T16:14:53Z | https://github.com/langchain-ai/langchain/issues/11917 | 1,947,371,287 | 11,917 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.11
Lanchain 315
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
import openai
import os
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.3)
print(llm.predict("What is the capital of India?"))
### Expected behavior
When OpenAI quotas are reached (or no payment method is defined), requests should not be retried but should raise an appropriate error.
Error in console :
```
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised RateLimitError: You exceeded your current quota, please check your plan and billing details..
```
Equivalent TS issue : https://github.com/langchain-ai/langchainjs/issues/1929
Fix in TS land : https://github.com/langchain-ai/langchainjs/pull/1934 | Support for OpenAI quotas | https://api.github.com/repos/langchain-ai/langchain/issues/11914/comments | 2 | 2023-10-17T10:02:45Z | 2024-02-06T16:19:01Z | https://github.com/langchain-ai/langchain/issues/11914 | 1,947,039,933 | 11,914 |
[
"hwchase17",
"langchain"
]
| ### Discussed in https://github.com/langchain-ai/langchain/discussions/11846
<div type='discussions-op-text'>
<sup>Originally posted by **Marbyun** October 16, 2023</sup>
Hi folks!
i have case to create chatbot that will use 2 sources as the dataset (text file and sqlite). I use Multiple Retrieaval Sources [document](https://python.langchain.com/docs/use_cases/question_answering/multiple_retrieval) as my main code. But the code can not do continuous conversation like this:
> q:'what's name of employee id xx.xx'
> a:'his name is xxx xxx'
> q:'what's his email?'
> a:'his email is [email protected]'
and then i get code from this [update](https://github.com/langchain-ai/langchain/pull/8597/files) by @keenborder786 that can perfomance my q&a, but rigth now i get confuse how to combine it? can you guys help me please, i am very new in here...
> Ps:
> - I have 2 sources Text file have data about company profile and SQLite have employee data
> - Need to combine the sources and can do continuous conversation</div> | How to use Multiple Retrieaval Sources and Added Memory at SQLDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/11908/comments | 2 | 2023-10-17T07:13:01Z | 2024-02-06T16:19:07Z | https://github.com/langchain-ai/langchain/issues/11908 | 1,946,710,273 | 11,908 |
[
"hwchase17",
"langchain"
]
| @dosu-bot
When I use chromadb instead of deeplake, my code worked fine but with deeplake, I am facing this error when I use deeplake for some reason despite everything is the exact same thing:
creating embeddings: 12%|██████████▏ | 3/26 [00:02<00:17, 1.32it/s]Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for text-embedding-ada-002 in organization org-m0YReKtLXxUATOVCwzcBNfqm on requests per min. Limit: 3 / min. Please try again in 20s. Contact us through our help center at help.openai.com if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method..
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for text-embedding-ada-002 in organization org-m0YReKtLXxUATOVCwzcBNfqm on requests per min. Limit: 3 / min. Please try again in 20s. Contact us through our help center at help.openai.com if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method..
Below is my code:
from langchain.chat_models import ChatOpenAI
from langchain.prompts.prompt import PromptTemplate
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Qdrant
from langchain.callbacks import StreamingStdOutCallbackHandler
from langchain.embeddings import OpenAIEmbeddings
from langchain.callbacks import get_openai_callback
from langchain.vectorstores import Chroma
from langchain.vectorstores import DeepLake
from langchain.document_loaders import DirectoryLoader, PyPDFLoader
from dotenv import load_dotenv
import time
import warnings
# warnings.filterwarnings("ignore")
load_dotenv()
directory_path = "C:\\Users\\Asus\\Documents\\Vendolista"
pdf_loader = DirectoryLoader(directory_path,
glob="**/*.pdf",
show_progress=True,
use_multithreading=True,
silent_errors=True,
loader_cls = PyPDFLoader)
documents = pdf_loader.load()
print(str(len(documents))+ " documents loaded")
def print_letter_by_letter(text):
for char in text:
print(char, end='', flush=True)
time.sleep(0.02)
# Create embeddings
def langchain(customer_prompt, chat_history):
llm = ChatOpenAI(temperature = 0, model_name='gpt-3.5-turbo', callbacks=[StreamingStdOutCallbackHandler()], streaming = True)
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=800,
chunk_overlap=80,
)
chunks = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
persist_directory = "C:\\Users\\Asus\\OneDrive\\Documents\\Vendolista"
knowledge_base = Chroma.from_documents(chunks, embeddings, persist_directory = persist_directory)
#save to disk
knowledge_base.persist()
#To delete the DB we created at first so that we can be sure that we will load from disk as fresh db
knowledge_base = None
new_knowledge_base = Chroma(persist_directory = persist_directory, embedding_function = embeddings)
# Create a custom prompt for your use case
prompt_template = """
Answer the Question as a AI assistant that is answering based on the documents only. If the question is unrelated
then say "sorry this question is completely not related. If you think it is, email the staff
and they will get back to you: [email protected]." Do not ever answer with "I don't know" to any question.
You either give an answer or mention it's not related.
Text: {context}
Question: {question}
Answer :
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
memory = ConversationBufferMemory(llm=llm, memory_key='chat_history', input_key='question', output_key='answer', return_messages=False)
conversation = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=new_knowledge_base.as_retriever(),
memory=memory,
chain_type="stuff",
verbose=True,
combine_docs_chain_kwargs={"prompt":PROMPT}
)
return conversation({"question": customer_prompt, "chat_history": memory})
def main():
chat_history = []
while True:
customer_prompt = input("Ask me anything about the files (type 'exit' to quit): ")
if customer_prompt.lower() in ["exit"] and len(customer_prompt) == 4:
end_chat = "Thank you for visiting us! Have a nice day"
print_letter_by_letter(end_chat)
break
if customer_prompt:
with get_openai_callback() as cb:
response = (langchain(customer_prompt, chat_history))
print(response['answer'])
print(cb)
if __name__ == "__main__":
main() | RateLimitError | https://api.github.com/repos/langchain-ai/langchain/issues/11907/comments | 2 | 2023-10-17T06:43:59Z | 2024-02-08T16:17:35Z | https://github.com/langchain-ai/langchain/issues/11907 | 1,946,665,722 | 11,907 |
[
"hwchase17",
"langchain"
]
| ### Feature request
There should be a callback handler like [OpenAICallbackHandler](https://github.com/langchain-ai/langchain/blob/31f264169db4ab23689f2e179983f1cfdfd1a33a/libs/langchain/langchain/callbacks/openai_info.py#L120) for AWS Bedrock models, so that we can easily get the token usage and monitor cost.
It looks like the AWS Bedrock API doesn't return with token usage. Unlike OpenAI.
Can we have similar feature, perhaps [using the internal Anthropic tokenizer to count](https://github.com/langchain-ai/langchain/blob/31f264169db4ab23689f2e179983f1cfdfd1a33a/libs/langchain/langchain/llms/bedrock.py#L405)?
### Motivation
Cost monitoring
### Your contribution
PR | Enable token usage count for AWS Bedrock | https://api.github.com/repos/langchain-ai/langchain/issues/11906/comments | 7 | 2023-10-17T05:16:38Z | 2024-03-30T14:02:02Z | https://github.com/langchain-ai/langchain/issues/11906 | 1,946,564,023 | 11,906 |
[
"hwchase17",
"langchain"
]
| @dosu-bot
Below is my code in python about creating a Q & A using Langchain with openai API. I have 3 issues I want to fix:
1) The answer is always being repeated twice.
2) I am using ConversationalRetrievalChain from LangChain therefore, I want to retrieve the source of the document when I get my answer.
3) I want to change the chain_type = "stuff" into Map Re Rank.
from langchain.chat_models import ChatOpenAI
from langchain.prompts.prompt import PromptTemplate
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Qdrant
from langchain.callbacks import StreamingStdOutCallbackHandler
from langchain.embeddings import OpenAIEmbeddings
from langchain.callbacks import get_openai_callback
from langchain.vectorstores import Chroma
from langchain.document_loaders import DirectoryLoader, PyPDFLoader
from dotenv import load_dotenv
import time
import warnings
# warnings.filterwarnings("ignore")
load_dotenv()
def print_letter_by_letter(text):
for char in text:
print(char, end='', flush=True)
time.sleep(0.02)
# Create embeddings
def langchain(customer_prompt, chat_history):
directory_path = "C:\\Users\\Asus\\Documents\\Vendolista"
pdf_loader = DirectoryLoader(directory_path,
glob="**/*.pdf",
show_progress=True,
use_multithreading=True,
silent_errors=True,
loader_cls = PyPDFLoader)
documents = pdf_loader.load()
print(str(len(documents))+ " documents loaded")
llm = ChatOpenAI(temperature = 0, model_name='gpt-3.5-turbo', callbacks=[StreamingStdOutCallbackHandler()], streaming = True)
# Split into chunks
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=800,
chunk_overlap=80,
)
chunks = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
persist_directory = "C:\\Users\\Asus\\OneDrive\\Documents\\Vendolista"
knowledge_base = Chroma.from_documents(chunks, embeddings, persist_directory = persist_directory)
#save to disk
knowledge_base.persist()
#To delete the DB we created at first so that we can be sure that we will load from disk as fresh db
knowledge_base = None
new_knowledge_base = Chroma(persist_directory = persist_directory, embedding_function = embeddings)
# Create a custom prompt for your use case
prompt_template = """
Answer the Question as a AI assistant that is answering based on the documents only. If the question is unrelated
then say "sorry this question is completely not related. If you think it is, email the staff
and they will get back to you: [email protected]." Do not ever answer with "I don't know" to any question.
You either give an answer or mention it's not related.
Text: {context}
Question: {question}
Answer :
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
memory = ConversationBufferMemory(llm=llm, memory_key='chat_history', input_key='question', output_key='answer', return_messages=True)
conversation = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=new_knowledge_base.as_retriever(),
memory=memory,
chain_type="stuff",
combine_docs_chain_kwargs={"prompt":PROMPT}
)
return conversation({"question": customer_prompt, "chat_history": memory})
def main():
chat_history = []
while True:
customer_prompt = input("Ask me anything about the files (type 'exit' to quit): ")
if customer_prompt.lower() in ["exit"] and len(customer_prompt) == 4:
end_chat = "Thank you for visiting us! Have a nice day"
print_letter_by_letter(end_chat)
break
if customer_prompt:
with get_openai_callback() as cb:
response = langchain(customer_prompt, chat_history)
print(response['answer'])
print(cb)
if __name__ == "__main__":
main() | Repetitive answer and not getting source of documents | https://api.github.com/repos/langchain-ai/langchain/issues/11905/comments | 2 | 2023-10-17T04:03:05Z | 2024-02-06T16:19:17Z | https://github.com/langchain-ai/langchain/issues/11905 | 1,946,493,093 | 11,905 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I want to load doc type file use UnstructuredFileLoader, andI installed all require library, but I got this mistake:
Traceback (most recent call last):
File "D:\soft\Anaconda3\lib\site-packages\unstructured\partition\doc.py", line 67, in partition_doc
convert_office_doc(
File "D:\soft\Anaconda3\lib\site-packages\unstructured\partition\common.py", line 294, in convert_office_doc
logger.info(output.decode().strip())
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd0 in position 42: invalid continuation byte.
Then I see the code of function convert_office_doc that can not change encoding type, this mistake is 'utf-8' is not valid encoding type, so I changed this to 'gb2312', it worked.
So, whether a parameter should be added for the encoding type when reading the file?
### Idea or request for content:
So, whether a parameter should be added for the encoding type when reading the file?
For example, UnstructuredFileLoader(file_path, file_name, coding='utf-8'). | logger.info(output.decode().strip()) of common.py will raise a mistake when use convert_office_doc to convert .doc file to .docx file because it is not encode by 'utf-8' | https://api.github.com/repos/langchain-ai/langchain/issues/11898/comments | 2 | 2023-10-17T01:16:58Z | 2024-02-07T00:57:56Z | https://github.com/langchain-ai/langchain/issues/11898 | 1,946,344,621 | 11,898 |
[
"hwchase17",
"langchain"
]
| ### Feature request
combines LangChain with Stable Diffusion to generate a text-related image
### Motivation
The motivation behind this feature proposal is to enhance the capabilities of LangChain by integrating it with Stable Diffusion to enable the generation of images based on text inputs. This integration serves several purposes:
Enriched User Experience: The ability to generate images from text can significantly enrich user experiences in various applications. It can be applied to chatbots, content generation, creative tools, and more.
Creative Content Generation: This feature can empower users to generate creative content, artwork, or visualizations based on their textual ideas or descriptions.
Enhanced Language Model Integration: By integrating with Stable Diffusion, LangChain can harness the power of state-of-the-art generative models to create meaningful images that complement the textual context.
Research and Innovation: This integration can also serve as a research platform to explore the synergy between language models and generative image models, advancing the field of AI and creative content generation.
### Your contribution
Identify Relevant Repositories, Submit a Pull Request, Engage with the Community | LangChain with Stable Diffusion | https://api.github.com/repos/langchain-ai/langchain/issues/11894/comments | 8 | 2023-10-16T21:53:29Z | 2024-03-18T16:05:49Z | https://github.com/langchain-ai/langchain/issues/11894 | 1,946,164,422 | 11,894 |
[
"hwchase17",
"langchain"
]
| ### System Info
I'm using jupyter notebook and Azure OpenAI
Python 3.11.5
langchain==0.0.315
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I got the error:
InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.
When I run
```
from langchain.embeddings.openai import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
sentence1 = "i like dogs"
embedding1 = embedding.embed_query(sentence1)
```
But if I run - not using Lanchain - it works fine:
```
response = openai.Embedding.create(
input="Your text string goes here",
model="text-embedding-ada-002",
engine="embeddingstest"
)
embeddings = response['data'][0]['embedding']
```
### Expected behavior
I would expect the embeddings of my string. | API deployment not found when using Azure with embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/11893/comments | 3 | 2023-10-16T21:00:04Z | 2023-10-17T14:16:56Z | https://github.com/langchain-ai/langchain/issues/11893 | 1,946,085,853 | 11,893 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.11.6
Langchain 0.0.315
Device name Precision7760
Processor 11th Gen Intel(R) Core(TM) i9-11950H @ 2.60GHz 2.61 GHz
Installed RAM 32.0 GB (31.2 GB usable)
Device ID 049EB0D9-D534-47A1-9F59-62B1F3D578D4
Product ID 00355-60713-95419-AAOEM
System type 64-bit operating system, x64-based processor
Pen and touch No pen or touch input is available for this display
Edition Windows 11 Pro
Version 22H2
Installed on 10/7/2023
OS build 22621.2428
Experience Windows Feature Experience Pack 1000.22674.1000.0
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.schema import AIMessage, HumanMessage
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain
chat = ChatOpenAI()
memory = ConversationBufferMemory()
conversation = ConversationChain(llm=chat, memory=memory)
user_input="Hola atenea yo me llamo franks y estoy interesado en adquirir un vehículo"
response = conversation.run([HumanMessage(content=str(user_input))])
print(response)
I am getting this errors:
Traceback (most recent call last):
File "F:\Audi\Chatbot_Demo\mini_example.py", line 12, in <module>
response = conversation.run([HumanMessage(content=str(user_input))])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 503, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 310, in __call__
final_outputs: Dict[str, Any] = self.prep_outputs(
^^^^^^^^^^^^^^^^^^
File "E:\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 406, in prep_outputs
self.memory.save_context(inputs, outputs)
File "E:\Python\Python311\Lib\site-packages\langchain\memory\chat_memory.py", line 36, in save_context
self.chat_memory.add_user_message(input_str)
File "E:\Python\Python311\Lib\site-packages\langchain\schema\chat_history.py", line 46, in add_user_message
self.add_message(HumanMessage(content=message))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Python\Python311\Lib\site-packages\langchain\load\serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "E:\Python\Python311\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for HumanMessage
content
**str type expected (type=type_error.str)**
### Expected behavior
The argument passed to HumanMessage is a str, nevertheless an error is produced (as it wasn't). | HumanMessage error expecting a str type in content | https://api.github.com/repos/langchain-ai/langchain/issues/11882/comments | 4 | 2023-10-16T19:31:43Z | 2024-06-03T00:42:51Z | https://github.com/langchain-ai/langchain/issues/11882 | 1,945,944,555 | 11,882 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am uing the code provided in the Chat-Langchain implementation
```python
def load_langchain_docs():
return SitemapLoader(
"https://python.langchain.com/sitemap.xml",
filter_urls=["https://python.langchain.com/"],
parsing_function=langchain_docs_extractor,
default_parser="lxml",
bs_kwargs={
"parse_only": SoupStrainer(
name=("article", "title", "html", "lang", "content")
),
},
meta_function=metadata_extractor,
).load()
```
When Fetcing pages complete the error comes.
```
Fetching pages: 100%|###################################################################################################################| 944/944 [04:17<00:00, 3.66it/s]
Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x00000202B4D78CA0>
Traceback (most recent call last):
File "C:\Users\meetg\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 116, in __del__
self.close()
File "C:\Users\meetg\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 108, in close
self._loop.call_soon(self._call_connection_lost, None)
File "C:\Users\meetg\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 746, in call_soon
self._check_closed()
File "C:\Users\meetg\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 510, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ C:\python\Projects\Langchain\save_langchain_docs.py: │
│ 201 in <module> │
│ │
│ 198 │ working_memory.save(documents=docs_transformed) │
│ 199 │
│ 200 if __name__ == "__main__": │
│ ❱ 201 │ ingest_docs() │
│ 202 │
│ │
│ C:\python\Projects\Langchain\save_langchain_docs.py: │
│ 182 in ingest_docs │
│ │
│ 179 def ingest_docs(): │
│ 180 │ docs_from_documentation = load_langchain_docs() │
│ 181 │ logger.info(f"Loaded {len(docs_from_documentation)} docs from documentation") │
│ ❱ 182 │ docs_from_api = load_api_docs() │
│ 183 │ logger.info(f"Loaded {len(docs_from_api)} docs from API") │
│ 184 │ │
│ 185 │ text_splitter = RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=200) │
│ │
│ C:\python\Projects\Langchain\save_langchain_docs.py: │
│ 159 in load_api_docs │
│ │
│ 156 │
│ 157 │
│ 158 def load_api_docs(): │
│ ❱ 159 │ return RecursiveUrlLoader( │
│ 160 │ │ url="https://api.python.langchain.com/en/latest/", │
│ 161 │ │ max_depth=8, │
│ 162 │ │ extractor=simple_extractor, │
│ │
│ C:\python\Projects\Gesture │
│ Scrolling\keyvenv\lib\site-packages\langchain\document_loaders\recursive_url_loader.py:112 in │
│ __init__ │
│ │
│ 109 │ │ self.timeout = timeout │
│ 110 │ │ self.prevent_outside = prevent_outside if prevent_outside is not None else True │
│ 111 │ │ self.link_regex = link_regex │
│ ❱ 112 │ │ self._lock = asyncio.Lock() if self.use_async else None │
│ 113 │ │ self.headers = headers │
│ 114 │ │ self.check_response_status = check_response_status │
│ 115 │
│ │
│ C:\Users\meetg\AppData\Local\Programs\Python\Python39\lib\asyncio\locks.py:81 in __init__ │
│ │
│ 78 │ │ self._waiters = None │
│ 79 │ │ self._locked = False │
│ 80 │ │ if loop is None: │
│ ❱ 81 │ │ │ self._loop = events.get_event_loop() │
│ 82 │ │ else: │
│ 83 │ │ │ self._loop = loop │
│ 84 │ │ │ warnings.warn("The loop argument is deprecated since Python 3.8, " │
│ │
│ C:\Users\meetg\AppData\Local\Programs\Python\Python39\lib\asyncio\events.py:642 in │
│ get_event_loop │
│ │
│ 639 │ │ │ self.set_event_loop(self.new_event_loop()) │
│ 640 │ │ │
│ 641 │ │ if self._local._loop is None: │
│ ❱ 642 │ │ │ raise RuntimeError('There is no current event loop in thread %r.' │
│ 643 │ │ │ │ │ │ │ % threading.current_thread().name) │
│ 644 │ │ │
│ 645 │ │ return self._local._loop │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: There is no current event loop in thread 'MainThread'.
```
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [x] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [x] Async
### Reproduction
By executing the function `load_langchain_docs()` from the above code snippet, error will come.
### Expected behavior
All the pages should be scraped and loaded to `Document` class | No current event loop in thread 'MainThread' error while using SitemapLoader() | https://api.github.com/repos/langchain-ai/langchain/issues/11879/comments | 10 | 2023-10-16T19:06:17Z | 2024-02-13T16:10:38Z | https://github.com/langchain-ai/langchain/issues/11879 | 1,945,907,441 | 11,879 |
[
"hwchase17",
"langchain"
]
| ### System Info
Name: kuzu
Version: 0.0.10 (latest)
Name: langchain
Version: 0.0.311
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am using the example from langchain documentation:
https://python.langchain.com/docs/use_cases/graph/graph_kuzu_qa
This line fails:
graph = KuzuGraph(db)
F```
ile [~/.local/lib/python3.10/site-packages/langchain/graphs/kuzu_graph.py:71](https://file+.vscode-resource.vscode-cdn.net/home/steve/Projects/homelab/homeapi/notebooks/~/.local/lib/python3.10/site-packages/langchain/graphs/kuzu_graph.py:71), in KuzuGraph.refresh_schema(self)
69 for table in rel_tables:
70 current_table_schema = {"properties": [], "label": table["name"]}
---> 71 properties_text = self.conn._connection.get_rel_property_names(
72 table["name"]
73 ).split("\n")
74 for i, line in enumerate(properties_text):
75 # The first 3 lines defines src, dst and name, so we skip them
76 if i < 3:
AttributeError: 'kuzu._kuzu.Connection' object has no attribute 'get_rel_property_names'
```
### Expected behavior
should work like example | KuzuGraph not working | https://api.github.com/repos/langchain-ai/langchain/issues/11874/comments | 5 | 2023-10-16T18:10:46Z | 2024-02-15T19:17:09Z | https://github.com/langchain-ai/langchain/issues/11874 | 1,945,812,797 | 11,874 |
[
"hwchase17",
"langchain"
]
| ### Feature request
connect langchain to denodo database. while support for mysql, postgres etc exists, there is currently none for more niche platforms like denodo. Somthing like this


### Motivation
many organizations now use denodo for storing massive datasets , providing denodo connectivity will greatly enhance the reach of llamaindex and add immense value
### Your contribution
I can perform thorough testing of the feature and then update the documentation. | Denodo connector for langchain | https://api.github.com/repos/langchain-ai/langchain/issues/11873/comments | 1 | 2023-10-16T17:37:46Z | 2024-02-06T16:19:31Z | https://github.com/langchain-ai/langchain/issues/11873 | 1,945,756,011 | 11,873 |
[
"hwchase17",
"langchain"
]
| ### System Info
N/A (issue pertains to the web tutorial, [Retrieval-augmented generation (RAG)](https://python.langchain.com/docs/use_cases/question_answering/#quickstart))
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Navigate to [Retrieval-augmented generation (RAG)](https://python.langchain.com/docs/use_cases/question_answering/#quickstart)].
2. Attempt to access the links provided below:
- "Open in Colab" link, which is intended to direct users to a Colab notebook: [Open in Colab](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/question_answering/qa.ipynb)
- "See details in here on the multi-vector retriever for this purpose" link, which should provide more information on the multi-vector retriever: [See details in here on the multi-vector retriever for this purpose](https://python.langchain.com/docs/use_cases/question_answering/docs/modules/data_connection/retrievers/multi_vector)
3. Observe that these links do not lead to the intended resources and instead result in 404 errors or lead to pages that no longer exist.
### Expected behavior
All links in the tutorial lead to active, relevant pages. | Dead Links in Retrieval-Augmented Generation (RAG) Tutorial | https://api.github.com/repos/langchain-ai/langchain/issues/11871/comments | 2 | 2023-10-16T16:19:47Z | 2024-02-06T16:19:37Z | https://github.com/langchain-ai/langchain/issues/11871 | 1,945,627,705 | 11,871 |
[
"hwchase17",
"langchain"
]
| ### System Info
Subject: Bug Report - Langchain API Query Limitation
I am currently utilizing the following code snippet:
```typescript
let data: JsonObject;
try {
const jsonFile = await fs.readFileSync(
'./src/openAI/data/formattedResults.json',
'utf8',
);
data = JSON.parse(jsonFile) as JsonObject;
if (!data) {
throw new Error('Failed to load JSON spec');
}
} catch (e) {
console.error(e);
return;
}
const toolkit = new JsonToolkit(new JsonSpec(data));
const executor = await createJsonAgent(model, toolkit);
const res = await executor.call({ input: question }).catch((err) => {
console.log(err);
});
```
While interacting with the Langchain API, I noticed an issue. When I asked any question, it only returned the top few projects as answers. For instance, when I queried the agent, 'How many projects are there?' it correctly identified that there are 289 projects. However, it incorrectly responded with 'There are 5 projects' instead of providing the accurate count.
Upon further investigation, I realized that the Langchain API appears to be limiting the results to the top 5 data entries, rather than considering the entire dataset of 289 projects. This issue is affecting the accuracy of the responses.
I kindly request assistance in resolving this limitation so that the Langchain API can process and return the complete dataset accurately. Please advise on the necessary steps to address this issue and ensure that all relevant data is considered in responses to queries.
Thank you for your prompt attention to this matter.
### Who can help?
@eyurtsev
@hwchase17
@agola11
@dosu-beta
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
let data: JsonObject;
try {
const jsonFile = await fs.readFileSync(
'./src/openAI/data/formattedResults.json',
'utf8',
);
data = JSON.parse(jsonFile) as JsonObject;
if (!data) {
throw new Error('Failed to load JSON spec');
}
} catch (e) {
console.error(e);
return;
}
const toolkit = new JsonToolkit(new JsonSpec(data));
const executor = await createJsonAgent(model, toolkit);
const res = await executor.call({ input: question }).catch((err) => {
console.log(err);
});
<img width="735" alt="image" src="https://github.com/langchain-ai/langchain/assets/82230052/a0d70ea4-b236-4bb7-9450-52838aff97c5">
Totally there were 289 projects
<img width="178" alt="image" src="https://github.com/langchain-ai/langchain/assets/82230052/881bff40-0ca4-4729-83c5-83f2fe982fe9">
### Expected behavior
The agent should take into consideration all the data, rather than top few data | createJsonAgent returns answer based on top 5 data only | https://api.github.com/repos/langchain-ai/langchain/issues/11867/comments | 2 | 2023-10-16T15:15:52Z | 2024-02-08T16:17:50Z | https://github.com/langchain-ai/langchain/issues/11867 | 1,945,481,349 | 11,867 |
[
"hwchase17",
"langchain"
]
| ### System Info
Windows 10
Python 3.10.11
I am using FAISS as the vectorstore, HuggingFaceEmbeddings, and the llm is HuggingFaceHub with google/flan-t5-small. There are no changes to any of these elements when running with and without 'with_sources'.
All libraries were installed within the last week.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
embeddings = HuggingFaceEmbeddings()
db = FAISS.load_local("FAISS-DB", embeddings)
llm = HuggingFaceHub(repo_id="google/flan-t5-small", model_kwargs={"temperature": 0.2, "max_length": 256})
query = "what does the company sodexo provide?"
similarity_docs = db.similarity_search(query)
chain = load_qa_chain(llm, chain_type="stuff")
results = chain.run(input_documents=similarity_docs, question=query)
print(query)
print(results)
similarity_docs = db.similarity_search(query)
chain = load_qa_with_sources_chain (llm, chain_type="stuff")
results = chain.run(input_documents=similarity_docs, question=query)
print(query)
print(results)
------------------------------
Output:
what does the company sodexo provide?
integrated food, facilities management and other services
what does the company sodexo provide?
0
[similarity_docs.txt](https://github.com/langchain-ai/langchain/files/12918308/similarity_docs.txt)
### Expected behavior
The expected result was the same answer with the source, similar to {'answer': 'integrated food, facilities management and other services', 'source': 'Inputs\\bodyExtract_71756.txt'} | load_qa_chain succeeds where load_qa_with_sources_chain fails to return result | https://api.github.com/repos/langchain-ai/langchain/issues/11865/comments | 2 | 2023-10-16T14:55:08Z | 2024-02-06T16:19:46Z | https://github.com/langchain-ai/langchain/issues/11865 | 1,945,420,637 | 11,865 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
When I use the chatglm code from the official documentation, I can't connect to the endpoint URL provided there.
The official website is: https://python.langchain.com/docs/integrations/llms/chatglm
The specific code is as follows:
from langchain.llms import ChatGLM
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
template = """{question}"""
prompt = PromptTemplate(template=template, input_variables=["question"])
endpoint_url = "http://127.0.0.1:8000"
llm = ChatGLM(
endpoint_url=endpoint_url,
max_token=80000,
history=[["我将从美国到中国来旅游,出行前希望了解中国的城市", "欢迎问我任何问题。"]],
top_p=0.9,
model_kwargs={"sample_model_args": False},
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "北京和上海两座城市有什么不同?"
llm_chain.run(question)
The code is encountering the following error:
ValueError: Error raised by inference endpoint: HTTPConnectionPool(host='127.0.0.1', port=8000): Max retries exceeded with
url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x00000163A6691F70>: Failed to establish a new connection: [WinError 10061] 由于目标计算机积极拒绝,无法连接。'))
I have tried many methods I found online, but they didn't have much effect. I'm not sure if it's necessary to open a specific server through cmd to connect to 127.0.0.1:8000.
### Suggestion:
_No response_ | Issue: When using ChatGLM, I can't connect to the endpoint_url. | https://api.github.com/repos/langchain-ai/langchain/issues/11859/comments | 5 | 2023-10-16T11:35:37Z | 2024-02-11T16:10:51Z | https://github.com/langchain-ai/langchain/issues/11859 | 1,945,007,015 | 11,859 |
[
"hwchase17",
"langchain"
]
| This is my current code and it does run however it answers completely differently sometimes and it also writes the answer 3 or 4 times. Please fix it for me and if possible, fix my templates to make my output consistent
from dotenv import load_dotenv
import csv
import PyPDF2
from PyPDF2 import PdfReader
from langchain.document_loaders import DirectoryLoader, PyPDFLoader, PyPDFDirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.callbacks import get_openai_callback
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
import time
from langchain.vectorstores import Qdrant
from langchain.vectorstores import Chroma
from langchain.vectorstores import deeplake
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.callbacks import StreamingStdOutCallbackHandler
import pandas as pd
from docx import Document
from nltk.tokenize import sent_tokenize, word_tokenize
from collections import Counter
from nltk.corpus import stopwords
import os
def print_letter_by_letter(text):
for char in text:
print(char, end='', flush=True)
time.sleep(0.02)
def main():
load_dotenv()
my_activeloop_org_id = "yazanrisheh"
my_activeloop_dataset_name = "langchain_course_customer_support"
dataset_path = f"hub://{my_activeloop_org_id}/{my_activeloop_dataset_name}"
# directory_path = input("Copy your directory path here or upload a file: ")
directory_path = "C:\\Users\\Asus\\Documents\\Vendolista"
# pdf_loader = DirectoryLoader(directory_path,
# glob="**/*.pdf",
# show_progress=True,
# use_multithreading=True,
# silent_errors=True,
# loader_cls = PyPDFLoader)
pdf_loader = PyPDFDirectoryLoader(directory_path)
documents = pdf_loader.load()
print(str(len(documents))+ " documents loaded")
llm = ChatOpenAI(temperature = 0, model_name='gpt-3.5-turbo', callbacks=[StreamingStdOutCallbackHandler()], streaming = True)
# Split into chunks
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=800,
chunk_overlap=100,
)
chunks = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
persist_directory = "C:\\Users\\Asus\\OneDrive\\Documents\\Vendolista"
knowledge_base = Chroma.from_documents(chunks, embeddings, persist_directory = persist_directory)
# save to disk
knowledge_base.persist()
#To delete the DB we created at first so that we can be sure that we will load from disk as fresh db
knowledge_base = None
new_knowledge_base = Chroma(persist_directory = persist_directory, embedding_function = embeddings)
# weird_knowledge_base = deeplake(chunks, dataset_path=dataset_path, embedding=embeddings)
# knowledge_base = Qdrant(documents, embeddings)
p_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
If the Standalone question is empty or cannot be generated, use the follow up question as Standalone question.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
#CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)
CONDENSE_QUESTION_PROMPT = PromptTemplate(input_variables=["chat_history","question"],
template=p_template)
memory = ConversationBufferMemory(memory_key="chat_history",input_key="question",output_key='answer',return_messages=True)
chatTemplate = """
Important: You are an intelligent chatbot designed to help agents by answering questions only on Enterprise services & activities.
Answer the question only if there is information in the chat history(delimited by ) and context(delimited by ) below.
If context is not empty and answer cannot be determined from context, say "I cannot detemine the answer from context".
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Do not print your answer starting with "Answer:"
{context} ----------- {chat_history} ----------- Question: {question} Answer: Answer the question only if there is information based on the chat history(delimited by ) and context(delimited by ) below. 1. If context is not empty and answer cannot be determined from context, say "I cannot detemine the answer from context". 2. If you don't know the answer, just say that you don't know, don't try to make up an answer. 3. Do not print your answer starting with "Answer:"
"""
promptHist = PromptTemplate(
input_variables=["context", "question", "chat_history"],
template=chatTemplate
)
qa = ConversationalRetrievalChain.from_llm(
llm = llm,
retriever = new_knowledge_base.as_retriever(),
condense_question_prompt=CONDENSE_QUESTION_PROMPT,
verbose=True,
memory=memory,
condense_question_llm=llm,
return_generated_question=False,
combine_docs_chain_kwargs={"prompt": promptHist},
return_source_documents=False,
)
while True:
question = input("Ask me anything about the files (type 'exit' to quit): ")
if question.lower() in ["exit"] and len(question) == 4:
end_chat = "Thank you for visiting us! Have a nice day"
print_letter_by_letter(end_chat)
break
if question:
# chat_history = []
with get_openai_callback() as cb:
response = qa({"question": question}, return_only_outputs = False)
# chat_history.append(('user', question))
# chat_history.append(('AI', response))
print("AI:", response)
print(cb)
if __name__ == '__main__':
main()
| Not answering correctly according to prompt and writes same answer 3-4 times | https://api.github.com/repos/langchain-ai/langchain/issues/11857/comments | 7 | 2023-10-16T10:23:37Z | 2023-10-16T11:47:30Z | https://github.com/langchain-ai/langchain/issues/11857 | 1,944,879,698 | 11,857 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
hi @dosu-bot , @dosu-beta :
I am not able to figure out how we can pass variale to the input_variable in prompt :
prompt_with_loader = PromptTemplate(
input_variables=["query", "username", "password"],
template=
"""
Based on the instruction you are tasked to construct a Python script that performs the following action: {query}. Utilize the specifics in the given API documentation to understand the required endpoint, method, and parameters.
Credentials for authentication:
Username: {username}
Password: {password}
Please create a Python script using the 'requests' library to execute the desired action. Ensure the script includes:
- Parsing of the necessary details from the API specification
- Basic Authentication with the provided username and password (encoded in Base 64)
- Proper use of the relevant HTTP method, headers, body, and parameters as per the API specification
- print the response of the api
- Error handling for the request
"""
)
chain_type_kwargs = {"prompt": prompt_with_loader} ,
**qa_chain = RetrievalQAWithSourcesChain.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=retriever,
chain_type_kwargs=chain_type_kwargs,
return_source_documents=False,
verbose=True
)**. // when i am goining to hit this block it gives me this ERRR : document_variable_name summaries was not found in llm_chain input_variables: ['query', 'username', 'password'] (type=value_error)
pls can you guide us here
### Suggestion:
_No response_ | Issue: Not clear through docs: how we can pass variale to the input_variable in prompt | https://api.github.com/repos/langchain-ai/langchain/issues/11856/comments | 5 | 2023-10-16T10:11:06Z | 2024-02-11T16:10:57Z | https://github.com/langchain-ai/langchain/issues/11856 | 1,944,857,017 | 11,856 |
[
"hwchase17",
"langchain"
]
| Can someone help me fix this code please.
My error: Traceback (most recent call last):
File "C:\Users\Asus\Documents\Vendolista\app2.py", line 130, in <module>
main()
File "C:\Users\Asus\Documents\Vendolista\app2.py", line 99, in main
qa = ConversationalRetrievalChain.from_llm(
File "C:\Users\Asus\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 356, in from_llm
return cls(
File "C:\Users\Asus\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\load\serializable.py", line 75, in __init__
super().__init__(**kwargs)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ConversationalRetrievalChain
retriever
value is not a valid dict (type=type_error.dict)
PS C:\Users\Asus\Documents\Vendolista> python app2.py
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 33/33 [00:02<00:00, 11.17it/s]
87 documents loaded
Traceback (most recent call last):
File "C:\Users\Asus\Documents\Vendolista\app2.py", line 130, in <module>
main()
File "C:\Users\Asus\Documents\Vendolista\app2.py", line 99, in main
qa = ConversationalRetrievalChain.from_llm(
File "C:\Users\Asus\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 356, in from_llm
return cls(
File "C:\Users\Asus\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\load\serializable.py", line 75, in __init__
super().__init__(**kwargs)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ConversationalRetrievalChain
retriever
value is not a valid dict (type=type_error.dict)
My code:
from dotenv import load_dotenv
import csv
import PyPDF2
from PyPDF2 import PdfReader
from langchain.document_loaders import DirectoryLoader, PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.callbacks import get_openai_callback
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
import time
from langchain.vectorstores import Qdrant
from langchain.vectorstores import Chroma
from langchain.vectorstores import deeplake
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.callbacks import StreamingStdOutCallbackHandler
import pandas as pd
from docx import Document
from nltk.tokenize import sent_tokenize, word_tokenize
from collections import Counter
from nltk.corpus import stopwords
import os
def print_letter_by_letter(text):
for char in text:
print(char, end='', flush=True)
time.sleep(0.02)
def main():
load_dotenv()
my_activeloop_org_id = "yazanrisheh"
my_activeloop_dataset_name = "langchain_course_customer_support"
dataset_path = f"hub://{my_activeloop_org_id}/{my_activeloop_dataset_name}"
# directory_path = input("Copy your directory path here or upload a file: ")
directory_path = "C:\\Users\\Asus\\Documents\\Vendolista"
pdf_loader = DirectoryLoader(directory_path,
glob="**/*.pdf",
show_progress=True,
use_multithreading=True,
silent_errors=True,
loader_cls = PyPDFLoader)
documents = pdf_loader.load()
print(str(len(documents))+ " documents loaded")
llm = ChatOpenAI(temperature = 0, model_name='gpt-3.5-turbo', callbacks=[StreamingStdOutCallbackHandler()], streaming = True)
# Split into chunks
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=800,
chunk_overlap=100,
)
chunks = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
persist_directory = "C:\\Users\\Asus\\OneDrive\\Documents\\Vendolista"
knowledge_base = Chroma.from_documents(chunks, embeddings, persist_directory = persist_directory)
# save to disk
knowledge_base.persist()
#To delete the DB we created at first so that we can be sure that we will load from disk as fresh db
knowledge_base = None
new_knowledge_base = Chroma(persist_directory = persist_directory, embedding_function = embeddings)
# weird_knowledge_base = deeplake(chunks, dataset_path=dataset_path, embedding=embeddings)
# knowledge_base = Qdrant(documents, embeddings)
p_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
If the Standalone question is empty or cannot be generated, use the follow up question as Standalone question.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
#CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)
CONDENSE_QUESTION_PROMPT = PromptTemplate(input_variables=["chat_history","question"],
template=p_template)
memory = ConversationBufferMemory(memory_key="chat_history",input_key="question",output_key='answer',return_messages=True)
chatTemplate = """
Important: You are an intelligent chatbot designed to help agents by answering questions only on Enterprise services & activities.
Answer the question only if there is information in the chat history(delimited by ) and context(delimited by ) below.
If context is not empty and answer cannot be determined from context, say "I cannot detemine the answer from context".
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Do not print your answer starting with "Answer:"
{context} ----------- {chat_history} ----------- Question: {question} Answer: Answer the question only if there is information based on the chat history(delimited by ) and context(delimited by ) below. 1. If context is not empty and answer cannot be determined from context, say "I cannot detemine the answer from context". 2. If you don't know the answer, just say that you don't know, don't try to make up an answer. 3. Do not print your answer starting with "Answer:"
"""
promptHist = PromptTemplate(
input_variables=["context", "question", "chat_history"],
template=chatTemplate
)
qa = ConversationalRetrievalChain.from_llm(
llm = llm,
retriever = new_knowledge_base,
condense_question_prompt=CONDENSE_QUESTION_PROMPT,
verbose=False,
memory=memory,
condense_question_llm=llm,
return_generated_question=True,
combine_docs_chain_kwargs={"prompt": promptHist},
return_source_documents=True,
)
while True:
question = input("Ask me anything about the files (type 'exit' to quit): ")
if question.lower() in ["exit"] and len(question) == 4:
end_chat = "Thank you for visiting us! Have a nice day"
print_letter_by_letter(end_chat)
break
if question:
# chat_history = []
# with get_openai_callback() as cb:
response = qa({"question": question}, return_only_outputs = True)
# chat_history.append(('user', question))
# chat_history.append(('AI', response))
print("AI:", response)
if __name__ == '__main__':
main()
| ConversationalRetrievalChain error | https://api.github.com/repos/langchain-ai/langchain/issues/11855/comments | 3 | 2023-10-16T09:34:13Z | 2024-02-09T16:15:18Z | https://github.com/langchain-ai/langchain/issues/11855 | 1,944,786,305 | 11,855 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain : 0.0.306
python : 3.10.12
platform : Ubuntu
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`loader = S3DirectoryLoader(
bucket="", prefix="data/",
aws_access_key_id="",
aws_secret_access_key="")
data = loader.load()`
It should have load all the files with in the bucket from a given prefix which may contain multiple sub-folders but it does not
### Expected behavior
It should have loaded all the files as list of Document of langchain schema , but it does not | S3 Directory Loader not working as expected | https://api.github.com/repos/langchain-ai/langchain/issues/11854/comments | 4 | 2023-10-16T08:37:20Z | 2024-02-10T16:12:37Z | https://github.com/langchain-ai/langchain/issues/11854 | 1,944,679,764 | 11,854 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version: v0.0.314
### Who can help?
@eyurtsev @baskaryan
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(url="xxx", session=get_session())
docs = loader.load(page_ids=["xxx"], keep_newlines=True)
for doc in docs:
print(doc.page_content)
have_newlines = "\n" in doc.page_content
print(f"{have_newlines=}")
```
read the source code, I found the `keep_newlines` arguments are not passed to `process_page` function when load with `cql` and `page_ids`
file: `langchain.document_loaders.confluence.py`
function: `Confluence.load`
lines: 308-353
<img width="781" alt="image" src="https://github.com/langchain-ai/langchain/assets/19882756/1d11d7dd-f883-4782-8b5d-3365ca2dff87">
<img width="821" alt="image" src="https://github.com/langchain-ai/langchain/assets/19882756/b87f3a16-65a4-490c-851f-0c51a534d171">
### Expected behavior
The `keep_newlines` parameter is passed correctly and empty lines are correctly retained | ConfluenceLoader's keep_newlines is not passed correctly | https://api.github.com/repos/langchain-ai/langchain/issues/11853/comments | 2 | 2023-10-16T08:34:03Z | 2024-02-09T16:15:28Z | https://github.com/langchain-ai/langchain/issues/11853 | 1,944,674,010 | 11,853 |
[
"hwchase17",
"langchain"
]
| ### System Info
python = "=3.10.12"
llama-cpp-python = "=0.2.11"
langchain = "=0.0.313"
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the [tagging docs]() with either `Ollama` or `llama-cpp-python` shown below.
```python
from langchain.chains import create_tagging_chain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import LlamaCpp
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
model_using = [YOUR MODEL PATH] / "llama-2-7b-chat.Q4_0.gguf"
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(
model_path=model_using,
n_gpu_layers=32,
n_batch=512,
f16_kv=True,
callback_manager=callback_manager,
verbose=True,
n_ctx=512,
seed=42,
)
# Schema
schema = {
"properties": {
"sentiment": {"type": "string"},
"aggressiveness": {"type": "integer"},
"language": {"type": "string"},
}
}
chain = create_tagging_chain(schema, llm)
inp = "Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!"
print(chain.run(inp))
```
Gives an error that OpenAI functions are not available.
```
Traceback (most recent call last):
File "/Users/jak121/Git_repos/jk/20231016_langchain_tagging_issue.py", line 34, in <module>
print(chain.run(inp))
File "/Users/jak121/Git_repos/llm/env/lib/python3.10/site-packages/langchain/chains/base.py", line 501, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/Users/jak121/Git_repos/llm/env/lib/python3.10/site-packages/langchain/chains/base.py", line 306, in __call__
raise e
File "/Users/jak121/Git_repos/llm/env/lib/python3.10/site-packages/langchain/chains/base.py", line 300, in __call__
self._call(inputs, run_manager=run_manager)
File "/Users/jak121/Git_repos/llm/env/lib/python3.10/site-packages/langchain/chains/llm.py", line 93, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/Users/jak121/Git_repos/llm/env/lib/python3.10/site-packages/langchain/chains/llm.py", line 103, in generate
return self.llm.generate_prompt(
File "/Users/jak121/Git_repos/llm/env/lib/python3.10/site-packages/langchain/llms/base.py", line 498, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/Users/jak121/Git_repos/llm/env/lib/python3.10/site-packages/langchain/llms/base.py", line 647, in generate
output = self._generate_helper(
File "/Users/jak121/Git_repos/llm/env/lib/python3.10/site-packages/langchain/llms/base.py", line 535, in _generate_helper
raise e
File "/Users/jak121/Git_repos/llm/env/lib/python3.10/site-packages/langchain/llms/base.py", line 522, in _generate_helper
self._generate(
File "/Users/jak121/Git_repos/llm/env/lib/python3.10/site-packages/langchain/llms/base.py", line 1044, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/Users/jak121/Git_repos/llm/env/lib/python3.10/site-packages/langchain/llms/llamacpp.py", line 291, in _call
for chunk in self._stream(
File "/Users/jak121/Git_repos/llm/env/lib/python3.10/site-packages/langchain/llms/llamacpp.py", line 343, in _stream
result = self.client(prompt=prompt, stream=True, **params)
TypeError: Llama.__call__() got an unexpected keyword argument 'functions'
ggml_metal_free: deallocating
```
### Expected behavior
As per the docs: ` {'sentiment': 'positive', 'language': 'Spanish'}` | Tagging documentation does not work with LlamaCpp or Ollama | https://api.github.com/repos/langchain-ai/langchain/issues/11847/comments | 3 | 2023-10-16T07:27:07Z | 2024-02-11T16:11:02Z | https://github.com/langchain-ai/langchain/issues/11847 | 1,944,554,193 | 11,847 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm encountering an error while trying to integrate confluenece attachment by the keyword include_attachments=True
Created a chunk of size 494, which is longer than the specified 100
Created a chunk of size 230, which is longer than the specified 100
Created a chunk of size 121, which is longer than the specified 100
Created a chunk of size 224, which is longer than the specified 100
Created a chunk of size 103, which is longer than the specified 100
Created a chunk of size 638, which is longer than the specified 100
Created a chunk of size 112, which is longer than the specified 100
Created a chunk of size 228, which is longer than the specified 100
Created a chunk of size 147, which is longer than the specified 100
Created a chunk of size 127, which is longer than the specified 100
Created a chunk of size 306, which is longer than the specified 100
Created a chunk of size 105, which is longer than the specified 100
Below is my code:
text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=10, encoding_name="cl100k_base")
texts = text_splitter.split_documents(texts)
### Suggestion:
_No response_ | Issue: Getting Error while integrating confluence attachment | https://api.github.com/repos/langchain-ai/langchain/issues/11845/comments | 2 | 2023-10-16T06:34:51Z | 2024-02-06T16:20:21Z | https://github.com/langchain-ai/langchain/issues/11845 | 1,944,475,842 | 11,845 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm encountering an error while trying to integrate confluenece attachment by the keyword include_attachments=True
Created a chunk of size 494, which is longer than the specified 100
Created a chunk of size 230, which is longer than the specified 100
Created a chunk of size 121, which is longer than the specified 100
Created a chunk of size 224, which is longer than the specified 100
Created a chunk of size 103, which is longer than the specified 100
Created a chunk of size 638, which is longer than the specified 100
Created a chunk of size 112, which is longer than the specified 100
Created a chunk of size 228, which is longer than the specified 100
Created a chunk of size 147, which is longer than the specified 100
Created a chunk of size 127, which is longer than the specified 100
Created a chunk of size 306, which is longer than the specified 100
Created a chunk of size 105, which is longer than the specified 100
Below is my code:
text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=10, encoding_name="cl100k_base")
texts = text_splitter.split_documents(texts)
### Suggestion:
_No response_ | Issue: Getting Error while integrating confluence attachment | https://api.github.com/repos/langchain-ai/langchain/issues/11844/comments | 1 | 2023-10-16T06:00:12Z | 2023-10-16T06:35:03Z | https://github.com/langchain-ai/langchain/issues/11844 | 1,944,428,691 | 11,844 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chat_models import ChatOpenAI
from langchain.prompts import (
FewShotChatMessagePromptTemplate,
SemanticSimilarityExampleSelector,
ChatPromptTemplate,
)
examples = [
{"input": "2+2", "output": "4"},
{"input": "2+3", "output": "5"},
{"input": "2+4", "output": "6"},
{"input": "What did the cow say to the moon?", "output": "nothing at all"},
{
"input": "Write me a poem about the moon",
"output": "One for the moon, and one for me, who are we to talk about the moon?",
},
]
example_selector = SemanticSimilarityExampleSelector.from_examples(
examples,
OpenAIEmbeddings(),
Chroma,
k=1
)
few_shot_prompt = FewShotChatMessagePromptTemplate(
example_selector=example_selector,
example_prompt=ChatPromptTemplate.from_messages(
[("human", "{input}"), ("ai", "{output}")]
),
)
final_prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a wondrous wizard of math."),
few_shot_prompt,
("human", "{input}"),
]
)
# 由于example_selector已经选择了一个样本进行匹配,这将导致少样本示例失效,因为选择的样本可能与给定的输入不相关。
print(final_prompt.format(input="What did the cow say to the moon?"))
### Idea or request for content:
In the "Dynamic Few-Shot Prompting" documentation, it is mentioned that when example_selector selects a sample for matching, it can cause the few-shot examples to lose effectiveness because the selected sample may not be relevant to the given input.
https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples_chat#dynamic-few-shot-prompting | In the "Dynamic Few-Shot Prompting" documentation, it is mentioned that when example_selector selects a sample for matching, it can cause the few-shot examples to lose effectiveness because the selected sample may not be relevant to the given input. | https://api.github.com/repos/langchain-ai/langchain/issues/11843/comments | 4 | 2023-10-16T05:43:53Z | 2024-05-07T16:06:03Z | https://github.com/langchain-ai/langchain/issues/11843 | 1,944,410,321 | 11,843 |
[
"hwchase17",
"langchain"
]
| ### System Info
I attempted to create an AI agent bot by utilizing the QianfanChatEndpoint. However, I encountered an issue when I attempted to define a tool for making calls.
Subsequent to consulting with a contributor from the QianfanChatEndpoint project, it was determined that it is imperative to merge the following pull request for resolution: [link to the GitHub pull request](https://github.com/langchain-ai/langchain/pull/11107). Once this fix is implemented, my agent will work fine with Function Call integration.
@hwchase17
@agola11
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
"""Example demo for agent.py
"""
import os
from typing import Any, Dict, List
from langchain.agents import tool
from langchain.agents.format_scratchpad import format_to_openai_functions
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.chat_models import QianfanChatEndpoint
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
# from langchain.tools.render import format_tool_to_openai_function
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You are very powerful assistant, but bad at get"
"today's temperature of location.",
),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
@tool
def get_current_weather(location: str) -> str:
"""Returns current temperature of location."""
return "25"
tools = [get_current_weather]
if __name__ == "__main__":
os.environ["QIANFAN_AK"] = "ak"
os.environ["QIANFAN_SK"] = "sk"
llm = QianfanChatEndpoint(model="ERNIE-Bot") # 仅EB支持
llm_with_tools = llm.bind(
# functions=functions
functions=[
{
"name": "get_current_weather",
"description": "获得指定地点的天气",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "省,市名,例如:河北省,石家庄",
},
"unit": {"type": "string", "enum": ["摄氏度", "华氏度"]},
},
"required": ["location"],
},
}
]
)
p = {
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_to_openai_functions(
x["intermediate_steps"]
),
}
agent: Dict = p | prompt | llm_with_tools | OpenAIFunctionsAgentOutputParser()
# main loop
from langchain.schema.agent import AgentFinish
intermediate_steps: List[Any] = []
while True:
output = agent.invoke(
{"input": "上海市今天天气如何?", "intermediate_steps": intermediate_steps}
)
if isinstance(output, AgentFinish):
final_result = output.return_values["output"]
break
else:
print(output.tool, output.tool_input)
tool = {
"get_current_weather": get_current_weather,
}[output.tool]
observation = tool.run(output.tool_input)
# observation = "{\"temperature\": \"25\", \"description\": \"晴朗\"}"
intermediate_steps.append((output, observation))
print("result", final_result)
```
### Expected behavior
上海市今天气温25度 | QianfanChatEndpoint function call not work for agent. | https://api.github.com/repos/langchain-ai/langchain/issues/11839/comments | 2 | 2023-10-16T02:08:55Z | 2024-02-09T16:15:38Z | https://github.com/langchain-ai/langchain/issues/11839 | 1,944,207,173 | 11,839 |
[
"hwchase17",
"langchain"
]
| ### System Info
Dependencies:
langchain == 0.0.285
pytest ~= 7.4.0
pytest-asyncio ~= 0.21.1
pytest-mock ~= 3.11.1
Using Python 3.11
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
The following Python script is self-contained. It features:
- A custom `Chain` implementation that uses an `LLMChain` internally, `bar`
- We don't care about its inputs or outputs
- We simply care that it takes an `LLMChain` and uses that to compute its output.
- A test (`async` using the `pytest-asyncio` library) where:
- I mocked the `LLMChain` using an `AsyncMock` with spec `LLMChain`
- I mocked the return value of the `arun` function in the `LLMChain`
### Self-contained Python Script
<details>
<summary>Self-contained Python Script</summary>
```py
from langchain.chains.base import Chain
from langchain import LLMChain
from typing import Any
from langchain.callbacks.manager import AsyncCallbackManagerForChainRun, CallbackManagerForChainRun
from unittest.mock import AsyncMock
from pydantic import BaseModel, Extra
class CustomChain(Chain):
bar: LLMChain
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
allow_population_by_field_name = True
@property
def input_schema(self) -> type[BaseModel]:
return BaseModel # we don't care about the inputs here
@property
def input_keys(self) -> list[str]:
return []
@property
def output_keys(self) -> list[str]:
return ["output"]
async def _acall(
self,
inputs: dict[str, Any],
run_manager: AsyncCallbackManagerForChainRun | None = None,
) -> dict[str, Any]:
intermediary_result = await self.bar.arun(message="hello")
return {"output": intermediary_result}
def _call(self, inputs: dict[str, Any], run_manager: CallbackManagerForChainRun | None = None) -> dict[str, Any]:
raise NotImplementedError("Only supports async")
async def test_custom_chain():
bar = AsyncMock(LLMChain)
bar.arun.return_value = "Hi there!"
custom_chain = CustomChain(bar=bar)
await custom_chain._acall(inputs={})
bar.arun.assert_awaited_once()
```
</details>
### Observed Output
The test should naturally succeed. Problem is that _somewhere_ in the initialization of the `CustomChain`, `bar` is changed from being an `AsyncMock` to a `MagicMock`. This is the inexplicable bit to me.
Placing a debugger breakpoint right before the initialization of `CustomChain` in the test function:
```
custom_chain = CustomChain(bar=bar)
```
It shows that `bar` is `<AsyncMock spec='LLMChain' id='5218712656'>`, however from inside `CustomChain._acall` function, `self.bar` is a `<MagicMock name='mock._copy_and_set_values()' id='5221083792'>`.
<details>
<summary>Full test output</summary>
```
async def test_custom_chain():
bar = AsyncMock(LLMChain)
bar.arun.return_value = "Hi there!"
custom_chain = CustomChain(bar=bar)
> await custom_chain._acall(inputs={})
test/brain/test_foo.py:49:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CustomChain(memory=None, callbacks=None, callback_manager=None, verbose=False, tags=None, metadata=None, bar=<MagicMock name='mock._copy_and_set_values()' id='5099464528'>)
inputs = {}, run_manager = None
async def _acall(
self,
inputs: dict[str, Any],
run_manager: AsyncCallbackManagerForChainRun | None = None,
) -> dict[str, Any]:
> intermediary_result = await self.bar.arun(message="hello")
E TypeError: object MagicMock can't be used in 'await' expression
test/brain/test_foo.py:37: TypeError
```
</details>
### Expected behavior
Naturally, I would expect the parameter to my custom chain to be exactly what I passed, unmodified at any point. Hence, I would expect the test and its assertion to succeed.
_Please note_: I tried reproducing the issue outside of LangChain, just by creating a similar class - just inheriting from a Pydantic `BaseModel` rather than LangChain's `Chain`. The test succeeded just as expected. | AsyncMock not preserved when passed to custom Chain | https://api.github.com/repos/langchain-ai/langchain/issues/11838/comments | 4 | 2023-10-15T22:27:00Z | 2024-02-10T16:12:47Z | https://github.com/langchain-ai/langchain/issues/11838 | 1,944,082,144 | 11,838 |
[
"hwchase17",
"langchain"
]
| ### System Info
0.0.314
### Who can help?
@hwchase17 @agola11 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chat_models import ChatOpenAI
from langchain.agents import load_tools, initialize_agent
from langchain.agents import AgentType
from langchain.tools import AIPluginTool
tool = AIPluginTool.from_plugin_url("https://scholar-ai.net/.well-known/ai-plugin.json")
llm = ChatOpenAI(temperature=0, streaming=True, model_name="gpt-3.5-turbo-16k-0613", max_tokens=16000)
tools = [tool]
agent_chain = initialize_agent(
tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True
)
agent_chain.run("What are the antiviral effects of Sillymarin?")
```
And I get
```
InvalidRequestError: This model's maximum context length is 16385 tokens. However, you requested 17952 tokens (1855 in the messages, 97 in the functions, and 16000 in the completion). Please reduce the length of the messages, functions, or completion.
```
### Expected behavior
The idea is that should provide sensible information based on the Plugin Scholar AI.
It would be terrific, if you could help us here. 🙂 | Use OpenAI ChatGPT Plugin via Python API fails with maximum number of tokens error | https://api.github.com/repos/langchain-ai/langchain/issues/11837/comments | 3 | 2023-10-15T21:59:24Z | 2024-02-11T16:11:07Z | https://github.com/langchain-ai/langchain/issues/11837 | 1,944,073,141 | 11,837 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain Version: 0.0.314
Python 3.9.6
I have been using LangChain with OpenAI and FAISS for building RAG chatbots. Everything was working fine until last week.
I have noticed that for some reason, I have much higher response times. I have debugged the LangChain code and found that the reason is the LLM response itself. I am using OpenAI GPT-3.5-turbo, I haven't changed anything in the prompt being sent, but response time has increased dramatically. For example, a very simple prompt in September took 2 seconds to generate. Now the exact same prompt takes up to 14 seconds to get a response!
Does anybody have the same experience? Does anybody know why this can be happening?
Any help will be appreciated!
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Use OpenAI as an LLM
2. Observer response times
### Expected behavior
The LLM response should be much faster | OpenAI API is extremely slow | https://api.github.com/repos/langchain-ai/langchain/issues/11836/comments | 6 | 2023-10-15T21:39:04Z | 2024-04-10T16:14:04Z | https://github.com/langchain-ai/langchain/issues/11836 | 1,944,067,045 | 11,836 |
[
"hwchase17",
"langchain"
]
| https://github.com/langchain-ai/langchain/blob/a50630277295c3884be8e839b04718d0e99b4ea4/libs/langchain/langchain/chains/moderation.py#L93C43-L93C43
Code a line no. 93 needs to be updated as below
self.client.create(input = text, model = self.model_name) | model_name is not passed as a argument to create method | https://api.github.com/repos/langchain-ai/langchain/issues/11830/comments | 1 | 2023-10-15T16:52:02Z | 2024-02-06T16:20:41Z | https://github.com/langchain-ai/langchain/issues/11830 | 1,943,966,873 | 11,830 |
[
"hwchase17",
"langchain"
]
| ### Feature request
similarly to:
```
db = Chroma(persist_directory=persist_directory, embedding_function=embeddings, client_settings=CHROMA_SETTINGS)
collection = db.get()
```
where ```collection``` is a list of documents already in the vector store, is there a way to get that list from neo4j_vecror that was initialized from_documents() ?
### Motivation
This will allow to filter out existing documents when new ones are added in folder that is being used to build an expanding document vector store.
### Your contribution
i was thinkging using .query() as a workaround but not sure how a CYPHER query can return all possible documents currently in the store. | return all documents in neo4j_vector | https://api.github.com/repos/langchain-ai/langchain/issues/11829/comments | 2 | 2023-10-15T16:41:33Z | 2024-02-06T16:20:46Z | https://github.com/langchain-ai/langchain/issues/11829 | 1,943,962,792 | 11,829 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have installed langchain and ctransformer using -
```
pip install langchain
pip install ctransformers[cuda]
```
I am trying following piece of code -
```
from langchain.llms import CTransformers
config = {'max_new_tokens': 512, 'repetition_penalty': 1.1, 'context_length': 8000, 'temperature':0, 'gpu_layers':50}
llm = CTransformers(model = "./codellama-7b.Q4_0.gguf", model_type = "llama", gpu_layers=50, config=config)
```
Here gpu_layers parameter is specified still gpu is not being used and complete load is on cpu.
Can someone please point out if there is any step missing.
### Suggestion:
_No response_ | Issue: GPU is not used with Ctransformers even after specifying gpu_layers parameter | https://api.github.com/repos/langchain-ai/langchain/issues/11826/comments | 15 | 2023-10-15T14:35:12Z | 2024-05-16T07:56:34Z | https://github.com/langchain-ai/langchain/issues/11826 | 1,943,913,102 | 11,826 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.11.5
langchain==0.0.260
atlassian-python-api==3.41.2
### Who can help?
@eyurtsev
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My code:
`data = loader.load(page_ids=[page_id], include_attachments=True, limit=50)`
Error message:
`requests.exceptions.HTTPError: 404 Client Error: Not Found for url: ...`
### Expected behavior
The problem is the URL path is not vaild (you can read my post at atlassian developer community [here ](https://community.developer.atlassian.com/t/download-confluence-content-attachment-via-rest-api-get-404-error/73782))
My solution:
at document_loaders/confluence.py [line 542](https://github.com/langchain-ai/langchain/blob/a50630277295c3884be8e839b04718d0e99b4ea4/libs/langchain/langchain/document_loaders/confluence.py#L542C6-L542C6)
change
`absolute_url = self.base_url + attachment["_links"]["download"]`
to
`absolute_url = self.base_url + "/wiki" + attachment["_links"]["download"]` | Confluence document loader can not get the attachments | https://api.github.com/repos/langchain-ai/langchain/issues/11824/comments | 1 | 2023-10-15T13:44:03Z | 2023-10-16T10:03:34Z | https://github.com/langchain-ai/langchain/issues/11824 | 1,943,895,150 | 11,824 |
[
"hwchase17",
"langchain"
]
| ### System Info
window 、langchain-0.0.314、python 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
run
```
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts.chat import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
)
api_base_url = "http://localhost:10860/v1"
api_key= "EMPTY"
LLM_MODEL = "/***/AquilaChat2-7B"
model = ChatOpenAI(
streaming=True,
verbose=True,
# callbacks=[callback],
openai_api_key=api_key,
openai_api_base=api_base_url,
model_name=LLM_MODEL
)
chain = load_summarize_chain(model,
chain_type="map_reduce",
verbose = True,
)
output_summary = chain.run(docs[:5])
print(output_summary)
``
### Expected behavior
nomal
https://blog.csdn.net/FrenzyTechAI/article/details/131524746 | ValidationError: 1 validation error for ChatMessage | https://api.github.com/repos/langchain-ai/langchain/issues/11823/comments | 3 | 2023-10-15T12:14:33Z | 2024-02-09T16:15:53Z | https://github.com/langchain-ai/langchain/issues/11823 | 1,943,865,066 | 11,823 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.313
Python version: Python 3
OS: Mac
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.chains.router.multi_prompt import MultiPromptChain
Assume have already installed langchain and other necessary libraries
Error I get:
cannot import name 'BasePromptTemplate' from 'langchain.schema'. I checked in the langchain.schema file. BasePromptTemplate is indeed not there. But then I don't know how to get that file. I see PromptTemplate file in the langchain.schema file
### Expected behavior
To import MultiPromptChain | Getting an error while importing MultiPromptChain | https://api.github.com/repos/langchain-ai/langchain/issues/11819/comments | 6 | 2023-10-15T06:32:23Z | 2024-02-28T16:08:50Z | https://github.com/langchain-ai/langchain/issues/11819 | 1,943,745,728 | 11,819 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm encountering an error while trying to integrate confluenece attachment using the keyword "include_attachments=True", without these keyword I am able to do question answering on Text content.
But I want to fetch attachment of the confluence space. Below is my code:

I encounter the following error:

### Suggestion:
_No response_ | Issue: Getting Error while integrating confluence attachment | https://api.github.com/repos/langchain-ai/langchain/issues/11818/comments | 2 | 2023-10-15T04:59:22Z | 2024-02-06T16:20:56Z | https://github.com/langchain-ai/langchain/issues/11818 | 1,943,712,366 | 11,818 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain=0.0.294
python=3.10
### Who can help?
@hwchase17 @mmz-001 @baskaryan
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have tried multiple text splitters, but none of them seem to respect the chunk_size parameter. My document is a podcast transcript that I create from a simple text file that combines all transcript sentences into a single string.
The SpacyTextSplitter then formats the document into coherent sentences and paragraphs. But I end up with over five individual small documents, even when I test a small part of the transcript of only 13,000 characters.
The RecursiveCharacterTextSplitter and CharacterTextSplitter split the documents into simpler grouped sentences, but they also ignore the chunk_size parameter.
My test file
[3kiv08TDn4vywkNAdoT2In_short_v2.txt](https://github.com/langchain-ai/langchain/files/12908607/3kiv08TDn4vywkNAdoT2In_short_v2.txt)
```
import spacy
from langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter, SpacyTextSplitter
@app.post("/reduce_file/", response_model=ItemResponse)
async def reduce_file(item: Item_Reduce_FileURL):
logging.info("Entering function reduce_file")
try:
# Dynamic file type detection (from summarize_file)
response = requests.get(item.file_link)
if response.status_code != 200:
raise HTTPException(status_code=400, detail="Failed to download file")
content_type = response.headers.get('Content-Type')
file_extension = mimetypes.guess_extension(content_type)
if file_extension and file_extension.startswith('.'):
file_extension = file_extension[1:]
temp_file_name = f"temp_file.{file_extension}"
with open(temp_file_name, "wb") as f:
f.write(response.content)
# Load the document
file_loader = OnlinePDFLoader(file_path=temp_file_name) if file_extension == 'pdf' else UnstructuredAPIFileLoader(
file_path=temp_file_name,
infer_table_structure=True,
strategy="fast",
model_name="chipper",
mode="elements",
pdf_infer_table_structure=False,
ocr_languages=["eng"],
discard_invalid_pages=True,
encoding="utf_8",
api_key=unstructured_api_key
)
try:
doc = file_loader.load()
except Exception as load_exception:
logging.error(f"Error in file_loader.load(): {load_exception}")
return CustomError.handle_exception(load_exception, item, exception_type=str(type(load_exception)))
# Initialize text splitter and split the document (from reduce_pdf)
text_splitter = SpacyTextSplitter(
chunk_size=5000,
chunk_overlap=0,
length_function=len,
)
split_docs = text_splitter.split_documents(doc)
### Expected behavior
I expect only two documents since the chunk_size parameter is set to 10000 characters. I have tested with increasing and decreasing the chunk_size, and I continue to get a consistent amount of smaller documents of 1200 to 1500 characters. | Mutiple text_splitter ignore the chunk_size parameter | https://api.github.com/repos/langchain-ai/langchain/issues/11817/comments | 2 | 2023-10-15T04:42:23Z | 2023-10-16T07:10:32Z | https://github.com/langchain-ai/langchain/issues/11817 | 1,943,707,750 | 11,817 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm encountering an error while trying to integrate confluenece attachment using the keyword "include_attachments=True", without these keyword I am able to do question answering on Text content.
But I want to fetch attachment of the confluence space. Below is my code:

I encounter the following error:

### Suggestion:
_No response_ | Issue: Getting Error while integrating confluence attachment | https://api.github.com/repos/langchain-ai/langchain/issues/11816/comments | 3 | 2023-10-15T03:49:23Z | 2024-02-07T16:18:48Z | https://github.com/langchain-ai/langchain/issues/11816 | 1,943,688,748 | 11,816 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi, I have lots product factsheet in PDF format and would like to perform QnA on these documents. These documents have a common layout. I parsed these documents and stored them in ChromaDB. However, when I try to query the vector store for a specific details for ex. "What is the product category of product A?", it retrieved irrelevant chunks. What approach should be taken for such homogeneous documents QnA use cases?
### Suggestion:
_No response_ | Issue: Similarity Search on Chroma does not retrieve relevant chunk for homogeneous document search | https://api.github.com/repos/langchain-ai/langchain/issues/11815/comments | 2 | 2023-10-15T03:35:42Z | 2024-02-06T16:21:07Z | https://github.com/langchain-ai/langchain/issues/11815 | 1,943,684,633 | 11,815 |
[
"hwchase17",
"langchain"
]
| ### Feature request
A retriever for documents from [Outline](https://github.com/outline/outline).
The API has a search endpoint which allows this to be possible: https://www.getoutline.com/developers#tag/Documents/paths/~1documents.search/post
The implementation will be similar to the Wikipedia retriever:
https://python.langchain.com/docs/integrations/retrievers/wikipedia
### Motivation
Outline is an open source project that let's you create a knowledge base, like a wiki. Creating a retriever for Outline will let your team interact with your knowledge base using an LLM.
### Your contribution
PR will be coming soon. | Create retriever for Outline to ask questions on knowledge base | https://api.github.com/repos/langchain-ai/langchain/issues/11814/comments | 1 | 2023-10-15T01:58:24Z | 2023-11-28T20:58:07Z | https://github.com/langchain-ai/langchain/issues/11814 | 1,943,639,624 | 11,814 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.314
When LLM returns with action input parameter without space in-between two tables, call fails. Certain LLMs dont follow "Table1, Table2" format and returns with "Table1,Table2"
This is easily addressed by stripping out all spaces and split with ","
<code>
class InfoSQLDatabaseTool(BaseSQLDatabaseTool, BaseTool):
"""Tool for getting metadata about a SQL database."""
name: str = "sql_db_schema"
description: str = """
Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.
Example Input: "table1, table2, table3"
"""
def _run(
self,
table_names: str,
run_manager: Optional[CallbackManagerForToolRun] = None,
) -> str:
"""Get the schema for tables in a comma-separated list."""
return self.db.get_table_info_no_throw(table_names.split(", "))
</code>
Replace return line with the following:
<code>
import re
.
.
.
def_run(
.
.
.
return self.db.get_table_info_no_throw(re.sub(" *","",table_names).split(","))
</code>
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This can be reproduced with Claude V2
### Expected behavior
sql_db_schema to not fail | SQL DB Tools sql_db_schema bad input | https://api.github.com/repos/langchain-ai/langchain/issues/11813/comments | 1 | 2023-10-14T23:56:03Z | 2024-02-06T16:21:11Z | https://github.com/langchain-ai/langchain/issues/11813 | 1,943,609,736 | 11,813 |
[
"hwchase17",
"langchain"
]
| ### Feature request
We propose the addition of some of the querying endpoints supported by serpapi that are not suported on langchain at the moment. Some examples, include querying google job, google trends and google finance etc.###
### Motivation
The motivation for this feature proposal is to enhance the querying capabilities of Langchain by incorporating support for various querying endpoints provided by serpapi. The proposed feature aims to improve Langchain's querying capabilities by enabling users to interact with serpapi supported endpoints seamlessly.
### Your contribution
Me and 3 others intend to submit a pull request for this issue at some point in November for a school project. Looking forward to contributing. | Add support for many of the querying endpoints with serpapi | https://api.github.com/repos/langchain-ai/langchain/issues/11811/comments | 2 | 2023-10-14T19:16:42Z | 2024-01-30T16:11:49Z | https://github.com/langchain-ai/langchain/issues/11811 | 1,943,495,649 | 11,811 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.