issue_owner_repo (sequence, length 2) | issue_body (string, 0-261k chars, nullable) | issue_title (string, 1-925 chars) | issue_comments_url (string, 56-81 chars) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37-62 chars) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.embeddings import VertexAIEmbeddings
from langchain.vectorstores import PGVector
embeddings = VertexAIEmbeddings()
vectorstore = PGVector(
collection_name=<collection_name>,
connection_string=<connection_string>,
embedding_function=embeddings,
)
vectorstore.delete(ids=[<some_id>])
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I want to add the embeddings of a particular file, and I also want to be able to delete that particular file's embeddings from the embeddings of a list of files.
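Roughly what I am trying to do, as a sketch; the assumption is that `add_documents` accepts explicit `ids` that can later be passed to `delete`:
```python
from langchain.schema import Document

# one id per chunk, derived from the source file, so all of a file's chunks can be removed later
docs = [Document(page_content="chunk text", metadata={"source": "file_a.pdf"})]
ids = ["file_a-chunk-0"]

vectorstore.add_documents(docs, ids=ids)  # add embeddings for one file
vectorstore.delete(ids=ids)               # later, remove that file's embeddings
```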
### System Info
All the dependencies I installed | How to delete/add particular embeddings in PG vector | https://api.github.com/repos/langchain-ai/langchain/issues/17340/comments | 3 | 2024-02-09T20:10:48Z | 2024-02-16T17:13:48Z | https://github.com/langchain-ai/langchain/issues/17340 | 2,127,785,140 | 17,340 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
Below is the code I used to get the answer using MultiQueryRetriever. It was working yesterday, but it is not working now.
```python
# Imports added for completeness; text_splitter, documents, embeddings,
# prompt_template and process_llm_response are defined earlier in my notebook.
from langchain_community.vectorstores import FAISS
from langchain_community.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.retrievers.multi_query import MultiQueryRetriever

texts = text_splitter.split_documents(documents)
vectorStore = FAISS.from_documents(texts, embeddings)
llm = OpenAI(temperature=0.2)
retriever = MultiQueryRetriever.from_llm(retriever=vectorStore.as_retriever(), llm=llm)
# Set logging for the queries
import logging
logging.basicConfig()
logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)
# docs = retriever.get_relevant_documents(query="how many are injured and dead in christchurch Mosque?")
# print(len(docs))
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# # Use the chain to answer a question
# llm_response = qa_chain(query)
# process_llm_response(llm_response)
# Format the prompt using the template
context = ""
# question = "does cemig recognizes the legitimacy of the trade unions?"
question = "can you return the objective of ABInBev?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
below's the error
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py](https://localhost:8080/#) in parse_obj(cls, obj)
521 try:
--> 522 obj = dict(obj)
523 except (TypeError, ValueError) as e:
TypeError: 'int' object is not iterable
The above exception was the direct cause of the following exception:
ValidationError Traceback (most recent call last)
15 frames
[/usr/local/lib/python3.10/dist-packages/langchain/output_parsers/pydantic.py](https://localhost:8080/#) in parse_result(self, result, partial)
24 try:
---> 25 return self.pydantic_object.parse_obj(json_object)
26 except ValidationError as e:
[/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py](https://localhost:8080/#) in parse_obj(cls, obj)
524 exc = TypeError(f'{cls.__name__} expected dict not {obj.__class__.__name__}')
--> 525 raise ValidationError([ErrorWrapper(exc, loc=ROOT_KEY)], cls) from e
526 return cls(**obj)
ValidationError: 1 validation error for LineList
__root__
LineList expected dict not int (type=type_error)
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
[<ipython-input-32-7d1282f5e9fb>](https://localhost:8080/#) in <cell line: 40>()
38 query = "Does the company has one or more channel(s)/mechanism(s) through which individuals and communities who may be adversely impacted by the Company can raise complaints or concerns, including in relation to human rights issues?"
39 desired_count = 10 # The number of unique documents you want
---> 40 unique_documents = fetch_unique_documents(query, initial_limit=desired_count, desired_count=desired_count)
41
42 # # Print the unique documents or handle them as needed
[<ipython-input-32-7d1282f5e9fb>](https://localhost:8080/#) in fetch_unique_documents(query, initial_limit, desired_count)
6 while len(unique_docs) < desired_count:
7 retriever = MultiQueryRetriever.from_llm(retriever=vectorStore.as_retriever(), llm=llm)
----> 8 docs = retriever.get_relevant_documents(query)
9
10 # # Set logging for the queries
[/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py](https://localhost:8080/#) in get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
222 except Exception as e:
223 run_manager.on_retriever_error(e)
--> 224 raise e
225 else:
226 run_manager.on_retriever_end(
[/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py](https://localhost:8080/#) in get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
215 _kwargs = kwargs if self._expects_other_args else {}
216 if self._new_arg_supported:
--> 217 result = self._get_relevant_documents(
218 query, run_manager=run_manager, **_kwargs
219 )
[/usr/local/lib/python3.10/dist-packages/langchain/retrievers/multi_query.py](https://localhost:8080/#) in _get_relevant_documents(self, query, run_manager)
170 Unique union of relevant documents from all generated queries
171 """
--> 172 queries = self.generate_queries(query, run_manager)
173 if self.include_original:
174 queries.append(query)
[/usr/local/lib/python3.10/dist-packages/langchain/retrievers/multi_query.py](https://localhost:8080/#) in generate_queries(self, question, run_manager)
187 List of LLM generated queries that are similar to the user input
188 """
--> 189 response = self.llm_chain(
190 {"question": question}, callbacks=run_manager.get_child()
191 )
[/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py](https://localhost:8080/#) in warning_emitting_wrapper(*args, **kwargs)
143 warned = True
144 emit_warning()
--> 145 return wrapped(*args, **kwargs)
146
147 async def awarning_emitting_wrapper(*args: Any, **kwargs: Any) -> Any:
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in __call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
361 }
362
--> 363 return self.invoke(
364 inputs,
365 cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
160 except BaseException as e:
161 run_manager.on_chain_error(e)
--> 162 raise e
163 run_manager.on_chain_end(outputs)
164 final_outputs: Dict[str, Any] = self.prep_outputs(
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
154 try:
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
[/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py](https://localhost:8080/#) in _call(self, inputs, run_manager)
102 ) -> Dict[str, str]:
103 response = self.generate([inputs], run_manager=run_manager)
--> 104 return self.create_outputs(response)[0]
105
106 def generate(
[/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py](https://localhost:8080/#) in create_outputs(self, llm_result)
256 def create_outputs(self, llm_result: LLMResult) -> List[Dict[str, Any]]:
257 """Create outputs from response."""
--> 258 result = [
259 # Get the text of the top generated string.
260 {
[/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py](https://localhost:8080/#) in <listcomp>(.0)
259 # Get the text of the top generated string.
260 {
--> 261 self.output_key: self.output_parser.parse_result(generation),
262 "full_generation": generation,
263 }
[/usr/local/lib/python3.10/dist-packages/langchain/output_parsers/pydantic.py](https://localhost:8080/#) in parse_result(self, result, partial)
27 name = self.pydantic_object.__name__
28 msg = f"Failed to parse {name} from completion {json_object}. Got: {e}"
---> 29 raise OutputParserException(msg, llm_output=json_object)
30
31 def get_format_instructions(self) -> str:
OutputParserException: Failed to parse LineList from completion 1. Got: 1 validation error for LineList
__root__
LineList expected dict not int (type=type_error)
```
Can you look into this and help me resolve it? | returning error like LineList expected dict not int (type=type_error) while using MultiQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/17339/comments | 1 | 2024-02-09T20:05:31Z | 2024-02-14T03:34:56Z | https://github.com/langchain-ai/langchain/issues/17339 | 2,127,778,995 | 17,339 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
Below is the code I tried running:
```
retriever = MultiQueryRetriever.from_llm(retriever=vectorStore.as_retriever(), llm=llm)
docs = retriever.get_relevant_documents(query="data related to cricket?")
```
below's the output
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py](https://localhost:8080/#) in parse_obj(cls, obj)
521 try:
--> 522 obj = dict(obj)
523 except (TypeError, ValueError) as e:
TypeError: 'int' object is not iterable
The above exception was the direct cause of the following exception:
ValidationError Traceback (most recent call last)
14 frames
ValidationError: 1 validation error for LineList
__root__
LineList expected dict not int (type=type_error)
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/langchain/output_parsers/pydantic.py](https://localhost:8080/#) in parse_result(self, result, partial)
27 name = self.pydantic_object.__name__
28 msg = f"Failed to parse {name} from completion {json_object}. Got: {e}"
---> 29 raise OutputParserException(msg, llm_output=json_object)
30
31 def get_format_instructions(self) -> str:
OutputParserException: Failed to parse LineList from completion 1. Got: 1 validation error for LineList
__root__
LineList expected dict not int (type=type_error)
```
I'm facing this issue for the first time while retrieving the relevant documents. Can you have a look into it? | returning an error like LineList expected dict not int (type=type_error) while using MultiQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/17336/comments | 4 | 2024-02-09T19:18:14Z | 2024-02-18T08:32:35Z | https://github.com/langchain-ai/langchain/issues/17336 | 2,127,720,355 | 17,336 |
[
"hwchase17",
"langchain"
@eyurtsev hello. I'd like to ask a follow-up question. The `COSINE` distance strategy is resulting in scores greater than 1. From this [code](https://github.com/langchain-ai/langchain/blob/023cb59e8aaf3dfaad684b3fcf57a1c363b9abd1/libs/core/langchain_core/vectorstores.py#L184C2-L188C1), it looks like the returned scores are calculated as `1 - distance`, meaning they are similarity scores, and cosine similarity scores should be in the [-1, 1] range. But I get scores greater than 1.
Is there a reason for that? Am I missing something in my implementation? Thanks.
this is my code snippet
```python
# Imports assumed from the packages I have installed.
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.vectorstores.utils import DistanceStrategy

# docs_split is my list of pre-split documents
embedder = NVIDIAEmbeddings(model="nvolveqa_40k")
store = FAISS.from_documents(docs_split, embedder, distance_strategy=DistanceStrategy.COSINE)
query = "Who is the director of the Oppenheimer movie?"
docs_and_scores = store.similarity_search_with_score(query)
```
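For comparison, the two calls I have been looking at (a sketch; as far as I can tell `similarity_search_with_score` returns the raw score from the index, while `similarity_search_with_relevance_scores` is the method that applies the `1 - distance` conversion from the linked code):
```python
# raw distance/score straight from the FAISS index
docs_and_distances = store.similarity_search_with_score(query)

# scores passed through the relevance function (1 - distance for COSINE)
docs_and_relevance = store.similarity_search_with_relevance_scores(query)
```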
_Originally posted by @rnyak in https://github.com/langchain-ai/langchain/discussions/16224#discussioncomment-8413281_ | Question about the Cosine distance strategy | https://api.github.com/repos/langchain-ai/langchain/issues/17333/comments | 5 | 2024-02-09T18:40:28Z | 2024-07-02T08:49:16Z | https://github.com/langchain-ai/langchain/issues/17333 | 2,127,672,239 | 17,333 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import os
import sys
import langchain_community
import langchain_core
from langchain_openai import AzureOpenAIEmbeddings
from langchain_community.vectorstores.azuresearch import AzureSearch
embeddings = AzureOpenAIEmbeddings(...)
vectordb = AzureSearch(...)
retriever = vectordb.as_retriever(search_type="similarity_score_threshold", search_kwargs={'score_threshold': 0.8})
#retriever = vectordb.as_retriever() # this works !
print(type(retriever))
query="what is the capital of poland?"
print(len(retriever.vectorstore.similarity_search_with_relevance_scores(query)))
import asyncio
async def f():
    # return the result so the print below shows it instead of None
    return await retriever.vectorstore.asimilarity_search_with_relevance_scores(query)
loop = asyncio.get_event_loop()
print(
loop.run_until_complete(f())
)
```
### Error Message and Stack Trace (if applicable)
```
File "/Users/user01/miniconda3/envs/genai/lib/python3.11/site-packages/langchain/chains/retrieval_qa/base.py", line 232, in _aget_docs
return await self.retriever.aget_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user01/miniconda3/envs/genai/lib/python3.11/site-packages/langchain_core/retrievers.py", line 280, in aget_relevant_documents
raise e
File "/Users/user01/miniconda3/envs/genai/lib/python3.11/site-packages/langchain_core/retrievers.py", line 273, in aget_relevant_documents
result = await self._aget_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user01/miniconda3/envs/genai/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 679, in _aget_relevant_documents
await self.vectorstore.asimilarity_search_with_relevance_scores(
File "/Users/user01/miniconda3/envs/genai/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 351, in asimilarity_search_with_relevance_scores
docs_and_similarities = await self._asimilarity_search_with_relevance_scores(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user01/miniconda3/envs/genai/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 278, in _asimilarity_search_with_relevance_scores
relevance_score_fn = self._select_relevance_score_fn()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user01/miniconda3/envs/genai/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 208, in _select_relevance_score_fn
raise NotImplementedError
```
### Description
I encountered this issue when trying to use `RetrievalQA`. I then noticed that I cannot use a retriever with `similarity_score_threshold` enabled. I also checked that the problem only occurs with `chain.ainvoke`, not with `chain.invoke`.
The code above performs `similarity_search_with_relevance_scores` fine, but the asynchronous version fails.
It might be related to https://github.com/langchain-ai/langchain/issues/13242 but for async calls.
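As a temporary workaround I can run the synchronous method in a worker thread; this is only a sketch and does not address the missing `_select_relevance_score_fn`:
```python
import asyncio

async def f():
    # run the sync search off the event loop as a stopgap
    return await asyncio.to_thread(
        retriever.vectorstore.similarity_search_with_relevance_scores, query
    )
```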
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:18 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6000
> Python Version: 3.11.7 (main, Dec 15 2023, 12:09:56) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.16
> langchain: 0.1.4
> langchain_community: 0.0.16
> langchain_openai: 0.0.5
``` | asimilarity_search_with_relevance_scores returns NotImplementedError with AzureSearch | https://api.github.com/repos/langchain-ai/langchain/issues/17329/comments | 1 | 2024-02-09T17:14:56Z | 2024-05-17T16:08:48Z | https://github.com/langchain-ai/langchain/issues/17329 | 2,127,547,322 | 17,329 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Hello,
I wrote a code which will use normal/selfquery retieval and doesn't use RetrievalQachain. I want to add chain of thoughts prompt for accurate retrieval of documents. Below is the code which i used to retrieve docs
```
llm = OpenAI(temperature=0.2)
# Create a retriever for the vector database
document_content_description = "Description of a corporate document outlining human rights commitments and implementation strategies by an organization, including ethical principles, global agreements, and operational procedures."
metadata_field_info = [
{
"name": "document_type",
"description": "The type of document, such as policy statement, modern slavery statement, human rights due diligence manual.",
"type": "string",
},
{
"name": "company_name",
"description": "The name of the company that the document pertains to.",
"type": "string",
},
{
"name": "effective_date",
"description": "The date when the document or policy became effective.",
"type": "date",
},
{
"name": "document_year",
"description": "The year of the document.",
"type": "date",
},
]
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
use_original_query=False,
enable_limit=True,
verbose=True
)
docs = retriever.get_relevant_documents("data related to cricket")
```
How should i add prompt template to the above to get accurate retrieval? Can you help me with the code?
### Idea or request for content:
_No response_ | can i use chain of thoughts prompt for document retrieval? | https://api.github.com/repos/langchain-ai/langchain/issues/17326/comments | 2 | 2024-02-09T16:32:14Z | 2024-02-09T16:41:05Z | https://github.com/langchain-ai/langchain/issues/17326 | 2,127,480,794 | 17,326 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
The following code fails during iteration if the custom_parser is explicitly wrapped in a RunnableLambda.
```python
from langchain_openai import ChatOpenAI
from langchain.pydantic_v1 import BaseModel, Field
from typing import List, Optional
from langchain_core.runnables import RunnableLambda
model = ChatOpenAI()
class UserInfo(BaseModel):
"""Information to extract from the user's input"""
name: Optional[str] = Field(description = "Name of user")
facts: List[str] = Field(description="List of facts about the user")
model_with_tools = model.bind_tools([UserInfo])
async def custom_parser(chunk_stream):
aggregated_message = None
async for chunk in chunk_stream:
if aggregated_message is None:
aggregated_message = chunk
else:
aggregated_message += chunk
yield aggregated_message.additional_kwargs['tool_calls'][0]['function']
custom_parser = RunnableLambda(custom_parser)
chain = model_with_tools | custom_parser
async for chunk in chain.astream('my name is eugene and i like cats and dogs'):
print(chunk)
```
Need to investigate if this is a bug or bad UX | Async generator function fails when wrapped in a Runnable Lambda and used in streaming | https://api.github.com/repos/langchain-ai/langchain/issues/17315/comments | 2 | 2024-02-09T14:46:26Z | 2024-06-01T00:08:30Z | https://github.com/langchain-ai/langchain/issues/17315 | 2,127,288,780 | 17,315 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
below's the code
```
llm = OpenAI(temperature=0.2)
# Create a retriever for the vector database
document_content_description = "Description of a corporate document."
metadata_field_info = [
{
"name": "document_type",
"description": "The type of document, such as policy statement, modern slavery statement, human rights due diligence manual.",
"type": "string",
},
{
"name": "company_name",
"description": "The name of the company that the document pertains to.",
"type": "string",
},
]
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
use_original_query=False,
enable_limit=True,
verbose=True
)
structured_query = StructuredQuery(
query="related to BCCI company",
# query = "data related to ABinbev",
limit=3 # Set the number of documents to retrieve
)
docs = retriever.get_relevant_documents(structured_query)
print(docs)
```
below is the output
```
[Document(page_content='BCCI is a cricket board that controls and manages the activities in India', metadata={'row': 0, 'source': '/content/files/19.csv'}),
Document(page_content='BBCI is a cricket board in India', metadata={'row': 36, 'source': '/content/files/23.csv'}),
Document(page_content='BCCI is a cricket board that controls and manages the activities in India.', metadata={'row': 14, 'source': '/content/files/11.csv'})]
```
In the output above, there is an issue with duplicated page_content. I need a script that fetches the top k documents in a way that avoids these duplications. For example, if we request the top 5 documents (k=5) and find that there are 2 duplicates among them, the script should discard the duplicates and instead retrieve additional unique documents to ensure we still receive a total of 5 unique documents. Essentially, if duplicates are found within the initially requested top documents, the script should continue to fetch the next highest-ranked document(s) until we have 5 unique page_contents. Can you provide a code that accomplishes this?
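For reference, this is the kind of helper I have in mind; it is only a sketch and assumes that over-fetching (e.g. a higher limit with `enable_limit=True`) and then de-duplicating on `page_content` is acceptable:
```python
def get_unique_documents(retriever, query, k=5):
    # keep the first k documents with distinct page_content
    docs = retriever.get_relevant_documents(query)
    seen, unique = set(), []
    for doc in docs:
        if doc.page_content not in seen:
            seen.add(doc.page_content)
            unique.append(doc)
        if len(unique) == k:
            break
    return unique

unique_docs = get_unique_documents(retriever, "related to BCCI company", k=3)
```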
### Idea or request for content:
_No response_ | returning duplicates while retrieving the documents | https://api.github.com/repos/langchain-ai/langchain/issues/17313/comments | 2 | 2024-02-09T13:50:24Z | 2024-04-25T09:20:44Z | https://github.com/langchain-ai/langchain/issues/17313 | 2,127,194,884 | 17,313 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
below's the code
```
llm = OpenAI(temperature=0.2)
# Create a retriever for the vector database
document_content_description = "Description of a corporate document."
metadata_field_info = [
{
"name": "document_type",
"description": "The type of document, such as policy statement, modern slavery statement, human rights due diligence manual.",
"type": "string",
},
{
"name": "company_name",
"description": "The name of the company that the document pertains to.",
"type": "string",
},
]
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
use_original_query=False,
enable_limit=True,
verbose=True
)
structured_query = StructuredQuery(
query="related to BCCI company",
# query = "data related to ABinbev",
limit=3 # Set the number of documents to retrieve
)
docs = retriever.get_relevant_documents(structured_query)
print(docs)
```
below's the output
```
[Document(page_content='BCCI is a cricket board that controls and manages the activities in India', metadata={'row': 0, 'source': '/content/files/19.csv'}),
Document(page_content='BBCI is a cricket board in India', metadata={'row': 36, 'source': '/content/files/23.csv'}),
Document(page_content='BCCI is a cricket board that controls and manages the activities in India.', metadata={'row': 14, 'source': '/content/files/11.csv'})]
```
The 1st and 3rd output page_content are the same. How do I deduplicate and make sure it is retrieving the top k unique documents?
### Idea or request for content:
_No response_ | returning duplicates while retrieving the top k documents | https://api.github.com/repos/langchain-ai/langchain/issues/17310/comments | 9 | 2024-02-09T13:10:50Z | 2024-07-01T09:04:18Z | https://github.com/langchain-ai/langchain/issues/17310 | 2,127,127,974 | 17,310 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain.chains import load_chain
chain = load_chain('lc://chains/qa_with_sources/map-reduce/chain.json')
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/snap/pycharm-professional/368/plugins/python/helpers/pydev/pydevconsole.py", line 364, in runcode
coro = func()
^^^^^^
File "<input>", line 1, in <module>
File "/snap/pycharm-professional/368/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
module = self._system_import(name, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'streamlit.web.cli.main'; 'streamlit.web.cli' is not a package
import streamlit.web.cli
a:toto
Traceback (most recent call last):
File "/snap/pycharm-professional/368/plugins/python/helpers/pydev/pydevconsole.py", line 364, in runcode
coro = func()
^^^^^^
File "<input>", line 1, in <module>
NameError: name 'toto' is not defined
def f() -> toto:
pass
Traceback (most recent call last):
File "/snap/pycharm-professional/368/plugins/python/helpers/pydev/pydevconsole.py", line 364, in runcode
coro = func()
^^^^^^
File "<input>", line 1, in <module>
NameError: name 'toto' is not defined
from langchain.chains import load_chain
chain = load_chain('lc://chains/qa_with_sources/map-reduce/chain.json')
Traceback (most recent call last):
File "/snap/pycharm-professional/368/plugins/python/helpers/pydev/pydevconsole.py", line 364, in runcode
coro = func()
^^^^^^
File "<input>", line 1, in <module>
File "/home/pprados/workspace.bda/rag-template/.venv/lib/python3.11/site-packages/langchain/chains/loading.py", line 591, in load_chain
if hub_result := try_load_from_hub(
^^^^^^^^^^^^^^^^^^
File "/home/pprados/workspace.bda/rag-template/.venv/lib/python3.11/site-packages/langchain_core/utils/loading.py", line 54, in try_load_from_hub
return loader(str(file), **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pprados/workspace.bda/rag-template/.venv/lib/python3.11/site-packages/langchain/chains/loading.py", line 623, in _load_chain_from_file
return load_chain_from_config(config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pprados/workspace.bda/rag-template/.venv/lib/python3.11/site-packages/langchain/chains/loading.py", line 586, in load_chain_from_config
return chain_loader(config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pprados/workspace.bda/rag-template/.venv/lib/python3.11/site-packages/langchain/chains/loading.py", line 124, in _load_map_reduce_documents_chain
reduce_documents_chain = _load_reduce_documents_chain(config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pprados/workspace.bda/rag-template/.venv/lib/python3.11/site-packages/langchain/chains/loading.py", line 178, in _load_reduce_documents_chain
return ReduceDocumentsChain(
^^^^^^^^^^^^^^^^^^^^^
File "/home/pprados/workspace.bda/rag-template/.venv/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 107, in __init__
super().__init__(**kwargs)
File "/home/pprados/workspace.bda/rag-template/.venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for ReduceDocumentsChain
document_variable_name
extra fields not permitted (type=value_error.extra)
return_intermediate_steps
extra fields not permitted (type=value_error.extra)
```
### Description
We want to create a chain to manipulate a list of documents.
### System Info
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.21
langchain-experimental==0.0.49
langchain-openai==0.0.2.post1
langchain-rag==0.1.18
Linux
Python 3.11.4 | Impossible to load chain `lc://chains/qa_with_sources/map-reduce/chain.json` | https://api.github.com/repos/langchain-ai/langchain/issues/17309/comments | 1 | 2024-02-09T12:45:06Z | 2024-05-17T16:08:43Z | https://github.com/langchain-ai/langchain/issues/17309 | 2,127,065,395 | 17,309 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.memory import PostgresChatMessageHistory
connection_string = ""
history = PostgresChatMessageHistory(
connection_string=connection_string,
session_id="trial1",
table_name= "schema.table"
)
history.add_user_message("Msg user 1")
history.add_ai_message("Msg AI 1")
print(history.messages)
```
### Error Message and Stack Trace (if applicable)
```error
Traceback (most recent call last):
File "d:\xtransmatrix\SmartSurgn\SessionHandler\test.py", line 9, in <module>
history.add_user_message("Msg user 1")
File "D:\xtransmatrix\SmartSurgn\env\lib\site-packages\langchain\schema\chat_history.py", line 46, in add_user_message
self.add_message(HumanMessage(content=message))
File "D:\xtransmatrix\SmartSurgn\env\lib\site-packages\langchain\memory\chat_message_histories\postgres.py", line 66, in add_message
self.cursor.execute(
File "D:\xtransmatrix\SmartSurgn\env\lib\site-packages\psycopg\cursor.py", line 732, in execute
raise ex.with_traceback(None)
psycopg.errors.UndefinedTable: relation "smartsurgn.msg_history" does not exist
LINE 1: INSERT INTO "schema.table" (session_id, message) V...
^
```
### Description
We are asked to provide a table name to store the chat message history, and by default the table is created in the public schema. If I want to store it in a different schema, I should be able to provide the table name as `<schema_name>.<table_name>`, but when I do this it runs into the above error. Note: the table gets created in the right schema, but the error occurs when trying to write to it.
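Looking at the failing statement, the whole string seems to be quoted as a single identifier (`INSERT INTO "schema.table"`); what I would expect is the schema and table quoted separately, roughly like this sketch using psycopg's SQL composition (names are placeholders):
```python
from psycopg import sql

# renders as "my_schema"."message_store" rather than "my_schema.message_store"
insert_query = sql.SQL(
    "INSERT INTO {table} (session_id, message) VALUES (%s, %s);"
).format(table=sql.Identifier("my_schema", "message_store"))
```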
### System Info
`langchain==0.1.5` | Chat message history with postgres failing when destination table has explicit schema | https://api.github.com/repos/langchain-ai/langchain/issues/17306/comments | 9 | 2024-02-09T10:57:56Z | 2024-08-02T11:37:54Z | https://github.com/langchain-ai/langchain/issues/17306 | 2,126,902,036 | 17,306 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
below's the code
```
llm = OpenAI(temperature=0.2)
# Create a retriever for the vector database
document_content_description = "Description of a corporate document outlining"
metadata_field_info = [
{
"name": "document_type",
"description": "The type of document, such as policy statement, modern slavery statement, human rights due diligence manual.",
"type": "string",
},
{
"name": "company_name",
"description": "The name of the company that the document pertains to.",
"type": "string",
},
{
"name": "effective_date",
"description": "The date when the document or policy became effective.",
"type": "date",
},
{
"name": "document_year",
"description": "The year of the document.",
"type": "date",
},
]
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
use_original_query=False,
verbose=True
)
```
`docs = retriever.get_relevant_documents("Does the company has one or more")`
I don't see any option such as below for SelfQueryRetriever
`retriever = vectorstore.as_retriever(search_kwargs={"k": 5})`
Can you help me out on how to retrieve top k docs for SelfQueryRetriever?
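A sketch of what might work, assuming `SelfQueryRetriever.from_llm` forwards extra keyword arguments such as `search_kwargs` to the retriever (I have not confirmed this):
```python
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
    search_kwargs={"k": 5},  # assumption: caps the number of documents returned
)
docs = retriever.get_relevant_documents("human rights policy commitments")
```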
### Idea or request for content:
_No response_ | how to retriever top k docs in SelfqueryRetriever? | https://api.github.com/repos/langchain-ai/langchain/issues/17301/comments | 2 | 2024-02-09T08:53:21Z | 2024-02-09T14:51:46Z | https://github.com/langchain-ai/langchain/issues/17301 | 2,126,714,532 | 17,301 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I follow the code in this langchain doc: https://python.langchain.com/docs/modules/agents/agent_types/structured_chat
using GPT 3.5 Turbo
The error does not show up when I use GPT4
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The error I get is this:
> Entering new AgentExecutor chain...
{
"action": "Retriever",
"action_input": {
"title": "Key factors to consider when evaluating the return on investment from AI initiatives"
}
}
ValidationError: 1 validation error for RetrieverInput
query
field required (type=value_error.missing)
### System Info
openai==1.7.0
langchain==0.1.1 | Structured chat with GPT3.5 Turbo ValidationError: 1 validation error for RetrieverInput query field required (type=value_error.missing) | https://api.github.com/repos/langchain-ai/langchain/issues/17300/comments | 5 | 2024-02-09T08:25:04Z | 2024-05-17T16:08:38Z | https://github.com/langchain-ai/langchain/issues/17300 | 2,126,678,447 | 17,300 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
below's the code
```
class MyTextSplitter(RecursiveCharacterTextSplitter):
def split_documents(self, documents):
chunks = super().split_documents(documents)
for document, chunk in zip(documents, chunks):
chunk.metadata['source'] = document.metadata['source']
return chunks
text_splitter = MyTextSplitter(chunk_size=500,
chunk_overlap=50)
texts = text_splitter.split_documents(documents)
vectorstore = Chroma.from_documents(texts, embeddings)
```
I'm looking for assistance in using the MyTextSplitter class only for cases where the text surpasses OpenAI's maximum context length, not for all of the documents' page_content. Concretely: I want to check whether each document's page_content exceeds the maximum context length allowed by OpenAI embeddings. For documents whose page_content is longer than this limit, I intend to use MyTextSplitter to divide the page_content into smaller sections, and these sections should then replace the original, longer page_content in the document set. Afterwards, if any segment includes or constitutes an answer, those segments should be reassembled into a cohesive response before being delivered. Can you provide code for this?
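A sketch of the gating I have in mind, assuming tiktoken is available and 8,191 tokens is the right threshold for text-embedding-ada-002:
```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
MAX_TOKENS = 8191  # assumed context limit for text-embedding-ada-002

short_docs = [d for d in documents if len(enc.encode(d.page_content)) <= MAX_TOKENS]
long_docs = [d for d in documents if len(enc.encode(d.page_content)) > MAX_TOKENS]

# split only the documents that exceed the limit, keep the rest untouched
split_docs = text_splitter.split_documents(long_docs)  # MyTextSplitter instance from above
vectorstore = Chroma.from_documents(short_docs + split_docs, embeddings)
```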
### Idea or request for content:
_No response_ | handle context length in chroma db | https://api.github.com/repos/langchain-ai/langchain/issues/17299/comments | 4 | 2024-02-09T07:27:16Z | 2024-02-09T14:52:09Z | https://github.com/langchain-ai/langchain/issues/17299 | 2,126,613,340 | 17,299 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
below's the code
```
class MyTextSplitter(RecursiveCharacterTextSplitter):
def split_documents(self, documents):
chunks = super().split_documents(documents)
for document, chunk in zip(documents, chunks):
chunk.metadata['source'] = document.metadata['source']
return chunks
text_splitter = MyTextSplitter(chunk_size=500,
chunk_overlap=50)
texts = text_splitter.split_documents(documents)
vectorstore = Chroma.from_documents(texts, embeddings)
```
I'm looking for assistance in utilizing the MyTextSplitter function specifically for cases where the text surpasses OpenAI's maximum character limit. The objective is to divide the text into smaller segments when it's too lengthy, and subsequently, if any segment includes or constitutes an answer, these segments should be reassembled into a cohesive response before being delivered. I want to check if each document's page_content exceeds the maximum context length allowed by OpenAI Embeddings. For documents where page_content is longer than this limit, I intend to use a function named MyTextSplitter to divide the page_content into smaller sections. After splitting, these sections should be incorporated back into the respective documents, effectively replacing the original, longer page_content with the newly segmented texts. Can you provide code for me?
### Idea or request for content:
_No response_ | handling context length in chromadb | https://api.github.com/repos/langchain-ai/langchain/issues/17298/comments | 2 | 2024-02-09T07:23:49Z | 2024-02-09T14:53:29Z | https://github.com/langchain-ai/langchain/issues/17298 | 2,126,609,550 | 17,298 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I'm using the custom class below, which extends RecursiveCharacterTextSplitter, and it's working:
```
class MyTextSplitter(RecursiveCharacterTextSplitter):
def split_documents(self, documents):
chunks = super().split_documents(documents)
for document, chunk in zip(documents, chunks):
chunk.metadata['source'] = document.metadata['source']
return chunks
text_splitter = MyTextSplitter(chunk_size=500,
chunk_overlap=50)
texts = text_splitter.split_documents(documents)
vectorstore = Chroma.from_documents(texts, embeddings)
```
I'm looking for assistance in utilizing the MyTextSplitter function specifically for cases where the text surpasses OpenAI's maximum character limit. The objective is to divide the text into smaller segments when it's too lengthy, and subsequently, if any segment includes or constitutes an answer, these segments should be reassembled into a cohesive response before being delivered. Can you provide a code for this?
### Idea or request for content:
_No response_ | how to handle the context lengths in ChromaDB? | https://api.github.com/repos/langchain-ai/langchain/issues/17297/comments | 1 | 2024-02-09T07:11:09Z | 2024-02-09T14:49:07Z | https://github.com/langchain-ai/langchain/issues/17297 | 2,126,595,885 | 17,297 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Below is the code I'm using to try to handle longer context lengths:
```
# Instantiate the OpenAIEmbeddings class
openai = OpenAIEmbeddings(openai_api_key="sk-")
# Generate embeddings for your documents
documents = [doc for doc in documents]
# Create a Chroma vector store from the documents
vectorstore = Chroma.from_documents(documents, openai.embed_documents)
```
It is returning the error below:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-81-029281717453>](https://localhost:8080/#) in <cell line: 8>()
6
7 # Create a Chroma vector store from the documents
----> 8 vectorstore = Chroma.from_documents(documents, openai.embed_documents)
2 frames
[/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/chroma.py](https://localhost:8080/#) in from_documents(cls, documents, embedding, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
776 texts = [doc.page_content for doc in documents]
777 metadatas = [doc.metadata for doc in documents]
--> 778 return cls.from_texts(
779 texts=texts,
780 embedding=embedding,
[/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/chroma.py](https://localhost:8080/#) in from_texts(cls, texts, embedding, metadatas, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
734 documents=texts,
735 ):
--> 736 chroma_collection.add_texts(
737 texts=batch[3] if batch[3] else [],
738 metadatas=batch[2] if batch[2] else None,
[/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/chroma.py](https://localhost:8080/#) in add_texts(self, texts, metadatas, ids, **kwargs)
273 texts = list(texts)
274 if self._embedding_function is not None:
--> 275 embeddings = self._embedding_function.embed_documents(texts)
276 if metadatas:
277 # fill metadatas with empty dicts if somebody
AttributeError: 'function' object has no attribute 'embed_documents'
```
Can you assist me in dealing with the context length issue? I don't want to use RecursiveCharacterTextSplitter because I've already chunked the data manually; I just want to send the data to ChromaDB while handling its context length.
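For reference, the traceback suggests `Chroma.from_documents` expects the embeddings object itself rather than its bound `embed_documents` method, so presumably the call should look like this sketch:
```python
openai = OpenAIEmbeddings(openai_api_key="sk-...")

# pass the OpenAIEmbeddings instance, not openai.embed_documents
vectorstore = Chroma.from_documents(documents, openai)
```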
### Idea or request for content:
_No response_ | unable to use embed_documents function for ChromaDB | https://api.github.com/repos/langchain-ai/langchain/issues/17295/comments | 1 | 2024-02-09T06:53:08Z | 2024-02-09T14:49:50Z | https://github.com/langchain-ai/langchain/issues/17295 | 2,126,577,191 | 17,295 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Below is the FAISS code, which I tried to run for ChromaDB too, but it's not working:
```
# Instantiate the OpenAIEmbeddings class
openai = OpenAIEmbeddings(openai_api_key="")
# Generate embeddings for your documents
embeddings = openai.embed_documents([doc.page_content for doc in documents])
# Create tuples of text and corresponding embedding
text_embeddings = list(zip([doc.page_content for doc in documents], embeddings))
# Create a FAISS vector store from the embeddings
vectorStore = FAISS.from_embeddings(text_embeddings, openai)
```
below's the code for chromadb
```
# Instantiate the OpenAIEmbeddings class
openai = OpenAIEmbeddings(openai_api_key="")
# Generate embeddings for your documents
embeddings = openai.embed_documents([doc.page_content for doc in documents])
# Create tuples of text and corresponding embedding
text_embeddings = list(zip([doc.page_content for doc in documents], embeddings))
# Try to create a Chroma vector store from the embeddings (same pattern as FAISS)
vectorstore = Chroma.from_embeddings(text_embeddings, openai)
```
the error is below
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-57-81938d77957d>](https://localhost:8080/#) in <cell line: 11>()
9
10 # Create a FAISS vector store from the embeddings
---> 11 vectorstore = Chroma.from_embeddings(text_embeddings, openai)
AttributeError: type object 'Chroma' has no attribute 'from_embeddings'
```
Can you help me out on how to resolve this issue I'm facing with ChromaDB?
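Since `Chroma` has no `from_embeddings` constructor, the closest equivalent I can think of is letting Chroma call the embeddings object itself; this sketch re-computes the embeddings instead of reusing the precomputed ones:
```python
texts = [doc.page_content for doc in documents]
metadatas = [doc.metadata for doc in documents]

# Chroma computes the embeddings internally via the OpenAIEmbeddings instance
vectorstore = Chroma.from_texts(texts, openai, metadatas=metadatas)
```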
### Idea or request for content:
_No response_ | unable to apply the same code on Chroma db which i've used for FAISS | https://api.github.com/repos/langchain-ai/langchain/issues/17292/comments | 6 | 2024-02-09T06:09:18Z | 2024-02-09T14:52:17Z | https://github.com/langchain-ai/langchain/issues/17292 | 2,126,535,653 | 17,292 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
llm = Bedrock(
client=client,
model_id='amazon.titan-text-express-v1',
model_kwargs={'temperature': 0.1},
endpoint_url='https://bedrock-runtime.us-east-1.amazonaws.com',
region_name='us-east-1',
verbose=True,
)
```
Is there a way I can provide the max output limit in model_kwargs? I am using the llm in SQLDatabaseChain and I am seeing that the SQL command generated by the llm gets truncated, possibly because of the default max token limit.
### Error Message and Stack Trace (if applicable)
truncates the SQL commands suddenly after 128 tokens
### Description
```
llm = Bedrock(
client=client,
model_id='amazon.titan-text-express-v1',
model_kwargs={'temperature': 0.1},
endpoint_url='https://bedrock-runtime.us-east-1.amazonaws.com',
region_name='us-east-1',
verbose=True,
)
```
Is there a way I can provide the max output limit in model_kwargs? I am using the llm in SQLDatabaseChain and I am seeing that the SQL command generated by the llm gets truncated, possibly because of the default max token limit.
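What I have been trying, roughly; the assumption is that for Amazon Titan models the entries in `model_kwargs` are passed through as `textGenerationConfig`, so `maxTokenCount` would be the knob for the output length:
```python
llm = Bedrock(
    client=client,
    model_id='amazon.titan-text-express-v1',
    model_kwargs={
        'temperature': 0.1,
        'maxTokenCount': 4096,  # assumed key for the Titan output token limit
    },
    endpoint_url='https://bedrock-runtime.us-east-1.amazonaws.com',
    region_name='us-east-1',
    verbose=True,
)
```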
### System Info
boto3==1.34.29
chromadb==0.4.22
huggingface-hub==0.20.3
langchain==0.1.4
langchain-experimental==0.0.49
python-dotenv==1.0.1
sentence_transformers==2.3.0
snowflake-connector-python==3.7.0
snowflake-sqlalchemy==1.5.1
SQLAlchemy==1.4.51
streamlit==1.30.0
watchdog==3.0.0
| How to set max_output_token for AWS Bedrock Titan text express model? | https://api.github.com/repos/langchain-ai/langchain/issues/17287/comments | 18 | 2024-02-09T03:52:00Z | 2024-02-14T04:22:58Z | https://github.com/langchain-ai/langchain/issues/17287 | 2,126,422,842 | 17,287 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I have created a function that creates an agent and returns the agent executor to run the query. Here's the code:
```
def agent_executor(tools: List):
try:
# Create the language model
llm = ChatOpenAI(model="gpt-4-1106-preview", temperature=0)
prompt = ChatPromptTemplate.from_messages(
[
SystemMessagePromptTemplate.from_template(
input_variables=["tool_names", "tools", "task_context"],
template=SYSTEM_PROMPT_TEMPLATE,
),
MessagesPlaceholder(variable_name="chat_history", optional=True),
HumanMessagePromptTemplate.from_template(
input_variables=["input", "chat_history", "agent_scratchpad"],
template=HUMAN_PROMPT_TEMPLATE,
),
]
)
# print(f"Prompt: {prompt}")
# Create the memory object
memory = ConversationBufferWindowMemory(
memory_key="chat_history", k=5, return_messages=True, output_key="output"
)
# Construct the JSON agent
if task_context is not None:
agent = create_agent(llm, tools, prompt)
else:
agent = create_structured_chat_agent(llm, tools, prompt)
# Create an agent executor by passing in the agent and tools
agent_executor = AgentExecutor.from_agent_and_tools(
agent=agent,
tools=tools,
verbose=True,
memory=memory,
handle_parsing_errors=True,
return_intermediate_steps=True,
max_iterations=4,
max_execution_time=100,
)
return agent_executor
except Exception as e:
print(f"Error in executing agent: {e}")
return None
```
I have created a `ConversationBufferWindowMemory` for this agent and assigned it as the memory. This is how I am running this agent:
```
query = "Reply to this email......"
tools = [create_email_reply]
agent = agent_executor(tools)
response = agent.invoke({"input": query})
return response["output"]
```
When I run this agent, sometimes the final answer is not a string and may look as follows:
Action:
```json
{
"action": "Final Answer",
"action_input": {
"email": {
"to": "[email protected]",
"subject": "Status of Your Invoice post_713",
"body": "Dear valued customer,\n\nThank you for reaching out to us with your inquiry about the status of your invoice number post_713.\n\nI am pleased to inform you that the invoice has been successfully posted and the payment status is marked as 'Paid'. The payment was processed on November 14, 2023, with the payment reference number 740115.\n\nShould you require any further assistance or have any additional questions, please do not hesitate to contact us at [email protected].\n\nBest regards,\n\n[Your Name]\nCustomer Service Team\nAexonic Technologies"
}
}
}
```
You can see the final answer looks like a dictionary. In this case the agent execution throws an error after the final answer. If I remove the memory, the agent executes without any error.
### Error Message and Stack Trace (if applicable)
Action:
```json
{
"action": "Final Answer",
"action_input": {
"email": {
"to": "[email protected]",
"subject": "Status of Your Invoice post_713",
"body": "Dear valued customer,\n\nThank you for reaching out to us with your inquiry about the status of your invoice number post_713.\n\nI am pleased to inform you that the invoice has been successfully posted and the payment status is marked as 'Paid'. The payment was processed on November 14, 2023, with the payment reference number 740115.\n\nShould you require any further assistance or have any additional questions, please do not hesitate to contact us at [email protected].\n\nBest regards,\n\n[Your Name]\nCustomer Service Team\nAexonic Technologies"
}
}
}
```
> Finished chain.
Traceback (most recent call last):
File "/Users/Cipher/AssistCX/assistcx-agent/main.py", line 80, in <module>
response = main(query)
^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/main.py", line 72, in main
agent_output = invoice_agent(query)
^^^^^^^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/main.py", line 41, in invoice_agent
response = agent.invoke({"input": query})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 164, in invoke
final_outputs: Dict[str, Any] = self.prep_outputs(
^^^^^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 440, in prep_outputs
self.memory.save_context(inputs, outputs)
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain/memory/chat_memory.py", line 39, in save_context
self.chat_memory.add_ai_message(output_str)
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain_core/chat_history.py", line 122, in add_ai_message
self.add_message(AIMessage(content=message))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain_core/messages/base.py", line 35, in __init__
return super().__init__(content=content, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 107, in __init__
super().__init__(**kwargs)
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for AIMessage
content
str type expected (type=type_error.str)
content
value is not a valid list (type=type_error.list)
### Description
I am trying to create a structured chat agent with memory. When the agent's final answer is not a string, the execution fails with an error after the final answer. If I remove the memory, it works fine.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.1.0: Mon Oct 9 21:32:11 PDT 2023; root:xnu-10002.41.9~7/RELEASE_ARM64_T6030
> Python Version: 3.11.5 (main, Sep 15 2023, 16:17:37) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Package Information
-------------------
> langchain_core: 0.1.21
> langchain: 0.1.5
> langchain_community: 0.0.19
> langsmith: 0.0.87
> langchain_openai: 0.0.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Agent executor with memory gives error after final answer | https://api.github.com/repos/langchain-ai/langchain/issues/17269/comments | 5 | 2024-02-08T22:19:52Z | 2024-06-18T11:55:39Z | https://github.com/langchain-ai/langchain/issues/17269 | 2,126,162,650 | 17,269 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
below's the code
```
def _get_len_safe_embeddings(
self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None
) -> List[List[float]]:
"""
Generate length-safe embeddings for a list of texts.
This method handles tokenization and embedding generation, respecting the
set embedding context length and chunk size. It supports both tiktoken
and HuggingFace tokenizer based on the tiktoken_enabled flag.
Args:
texts (List[str]): A list of texts to embed.
engine (str): The engine or model to use for embeddings.
chunk_size (Optional[int]): The size of chunks for processing embeddings.
Returns:
List[List[float]]: A list of embeddings for each input text.
"""
tokens = []
indices = []
model_name = self.tiktoken_model_name or self.model
_chunk_size = chunk_size or self.chunk_size
# If tiktoken flag set to False
if not self.tiktoken_enabled:
try:
from transformers import AutoTokenizer
except ImportError:
raise ValueError(
"Could not import transformers python package. "
"This is needed in order to for OpenAIEmbeddings without "
"`tiktoken`. Please install it with `pip install transformers`. "
)
tokenizer = AutoTokenizer.from_pretrained(
pretrained_model_name_or_path=model_name
)
for i, text in enumerate(texts):
# Tokenize the text using HuggingFace transformers
tokenized = tokenizer.encode(text, add_special_tokens=False)
# Split tokens into chunks respecting the embedding_ctx_length
for j in range(0, len(tokenized), self.embedding_ctx_length):
token_chunk = tokenized[j : j + self.embedding_ctx_length]
tokens.append(token_chunk)
indices.append(i)
# Embed each chunk separately
batched_embeddings = []
for i in range(0, len(tokens), _chunk_size):
token_batch = tokens[i : i + _chunk_size]
response = embed_with_retry(
self,
inputs=token_batch,
**self._invocation_params,
)
if not isinstance(response, dict):
response = response.dict()
batched_embeddings.extend(r["embedding"] for r in response["data"])
# Concatenate the embeddings for each text
embeddings: List[List[float]] = [[] for _ in range(len(texts))]
for i in range(len(indices)):
embeddings[indices[i]].extend(batched_embeddings[i])
return embeddings
```
followed by below
```
# Instantiate the OpenAIEmbeddings class
openai = OpenAIEmbeddings(openai_api_key="")
# Generate embeddings for your documents
embeddings = get_len_safe_embeddings([doc.page_content for doc in documents], engine="text-embedding-ada-002")
# Create tuples of text and corresponding embedding
text_embeddings = list(zip([doc.page_content for doc in documents], embeddings))
# Create a FAISS vector store from the embeddings
vectorStore = FAISS.from_embeddings(text_embeddings, openai)
```
It returned the issue below:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-31-14fb4a40f661>](https://localhost:8080/#) in <cell line: 5>()
3
4 # Generate embeddings for your documents
----> 5 embeddings = _get_len_safe_embeddings([doc.page_content for doc in documents], engine="text-embedding-ada-002")
6
7 # Create tuples of text and corresponding embedding
TypeError: _get_len_safe_embeddings() missing 1 required positional argument: 'texts'
```
Can you assist me with this code? It would be much better to resolve this issue. Can you write updated code?
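For what it's worth, `_get_len_safe_embeddings` is an instance method (note the `self` parameter), so calling it on the `OpenAIEmbeddings` instance, or simply calling `embed_documents`, which already uses the length-safe path internally, might be all that is needed; a sketch:
```python
openai = OpenAIEmbeddings(openai_api_key="sk-...")

page_contents = [doc.page_content for doc in documents]

# embed_documents applies the length-safe chunking internally
embeddings = openai.embed_documents(page_contents)

text_embeddings = list(zip(page_contents, embeddings))
vectorStore = FAISS.from_embeddings(text_embeddings, openai)
```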
### Idea or request for content:
_No response_ | unable to run get_len_safe_embeddings function which i wrote | https://api.github.com/repos/langchain-ai/langchain/issues/17267/comments | 2 | 2024-02-08T22:15:00Z | 2024-02-09T14:50:36Z | https://github.com/langchain-ai/langchain/issues/17267 | 2,126,156,320 | 17,267 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
below's the code i tried to use for handling long context lengths
```
def _get_len_safe_embeddings(
self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None
) -> List[List[float]]:
"""
Generate length-safe embeddings for a list of texts.
This method handles tokenization and embedding generation, respecting the
set embedding context length and chunk size. It supports both tiktoken
and HuggingFace tokenizer based on the tiktoken_enabled flag.
Args:
texts (List[str]): A list of texts to embed.
engine (str): The engine or model to use for embeddings.
chunk_size (Optional[int]): The size of chunks for processing embeddings.
Returns:
List[List[float]]: A list of embeddings for each input text.
"""
tokens = []
indices = []
model_name = self.tiktoken_model_name or self.model
_chunk_size = chunk_size or self.chunk_size
# If tiktoken flag set to False
if not self.tiktoken_enabled:
try:
from transformers import AutoTokenizer
except ImportError:
raise ValueError(
"Could not import transformers python package. "
"This is needed in order to for OpenAIEmbeddings without "
"`tiktoken`. Please install it with `pip install transformers`. "
)
tokenizer = AutoTokenizer.from_pretrained(
pretrained_model_name_or_path=model_name
)
for i, text in enumerate(texts):
# Tokenize the text using HuggingFace transformers
tokenized = tokenizer.encode(text, add_special_tokens=False)
# Split tokens into chunks respecting the embedding_ctx_length
for j in range(0, len(tokenized), self.embedding_ctx_length):
token_chunk = tokenized[j : j + self.embedding_ctx_length]
tokens.append(token_chunk)
indices.append(i)
# Embed each chunk separately
batched_embeddings = []
for i in range(0, len(tokens), _chunk_size):
token_batch = tokens[i : i + _chunk_size]
response = embed_with_retry(
self,
inputs=token_batch,
**self._invocation_params,
)
if not isinstance(response, dict):
response = response.dict()
batched_embeddings.extend(r["embedding"] for r in response["data"])
# Concatenate the embeddings for each text
embeddings: List[List[float]] = [[] for _ in range(len(texts))]
for i in range(len(indices)):
embeddings[indices[i]].extend(batched_embeddings[i])
return embeddings
```
I'm unable to run the above function: it returns the error below, and some of the nested helper functions are not defined.
```
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
[<ipython-input-19-0e438cfc104c>](https://localhost:8080/#) in <cell line: 2>()
1 def _get_len_safe_embeddings(
----> 2 self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None
3 ) -> List[List[float]]:
4 """
5 Generate length-safe embeddings for a list of texts.
NameError: name 'List' is not defined
```
followed by
```
# Instantiate the OpenAIEmbeddings class
openai = OpenAIEmbeddings(openai_api_key="")
# Generate embeddings for your documents
embeddings = openai._get_len_safe_embeddings([doc.page_content for doc in documents], engine="text-embedding-ada-002")
# Create tuples of text and corresponding embedding
text_embeddings = list(zip([doc.page_content for doc in documents], embeddings))
# Create a FAISS vector store from the embeddings
vectorStore = FAISS.from_embeddings(text_embeddings, openai)
```
Is the complete function code I provided the same as the function I'm calling from OpenAI? If not, can you write code that lets me use the custom function, and update the code that follows it?
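For what it's worth, the `NameError` in the first traceback looks like missing imports rather than a LangChain problem; a minimal sketch of the imports the pasted method appears to rely on when run outside the `OpenAIEmbeddings` class (these names are assumptions inferred from the annotations and helpers it references):
```python
# Assumed imports for running the pasted method as a standalone snippet;
# `OpenAIEmbeddings` and `embed_with_retry` are the helpers it references.
from typing import List, Optional

from langchain_community.embeddings.openai import OpenAIEmbeddings, embed_with_retry
```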
### Idea or request for content:
_No response_ | tried using _get_len_safe_embeddings function, but retuning some issue | https://api.github.com/repos/langchain-ai/langchain/issues/17266/comments | 1 | 2024-02-08T22:05:40Z | 2024-02-09T14:48:53Z | https://github.com/langchain-ai/langchain/issues/17266 | 2,126,145,532 | 17,266 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Below's the code, which loads a CSV, indexes it into FAISS, and then tries to get the relevant documents. It's not using RecursiveCharacterTextSplitter for chunking, as the data is already chunked manually:
```
# List of file paths for your CSV files
csv_files = ['1.csv']
# Iterate over the file paths and create a loader for each file
loaders = [CSVLoader(file_path=file_path, encoding="utf-8") for file_path in csv_files]
# Now, loaders is a list of CSVLoader instances, one for each file
# Optional: If you need to combine the data from all loaders
documents = []
for loader in loaders:
data = loader.load() # or however you retrieve data from the loader
documents.extend(data)
```
`print(documents[0])`
output is below
`Document(page_content=": 1\nUnnamed: 0: 1\nText: Human Rights Guiding Principles\n We commit to respect internationally recognized human rights as expressed in International Bill of Human Rights meaning \n the Universal Declaration of Human Rights, the International Covenant87543\nx2: 1548.48193973303\ny2: 899.030945822597\nBlock Type: LAYOUT_TEXT\nBlock ID: 54429a7486164c04b859d0a08ac75d54\npage_num: 2\nis_answer: 0", metadata={'source': '1.csv', 'row': 1})`
followed by
```
vectorStore = FAISS.from_documents(documents, embeddings)
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
docs = retriever.get_relevant_documents("can you return the details of banpu company hrdd?")
```
I want to handle cases where a single row exceeds the OpenAI embeddings limit by splitting that row and appending it back when returning the answer, because I'm not using RecursiveCharacterTextSplitter. For example, say row 1 is shorter than the OpenAI limit, so it's sent as-is; row 2 is longer than the OpenAI embeddings context length, so I want to split row 2 into multiple snippets that all keep row 2 as their source. This should be done for every row whose length exceeds the OpenAI embeddings limit. Can you assist me in building/updating the above code?
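For reference, a rough sketch of the kind of logic I have in mind (assumptions: `tiktoken` is installed and ~8,000 tokens is a safe budget for text-embedding-ada-002; the splitter is only applied to rows that don't fit, so the manually chunked rows stay untouched):
```python
# Sketch: keep each CSV row as one snippet unless it exceeds the token budget,
# in which case only that row is split further; metadata (source, row) is preserved.
import tiktoken
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS

MAX_TOKENS = 8000  # assumption: just under the embedding model's context limit
enc = tiktoken.get_encoding("cl100k_base")
overflow_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=100)

safe_documents = []
for doc in documents:
    if len(enc.encode(doc.page_content)) <= MAX_TOKENS:
        safe_documents.append(doc)  # row fits: keep as a single snippet
    else:
        # split only the oversized row; split_documents copies the original
        # metadata onto every sub-chunk, so the source/row info survives
        safe_documents.extend(overflow_splitter.split_documents([doc]))

vectorStore = FAISS.from_documents(safe_documents, embeddings)
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
```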
### Idea or request for content:
_No response_ | how to split the page_context text which has over length than OpenAI embeddings take? | https://api.github.com/repos/langchain-ai/langchain/issues/17265/comments | 1 | 2024-02-08T21:35:16Z | 2024-02-14T03:34:55Z | https://github.com/langchain-ai/langchain/issues/17265 | 2,126,107,312 | 17,265 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Below's the code, which loads a CSV, indexes it into FAISS, and then tries to get the relevant documents. It's not using RecursiveCharacterTextSplitter for chunking, as the data is already chunked manually:
```
# List of file paths for your CSV files
csv_files = ['1.csv']
# Iterate over the file paths and create a loader for each file
loaders = [CSVLoader(file_path=file_path, encoding="utf-8") for file_path in csv_files]
# Now, loaders is a list of CSVLoader instances, one for each file
# Optional: If you need to combine the data from all loaders
documents = []
for loader in loaders:
data = loader.load() # or however you retrieve data from the loader
documents.extend(data)
```
`print(documents[0])` output is below
`Document(page_content=": 1\nUnnamed: 0: 1\nText: Human Rights Guiding Principles\n We commit to respect internationally recognized human rights as expressed in International Bill of Human Rights meaning \n the Universal Declaration of Human Rights, the International Covenant87543\nx2: 1548.48193973303\ny2: 899.030945822597\nBlock Type: LAYOUT_TEXT\nBlock ID: 54429a7486164c04b859d0a08ac75d54\npage_num: 2\nis_answer: 0", metadata={'source': '1.csv', 'row': 1})`
followed by
```
vectorStore = FAISS.from_documents(documents, embeddings)
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
docs = retriever.get_relevant_documents("can you return the details of banpu company hrdd?")
```
I want to handle cases where a single row exceeds the OpenAI embeddings limit by splitting that row and appending it back when returning the answer, because I'm not using RecursiveCharacterTextSplitter. Can you write code for me? I just want code like the example below.
### Idea or request for content:
_No response_ | how to overcome input context length of OpenAI embeddings without using RecursiveCharacterTextSplitter? | https://api.github.com/repos/langchain-ai/langchain/issues/17264/comments | 7 | 2024-02-08T20:52:57Z | 2024-02-14T03:34:55Z | https://github.com/langchain-ai/langchain/issues/17264 | 2,126,051,745 | 17,264 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Below's the code, which loads a CSV file and creates a `documents` variable:
```
# List of file paths for your CSV files
csv_files = ['1.csv']
# Iterate over the file paths and create a loader for each file
loaders = [CSVLoader(file_path=file_path, encoding="utf-8") for file_path in csv_files]
# Now, loaders is a list of CSVLoader instances, one for each file
# Optional: If you need to combine the data from all loaders
documents = []
for loader in loaders:
data = loader.load() # or however you retrieve data from the loader
documents.extend(data)
```
now for code `documents[1]`, below's the output
`Document(page_content=": 1\nUnnamed: 0: 1\nText: Human Rights Guiding Principles\n We commit to respect internationally recognized human rights as expressed in International Bill of Human Rights meaning \n the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights and the International \n Covenant on Economic, Social and Cultural Rights, and International\nx1: 149.214813858271\ny1: 209.333904087543\nx2: 1548.48193973303\ny2: 899.030945822597\nBlock Type: LAYOUT_TEXT\nBlock ID: 54429a7486164c04b859d0a08ac75d54\npage_num: 2\nis_answer: 0", metadata={'source': '1.csv', 'row': 1})`
The normal method of chunking the data and sending it to the index is below:
```
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
texts = text_splitter.split_documents(documents)
vectorStore = FAISS.from_documents(texts, embeddings)
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
docs = retriever.get_relevant_documents("can you return the details of banpu company hrdd?")
```
Now, how do I send the `documents` data to FAISS without splitting it again, given that I've already chunked the data manually? For example:
```
vectorStore = FAISS.from_documents(documents, embeddings)
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
docs = retriever.get_relevant_documents("can you return the details of banpu company hrdd?")
```
But I also want to handle cases where a single row exceeds the OpenAI embeddings limit by splitting that row and appending it back when returning the answer. Can you write code for me? I just want code like the example below.
### Idea or request for content:
_No response_ | how to index the data into FAISS without using RecursiveCharacterTextSplitter? | https://api.github.com/repos/langchain-ai/langchain/issues/17262/comments | 5 | 2024-02-08T20:26:36Z | 2024-02-14T03:34:54Z | https://github.com/langchain-ai/langchain/issues/17262 | 2,126,017,010 | 17,262 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Below's the code, which tries to index the data into FAISS using OpenAI embeddings:
```
import pandas as pd
from langchain_community.embeddings.openai import OpenAIEmbeddings
# Initialize OpenAIEmbeddings
openai = OpenAIEmbeddings(openai_api_key="your-openai-api-key")
# Load your CSV file
df = pd.read_csv('your_file.csv')
# Get embeddings for each row in the 'Text' column
embeddings = openai.embed_documents(df['Text'].tolist())
# Now, you can use these embeddings to index into your FAISS vector database
# Initialize FAISS
faiss = FAISS()
# Index embeddings into FAISS
faiss.add_vectors(embeddings)
```
It returned the error below:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-34-f28b6473f433>](https://localhost:8080/#) in <cell line: 16>()
14 # Now, you can use these embeddings to index into your FAISS vector database
15 # Initialize FAISS
---> 16 faiss = FAISS()
17
18 # Index embeddings into FAISS
TypeError: FAISS.__init__() missing 4 required positional arguments: 'embedding_function', 'index', 'docstore', and 'index_to_docstore_id'
```
Can you please let me know how to resolve this, and also how to use the FAISS store after `faiss.add_vectors(embeddings)` to get relevant documents for a query?
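For reference, a sketch of what I think should work instead of calling `FAISS()` directly (assuming `df`, `openai` and `embeddings` from the snippet above):
```python
# Sketch: FAISS.from_embeddings builds the index, docstore and id mapping itself,
# so the four required constructor arguments never have to be supplied by hand.
from langchain_community.vectorstores import FAISS

texts = df['Text'].tolist()
text_embedding_pairs = list(zip(texts, embeddings))  # (text, vector) pairs
vector_store = FAISS.from_embeddings(text_embedding_pairs, openai)

# Query it like any other vector store
relevant_docs = vector_store.similarity_search("your query here", k=5)
```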
### Idea or request for content:
_No response_ | unable to directly index the data into openai embeddings without chunking | https://api.github.com/repos/langchain-ai/langchain/issues/17261/comments | 3 | 2024-02-08T20:11:00Z | 2024-02-14T03:34:54Z | https://github.com/langchain-ai/langchain/issues/17261 | 2,125,991,752 | 17,261 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Below's the code, which uses CSVLoader to load data that has only one column, named 'Text'. All I want to do is index each row of the 'Text' column from my CSV file into the FAISS vector database without re-chunking the data. I also want to handle cases where a single row exceeds the OpenAI embeddings limit by splitting that row and appending it back when returning the answer. Below's the code I wrote:
```
# List of file paths for your CSV files
csv_files = ['1.csv']
# Iterate over the file paths and create a loader for each file
loaders = [CSVLoader(file_path=file_path, encoding="utf-8") for file_path in csv_files]
# Now, loaders is a list of CSVLoader instances, one for each file
# Optional: If you need to combine the data from all loaders
documents = []
for loader in loaders:
data = loader.load() # or however you retrieve data from the loader
documents.extend(data)
print(documents[1])
```
below's the output
`Document(page_content=": 1\nUnnamed: 0: 1\nText: Human Rights Guiding Principles\n We commit to respect internationally recognized human rights as expressed in International Bill of Human Rights meaning \n the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights and the International \n Covenant on Economic, Social and Cultural Rights, and International Labour Organization's Declaration on Fundamental \n Principles and Rights at Work. These standards are elaborated upon our WWSBC and/or Code and include:\n Freedom of association and collective bargaining\n Prevention of human trafficking, forced, bonded or compulsory labor\n Anti-child labor\n Anti-discrimination in respect to employment and occupation\n Working hours limitations and Minimum Wage Standards\n Minimum age requirements for employment\n Freedom from harassment\n Diversity, Belonging and Inclusion\n Appropriate wages and benefits\n Right to occupational health and safety\n Supply Chain Responsibility\n Privacy and freedom of expression\n Environmental stewardship\n Anti-Corruption\nx1: 149.214813858271\ny1: 209.333904087543\nx2: 1548.48193973303\ny2: 899.030945822597\nBlock Type: LAYOUT_TEXT\nBlock ID: 54429a7486164c04b859d0a08ac75d54\npage_num: 2\nis_answer: 0", metadata={'source': '1.csv', 'row': 1})`
All I want to do is index each row of the 'Text' column from my CSV file into the FAISS vector database without re-chunking the data. I also want to handle cases where a single row exceeds the OpenAI embeddings limit by splitting that row and appending it back when returning the answer. I don't want to use RecursiveCharacterTextSplitter because I've already chunked the data manually. Can you help me out with the code?
### Idea or request for content:
_No response_ | trying to index data into FAISS without using CharacterTextSplitter | https://api.github.com/repos/langchain-ai/langchain/issues/17260/comments | 3 | 2024-02-08T19:55:30Z | 2024-02-14T03:34:54Z | https://github.com/langchain-ai/langchain/issues/17260 | 2,125,966,785 | 17,260 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain_nvidia_ai_endpoints import ChatNVIDIA
llm = ChatNVIDIA(model="playground_mixtral_8x7b",
temperature=0.0,
top_p=0,
max_tokens=500,
seed=42,
callbacks = callbacks
)
### Error Message and Stack Trace (if applicable)
ValidationError Traceback (most recent call last)
Cell In[62], line 11 5 pass 9 from langchain_nvidia_ai_endpoints import ChatNVIDIA---> 11 llm = ChatNVIDIA(model="playground_mixtral_8x7b", 12 temperature=0.0, 13 top_p=0, 14 max_tokens=500, 15 seed=42, 16 callbacks = callbacks 17 )
File /usr/local/lib/python3.10/site-packages/langchain_core/load/serializable.py:107, in Serializable.__init__(self, **kwargs) 106 def __init__(self, **kwargs: Any) -> None:--> 107 super().__init__(**kwargs) 108 self._lc_kwargs = kwargs
File /usr/local/lib/python3.10/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data) 339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data) 340 if validation_error:--> 341 raise validation_error 342 try: 343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for ChatNVIDIA
temperature
ensure this value is greater than 0.0 (type=value_error.number.not_gt; limit_value=0.0)
### Description
The latest NVCF endpoint now allows the temperature to be 0. However, this wrapper does not allow this to happen, because of the pydantic default validator on the temperature field.
### System Info
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
langchain-nvidia-ai-endpoints==0.0.1
langchain-openai==0.0.5
platform mac
Python 3.11 | Temperature for NVIDIA Cloud Function (NVCF) endpoint could not be set to 0 | https://api.github.com/repos/langchain-ai/langchain/issues/17257/comments | 2 | 2024-02-08T19:19:00Z | 2024-05-17T16:08:33Z | https://github.com/langchain-ai/langchain/issues/17257 | 2,125,908,914 | 17,257 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I have data that has already been chunked, in CSV format, in a column named Text. Below's the code and the format:
`one = pd.read_csv('1.csv')[['Text']]`
below's the output
```
Text
--
AMD L
Human Rights Guiding Principles...
We commit to...
```
Now, I don't want to use RecursiveCharacterTextSplitter with chunk_size, overlap, etc. I want to send the above data directly to FAISS using `vectorStore = FAISS.from_documents(texts, embeddings)`, and I'm using OpenAI embeddings. Can you help me out with code to do this? I want to index every row directly as one snippet, so that the 1st row of the Text column is document[0], the 2nd row is document[1], and so on. And what if the OpenAI embeddings input limit is exceeded? Is there any way to overcome that issue as well? If yes, how do I split that single row into chunks and append them back?
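For the direct-indexing part, a sketch of what I have in mind (assuming `one` is the DataFrame above; `DataFrameLoader` turns each row into one Document without any further splitting):
```python
# Sketch: one Document per CSV row, indexed directly into FAISS.
from langchain_community.document_loaders import DataFrameLoader
from langchain_community.vectorstores import FAISS

loader = DataFrameLoader(one, page_content_column="Text")
documents = loader.load()  # one Document per row of the 'Text' column
vectorStore = FAISS.from_documents(documents, embeddings)
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
```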
### Idea or request for content:
_No response_ | how to directly index the data into FAISS with the data which has been already chunked | https://api.github.com/repos/langchain-ai/langchain/issues/17256/comments | 7 | 2024-02-08T19:10:48Z | 2024-02-14T03:34:53Z | https://github.com/langchain-ai/langchain/issues/17256 | 2,125,894,227 | 17,256 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain_core.language_models import BaseLanguageModel
from langchain_core.prompts import PromptTemplate
from langchain_google_genai import GoogleGenerativeAI
def build_executor(llm: BaseLanguageModel, prompt: PromptTemplate):
llm_with_stop = llm.bind(stop=["\nObservation"])
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
}
| prompt
| llm_with_stop
| ReActSingleInputOutputParser()
)
return AgentExecutor(agent=agent, tools=[])
llm = GoogleGenerativeAI(model='models/text-bison-001')
input_variables = ["input", "agent_scratchpad"]
prompt = PromptTemplate.from_file(
"path/to/agent_template.txt", input_variables=input_variables
)
prompt_template = prompt.partial(custom_prompt="")
executor = build_executor(llm, prompt_template)
print(executor.invoke(input={"input": "What are some of the pros and cons of Python as a programming language?"}))
```
This is the prompt template I used:
```text
you are an AI assistant, helping a Human with a task. The Human has asked you a question.
When you have a response to say to the Human, you MUST use the format:
Thought: Do I need to use a tool? No
Final Answer: [your response here]
Begin!
Previous conversation history:
New input: {input}
{agent_scratchpad}
```
### Error Message and Stack Trace (if applicable)
```bash
Traceback (most recent call last):
File "/Users/kallie.levy/dev/repos/app-common/app_common/executor/build_executor.py", line 33, in <module>
print(executor.invoke(input={"input": "What are some of the pros and cons of Python as a programming language?"}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 162, in invoke
raise e
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 1376, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 1102, in _take_next_step
[
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 1102, in <listcomp>
[
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 1130, in _iter_next_step
output = self.agent.plan(
^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 392, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2424, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2411, in transform
yield from self._transform_stream_with_config(
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1497, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2375, in _transform
for output in final_pipeline:
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1035, in transform
for chunk in input:
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4168, in transform
yield from self.bound.transform(
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1045, in transform
yield from self.stream(final, config, **kwargs)
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 414, in stream
raise e
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 398, in stream
for chunk in self._stream(
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_google_genai/llms.py", line 225, in _stream
for stream_resp in _completion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_google_genai/llms.py", line 65, in _completion_with_retry
return _completion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "/usr/local/Cellar/[email protected]/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/[email protected]/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/kallie.levy/Library/Caches/pypoetry/virtualenvs/app-common-yf0YqSl9-py3.11/lib/python3.11/site-packages/langchain_google_genai/llms.py", line 60, in _completion_with_retry
return llm.client.generate_content(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'google.generativeai' has no attribute 'generate_content'. Did you mean: 'generate_text'?
```
### Description
I'm creating an AgentExecutor with Google GenerativeAI llm, but since version `0.1.1` of `langchain`, I receive this error. If `langchain <= 0.1.0`, this script works.
### System Info
```
python==3.11.6
langchain==0.1.1
langchain-community==0.0.19
langchain-core==0.1.21
langchain-google-genai==0.0.5
langchain-google-vertexai==0.0.3
``` | Invoking agent executor with Google GenerativeAI: AttributeError: module 'google.generativeai' has no attribute 'generate_content' | https://api.github.com/repos/langchain-ai/langchain/issues/17251/comments | 3 | 2024-02-08T17:55:20Z | 2024-07-08T16:05:25Z | https://github.com/langchain-ai/langchain/issues/17251 | 2,125,771,216 | 17,251 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Below's the code, which is used for the normal retriever, SelfQuery, MultiQuery, and ParentDocument retrievers (same template):
```
# loader = TextLoader('single_text_file.txt')
loader = DirectoryLoader(f'/content', glob="./*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
unique_sources = set()
for doc in documents:
source = doc.metadata['source']
unique_sources.add(source)
num_unique_sources = len(unique_sources)
class MyTextSplitter(RecursiveCharacterTextSplitter):
def split_documents(self, documents):
chunks = super().split_documents(documents)
for document, chunk in zip(documents, chunks):
chunk.metadata['source'] = document.metadata['source']
return chunks
text_splitter = MyTextSplitter(chunk_size=1000,
chunk_overlap=100)
texts = text_splitter.split_documents(documents)
# Create a vector database for the document
# vectorStore = FAISS.from_documents(texts, instructor_embeddings)
vectorStore = FAISS.from_documents(texts, embeddings)
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# Format the prompt using the template
context = ""
# question = "what's the final provision of dhl?"
question = "can you return the objective of ABInBev?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
After seeing the outputs from the different retrievers, I felt that the normal retriever and the MultiQuery retriever perform well. May I know why the SelfQuery and ParentDocument retrievers are not returning better results?
### Idea or request for content:
_No response_ | regarding performances between normal retriver, SelfQuery, MultiQuery and ParentDocument Retriver | https://api.github.com/repos/langchain-ai/langchain/issues/17243/comments | 1 | 2024-02-08T15:40:43Z | 2024-02-08T16:54:50Z | https://github.com/langchain-ai/langchain/issues/17243 | 2,125,484,631 | 17,243 |
[
"hwchase17",
"langchain"
] | Feature request discussed in https://github.com/langchain-ai/langchain/discussions/17176
Expand `cache` to accept a cache implementation in addition to a bool value:
https://github.com/langchain-ai/langchain/blob/00a09e1b7117f3bde14a44748510fcccc95f9de5/libs/core/langchain_core/language_models/chat_models.py#L106-L106
If provided, will use the given cache.
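For illustration, a hypothetical usage sketch of the proposed behaviour (not currently supported; `InMemoryCache` stands in for any `BaseCache` implementation):
```python
# Hypothetical usage once `cache` accepts a BaseCache instance (proposed, not implemented):
from langchain_community.cache import InMemoryCache
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(cache=InMemoryCache())  # this model uses its own cache, not the global one
```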
# Acceptance Criteria
- [ ] Documentation to the cache variable to explain how it can be used
- [ ] Update https://python.langchain.com/docs/modules/model_io/chat/chat_model_caching
- [ ] Include unit tests to test given functionality
PR can include implementation for caching of LLMs in addition to chat models. | Enhancement: Add ability to pass local cache to chat models | https://api.github.com/repos/langchain-ai/langchain/issues/17242/comments | 1 | 2024-02-08T15:37:44Z | 2024-05-21T16:09:01Z | https://github.com/langchain-ai/langchain/issues/17242 | 2,125,476,844 | 17,242 |
[
"hwchase17",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/16446
<div type='discussions-op-text'>
<sup>Originally posted by **jason1315** January 23, 2024</sup>
In my project, I need to implement the following logic. Here is a simple example:
```python
import asyncio
from langchain_core.runnables import *
from lang_chain.llm.llms import llm


def _test(_dict):
    print("value:", _dict)
    return _dict


@chain
def my_method(_dict, **keywords):
    print(keywords)
    return RunnablePassthrough.assign(key=lambda x: keywords.get("i")) | RunnableLambda(_test)


if __name__ == '__main__':
    loop = asyncio.new_event_loop()
    my_list = ["1", "2", "3", " 4", "5"]
    head = RunnablePassthrough()
    for i in my_list:
        head = head | my_method.bind(i=i)
    stream = head.invoke({})
    #
    # async def __stream(stream1):
    #     async for i in stream1:
    #         print(i)
    #
    # loop.run_until_complete(__stream(stream))
```
When I use the .invoke({}) method, it outputs the following results correctly:
```text
{'i': '1'}
value: {'key': '1'}
{'i': '2'}
value: {'key': '2'}
{'i': '3'}
value: {'key': '3'}
{'i': ' 4'}
value: {'key': ' 4'}
{'i': '5'}
value: {'key': '5'}
```
But if I use the astream_log({}) method, it throws an error:
```text
File "F:\py3.11\Lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: RunnableLambda._atransform.<locals>.func() got an unexpected keyword argument 'i'
```
Why is it designed like this? Do I need to implement a runnable similar to the model if I want to achieve the above logic?</div> | In the astream_log() method, you cannot use the bind method with RunnableLambda. | https://api.github.com/repos/langchain-ai/langchain/issues/17241/comments | 1 | 2024-02-08T15:03:18Z | 2024-07-15T16:06:25Z | https://github.com/langchain-ai/langchain/issues/17241 | 2,125,385,400 | 17,241 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
below's the code
```
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=50)
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=1200, chunk_overlap=300)
vectorstore = Chroma(
collection_name="full_documents", embedding_function=embeddings)
store = InMemoryStore()
retriever = ParentDocumentRetriever(
vectorstore=vectorstore,
docstore=store,
child_splitter=child_splitter,
parent_splitter=parent_splitter
)
retriever.add_documents(document, ids=None)
```
Above is the code, which uses Chroma as the vector DB. Can I use FAISS for the child_splitter and parent_splitter, like in the code below?
```
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500,
chunk_overlap=50)
texts = text_splitter.split_documents(documents)
# Create a vector database for the document
# vectorStore = FAISS.from_documents(texts, instructor_embeddings)
vectorStore = FAISS.from_documents(texts, embeddings)
```
If yes, can you please help me with the code?
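For reference, a sketch of what I'm hoping for (assumptions: an OpenAI-style embedding model, and that ParentDocumentRetriever only needs the vector store for the child chunks; the empty FAISS index is created up front because FAISS, unlike Chroma, has no no-argument constructor):
```python
# Sketch: swap Chroma for an initially empty FAISS index in ParentDocumentRetriever.
import faiss
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain_community.docstore.in_memory import InMemoryDocstore
from langchain_community.vectorstores import FAISS

embedding_dim = len(embeddings.embed_query("test"))  # e.g. 1536 for OpenAI embeddings
vectorstore = FAISS(
    embedding_function=embeddings,
    index=faiss.IndexFlatL2(embedding_dim),
    docstore=InMemoryDocstore(),
    index_to_docstore_id={},
)

store = InMemoryStore()
retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
)
retriever.add_documents(document, ids=None)
```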
### Idea or request for content:
_No response_ | can i use FAISS isntead of Chroma for ParentDocumentRetriver? | https://api.github.com/repos/langchain-ai/langchain/issues/17237/comments | 5 | 2024-02-08T13:40:32Z | 2024-02-14T03:34:53Z | https://github.com/langchain-ai/langchain/issues/17237 | 2,125,216,339 | 17,237 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
The sitemap loader doesn't fetch anything; it gives me an empty list.
Code:
```
from langchain_community.document_loaders.sitemap import SitemapLoader
sitemap_loader = SitemapLoader(web_path="https://langchain.readthedocs.io/sitemap.xml")
sitemap_loader.requests_per_second = 2
sitemap_loader.requests_kwargs = {"verify": False}
docs = sitemap_loader.load()
print(docs)
```
Output:
Fetching pages: 0it [00:00, ?it/s]
[]
### Idea or request for content:
The example of documentation doesn't work | Update documentation for sitemap loader to use correct URL | https://api.github.com/repos/langchain-ai/langchain/issues/17236/comments | 1 | 2024-02-08T12:46:55Z | 2024-02-13T00:20:34Z | https://github.com/langchain-ai/langchain/issues/17236 | 2,125,103,304 | 17,236 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Below's the code
```
loader = DirectoryLoader(f'/content', glob="./*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500,
chunk_overlap=50)
texts = text_splitter.split_documents(documents)
vectorStore = FAISS.from_documents(texts, embeddings)
llm = OpenAI(temperature=0.2)
retriever = MultiQueryRetriever.from_llm(retriever=vectorStore.as_retriever(), llm=llm)
# Set logging for the queries
import logging
logging.basicConfig()
logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# Format the prompt using the template
context = ""
question = "what's the commitent no 3 and 4?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
The above code is returning the correct output but the wrong source document. So I tried playing with the chunk and overlap sizes, and sometimes it returns the correct source document name. How do I get the correct source document name every time?
### Idea or request for content:
_No response_ | returning wrong source document name | https://api.github.com/repos/langchain-ai/langchain/issues/17233/comments | 6 | 2024-02-08T12:17:21Z | 2024-02-09T14:53:47Z | https://github.com/langchain-ai/langchain/issues/17233 | 2,125,050,901 | 17,233 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
def _execute(
self,
command: str,
fetch: Literal["all", "one"] = "all",
) -> Sequence[Dict[str, Any]]:
"""
Executes SQL command through underlying engine.
If the statement returns no rows, an empty list is returned.
"""
with self._engine.begin() as connection: # type: Connection
if self._schema is not None:
if self.dialect == "snowflake":
connection.exec_driver_sql(
"ALTER SESSION SET search_path = %s", (self._schema,)
)
elif self.dialect == "bigquery":
connection.exec_driver_sql("SET @@dataset_id=?", (self._schema,))
elif self.dialect == "mssql":
pass
elif self.dialect == "trino":
connection.exec_driver_sql("USE ?", (self._schema,))
elif self.dialect == "duckdb":
# Unclear which parameterized argument syntax duckdb supports.
# The docs for the duckdb client say they support multiple,
# but `duckdb_engine` seemed to struggle with all of them:
# https://github.com/Mause/duckdb_engine/issues/796
connection.exec_driver_sql(f"SET search_path TO {self._schema}")
elif self.dialect == "oracle":
connection.exec_driver_sql(
f"ALTER SESSION SET CURRENT_SCHEMA = {self._schema}"
)
elif self.dialect == "sqlany":
# If anybody using Sybase SQL anywhere database then it should not
# go to else condition. It should be same as mssql.
pass
elif self.dialect == "postgresql": # postgresql
connection.exec_driver_sql("SET search_path TO %s", (self._schema,))
cursor = connection.execute(text(command))
if cursor.returns_rows:
if fetch == "all":
result = [x._asdict() for x in cursor.fetchall()]
elif fetch == "one":
first_result = cursor.fetchone()
result = [] if first_result is None else [first_result._asdict()]
else:
raise ValueError("Fetch parameter must be either 'one' or 'all'")
return result
return []
```
### Error Message and Stack Trace (if applicable)
```code
SELECT * FROM metadata_sch_stg.company_datasets LIMIT 2←[0m←[36;1m←[1;3mError: (pg8000.exceptions.DatabaseError) {'S': 'ERROR', 'V': 'ERROR', 'C': '42601', 'M': 'syntax error at or near "$1"', 'P': '20', 'F': 'scan.l', 'L': '1180', 'R': 'scanner_yyerror'}
[SQL: SET search_path TO %s]
[parameters: ('metadata_sch_stg',)]
```
### Description
When attempting to set the PostgreSQL search_path using exec_driver_sql within the SQLDatabase class, an error is thrown. The relevant code snippet is as follows:
```python
elif self.dialect == "postgresql": # postgresql
connection.exec_driver_sql("SET search_path TO %s", (self._schema,))
```
This line attempts to set the search_path to the schema defined in the self._schema attribute. However, this results in a syntax error because the parameter substitution (%s) is not supported for the SET command in PostgreSQL.
Expected Behavior:
The search_path should be set to the specified schema without errors, allowing subsequent queries to run within the context of that schema.
Actual Behavior:
A syntax error is raised, indicating an issue with the SQL syntax near the parameter substitution placeholder.
Steps to Reproduce the error:
Instantiate an SQLDatabase object with the PostgreSQL dialect.
Change the Postgres schema to any schema other than the 'public' schema.
Observe the syntax error.
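A possible workaround (a sketch, not the official fix): PostgreSQL's `SET` is a utility statement and rejects bind parameters — which matches the `$1` syntax error above — but the `set_config()` function accepts them, so the schema can still be passed as a parameter:
```python
# Sketch of a parameterised alternative to `SET search_path TO %s`
# (assumes the pg8000 driver's "format" paramstyle, as in the error above).
def set_search_path(connection, schema: str) -> None:
    connection.exec_driver_sql(
        "SELECT set_config('search_path', %s, false)", (schema,)
    )
```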
### System Info
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.17
langchain-google-vertexai==0.0.3
langsmith==0.0.85
pg8000==1.29.8
SQLAlchemy==2.0.16
cloud-sql-python-connector==1.2.4
OS: Windows | Error when setting PostgreSQL search_path using exec_driver_sql in SQLDatabase class | https://api.github.com/repos/langchain-ai/langchain/issues/17231/comments | 2 | 2024-02-08T10:31:05Z | 2024-06-12T16:08:01Z | https://github.com/langchain-ai/langchain/issues/17231 | 2,124,824,598 | 17,231 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/integrations/retrievers/self_query/supabase_self_query
Can anyone actually make out the SQL commands?
In the studio, jump to the [SQL editor](https://supabase.com/dashboard/project/_/sql/new) and run the following script to enable pgvector and setup your database as a vector store: ```sql – Enable the pgvector extension to work with embedding vectors create extension if not exists vector;
– Create a table to store your documents create table documents ( id uuid primary key, content text, – corresponds to Document.pageContent metadata jsonb, – corresponds to Document.metadata embedding vector (1536) – 1536 works for OpenAI embeddings, change if needed );
– Create a function to search for documents create function match_documents ( query_embedding vector (1536), filter jsonb default ‘{}’ ) returns table ( id uuid, content text, metadata jsonb, similarity float ) language plpgsql as $$ #variable_conflict use_column begin return query select id, content, metadata, 1 - (documents.embedding <=> query_embedding) as similarity from documents where metadata @> filter order by documents.embedding <=> query_embedding; end; $$; ```
That is what it looks like in Google Chrome.
### Idea or request for content:
Properly formatted, from ChatGPT:
Here are the SQL commands you need to run in your Supabase SQL editor to set up a vector store with pgvector. These commands will create the necessary extensions, tables, and functions for your vector store:
1. **Enable pgvector Extension**:
```sql
CREATE EXTENSION IF NOT EXISTS vector;
```
2. **Create a Table for Storing Documents**:
```sql
CREATE TABLE documents (
id uuid PRIMARY KEY,
content text, -- corresponds to Document.pageContent
metadata jsonb, -- corresponds to Document.metadata
embedding vector(1536) -- 1536 dimensions for OpenAI embeddings
);
```
3. **Create a Function for Searching Documents**:
```sql
CREATE FUNCTION match_documents(
query_embedding vector(1536),
filter jsonb DEFAULT '{}'
)
RETURNS TABLE(
id uuid,
content text,
metadata jsonb,
similarity float
)
LANGUAGE plpgsql AS $$
BEGIN
RETURN QUERY SELECT
id,
content,
metadata,
1 - (documents.embedding <=> query_embedding) AS similarity
FROM documents
WHERE metadata @> filter
ORDER BY documents.embedding <=> query_embedding;
END;
$$;
```
These commands will set up your database to work with embedding vectors using pgvector, create a table to store documents with an embedding vector field, and a function to perform document searches based on these embeddings.
Make sure to carefully input these commands in the Supabase SQL editor and adjust any parameters (like the dimension size of the vector or table structure) as needed for your specific use case. For more information and detailed instructions, please refer to the [Supabase documentation](https://supabase.com/docs). | poorly formatted SQL commands for pgvector Supabase | https://api.github.com/repos/langchain-ai/langchain/issues/17225/comments | 1 | 2024-02-08T07:28:07Z | 2024-02-12T10:23:03Z | https://github.com/langchain-ai/langchain/issues/17225 | 2,124,517,651 | 17,225 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Use following code:
```python
from langchain_community.embeddings import BedrockEmbeddings
embeddings = BedrockEmbeddings(
region_name="us-east-1",
model_id="cohere.embed-english-v3"
)
e1 = embeddings.embed_documents(["What is the project name?"])[0]
e2 = embeddings.embed_query("What is the project name?")
print(e1==e2)
```
### Error Message and Stack Trace (if applicable)
Outputs: `True`
Should ideally be `False`.
### Description
Cohere models have the ability to generate embeddings optimized for the type of use case. However, this is not being leveraged in AWS Bedrock.
https://github.com/langchain-ai/langchain/blob/00a09e1b7117f3bde14a44748510fcccc95f9de5/libs/community/langchain_community/embeddings/bedrock.py#L123-L131
The current workaround is to define the `input_type` in the constructor, which is not ideal (e.g. when using it with predefined tools) compared to the native way defined for the `CohereEmbeddings` class.
https://github.com/langchain-ai/langchain/blob/00a09e1b7117f3bde14a44748510fcccc95f9de5/libs/community/langchain_community/embeddings/cohere.py#L125-L134
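For completeness, a sketch of that workaround (assumption: `model_kwargs` is forwarded to the Cohere model, so `input_type` has to be pinned per instance rather than per call):
```python
# Workaround sketch: two separate embedders, each with a fixed input_type.
from langchain_community.embeddings import BedrockEmbeddings

doc_embedder = BedrockEmbeddings(
    region_name="us-east-1",
    model_id="cohere.embed-english-v3",
    model_kwargs={"input_type": "search_document"},
)
query_embedder = BedrockEmbeddings(
    region_name="us-east-1",
    model_id="cohere.embed-english-v3",
    model_kwargs={"input_type": "search_query"},
)
```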
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.21
> langchain: 0.1.5
> langchain_community: 0.0.19
> langsmith: 0.0.87
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Documents and queries with bedrock's cohere model should result in different embedding values | https://api.github.com/repos/langchain-ai/langchain/issues/17222/comments | 1 | 2024-02-08T04:56:58Z | 2024-05-16T16:09:05Z | https://github.com/langchain-ai/langchain/issues/17222 | 2,124,359,955 | 17,222 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```%pip install --upgrade --quiet langchain-pinecone langchain-openai langchain```
### Error Message and Stack Trace (if applicable)
ERROR: Could not find a version that satisfies the requirement langchain-pinecone (from versions: none)
ERROR: No matching distribution found for langchain-pinecone
### Description
To use the new Pinecone vector store integration, LangChain suggests installing the **langchain-pinecone** library through pip. But pip says that such a package does not exist. This is the [documentation page](https://python.langchain.com/docs/integrations/vectorstores/pinecone) I'm referring to.
### System Info
langchain==0.1.5
langchain-community==0.0.19
langchain-core==0.1.21
pinecone-client==3.0.2
Platform: linux and Google colab | The pip install for langchain-pinecone shows error. | https://api.github.com/repos/langchain-ai/langchain/issues/17221/comments | 4 | 2024-02-08T04:56:41Z | 2024-02-09T09:56:04Z | https://github.com/langchain-ai/langchain/issues/17221 | 2,124,359,758 | 17,221 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
@dosu-bot I am getting an error with SQLDatabase and SQLDatabaseChain. The generated SQL command gets truncated at a maximum of 128 output tokens, even after setting the max_string_length parameter of SQLDatabase to 32000.
Here is the code:
```python
db_connection = SQLDatabase.from_uri(
    snowflake_url,
    sample_rows_in_table_info=1,
    include_tables=["table1"],
    view_support=True,
    max_string_length=32000,
)
return_op = SQLDatabaseChain.from_llm(
    llm,
    db_connection,
    prompt=few_shot_prompt,
    verbose=True,
    return_intermediate_steps=True,
)
```
### Error Message and Stack Trace (if applicable)
ProgrammingError: (snowflake.connector.errors.ProgrammingError) 001003 (42000): SQL compilation error: parse error line 1 at position 300 near '<EOF>. [SQL: SELECT DESTINATION_DATA, SUM(CASE WHEN LOWER(TRIM(TVENA_CANE)) = 'skpettion' THEN 1 ELSE 0 END) AS skpettion, SUM(CASE WHEN LOWER(TRIM(TVENA_CANE)) = 'tickk' THEN 1 ELSE 0 END) AS CLICKS FROM TABT_COURS_KBSPBIGN_CIDORT WHERE DIRQANE = 'buj' AND TVENA_CANE >= '2023-12-01' AND TVENA_CANE <= '2023]
### Description
The generated SQL command gets truncated at a maximum of 128 tokens.
### System Info
boto3==1.34.29
chromadb==0.4.22
huggingface-hub==0.20.3
langchain==0.1.4
langchain-experimental==0.0.49
python-dotenv==1.0.1
sentence_transformers==2.3.0
snowflake-connector-python==3.7.0
snowflake-sqlalchemy==1.5.1
SQLAlchemy==1.4.51
streamlit==1.30.0 | SQLDatabaseChain, SQLDatabase max_string_length not working | https://api.github.com/repos/langchain-ai/langchain/issues/17212/comments | 13 | 2024-02-08T00:03:42Z | 2024-05-21T16:08:56Z | https://github.com/langchain-ai/langchain/issues/17212 | 2,124,119,799 | 17,212 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
>>> from langchain_community.llms import Ollama
>>> from langchain.callbacks import wandb_tracing_enabled
>>> llm = Ollama(model="mistral")
>>> with wandb_tracing_enabled():
... llm.invoke("Tell me a joke")
...
wandb: Streaming LangChain activity to W&B at https://wandb.ai/<redacted>
wandb: `WandbTracer` is currently in beta.
wandb: Please report any issues to https://github.com/wandb/wandb/issues with the tag `langchain`.
# (1.) : WORKS
" Why don't scientists trust atoms?\n\nBecause they make up everything!"
>>> os.environ["LANGCHAIN_WANDB_TRACING"] = "true"
>>> llm.invoke("Tell me a joke")
" Why don't scientists trust atoms?\n\nBecause they make up everything!"
# (2.) Doesn't work
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm following the documentation that says just setting os.environ["LANGCHAIN_WANDB_TRACING"] = "true" is enough to trace LangChain with wandb. It doesn't work.
Using the context manager shows that everything is set up correctly.
I have no idea what is happening.
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.12.1 (main, Jan 23 2024, 13:02:12) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.19
> langchain: 0.1.5
> langchain_community: 0.0.18
> langsmith: 0.0.86
> langchain_mistralai: 0.0.4
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | os.environ["LANCHAIN_WANDB_TRACING"]="true" doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/17211/comments | 1 | 2024-02-08T00:02:59Z | 2024-05-16T16:08:54Z | https://github.com/langchain-ai/langchain/issues/17211 | 2,124,119,088 | 17,211 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
# In langchain/libs/community/langchain_community/agent_toolkits/sql/base.py
if agent_type == AgentType.ZERO_SHOT_REACT_DESCRIPTION:
    if prompt is None:
        from langchain.agents.mrkl import prompt as react_prompt

        format_instructions = (
            format_instructions or react_prompt.FORMAT_INSTRUCTIONS
        )
        template = "\n\n".join(
            [
                react_prompt.PREFIX,
                "{tools}",
                format_instructions,
                react_prompt.SUFFIX,
            ]
        )
        prompt = PromptTemplate.from_template(template)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Although it respects and uses the format_instructions argument, it completely ignores prefix and suffix in favor of the hardcoded values in `langchain.agents.mrkl.prompt`. This unnecessarily forces callers to construct the full prompt themselves.
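A sketch of the current workaround (assumption: you assemble the whole prompt yourself so the hardcoded `PREFIX`/`SUFFIX` are never consulted; `MY_PREFIX` and `MY_SUFFIX` are placeholders, and the suffix must still contain the `{input}` and `{agent_scratchpad}` variables):
```python
# Workaround sketch: pass a fully constructed prompt instead of prefix/suffix.
from langchain.agents.mrkl import prompt as react_prompt
from langchain_community.agent_toolkits import create_sql_agent
from langchain_core.prompts import PromptTemplate

template = "\n\n".join([MY_PREFIX, "{tools}", react_prompt.FORMAT_INSTRUCTIONS, MY_SUFFIX])
prompt = PromptTemplate.from_template(template)

agent_executor = create_sql_agent(
    llm,
    db=db,
    agent_type="zero-shot-react-description",
    prompt=prompt,
)
```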
### System Info
$ python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Tue Sep 26 19:53:57 UTC 2023
> Python Version: 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:43:09) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.1.19
> langchain: 0.1.4
> langchain_community: 0.0.17
> langsmith: 0.0.86
> langchain_experimental: 0.0.49
> langchain_openai: 0.0.2.post1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | create_sql_agent ignores custom prefix and suffix if agent_type="zero-shot-react-description" | https://api.github.com/repos/langchain-ai/langchain/issues/17210/comments | 4 | 2024-02-07T23:50:17Z | 2024-02-23T18:22:31Z | https://github.com/langchain-ai/langchain/issues/17210 | 2,124,105,690 | 17,210 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain.document_loaders import AsyncChromiumLoader,AsyncHtmlLoader
from langchain.document_transformers import BeautifulSoupTransformer
# Load HTML
loader = AsyncChromiumLoader(["https://www.tandfonline.com/doi/full/10.1080/07303084.2022.2053479"])
html = loader.load()
bs_transformer = BeautifulSoupTransformer()
docs_transformed = bs_transformer.transform_documents(html, tags_to_extract=['h1','h2',"span",'p'])
print(docs_transformed)
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm encountering an issue with web scraping using the provided code snippet in the langchain repository.
The code generally works well for most URLs, but there are specific cases where it fails to extract any content. For instance:
1) URL: "https://www.cdc.gov/populationhealth/well-being/features/how-right-now.htm"
When attempting to scrape this URL, no content is extracted. Upon investigation, I found that the URL redirects to "https://www.cdc.gov/emotional-wellbeing/features/how-right-now.htm?CDC_AA_refVal=https%3A%2F%2Fwww.cdc.gov%2Fpopulationhealth%2Fwell-being%2Ffeatures%2Fhow-right-now.htm". This redirection might be causing the issue.
2) URL: "https://onlinelibrary.wiley.com/doi/10.1111/josh.13243"
Scraping this URL returns the following content: "'onlinelibrary.wiley.com Comprobando si la conexión del sitio es segura Enable JavaScript and cookies to continue'". It seems like there might be some JavaScript or cookie-based verification process causing the scraping to fail.
Steps to Reproduce:
1. Use the provided code snippet with the mentioned URLs.
2. Observe the lack of extracted content or the presence of unexpected content.
Expected Behavior:
The web scraping code should consistently extract relevant content from the provided URLs without issues.
Additional Information: It might be necessary to handle URL redirections or JavaScript-based content verification to ensure successful scraping.
Any insights or suggestions on how to improve the code to handle these scenarios would be greatly appreciated.
### System Info
pip install -q langchain-openai langchain playwright beautifulsoup4
| Web Scrapping: specific cases where it fails to extract any content | https://api.github.com/repos/langchain-ai/langchain/issues/17203/comments | 1 | 2024-02-07T22:30:41Z | 2024-05-15T16:08:04Z | https://github.com/langchain-ai/langchain/issues/17203 | 2,124,008,096 | 17,203 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
cypher_prompt = PromptTemplate.from_template(CYPHER_GENERATION_TEMPLATE)
cypher_qa = GraphCypherQAChain.from_llm(
llm,
graph=graph,
cypher_prompt=cypher_prompt,
verbose=True,
return_intermediate_steps=True,
return_direct=False
)
------ other part of the code ----
agent_prompt = hub.pull("hwchase17/react-chat")
agent = create_react_agent(llm, tools, agent_prompt)
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
verbose=True,
memory=memory,
return_intermediate_steps=True
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The GraphCypherQAChain tool is accessible to the react-chat agent and generates good Cypher queries, but it does not return the Cypher queries to the agent.
What I expect is that, with return_intermediate_steps set on both the tool and the agent, I can get both the Cypher queries and the agent steps. Right now I only see the agent's intermediate steps, but there isn't any Cypher query from the tool.
### System Info
langchain==0.1.5
langchain-community==0.0.18
langchain-core==0.1.19
langchain-openai==0.0.5
langchainhub==0.1.14
mac
Python 3.9.6
| Can't acces return_intermediate_steps of a tool when dealing with an agent | https://api.github.com/repos/langchain-ai/langchain/issues/17182/comments | 1 | 2024-02-07T14:47:53Z | 2024-05-15T16:07:59Z | https://github.com/langchain-ai/langchain/issues/17182 | 2,123,203,370 | 17,182 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
class MongoHandler:
...
@error_logging
def setup_vector_search(
self,
search_index_name: str,
text_key: str,
embedding_key: str
) -> None:
self._search_index_name = search_index_name
try:
self._vector_search = MongoDBAtlasVectorSearch(
collection=self._collection,
embedding=self._embedding_model,
index_name=search_index_name,
text_key=text_key,
embedding_key=embedding_key,
)
except Exception as e:
logging.error('Please set up the Search Index in Mongo Atlas first.')
logging.error(e)
raise e
@error_logging
def vector_search(
self,
query: str,
k: int=5,
pre_filter: dict=None,
) -> List[Document]:
assert self._vector_search, 'Please set up vector search first.'
results = self._vector_search.similarity_search(
query=query,
k=k,
pre_filter=pre_filter
)
return results
def search_documents(company_name, question_dict, num_docs=2):
search_result = {}
for key, value in question_dict.items():
for query in value:
contents = ''
pre_filter = {"$and": [{"stock_name": company_name}, {'major_category': key}]}
search = MONGODB_COLLENTION.vector_search(
query=query,
pre_filter=pre_filter,
k=num_docs
)
search_contents = [content.page_content for content in search]
# reference_data = [content.metadata for content in search]
contents += '\n\n'.join(search_contents)
search_result[query] = contents
return search_result
def run(model="gpt-3.5-turbo-0125"):
stock_name_list = get_company_name()
company_info = {}
for company_name in tqdm(stock_name_list[:2]):
question_dict = make_questions(company_name)
search_result = search_documents(company_name, question_dict)
total_answers = get_answers(company_name, search_result, model)
company_info.update(total_answers)
return company_info
company_info_1 = run(model="gpt-3.5-turbo-1106")
for key, value in company_info_1.items():
print(f"{key}:\n{value}\n\n")
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[5], line 1
----> 1 company_info_1 = run(model="gpt-3.5-turbo-1106")
2 for key, value in company_info_1.items():
3 print(f"{key}:\n{value}\n\n")
Cell In[4], line 7
5 for company_name in tqdm(stock_name_list[:2]):
6 question_dict = make_questions(company_name)
----> 7 search_result = search_documents(company_name, question_dict)
8 total_answers = get_answers(company_name, search_result, model)
9 company_info.update(total_answers)
Cell In[3], line 24
22 contents = ''
23 pre_filter = {"$and": [{"stock_name": company_name}, {'major_category': key}]}
---> 24 search = MONGODB_COLLENTION.vector_search(
25 query=query,
26 pre_filter=pre_filter,
27 k=num_docs
28 )
29 search_contents = [content.page_content for content in search]
30 # reference_data = [content.metadata for content in search]
File ~/work/team_project/contents-generate-ai/baseLogger.py:210, in error_logging.<locals>.wrapper(*args, **kwargs)
208 except Exception as e:
209 logging.error(e)
--> 210 raise e
File ~/work/team_project/contents-generate-ai/baseLogger.py:206, in error_logging.<locals>.wrapper(*args, **kwargs)
204 def wrapper(*args, **kwargs):
205 try:
--> 206 return func(*args, **kwargs)
208 except Exception as e:
209 logging.error(e)
File ~/work/team_project/contents-generate-ai/src/modules/my_mongodb.py:247, in MongoHandler.vector_search(self, query, k, pre_filter)
238 @error_logging
239 def vector_search(
240 self,
(...)
243 pre_filter: dict=None,
244 ) -> List[Document]:
245 assert self._vector_search, 'vector search 세팅을 먼저 해주세요.'
--> 247 results = self._vector_search.similarity_search(
248 query=query,
249 k=k,
250 pre_filter=pre_filter
251 )
252 return results
File ~/work/team_project/contents-generate-ai/.venv/lib/python3.10/site-packages/langchain_community/vectorstores/mongodb_atlas.py:273, in MongoDBAtlasVectorSearch.similarity_search(self, query, k, pre_filter, post_filter_pipeline, **kwargs)
256 """Return MongoDB documents most similar to the given query.
257
258 Uses the vectorSearch operator available in MongoDB Atlas Search.
(...)
270 List of documents most similar to the query and their scores.
271 """
272 additional = kwargs.get("additional")
--> 273 docs_and_scores = self.similarity_search_with_score(
274 query,
275 k=k,
276 pre_filter=pre_filter,
277 post_filter_pipeline=post_filter_pipeline,
278 )
280 if additional and "similarity_score" in additional:
281 for doc, score in docs_and_scores:
File ~/work/team_project/contents-generate-ai/.venv/lib/python3.10/site-packages/langchain_community/vectorstores/mongodb_atlas.py:240, in MongoDBAtlasVectorSearch.similarity_search_with_score(self, query, k, pre_filter, post_filter_pipeline)
223 """Return MongoDB documents most similar to the given query and their scores.
224
225 Uses the vectorSearch operator available in MongoDB Atlas Search.
(...)
237 List of documents most similar to the query and their scores.
238 """
239 embedding = self._embedding.embed_query(query)
--> 240 docs = self._similarity_search_with_score(
241 embedding,
242 k=k,
243 pre_filter=pre_filter,
244 post_filter_pipeline=post_filter_pipeline,
245 )
246 return docs
File ~/work/team_project/contents-generate-ai/.venv/lib/python3.10/site-packages/langchain_community/vectorstores/mongodb_atlas.py:212, in MongoDBAtlasVectorSearch._similarity_search_with_score(self, embedding, k, pre_filter, post_filter_pipeline)
210 text = res.pop(self._text_key)
211 score = res.pop("score")
--> 212 del res["embedding"]
213 docs.append((Document(page_content=text, metadata=res), score))
214 return docs
KeyError: 'embedding'
### Description
Situation:
1. Set the embedding field name to a custom value, e.g. "doc_embedding" (via `embedding_key`).
2. Run `MongoDBAtlasVectorSearch(**kwargs).similarity_search(**params)`.
3. A `KeyError: 'embedding'` is raised.
I think `del res["embedding"]` is hard-coded to the default field name.
So I suggest changing that line to `del res[self._embedding_key]`.
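To make the suggestion concrete, here is a sketch of what `_similarity_search_with_score` might look like with that change (based only on the lines quoted in the traceback; the `.pop(..., None)` fallback is my own addition):
```python
# langchain_community/vectorstores/mongodb_atlas.py (sketch, not the actual source)
text = res.pop(self._text_key)
score = res.pop("score")
res.pop(self._embedding_key, None)  # was: del res["embedding"]
docs.append((Document(page_content=text, metadata=res), score))
```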
### System Info
[tool.poetry.dependencies]
python = "^3.10"
openai = "^1.7.0"
langchain = "^0.1.0"
langchain-openai = "^0.0.2"
langchain-community = "^0.0.15" | community: [MongoDBAtlasVectorSearch] Fix KeyError 'embedding' | https://api.github.com/repos/langchain-ai/langchain/issues/17177/comments | 2 | 2024-02-07T13:31:02Z | 2024-06-08T16:09:40Z | https://github.com/langchain-ai/langchain/issues/17177 | 2,123,046,146 | 17,177 |
[
"hwchase17",
"langchain"
] | Can we also update the pricing information for the latest OpenAI models (released 0125)?
_Originally posted by @huanvo88 in https://github.com/langchain-ai/langchain/issues/12994#issuecomment-1923687183_
| Can we also update the pricing information for the latest OpenAI models (released 0125)? | https://api.github.com/repos/langchain-ai/langchain/issues/17173/comments | 3 | 2024-02-07T12:41:17Z | 2024-07-17T16:04:53Z | https://github.com/langchain-ai/langchain/issues/17173 | 2,122,949,251 | 17,173 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
%pip install pymilvus
#Imports a PyMilvus package:
from pymilvus import (
connections,
utility,
FieldSchema,
CollectionSchema,
DataType,
Collection,
)
from langchain_openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chat_models import ChatOpenAI
from langchain.schema.runnable import RunnablePassthrough
from langchain.prompts import PromptTemplate
from langchain_community.document_loaders import TextLoader,PyPDFLoader
from langchain_community.vectorstores import Milvus
connections.connect("default", host="localhost", port="19530")
import os
import openai
from dotenv import load_dotenv
load_dotenv()
os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY')
file_path="Code_of_Conduct_Policy.pdf"
loader = PyPDFLoader(file_path)
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
texts = text_splitter.split_documents(documents=documents)
print(texts)
embeddings = OpenAIEmbeddings()
#Creates a collection:
fields = [
FieldSchema(name="pk", dtype=DataType.INT64, is_primary=True, auto_id=False),
FieldSchema(name="random", dtype=DataType.DOUBLE),
FieldSchema(name="embeddings", dtype=DataType.FLOAT_VECTOR, dim=8)
]
schema = CollectionSchema(fields, "hello_milvus is the simplest demo to introduce the APIs")
hello_milvus = Collection("hello_milvus", schema)
vector_db = Milvus.from_documents(
texts,
embeddings,
collection_name="testing",
connection_args={"host": "localhost", "port": "19530"},
)
```
### Error Message and Stack Trace (if applicable)
KeyError Traceback (most recent call last)
Cell In[18], [line 1](vscode-notebook-cell:?execution_count=18&line=1)
----> [1](vscode-notebook-cell:?execution_count=18&line=1) vector_db = Milvus.from_documents(
[2](vscode-notebook-cell:?execution_count=18&line=2) texts,
[3](vscode-notebook-cell:?execution_count=18&line=3) embeddings,
[4](vscode-notebook-cell:?execution_count=18&line=4) collection_name="testing",
[5](vscode-notebook-cell:?execution_count=18&line=5) connection_args={"host": "localhost", "port": "19530"},
[6](vscode-notebook-cell:?execution_count=18&line=6) )
File [~/.local/lib/python3.9/site-packages/langchain_core/vectorstores.py:508](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_core/vectorstores.py:508), in VectorStore.from_documents(cls, documents, embedding, **kwargs)
[506](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_core/vectorstores.py:506) texts = [d.page_content for d in documents]
[507](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_core/vectorstores.py:507) metadatas = [d.metadata for d in documents]
--> [508](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_core/vectorstores.py:508) return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File [~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:984](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:984), in Milvus.from_texts(cls, texts, embedding, metadatas, collection_name, connection_args, consistency_level, index_params, search_params, drop_old, ids, **kwargs)
[971](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:971) auto_id = True
[973](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:973) vector_db = cls(
[974](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:974) embedding_function=embedding,
[975](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:975) collection_name=collection_name,
(...)
[982](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:982) **kwargs,
[983](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:983) )
--> [984](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:984) vector_db.add_texts(texts=texts, metadatas=metadatas, ids=ids)
[985](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:985) return vector_db
File [~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:586](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:586), in Milvus.add_texts(self, texts, metadatas, timeout, batch_size, ids, **kwargs)
[584](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:584) end = min(i + batch_size, total_count)
[585](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:585) # Convert dict to list of lists batch for insertion
--> [586](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:586) insert_list = [insert_dict[x][i:end] for x in self.fields]
[587](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:587) # Insert into the collection.
[588](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:588) try:
File [~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:586](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:586), in <listcomp>(.0)
[584](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:584) end = min(i + batch_size, total_count)
[585](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:585) # Convert dict to list of lists batch for insertion
--> [586](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:586) insert_list = [insert_dict[x][i:end] for x in self.fields]
[587](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:587) # Insert into the collection.
[588](https://file+.vscode-resource.vscode-cdn.net/home/hs/milvus-demo/~/.local/lib/python3.9/site-packages/langchain_community/vectorstores/milvus.py:588) try:
KeyError: 'pk'
### Description
Every cell runs fine, but when I call `Milvus.from_documents` it throws the error above. My Milvus Docker container is running, yet I keep getting the same error.
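As a hedged workaround sketch, letting LangChain recreate the collection with its own schema (the `drop_old=True` flag comes from the `from_texts` signature in the traceback; whether it resolves this exact case is an assumption):
```python
vector_db = Milvus.from_documents(
    texts,
    embeddings,
    collection_name="testing",
    connection_args={"host": "localhost", "port": "19530"},
    drop_old=True,  # drop the pre-existing collection so the schema matches what LangChain inserts
)
```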
### System Info
pip install pymilvus
| Getting "KeyError: 'pk' " while using Milvus DB | https://api.github.com/repos/langchain-ai/langchain/issues/17172/comments | 9 | 2024-02-07T12:37:12Z | 2024-06-18T08:05:08Z | https://github.com/langchain-ai/langchain/issues/17172 | 2,122,942,016 | 17,172 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain.chains import ConversationalRetrievalChain
from langchain.llms.bedrock import Bedrock
from langchain.prompts import PromptTemplate
llm_jurassic_ultra = Bedrock(
model_id="ai21.j2-ultra-v1",
endpoint_url="https://bedrock.us-east-1.amazonaws.com",
model_kwargs={"temperature": 0.7, "maxTokens": 500, "numResults": 1}
)
print('llm_jurassic_ultra:', llm_jurassic_ultra);
llm_jurassic_mid = Bedrock(
model_id="amazon.titan-text-express-v1",
endpoint_url="https://bedrock.us-east-1.amazonaws.com",
model_kwargs={"temperature": 0.7, "maxTokenCount": 300, "topP": 1}
)
print('llm_jurassic_mid:', llm_jurassic_mid);
#Create template for combining chat history and follow up question into a standalone question.
question_generator_chain_template = """
Here is some chat history contained in the <chat_history> tags and a follow-up question in the <follow_up> tags:
<chat_history>
{chat_history}
</chat_history>
<follow_up>
{question}
</follow_up>
Combine the chat history and follow up question into a standalone question.
"""
question_generator_chain_prompt = PromptTemplate.from_template(question_generator_chain_template)
print('question_generator_chain_prompt:', question_generator_chain_prompt);
#Create template for asking the question of the given context.
combine_docs_chain_template = """
You are a friendly, concise chatbot. Here is some context, contained in <context> tags:
<context>
{context}
</context>
Given the context answer this question: {question}
"""
combine_docs_chain_prompt = PromptTemplate.from_template(combine_docs_chain_template)
# RetrievalQA instance with custom prompt template
qa = ConversationalRetrievalChain.from_llm(
llm=llm_jurassic_ultra,
condense_question_llm=llm_jurassic_mid,
retriever=retriever,
return_source_documents=True,
condense_question_prompt=question_generator_chain_prompt,
combine_docs_chain_kwargs={"prompt": combine_docs_chain_prompt}
)
# More context: https://github.com/aws-samples/amazon-bedrock-kendra-lex-chatbot/blob/main/lambda/app.py
```
### Error Message and Stack Trace (if applicable)
```
[ERROR] ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: The requested operation is not recognized by the service.
Traceback (most recent call last):
File "/var/task/app.py", line 128, in lambda_handler
result = qa(input_variables)
File "/var/lang/lib/python3.9/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/var/lang/lib/python3.9/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "/var/lang/lib/python3.9/site-packages/langchain/chains/conversational_retrieval/base.py", line 159, in _call
answer = self.combine_docs_chain.run(
File "/var/lang/lib/python3.9/site-packages/langchain/chains/base.py", line 510, in run
return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
File "/var/lang/lib/python3.9/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/var/lang/lib/python3.9/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "/var/lang/lib/python3.9/site-packages/langchain/chains/combine_documents/base.py", line 122, in _call
output, extra_return_dict = self.combine_docs(
File "/var/lang/lib/python3.9/site-packages/langchain/chains/combine_documents/stuff.py", line 171, in combine_docs
return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
File "/var/lang/lib/python3.9/site-packages/langchain/chains/llm.py", line 298, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
File "/var/lang/lib/python3.9/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/var/lang/lib/python3.9/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "/var/lang/lib/python3.9/site-packages/langchain/chains/llm.py", line 108, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/var/lang/lib/python3.9/site-packages/langchain/chains/llm.py", line 120, in generate
return self.llm.generate_prompt(
File "/var/lang/lib/python3.9/site-packages/langchain/llms/base.py", line 507, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/var/lang/lib/python3.9/site-packages/langchain/llms/base.py", line 656, in generate
output = self._generate_helper(
File "/var/lang/lib/python3.9/site-packages/langchain/llms/base.py", line 544, in _generate_helper
raise e
File "/var/lang/lib/python3.9/site-packages/langchain/llms/base.py", line 531, in _generate_helper
self._generate(
File "/var/lang/lib/python3.9/site-packages/langchain/llms/base.py", line 1053, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/var/lang/lib/python3.9/site-packages/langchain/llms/bedrock.py", line 427, in _call
return self._prepare_input_and_invoke(prompt=prompt, stop=stop, **kwargs)
File "/var/lang/lib/python3.9/site-packages/langchain/llms/bedrock.py", line 266, in _prepare_input_and_invoke
raise ValueError(f"Error raised by bedrock service: {e}")
```
### Description
I am using LangChain to fetch a response from Bedrock and get the error described above. I checked for access/permission issues and found none.
Working on this example: https://github.com/aws-samples/amazon-bedrock-kendra-lex-chatbot/blob/main/lambda/app.py
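One thing worth double-checking (an assumption on my part, not confirmed in this report): `InvokeModel` is served by the `bedrock-runtime` endpoint rather than the `bedrock` control-plane endpoint, so constructing the LLM with an explicit runtime client, or simply omitting `endpoint_url`, may behave differently:
```python
import boto3
from langchain.llms.bedrock import Bedrock

# Hypothetical fix: point the LLM at the runtime service explicitly.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

llm_jurassic_ultra = Bedrock(
    model_id="ai21.j2-ultra-v1",
    client=bedrock_runtime,
    model_kwargs={"temperature": 0.7, "maxTokens": 500, "numResults": 1},
)
```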
### System Info
Running on AWS Lambda. | Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation. | https://api.github.com/repos/langchain-ai/langchain/issues/17170/comments | 2 | 2024-02-07T11:10:10Z | 2024-06-29T16:07:42Z | https://github.com/langchain-ai/langchain/issues/17170 | 2,122,775,441 | 17,170 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
model = ChatOpenAI(temperature=0)
class Joke(BaseModel):
setup: str = Field(description="question to set up a joke")
punchline: str = Field(description="answer to resolve the joke")
joke_query = "Tell me a joke."
parser = JsonOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}, #cause
)
chain = prompt | model | parser
chain.invoke({"query": joke_query})
openai_functions = [convert_to_openai_function(Joke)]
parser = JsonOutputFunctionsParser()
chain = prompt | model.bind(functions=openai_functions) | parser
chain.invoke({"query": "tell me a joke"})
# openai.BadRequestError: Error code: 400 - {'error': {'message': "'' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'functions.0.name'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
### Error Message and Stack Trace (if applicable)
```
# openai.BadRequestError: Error code: 400 - {'error': {'message': "'' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'functions.0.name'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
### Description
For the JsonOutputParser.get_format_instructions function, it modifies the result of pydantic_object.schema() directly. This can lead to unintended issues when the same Pydantic class calls .schema() multiple times. An example of this is the convert_to_openai_function function. Below is an illustration of this scenario.
### System Info
langchain==0.1.4
langchain-anthropic==0.0.1.post1
langchain-community==0.0.16
langchain-core==0.1.16
langchain-openai==0.0.2
| Issue: JsonOutputParser's get_format_instructions() Modifying Pydantic Class Schema | https://api.github.com/repos/langchain-ai/langchain/issues/17161/comments | 1 | 2024-02-07T09:03:18Z | 2024-02-13T22:41:48Z | https://github.com/langchain-ai/langchain/issues/17161 | 2,122,518,120 | 17,161 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Hi Team,
I am using Chroma to store document embeddings and then trying to get answers from the DB using an Agent, but it generates inconsistent results; the probability of getting a correct answer is roughly 0.1. Please let me know how I can fix this.
```python
from langchain.chains import ChatVectorDBChain, RetrievalQA, RetrievalQAWithSourcesChain, ConversationChain
from langchain.agents import initialize_agent, Tool, load_tools, AgentExecutor, ConversationalChatAgent, AgentType
from langchain.tools import BaseTool, tool
vectordb = connect_chromadb()
search_qa = RetrievalQAWithSourcesChain.from_chain_type(llm=llm, chain_type="stuff",
retriever=vectordb.as_retriever(search_type="mmr", search_kwargs={"filter": filters}), return_source_documents=True,
chain_type_kwargs=digitaleye_templates.qa_summary_kwargs, reduce_k_below_max_tokens=True)
summary_qa = RetrievalQAWithSourcesChain.from_chain_type(llm=llm, chain_type="stuff",
retriever=vectordb.as_retriever(search_type="mmr", search_kwargs={"filter": filters}),
return_source_documents=True, chain_type_kwargs=digitaleye_templates.general_summary_kwargs,
reduce_k_below_max_tokens=True)
detools = [
Tool(
name = "QA Search",
func=search_qa,
description="Useful for when you want to search a document store for the answer to a question based on facts contained in those documents.",
return_direct=True,
),
Tool(
name = "General Summary",
func=summary_qa,
description="Useful for when you want to summarize a document for the answer to a question based on facts contained in those documents.",
return_direct=True,
),
]
agent = initialize_agent(tools=detools, llm=llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
agent_kwargs={
'prefix':PREFIX,
#'format_instructions':FORMAT_INSTRUCTIONS,
#'suffix':SUFFIX,
"input_variables": ["input","agent_scratchpad"],
},
#prefix=PREFIX,
#format_instructions=FORMAT_INSTRUCTIONS,
#suffix=SUFFIX,
max_iterations=3,
return_intermediate_steps=False,
early_stopping_method="generate",
handle_parsing_errors="Check you output and make sure it conforms! Do not output an action and a final answer at the same time.",
#streaming=True,
)
answer = agent.run({"input": "summarize mrileftkneereport.pdf"})
print(answer)
# Below are samples of the answers I received (pasted as plain text for illustration; they are not valid Python literals):
answer1 = "As an AI model, I’m unable to open or read files such as “mrileftkneereport.pdf”. However, I can help you generate a
summary or answer specific questions based on the text content you provide from the report. Please copy and paste
the text from the report, and I’ll do my best to assist you."
answer2 = "As an AI, I’m unable to access or analyze specific files such as “mrileftkneereport.pdf”. However, I can guide you on
how to interpret a radiology report based on the guidelines provided.
For a general summary, you would need to identify the patient’s name, date of exam, type of exam requested, and
radiologist name from the report. The first paragraph of the summary should include the reason for the screening,
screening time, protocol, patient background, and comparison date.
The second paragraph should detail the specific findings, including the number of nodules detected, their precise
position and size, findings on lungs, heart, pleura, coronary artery calcification, mediastinum/hilum/axilla, and other
findings.
The third paragraph should provide conclusions, including the overall lung-rads category, management
recommendation and follow-up date, based on the lesion.
If there are incidental findings, these should be introduced in the fourth paragraph.
For a specific question, you would need to locate the relevant information in the report. For example, if the question is
about the size of the largest lung nodule, you would need to find the section of the report that details the nodules and
their sizes.
If the question is about the radiologist’s recommendation for managing a lung nodule, you would need to find the
section of the report that provides the conclusions and recommendations.
Remember to provide the answer in a clear and accurate manner, and provide any relevant context if necessary.
If the question contains both a general summary request and a specific question, prioritize answering the specific
question.
Avoid using legal jargon in your responses and answer what you can based on the information available in the report. If
you are unable to answer the question, explain why, such as the information not being included in the report."
answer3 = "The question appears to be the name of the file, “mrileftkneereport.pdf”, rather than a specific question or a request
for a general summary of the radiology report. Therefore, it’s not clear what information is being requested. Please
provide a specific question or request for a general summary of the radiology report."
answer4 = "The question provided is “mrileftkneeereport.pdf” which does not specify whether a general summary or a specific
detail is required from the MRI report. However, I can provide a general summary based on the information given in
the report.
Summary Report
Patient’s Name: Regina Doe
Date of Exam: Not specified
Type of Exam: MRI Left Knee
Radiologist Name: Ross Banner, MD
The MRI screening was performed due to a left knee twisting injury that Regina Doe experienced on 1/1/2013 The
screening was conducted using a standard protocol on a .3 Tesla Open MRI system, without IV contrast. There were
no
previous scans available for comparison.
The MRI findings revealed a complete mid substance disruption of the anterior cruciate ligament, with its fibers
fibrillating within the joint. This has resulted in the buckling of the posterior crucial ligament. There is also edema
relative to the medial collateral ligament, indicating a grade 1 injury. The lateral collateral ligament complex, including
the iliotibial band, biceps femoris tendon, fibular collateral ligament, and popliteus muscle and tendon, are thought to
be intact. The menisci and patella appear to be in good condition, although there is posterior meniscal capsular
junction
edema. A large suprapatellar bursal effusion and mild reactive synovitis were also noted. The osseous structures and
periarticular soft tissues were largely unremarkable, except for a deepened lateral condylar patellar sulcus of the femur.
The conclusions drawn from the MRI report include a complete full-thickness disruption of the anterior cruciate
ligament, an associated osseous contusion of the lateral condylar patellar sulcus (indicative of a pivot shift injury), and a
grade 1 MCL complex injury. No other associated injuries were identified."
```
Here answer4 is correct, but why am I not getting it consistently?
Please help me on this, I will be thankful to you.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to get answers from the Chroma vector store using an Agent, but it produces inconsistent results on every run.
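A minimal sketch of settings that usually tighten this up, assuming an OpenAI-style chat model (the model name, temperature, and sharper tool descriptions below are my suggestions, not taken from the original code):
```python
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-4", temperature=0)  # temperature=0 reduces run-to-run variance

detools = [
    Tool(
        name="QA Search",
        func=search_qa,
        description=(
            "Use this to answer a specific factual question about an uploaded document, "
            "e.g. 'What is the size of the largest nodule in mrileftkneereport.pdf?'"
        ),
        return_direct=True,
    ),
    Tool(
        name="General Summary",
        func=summary_qa,
        description=(
            "Use this when the user asks to summarize an entire document, "
            "e.g. 'summarize mrileftkneereport.pdf'."
        ),
        return_direct=True,
    ),
]
```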
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22621
> Python Version: 3.9.11 (main, Mar 30 2022, 02:45:55) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.10
> langchain: 0.1.0
> langchain_community: 0.0.12
> langserve: Not Found | AgentExecutor giving inconsistent results | https://api.github.com/repos/langchain-ai/langchain/issues/17160/comments | 3 | 2024-02-07T07:34:57Z | 2024-02-07T13:41:56Z | https://github.com/langchain-ai/langchain/issues/17160 | 2,122,370,678 | 17,160 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda
from langchain_core.output_parsers import StrOutputParser
from langchain_core.messages import AIMessage
from langchain_google_vertexai import VertexAI

# Generate summaries of text elements
def generate_text_summaries(texts, tables, summarize_texts=False):
"""
Summarize text elements
texts: List of str
tables: List of str
summarize_texts: Bool to summarize texts
"""
# Prompt
prompt_text = """You are an assistant tasked with summarizing tables and text for retrieval. \
These summaries will be embedded and used to retrieve the raw text or table elements. \
Give a concise summary of the table or text that is well-optimized for retrieval. Table \
or text: {element} """
prompt = PromptTemplate.from_template(prompt_text)
empty_response = RunnableLambda(
lambda x: AIMessage(content="Error processing document")
)
# Text summary chain
model = VertexAI(
temperature=0, model_name="gemini-pro", max_output_tokens=1024
).with_fallbacks([empty_response])
summarize_chain = {"element": lambda x: x} | prompt | model | StrOutputParser()
# Initialize empty summaries
text_summaries = []
table_summaries = []
# Apply to text if texts are provided and summarization is requested
if texts and summarize_texts:
text_summaries = summarize_chain.batch(texts, {"max_concurrency": 1})
elif texts:
text_summaries = texts
# Apply to tables if tables are provided
if tables:
table_summaries = summarize_chain.batch(tables, {"max_concurrency": 1})
return text_summaries, table_summaries
# Get text, table summaries
text_summaries2, table_summaries = generate_text_summaries(
texts[9:], tables, summarize_texts=True
)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
[<ipython-input-6-4464722c69fb>](https://localhost:8080/#) in <cell line: 51>()
49
50 # Get text, table summaries
---> 51 text_summaries2, table_summaries = generate_text_summaries(
52 texts[9:], tables, summarize_texts=True
53 )
3 frames
[/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py](https://localhost:8080/#) in __init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for VertexAI
__root__
Unable to find your project. Please provide a project ID by:
- Passing a constructor argument
- Using vertexai.init()
- Setting project using 'gcloud config set project my-project'
- Setting a GCP environment variable
- To create a Google Cloud project, please follow guidance at https://developers.google.com/workspace/guides/create-project (type=value_error)
```
### Description
I am not able to connect to Vertex AI; I am new to GCP. What are the steps to set the project correctly?
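For reference, a hedged sketch of the options listed in the error message (the project ID and location are placeholders):
```python
import vertexai
from langchain_google_vertexai import VertexAI

# Requires application-default credentials, e.g. `gcloud auth application-default login`.

# Option 1: initialise the SDK once with your project and region.
vertexai.init(project="my-gcp-project-id", location="us-central1")

# Option 2: pass the project directly to the model wrapper.
model = VertexAI(
    model_name="gemini-pro",
    temperature=0,
    max_output_tokens=1024,
    project="my-gcp-project-id",
)
```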
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sat Nov 18 15:31:17 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.19
> langchain: 0.1.5
> langchain_community: 0.0.18
> langsmith: 0.0.86
> langchain_experimental: 0.0.50
> langchain_google_vertexai: 0.0.3
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | ValidationError: 1 validation error for VertexAI | https://api.github.com/repos/langchain-ai/langchain/issues/17159/comments | 2 | 2024-02-07T06:32:43Z | 2024-02-07T13:42:50Z | https://github.com/langchain-ai/langchain/issues/17159 | 2,122,288,498 | 17,159 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
The import statement in the following sample code from the Usage section can be improved.
> from langchain_community.llms import OCIGenAI
```py
from langchain_community.llms import OCIGenAI
# use default authN method API-key
llm = OCIGenAI(
model_id="MY_MODEL",
service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
compartment_id="MY_OCID",
)
response = llm.invoke("Tell me one fact about earth", temperature=0.7)
print(response)
```
The following import can be improved as well.
> from langchain_community.vectorstores import FAISS
```py
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough
from langchain_community.embeddings import OCIGenAIEmbeddings
from langchain_community.vectorstores import FAISS
```
### Idea or request for content:
For the first item, it would be more appropriate to use:
```py
from langchain_community.llms.oci_generative_ai import OCIGenAI
```
And for the second item:
```py
from langchain_community.vectorstores.faiss import FAISS
``` | DOC: some import statement of Oracle Cloud Infrastructure Generative AI can be improved | https://api.github.com/repos/langchain-ai/langchain/issues/17156/comments | 3 | 2024-02-07T05:00:30Z | 2024-02-13T06:38:55Z | https://github.com/langchain-ai/langchain/issues/17156 | 2,122,192,776 | 17,156 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.vectorstores.milvus import Milvus

# embeddings: any Embeddings implementation (e.g. OpenAIEmbeddings); definition omitted here for brevity
vdb = Milvus(
embedding_function=embeddings,
connection_args={
"host": "localhost",
"port": 19530,
},
auto_id=True,
)
vdb.add_texts(
texts=[
"This is a test",
"This is another test",
],
metadatas=[
{"test": "1"},
{"test": "2"},
],
)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[8], [line 1](vscode-notebook-cell:?execution_count=8&line=1)
----> [1](vscode-notebook-cell:?execution_count=8&line=1) vdb.add_texts(
[2](vscode-notebook-cell:?execution_count=8&line=2) texts=[
[3](vscode-notebook-cell:?execution_count=8&line=3) "This is a test",
[4](vscode-notebook-cell:?execution_count=8&line=4) "This is another test",
[5](vscode-notebook-cell:?execution_count=8&line=5) ],
[6](vscode-notebook-cell:?execution_count=8&line=6) metadatas=[
[7](vscode-notebook-cell:?execution_count=8&line=7) {"test": "1"},
[8](vscode-notebook-cell:?execution_count=8&line=8) {"test": "2"},
[9](vscode-notebook-cell:?execution_count=8&line=9) ],
[10](vscode-notebook-cell:?execution_count=8&line=10) )
File [d:\Projects\ai-notebook\.venv\Lib\site-packages\langchain_community\vectorstores\milvus.py:586](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:586), in Milvus.add_texts(self, texts, metadatas, timeout, batch_size, ids, **kwargs)
[584](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:584) end = min(i + batch_size, total_count)
[585](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:585) # Convert dict to list of lists batch for insertion
--> [586](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:586) insert_list = [insert_dict[x][i:end] for x in self.fields]
[587](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:587) # Insert into the collection.
[588](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:588) try:
File [d:\Projects\ai-notebook\.venv\Lib\site-packages\langchain_community\vectorstores\milvus.py:586](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:586), in <listcomp>(.0)
[584](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:584) end = min(i + batch_size, total_count)
[585](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:585) # Convert dict to list of lists batch for insertion
--> [586](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:586) insert_list = [insert_dict[x][i:end] for x in self.fields]
[587](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:587) # Insert into the collection.
[588](file:///D:/Projects/ai-notebook/.venv/Lib/site-packages/langchain_community/vectorstores/milvus.py:588) try:
KeyError: 'pk'
```
### Description
According to #16256, setting `auto_id=True` should make the Milvus vector store compatible with collections created by older versions that did not use auto_id, but it does not seem to be compatible.
@jaelgu would you please take a look at this issue?
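For reproduction purposes, a hedged sketch of the workaround I would try in the meantime (whether `drop_old=True` avoids the `'pk'` mismatch with `auto_id=True` is an assumption on my part):
```python
vdb = Milvus.from_texts(
    texts=["This is a test", "This is another test"],
    embedding=embeddings,
    metadatas=[{"test": "1"}, {"test": "2"}],
    connection_args={"host": "localhost", "port": 19530},
    auto_id=True,
    drop_old=True,  # recreate the collection so its schema matches the auto_id setting
)
```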
### System Info
langchain==0.1.5
langchain-community==0.0.18
langchain-core==0.1.19
langchain-openai==0.0.5 | Vectorstore Milvus set auto_id=True seems incompatible with old version | https://api.github.com/repos/langchain-ai/langchain/issues/17147/comments | 9 | 2024-02-07T01:39:21Z | 2024-07-07T17:14:57Z | https://github.com/langchain-ai/langchain/issues/17147 | 2,121,991,274 | 17,147 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain_core.runnables import RunnablePassthrough
# Prompt template
template = """Answer the question based only on the following context, which can include text and tables:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
# LLM
model = ChatOpenAI(temperature=0, model="gpt-4")
# RAG pipeline
chain = (
{"context": retriever, "question": RunnablePassthrough()}
| prompt
| model
| StrOutputParser()
)
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Using the notebook from https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb
and the original "Llama2 article" pdf from the notebook - I get **_only_** incorrect answers related to information in the tables.
<img width="1089" alt="Screenshot 2024-02-06 at 6 59 40 PM" src="https://github.com/langchain-ai/langchain/assets/38818491/68de29e1-c256-474a-99be-df32e823bd4e">
chain.invoke("What is the commonsense reasoning score of Falcon 40B ?")
'The commonsense reasoning score of Falcon 40B is 15.2.' ... Incorrect
chain.invoke("Which model has the worst Reading Comprehension and which one has the best")
'The Llama 1 model has the worst Reading Comprehension and the Falcon model has the best.' ... Incorrect
If one asks without capitalization - 'reading comprehension' no answers found (?!) ... Incorrect
Another table and example:
<img width="913" alt="Screenshot 2024-02-06 at 6 28 32 PM" src="https://github.com/langchain-ai/langchain/assets/38818491/6b0768de-a419-486a-a83d-742144c32379">
chain.invoke("What is the power consumption of training Llama2 34B ?")
'The power consumption of training Llama2 34B is https://github.com/langchain-ai/langchain/commit/172032014ea25f655a3efab5be5abcc2e3693037 W.' ... Incorrect
### System Info
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
langchain-openai==0.0.5
langchainhub==0.1.14
Python 3.9.12 (main, Apr 5 2022, 01:53:17)
[Clang 12.0.0 ] :: Anaconda, Inc. on darwin | Incorrect answers related to tables in the original Llama2 article used in the tutorial | https://api.github.com/repos/langchain-ai/langchain/issues/17140/comments | 1 | 2024-02-07T00:02:38Z | 2024-05-15T16:07:49Z | https://github.com/langchain-ai/langchain/issues/17140 | 2,121,894,168 | 17,140 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
pdf_loader = PyPDFLoader("docs\conda-cheatsheet.pdf", True)
pages = pdf_loader.load()
```
### Error Message and Stack Trace (if applicable)
File "C:\Users\luke8\Desktop\chatagent\app.py", line 26, in <module>
pages = pdf_loader.load()
File "D:\Anaconda3\envs\env39\lib\site-packages\langchain\document_loaders\pdf.py", line 161, in load
return list(self.lazy_load())
File "D:\Anaconda3\envs\env39\lib\site-packages\langchain\document_loaders\pdf.py", line 168, in lazy_load
yield from self.parser.parse(blob)
File "D:\Anaconda3\envs\env39\lib\site-packages\langchain\document_loaders\base.py", line 95, in parse
return list(self.lazy_parse(blob))
File "D:\Anaconda3\envs\env39\lib\site-packages\langchain\document_loaders\parsers\pdf.py", line 26, in lazy_parse
pdf_reader = pypdf.PdfReader(pdf_file_obj, password=self.password)
File "D:\Anaconda3\envs\env39\lib\site-packages\pypdf\_reader.py", line 345, in __init__
raise PdfReadError("Not encrypted file")
pypdf.errors.PdfReadError: Not encrypted file
### Description
Trying to extract images from a PDF to text, I got the error above. I also tried to follow the syntax in the [Extracting images in LangChain docs](https://python.langchain.com/docs/modules/data_connection/document_loaders/pdf#extracting-images) and got this error instead:
```
Traceback (most recent call last):
  File "C:\Users\luke8\Desktop\chatagent\app.py", line 25, in <module>
    pdf_loader = PyPDFLoader("docs\TaskWaver.pdf", extract_images=True)
TypeError: __init__() got an unexpected keyword argument 'extract_images'
```
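For what it's worth: in current versions the second positional argument of `PyPDFLoader` is the PDF password, which would explain the `Not encrypted file` error when `True` is passed positionally. A hedged sketch of the keyword form from the docs (assumes an upgraded langchain/pypdf; the OCR dependency is `rapidocr-onnxruntime`):
```python
# pip install -U langchain langchain-community pypdf rapidocr-onnxruntime
from langchain_community.document_loaders import PyPDFLoader

# Pass extract_images by keyword; the second positional argument is the password.
pdf_loader = PyPDFLoader(r"docs\conda-cheatsheet.pdf", extract_images=True)
pages = pdf_loader.load()
print(pages[0].page_content)
```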
### System Info
# packages in environment at D:\Anaconda3\envs\env39:
#
# Name Version Build Channel
aiohttp 3.9.0 py39h2bbff1b_0
aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge
anyio 3.5.0 py39haa95532_0 anaconda
appdirs 1.4.4 pyhd3eb1b0_0
argon2-cffi 20.1.0 py39h2bbff1b_1 anaconda
asttokens 2.0.5 pyhd3eb1b0_0 anaconda
async-timeout 4.0.3 pyhd8ed1ab_0 conda-forge
attrs 23.1.0 py39haa95532_0 anaconda
babel 2.11.0 py39haa95532_0 anaconda
backcall 0.2.0 pyhd3eb1b0_0 anaconda
beautifulsoup4 4.12.2 py39haa95532_0 anaconda
blas 1.0 mkl
bleach 4.1.0 pyhd3eb1b0_0 anaconda
blinker 1.7.0 pypi_0 pypi
brotli 1.0.9 h2bbff1b_7
brotli-bin 1.0.9 h2bbff1b_7
brotli-python 1.0.9 py39hd77b12b_7 anaconda
ca-certificates 2023.12.12 haa95532_0
cachetools 5.3.2 pyhd8ed1ab_0 conda-forge
certifi 2024.2.2 py39haa95532_0
cffi 1.16.0 py39h2bbff1b_0 anaconda
charset-normalizer 2.0.4 pyhd3eb1b0_0 anaconda
click 8.1.7 py39haa95532_0
colorama 0.4.6 py39haa95532_0
coloredlogs 15.0.1 pypi_0 pypi
comm 0.1.2 py39haa95532_0 anaconda
contourpy 1.2.0 py39h59b6b97_0
cryptography 41.0.3 py39h89fc84f_0 anaconda
cycler 0.11.0 pyhd3eb1b0_0
dataclasses-json 0.5.7 pyhd8ed1ab_0 conda-forge
debugpy 1.6.7 py39hd77b12b_0 anaconda
decorator 5.1.1 pyhd3eb1b0_0 anaconda
defusedxml 0.7.1 pyhd3eb1b0_0 anaconda
distro 1.9.0 pypi_0 pypi
docker-pycreds 0.4.0 pyhd3eb1b0_0
entrypoints 0.4 py39haa95532_0 anaconda
et_xmlfile 1.1.0 py39haa95532_0
exceptiongroup 1.0.4 py39haa95532_0 anaconda
executing 0.8.3 pyhd3eb1b0_0 anaconda
filelock 3.13.1 py39haa95532_0
flask 3.0.2 pypi_0 pypi
flask-sqlalchemy 3.1.1 pypi_0 pypi
flatbuffers 23.5.26 pypi_0 pypi
fonttools 4.25.0 pyhd3eb1b0_0
freetype 2.12.1 ha860e81_0
frozenlist 1.4.0 py39h2bbff1b_0
gitdb 4.0.7 pyhd3eb1b0_0
gitpython 3.1.37 py39haa95532_0
gmpy2 2.1.2 py39h7f96b67_0
google-api-core 2.16.1 pyhd8ed1ab_0 conda-forge
google-auth 2.27.0 pyhca7485f_0 conda-forge
googleapis-common-protos 1.62.0 pyhd8ed1ab_0 conda-forge
greenlet 3.0.3 pypi_0 pypi
h11 0.14.0 pypi_0 pypi
httpcore 1.0.2 pypi_0 pypi
httpx 0.26.0 pypi_0 pypi
humanfriendly 10.0 py39haa95532_1
icc_rt 2022.1.0 h6049295_2
icu 58.2 vc14hc45fdbb_0 [vc14] anaconda
idna 3.4 py39haa95532_0 anaconda
importlib-metadata 7.0.1 py39haa95532_0
importlib_resources 6.1.1 py39haa95532_1
iniconfig 2.0.0 pyhd8ed1ab_0 conda-forge
intel-openmp 2023.1.0 h59b6b97_46320
ipykernel 6.25.0 py39h9909e9c_0 anaconda
ipython 8.15.0 py39haa95532_0 anaconda
ipython_genutils 0.2.0 pyhd3eb1b0_1 anaconda
ipywidgets 8.0.4 py39haa95532_0 anaconda
itsdangerous 2.1.2 pypi_0 pypi
jedi 0.18.1 py39haa95532_1 anaconda
jinja2 3.1.3 py39haa95532_0
joblib 1.2.0 py39haa95532_0
jpeg 9e h2bbff1b_1 anaconda
json5 0.9.6 pyhd3eb1b0_0 anaconda
jsonschema 4.19.2 py39haa95532_0 anaconda
jsonschema-specifications 2023.7.1 py39haa95532_0 anaconda
jupyter 1.0.0 py39haa95532_8 anaconda
jupyter_client 7.4.9 py39haa95532_0 anaconda
jupyter_console 6.6.3 py39haa95532_0 anaconda
jupyter_core 5.5.0 py39haa95532_0 anaconda
jupyter_server 1.23.4 py39haa95532_0 anaconda
jupyterlab 3.3.2 pyhd3eb1b0_0 anaconda
jupyterlab_pygments 0.2.2 py39haa95532_0 anaconda
jupyterlab_server 2.25.1 py39haa95532_0 anaconda
jupyterlab_widgets 3.0.9 py39haa95532_0 anaconda
kiwisolver 1.4.4 py39hd77b12b_0
krb5 1.20.1 h5b6d351_1 anaconda
langchain 0.0.291 pyhd8ed1ab_0 conda-forge
langsmith 0.0.86 pyhd8ed1ab_0 conda-forge
lerc 3.0 hd77b12b_0
libbrotlicommon 1.0.9 h2bbff1b_7
libbrotlidec 1.0.9 h2bbff1b_7
libbrotlienc 1.0.9 h2bbff1b_7
libclang 14.0.6 default_hb5a9fac_1 anaconda
libclang13 14.0.6 default_h8e68704_1 anaconda
libdeflate 1.17 h2bbff1b_1
libpng 1.6.39 h8cc25b3_0 anaconda
libpq 12.15 h906ac69_1 anaconda
libprotobuf 3.20.3 h23ce68f_0
libsodium 1.0.18 h62dcd97_0 anaconda
libtiff 4.5.1 hd77b12b_0
libuv 1.44.2 h2bbff1b_0
libwebp-base 1.3.2 h2bbff1b_0
lz4-c 1.9.4 h2bbff1b_0 anaconda
markupsafe 2.1.3 py39h2bbff1b_0
marshmallow 3.20.2 pyhd8ed1ab_0 conda-forge
marshmallow-enum 1.5.1 pyh9f0ad1d_3 conda-forge
matplotlib-base 3.8.0 py39h4ed8f06_0
matplotlib-inline 0.1.6 py39haa95532_0 anaconda
mistune 2.0.4 py39haa95532_0 anaconda
mkl 2023.1.0 h6b88ed4_46358
mkl-service 2.4.0 py39h2bbff1b_1
mkl_fft 1.3.8 py39h2bbff1b_0
mkl_random 1.2.4 py39h59b6b97_0
mpc 1.1.0 h7edee0f_1
mpfr 4.0.2 h62dcd97_1
mpir 3.0.0 hec2e145_1
mpmath 1.3.0 py39haa95532_0
multidict 6.0.4 py39h2bbff1b_0
munkres 1.1.4 py_0
mypy_extensions 1.0.0 pyha770c72_0 conda-forge
nbclassic 0.5.5 py39haa95532_0 anaconda
nbclient 0.8.0 py39haa95532_0 anaconda
nbconvert 7.10.0 py39haa95532_0 anaconda
nbformat 5.9.2 py39haa95532_0 anaconda
nest-asyncio 1.5.6 py39haa95532_0 anaconda
networkx 2.8.8 pyhd8ed1ab_0 conda-forge
notebook 6.5.4 py39haa95532_0 anaconda
notebook-shim 0.2.3 py39haa95532_0 anaconda
numexpr 2.8.7 py39h2cd9be0_0
numpy 1.26.3 py39h055cbcc_0
numpy-base 1.26.3 py39h65a83cf_0
onnxruntime 1.17.0 py39he3bb845_0_cpu conda-forge
openai 1.11.1 pypi_0 pypi
openapi-schema-pydantic 1.2.4 pyhd8ed1ab_0 conda-forge
opencv-python 4.9.0.80 pypi_0 pypi
openjpeg 2.4.0 h4fc8c34_0
openpyxl 3.0.10 py39h2bbff1b_0
openssl 3.0.13 h2bbff1b_0
packaging 23.1 py39haa95532_0 anaconda
pandas 2.2.0 py39h32e6231_0 conda-forge
pandas-stubs 2.1.4.231227 py39haa95532_0
pandocfilters 1.5.0 pyhd3eb1b0_0 anaconda
parso 0.8.3 pyhd3eb1b0_0 anaconda
pathtools 0.1.2 pyhd3eb1b0_1
pickleshare 0.7.5 pyhd3eb1b0_1003 anaconda
pillow 10.0.1 pypi_0 pypi
pip 23.3.1 py39haa95532_0
platformdirs 3.10.0 py39haa95532_0 anaconda
plotly 5.9.0 py39haa95532_0
pluggy 1.4.0 pyhd8ed1ab_0 conda-forge
ply 3.11 py39haa95532_0 anaconda
prometheus_client 0.14.1 py39haa95532_0 anaconda
prompt-toolkit 3.0.36 py39haa95532_0 anaconda
prompt_toolkit 3.0.36 hd3eb1b0_0 anaconda
protobuf 3.20.3 py39hcbf5309_1 conda-forge
psutil 5.9.0 py39h2bbff1b_0 anaconda
pure_eval 0.2.2 pyhd3eb1b0_0 anaconda
pyasn1 0.5.1 pyhd8ed1ab_0 conda-forge
pyasn1-modules 0.3.0 pyhd8ed1ab_0 conda-forge
pyclipper 1.3.0.post5 pypi_0 pypi
pycparser 2.21 pyhd3eb1b0_0 anaconda
pydantic 1.10.12 py39h2bbff1b_1
pygments 2.15.1 py39haa95532_1 anaconda
pyopenssl 23.2.0 py39haa95532_0 anaconda
pyparsing 3.0.9 py39haa95532_0
pypdf 4.0.1 pypi_0 pypi
pyqt 5.15.10 py39hd77b12b_0 anaconda
pyqt5-sip 12.13.0 py39h2bbff1b_0 anaconda
pyreadline3 3.4.1 py39haa95532_0
pysocks 1.7.1 py39haa95532_0 anaconda
pytest 8.0.0 pyhd8ed1ab_0 conda-forge
pytest-subtests 0.11.0 pyhd8ed1ab_0 conda-forge
python 3.9.18 h1aa4202_0
python-dateutil 2.8.2 pyhd3eb1b0_0 anaconda
python-dotenv 1.0.1 pyhd8ed1ab_0 conda-forge
python-fastjsonschema 2.16.2 py39haa95532_0 anaconda
python-flatbuffers 2.0 pyhd3eb1b0_0
python-tzdata 2023.3 pyhd3eb1b0_0
python_abi 3.9 2_cp39 conda-forge
pytorch 2.2.0 py3.9_cpu_0 pytorch
pytorch-mutex 1.0 cpu pytorch
pytz 2023.3.post1 py39haa95532_0 anaconda
pyu2f 0.1.5 pyhd8ed1ab_0 conda-forge
pywin32 305 py39h2bbff1b_0 anaconda
pywinpty 2.0.10 py39h5da7b33_0 anaconda
pyyaml 6.0.1 py39h2bbff1b_0
pyzmq 25.1.0 py39hd77b12b_0 anaconda
qt-main 5.15.2 h879a1e9_9 anaconda
qtconsole 5.5.0 py39haa95532_0 anaconda
qtpy 2.4.1 py39haa95532_0 anaconda
rapidocr-onnxruntime 1.3.11 pypi_0 pypi
referencing 0.30.2 py39haa95532_0 anaconda
requests 2.31.0 py39haa95532_0 anaconda
rpds-py 0.10.6 py39h062c2fa_0 anaconda
rsa 4.9 pyhd8ed1ab_0 conda-forge
scikit-learn 1.3.0 py39h4ed8f06_1
scipy 1.11.4 py39h309d312_0
send2trash 1.8.2 py39haa95532_0 anaconda
sentry-sdk 1.9.0 py39haa95532_0
setproctitle 1.2.2 py39h2bbff1b_1004
setuptools 68.2.2 py39haa95532_0
shapely 2.0.2 pypi_0 pypi
sip 6.7.12 py39hd77b12b_0 anaconda
six 1.16.0 pyhd3eb1b0_1 anaconda
smmap 4.0.0 pyhd3eb1b0_0
sniffio 1.2.0 py39haa95532_1 anaconda
soupsieve 2.5 py39haa95532_0 anaconda
sqlalchemy 2.0.25 pypi_0 pypi
sqlite 3.41.2 h2bbff1b_0
stack_data 0.2.0 pyhd3eb1b0_0 anaconda
stringcase 1.2.0 py_0 conda-forge
sympy 1.12 py39haa95532_0
tbb 2021.8.0 h59b6b97_0
tenacity 8.2.3 pyhd8ed1ab_0 conda-forge
terminado 0.17.1 py39haa95532_0 anaconda
threadpoolctl 2.2.0 pyh0d69192_0
tinycss2 1.2.1 py39haa95532_0 anaconda
tomli 2.0.1 py39haa95532_0 anaconda
tornado 6.3.3 py39h2bbff1b_0 anaconda
tqdm 4.66.1 pypi_0 pypi
traitlets 5.7.1 py39haa95532_0 anaconda
types-pytz 2022.4.0.0 py39haa95532_1
typing-extensions 4.9.0 py39haa95532_1
typing_extensions 4.9.0 py39haa95532_1
typing_inspect 0.9.0 pyhd8ed1ab_0 conda-forge
tzdata 2023d h04d1e81_0
ucrt 10.0.20348.0 haa95532_0
urllib3 1.26.18 py39haa95532_0 anaconda
vc 14.3 hcf57466_18 conda-forge
vc14_runtime 14.38.33130 h82b7239_18 conda-forge
vs2015_runtime 14.38.33130 hcb4865c_18 conda-forge
wandb 0.16.2 pyhd8ed1ab_0 conda-forge
wcwidth 0.2.5 pyhd3eb1b0_0 anaconda
webencodings 0.5.1 py39haa95532_1 anaconda
websocket-client 0.58.0 py39haa95532_4 anaconda
werkzeug 3.0.1 pypi_0 pypi
wheel 0.41.2 py39haa95532_0
widgetsnbextension 4.0.5 py39haa95532_0 anaconda
win_inet_pton 1.1.0 py39haa95532_0 anaconda
winpty 0.4.3 4 anaconda
xz 5.4.2 h8cc25b3_0 anaconda
yaml 0.2.5 he774522_0
yarl 1.7.2 py39hb82d6ee_2 conda-forge
zeromq 4.3.4 hd77b12b_0 anaconda
zipp 3.17.0 py39haa95532_0
zlib 1.2.13 h8cc25b3_0 anaconda
zstd 1.5.5 hd43e919_0 anaconda | Exracting images from PDF Error "pypdf.errors.PdfReadError: Not encrypted file" | https://api.github.com/repos/langchain-ai/langchain/issues/17134/comments | 3 | 2024-02-06T22:17:45Z | 2024-07-14T16:05:47Z | https://github.com/langchain-ai/langchain/issues/17134 | 2,121,762,214 | 17,134 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import pinecone
from pinecone import Pinecone, ServerlessSpec
from langchain.vectorstores import Pinecone as Pinecone2
from langchain.embeddings.openai import OpenAIEmbeddings
import os

# We initialize pinecone
pinecone_api_key = os.getenv("PINECONE_API_KEY")
pinecone_env = os.getenv("PINECONE_ENV_KEY")
openai_key = os.getenv("OPENAI_API_KEY")
index_name = "langchain"

# We make an object to initialize Pinecone
class PineconeConnected():
    def __init__(self, index_name: str, pinecone_api_key: str, pinecone_env: str, openai_key: str):
        embeddings = OpenAIEmbeddings(openai_api_key=openai_key)
        self.pinecone = pinecone.Pinecone(api_key=pinecone_api_key, host=pinecone_env)  # Pinecone client
        self.db_Pinecone = Pinecone2.from_existing_index(index_name, embeddings)  # VectorStore object with the reference + Pinecone index loaded

# We instantiate the object
pc1 = PineconeConnected(index_name, pinecone_api_key, pinecone_env, openai_key)
pc1.pinecone(pinecone_api_key, pinecone_env)
db_Pinecone = pc1.db_Pinecone(index_name, embeddings)
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[154], line 22
18 self.pinecone = pinecone.Pinecone(api_key=pinecone_api_key, host=pinecone_env) # VectorStore object with the reference + Pinecone #index loaded
19 self.db_Pinecone = Pinecone2.from_existing_index(index_name, embeddings) # VectorStore object with the reference + Pinecone # index load
---> 22 pc1=PineconeConnected(index_name, pinecone_api_key, pinecone_env ,openai_key)
23 pc1.pinecone(pinecone_api_key, pinecone_env)
24 db_Pinecone = pc1.db_Pinecone(index_name, embeddings)
Cell In[154], line 19, in PineconeConnected.__init__(self, index_name, pinecone_api_key, pinecone_env, openai_key)
17 embeddings = OpenAIEmbeddings(openai_api_key=openai_key)
18 self.pinecone = pinecone.Pinecone(api_key=pinecone_api_key, host=pinecone_env) # VectorStore object with the reference + Pinecone #index loaded
---> 19 self.db_Pinecone = Pinecone2.from_existing_index(index_name, embeddings)
File ~/.local/lib/python3.8/site-packages/langchain/vectorstores/pinecone.py:264, in Pinecone.from_existing_index(cls, index_name, embedding, text_key, namespace)
257 except ImportError:
258 raise ValueError(
259 "Could not import pinecone python package. "
260 "Please install it with `pip install pinecone-client`."
261 )
263 return cls(
--> 264 pinecone.Index(index_name), embedding.embed_query, text_key, namespace
265 )
TypeError: __init__() missing 1 required positional argument: 'host'
```
### Description
It seems that `pinecone.Index`, which is called inside `from_existing_index()`, now requires a `host` argument, but even when a host is supplied to the surrounding code it is never passed through to the `Index` call.
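For anyone hitting this in the meantime, a possible workaround sketch (the v3-client calls below are an assumption, and older `langchain` releases may still type-check the index object — in that case pinning `pinecone-client` to a 2.x release is the simpler route):

```python
# Sketch only: build the Index through the v3 client (which resolves the host
# itself) and hand it directly to the LangChain wrapper, bypassing
# from_existing_index(). Constructor arguments follow the call shown in the
# traceback above.
pc = pinecone.Pinecone(api_key=pinecone_api_key)
index = pc.Index(index_name)
db_pinecone = Pinecone2(index, OpenAIEmbeddings(openai_api_key=openai_key).embed_query, "text")
```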
### System Info
I'm on a Windows 11-PC, with Python version 3.8.10 and langchain version 0.0.184. I tried installing the pinecone_community-library, but it broke my code, and the langchain_pinecone-library it couldn't find. | In the lanchain.vectorstores Pinecone-library, the method from_existing_index() seems broken | https://api.github.com/repos/langchain-ai/langchain/issues/17128/comments | 1 | 2024-02-06T21:25:58Z | 2024-05-14T16:08:56Z | https://github.com/langchain-ai/langchain/issues/17128 | 2,121,692,628 | 17,128 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```bash
cd libs/langchain
make docker_tests
```
### Error Message and Stack Trace (if applicable)
```
=> ERROR [dependencies 2/2] RUN /opt/poetry/bin/poetry install --no-interaction --no-ansi --with test 2.8s
------
> [dependencies 2/2] RUN /opt/poetry/bin/poetry install --no-interaction --no-ansi --with test:
0.879 Path /core for langchain-core does not exist
0.881 Path /core for langchain-core does not exist
0.882 Path /community for langchain-community does not exist
0.884 Path /core for langchain-core does not exist
0.885 Path /community for langchain-community does not exist
0.886 Path /core for langchain-core does not exist
0.886 Path /community for langchain-community does not exist
1.038 Creating virtualenv langchain in /app/.venv
1.684 Installing dependencies from lock file
2.453 Path /community for langchain-community does not exist
2.453 Path /core for langchain-core does not exist
2.479
2.479 Path /core for langchain-core does not exist
------
Dockerfile:34
--------------------
32 |
33 | # Install the Poetry dependencies (this layer will be cached as long as the dependencies don't change)
34 | >>> RUN $POETRY_HOME/bin/poetry install --no-interaction --no-ansi --with test
35 |
36 | # Use a multi-stage build to run tests
--------------------
```
### Description
`make docker_tests` fails because poetry cannot find the `community` and `core` packages. Most likely related to https://github.com/langchain-ai/langchain/discussions/13823 and https://github.com/langchain-ai/langchain/discussions/14243
### System Info
Python Version: `3.9.18` | langchain: broken docker_tests target | https://api.github.com/repos/langchain-ai/langchain/issues/17111/comments | 1 | 2024-02-06T15:24:36Z | 2024-05-14T16:08:50Z | https://github.com/langchain-ai/langchain/issues/17111 | 2,121,053,514 | 17,111 |
[
"hwchase17",
"langchain"
] |
#11740 new openLLM remote client is not working with langchain.llms.OpenLLM
This raises Attribute Error
```AttributeError: 'GenerationOutput' object has no attribute 'responses'```
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
There is no `responses` key in the `GenerationOutput` schema on the OpenLLM GitHub; rather, it has an `outputs` field.

https://github.com/bentoml/OpenLLM/blob/8ffab93d395c9030232b52ab00ed36cb713804e3/openllm-core/src/openllm_core/_schemas.py#L118
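A minimal sketch of the mismatch (the `outputs` attribute comes from the schema linked above; the exact shape of its items is an assumption to verify):

```python
def first_text(result) -> str:
    """Illustration of the attribute mismatch; not the actual wrapper code."""
    # The LangChain wrapper currently expects:  result.responses[0]  -> AttributeError
    # The linked schema instead exposes a list: result.outputs, whose items carry the text
    return result.outputs[0].text  # assumed item shape
```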
_Originally posted by @97k in https://github.com/langchain-ai/langchain/issues/11740#issuecomment-1929779365_
| OpenLLM: GenerationOutput object has no attribute 'responses' | https://api.github.com/repos/langchain-ai/langchain/issues/17108/comments | 5 | 2024-02-06T14:19:16Z | 2024-02-22T11:19:05Z | https://github.com/langchain-ai/langchain/issues/17108 | 2,120,899,339 | 17,108 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores.redis import Redis
import os
embeddings = HuggingFaceEmbeddings(
model_name="sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
    model_kwargs={'device': 'cpu'},
)
metadata = [
{
"user": "john",
"age": 18,
"job": "engineer",
"credit_score": "high",
},
{
"user": "derrick",
"age": 45,
"job": "doctor",
"credit_score": "low",
},
{
"user": "nancy",
"age": 94,
"job": "doctor",
"credit_score": "high",
},
{
"user": "tyler",
"age": 100,
"job": "engineer",
"credit_score": "high",
},
{
"user": "joe",
"age": 35,
"job": "dentist",
"credit_score": "medium",
},
]
texts = ["foo", "foo", "foo", "bar", "bar"]
rds = Redis.from_texts(
texts,
embeddings,
metadatas=metadata,
redis_url="redis://localhost:6379",
index_name="users",
)
results = rds.similarity_search("foo")
```
### Error Message and Stack Trace (if applicable)
```python
ResponseError: Error parsing vector similarity query: query vector blob size (1536) does not match index's expected size (6144).
```
### Description
I was able to successfully use Langchain and Redis vector storage with OpenAIEmbeddings, following the documentation example. However, when I tried the same basic example with different types of embeddings, it didn't work. It appears that Langchain's Redis vector store is only compatible with OpenAIEmbeddings.
### System Info
langchain==0.1.4
langchain-community==0.0.15
langchain-core==0.1.16 | Redis vector store using HuggingFaceEmbeddings | https://api.github.com/repos/langchain-ai/langchain/issues/17107/comments | 1 | 2024-02-06T14:07:20Z | 2024-05-14T16:08:46Z | https://github.com/langchain-ai/langchain/issues/17107 | 2,120,871,807 | 17,107 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_experimental.text_splitter import SemanticChunker
...
text_splitter=SemanticChunker(embeddings)
docs = text_splitter.split_documents(documents)
-> IndexError: index -1 is out of bounds for axis 0 with size 0
### Error Message and Stack Trace (if applicable)
File "/usr/local/lib/python3.10/dist-packages/langchain_experimental/text_splitter.py", line 138, in create_documents
for chunk in self.split_text(text):
File "/usr/local/lib/python3.10/dist-packages/langchain_experimental/text_splitter.py", line 103, in split_text
breakpoint_distance_threshold = np.percentile(
File "/usr/local/lib/python3.10/dist-packages/numpy/lib/function_base.py", line 4283, in percentile
return _quantile_unchecked(
File "/usr/local/lib/python3.10/dist-packages/numpy/lib/function_base.py", line 4555, in _quantile_unchecked
return _ureduce(a,
File "/usr/local/lib/python3.10/dist-packages/numpy/lib/function_base.py", line 3823, in _ureduce
r = func(a, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/numpy/lib/function_base.py", line 4721, in _quantile_ureduce_func
result = _quantile(arr,
File "/usr/local/lib/python3.10/dist-packages/numpy/lib/function_base.py", line 4830, in _quantile
slices_having_nans = np.isnan(arr[-1, ...])
IndexError: index -1 is out of bounds for axis 0 with size 0
### Description
I'm trying the SemanticChunker and noticed that it fails for documents that can't be splitted into multiple sentences.
I guess using the SemanticChunker for such short documents does not really make sense, however it should cause an exception.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.18
> langchain: 0.1.5
> langchain_community: 0.0.17
> langchain_experimental: 0.0.50
> langchain_openai: 0.0.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| SemanticChunker error with single sentence documents | https://api.github.com/repos/langchain-ai/langchain/issues/17106/comments | 1 | 2024-02-06T13:43:40Z | 2024-05-14T16:08:40Z | https://github.com/langchain-ai/langchain/issues/17106 | 2,120,816,919 | 17,106 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.callbacks.base import BaseCallbackHandler
def get_callback(logging_path):
class CustomCallback(BaseCallbackHandler):
def on_llm_start(self, serialized, prompts, **kwargs):
with open(logging_path,'a') as f:
f.write(f"LLM START: {prompts}\n\n")
def on_chat_model_start(self, serialized, messages, **kwargs):
with open(logging_path,'a') as f:
f.write(f"CHAT START: {messages}\n\n")
def on_llm_end(self, response, **kwargs):
with open(logging_path,'a') as f:
f.write(f"LLM END: {response}\n\n")
return CustomCallback()
callback_obj = get_callback("logger.txt")
sql_db = get_database(sql_database_path)
db_chain = SQLDatabaseChain.from_llm(mistral, sql_db, verbose=True,callbacks = [callback_obj])
db_chain.invoke({
"query": "What is the best time of Lance Larson in men's 100 meter butterfly competition?"
})
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The custom callback that I pass when creating the `SQLDatabaseChain` instance is not executing.
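One thing worth checking (a sketch, not a confirmed fix): handlers passed to the chain constructor may not be inherited by the nested LLM run, whereas handlers passed at invocation time are, e.g.:

```python
# Sketch: pass the handler per-invocation so it propagates to the nested LLM call.
# `db_chain` and `callback_obj` are the objects defined in the example above.
result = db_chain.invoke(
    {"query": "What is the best time of Lance Larson in men's 100 meter butterfly competition?"},
    config={"callbacks": [callback_obj]},
)
```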
### System Info
langchain==0.1.2
langchain-community==0.0.14
langchain-core==0.1.14
langchain-experimental==0.0.32
langchainhub==0.1.14
platform linux
python 3.9.13 | Callbacks are not working with SQLDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/17103/comments | 1 | 2024-02-06T12:11:48Z | 2024-02-08T03:29:25Z | https://github.com/langchain-ai/langchain/issues/17103 | 2,120,631,069 | 17,103 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(temperature=0,
openai_api_key = my_api_key,
model_name="gpt-4",
model_kwargs = {"logprobs": True,
"top_logprobs":3})
llm.invoke("Please categorize this text below into positive, negative or neutral: I had a good day")
```
```
# OUTPUT
AIMessage(content='Positive')
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The `logprobs` argument is again settable in OpenAI according to this [official source](https://cookbook.openai.com/examples/using_logprobs) (OpenAI docs):
However, when I try to use it via langchain, it does not exist in the output despite explicitly being set to `True` in `model_kwargs`.
If I put the `logprobs` parameter outside `model_kwargs`, it does show a warning, which gives me confidence that `model_kwargs` is the right place for it.
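As a sanity check, a sketch using the bare `openai` client (with `my_api_key` as in the example above) shows that the API itself does return the log probabilities when asked:

```python
# Minimal sketch: confirm the raw Chat Completions API returns logprobs.
from openai import OpenAI

client = OpenAI(api_key=my_api_key)
resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "I had a good day"}],
    logprobs=True,
    top_logprobs=3,
)
print(resp.choices[0].logprobs)  # token-level logprobs are present here
```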
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22635
> Python Version: 3.11.1 (tags/v3.11.1:a7a450f, Dec 6 2022, 19:58:39) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.18
> langchain: 0.1.5
> langchain_community: 0.0.17
> langchain_google_genai: 0.0.3
> langchain_openai: 0.0.5
> OpenAI: 1.11.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
EDIT: added openai package version. | ChatOpenai logprobs not reported despite being set to True in `model_kwargs` | https://api.github.com/repos/langchain-ai/langchain/issues/17101/comments | 8 | 2024-02-06T11:27:50Z | 2024-05-20T16:08:45Z | https://github.com/langchain-ai/langchain/issues/17101 | 2,120,554,426 | 17,101 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code, https://github.com/langchain-ai/langchain/blob/f027696b5f068bacad96a9356ae196f5de240007/libs/community/langchain_community/vectorstores/milvus.py#L564-L573, is used to assign the keys of `doc.metadata` to each of the collection schema's columns (including the text and embedding columns), but in the preceding code, https://github.com/langchain-ai/langchain/blob/f027696b5f068bacad96a9356ae196f5de240007/libs/community/langchain_community/vectorstores/milvus.py#L550C1-L554C10, those two keys have already been assigned?
### Error Message and Stack Trace (if applicable)
The length of the values taken from the `metadatas` key does not align with the length of the Milvus collection's columns.
### Description

### System Info

| Why assign the value of milvus doc.matadatas to insert dict several times? | https://api.github.com/repos/langchain-ai/langchain/issues/17095/comments | 4 | 2024-02-06T08:43:05Z | 2024-05-14T16:08:30Z | https://github.com/langchain-ai/langchain/issues/17095 | 2,120,246,560 | 17,095 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code produces duplicated examples of `sunny`. The output is correct when FAISS is used instead.
```python
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector  # import paths assumed
from langchain_google_vertexai import VertexAIEmbeddings
from langchain_community.vectorstores import Chroma
examples = [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
{"input": "energetic", "output": "lethargic"},
{"input": "sunny", "output": "gloomy"},
{"input": "slow", "output": "fast"},
{"input": "windy", "output": "calm"},
]
# `example_prompt` was not defined in the snippet; a minimal definition assumed
# from the formatted output shown below:
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)

example_selector3 = SemanticSimilarityExampleSelector.from_examples(
examples,
VertexAIEmbeddings("textembedding-gecko@001"),
Chroma,
k=2,
)
similar_prompt = FewShotPromptTemplate(
example_selector=example_selector3,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
print(similar_prompt.format(adjective="rainny"))
```
The output looks like the following:
```
Give the antonym of every input
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: rainny
Output:
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Below is the output using FAISS. I do not expect the two outputs to be identical, but at the very least the result should not contain duplicates. If we set k=10, it duplicates the examples and gives back more examples than the original list contains.
```
Give the antonym of every input
Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: rainny
Output:
```
Below is the output when k=10
```
Give the antonym of every input
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: rainny
Output:
```
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sat Nov 18 15:31:17 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.18
> langchain: 0.1.5
> langchain_community: 0.0.17
> langchain_google_vertexai: 0.0.3
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | SemanticSimilarityExampleSelector with Chroma return duplicated examples | https://api.github.com/repos/langchain-ai/langchain/issues/17093/comments | 1 | 2024-02-06T08:15:50Z | 2024-05-14T16:08:25Z | https://github.com/langchain-ai/langchain/issues/17093 | 2,120,203,051 | 17,093 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
below's the code
```
%%time
# query = 'how many are injured and dead in christchurch Mosque?'
pdf_file = '/content/documents/Neha Wadikar Resume.pdf'
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
loader = PyPDFLoader(pdf_file)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1200,
chunk_overlap=300)
# Split the document into chunks
texts = text_splitter.split_documents(document)
# Create a vector database for the document
# vectorstore = FAISS.from_documents(texts, embeddings)
vectorstore = Chroma.from_documents(texts, embeddings)
llm = OpenAI(temperature=0)
# Create a retriever for the vector database
document_content_description = "Description of research papers, a resume and a research proposal"
metadata_field_info = [
AttributeInfo(
name="year",
description="The year the document or event occurred, reflecting release, publication, or professional milestone",
type="integer",
),
AttributeInfo(
name="role",
description="The professional role or job title of the individual described in the document",
type="string",
),
AttributeInfo(
name="sector",
description="The industry or sector focus of the professional profile or study",
type="string",
),
AttributeInfo(
name="skills",
description="A list of skills or expertise areas highlighted in the professional profile or research study",
type="string",
),
AttributeInfo(
name="achievements",
description="Key achievements or outcomes described within the document, relevant to professional profiles or research findings",
type="string",
),
AttributeInfo(
name="education",
description="Educational background information, including institutions attended",
type="string",
),
AttributeInfo(
name="volunteer_work",
description="Details of any volunteer work undertaken by the individual in the professional profile",
type="string",
),
AttributeInfo(
name="researcher",
description="The name of the researcher or author of a study",
type="string",
),
AttributeInfo(
name="supervisor",
description="The supervisor or advisor for a research project",
type="string",
),
AttributeInfo(
name="institution",
description="The institution or organization associated with the document, such as a university or research center",
type="string",
),
AttributeInfo(
name="focus",
description="The main focus or subjects of study, research, or professional expertise",
type="string",
),
AttributeInfo(
name="challenges",
description="Specific challenges addressed or faced in the context of the document",
type="string",
),
AttributeInfo(
name="proposed_solutions",
description="Solutions proposed or implemented as described within the document",
type="string",
),
AttributeInfo(
name="journal",
description="The name of the journal where a study was published",
type="string",
),
AttributeInfo(
name="authors",
description="The authors of a research study or paper",
type="string",
),
AttributeInfo(
name="keywords",
description="Key terms or concepts central to the document's content",
type="string",
),
AttributeInfo(
name="results",
description="The main results or findings of a research study, including statistical outcomes",
type="string",
),
AttributeInfo(
name="approach",
description="The approach or methodology adopted in a research study or professional project",
type="string",
),
]
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
use_original_query=False,
verbose=True
)
# print(retriever.get_relevant_documents("What is the title of the proposal"))
# print(retriever.invoke("What are techinical skills in resume"))
# logging.basicConfig(level=logging.INFO)
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# Format the prompt using the template
context = ""
question = "what's the person name, the travel experience, which countries she's been to? If yes, in which year she's been to and did she attend any conferences??"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
As you can see in the code above, I haven't included any `AttributeInfo` related to travel in `metadata_field_info`, yet when I asked the model a travel-related question it returned 'I don't know'. Can you explain how exactly `metadata_field_info` works? Is it mandatory to create a large `metadata_field_info` covering every aspect of the PDF files, so that the retriever can use the `AttributeInfo` entries as a reference and return the answer from the PDF?
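For reference, a small sketch of how to inspect what the retriever actually generates (this assumes a recent LangChain where `SelfQueryRetriever` exposes its `query_constructor` runnable — verify the attribute name against your version):

```python
# Sketch: look at the structured query (semantic query + metadata filter) the LLM
# produces for a question, before any documents are fetched.
structured_query = retriever.query_constructor.invoke(
    {"query": "which countries has she been to, and in which year?"}
)
print(structured_query)  # StructuredQuery(query=..., filter=..., limit=...)
```

Note that any filter produced this way can only reference keys that are actually present in each chunk's `metadata`; the `AttributeInfo` entries describe existing metadata fields rather than extracting new ones from the document text.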
### Idea or request for content:
_No response_ | how metadata_fields_info is helpful in SelfQueryRetrieval? | https://api.github.com/repos/langchain-ai/langchain/issues/17090/comments | 5 | 2024-02-06T06:27:28Z | 2024-02-14T03:34:53Z | https://github.com/langchain-ai/langchain/issues/17090 | 2,120,059,025 | 17,090 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Please write docs for SQLRecordManager
### Idea or request for content:
- Supported databases
- List of methods
- Description of method parameters | DOC: SQLRecordManager documentation does not exist | https://api.github.com/repos/langchain-ai/langchain/issues/17088/comments | 9 | 2024-02-06T05:04:15Z | 2024-08-09T16:07:08Z | https://github.com/langchain-ai/langchain/issues/17088 | 2,119,969,318 | 17,088 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
response = ""
async for token in chain.astream(input=input):
yield token.content
response += token.content
```
### Error Message and Stack Trace (if applicable)
TypeError: NetworkError when attempting to fetch resource.
### Description
I am using the RunnableMap to create a chain for my application related to RAG. The chain is defined as follow:
```python
context = RunnableMap(
{
"context": (
retriever_chain
),
"question": itemgetter("question"),
"chat_history": itemgetter("chat_history"),
}
)
prompt = ChatPromptTemplate.from_messages(
messages=[
("system", SYSTEM_ANSWER_QUESTION_TEMPLATE),
MessagesPlaceholder(variable_name="chat_history"),
("human", "{question}"),
]
)
response_synthesizer = prompt | llm
response_chain = context | response_synthesizer
```
I have create an endpoint with fastapi for stream response for this chain as follow:
```python
@router.post("/workspace/{workspace_id}/docschat/chat")
async def chat(
data: ChatRequest,
) -> StreamingResponse:
try:
### Few codes
async def generate_response():
input = {"question": data.message,
"chat_history": #...
}
response = ""
async for token in chain.astream(input=input):
yield token.content
response += token.content
return StreamingResponse(generate_response(), media_type="text/event-stream")
except Exception as e:
return JSONResponse(status_code=500, content={"message": "Internal server error", "error": str(e)})
```
When I hit the endpoint for the first time, I get successful streaming response. However, successive response freezes the whole api. I am getting "TypeError: NetworkError when attempting to fetch resource." when I use swagger while the api freezes. I have a doubt the async operation from first response did not complete hence causing error in the succesive chain trigger. How do I solve this issue?
### System Info
"pip freeze | grep langchain"
langchain 0.1.0
langchain-community 0.0.12
langchain-core 0.1.17
langchain-experimental 0.0.49
langchain-openai 0.0.5
langchainhub 0.1.14
langchainplus-sdk 0.0.20
platform:
linux
python --version
Python 3.10.10 | NetworkError when attempting to fetch resource with chain.astream | https://api.github.com/repos/langchain-ai/langchain/issues/17087/comments | 1 | 2024-02-06T04:58:16Z | 2024-02-07T03:51:47Z | https://github.com/langchain-ai/langchain/issues/17087 | 2,119,963,524 | 17,087 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
return_op = SQLDatabaseChain.from_llm(
llm,
db_connection,
prompt=few_shot_prompt,
use_query_checker=True,
verbose=True,
return_intermediate_steps=True,
)
### Error Message and Stack Trace (if applicable)
None
### Description
Is there any parameter in SQLDatabaseChain that can make the intermediate_step editable i.e. the SQL code generated by LLM can it be edited before executing the SQL command at the database?
SQLDatabaseChain.from_llm(
llm,
db_connection,
prompt=few_shot_prompt,
use_query_checker=True,
verbose=True,
return_intermediate_steps=True,
)
### System Info
boto3==1.34.29
chromadb==0.4.22
huggingface-hub==0.20.3
langchain==0.1.4
langchain-experimental==0.0.49
pip_audit==2.6.0
pre-commit==3.6.0
pylint==2.17.4
pylint_quotes==0.2.3
pylint_pydantic==0.3.2
python-dotenv==1.0.1
sentence_transformers==2.3.0
snowflake-connector-python==3.7.0
snowflake-sqlalchemy==1.5.1
SQLAlchemy==2.0.25
streamlit==1.30.0
watchdog==3.0.0 | Editable SQL in SQLDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/17071/comments | 3 | 2024-02-05T22:47:34Z | 2024-05-15T16:07:39Z | https://github.com/langchain-ai/langchain/issues/17071 | 2,119,604,240 | 17,071 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The current code does not handle top-level conditions like the example below:
```python
filter = {"or": [{"rating": {"gte": 8.5}}, {"genre": "animated"}]}
retriever = vectorstore.as_retriever(search_kwargs={"filter": filter})
```
The current implementation for this example will not find results.
The current implementation does not support GTE and LTE comparators in pgvector.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
* I am trying to filter documents based on metadata, including top level conditions with 'and', 'or' operators
* I am trying to create more comprehensive metadata filtering with pgvector and to create the base for pgvector self querying.
* I am trying to use GTE and LTE comparators in filter clause
* Code for pgvector can be refactored, using already defined comparators and operators from langchain.chains.query_constructor.ir
### System Info
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
Python 3.10.12 | PGVector filtering improvements to support self-querying | https://api.github.com/repos/langchain-ai/langchain/issues/17064/comments | 1 | 2024-02-05T21:36:46Z | 2024-05-13T16:10:33Z | https://github.com/langchain-ai/langchain/issues/17064 | 2,119,498,678 | 17,064 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code fails if the splitter is left with just one sentence.
```python
from langchain_experimental.text_splitter import SemanticChunker
from langchain.schema import Document
embeddings_model = OpenAIEmbeddings()
docs = [Document(page_content='....')]
splitter = SemanticChunker(embeddings_model)
split_docs = splitter.split_documents(docs)
```
### Error Message and Stack Trace (if applicable)
```
File "/Users/salamanderxing/Documents/gcp-chatbot/chatbot/scraper/build_db.py", line 24, in scrape_and_split_urls
split_docs = tuple(splitter.split_documents(docs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_experimental/text_splitter.py", line 159, in split_documents
return self.create_documents(texts, metadatas=metadatas)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_experimental/text_splitter.py", line 144, in create_documents
for chunk in self.split_text(text):
^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_experimental/text_splitter.py", line 103, in split_text
breakpoint_distance_threshold = np.percentile(
^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/numpy/lib/function_base.py", line 4283, in percentile
return _quantile_unchecked(
^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/numpy/lib/function_base.py", line 4555, in _quantile_unchecked
return _ureduce(a,
^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/numpy/lib/function_base.py", line 3823, in _ureduce
r = func(a, **kwargs)
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/numpy/lib/function_base.py", line 4721, in _quantile_ureduce_func
result = _quantile(arr,
^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/numpy/lib/function_base.py", line 4830, in _quantile
slices_having_nans = np.isnan(arr[-1, ...])
~~~^^^^^^^^^
IndexError: index -1 is out of bounds for axis 0 with size 0
```
### Description
I'm simply using this text splitter, and sometimes it raises this issue. I looked into it and its due to the fact that the splitter is left with only one sentence and tries to compute the np.percentile of the distances of an empty list. Made a PR for this. The issue is unfortunately a bit hard to replicate. I could not provide the text that raised the issue. However, by looking at the source code, it's clear that there is a bug occurring when the splitter is left with only one sentence:
From https://github.com/langchain-ai/langchain/blob/af5ae24af2b32e962adf23d78e59ed505d17fff7/libs/experimental/langchain_experimental/text_splitter.py#L84
```python
def split_text(self, text: str) -> List[str]:
"""Split text into multiple components."""
# Splitting the essay on '.', '?', and '!'
single_sentences_list = re.split(r"(?<=[.?!])\s+", text)
sentences = [
{"sentence": x, "index": i} for i, x in enumerate(single_sentences_list)
]
sentences = combine_sentences(sentences)
embeddings = self.embeddings.embed_documents(
[x["combined_sentence"] for x in sentences]
)
for i, sentence in enumerate(sentences):
sentence["combined_sentence_embedding"] = embeddings[i]
distances, sentences = calculate_cosine_distances(sentences)
start_index = 0
# Create a list to hold the grouped sentences
chunks = []
breakpoint_percentile_threshold = 95
breakpoint_distance_threshold = np.percentile(
distances, breakpoint_percentile_threshold
) # If you want more chunks, lower the percentile cutoff
indices_above_thresh = [
i for i, x in enumerate(distances) if x > breakpoint_distance_threshold
] # The indices of those breakpoints on your list
# Iterate through the breakpoints to slice the sentences
for index in indices_above_thresh:
# The end index is the current breakpoint
end_index = index
# Slice the sentence_dicts from the current start index to the end index
group = sentences[start_index : end_index + 1]
combined_text = " ".join([d["sentence"] for d in group])
chunks.append(combined_text)
# Update the start index for the next group
start_index = index + 1
# The last group, if any sentences remain
if start_index < len(sentences):
combined_text = " ".join([d["sentence"] for d in sentences[start_index:]])
chunks.append(combined_text)
return chunks
```
### System Info
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
langchain-experimental==0.0.50
MacOS 14.3
Python 3.11.7 | IndexError: index -1 is out of bounds for axis 0 with size 0 in langchain_experimental | https://api.github.com/repos/langchain-ai/langchain/issues/17060/comments | 1 | 2024-02-05T21:23:13Z | 2024-02-06T19:53:11Z | https://github.com/langchain-ai/langchain/issues/17060 | 2,119,479,277 | 17,060 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import asyncio
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores.surrealdb import SurrealDBStore
from langchain_openai import OpenAIEmbeddings
async def main():
text = "Here is some sample text with the name test"
texts = RecursiveCharacterTextSplitter(chunk_size=3, chunk_overlap=0).split_text(
text
)
store = await SurrealDBStore.afrom_texts(
texts=texts,
# Can replace with other embeddings
# otherwise OPENAI_API_KEY required
embedding=OpenAIEmbeddings(),
dburl="ws://localhost:8000/rpc",
db_user="root",
db_pass="root",
)
retriever = store.as_retriever()
# Throws TypeError: 'NoneType' object is not a mapping
docs = await retriever.aget_relevant_documents("What is the name of this text?")
print(docs)
asyncio.run(main())
```
### Error Message and Stack Trace (if applicable)
```sh
Traceback (most recent call last):
File "/workspaces/core-service/core_service/recreate_issue.py", line 32, in <module>
asyncio.run(main())
File "/usr/local/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/asyncio/base_events.py", line 684, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/workspaces/core-service/core_service/recreate_issue.py", line 28, in main
docs = await retriever.aget_relevant_documents("What is the name of this text?")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vscode/.cache/pypoetry/virtualenvs/core-service-Td9uKyt_-py3.12/lib/python3.12/site-packages/langchain_core/retrievers.py", line 280, in aget_relevant_documents
raise e
File "/home/vscode/.cache/pypoetry/virtualenvs/core-service-Td9uKyt_-py3.12/lib/python3.12/site-packages/langchain_core/retrievers.py", line 273, in aget_relevant_documents
result = await self._aget_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vscode/.cache/pypoetry/virtualenvs/core-service-Td9uKyt_-py3.12/lib/python3.12/site-packages/langchain_core/vectorstores.py", line 674, in _aget_relevant_documents
docs = await self.vectorstore.asimilarity_search(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vscode/.cache/pypoetry/virtualenvs/core-service-Td9uKyt_-py3.12/lib/python3.12/site-packages/langchain_community/vectorstores/surrealdb.py", line 380, in asimilarity_search
return await self.asimilarity_search_by_vector(query_embedding, k, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vscode/.cache/pypoetry/virtualenvs/core-service-Td9uKyt_-py3.12/lib/python3.12/site-packages/langchain_community/vectorstores/surrealdb.py", line 343, in asimilarity_search_by_vector
for document, _ in await self._asimilarity_search_by_vector_with_score(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vscode/.cache/pypoetry/virtualenvs/core-service-Td9uKyt_-py3.12/lib/python3.12/site-packages/langchain_community/vectorstores/surrealdb.py", line 236, in _asimilarity_search_by_vector_with_score
metadata={"id": result["id"], **result["metadata"]},
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not a mapping
```
### Description
I am trying to use the `langchain_commnity.vectorstores.surrealdb` module. The `metadatas` property on the `SurrealDBStore.afrom_texts` is optional int the typing for this class but when used as a retriever it errors unless proper metadata is provided.
This is happening because a `None` check is missing here and it's instead trying to map a `None` type which is the default:
https://github.com/langchain-ai/langchain/blob/75b6fa113462fd5736fba830ada5a4c886cf4ad5/libs/community/langchain_community/vectorstores/surrealdb.py#L236
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.12.1 (main, Dec 19 2023, 20:23:36) [GCC 10.2.1 20210110]
Package Information
-------------------
> langchain_core: 0.1.18
> langchain: 0.1.3
> langchain_community: 0.0.15
> langchain_openai: 0.0.5 | Using SurrealDB as a vector store with no metadatas throws a TypeError | https://api.github.com/repos/langchain-ai/langchain/issues/17057/comments | 1 | 2024-02-05T20:29:47Z | 2024-04-01T00:45:15Z | https://github.com/langchain-ai/langchain/issues/17057 | 2,119,403,982 | 17,057 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Currently we have many "type: ignore" comments in our code for ignoring mypy errors. We should rarely if ever use these. We've had to recently add a bunch of these ignore comments because of an error (i made) that was silently causing mypy not to run in CI.
We should work to remove as many of them as possible by fixing the underlying issues. To find the you can just grep for:
```bash
git grep "type: ignore" libs/
```
This is a big effort, even just removing a few at a time would be very helpful. | Remove "type: ignore" comments | https://api.github.com/repos/langchain-ai/langchain/issues/17048/comments | 4 | 2024-02-05T19:26:32Z | 2024-04-04T14:22:40Z | https://github.com/langchain-ai/langchain/issues/17048 | 2,119,302,272 | 17,048 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.vectorstores.elasticsearch import ElasticsearchStore
from langchain.embeddings.huggingface import HuggingFaceBgeEmbeddings
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chat_models.openai import ChatOpenAI
vectorstore = ElasticsearchStore(
embedding=HuggingFaceBgeEmbeddings(
model_name="BAAI/bge-small-en-v1.5",
model_kwargs={"device": "cpu"},
encode_kwargs={"normalize_embeddings": True},
),
index_name="z-index",
es_url="http://localhost:9200",
)
metadata_field_info = [
...,
AttributeInfo(
name="update_date",
description="Date when the document was last updated",
type="string",
),
...
]
document_content = "an abstract of the document"
retriever = SelfQueryRetriever.from_llm(
ChatOpenAI(temperature=0, api_key=KEY, max_retries=20),
vectorstore,
document_content,
metadata_field_info,
verbose=True,
enable_limit=True
)
r = retriever.invoke("give me all documents in the last two days?")
print(r)
```
### Error Message and Stack Trace (if applicable)
r = retriever.invoke("give me all documents in the last two days?")
File "/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py", line 121, in invoke
return self.get_relevant_documents(
File "/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py", line 224, in get_relevant_documents
raise e
File "/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py", line 217, in get_relevant_documents
result = self._get_relevant_documents(
File "/usr/local/lib/python3.10/dist-packages/langchain/retrievers/self_query/base.py", line 171, in _get_relevant_documents
docs = self._get_docs_with_query(new_query, search_kwargs)
File "/usr/local/lib/python3.10/dist-packages/langchain/retrievers/self_query/base.py", line 145, in _get_docs_with_query
docs = self.vectorstore.search(query, self.search_type, **search_kwargs)
File "/usr/local/lib/python3.10/dist-packages/langchain_core/vectorstores.py", line 139, in search
return self.similarity_search(query, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/elasticsearch.py", line 632, in similarity_search
results = self._search(
File "/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/elasticsearch.py", line 815, in _search
response = self.client.search(
File "/usr/local/lib/python3.10/dist-packages/elasticsearch/_sync/client/utils.py", line 402, in wrapped
return api(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/elasticsearch/_sync/client/__init__.py", line 3733, in search
return self.perform_request( # type: ignore[return-value]
File "/usr/local/lib/python3.10/dist-packages/elasticsearch/_sync/client/_base.py", line 320, in perform_request
raise HTTP_EXCEPTIONS.get(meta.status, ApiError)(
**elasticsearch.BadRequestError: BadRequestError(400, 'x_content_parse_exception', '[range] query does not support [date]')**
### Description
The ElasticsearchTranslator should not put comparison value in the field directly since it cause a syntax error in the query, instead if it's a date it should put the value of the date (just like in the issue #16022)
### System Info
System Information
------------------
> OS: Linux
> OS Version: #15~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Jan 12 18:54:30 UTC 2
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.18
> langchain: 0.1.5
> langchain_community: 0.0.17
> langserve: 0.0.37
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | ElasticsearchTranslator generating invalid queries for Date type | https://api.github.com/repos/langchain-ai/langchain/issues/17042/comments | 2 | 2024-02-05T15:39:52Z | 2024-02-13T20:26:40Z | https://github.com/langchain-ai/langchain/issues/17042 | 2,118,854,903 | 17,042 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
below's the code
```
pdf_file = '/content/documents/Pre-proposal students.pdf'
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
# Load the PDF file
loader = PyPDFLoader(pdf_file)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
# Split the document into chunks
texts = text_splitter.split_documents(document)
vectorstore = Chroma.from_documents(texts, embeddings)
llm = OpenAI(temperature=0)
# Create a retriever for the vector database
document_content_description = "Description of research papers and research proposal"
metadata_field_info = [
AttributeInfo(
name="title",
description="The title of the research paper.",
type="string",
),
AttributeInfo(
name="institution",
description="The name of the institution or university associated with the research.",
type="string",
),
AttributeInfo(
name="year",
description="The year the research was published.",
type="integer",
),
AttributeInfo(
name="abstract",
description="A brief summary of the research paper.",
type="string",
),
AttributeInfo(
name="methodology",
description="The main research methods used in the study.",
type="string",
),
AttributeInfo(
name="findings",
description="A brief description of the main findings of the research.",
type="string",
),
AttributeInfo(
name="implications",
description="The implications of the research findings.",
type="string",
),
AttributeInfo(
name="reference_count",
description="The number of references cited in the research paper.",
type="integer",
),
AttributeInfo(
name="doi",
description="The Digital Object Identifier for the research paper.",
type="string",
),
]
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True
)
# retriever.get_relevant_documents("What is the title of the proposal")
# logging.basicConfig(level=logging.INFO)
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
retriever.get_relevant_documents("main research method")
```
below's the output
`[Document(page_content='Training and evaluation corpora inlow-resource\nlanguages may notbeaseffective due tothepaucity of\ndata.\n3.Create acentral dialect tomediate between the\nvarious Gondi dialects, which can beused asa\nstandard language forallGondi speakers.\n4.Low BLEU scores formachine translation model :\nThere isaneed forbetter methods oftraining and\nevaluating machine translation models.\nPOS Tagging\nData Collection', metadata={'page': 0, 'source': '/content/documents/Pre-proposal PhD students.pdf'}))]`
where as in the langchain selfQueryRetriver documentation, below's the output which has been shown
`StructuredQuery(query='taxi driver', filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2000)]), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Luc Besson')]), limit=None)`
where i can see the query above which is classified as taxi driver
### Idea or request for content:
_No response_ | now showing query field when trying to retrieve the documents using SelfQueryRetriver | https://api.github.com/repos/langchain-ai/langchain/issues/17040/comments | 1 | 2024-02-05T14:49:58Z | 2024-02-14T03:34:52Z | https://github.com/langchain-ai/langchain/issues/17040 | 2,118,743,981 | 17,040 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I am going through the examples in the documentation in a Jupyter Lab notebook. I'm running the code from [here](https://python.langchain.com/docs/expression_language/get_started):
```
# Requires:
# pip install langchain docarray tiktoken
from langchain_community.vectorstores import DocArrayInMemorySearch
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_openai.chat_models import ChatOpenAI
from langchain_openai.embeddings import OpenAIEmbeddings
vectorstore = DocArrayInMemorySearch.from_texts(
["harrison worked at kensho", "bears like to eat honey"],
embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI()
output_parser = StrOutputParser()
setup_and_retrieval = RunnableParallel(
{"context": retriever, "question": RunnablePassthrough()}
)
chain = setup_and_retrieval | prompt | model | output_parser
chain.invoke("where did harrison work?")
```
I'm getting this error message on the chain.invoke():
```
ValidationError: 2 validation errors for DocArrayDoc
text
Field required [type=missing, input_value={'embedding': [-0.0192381..., 0.010137099064823456]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/missing
metadata
Field required [type=missing, input_value={'embedding': [-0.0192381..., 0.010137099064823456]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/missing
```
Is the code in the docs obsolete or is this a problem of my setup? I'm using langchain 0.1.5 and Python 3.11.
### Idea or request for content:
_No response_ | DOC: RAG Search example validation error | https://api.github.com/repos/langchain-ai/langchain/issues/17039/comments | 4 | 2024-02-05T14:23:15Z | 2024-02-06T12:05:14Z | https://github.com/langchain-ai/langchain/issues/17039 | 2,118,686,216 | 17,039 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
below's the code
```
%%time
# query = 'how many are injured and dead in christchurch Mosque?'
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
# Assuming `pdf_files` is a list of your PDF files
for pdf_file in pdf_files:
# Load the PDF file
loader = PyPDFLoader(pdf_file)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
# Split the document into chunks
texts = text_splitter.split_documents(document)
# Create a vector database for the document
# vectorstore = FAISS.from_documents(texts, embeddings)
vectorstore = Chroma.from_documents(texts, embeddings)
llm = OpenAI(temperature=0.2)
# Create a retriever for the vector database
document_content_description = "Description of research papers"
metadata_field_info = [
AttributeInfo(
name="title",
description="The title of the research paper.",
type="string",
),
AttributeInfo(
name="institution",
description="The name of the institution or university associated with the research.",
type="string",
),
AttributeInfo(
name="year",
description="The year the research was published.",
type="integer",
),
AttributeInfo(
name="abstract",
description="A brief summary of the research paper.",
type="string",
),
AttributeInfo(
name="methodology",
description="The main research methods used in the study.",
type="string",
),
AttributeInfo(
name="findings",
description="A brief description of the main findings of the research.",
type="string",
),
AttributeInfo(
name="implications",
description="The implications of the research findings.",
type="string",
),
AttributeInfo(
name="reference_count",
description="The number of references cited in the research paper.",
type="integer",
),
AttributeInfo(
name="doi",
description="The Digital Object Identifier for the research paper.",
type="string",
),
]
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
verbose=True
)
logging.basicConfig(level=logging.INFO)
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True,)
# # Use the chain to answer a question
# query = "how many are injured and dead in christchurch Mosque?"
# llm_response = qa_chain(query)
# process_llm_response(llm_response)
# Format the prompt using the template
context = ""
question = "how many are injured and dead in christchurch Mosque?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
below's the output of above code
```
I'm sorry, I don't have any information about the Christchurch Mosque incident.
Sources:
/content/10.pdf
51 dead and 49 injured.
Sources:
/content/11.pdf
51 dead and 49 injured
Sources:
/content/11.pdf
CPU times: user 4.38 s, sys: 79.8 ms, total: 4.46 s
Wall time: 8.9 s
```
in the above output if you see it has returned the same answer twice from the same document. How to fix this? Is there any issue with Chroma vector database?
### Idea or request for content:
_No response_ | Chroma db repeating same data and output which is irreleavant | https://api.github.com/repos/langchain-ai/langchain/issues/17038/comments | 1 | 2024-02-05T14:03:23Z | 2024-02-14T03:34:52Z | https://github.com/langchain-ai/langchain/issues/17038 | 2,118,642,283 | 17,038 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Below's the code
```
%%time
# query = 'how many are injured and dead in christchurch Mosque?'
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
# Assuming `pdf_files` is a list of your PDF files
for pdf_file in pdf_files:
# Load the PDF file
loader = PyPDFLoader(pdf_file)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
# Split the document into chunks
texts = text_splitter.split_documents(document)
# Create a vector database for the document
# vectorstore = FAISS.from_documents(texts, embeddings)
vectorstore = Chroma.from_documents(texts, embeddings)
llm = OpenAI(temperature=0.2)
# Create a retriever for the vector database
document_content_description = "Description of research papers"
metadata_field_info = [
AttributeInfo(
name="title",
description="The title of the research paper.",
type="string",
),
AttributeInfo(
name="institution",
description="The name of the institution or university associated with the research.",
type="string",
),
AttributeInfo(
name="year",
description="The year the research was published.",
type="integer",
),
AttributeInfo(
name="abstract",
description="A brief summary of the research paper.",
type="string",
),
AttributeInfo(
name="methodology",
description="The main research methods used in the study.",
type="string",
),
AttributeInfo(
name="findings",
description="A brief description of the main findings of the research.",
type="string",
),
AttributeInfo(
name="implications",
description="The implications of the research findings.",
type="string",
),
AttributeInfo(
name="reference_count",
description="The number of references cited in the research paper.",
type="integer",
),
AttributeInfo(
name="doi",
description="The Digital Object Identifier for the research paper.",
type="string",
),
]
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
verbose=True
)
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# # Use the chain to answer a question
# llm_response = qa_chain(query)
# process_llm_response(llm_response)
# Format the prompt using the template
context = ""
question = "how many are injured and dead in christchurch Mosque?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
The above code returns output like the following:
```
I'm sorry, I don't have enough information to answer your question.
Sources:
/content/11.pdf
51 dead and 49 injured.
Sources:
/content/10.pdf
I'm sorry, I don't have enough context to answer this question.
Sources:
/content/110.pdf
CPU times: user 4.12 s, sys: 68.5 ms, total: 4.19 s
Wall time: 9.8 s
```
How can I print the query that is self-generated by the SelfQueryRetriever?
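For reference, a sketch of the two ways I think this can be inspected (not fully verified on my version): the retriever logs the structured query at INFO level, and the underlying query constructor can be invoked directly.

```python
import logging

# 1) I believe the generated structured query is logged under this namespace.
logging.basicConfig()
logging.getLogger("langchain.retrievers.self_query").setLevel(logging.INFO)

# 2) Or call the query constructor directly to see the generated StructuredQuery
#    (assuming `query_constructor` is exposed on SelfQueryRetriever in this version).
structured_query = retriever.query_constructor.invoke(
    {"query": "how many are injured and dead in christchurch Mosque?"}
)
print(structured_query)
```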
### Idea or request for content:
_No response_ | how to print the self generated queries by SelfQueryRetriever? | https://api.github.com/repos/langchain-ai/langchain/issues/17037/comments | 1 | 2024-02-05T13:44:44Z | 2024-02-14T03:34:52Z | https://github.com/langchain-ai/langchain/issues/17037 | 2,118,601,820 | 17,037 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
below's the code which is using multiqueryretrieval
```
%%time
# query = 'how many are injured and dead in christchurch Mosque?'
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.chat_models import ChatOpenAI
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
# Assuming `pdf_files` is a list of your PDF files
for pdf_file in pdf_files:
# Load the PDF file
loader = PyPDFLoader(pdf_file)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
# Split the document into chunks
texts = text_splitter.split_documents(document)
# Create a vector database for the document
# vectorStore = FAISS.from_documents(texts, instructor_embeddings)
vectorstore = FAISS.from_documents(texts, embeddings)
# vectorstore = Chroma.from_documents(texts, embeddings)
llm = OpenAI(temperature=0.2)
retriever = MultiQueryRetriever.from_llm(retriever=vectorstore.as_retriever(), llm=llm)
# docs = retriever.get_relevant_documents(query="how many are injured and dead in christchurch Mosque?")
# print(docs)
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# # Use the chain to answer a question
# llm_response = qa_chain(query)
# process_llm_response(llm_response)
# Format the prompt using the template
context = ""
question = "how many are injured and dead in christchurch Mosque?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
I just want to know which queries MultiQueryRetriever generated. How do I get those?
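For reference, the approach I believe the docs suggest (minimal sketch) is to enable INFO logging for the multi-query retriever, which prints the generated query variants:

```python
import logging

logging.basicConfig()
logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)

# After this, retrieval calls should log the alternative queries that were generated.
docs = retriever.get_relevant_documents(
    query="how many are injured and dead in christchurch Mosque?"
)
```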
### Idea or request for content:
_No response_ | how to get the generated queries output? | https://api.github.com/repos/langchain-ai/langchain/issues/17034/comments | 3 | 2024-02-05T12:10:34Z | 2024-02-14T03:34:51Z | https://github.com/langchain-ai/langchain/issues/17034 | 2,118,387,709 | 17,034 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
llm = ChatVertexAI(model_name="gemini-pro", convert_system_message_to_human=True, temperature=0)
msgs = [SystemMessage(content="Use the following optional pieces of information to fullfil the user's request in French and in markdown format.\n\nPotentially Useful Information:\n\nQuestion: Qu'est-ce qu'une question débile ?"), HumanMessage(content="Qu'est-ce qu'une question débile ?")]
llm.invoke(msgs)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 165, in invoke
self.generate_prompt(
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 543, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 407, in generate
raise e
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 397, in generate
self._generate_with_cache(
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 576, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/langchain_google_vertexai/chat_models.py", line 375, in _generate
response = chat.send_message(
^^^^^^^^^^^^^^^^^^
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py", line 709, in send_message
return self._send_message(
^^^^^^^^^^^^^^^^^^^
File "/workspaces/metabot-backend/.venv/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py", line 805, in _send_message
raise ResponseBlockedError(
vertexai.generative_models._generative_models.ResponseBlockedError: The response was blocked.
### Description
The code snippet above tries to use the Gemini Pro model through the LangChain library to answer a question in French. However, it fails because Gemini Pro blocked the response through its safety feature (ResponseBlockedError), which LangChain doesn't currently handle.
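If it helps, a possible mitigation I am considering (a sketch only; I have not confirmed that my installed langchain_google_vertexai version exposes these settings) is to pass explicit safety settings so the model is less likely to block the response:

```python
# Sketch, unverified against my installed version: relax the blocking thresholds.
from langchain_google_vertexai import ChatVertexAI, HarmBlockThreshold, HarmCategory

safety_settings = {
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
}

llm = ChatVertexAI(
    model_name="gemini-pro",
    convert_system_message_to_human=True,
    temperature=0,
    safety_settings=safety_settings,
)
```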
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.11.7 (main, Jan 26 2024, 08:55:53) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.10
> langchain: 0.1.0
> langchain_community: 0.0.12
> langserve: Not Found | Vertex AI Gemini Pro doesn't handle safety measures | https://api.github.com/repos/langchain-ai/langchain/issues/17032/comments | 2 | 2024-02-05T10:49:05Z | 2024-06-08T16:09:36Z | https://github.com/langchain-ai/langchain/issues/17032 | 2,118,222,920 | 17,032 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
below's the code
```
import os
os.environ["OPENAI_API_KEY"] = ""
from langchain.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders import DirectoryLoader
from langchain.embeddings import OpenAIEmbeddings
import pickle
import faiss
from langchain.vectorstores import FAISS
# InstructorEmbedding
from InstructorEmbedding import INSTRUCTOR
from langchain.embeddings import HuggingFaceInstructEmbeddings
# OpenAI Embedding
from langchain.embeddings import OpenAIEmbeddings
from langchain.embeddings import HuggingFaceInstructEmbeddings
# from langchain_community.embeddings import HuggingFaceInstructEmbeddings
from InstructorEmbedding import INSTRUCTOR
import textwrap
root_dir = "/content/data"
pdf_files = ['11.pdf', '12.pdf', '13.pdf']
instructor_embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl",
model_kwargs={"device": "cuda"})
def store_embeddings(docs, embeddings, sotre_name, path):
vectorStore = FAISS.from_documents(docs, embeddings)
with open(f"{path}/faiss_{sotre_name}.pkl", "wb") as f:
pickle.dump(vectorStore, f)
def load_embeddings(sotre_name, path):
with open(f"{path}/faiss_{sotre_name}.pkl", "rb") as f:
VectorStore = pickle.load(f)
return VectorStore
embeddings = OpenAIEmbeddings()
def wrap_text_preserve_newlines(text, width=110):
# Split the input text into lines based on newline characters
lines = text.split('\n')
# Wrap each line individually
wrapped_lines = [textwrap.fill(line, width=width) for line in lines]
# Join the wrapped lines back together using newline characters
wrapped_text = '\n'.join(wrapped_lines)
return wrapped_text
# def process_llm_response(llm_response):
# print(wrap_text_preserve_newlines(llm_response['result']))
# print('\nSources:')
# for source in llm_response["source_documents"]:
# print(source.metadata['source'])
def process_llm_response(llm_response):
print(wrap_text_preserve_newlines(llm_response['result']))
print('\nSources:')
if llm_response["source_documents"]:
# Access the first source document
first_source = llm_response["source_documents"][0]
source_name = first_source.metadata['source']
# row_number = first_source.metadata.get('row', 'Not specified')
# Print the first source's file name and row number
print(f"{source_name}")
print("\n")
else:
print("No sources available.")
# query = 'how many are injured and dead in christchurch Mosque?'
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
# Assuming `pdf_files` is a list of your PDF files
for pdf_file in pdf_files:
# Load the PDF file
loader = PyPDFLoader(pdf_file)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
# Split the document into chunks
texts = text_splitter.split_documents(document)
# Create a vector database for the document
# vectorStore = FAISS.from_documents(texts, instructor_embeddings)
vectorStore = FAISS.from_documents(texts, embeddings)
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# # Use the chain to answer a question
# llm_response = qa_chain(query)
# process_llm_response(llm_response)
# Format the prompt using the template
context = ""
question = "how many are injured and dead in christchurch Mosque?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
In the above code, how do I add self-query retrieval?
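In case it clarifies what I am aiming for, here is a rough sketch of how I imagine wiring it into the loop above (unverified; I used Chroma for the per-file store because I am not sure the self-query translator supports FAISS, and the metadata fields below are just the ones PyPDFLoader provides):

```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever

# Requires the `lark` package for the structured-query parser.
document_content_description = "Chunks of text extracted from research papers"
metadata_field_info = [
    AttributeInfo(name="source", description="The file the chunk came from.", type="string"),
    AttributeInfo(name="page", description="The page number of the chunk.", type="integer"),
]

vectorstore = Chroma.from_documents(texts, embeddings)

retriever = SelfQueryRetriever.from_llm(
    OpenAI(temperature=0.2),
    vectorstore,
    document_content_description,
    metadata_field_info,
    verbose=True,
)

qa_chain = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0.2),
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
)
```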
### Idea or request for content:
_No response_ | how to add self query retrieval? | https://api.github.com/repos/langchain-ai/langchain/issues/17031/comments | 8 | 2024-02-05T09:12:55Z | 2024-07-02T15:32:48Z | https://github.com/langchain-ai/langchain/issues/17031 | 2,118,031,252 | 17,031 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
below's the code
```
import os
os.environ["OPENAI_API_KEY"] = ""
from langchain.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders import DirectoryLoader
from langchain.embeddings import OpenAIEmbeddings
import pickle
import faiss
from langchain.vectorstores import FAISS
# InstructorEmbedding
from InstructorEmbedding import INSTRUCTOR
from langchain.embeddings import HuggingFaceInstructEmbeddings
# OpenAI Embedding
from langchain.embeddings import OpenAIEmbeddings
from langchain.embeddings import HuggingFaceInstructEmbeddings
# from langchain_community.embeddings import HuggingFaceInstructEmbeddings
from InstructorEmbedding import INSTRUCTOR
import textwrap
root_dir = "/content/data"
pdf_files = ['/content/documents/11.pdf', '10.pdf', '12.pdf']
instructor_embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl",
model_kwargs={"device": "cuda"})
def store_embeddings(docs, embeddings, sotre_name, path):
vectorStore = FAISS.from_documents(docs, embeddings)
with open(f"{path}/faiss_{sotre_name}.pkl", "wb") as f:
pickle.dump(vectorStore, f)
def load_embeddings(sotre_name, path):
with open(f"{path}/faiss_{sotre_name}.pkl", "rb") as f:
VectorStore = pickle.load(f)
return VectorStore
embeddings = OpenAIEmbeddings()
def wrap_text_preserve_newlines(text, width=110):
# Split the input text into lines based on newline characters
lines = text.split('\n')
# Wrap each line individually
wrapped_lines = [textwrap.fill(line, width=width) for line in lines]
# Join the wrapped lines back together using newline characters
wrapped_text = '\n'.join(wrapped_lines)
return wrapped_text
# def process_llm_response(llm_response):
# print(wrap_text_preserve_newlines(llm_response['result']))
# print('\nSources:')
# for source in llm_response["source_documents"]:
# print(source.metadata['source'])
def process_llm_response(llm_response):
print(wrap_text_preserve_newlines(llm_response['result']))
print('\nSources:')
if llm_response["source_documents"]:
# Access the first source document
first_source = llm_response["source_documents"][0]
source_name = first_source.metadata['source']
# row_number = first_source.metadata.get('row', 'Not specified')
# Print the first source's file name and row number
print(f"{source_name}")
print("\n")
else:
print("No sources available.")
query = 'how many are injured and dead in christchurch Mosque?'
# Assuming `pdf_files` is a list of your PDF files
for pdf_file in pdf_files:
# Load the PDF file
loader = PyPDFLoader(pdf_file)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
# Split the document into chunks
texts = text_splitter.split_documents(document)
# Create a vector database for the document
# vectorStore = FAISS.from_documents(texts, instructor_embeddings)
vectorStore = FAISS.from_documents(texts, embeddings)
# Create a retriever for the vector database
retriever = vectorStore.as_retriever(search_kwargs={"k": 5})
# Create a chain to answer questions
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# Use the chain to answer a question
llm_response = qa_chain(query)
process_llm_response(llm_response)
```
and below's the prompt_template
```
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
```
Can you assist me with how to integrate prompt_template into the existing code?
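For reference, the pattern I believe the docs recommend (a minimal sketch reusing `prompt_template` and `retriever` from above) is to wrap the template in a PromptTemplate and pass it through `chain_type_kwargs`, instead of formatting the question into the prompt manually:

```python
from langchain.prompts import PromptTemplate

PROMPT = PromptTemplate(
    template=prompt_template,
    input_variables=["context", "question"],
)

qa_chain = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0.2),
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
    chain_type_kwargs={"prompt": PROMPT},
)

llm_response = qa_chain("how many are injured and dead in christchurch Mosque?")
process_llm_response(llm_response)
```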
### Idea or request for content:
_No response_ | how to add prompt template to RetrivealQA function? | https://api.github.com/repos/langchain-ai/langchain/issues/17029/comments | 6 | 2024-02-05T08:00:36Z | 2024-02-14T03:34:51Z | https://github.com/langchain-ai/langchain/issues/17029 | 2,117,905,289 | 17,029 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Here is my code, which uses ConversationBufferMemory to store the conversation history:

```python
os.environ['OPENAI_API_KEY'] = openapi_key

# Define connection parameters using constants
from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"

# Create an engine to connect to the SQL database
engine = create_engine(connection_uri)

model_name="gpt-3.5-turbo-16k"
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
model_name="gpt-3.5-turbo-instruct"

db = SQLDatabase(engine, view_support=True, include_tables=['RND360_ChatGPT_BankView'])
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
db_chain = None
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)

PROMPT = """
Given an input question, create a syntactically correct MSSQL query by considering only the matching column names from the question,
then look at the results of the query and return the answer.
If a column name is not present, refrain from writing the SQL query. column like UAN number, PF number are not not present do not consider such columns.
Write the query only for the column names which are present in view.
Execute the query and analyze the results to formulate a response.
Return the answer in sentence form.
The question: {question}
"""

PROMPT_SUFFIX = """Only use the following tables:
{table_info}
Previous Conversation:
{history}
Question: {input}"""

_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run,
then look at the results of the query and return the answer.
Unless the user specifies in his question a specific number of examples he wishes to obtain,
If a column name is not present, refrain from writing the SQL query. column like UAN number, PF number are not not present do not consider such columns.
You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for a the few relevant columns given the question.
Pay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.
Return the answer in user friendly form.
"""

PROMPT = PromptTemplate.from_template(_DEFAULT_TEMPLATE + PROMPT_SUFFIX)

memory = None

# Define a function named chat that takes a question and SQL format indicator as input
def chat1(question):
    global db_chain
    global memory

    if memory == None:
        # llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
        memory = ConversationBufferMemory()
        # db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, memory=memory)
        db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, memory=memory)

    # while True:
    #     try:
    #         print("*****")
    #         print(memory.load_memory_variables({})['history'])
    #         question = input("Enter your Question : ")

    greetings = ["hi", "hello", "hey"]
    if any(greeting == question.lower() for greeting in greetings):
        print(question)
        print("Hello! How can I assist you today?")
        return "Hello! How can I assist you today?"
    else:
        answer = db_chain.run(question)
        print(memory.load_memory_variables({}))
        return answer
```
### Error Message and Stack Trace (if applicable)
Entering new SQLDatabaseChain chain...
what is jyothi employee id
SQLQuery:SELECT EmployeeID FROM EGV_emp_departments_ChatGPT WHERE EmployeeName = 'Jyothi'
SQLResult: [('AD23020933',)]
Answer:Jyothi's employee ID is AD23020933.
Finished chain.
{'history': "Human: what is jyothi employee id\nAI: Jyothi's employee ID is AD23020933."}
Jyothi's employee ID is AD23020933.
127.0.0.1 - - [05/Feb/2024 11:01:28] "GET /get_answer?questions=what%20is%20jyothi%20employee%20id HTTP/1.1" 200 -
what is her mail id
Entering new SQLDatabaseChain chain...
what is her mail id
SQLQuery:SELECT UserMail
FROM EGV_emp_departments_ChatGPT
WHERE EmployeeName = 'Jyothi'
Answer: Jyothi's email ID is [[email protected]](mailto:[email protected]).[2024-02-05 11:01:45,039] ERROR in app: Exception on /get_answer [GET]
Traceback (most recent call last):
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\default.py", line 922, in do_execute
cursor.execute(statement, parameters)
pyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near 'Jyothi'. (102) (SQLExecDirectW); [42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Unclosed quotation mark after the character string 's email ID is [[email protected]](mailto:[email protected]).'. (105)")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\flask\app.py", line 1455, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\flask\app.py", line 869, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\flask_cors\extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\flask\app.py", line 867, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\flask\app.py", line 852, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\rndbcpsoft\OneDrive\Desktop\test\flask8.py", line 46, in generate_answer
answer = chat1(questions)
^^^^^^^^^^^^^^^^
File "c:\Users\rndbcpsoft\OneDrive\Desktop\test\main5.py", line 180, in chat1
answer = db_chain.run(question)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\chains\base.py", line 505, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\chains\base.py", line 310, in call
raise e
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\chains\base.py", line 304, in call
self._call(inputs, run_manager=run_manager)
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain_experimental\sql\base.py", line 208, in _call
raise exc
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain_experimental\sql\base.py", line 143, in _call
result = self.database.run(sql_cmd)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\utilities\sql_database.py", line 433, in run
result = self._execute(command, fetch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\langchain\utilities\sql_database.py", line 411, in _execute
cursor = connection.execute(text(command))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1416, in execute
return meth(
^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\sql\elements.py", line 516, in _execute_on_connection
return connection._execute_clauseelement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1639, in _execute_clauseelement
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1848, in _execute_context
return self._exec_single_context(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1988, in _exec_single_context
self._handle_dbapi_exception(
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 2343, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "C:\Users\rndbcpsoft\OneDrive\Desktop\test\envtest\Lib\site-packages\sqlalchemy\engine\default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near 'Jyothi'. (102) (SQLExecDirectW); [42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Unclosed quotation mark after the character string 's email ID is [[email protected]](mailto:[email protected]).'. (105)")
[SQL: SELECT UserMail
FROM EGV_emp_departments_ChatGPT
WHERE EmployeeName = 'Jyothi'
Answer: Jyothi's email ID is [[email protected]](mailto:[email protected]).]
(Background on this error at: https://sqlalche.me/e/20/f405)
127.0.0.1 - - [05/Feb/2024 11:01:45] "GET /get_answer?questions=what%20is%20her%20mail%20id HTTP/1.1" 500 -
what is employee name of ad22050853
Entering new SQLDatabaseChain chain...
what is employee name of ad22050853
SQLQuery:SELECT EmployeeName
FROM EGV_emp_departments_ChatGPT
WHERE EmployeeID = 'AD22050853'
SQLResult: [('Harin Vimal Bharathi',)]
Answer:The employee name of AD22050853 is Harin Vimal Bharathi.
Finished chain.
{'history': "Human: what is jyothi employee id\nAI: Jyothi's employee ID is AD23020933.\nHuman: what is employee name of ad22050853\nAI: The employee name of AD22050853 is Harin Vimal Bharathi."}
The employee name of AD22050853 is Harin Vimal Bharathi.
127.0.0.1 - - [05/Feb/2024 11:03:48] "GET /get_answer?questions=%20what%20is%20employee%20name%20of%20ad22050853 HTTP/1.1" 200 -
### Description
Here, when I ask the 2nd question it throws the error "pyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near 'Jyothi'.")", but when I ask the 3rd question, which is not related to the first two, it returns an answer and stores it in memory.
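From the trace above it looks like the text starting with "Answer:" is being sent to SQL Server as part of the generated query. To confirm what the model actually produced, a small debugging sketch (assuming this version of SQLDatabaseChain supports `return_intermediate_steps`):

```python
# Debugging sketch: inspect the raw output before it is executed against the DB.
db_chain = SQLDatabaseChain.from_llm(
    llm, db, prompt=PROMPT, verbose=True, memory=memory,
    return_intermediate_steps=True,
)

result = db_chain({"query": question})
print(result["intermediate_steps"])  # raw LLM output / SQL actually sent to the DB
print(result["result"])
```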
### System Info
python: 3.11
langchain: latest | while using ConversationBufferMemory to store the memory i the chatbot "sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near " | https://api.github.com/repos/langchain-ai/langchain/issues/17026/comments | 9 | 2024-02-05T05:57:05Z | 2024-05-14T16:08:16Z | https://github.com/langchain-ai/langchain/issues/17026 | 2,117,740,798 | 17,026 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_core.messages import _message_from_dict
_message_from_dict({"type": "ChatMessageChunk", "data": {...}})
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am using the memory runnables and hitting an issue when ChatMessageChunk types are used.
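As a stopgap I am considering converting the chunk into a plain message before it goes through the dict helpers (sketch, assuming `message_chunk_to_message` is exported from langchain_core.messages):

```python
from langchain_core.messages import (
    ChatMessageChunk,
    message_chunk_to_message,
    messages_to_dict,
)

chunk = ChatMessageChunk(role="assistant", content="partial answer")
message = message_chunk_to_message(chunk)  # ChatMessageChunk -> ChatMessage

# The plain ChatMessage round-trips through the dict helpers without the KeyError.
serialized = messages_to_dict([message])
```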
### System Info
issue exists in latest:
https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/messages/__init__.py#L71-L96 | The helper function for converting dicts to message types doesn't handle ChatMessageChunk message types. | https://api.github.com/repos/langchain-ai/langchain/issues/17022/comments | 1 | 2024-02-05T04:11:20Z | 2024-05-13T16:10:22Z | https://github.com/langchain-ai/langchain/issues/17022 | 2,117,636,935 | 17,022 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
class EmbeddingStore(BaseModel):
"""Embedding store."""
__tablename__ = "langchain_pg_embedding"
collection_id = sqlalchemy.Column(
UUID(as_uuid=True),
sqlalchemy.ForeignKey(
f"{CollectionStore.__tablename__}.uuid",
ondelete="CASCADE",
),
)
collection = relationship(CollectionStore, back_populates="embeddings")
embedding: Vector = sqlalchemy.Column(Vector(vector_dimension))
document = sqlalchemy.Column(sqlalchemy.String, nullable=True)
# Using JSONB is better to process special characters
cmetadata = sqlalchemy.Column(JSON, nullable=True)
# custom_id : any user defined id
custom_id = sqlalchemy.Column(sqlalchemy.String, nullable=True)
_classes = (EmbeddingStore, CollectionStore)
return _classes
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I find that the column langchain_pg_embedding.cmetadata uses the JSON type, so when I save metadata containing special characters they are stored as Unicode escape sequences rather than UTF-8. I looked at the source code of the pgvector.py file and found the EmbeddingStore class. When cmetadata is defined as JSONB instead, special characters are stored as UTF-8 rather than escaped Unicode, and the column can also be queried directly with SQL. Could this be changed in the main branch for a new version?
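Concretely, the change I have in mind looks roughly like this (an illustrative sketch with a hypothetical table name, not the actual patch):

```python
import sqlalchemy
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class EmbeddingStoreDemo(Base):  # hypothetical class, for illustration only
    __tablename__ = "langchain_pg_embedding_jsonb_demo"
    uuid = sqlalchemy.Column(sqlalchemy.Integer, primary_key=True)
    # The goal: store UTF-8 instead of escaped unicode and keep the column
    # queryable with JSONB operators.
    cmetadata = sqlalchemy.Column(JSONB, nullable=True)
```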
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6000
> Python Version: 3.11.4 | packaged by conda-forge | (main, Jun 10 2023, 18:08:41) [Clang 15.0.7 ]
Package Information
-------------------
> langchain_core: 0.1.17
> langchain: 0.1.4
> langchain_community: 0.0.16
| The column langchain_pg_embedding.cmetadata uses the json type resulting in special characters being processed into unicode | https://api.github.com/repos/langchain-ai/langchain/issues/17020/comments | 1 | 2024-02-05T03:36:08Z | 2024-05-13T16:10:19Z | https://github.com/langchain-ai/langchain/issues/17020 | 2,117,607,473 | 17,020 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import pandas as pd
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
from typing import Any, List, Mapping, Optional
from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM


class CustomLLM(LLM):
    n: int

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return prompt[: self.n]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"n": self.n}


df = pd.read_csv("./titanic.csv")
llm = CustomLLM(n=10)
agent = create_pandas_dataframe_agent(llm, df, verbose=True)
result = agent.run("How many male passengers on this ship survived?")  # question translated from Chinese
print(result)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/langchain_demos.py", line 30, in <module>
agent = create_pandas_dataframe_agent(llm,df,verbose=True,input_variables=input_vars)
File "/Users/yyl/opt/anaconda3/envs/ragpy310/lib/python3.10/site-packages/langchain_experimental/agents/agent_toolkits/pandas/base.py", line 264, in create_pandas_dataframe_agent
runnable=create_react_agent(llm, tools, prompt), # type: ignore
File "/Users/yyl/opt/anaconda3/envs/ragpy310/lib/python3.10/site-packages/langchain/agents/react/agent.py", line 97, in create_react_agent
raise ValueError(f"Prompt missing required variables: {missing_vars}")
ValueError: Prompt missing required variables: {'tool_names', 'tools'}
### Description
I'm trying to use the pandas DataFrame agent from LangChain and I'm encountering this error.
### System Info
langchain 0.1.5
langchain-community 0.0.17
langchain-core 0.1.18
langchain-experimental 0.0.50
macos
python 3.10.13 | Pandas agent got error:Prompt missing required variables: {'tool_names', 'tools'} | https://api.github.com/repos/langchain-ai/langchain/issues/17019/comments | 17 | 2024-02-05T01:51:46Z | 2024-07-23T13:33:36Z | https://github.com/langchain-ai/langchain/issues/17019 | 2,117,496,757 | 17,019 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
The provided code combines multiple PDF files into one index and then extracts a single answer from all of them using a vector DB. But I'm interested in code that can extract answers separately from each individual PDF file.
```
import os
os.environ["OPENAI_API_KEY"] = ""
from langchain.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders import DirectoryLoader
import pickle
import faiss
from langchain.vectorstores import FAISS
import textwrap
# InstructorEmbedding
from InstructorEmbedding import INSTRUCTOR
from langchain.embeddings import HuggingFaceInstructEmbeddings
# OpenAI Embedding
from langchain.embeddings import OpenAIEmbeddings
"""### Load Multiple files from Directory"""
root_dir = "/content/data"
# loader = TextLoader('single_text_file.txt')
loader = DirectoryLoader(f'/content/documents', glob="./*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()
"""### Divide and Conquer"""
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200)
texts = text_splitter.split_documents(documents)
"""### Get Embeddings for OUR Documents"""
# !pip install faiss-cpu
def store_embeddings(docs, embeddings, sotre_name, path):
vectorStore = FAISS.from_documents(docs, embeddings)
with open(f"{path}/faiss_{sotre_name}.pkl", "wb") as f:
pickle.dump(vectorStore, f)
def load_embeddings(sotre_name, path):
with open(f"{path}/faiss_{sotre_name}.pkl", "rb") as f:
VectorStore = pickle.load(f)
return VectorStore
"""### HF Instructor Embeddings"""
from langchain.embeddings import HuggingFaceInstructEmbeddings
# from langchain_community.embeddings import HuggingFaceInstructEmbeddings
from InstructorEmbedding import INSTRUCTOR
instructor_embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl",
model_kwargs={"device": "cuda"})
Embedding_store_path = f"{root_dir}/Embedding_store"
db_instructEmbedd = FAISS.from_documents(texts, instructor_embeddings)
retriever = db_instructEmbedd.as_retriever(search_kwargs={"k": 5})
docs = retriever.get_relevant_documents("which method did Ventirozos use?")
# create the chain to answer questions
qa_chain_instrucEmbed = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
"""### OpenAI's Embeddings"""
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
db_openAIEmbedd = FAISS.from_documents(texts, embeddings)
retriever_openai = db_openAIEmbedd.as_retriever(search_kwargs={"k": 5})
# create the chain to answer questions
qa_chain_openai = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2, ),
chain_type="stuff",
retriever=retriever_openai,
return_source_documents=True)
def wrap_text_preserve_newlines(text, width=110):
# Split the input text into lines based on newline characters
lines = text.split('\n')
# Wrap each line individually
wrapped_lines = [textwrap.fill(line, width=width) for line in lines]
# Join the wrapped lines back together using newline characters
wrapped_text = '\n'.join(wrapped_lines)
return wrapped_text
def process_llm_response(llm_response):
print(wrap_text_preserve_newlines(llm_response['result']))
print('\nSources:')
if llm_response["source_documents"]:
# Access the first source document
first_source = llm_response["source_documents"][0]
source_name = first_source.metadata['source']
row_number = first_source.metadata.get('row', 'Not specified')
# Print the first source's file name and row number
print(f"{source_name}, Row: {row_number}")
else:
print("No sources available.")
query = 'which method did Ventirozos use??'
print('-------------------Instructor Embeddings------------------\n')
llm_response = qa_chain_instrucEmbed(query)
process_llm_response(llm_response)
query = 'which method did Ventirozos use??'
print('-------------------OpenAI Embeddings------------------\n')
llm_response = qa_chain_openai(query)
process_llm_response(llm_response)
```
Can you have a look at the above code and help me with this? I presume we need to build a separate vector DB for every PDF, then iterate through them and return an answer for every PDF file.
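What I have in mind is roughly the following sketch (reusing `text_splitter`, `embeddings`, `process_llm_response` and `query` from the code above; the glob pattern is an assumption about where the PDFs live):

```python
import glob

pdf_files = glob.glob("/content/documents/*.pdf")
per_pdf_answers = {}

for pdf_file in pdf_files:
    docs = PyPDFLoader(pdf_file).load()
    chunks = text_splitter.split_documents(docs)

    # One FAISS index and one chain per PDF, so each answer is scoped to a single file.
    db = FAISS.from_documents(chunks, embeddings)
    chain = RetrievalQA.from_chain_type(
        llm=OpenAI(temperature=0.2),
        chain_type="stuff",
        retriever=db.as_retriever(search_kwargs={"k": 5}),
        return_source_documents=True,
    )
    per_pdf_answers[pdf_file] = chain(query)

for pdf_file, response in per_pdf_answers.items():
    print(f"----- {pdf_file} -----")
    process_llm_response(response)
```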
### Idea or request for content:
_No response_ | Unable to return answers from every pdf | https://api.github.com/repos/langchain-ai/langchain/issues/17008/comments | 3 | 2024-02-04T19:02:29Z | 2024-02-14T03:34:50Z | https://github.com/langchain-ai/langchain/issues/17008 | 2,117,259,978 | 17,008 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
embedding = HuggingFaceBgeEmbeddings(
model_name=model,
model_kwargs={'device': 'cuda'},
encode_kwargs={'normalize_embeddings': True}
)
connection_args = {
'uri': milvus_cfg.REMOTE_DATABASE['url'],
'user': milvus_cfg.REMOTE_DATABASE['username'],
'password': milvus_cfg.REMOTE_DATABASE['password'],
'secure': True,
}
vector_db = Milvus(
embedding,
collection_name=collection,
connection_args=connection_args,
drop_old=True,
auto_id=True,
)
# I omitted some document split part here
md_docs = r_splitter.split_documents(head_split_docs)
vector_db.add_documents(md_docs)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "D:\program\python\KnowledgeBot\InitDatabase.py", line 100, in <module>
load_md(config.MD_PATH)
File "D:\program\python\KnowledgeBot\utils\TimeUtil.py", line 8, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\program\python\KnowledgeBot\InitDatabase.py", line 82, in load_md
vector_db.add_documents(md_docs)
File "D:\miniconda3\envs\KnowledgeBot\Lib\site-packages\langchain_core\vectorstores.py", line 119, in add_documents
return self.add_texts(texts, metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\KnowledgeBot\Lib\site-packages\langchain_community\vectorstores\milvus.py", line 586, in add_texts
insert_list = [insert_dict[x][i:end] for x in self.fields]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\KnowledgeBot\Lib\site-packages\langchain_community\vectorstores\milvus.py", line 586, in <listcomp>
insert_list = [insert_dict[x][i:end] for x in self.fields]
~~~~~~~~~~~^^^
KeyError: 'pk'
### Description
This is what my original code looked like:
```python
vector_db = Milvus(
embedding,
collection_name=collection,
connection_args=connection_args,
drop_old=True
)
```
It was able to run successfully.
The version information at that time was:
- python: 3.11
- langchain==0.1.4
- langchain_community==0.0.16
- pymilvus==2.3.5
However, when I updated the version information and tried to run it directly, an error occurred:
_A list of valid ids are required when auto_id is False_
After checking, I found that a new parameter called `auto_id` had been added. After I modified the Milvus setup to the following code:
```python
vector_db = Milvus(
embedding,
collection_name=collection,
connection_args=connection_args,
drop_old=True,
auto_id=True
)
```
the error has changed to the current one.
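While this is being looked at, the workaround I am considering (a sketch, not yet verified against the new release) is to keep `auto_id=False` and supply explicit primary keys myself:

```python
import uuid

vector_db = Milvus(
    embedding,
    collection_name=collection,
    connection_args=connection_args,
    drop_old=True,
    auto_id=False,
)

# Hypothetical string primary keys, one per document.
ids = [str(uuid.uuid4()) for _ in md_docs]
vector_db.add_documents(md_docs, ids=ids)
```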
### System Info
- python: 3.11
- langchain==0.1.5
- langchain_community==0.0.17
- pymilvus==2.3.6 | An error occurred while adding a document to the Zilliz vectorstore | https://api.github.com/repos/langchain-ai/langchain/issues/17006/comments | 8 | 2024-02-04T16:24:36Z | 2024-05-21T03:13:23Z | https://github.com/langchain-ai/langchain/issues/17006 | 2,117,180,017 | 17,006 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
loader = CSVLoader(file_path='file.csv', csv_args={
'fieldnames': ['column_name_inside_dataset'], #if uncommented the load method fails
"delimiter": ',',
})
docs = loader.load()
```
### Error Message and Stack Trace (if applicable)
```code
AttributeError Traceback (most recent call last)
File [~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:70](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:70), in CSVLoader.load(self)
[69](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:69) with open(self.file_path, newline="", encoding=self.encoding) as csvfile:
---> [70](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:70) docs = self.__read_file(csvfile)
[71](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:71) except UnicodeDecodeError as e:
File [~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:105](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:105), in CSVLoader.__read_file(self, csvfile)
[102](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:102) raise ValueError(
[103](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:103) f"Source column '{self.source_column}' not found in CSV file."
[104](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:104) )
--> [105](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:105) content = "\n".join(
[106](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:106) f"{k.strip()}: {v.strip() if v is not None else v}"
[107](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:107) for k, v in row.items()
[108](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:108) if k not in self.metadata_columns
[109](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:109) )
[110](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:110) metadata = {"source": source, "row": i}
File [~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:106](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:106), in <genexpr>(.0)
[102](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:102) raise ValueError(
[103](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:103) f"Source column '{self.source_column}' not found in CSV file."
[104](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:104) )
[105](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:105) content = "\n".join(
--> [106](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:106) f"{k.strip()}: {v.strip() if v is not None else v}"
[107](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:107) for k, v in row.items()
...
[85](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:85) except Exception as e:
---> [86](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:86) raise RuntimeError(f"Error loading {self.file_path}") from e
[88](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/jorge/code/github/trustrelay/openai-poc/~/code/github/trustrelay/openai-poc/venv/lib/python3.10/site-packages/langchain_community/document_loaders/csv_loader.py:88) return docs
```
### Description
As illustrated in the code example, passing the `fieldnames` key inside `csv_args` makes the loader fail in `load()`.
If `fieldnames` is not passed, printing `docs[0].page_content` shows that all columns load correctly, including the column that I wanted to filter on.
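For now, the workaround I am using (a sketch; `column_name_inside_dataset` is just the placeholder column name from above) is to load without `fieldnames` and rebuild `page_content` from the column I care about:

```python
from langchain_community.document_loaders.csv_loader import CSVLoader
from langchain_core.documents import Document

loader = CSVLoader(file_path="file.csv", csv_args={"delimiter": ","})
raw_docs = loader.load()

wanted = "column_name_inside_dataset"  # placeholder column name
filtered_docs = [
    Document(
        # CSVLoader renders each row as "column: value" lines, so keep only the wanted one.
        page_content=next(
            (line for line in doc.page_content.splitlines() if line.startswith(f"{wanted}:")),
            "",
        ),
        metadata=doc.metadata,
    )
    for doc in raw_docs
]
```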
### System Info
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18 | Passing dictionary key "fieldnames" within csv_args paramenter to CSVLoader fails | https://api.github.com/repos/langchain-ai/langchain/issues/17001/comments | 3 | 2024-02-04T12:48:43Z | 2024-02-04T13:03:28Z | https://github.com/langchain-ai/langchain/issues/17001 | 2,117,082,044 | 17,001 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms import LlamaCpp
from langchain_core.runnables import ConfigurableField
def main():
# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
n_gpu_layers = -1 # The number of layers to put on the GPU. The rest will be on the CPU. If you don't know how many layers there are, you can use -1 to move all to GPU.
n_batch = 512 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
config: dict = {
'model_path': "../../holograph-llm/backend/models/zephyr-7b-beta.Q2_K.gguf",
'n_gpu_layers': n_gpu_layers,
'n_batch': n_batch,
'echo': True,
'callback_manager': callback_manager,
'verbose': True, # Verbose is required to pass to the callback manager
"max_tokens": 250,
"temperature": 0.1
}
# Make sure the model path is correct for your system!
llm_a = LlamaCpp(**config).configurable_fields(
temperature=ConfigurableField(
id="llm_temperature",
name="LLM Temperature",
description="The temperature of the LLM",
),
max_tokens=ConfigurableField(
id="llm_max_tokens",
name="LLM max output tokens",
description="The maximum number of tokens to generate",
),
top_p=ConfigurableField(
id="llm_top_p",
name="LLM top p",
description="The top-p value to use for sampling",
),
top_k=ConfigurableField(
id="llm_top_k",
name="LLM top-k",
description="The top-k value to use for sampling",
)
)
# Working call that overrides the temp, if you removed conditional import of LlamaGrammar.
llm_a.with_config(configurable={
"llm_temperature": 0.9,
"llm_top_p": 0.9,
"llm_top_k": 0.2,
"llm_max_tokens": 15,
}).invoke("pick a random number")
if __name__ == "__main__":
main()
```
A notebook replicating the issue to open in Google Colab on a T4 is available [here](https://gist.github.com/fpaupier/d978e8809bc2b699df9ea3c12c433080)
### Error Message and Stack Trace (if applicable)
```log
---------------------------------------------------------------------------
ConfigError Traceback (most recent call last)
<ipython-input-10-16b4c6671e0e> in <cell line: 1>()
4 "llm_top_k": 0.2,
5 "llm_max_tokens": 15,
----> 6 }).invoke("pick a random number")
/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py in invoke(self, input, config, **kwargs)
4039 **kwargs: Optional[Any],
4040 ) -> Output:
-> 4041 return self.bound.invoke(
4042 input,
4043 self._merge_configs(config),
/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/configurable.py in invoke(self, input, config, **kwargs)
92 self, input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any
93 ) -> Output:
---> 94 runnable, config = self._prepare(config)
95 return runnable.invoke(input, config, **kwargs)
96
/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/configurable.py in _prepare(self, config)
289 if configurable:
290 return (
--> 291 self.default.__class__(**{**self.default.__dict__, **configurable}),
292 config,
293 )
/usr/local/lib/python3.10/dist-packages/langchain_core/load/serializable.py in __init__(self, **kwargs)
105
106 def __init__(self, **kwargs: Any) -> None:
--> 107 super().__init__(**kwargs)
108 self._lc_kwargs = kwargs
109
/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()
/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.validate_model()
/usr/local/lib/python3.10/dist-packages/pydantic/fields.cpython-310-x86_64-linux-gnu.so in pydantic.fields.ModelField.validate()
ConfigError: field "grammar" not yet prepared so type is still a ForwardRef, you might need to call LlamaCpp.update_forward_refs().
```
### Description
- I'm trying to pass parameters such as `temperature` to my `LlamaCpp` model at inference time, as described in the [LangChain doc](https://python.langchain.com/docs/expression_language/how_to/configure).
- Yet, when using `configurable`, LangChain will create a new instance of your LLM at inference time;
see the [`_prepare`](https://github.com/langchain-ai/langchain/blob/849051102a6e315072e3a1d8dfdcee1527436c92/libs/core/langchain_core/runnables/configurable.py#L94) function in `langchain_core/runnables/configurable.py`.
```python
return (
self.default.__class__(**{**self.default.__dict__, **configurable}),
config,
)
```
See [source here](https://github.com/langchain-ai/langchain/blob/849051102a6e315072e3a1d8dfdcee1527436c92/libs/core/langchain_core/runnables/configurable.py#L291)
- Here, with a `LlamaCpp` langchain community wrapper, see [llamacpp.py](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/llamacpp.py), you can see `LlamaCpp` class has several attribute among which a `grammar` one:
```python
class LlamaCpp(LLM):
"""llama.cpp model.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: https://github.com/abetlen/llama-cpp-python
Example:
.. code-block:: python
from langchain_community.llms import LlamaCpp
llm = LlamaCpp(model_path="/path/to/llama/model")
"""
client: Any #: :meta private:
model_path: str
"""The path to the Llama model file."""
...
grammar: Optional[Union[str, LlamaGrammar]] = None
"""
grammar: formal grammar for constraining model outputs. For instance, the grammar
can be used to force the model to generate valid JSON or to speak exclusively in
emojis. At most one of grammar_path and grammar should be passed in.
"""
```
This `grammar` attribute has `LlamaGrammar` as a potential type, and that type is **only imported** when the `typing` constant `TYPE_CHECKING` evaluates to true; see the import at the top of the file:
```python
if TYPE_CHECKING:
    from llama_cpp import LlamaGrammar
```
This is the root cause of the issue: when preparing the model for inference with a `configurable`, a new instance of the `LlamaCpp` class is created (the `self.default.__class__(**{**self.default.__dict__, **configurable})` call described above), but `LlamaGrammar` is not available in that context, so Pydantic cannot resolve the forward-referenced type and the program crashes with the validation error above.
A simple fix is to import `LlamaGrammar` unconditionally, without the `TYPE_CHECKING` guard, so the type is always available and the forward reference can be resolved. Since this import only brings in a type definition, it will not create circular dependencies or other issues, and the performance cost of the additional import should be minor compared to the LLM inference itself.
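In the meantime, a possible user-side workaround (untested sketch, relying on Pydantic v1's `update_forward_refs`; the model path below is a placeholder) is to resolve the forward reference manually before building the configurable runnable:

```python
from llama_cpp import LlamaGrammar
from langchain_community.llms import LlamaCpp
from langchain_core.runnables import ConfigurableField

# Make the ForwardRef on `grammar` resolvable so that re-instantiating the model
# inside `_prepare` no longer fails Pydantic validation.
LlamaCpp.update_forward_refs(LlamaGrammar=LlamaGrammar)

llm = LlamaCpp(model_path="/path/to/llama/model.gguf").configurable_fields(
    temperature=ConfigurableField(id="temperature", name="LLM temperature"),
)
```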
I will open a PR proposing a fix.
### System Info
- package versions:
```text
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
llama_cpp_python==0.2.30 # just for information, note that the issue is not due to llama_cpp_python codebase, hence posting here.
```
- Bug reproduced on macOS and Linux (Google Colab with a T4)
- Python version 3.11.6 on macOS, Python 3.10.12 on Google Colab
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I create an LLM:
```python
def mixtral() -> BaseLanguageModel:
    llm = HuggingFaceHub(
        repo_id="mistralai/Mixtral-8x7B-Instruct-v0.1",
        task="text-generation",
        model_kwargs={
            "max_new_tokens": 16384,
            "top_k": 30,
            "temperature": 0.1,
            "repetition_penalty": 1.03,
            "max_length": 16384,
        },
    )
    return ChatHuggingFace(llm=llm)
```
And then use it in other code:
```python
@classmethod
def default_bot(cls, sys_msg: str, llm: BaseLanguageModel):
    h_temp = "{question}"
    # Init Prompt
    prompt = ChatPromptTemplate(
        messages=[
            SystemMessage(content=sys_msg),
            MessagesPlaceholder(variable_name="chat_history"),
            HumanMessagePromptTemplate.from_template(h_temp)
        ],
    )
    memory = ConversationSummaryBufferMemory(
        llm=llm,
        memory_key="chat_history",
        return_messages=True,
        max_token_limit=2048,
    )
    chain = LLMChain(
        llm=llm,
        prompt=prompt,
        memory=memory,
        # verbose=True,
    )
    return cls(chain=chain)
```
### Error Message and Stack Trace (if applicable)
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 142, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 538, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 142, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 363, in __call__
return self.invoke(
^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 162, in invoke
raise e
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/llm.py", line 103, in _call
response = self.generate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/llm.py", line 115, in generate
return self.llm.generate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 543, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 407, in generate
raise e
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 397, in generate
self._generate_with_cache(
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 576, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/chat_models/huggingface.py", line 68, in _generate
llm_input = self._to_chat_prompt(messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/chat_models/huggingface.py", line 100, in _to_chat_prompt
return self.tokenizer.apply_chat_template(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 1742, in apply_chat_template
rendered = compiled_template.render(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/jinja2/environment.py", line 1301, in render
self.environment.handle_exception()
File "/opt/homebrew/lib/python3.11/site-packages/jinja2/environment.py", line 936, in handle_exception
raise rewrite_traceback_stack(source=source)
File "<template>", line 1, in top-level template code
File "/opt/homebrew/lib/python3.11/site-packages/jinja2/sandbox.py", line 393, in call
return __context.call(__obj, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 1776, in raise_exception
raise TemplateError(message)
jinja2.exceptions.TemplateError: Conversation roles must alternate user/assistant/user/assistant/...
### Description
I want to know the root cause of this issue. I simply swapped the OpenAI GPT-4 llm for ChatHuggingFace. Why is there such an incompatibility? Could the team consider improving compatibility across BaseLanguageModel implementations?
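For what it's worth, here is a minimal reproduction sketch of what I believe is happening (assuming the hub chat template is unchanged): the Mixtral template only accepts strictly alternating user/assistant turns, so the leading `SystemMessage` produced by the prompt above appears to be rejected when `ChatHuggingFace` calls `apply_chat_template`:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")

# A leading system message breaks the "user/assistant alternation" rule enforced
# by the model's Jinja chat template, which matches the error above.
messages = [
    {"role": "system", "content": "You are a helpful bot."},
    {"role": "user", "content": "Hello"},
]
tokenizer.apply_chat_template(messages, tokenize=False)  # raises jinja2 TemplateError
```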
### System Info
❯ pip freeze | grep langchain
langchain==0.1.0
langchain-community==0.0.10
langchain-core==0.1.8
langchain-experimental==0.0.28
langchain-google-genai==0.0.3
langchain-openai==0.0.2
platform: mac M1
| HuggingFace ChatHuggingFace raise Conversation roles must alternate user/assistant/user/assistant/... | https://api.github.com/repos/langchain-ai/langchain/issues/16992/comments | 4 | 2024-02-04T07:12:49Z | 2024-06-27T01:58:12Z | https://github.com/langchain-ai/langchain/issues/16992 | 2,116,920,427 | 16,992 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
# Indexing Code
textbook_directory_number_metadata = {
    'Chapter Number': chapter['Chapter Number'],
    ...
}
record_metadatas = [{
    **textbook_directory_number_metadata, 'Text': text
}]
metadatas = []
texts = []
metadatas.extend(record_metadatas)
texts.extend(text)
ids = [str(uuid4()) for _ in range(len(texts))]
embeds = embed.embed_documents(texts)
index.upsert(vectors=zip(ids, embeds, metadatas))

# Query Code
retriever = vectorstore.as_retriever(
    search_type="similarity",
    search_kwargs={
        'k': 8,
        'filter': filter_request_json
    }
)
```
### Error Message and Stack Trace (if applicable)
No error or exception; the type just changed silently.
### Description
We have a metadata field that looks like `"Chapter Number": 1`. We then indexed the document with this metadata in the Pinecone vector DB. During query retrieval, the metadata field comes back as `"Chapter Number": 1.0`; the integer `1` was turned into the floating-point `1.0`. There is no type casting in my code.
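One possible explanation (unverified) is that Pinecone itself stores numeric metadata as 64-bit floating point, so the cast may happen server-side rather than in LangChain. A hedged workaround sketch that coerces integral floats back after retrieval (the field name is taken from my example above, the query string is hypothetical):

```python
docs = retriever.get_relevant_documents("chapter overview")  # hypothetical query
for doc in docs:
    value = doc.metadata.get("Chapter Number")
    # Restore the integer type if the stored value is an integral float like 1.0.
    if isinstance(value, float) and value.is_integer():
        doc.metadata["Chapter Number"] = int(value)
```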
### System Info
langchain==0.1.1
langchain-community==0.0.13
langchain-core==0.1.13
langchain-openai==0.0.3
Platform: mac
Python Version: Python 3.11.5
python -m langchain_core.sys_info:
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6000
> Python Version: 3.11.5 (v3.11.5:cce6ba91b3, Aug 24 2023, 10:50:31) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.1.13
> langchain: 0.1.1
> langchain_community: 0.0.13
> langserve: Not Found | Type casting mistake for metadata when indexing documents using Pinecone VDB | https://api.github.com/repos/langchain-ai/langchain/issues/16983/comments | 1 | 2024-02-03T18:35:50Z | 2024-05-11T16:09:32Z | https://github.com/langchain-ai/langchain/issues/16983 | 2,116,635,155 | 16,983 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain.chat_models import ChatOpenAI
from langchain.tools import Tool
from src.utils.search import SearchUtil
from langchain.agents import AgentExecutor, create_openai_tools_agent, AgentExecutorIterator
from langchain.schema import SystemMessage
from langchain import hub
prompt = hub.pull("hwchase17/openai-tools-agent")
llm = ChatOpenAI()
multiply_tool = Tool(
    name="multiply",
    description="Multiply two numbers",
    func=lambda x, y: x * y,
)
tools = [multiply_tool]
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

async for chunk in agent_executor.astream({'input': 'write a long text'}):
    print(chunk, end="|", flush=True)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am having an issue with streaming chunks from an instance of an AgentExecutor. Here is a very simple, high-level example of what I am doing:
```
from langchain.chat_models import ChatOpenAI
from langchain.tools import Tool
from src.utils.search import SearchUtil
from langchain.agents import AgentExecutor, create_openai_tools_agent, AgentExecutorIterator
from langchain.schema import SystemMessage
from langchain import hub
prompt = hub.pull("hwchase17/openai-tools-agent")
llm = ChatOpenAI()
multiply_tool = Tool(
    name="multiply",
    description="Multiply two numbers",
    func=lambda x, y: x * y,
)
tools = [multiply_tool]
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

async for chunk in agent_executor.astream({'input': 'write a long text'}):
    print(chunk, end="|", flush=True)
```
When I apply the same chunk loop to an LLM or a chain, their implementation of `astream` works fine, but when I do it on an agent, I get everything back at once in an object such as:
`{'output': 'llm response', 'intermediate_steps': [], ...}`
I found some recent discussions with people facing the same issue, and it seems to be a bug in the AgentExecutor implementation.
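In case it helps others, a workaround sketch that streams token-level events instead of relying on `AgentExecutor.astream` (this assumes `astream_events`, which is available but still beta in langchain-core 0.1.18):

```python
async for event in agent_executor.astream_events(
    {"input": "write a long text"}, version="v1"
):
    # Only forward the raw LLM token chunks; tool and agent lifecycle events are ignored here.
    if event["event"] == "on_chat_model_stream":
        print(event["data"]["chunk"].content, end="|", flush=True)
```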
### System Info
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
langchain-openai==0.0.5
langchainhub==0.1.14 | Streaming Agent responses | https://api.github.com/repos/langchain-ai/langchain/issues/16980/comments | 3 | 2024-02-03T15:21:09Z | 2024-02-06T19:18:14Z | https://github.com/langchain-ai/langchain/issues/16980 | 2,116,530,409 | 16,980 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
``` Python
chat_history = []
query = "What considerations should the HSS follow during emergency registrations?"
result = chain({"question": query, "chat_history": chat_history})
print(result['answer'])
```
### Error Message and Stack Trace (if applicable)
```
/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:392: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.1` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
warnings.warn(
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
---------------------------------------------------------------------------
OutOfMemoryError Traceback (most recent call last)
[<ipython-input-26-85f83d314c4a>](https://localhost:8080/#) in <cell line: 3>()
1 chat_history = []
2 query = "What considerations should the HSS follow during emergency registrations?"
----> 3 result = chain({"question": query, "chat_history": chat_history})
4 print(result['answer'])
44 frames
[/usr/local/lib/python3.10/dist-packages/transformers/modeling_attn_mask_utils.py](https://localhost:8080/#) in _make_causal_mask(input_ids_shape, dtype, device, past_key_values_length, sliding_window)
154 """
155 bsz, tgt_len = input_ids_shape
--> 156 mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min, device=device)
157 mask_cond = torch.arange(mask.size(-1), device=device)
158 mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
OutOfMemoryError: CUDA out of memory. Tried to allocate 104.22 GiB. GPU 0 has a total capacty of 14.75 GiB of which 8.83 GiB is free. Process 252083 has 5.91 GiB memory in use. Of the allocated memory 5.63 GiB is allocated by PyTorch, and 156.29 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
### Description
How can I resolve this error?
### System Info
Python version: 3.10.10
Operating System: Windows 11
Windows: 11
pip == 23.3.1
python == 3.10.10
langchain == 0.1.0
transformers == 4.36.2
sentence_transformers == 2.2.2
unstructured == 0.12.0 | OutOfMemoryError: CUDA out of memory. Tried to allocate 104.22 GiB. GPU 0 has a total capacty of 14.75 GiB of which 8.83 GiB is free. Process 252083 has 5.91 GiB memory in use. Of the allocated memory 5.63 GiB is allocated by PyTorch, and 156.29 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF | https://api.github.com/repos/langchain-ai/langchain/issues/16978/comments | 1 | 2024-02-03T14:34:42Z | 2024-02-04T14:23:31Z | https://github.com/langchain-ai/langchain/issues/16978 | 2,116,511,940 | 16,978 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The issue is in `langchain/libs/community/langchain_community/vectorstores/faiss.py` at line 333
```
if filter is not None and filter_func(doc.metadata):
    docs.append((doc, scores[0][j]))
else:
    docs.append((doc, scores[0][j]))
```
If a filter is provided, this will always append the document regardless of whether the filter function passes or fails (confirmed with AI: https://chat.openai.com/share/1b68d90b-ed4d-4e7d-9aff-9195acf18f96)
It should be something like:
```
if filter is None:
    docs.append((doc, scores[0][j]))
elif filter_func(doc.metadata):
    docs.append((doc, scores[0][j]))
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
* Using FAISS with a filter effectively just ignores the filter
### System Info
```
langchain==0.1.5
langchain-community==0.0.17
langchain-core==0.1.18
langchain-mistralai==0.0.3
``` | FAISS documenting filtering is broken | https://api.github.com/repos/langchain-ai/langchain/issues/16977/comments | 3 | 2024-02-03T14:04:17Z | 2024-05-22T16:08:32Z | https://github.com/langchain-ai/langchain/issues/16977 | 2,116,495,155 | 16,977 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
``` Python
# Load Directory that contains all the documents related to RAG:
from langchain_community.document_loaders import DirectoryLoader
directory = '/content/drive/MyDrive/QnA Pair Documents'
```
``` Python
def load_docs(directory):
    loader = DirectoryLoader(directory)
    documents = loader.load()
    return documents
```
``` Python
documents = load_docs(directory)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/unstructured/partition/common.py](https://localhost:8080/#) in convert_office_doc(input_filename, output_directory, target_format, target_filter)
406 try:
--> 407 process = subprocess.Popen(
408 command,
15 frames
FileNotFoundError: [Errno 2] No such file or directory: 'soffice'
During handling of the above exception, another exception occurred:
FileNotFoundError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/unstructured/partition/common.py](https://localhost:8080/#) in convert_office_doc(input_filename, output_directory, target_format, target_filter)
412 output, error = process.communicate()
413 except FileNotFoundError:
--> 414 raise FileNotFoundError(
415 """soffice command was not found. Please install libreoffice
416 on your system and try again.
FileNotFoundError: soffice command was not found. Please install libreoffice
on your system and try again.
- Install instructions: https://www.libreoffice.org/get-help/install-howto/
- Mac: https://formulae.brew.sh/cask/libreoffice
- Debian: https://wiki.debian.org/LibreOffice
```
### Description
I have downloaded and installed LibreOffice from the provided link, but I am still getting this error.
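A quick check worth running (hedged sketch): `unstructured` shells out to the `soffice` binary, so LibreOffice must be visible on the PATH of the environment that actually runs the loader (the `/content/...` paths above suggest Google Colab rather than the local Windows machine):

```python
import shutil

# None means the environment running DirectoryLoader cannot see the soffice binary,
# regardless of what is installed on the local desktop.
print(shutil.which("soffice"))
```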
### System Info
Python version: 3.10.10
Operating System: Windows 11
Windows: 11
pip == 23.3.1
python == 3.10.10
langchain == 0.1.0
transformers == 4.36.2
sentence_transformers == 2.2.2
unstructured == 0.12.0 | FileNotFoundError: soffice command was not found. Please install libreoffice on your system and try again. | https://api.github.com/repos/langchain-ai/langchain/issues/16973/comments | 2 | 2024-02-03T10:54:46Z | 2024-04-15T09:27:30Z | https://github.com/langchain-ai/langchain/issues/16973 | 2,116,424,404 | 16,973 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.llms import HuggingFaceHub
HuggingFaceHub(repo_id="gpt2")("Linux is")
# Expected: "an open-source operating system"
# Actual:   "Linux is an open-source operating system"
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Some text-generation models on the Hugging Face Hub repeat the prompt in their generated response.
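Until this is handled in the wrapper, a workaround sketch that strips the echoed prompt client-side (the Inference API's `return_full_text` parameter may also help if it is forwarded through `model_kwargs`, but I have not verified that):

```python
from langchain_community.llms import HuggingFaceHub

prompt = "Linux is"
llm = HuggingFaceHub(repo_id="gpt2")
text = llm.invoke(prompt)

# Drop the echoed prompt if the backend returned it as a prefix.
completion = text[len(prompt):].lstrip() if text.startswith(prompt) else text
print(completion)
```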
### System Info
langchain==0.1.4
langchain-community==0.0.17
langchain-core==0.1.18
langchain-experimental==0.0.29
langchain-google-genai==0.0.6
langchainhub==0.1.14
| HuggingFaceHub still needs leading characters removal | https://api.github.com/repos/langchain-ai/langchain/issues/16972/comments | 2 | 2024-02-03T09:02:23Z | 2024-05-13T16:10:07Z | https://github.com/langchain-ai/langchain/issues/16972 | 2,116,370,059 | 16,972 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
db_connection = SQLDatabase.from_uri(
    snowflake_url,
    sample_rows_in_table_info=1,
    include_tables=['transient_table_name'],
    view_support=True,
)
```
### Error Message and Stack Trace (if applicable)
ValueError: include_tables {'TRANSIENT_TABLE_NAME'} not found in database
### Description
The table I want to use is a transient table with the following definition:

```sql
create or replace TRANSIENT TABLE DB_NAME.SCHEMA_NAME.TRANSIENT_TABLE_NAME (
    UNIQUE_ID VARCHAR(32),
    PRODUCT VARCHAR(255),
    CITY VARCHAR(100)
);
```
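A hedged debugging sketch: Snowflake folds unquoted identifiers to upper case, and the table lives in a specific schema, so it may help to check what SQLAlchemy reflection actually sees and then match that casing and schema exactly (the parameter values below are illustrative, not confirmed fixes):

```python
from langchain_community.utilities import SQLDatabase

# Inspect the table names reflection returns for this connection.
db = SQLDatabase.from_uri(snowflake_url, view_support=True)
print(db.get_usable_table_names())

# Then pass include_tables (and optionally schema) exactly as reflected.
db_connection = SQLDatabase.from_uri(
    snowflake_url,
    schema="SCHEMA_NAME",  # hypothetical: only if the table is outside the default schema
    sample_rows_in_table_info=1,
    include_tables=["transient_table_name"],  # match the casing printed above
    view_support=True,
)
```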
### System Info
black==24.1.1
boto3==1.34.29
chromadb==0.4.22
huggingface-hub==0.20.3
langchain==0.1.4
langchain-experimental==0.0.49
pip_audit==2.6.0
pre-commit==3.6.0
pylint==2.17.4
pylint_quotes==0.2.3
pylint_pydantic==0.3.2
python-dotenv==1.0.1
sentence_transformers==2.3.0
snowflake-connector-python==3.7.0
snowflake-sqlalchemy==1.5.1
SQLAlchemy==2.0.25
watchdog==3.0.0 | SQLDatabase.from_uri() not recognizing transient table in snowflake | https://api.github.com/repos/langchain-ai/langchain/issues/16971/comments | 4 | 2024-02-03T07:26:16Z | 2024-02-05T20:01:51Z | https://github.com/langchain-ai/langchain/issues/16971 | 2,116,324,244 | 16,971 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The code from the [Qdrant documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant) shows the error:
```python
from dotenv import load_dotenv
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import Qdrant
from langchain_openai import OpenAIEmbeddings
load_dotenv()
loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
qdrant = Qdrant.from_documents(
    docs,
    embeddings,
    location=":memory:",  # Local mode with in-memory storage only
    collection_name="my_documents",
)
query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search(query)
print(found_docs[0].page_content)
```
The only adjustment here was how to set the `OPENAI_API_KEY` value (any mechanism works).
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm Community Edition 2023.1.2\plugins\python-ce\helpers\pydev\pydevd.py", line 1534, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\JetBrains\PyCharm Community Edition 2023.1.2\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:\Users\sfitts\GitHub\ag2rs\modelmgr\src\main\python\qdrant_example.py", line 26, in <module>
found_docs = qdrant.similarity_search(query)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sfitts\.virtualenvs\semanticsearch\Lib\site-packages\langchain_community\vectorstores\qdrant.py", line 286, in similarity_search
results = self.similarity_search_with_score(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sfitts\.virtualenvs\semanticsearch\Lib\site-packages\langchain_community\vectorstores\qdrant.py", line 362, in similarity_search_with_score
return self.similarity_search_with_score_by_vector(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\sfitts\.virtualenvs\semanticsearch\Lib\site-packages\langchain_community\vectorstores\qdrant.py", line 620, in similarity_search_with_score_by_vector
return [
^
File "C:\Users\sfitts\.virtualenvs\semanticsearch\Lib\site-packages\langchain_community\vectorstores\qdrant.py", line 622, in <listcomp>
self._document_from_scored_point(
File "C:\Users\sfitts\.virtualenvs\semanticsearch\Lib\site-packages\langchain_community\vectorstores\qdrant.py", line 1946, in _document_from_scored_point
metadata["_collection_name"] = scored_point.collection_name
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'ScoredPoint' object has no attribute 'collection_name'
python-BaseException
Process finished with exit code 1
```
### Description
I'm trying to use Qdrant to perform similarity searches as part of a RAG chain. This was working fine in `langchain-community==0.0.16`, but produces the error above in `langchain-community==0.0.17`. The source of the break is this PR -- https://github.com/langchain-ai/langchain/pull/16608. While it would be nice to have access to the collection name, the QDrant class `ScoredPoint` does not have the referenced property (and AFAICT never has).
### System Info
```
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.18
> langchain: 0.1.4
> langchain_community: 0.0.17
> langchain_openai: 0.0.5
> langserve: 0.0.41
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
```
Note that the failure was originally found using `langchain==0.1.5`. | Qdrant: Performing a similarity search results in an "AttributeError" | https://api.github.com/repos/langchain-ai/langchain/issues/16962/comments | 3 | 2024-02-02T23:22:06Z | 2024-05-17T16:08:03Z | https://github.com/langchain-ai/langchain/issues/16962 | 2,115,989,043 | 16,962 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code

```python
llm = Bedrock(
    credentials_profile_name="bedrock-admin",
    model_id="amazon.titan-text-express-v1",
)
```

does not correctly retrieve the credentials.
### Error Message and Stack Trace (if applicable)
I can't directly copy the error due to corporate security policy.
The error clearly says access is denied due to the role.
However, the profile works in the AWS CLI.
### Description
I am trying to use credentials_profile_name to assume a role that works with Bedrock.
I added a profile to `~/.aws/config`:

```ini
[profile bedrock-admin]
role_arn = arn:aws:iam::123456789012:role/mybedrockrole
credential_source = Ec2InstanceMetadata
```

The role does have suitable permissions and I can create a Bedrock client via boto3.
The AWS CLI also works: `aws s3 ls --profile bedrock-admin` picks up the profile.
But creating the LLM as shown in the docs does not pick up the permissions and fails:

```python
llm = Bedrock(
    credentials_profile_name="bedrock-admin",
    model_id="amazon.titan-text-express-v1",
)
```
In my case, I am forced to use the EC2 profile as a starting point for credentials. IMDS should still allow the new role to be assumed.
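A workaround sketch that sidesteps the in-library credential lookup by building the boto3 client from the profile explicitly and handing it to the wrapper (this assumes the `Bedrock` wrapper accepts a pre-built `client`, and the region below is a placeholder):

```python
import boto3
from langchain_community.llms import Bedrock

# Resolve the profile (role assumption via IMDS) explicitly with boto3.
session = boto3.Session(profile_name="bedrock-admin")
client = session.client("bedrock-runtime", region_name="us-east-1")  # placeholder region

llm = Bedrock(client=client, model_id="amazon.titan-text-express-v1")
```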
### System Info
langchain 0.1.4 with langserve
AWS linux
Python 3.9.6 | Bedrock credentials_profile_name="bedrock-admin" fails with IMDS | https://api.github.com/repos/langchain-ai/langchain/issues/16959/comments | 3 | 2024-02-02T21:57:50Z | 2024-02-05T18:37:47Z | https://github.com/langchain-ai/langchain/issues/16959 | 2,115,869,570 | 16,959 |