issue_owner_repo (listlengths 2-2) | issue_body (stringlengths 0-261k, ⌀) | issue_title (stringlengths 1-925) | issue_comments_url (stringlengths 56-81) | issue_comments_count (int64 0-2.5k) | issue_created_at (stringlengths 20-20) | issue_updated_at (stringlengths 20-20) | issue_html_url (stringlengths 37-62) | issue_github_id (int64 387k-2.46B) | issue_number (int64 1-127k) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
https://python.langchain.com/docs/modules/chains/how_to/openai_functions
from typing import Optional, Sequence

from langchain.chains.openai_functions import create_structured_output_chain
from langchain.prompts import ChatPromptTemplate
from langchain.pydantic_v1 import BaseModel, Field
class Person(BaseModel):
"""Identifying information about a person."""
name: str = Field(..., description="The person's name")
age: int = Field(..., description="The person's age")
fav_food: Optional[str] = Field(None, description="The person's favorite food")
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a world class algorithm for extracting information in structured formats."),
("human", "Use the given format to extract information from the following input: {input}"),
("human", "Tip: Make sure to answer in the correct format"),
]
)
class People(BaseModel):
"""Identifying information about all people in a text."""
people: Sequence[Person] = Field(..., description="The people in the text")
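# `llm` is assumed to be a function-calling chat model, e.g. ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)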
chain = create_structured_output_chain(People, llm, prompt, verbose=True)
chain.run(
"Sally is 13, Joey just turned 12 and loves spinach. Caroline is 10 years older than Sally."
)
This does not work; I get the following error:
ValidationError: 1 validation error for _OutputFormatter
output
value is not a valid dict (type=type_error.dict)
### Idea or request for content:
_No response_ | documents have error (create_structured_output_chain) | https://api.github.com/repos/langchain-ai/langchain/issues/11524/comments | 4 | 2023-10-08T08:00:40Z | 2024-02-09T16:18:23Z | https://github.com/langchain-ai/langchain/issues/11524 | 1,931,696,800 | 11,524 |
[
"hwchase17",
"langchain"
]
| I am trying to use the `add_texts` method on a previously instantiated Neo4jVector object for setting the node_label.
The scenario is that I am processing different types of text and want to create different nodes accordingly. Because I am handling the database connection somewhere else in my code, I don't want to establish the connection to the database when embedding texts.
```python
[...]
# initialize vector store
self.vector_store = Neo4jVector(
embedding=self.embeddings_model,
username=neo4j_config['USER'],
password=neo4j_config['PASSWORD'],
url=neo4j_config['URI'],
database=neo4j_config['DATABASE'], # neo4j by default
index_name="vector", # vector by default
embedding_node_property="embedding", # embedding by default
create_id_index=True, # True by default
)
[...]
# embed text
created_node = self.vector_store.add_texts(
[text],
metadatas=[
{
"label": [node_label]
}
],
embedding=self.embeddings_model,
node_label=node_label, # Chunk by default
text_node_property=text_node_property, # text by default
)
```
Currently the `node_label` property is overwritten with the default value and not respected when passed as a parameter in `**kwargs`.
https://github.com/langchain-ai/langchain/blob/eb572f41a65c9636b9e0e5a5fb4210a00a67a353/libs/langchain/langchain/vectorstores/neo4j_vector.py#L125
https://github.com/langchain-ai/langchain/blob/eb572f41a65c9636b9e0e5a5fb4210a00a67a353/libs/langchain/langchain/vectorstores/neo4j_vector.py#L450
According to the Docstring, the function is supposed to accept vectorstore specific parameters:
https://github.com/langchain-ai/langchain/blob/eb572f41a65c9636b9e0e5a5fb4210a00a67a353/libs/langchain/langchain/vectorstores/neo4j_vector.py#L458
In order to conform with this, I added this function:
```python
def update_vector_store_properties(self, **kwargs):
# List of vector store-specific arguments we want to check and potentially update
vector_store_args = ["node_label", "text_node_property", "embedding_node_property"]
# Iterate over these arguments
for arg in vector_store_args:
# Check if the argument is present in kwargs
if arg in kwargs:
# Update the corresponding property of the Neo4jVector instance
setattr(self, arg, kwargs[arg])
```
I then modified `add_embeddings` like this:
```python
def add_embeddings(
self,
texts: Iterable[str],
embeddings: List[List[float]],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Add embeddings to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
embeddings: List of list of embedding vectors.
metadatas: List of metadatas associated with the texts.
kwargs: vectorstore specific parameters
"""
if ids is None:
ids = [str(uuid.uuid1()) for _ in texts]
if not metadatas:
metadatas = [{} for _ in texts]
self.update_vector_store_properties(**kwargs)
[...]
```
I don't really know if this is a missing feature or a bug, or if I am just using the vectorstore wrong, but I thought I'd leave this here in case the current behavior is actually the desired one.
| Neo4jVectorstore: Parse vectorstore specific parameters in **kwargs when calling add_embeddings() | https://api.github.com/repos/langchain-ai/langchain/issues/11515/comments | 3 | 2023-10-07T18:29:54Z | 2024-02-07T16:22:08Z | https://github.com/langchain-ai/langchain/issues/11515 | 1,931,456,319 | 11,515 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.310
Python 3.11
Windows 10
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just following the usual steps:
`from langchain.document_loaders import UnstructuredRSTLoader`
### Expected behavior
Expecting a successful import but greeted with
`ImportError: cannot import name 'Document' from 'langchain.schema' ` | Can't import UnstructuredRSTLoader | https://api.github.com/repos/langchain-ai/langchain/issues/11510/comments | 1 | 2023-10-07T11:17:32Z | 2023-10-07T11:33:42Z | https://github.com/langchain-ai/langchain/issues/11510 | 1,931,310,770 | 11,510 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Loaders are designed to load a single source of data and transform it to a list of Documents. If several sources are to be processed, it is up to the developer to call the Loader several times.
The idea is to propose a new `BatchLoader` object that could simplify this task and enable loading tasks to be launched sequentially, multithreaded, with multiple processes, or asynchronously.
Draft idea:
```
class BatchLoader(BaseLoader):
...
def load(self, method: str = 'sequential', max_workers: int=1) -> List[Document]:
        # TODO: tidy this up
if method == 'thread':
return self._load_thread(max_workers)
elif method == 'process':
return self._load_process(max_workers)
elif method == 'sequential':
return self._load_sequential()
elif method == 'async':
return asyncio.run(self._async_load())
else:
raise ValueError(f'Invalid method {method}')
...
# Call it with a Loader constructor callable and a set of arguments
batch_loader = BatchLoader(TextLoader, {"file_path": [f"file_{i}.txt" for i in range(1000)]})
```
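For illustration, the threaded path could boil down to something like this standalone sketch (the `load_in_threads` name and the flattening behaviour are just my assumptions, not a settled API):
```python
from concurrent.futures import ThreadPoolExecutor
from typing import Iterable, List

from langchain.document_loaders.base import BaseLoader
from langchain.schema import Document


def load_in_threads(loaders: Iterable[BaseLoader], max_workers: int = 4) -> List[Document]:
    """Run each loader's blocking load() in a thread pool and flatten the results."""
    documents: List[Document] = []
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        for docs in executor.map(lambda loader: loader.load(), loaders):
            documents.extend(docs)
    return documents
```
The process-based and async variants would follow the same shape, just swapping the executor.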
### Motivation
In most real use cases we need to load several data sources, combine their outputs, and work with them (e.g. store them in a vector store). Some use cases also involve waiting on I/O-intensive tasks or tasks you want to parallelize, so having a meta loader that wraps this complexity for you could be a good idea.
### Your contribution
WIP: https://github.com/langchain-ai/langchain/pull/11527 | [Feature] Batch loader | https://api.github.com/repos/langchain-ai/langchain/issues/11509/comments | 3 | 2023-10-07T09:48:11Z | 2024-05-13T16:07:52Z | https://github.com/langchain-ai/langchain/issues/11509 | 1,931,282,259 | 11,509 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I want streaming when calling ChatGPT with n > 1. Can you support this?
### Suggestion:
_No response_ | why when streaming only setup n=1 | https://api.github.com/repos/langchain-ai/langchain/issues/11508/comments | 2 | 2023-10-07T04:08:32Z | 2024-02-07T16:22:13Z | https://github.com/langchain-ai/langchain/issues/11508 | 1,931,176,253 | 11,508 |
[
"hwchase17",
"langchain"
]
| ### Feature request
We propose adding Google Scholar querying via SerpApi, as Google Scholar is a prominent platform for accessing academic and scholarly articles.
### Motivation
Google Scholar is a crucial resource for researchers, students, and professionals in various fields. Integrating Google Scholar querying with SerpApi would give users the ability to programmatically retrieve search results from Google Scholar, giving the LLMs better and more reliable information to work with.
### Your contribution
We intend to submit a pull request for this issue at some point in November. | Add support for google scholar querying with serpapi | https://api.github.com/repos/langchain-ai/langchain/issues/11505/comments | 4 | 2023-10-06T23:45:22Z | 2023-11-13T20:13:57Z | https://github.com/langchain-ai/langchain/issues/11505 | 1,931,050,084 | 11,505 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain==0.0305
Mac
python == 3.9.6
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have a simple web page summarization function that basically follows the example:
```python
llm = OpenAI(temperature=0)
loader = WebBaseLoader(link)
docs = loader.load()
MAP_PROMPT = PromptTemplate(
template="""Write a concise summary of the following web page(keep all identifiable information about the main entities):
{text}
CONCISE SUMMARY:
""",
input_variables=["text"],
)
REDUCE_PROMPT = PromptTemplate(
template="""Based on the following text, write a concise summary(keep all identifiable information about the main entities)):
{text}
CONCISE SUMMARY:
""",
input_variables=["text"],
)
chain = load_summarize_chain(
llm,
chain_type="map_reduce",
return_intermediate_steps=True,
map_prompt=MAP_PROMPT,
combine_prompt=REDUCE_PROMPT,
)
text_splitter = RecursiveCharacterTextSplitter()
docs = text_splitter.split_documents(docs)
if len(docs) == 0:
return ""
result = chain(
text_splitter.split_documents(documents=docs),
return_only_outputs=True,
)
logging.debug(result)
return result["output_text"]
```
My program loaded a PDF URL (https://www.fiserv.com/content/dam/fiserv-ent/final-files/marketing-collateral/case-studies/regions-bank-case-study-0623.pdf).
The web-based loader created a single 1-page document with a very large number of tokens (13524539 characters).
I then started to get this OpenAI library error:
```
error_code=rate_limit_exceeded error_message='Rate limit reached for default-text-davinci-003 in organization org-a6hHivOPvKe9X60G8E6YqM5m on tokens_usage_based per min. Limit: 250000 / min. Current: 245411 / min. Contact us through our help center at help.openai.com if you continue to have issues.' error_param=None error_type=tokens_usage_based message='OpenAI API error received' stream_error=False
```
LangChain kept retrying on this error, definitely more than the default 6 times; I had 14 retry logs from langchain in my terminal.
I think this is quite a critical bug: my OpenAI usage blew past the $120 limit in under 10 minutes because of this.
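As a stopgap on my side, I can at least cap the client-side retries explicitly (a rough sketch; I have not verified that this also bounds the behaviour described above):
```python
from langchain.llms import OpenAI

# max_retries defaults to 6; lowering it limits how long a single failing call keeps retrying
llm = OpenAI(temperature=0, max_retries=2)
```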
### Expected behavior
The retry limit should kick in to short-circuit this situation, or there should be some other method to prevent documents of unexpected length from being passed to OpenAI. | Retry limit is not respected | https://api.github.com/repos/langchain-ai/langchain/issues/11500/comments | 2 | 2023-10-06T21:37:53Z | 2024-02-07T16:22:18Z | https://github.com/langchain-ai/langchain/issues/11500 | 1,930,961,933 | 11,500 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The Redis Vectorstore `add_documents()` method calls `add_texts()` which embeds documents one by one like:
```
embedding = (
embeddings[i] if embeddings else self._embeddings.embed_query(text)
)
```
The `add_documents()` method would seem to imply that it is a good way to jointly embed and upload a larger list of documents, but using an API call for each document as above is slow and prone to running into rate limits.
Redis `add_documents()` could pass a kwarg `from_documents=True` to `add_texts()`, which would change the embedding to
```
embedding = (
embeddings[i] if embeddings else self._embeddings.embed_documents(documents)
)
```
### Motivation
This would simplify the overall use of the Redis Vectorstore and would be more in line with how the example documents imply the add_documents() method should be used. Currently `add_documents()` is not suitable for something like a csv or a long list of smaller documents unless it is used like:
```
add_documents(
documents=split_documents,
embeddings=embedder.embed_documents(texts),
)
```
### Your contribution
I could create a PR for this issue if it makes sense! | Redis Vectorestore.add_documents() should use embed_documents() instead of embed_query() | https://api.github.com/repos/langchain-ai/langchain/issues/11496/comments | 3 | 2023-10-06T20:31:55Z | 2024-02-12T16:12:09Z | https://github.com/langchain-ai/langchain/issues/11496 | 1,930,899,139 | 11,496 |
[
"hwchase17",
"langchain"
]
| ### System Info
MacOS, PGVector, TypeScript, NodeJS.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hey!
I have been trying to reproduce LangChain's official pgvector documentation without success. For some reason, when I try to use top-level await around PGVector, it throws an error.
```typescript
const pgvectorStore = await PGVectorStore.initialize(
  new OpenAIEmbeddings(),
  config
);
```
This is the official documentation.
```typescript
export const pgvectorStore = await PGVectorStore.initialize(
  new OpenAIEmbeddings(),
  config
);
```
This is my version.
I'm using TypeScript, and the error recommends changing both the module and target versions inside compilerOptions; I have done both without success.
`Top-level 'await' expressions are only allowed when the 'module' option is set to 'es2022', 'esnext', 'system', 'node16', or 'nodenext', and the 'target' option is set to 'es2017' or higher.ts(1378)`
This is the output if I try to run the code:
```
node:internal/process/esm_loader:108
internalBinding('errors').triggerUncaughtException(
^
Error: Transform failed with 1 error:
/Users/diogo/Documents/www/ai-api/src/store.ts:34:29: ERROR: Top-level await is currently not supported with the "cjs" output format
at failureErrorWithLog (/Users/diogo/.npm/_npx/fd45a72a545557e9/node_modules/esbuild/lib/main.js:1649:15)
at /Users/diogo/.npm/_npx/fd45a72a545557e9/node_modules/esbuild/lib/main.js:847:29
at responseCallbacks.<computed> (/Users/diogo/.npm/_npx/fd45a72a545557e9/node_modules/esbuild/lib/main.js:703:9)
at handleIncomingPacket (/Users/diogo/.npm/_npx/fd45a72a545557e9/node_modules/esbuild/lib/main.js:762:9)
at Socket.readFromStdout (/Users/diogo/.npm/_npx/fd45a72a545557e9/node_modules/esbuild/lib/main.js:679:7)
at Socket.emit (node:events:517:28)
at addChunk (node:internal/streams/readable:335:12)
at readableAddChunk (node:internal/streams/readable:308:9)
at Readable.push (node:internal/streams/readable:245:10)
at Pipe.onStreamRead (node:internal/stream_base_commons:190:23) {
errors: [
{
detail: undefined,
id: '',
location: {
column: 29,
file: '/Users/diogo/Documents/www/ai-api/src/store.ts',
length: 5,
line: 34,
lineText: 'export const pgvectorStore = await PGVectorStore.initialize(',
namespace: '',
suggestion: ''
},
notes: [],
pluginName: '',
text: 'Top-level await is currently not supported with the "cjs" output format'
}
],
warnings: []
}
```
Would appreciate any help.
### Expected behavior
It should not throw an error. | await PGVectorStore throws an ts(1378) error. | https://api.github.com/repos/langchain-ai/langchain/issues/11492/comments | 2 | 2023-10-06T17:37:48Z | 2024-02-06T16:25:41Z | https://github.com/langchain-ai/langchain/issues/11492 | 1,930,691,209 | 11,492 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The llm loader `load_llm_from_config` defined [here](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/loading.py#L12) only returns `BaseLLM`s. There is no equivalent chat_model loader in langchain to load Chat Model LLMs from a config that I'm aware of.
What do you think of either extending `load_llm_from_config` so that it can return either a `BaseLLM` or a `BaseChatModel`
```
load_llm_from_config(config: dict) -> Union[BaseLLM, BaseChatModel]
```
or create a chat_model loader in `langchain.chat_models.loading.py`
```
load_chat_llm_from_config(config: dict) -> BaseChatModel
```
### Suggestion:
_No response_ | Issue: load_llm_from_config doesn't working with Chat Models | https://api.github.com/repos/langchain-ai/langchain/issues/11485/comments | 1 | 2023-10-06T15:52:46Z | 2023-10-11T23:29:10Z | https://github.com/langchain-ai/langchain/issues/11485 | 1,930,491,541 | 11,485 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The SentimentAnalysis chain is designed to analyze the sentiment of textual data using langchain. It uses a language model like OpenAI's GPT-3.5 & 4 to process text inputs and provide sentiment labels and scores. This chain can be used for various applications, including sentiment analysis of product reviews, social media comments, or customer feedback.
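For illustration, a rough approximation of what such a chain could wrap with today's primitives (the prompt wording and model choice here are placeholders, not a spec):
```python
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "Classify the sentiment of the following text as positive, negative or neutral, "
    "and give a confidence score between 0 and 1.\n\nText: {text}\n\nAnswer:"
)
sentiment_chain = LLMChain(llm=ChatOpenAI(temperature=0), prompt=prompt)
print(sentiment_chain.run("The checkout flow was painless and support replied within minutes."))
```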
### Motivation
To contribute to open-source
### Your contribution
Yes, I am ready with my PR. | Sentimental analysis chain | https://api.github.com/repos/langchain-ai/langchain/issues/11480/comments | 2 | 2023-10-06T14:21:14Z | 2024-02-06T16:25:46Z | https://github.com/langchain-ai/langchain/issues/11480 | 1,930,280,545 | 11,480 |
[
"hwchase17",
"langchain"
]
| ### System Info
pip 23.2.1 from /usr/local/lib/python3.12/site-packages/pip (python 3.12)
Python 3.12.0 (main, Oct 3 2023, 01:48:15) [GCC 12.2.0] on linux
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# The issue
When I run either of these:
`pip3 install 'langchain[all]'` or
`pip install langchain`
The command fails with the full stack trace below.
# How to reproduce easily with docker
`docker run --rm -it python:3 pip install langchain`
# Stack Trace
```
Building wheels for collected packages: aiohttp, frozenlist, multidict, yarl
Building wheel for aiohttp (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for aiohttp (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [160 lines of output]
*********************
* Accelerated build *
*********************
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-312
creating build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_ws.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/payload_streamer.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/helpers.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_runner.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/pytest_plugin.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/log.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_server.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/http.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/http_writer.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/formdata.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_exceptions.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/locks.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/http_websocket.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/test_utils.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/client_ws.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_urldispatcher.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_request.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_middlewares.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/client_proto.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/streams.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/tcp_helpers.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/client.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/typedefs.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/resolver.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/cookiejar.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/payload.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_fileresponse.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/worker.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/multipart.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/base_protocol.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/http_parser.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_protocol.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_log.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_routedef.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/http_exceptions.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/connector.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_app.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/hdrs.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/web_response.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/__init__.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/client_reqrep.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/tracing.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/client_exceptions.py -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/abc.py -> build/lib.linux-x86_64-cpython-312/aiohttp
running egg_info
writing aiohttp.egg-info/PKG-INFO
writing dependency_links to aiohttp.egg-info/dependency_links.txt
writing requirements to aiohttp.egg-info/requires.txt
writing top-level names to aiohttp.egg-info/top_level.txt
reading manifest file 'aiohttp.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'aiohttp' anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
warning: no previously-included files matching '*.lib' found anywhere in distribution
warning: no previously-included files matching '*.dll' found anywhere in distribution
warning: no previously-included files matching '*.a' found anywhere in distribution
warning: no previously-included files matching '*.obj' found anywhere in distribution
warning: no previously-included files found matching 'aiohttp/*.html'
no previously-included directories found matching 'docs/_build'
adding license file 'LICENSE.txt'
writing manifest file 'aiohttp.egg-info/SOURCES.txt'
copying aiohttp/_cparser.pxd -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/_find_header.pxd -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/_headers.pxi -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/_helpers.pyi -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/_helpers.pyx -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/_http_parser.pyx -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/_http_writer.pyx -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/_websocket.pyx -> build/lib.linux-x86_64-cpython-312/aiohttp
copying aiohttp/py.typed -> build/lib.linux-x86_64-cpython-312/aiohttp
creating build/lib.linux-x86_64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_cparser.pxd.hash -> build/lib.linux-x86_64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_find_header.pxd.hash -> build/lib.linux-x86_64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_helpers.pyi.hash -> build/lib.linux-x86_64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_helpers.pyx.hash -> build/lib.linux-x86_64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_http_parser.pyx.hash -> build/lib.linux-x86_64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_http_writer.pyx.hash -> build/lib.linux-x86_64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_websocket.pyx.hash -> build/lib.linux-x86_64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/hdrs.py.hash -> build/lib.linux-x86_64-cpython-312/aiohttp/.hash
running build_ext
building 'aiohttp._websocket' extension
creating build/temp.linux-x86_64-cpython-312
creating build/temp.linux-x86_64-cpython-312/aiohttp
gcc -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O3 -Wall -fPIC -I/usr/local/include/python3.12 -c aiohttp/_websocket.c -o build/temp.linux-x86_64-cpython-312/aiohttp/_websocket.o
aiohttp/_websocket.c: In function ‘__pyx_pf_7aiohttp_10_websocket__websocket_mask_cython’:
aiohttp/_websocket.c:1475:3: warning: ‘Py_OptimizeFlag’ is deprecated [-Wdeprecated-declarations]
1475 | if (unlikely(!Py_OptimizeFlag)) {
| ^~
In file included from /usr/local/include/python3.12/Python.h:48,
from aiohttp/_websocket.c:6:
/usr/local/include/python3.12/cpython/pydebug.h:13:37: note: declared here
13 | Py_DEPRECATED(3.12) PyAPI_DATA(int) Py_OptimizeFlag;
| ^~~~~~~~~~~~~~~
aiohttp/_websocket.c: In function ‘__Pyx_get_tp_dict_version’:
aiohttp/_websocket.c:2680:5: warning: ‘ma_version_tag’ is deprecated [-Wdeprecated-declarations]
2680 | return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0;
| ^~~~~~
In file included from /usr/local/include/python3.12/dictobject.h:90,
from /usr/local/include/python3.12/Python.h:61:
/usr/local/include/python3.12/cpython/dictobject.h:22:34: note: declared here
22 | Py_DEPRECATED(3.12) uint64_t ma_version_tag;
| ^~~~~~~~~~~~~~
aiohttp/_websocket.c: In function ‘__Pyx_get_object_dict_version’:
aiohttp/_websocket.c:2692:5: warning: ‘ma_version_tag’ is deprecated [-Wdeprecated-declarations]
2692 | return (dictptr && *dictptr) ? __PYX_GET_DICT_VERSION(*dictptr) : 0;
| ^~~~~~
/usr/local/include/python3.12/cpython/dictobject.h:22:34: note: declared here
22 | Py_DEPRECATED(3.12) uint64_t ma_version_tag;
| ^~~~~~~~~~~~~~
aiohttp/_websocket.c: In function ‘__Pyx_object_dict_version_matches’:
aiohttp/_websocket.c:2696:5: warning: ‘ma_version_tag’ is deprecated [-Wdeprecated-declarations]
2696 | if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict)))
| ^~
/usr/local/include/python3.12/cpython/dictobject.h:22:34: note: declared here
22 | Py_DEPRECATED(3.12) uint64_t ma_version_tag;
| ^~~~~~~~~~~~~~
aiohttp/_websocket.c: In function ‘__Pyx_CLineForTraceback’:
aiohttp/_websocket.c:2741:9: warning: ‘ma_version_tag’ is deprecated [-Wdeprecated-declarations]
2741 | __PYX_PY_DICT_LOOKUP_IF_MODIFIED(
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/include/python3.12/cpython/dictobject.h:22:34: note: declared here
22 | Py_DEPRECATED(3.12) uint64_t ma_version_tag;
| ^~~~~~~~~~~~~~
aiohttp/_websocket.c:2741:9: warning: ‘ma_version_tag’ is deprecated [-Wdeprecated-declarations]
2741 | __PYX_PY_DICT_LOOKUP_IF_MODIFIED(
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/include/python3.12/cpython/dictobject.h:22:34: note: declared here
22 | Py_DEPRECATED(3.12) uint64_t ma_version_tag;
| ^~~~~~~~~~~~~~
aiohttp/_websocket.c: In function ‘__Pyx_PyInt_As_long’:
aiohttp/_websocket.c:3042:53: error: ‘PyLongObject’ {aka ‘struct _longobject’} has no member named ‘ob_digit’
3042 | const digit* digits = ((PyLongObject*)x)->ob_digit;
| ^~
aiohttp/_websocket.c:3097:53: error: ‘PyLongObject’ {aka ‘struct _longobject’} has no member named ‘ob_digit’
3097 | const digit* digits = ((PyLongObject*)x)->ob_digit;
| ^~
aiohttp/_websocket.c: In function ‘__Pyx_PyInt_As_int’:
aiohttp/_websocket.c:3238:53: error: ‘PyLongObject’ {aka ‘struct _longobject’} has no member named ‘ob_digit’
3238 | const digit* digits = ((PyLongObject*)x)->ob_digit;
| ^~
aiohttp/_websocket.c:3293:53: error: ‘PyLongObject’ {aka ‘struct _longobject’} has no member named ‘ob_digit’
3293 | const digit* digits = ((PyLongObject*)x)->ob_digit;
| ^~
aiohttp/_websocket.c: In function ‘__Pyx_PyIndex_AsSsize_t’:
aiohttp/_websocket.c:3744:45: error: ‘PyLongObject’ {aka ‘struct _longobject’} has no member named ‘ob_digit’
3744 | const digit* digits = ((PyLongObject*)b)->ob_digit;
| ^~
error: command '/usr/bin/gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for aiohttp
Building wheel for frozenlist (pyproject.toml) ... done
Created wheel for frozenlist: filename=frozenlist-1.4.0-cp312-cp312-linux_x86_64.whl size=261459 sha256=a163dbc3bcddc5bf23bf228b0774c1c783465a0456dd3ad5da6b3fa065ec3e23
Stored in directory: /root/.cache/pip/wheels/f1/9c/94/9386cb0ea511a93226456388d41d35f1c24ba15a62ffd7b1ef
Building wheel for multidict (pyproject.toml) ... done
Created wheel for multidict: filename=multidict-6.0.4-cp312-cp312-linux_x86_64.whl size=114930 sha256=d6cde2e1fe8812d821a35581f31f1ff4797800ad474bf9ec1407fe5398ba2739
Stored in directory: /root/.cache/pip/wheels/f6/d8/ff/3c14a64b8f2ab1aa94ba2888f5a988be6ab446ec5c8d1a82da
Building wheel for yarl (pyproject.toml) ... done
Created wheel for yarl: filename=yarl-1.9.2-cp312-cp312-linux_x86_64.whl size=285235 sha256=fb299f423d87aae65e64d9a2b450b6540257c0f1a102ee01330a02f746ac791c
Stored in directory: /root/.cache/pip/wheels/84/e3/6a/7d0fa1abee8e4aa39922b5bd54689b4b5e4269b2821f482a32
Successfully built frozenlist multidict yarl
Failed to build aiohttp
ERROR: Could not build wheels for aiohttp, which is required to install pyproject.toml-based projects
```
### Expected behavior
Installation succeeds | Unable to use Python 3.12 due to `aiohttp` and other dependencies not supporting 3.12 yet | https://api.github.com/repos/langchain-ai/langchain/issues/11479/comments | 9 | 2023-10-06T14:05:06Z | 2024-03-09T07:27:40Z | https://github.com/langchain-ai/langchain/issues/11479 | 1,930,250,920 | 11,479 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python: 3.11
LangChain: 0.0.309
OS: Windows 10
pip list:
Package Version
---------------------------------------- ---------
aiofiles 23.2.1
aiohttp 3.8.5
aiosignal 1.3.1
anyio 3.7.1
async-timeout 4.0.3
asyncer 0.0.2
attrs 23.1.0
auth0-python 4.4.0
backoff 2.2.1
bcrypt 4.0.1
beautifulsoup4 4.12.2
bidict 0.22.1
certifi 2023.7.22
cffi 1.15.1
chardet 5.2.0
charset-normalizer 3.2.0
chroma-hnswlib 0.7.3
chromadb 0.4.10
click 8.1.7
colorama 0.4.6
coloredlogs 15.0.1
cryptography 41.0.3
dataclasses-json 0.5.14
Deprecated 1.2.14
django-environ 0.10.0
docx2txt 0.8
emoji 2.8.0
faiss-cpu 1.7.4
fastapi 0.97.0
fastapi-socketio 0.0.10
filelock 3.12.2
filetype 1.2.0
flatbuffers 23.5.26
frozenlist 1.4.0
fsspec 2023.6.0
google-search-results 2.4.2
googleapis-common-protos 1.60.0
gpt4all 1.0.9
greenlet 2.0.2
grpcio 1.57.0
h11 0.14.0
html2text 2020.1.16
httpcore 0.17.3
httptools 0.6.0
httpx 0.24.1
huggingface-hub 0.16.4
humanfriendly 10.0
idna 3.4
importlib-metadata 6.8.0
importlib-resources 6.0.1
Jinja2 3.1.2
joblib 1.3.2
jsonpatch 1.33
jsonpointer 2.4
langchain 0.0.309
langdetect 1.0.9
langsmith 0.0.43
lxml 4.9.3
markdownify 0.11.6
MarkupSafe 2.1.3
marshmallow 3.20.1
monotonic 1.6
mpmath 1.3.0
multidict 6.0.4
mypy-extensions 1.0.0
nest-asyncio 1.5.7
networkx 3.1
nltk 3.8.1
nodeenv 1.8.0
numexpr 2.8.5
numpy 1.25.2
onnxruntime 1.15.1
openai 0.28.1
openapi-schema-pydantic 1.2.4
opentelemetry-api 1.19.0
opentelemetry-exporter-otlp 1.19.0
opentelemetry-exporter-otlp-proto-common 1.19.0
opentelemetry-exporter-otlp-proto-grpc 1.19.0
opentelemetry-exporter-otlp-proto-http 1.19.0
opentelemetry-instrumentation 0.40b0
opentelemetry-proto 1.19.0
opentelemetry-sdk 1.19.0
opentelemetry-semantic-conventions 0.40b0
overrides 7.4.0
packaging 23.1
pandas 1.5.3
pdf2image 1.16.3
Pillow 10.0.0
pip 23.2.1
playwright 1.37.0
posthog 3.0.2
prisma 0.9.1
protobuf 4.24.1
pulsar-client 3.3.0
pycparser 2.21
pydantic 1.10.12
pyee 9.0.4
PyJWT 2.8.0
pypdf 3.15.5
PyPika 0.48.9
pyreadline3 3.4.1
python-dateutil 2.8.2
python-dotenv 1.0.0
python-engineio 4.5.1
python-graphql-client 0.4.3
python-iso639 2023.6.15
python-magic 0.4.27
python-socketio 5.8.0
pytz 2023.3
PyYAML 6.0.1
regex 2023.8.8
requests 2.31.0
safetensors 0.3.2
scikit-learn 1.3.0
scipy 1.11.2
sentence-transformers 2.2.2
sentencepiece 0.1.99
setuptools 68.0.0
six 1.16.0
sniffio 1.3.0
soupsieve 2.5
SQLAlchemy 1.4.49
starlette 0.27.0
sympy 1.12
syncer 2.0.3
tabulate 0.9.0
tenacity 8.2.3
threadpoolctl 3.2.0
tiktoken 0.5.1
tokenizers 0.13.3
tomli 2.0.1
tomlkit 0.12.1
torch 2.0.1
torchvision 0.15.2
tqdm 4.66.1
transformers 4.31.0
typing_extensions 4.7.1
typing-inspect 0.9.0
tzdata 2023.3
unstructured 0.10.18
uptrace 1.19.0
urllib3 2.0.4
uvicorn 0.22.0
watchfiles 0.19.0
websockets 11.0.3
wheel 0.38.4
wrapt 1.15.0
yarl 1.9.2
zipp 3.16.2
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chat_models import AzureChatOpenAI
from langchain.chains import create_extraction_chain
# Accessing the OPENAI_API_KEY KEY
import environ
DIR_COMMON = "./common"
env = environ.Env()
environ.Env.read_env(env_file=DIR_COMMON+"/.env_iveco")
# Schema
schema = {
"properties": {
"name": {"type": "string"},
"height": {"type": "integer"},
"hair_color": {"type": "string"},
},
"required": ["name", "height"],
}
# Input
inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde."""
# # Run chain
llm = AzureChatOpenAI(deployment_name="gpt4-datalab",model_name="gpt-4")
chain = create_extraction_chain(schema, llm,verbose= True)
print(chain.run(inp))
```
When I run it, I get:
```
> Entering new LLMChain chain...
Prompt after formatting:
Human: Extract and save the relevant entities mentionedin the following passage together with their properties.
Only extract the properties mentioned in the 'information_extraction' function.
If a property is not present and is not required in the function parameters, do not include it in the output.
Passage:
Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.
Traceback (most recent call last):
File "C:\Sviluppo\python\AI\Iveco\LangChain\extract_1.1.py", line 28, in <module>
print(chain.run(inp))
^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chains\base.py", line 501, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chains\base.py", line 306, in __call__
raise e
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chains\base.py", line 300, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chains\llm.py", line 93, in _call
response = self.generate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chains\llm.py", line 103, in generate
return self.llm.generate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chat_models\base.py", line 469, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chat_models\base.py", line 359, in generate
raise e
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chat_models\base.py", line 349, in generate
self._generate_with_cache(
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chat_models\base.py", line 501, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chat_models\openai.py", line 345, in _generate
response = self.completion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chat_models\openai.py", line 284, in completion_with_retry
return _completion_with_retry(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\tenacity\__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\langchain\chat_models\openai.py", line 282, in _completion_with_retry
return self.client.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 155, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\openai\api_requestor.py", line 299, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\openai\api_requestor.py", line 710, in _interpret_response
self._interpret_response_line(
File "C:\Users\vw603\Anaconda3\envs\python11\Lib\site-packages\openai\api_requestor.py", line 775, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: Unrecognized request argument supplied: functions
```
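I am not sure whether this is an Azure API-version or model-version mismatch, but for completeness this is the kind of configuration change I would try next (the version string below is an assumption on my part, not something I have verified):
```python
llm = AzureChatOpenAI(
    deployment_name="gpt4-datalab",
    model_name="gpt-4",
    openai_api_version="2023-07-01-preview",  # assumed: function calling may require a newer API version
)
```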
### Expected behavior
```
> Entering new LLMChain chain...
Prompt after formatting:
Human: Extract and save the relevant entities mentionedin the following passage together with their properties.
Only extract the properties mentioned in the 'information_extraction' function.
If a property is not present and is not required in the function parameters, do not include it in the output.
Passage:
Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.
> Finished chain.
[{'name': 'Alex', 'height': 5, 'hair_color'
``` | Fail using langchain extractor with AzureOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/11478/comments | 8 | 2023-10-06T13:29:21Z | 2024-03-18T16:05:39Z | https://github.com/langchain-ai/langchain/issues/11478 | 1,930,190,172 | 11,478 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.294
### Who can help?
@hwchase17 , @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Build a `SimpleSequentialChain` including in its chains a `RunnableLambda`
It throws: `AttributeError: 'RunnableLambda' object has no attribute 'run'`
### Expected behavior
A RunnableLambda to have an attribute `run`...
Or rename the concept of Runnable. | 'RunnableLambda' object has no attribute 'run' | https://api.github.com/repos/langchain-ai/langchain/issues/11477/comments | 2 | 2023-10-06T12:19:43Z | 2024-02-09T16:18:34Z | https://github.com/langchain-ai/langchain/issues/11477 | 1,930,053,551 | 11,477 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.279
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
With GPT-3.5 turbo 16k model,
Query - What is universal workflow type?
Thought: Do I need to use a tool? Yes
Action: chat_with_datasource
Action Input: What is universal workflow type?
Observation: The Universal Workflow type represents logic flows in a system. It includes two types: Flows and Subflows.
Thought: Do I need to use a tool? No
raise OutputParserException(f"Could not parse LLM output: `{text}`")
langchain.schema.output_parser.OutputParserException: Could not parse LLM output: `Do I need to use a tool? No`
Analysis -
However, when I checked, I found that it is using
Langchain/agents/Conversational/prompt.py
--------------------------------------------------
FORMAT_INSTRUCTIONS = """To use a tool, please use the following format:
```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
```
When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:
```
Thought: Do I need to use a tool? No
{ai_prefix}: [Your response here]
```"""
-----------------------------------------------------
So, when it does not need to use a tool, it should generate some response after `{ai_prefix}:` (for example, the LLM's previous response or the observation from the last step),
but it is unable to generate any response, which results in the parser error.
However, when I run the same code with GPT-4, I get the response below and the output is parsed correctly, since the LLM is able to generate
`{ai_prefix}: [Your response here]` as per the prompt instruction, whereas the GPT-3.5 model is unable to generate it.
-----------------------------------------------------------
Thought: Do I need to use a tool? Yes
Action: chat_with_datasource
Action Input: What is universal workflow type?
Observation: The Universal Workflow type represents logic flows in a system. It includes two types: Flows and Subflows.
Thought: Do I need to use a tool? No
AI: The Universal Workflow is a system that represents logic flows. It consists of different types such as Flows and Subflows.
------------------------------------------------------------
### Expected behavior
With the GPT-3.5 model as well, the agent should generate an AI: response when it finds that it does not need to use a tool, which would fix the parse error. | Parser Error with Langchain/agents/Conversational/Output_parser.py | https://api.github.com/repos/langchain-ai/langchain/issues/11475/comments | 3 | 2023-10-06T12:08:15Z | 2024-02-08T16:22:05Z | https://github.com/langchain-ai/langchain/issues/11475 | 1,930,035,345 | 11,475 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello,
I'm trying to dynamically change a prompt based on the value of the context in a ConversationalRetrievalChain:
```
prompt_template = """You are an AI assistant.
Use the following pieces of context to answer the question, in a precise way, at the end.
Context: {context if context else "some default: I don't have information about it"}
Question: {question}
Helpful Answer :"""
QA_PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
llm = AzureChatOpenAI(
temperature=0,
deployment_name="gpt-35-turbo",
)
memory = ConversationBufferWindowMemory(
k=1,
output_key='answer',
memory_key='chat_history',
return_messages=True)
retriever = vector.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": .70})
qa_chain = ConversationalRetrievalChain.from_llm(llm=llm,
memory=memory,
retriever=retriever,
condense_question_prompt=CONDENSE_QUESTION_PROMPT,
combine_docs_chain_kwargs={'prompt': QA_PROMPT},
return_source_documents=True,
verbose=True)
```
The idea is that when the retriever returns 0 documents, there is something in the context that pushes the model to say "I don't know" rather than make up an answer. I already tried adding "if you don't know the answer, don't make it up" to the prompt, and it didn't work.
Any idea how to add a condition in the template or control the LLM based on the output of the retriever?
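Pseudocode for the kind of control flow I am after (just to clarify the ask, not a working solution):
```python
docs = retriever.get_relevant_documents(question)
if not docs:
    # fall back to a canned answer instead of letting the model improvise
    answer = "I don't have information about it."
else:
    answer = qa_chain({"question": question})["answer"]
```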
### Suggestion:
_No response_ | Add condition in Prompt based on the retriever | https://api.github.com/repos/langchain-ai/langchain/issues/11474/comments | 12 | 2023-10-06T11:45:34Z | 2024-02-20T16:08:12Z | https://github.com/langchain-ai/langchain/issues/11474 | 1,930,003,162 | 11,474 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I use Qdrant through langchain to store vectors. But I can't find any example in docs where the dataset is searched based on a previously created collection. How to load the data? I found that Pinecone has "from_existing_index" function, which probably does the thing. But Qdrant doesn't have such a function. I created my own solution using qdrant_client, but I would like to use Langchain to simplify the script. How to do it?
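For reference, this is roughly the shape I was hoping LangChain would let me write (a sketch; the constructor arguments and connection details are my guess, not a documented recipe):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Qdrant
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")  # assumed connection details
vectorstore = Qdrant(client=client, collection_name="my_collection", embeddings=OpenAIEmbeddings())
docs = vectorstore.similarity_search("my query")
```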
### Suggestion:
_No response_ | Qdrant - Load the saved vector db/store ? | https://api.github.com/repos/langchain-ai/langchain/issues/11471/comments | 6 | 2023-10-06T10:37:36Z | 2024-01-08T12:29:50Z | https://github.com/langchain-ai/langchain/issues/11471 | 1,929,887,356 | 11,471 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I am using Weaviate with ConversationalRetrievalChain
I am using `similarity_score_threshold` search type.
It will call `similarity_search_with_score` in `vectorstore/weaviate.py`
Currently it's written like this
```python
if not self._by_text:
    vector = {"vector": embedded_query}
    result = (
        query_obj.with_near_vector(vector)
        .with_limit(k)
        .with_additional("vector")
        .do()
    )
else:
    result = (
        query_obj.with_near_text(content)
        .with_limit(k)
        .with_additional("vector")
        .do()
    )
```
The `with_additional` function can accept a list of `AdditionalProperties`
So there's no point in hardcoding it to just the vector; instead, let the user add the additional arguments required.
I've tested this just by changing ` .with_additional("vector")` to ` .with_additional(["vector", "certainty"] )` and it works just fine
### Motivation
I am using the Conversational Chain just because it's easier to manage than creating one. But its limitations are apparent. I want to retrieve the scores it gets, which is currently not possible.
### Your contribution
I don't know how to pass the `_additional` arguments in from vectorstore.as_retriever(), but just hardcoding the args I want, the tests worked fine. | Vectorstore and ConversationalRetrievalChain should return the certainty of the relevance | https://api.github.com/repos/langchain-ai/langchain/issues/11470/comments | 1 | 2023-10-06T10:29:22Z | 2024-02-06T16:26:01Z | https://github.com/langchain-ai/langchain/issues/11470 | 1,929,874,506 | 11,470 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
The first code example following the Pydantic section, right after "Lets define a class with attributes annotated with types."
results for me in the following pydantic error when using gpt-4:
RuntimeError: no validator found for <class '__main__.Properties'>, see `arbitrary_types_allowed` in Config
The following versions of langchain and pydantic were used:
langchain==0.0.309
pydantic==2.4.2
pydantic_core==2.10.1
### Idea or request for content:
It would be nice to have the code updated so that it works out of the box.
Perhaps a reference to the extraction functions in the 'openai functions' section would be nice. | DOC: pydantic extraction example code not working | https://api.github.com/repos/langchain-ai/langchain/issues/11468/comments | 2 | 2023-10-06T09:27:55Z | 2024-02-10T16:14:42Z | https://github.com/langchain-ai/langchain/issues/11468 | 1,929,774,144 | 11,468 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain latest version: 0.0.161
"mammoth": "^1.6.0",
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Creating the blob data with fetch:
const blobData = await fetch(totalFiles[i].url).then((res) => res.blob());
After this, I use this blob data in
const loader = new DocxLoader(blobData);
But when I run the next step
const data = await loader.load();
It gives me an error:
```
Error: Could not find file in options
    at Object.openZip (unzip.js:10:1)
    at extractRawText (index.js:82:1)
    at DocxLoader.parse (docx.js:25:1)
    at async langChainInitialization (index.js:88:20)
    at async handleRefDocUpload (index.js:136:9)
```
### Expected behavior
It should load the document. | DOCX loader is not working properly in js | https://api.github.com/repos/langchain-ai/langchain/issues/11466/comments | 3 | 2023-10-06T07:22:12Z | 2024-02-08T16:22:16Z | https://github.com/langchain-ai/langchain/issues/11466 | 1,929,588,339 | 11,466 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version : 0.0.292
Python version: 3.9.13
Platform: Apple M1, Sonoma 14.0
### Who can help?
@eyurtsev @hwc
### Information
- [x] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. Have a Google Drive folder with multiple documents in formats like `.mp4`, `.md`, Google Docs, Google Sheets, etc.
2. Try using `from langchain.document_loaders import GoogleDriveLoader`; it doesn't work, as it only supports Google Docs, Google Sheets, and PDF files.
```
folder_id = "0ABqitCqK0S_MUk9PVA" # have your own folder id, this is an example
loader = GoogleDriveLoader(
gdrive_api_file=GOOGLE_ACCOUNT_FILE,
folder_id= folder_id,
recursive=True,
# num_results=2, # Maximum number of file to load
)
```
3. Try using `from langchain.document_loaders import UnstructuredFileIOLoader` to load
```
folder_id = "0ABqitCqK0S_MUk9PVA" # have your own folder id, this is an example
loader = GoogleDriveLoader(
service_account_key=GOOGLE_ACCOUNT_FILE,
folder_id=folder_id,
file_loader_cls=UnstructuredFileIOLoader,
file_loader_kwargs={"mode": "elements"},
recursive=True
)
```
4. It throws the following error because `.mp4` files are not supported
```
The MIME type is 'video/mp4'. This file type is not currently supported in unstructured.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [34], in <cell line: 3>()
1 start_time = time.time()
----> 3 docs = loader.load()
5 print("Total docs loaded", len(docs))
7 end_time = time.time() - start_time
File /opt/homebrew/Caskroom/miniforge/base/envs/py39/lib/python3.9/site-packages/langchain/document_loaders/googledrive.py:351, in GoogleDriveLoader.load(self)
349 """Load documents."""
350 if self.folder_id:
--> 351 return self._load_documents_from_folder(
352 self.folder_id, file_types=self.file_types
353 )
354 elif self.document_ids:
355 return self._load_documents_from_ids()
File /opt/homebrew/Caskroom/miniforge/base/envs/py39/lib/python3.9/site-packages/langchain/document_loaders/googledrive.py:257, in GoogleDriveLoader._load_documents_from_folder(self, folder_id, file_types)
252 returns.extend(self._load_sheet_from_id(file["id"])) # type: ignore
253 elif (
254 file["mimeType"] == "application/pdf"
255 or self.file_loader_cls is not None
256 ):
--> 257 returns.extend(self._load_file_from_id(file["id"])) # type: ignore
258 else:
259 pass
File /opt/homebrew/Caskroom/miniforge/base/envs/py39/lib/python3.9/site-packages/langchain/document_loaders/googledrive.py:316, in GoogleDriveLoader._load_file_from_id(self, id)
314 fh.seek(0)
315 loader = self.file_loader_cls(file=fh, **self.file_loader_kwargs)
--> 316 docs = loader.load()
317 for doc in docs:
318 doc.metadata["source"] = f"https://drive.google.com/file/d/{id}/view"
File /opt/homebrew/Caskroom/miniforge/base/envs/py39/lib/python3.9/site-packages/langchain/document_loaders/unstructured.py:86, in UnstructuredBaseLoader.load(self)
84 def load(self) -> List[Document]:
85 """Load file."""
---> 86 elements = self._get_elements()
87 self._post_process_elements(elements)
88 if self.mode == "elements":
File /opt/homebrew/Caskroom/miniforge/base/envs/py39/lib/python3.9/site-packages/langchain/document_loaders/unstructured.py:319, in UnstructuredFileIOLoader._get_elements(self)
316 def _get_elements(self) -> List:
317 from unstructured.partition.auto import partition
--> 319 return partition(file=self.file, **self.unstructured_kwargs)
File /opt/homebrew/Caskroom/miniforge/base/envs/py39/lib/python3.9/site-packages/unstructured/partition/auto.py:183, in partition(filename, content_type, file, file_filename, url, include_page_breaks, strategy, encoding, paragraph_grouper, headers, ssl_verify, ocr_languages, pdf_infer_table_structure)
181 else:
182 msg = "Invalid file" if not filename else f"Invalid file {filename}"
--> 183 raise ValueError(f"{msg}. The {filetype} file type is not supported in partition.")
185 for element in elements:
186 element.metadata.url = url
ValueError: Invalid file. The FileType.UNK file type is not supported in partition.
```
### Expected behavior
I would expect this error to be handled by the loader: simply ignore the unsupported files and load the rest. It could emit a warning to notify the user, but it should not break the code. | The MIME type is 'video/mp4'. This file type is not currently supported in unstructured. | https://api.github.com/repos/langchain-ai/langchain/issues/11464/comments | 2 | 2023-10-06T03:04:26Z | 2024-02-06T16:26:17Z | https://github.com/langchain-ai/langchain/issues/11464 | 1,929,358,704 | 11,464 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
We are following https://python.langchain.com/docs/modules/agents/how_to/custom_llm_agent; however, by default it uses only one input. How can we make it accept multiple inputs, the same as STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION? Thanks.
### Suggestion:
_No response_ | Issue: how to use multiple inputs with custom llm agent | https://api.github.com/repos/langchain-ai/langchain/issues/11460/comments | 2 | 2023-10-06T01:08:12Z | 2024-01-31T23:38:04Z | https://github.com/langchain-ai/langchain/issues/11460 | 1,929,282,468 | 11,460 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain = "^0.0.304"
Python 3.11.5
MacBook Pro, Apple M2 chip, 8GB memory
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import OnlinePDFLoader
OnlinePDFLoader("https://www.africau.edu/images/default/sample.pdf").load()
```
I saw the following runtime import error:
```
ImportError: cannot import name 'PDFResourceManager' from 'pdfminer.converter' (/Users/{user}/Library/Caches/pypoetry/virtualenvs/replai-19tDtF2f-py3.11/lib/python3.11/site-packages/pdfminer/converter.py)
```
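For what it's worth, this particular ImportError usually points at a mismatch between the installed `pdfminer`/`pdfminer.six` distributions and what the PDF partitioning stack expects; a hedged cleanup to try (verify that it actually applies to your environment first):

```bash
# Assumption: a stale `pdfminer` install is shadowing `pdfminer.six`.
pip uninstall -y pdfminer
pip install --upgrade pdfminer.six "unstructured[pdf]"
```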
### Expected behavior
I expected `OnlinePDFLoader` to load the file. | OnlinePDFLoader crashes with import error | https://api.github.com/repos/langchain-ai/langchain/issues/11459/comments | 4 | 2023-10-05T23:46:16Z | 2024-05-20T18:23:03Z | https://github.com/langchain-ai/langchain/issues/11459 | 1,929,226,840 | 11,459 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain: 0.0.250
Python: 3.11.4
OS: Pop 22.04
### Who can help?
@agola11
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**Input**
```py
from pathlib import Path
from langchain.document_loaders.blob_loaders.schema import Blob
from langchain.document_loaders.parsers.audio import OpenAIWhisperParser
OPENAI_API_KEY = "<MY-KEY>"
audio_file = Path("~/Downloads/audio.mp3").expanduser()
audio_total_seconds = AudioSegment.from_file(audio_file).duration_seconds
print("Audio length (seconds):", audio_total_seconds)
blob = Blob.from_path(audio_file, mime_type="audio/mp3")
parser = OpenAIWhisperParser(api_key=OPENAI_API_KEY)
parser.parse(blob)
```
**Output**
```
Audio length (seconds): 1200.064
Transcribing part 1!
Transcribing part 2!
Attempt 1 failed. Exception: Audio file is too short. Minimum audio length is 0.1 seconds.
Attempt 2 failed. Exception: Invalid file format. Supported formats: ['flac', 'm4a', 'mp3', 'mp4', 'mpeg', 'mpga', 'oga', 'ogg', 'wav', 'webm']
Attempt 3 failed. Exception: Invalid file format. Supported formats: ['flac', 'm4a', 'mp3', 'mp4', 'mpeg', 'mpga', 'oga', 'ogg', 'wav', 'webm']
Failed to transcribe after 3 attempts.
```
### Expected behavior
Due to its internal rule of processing audio in [20-minute chunks](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/document_loaders/parsers/audio.py#L47), `OpenAIWhisperParser` is prone to crashing when transcribing audio with durations that are dangerously close to this limit. Given that OpenAI already has a very low audio length threshold of 0.1 seconds, a simple bypass could effectively resolve this issue.
```python
# https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/document_loaders/parsers/audio.py#L47
chunk_duration = 20
chunk_duration_ms = chunk_duration * 60 * 1000
chunk_length_threshold = 0.1
# Split the audio into chunk_duration_ms chunks
for split_number, i in enumerate(range(0, len(audio), chunk_duration_ms)):
# Audio chunk
chunk = audio[i : i + chunk_duration_ms]
if chunk.duration_seconds < chunk_length_threshold:
continue
``` | `OpenAIWhisperParser` raises error if audio has duration too close to the chunk limit | https://api.github.com/repos/langchain-ai/langchain/issues/11449/comments | 4 | 2023-10-05T19:44:40Z | 2024-02-23T18:09:09Z | https://github.com/langchain-ai/langchain/issues/11449 | 1,928,949,307 | 11,449 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
When I'm running the code "from langchain.chains import LLMChain", I'm getting the following error:
'RuntimeError: no validator found for <class 're.Pattern'>, see `arbitrary_types_allowed` in Config'
<img width="887" alt="Screenshot 2023-10-06 at 12 01 49 AM" src="https://github.com/langchain-ai/langchain/assets/85587494/0b171391-bbc6-41f8-b237-17bc99b11b95">
<img width="871" alt="Screenshot 2023-10-06 at 12 01 58 AM" src="https://github.com/langchain-ai/langchain/assets/85587494/592c2688-be0f-4cf4-8e7d-a4a56d7d056c">
### Suggestion:
_No response_ | Error while importing LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/11448/comments | 3 | 2023-10-05T18:32:29Z | 2024-02-10T16:14:47Z | https://github.com/langchain-ai/langchain/issues/11448 | 1,928,839,177 | 11,448 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm trying to build a general-purpose chat bot, but whenever I try to get it to write code I get `[ERROR] OutputParserException: Could not parse LLM output:`.
Is there a way to handle (parse) every kind of LLM output?
Any help would be great.
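For context, one knob that often helps with this exception is the executor's `handle_parsing_errors` flag; a sketch, assuming the agent is built with `initialize_agent` (`tools` and `llm` are placeholders for your own objects):

```python
from langchain.agents import AgentType, initialize_agent

# Ask the executor to feed parsing failures back to the model
# instead of raising OutputParserException immediately.
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    handle_parsing_errors=True,
)
```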
### Suggestion:
_No response_ | Issue: Trying to be able to parse all parse of LLM outputs | https://api.github.com/repos/langchain-ai/langchain/issues/11447/comments | 3 | 2023-10-05T17:10:23Z | 2024-02-07T16:22:53Z | https://github.com/langchain-ai/langchain/issues/11447 | 1,928,727,245 | 11,447 |
[
"hwchase17",
"langchain"
]
| ### System Info
**Specs:**
langchain 0.0.301
Python 3.11.4
embeddings =HuggingFace embeddings
llm = Claud 2
EXAMPLE:
1. Chunks object below in my code contains the following string: leflunomide **(LEF) (≤ 20 mg/day)**
`Chroma.from_documents(documents=chunks, embedding=embeddings, collection_name=collection_name, persist_directory=persist_db) `
2. After saving and retrieving from my local file with:
`db = Chroma(persist_directory=persist_db, embedding_function=embeddings, collection_name=collection_name)`
. . . then extracting with . . .
`db.get(include=['documents'])`
3. That string is now: leflunomide **(LEF) ( ≤20 mg/day)**, with a single space newly inserted before the ≤
This matters because it messes up retrieval augmentation with queries like “what doses of leflunomide are appropriate?” using Claude-2 as the llm
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Chunks object below in my code contains the following string: leflunomide **(LEF) (≤ 20 mg/day)**
`Chroma.from_documents(documents=chunks, embedding=embeddings, collection_name=collection_name, persist_directory=persist_db) `
2. After saving and retrieving from my local file with:
`db = Chroma(persist_directory=persist_db, embedding_function=embeddings, collection_name=collection_name)`
. . . then extracting with . . .
`db.get(include=['documents'])`
3. That string is now: leflunomide **(LEF) ( ≤20 mg/day)**, with a single space newly inserted before the ≤
This matters because it messes up retrieval augmentation with queries like “what doses of leflunomide are appropriate?” using Claude-2 as the llm
### Expected behavior
See description above | Chroma is adding an extra space (very consistently) in front of '≤ ’ characters and it has a real impact. | https://api.github.com/repos/langchain-ai/langchain/issues/11441/comments | 7 | 2023-10-05T16:00:27Z | 2024-02-12T16:12:14Z | https://github.com/langchain-ai/langchain/issues/11441 | 1,928,619,715 | 11,441 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
_No response_
### Suggestion:
_No response_ | Issue: Is it possible to use a HF LLM using Conversational Retrieval Agent? | https://api.github.com/repos/langchain-ai/langchain/issues/11440/comments | 3 | 2023-10-05T15:54:16Z | 2024-02-10T16:14:57Z | https://github.com/langchain-ai/langchain/issues/11440 | 1,928,609,140 | 11,440 |
[
"hwchase17",
"langchain"
]
| Hello there,
I have deployed an OpenLLM on a managed service that is protected by the use of an auth Bearer token:
```bash
curl -X 'POST' \
'https://themodel.url/v1/generate' \
-H "Authorization: Bearer Sh6Kh4[ . . . super long bearer token ]W18UiWuzsz+0r+U"
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"prompt": "What is the difference between a pigeon",
"llm_config": {
"use[ . . . and so on]
}
}'
```
Curl works like a charm.
In LangChain, I try to create my new llm as such:
```python
llm = OpenLLM(server_url="https://themodel.url/v1/generate", temperature=0.2)
```
But I don't know how to include this Bearer token, I even suspect it is impossible...
**Bot seems to confirm this is impossible as of 5 Oct 2023, as seen in https://github.com/langchain-ai/langchain/discussions/11437** ; this issue would then be a feature request.
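In the meantime, a minimal workaround sketch that bypasses the LangChain wrapper and mirrors the curl call above (the payload shape is copied from that curl; everything else, including the token placeholder, is an assumption):

```python
import requests

# Mirrors the curl above; this is not the OpenLLM LangChain wrapper.
resp = requests.post(
    "https://themodel.url/v1/generate",
    headers={
        "Authorization": "Bearer <token>",
        "Content-Type": "application/json",
    },
    json={"prompt": "What is the difference between a pigeon", "llm_config": {}},
    timeout=60,
)
print(resp.json())
```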
Thanks!
Cheers,
Colin
_Originally posted by @ColinLeverger in https://github.com/langchain-ai/langchain/discussions/11437_ | Bearer token support for self-deployed OpenLLM | https://api.github.com/repos/langchain-ai/langchain/issues/11438/comments | 4 | 2023-10-05T15:09:13Z | 2024-02-10T16:15:02Z | https://github.com/langchain-ai/langchain/issues/11438 | 1,928,525,648 | 11,438 |
[
"hwchase17",
"langchain"
]
| ### System Info
python: 3.11
langchain: 0.0.306
### Who can help?
Although the PGVector implementation uses SQLAlchemy, which supports connection pooling, it takes a single connection from the pool, saves it as an instance attribute, and reuses it for all requests.
```python
class PGVector(VectorStore):
# ...
def __init__(
self,
# ...
) -> None:
# ...
self.__post_init__()
def __post_init__(
self,
) -> None:
self._conn = self.connect()
# ...
def create_vector_extension(self) -> None:
try:
with Session(self._conn) as session:
# ...
except Exception as e:
self.logger.exception(e)
def create_tables_if_not_exists(self) -> None:
with self._conn.begin():
Base.metadata.create_all(self._conn)
@contextlib.contextmanager
def _make_session(self) -> Generator[Session, None, None]:
yield Session(self._conn)
# ...
```
This means that:
- Since the same connection is used to handle all requests, they are not executed in parallel but rather enqueued. In a scenario with a large number of requests, this may not scale well.
- Useful pool features like connection recycling and liveness testing cannot be used.
@hwchase17
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Configure Postgres to kill idle sessions after 10 seconds (I'm using the `ankane/pgvector` docker image):
```sql
ALTER SYSTEM SET idle_session_timeout TO '10000';
ALTER SYSTEM SET idle_in_transaction_session_timeout TO '10000';
```
2. Restart the database
3. Confirm the new settings are active
```sql
SHOW ALL;
```
4. Create a sample program that instantiates a `PGVector` class and runs a similarity search every 30 seconds, sleeping between iterations
```python
if __name__ == '__main__':
# Logging level
logging.basicConfig(level=logging.DEBUG)
load_dotenv()
# PG Vector
driver = os.environ["PGVECTOR_DRIVER"]
host = os.environ["PGVECTOR_HOST"]
port = os.environ["PGVECTOR_PORT"]
database = os.environ["PGVECTOR_DATABASE"]
user = os.environ["PGVECTOR_USER"]
password = os.environ["PGVECTOR_PASSWORD"]
collection_name = "state_of_the_union_test"
connection_string = f'postgresql+{driver}://{user}:{password}@{host}:{port}/{database}'
embeddings = OpenAIEmbeddings()
loader = TextLoader("./state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
chunks = text_splitter.split_documents(documents)
db = PGVector.from_documents(
embedding=embeddings,
documents=chunks,
collection_name=collection_name,
connection_string=connection_string,
)
for i in range(100):
try:
print('--> *** Searching by similarity ***')
result = db.similarity_search_with_score("foo")
print('--> *** Documents retrieved successfully ***')
# for doc, score in result:
# print(f'{score}:{doc.page_content}')
except Exception as e:
print('--> *** Fail ***')
print(str(e))
print('--> *** Sleeping ***')
time.sleep(30)
print('\n\n')
```
5. Confirm in the console output that some requests fail because the idle connection was closed
```
(poc-pgvector-py3.11) ➜ poc-pgvector python -m poc_pgvector
DEBUG:openai:message='Request to OpenAI API' method=post path=https://.../v1/embeddings
DEBUG:urllib3.util.retry:Converted retries value: 2 -> Retry(total=2, connect=None, read=None, redirect=None, status=None)
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): ...:443
DEBUG:urllib3.connectionpool:https://...:443 "POST /v1/embeddings HTTP/1.1" 200 None
DEBUG:openai:message='OpenAI API response' path=https://.../v1/embeddings processing_ms=211 request_id=ec4ef62b-da79-4055-a8da-66fce881a6e9 response_code=200
--> *** Searching by similarity ***
DEBUG:openai:message='Request to OpenAI API' method=post path=https://.../v1/embeddings
DEBUG:openai:api_version=None data='{"input": [[8134]], "model": "text-embedding-ada-002", "encoding_format": "base64"}' message='Post details'
DEBUG:urllib3.connectionpool:https://...:443 "POST /v1/embeddings HTTP/1.1" 200 None
DEBUG:openai:message='OpenAI API response' path=https://.../v1/embeddings processing_ms=40 request_id=f2b791ac-0c88-4362-b14c-d6a36084a932 response_code=200
--> *** Documents retrieved successfully ***
--> *** Sleeping ***
--> *** Searching by similarity ***
DEBUG:openai:message='Request to OpenAI API' method=post path=https://.../v1/embeddings
DEBUG:openai:api_version=None data='{"input": [[8134]], "model": "text-embedding-ada-002", "encoding_format": "base64"}' message='Post details'
DEBUG:urllib3.connectionpool:https://...:443 "POST /v1/embeddings HTTP/1.1" 200 None
DEBUG:openai:message='OpenAI API response' path=https://.../v1/embeddings processing_ms=30 request_id=de593b87-29ad-4342-81c1-3c495d5b4f56 response_code=200
--> *** Fail ***
(psycopg2.OperationalError) server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
[SQL: SELECT langchain_pg_collection.name AS langchain_pg_collection_name, langchain_pg_collection.cmetadata AS langchain_pg_collection_cmetadata, langchain_pg_collection.uuid AS langchain_pg_collection_uuid
FROM langchain_pg_collection
WHERE langchain_pg_collection.name = %(name_1)s
LIMIT %(param_1)s]
[parameters: {'name_1': 'state_of_the_union_test', 'param_1': 1}]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
--> *** Sleeping ***
--> *** Searching by similarity ***
DEBUG:openai:message='Request to OpenAI API' method=post path=https://.../v1/embeddings
DEBUG:openai:api_version=None data='{"input": [[8134]], "model": "text-embedding-ada-002", "encoding_format": "base64"}' message='Post details'
DEBUG:urllib3.connectionpool:https://...:443 "POST /v1/embeddings HTTP/1.1" 200 None
DEBUG:openai:message='OpenAI API response' path=https://.../v1/embeddings processing_ms=42 request_id=e2cb6145-8ceb-4127-b4ee-c5130ba42e8e response_code=200
--> *** Documents retrieved successfully ***
--> *** Sleeping ***
--> *** Searching by similarity ***
DEBUG:openai:message='Request to OpenAI API' method=post path=https://.../v1/embeddings
DEBUG:openai:api_version=None data='{"input": [[8134]], "model": "text-embedding-ada-002", "encoding_format": "base64"}' message='Post details'
DEBUG:urllib3.connectionpool:https://...:443 "POST /v1/embeddings HTTP/1.1" 200 None
DEBUG:openai:message='OpenAI API response' path=https://.../v1/embeddings processing_ms=38 request_id=05e15050-0c37-4f7c-a118-a3fafc2bf979 response_code=200
--> *** Fail ***
(psycopg2.OperationalError) server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
[SQL: SELECT langchain_pg_collection.name AS langchain_pg_collection_name, langchain_pg_collection.cmetadata AS langchain_pg_collection_cmetadata, langchain_pg_collection.uuid AS langchain_pg_collection_uuid
FROM langchain_pg_collection
WHERE langchain_pg_collection.name = %(name_1)s
LIMIT %(param_1)s]
[parameters: {'name_1': 'state_of_the_union_test', 'param_1': 1}]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
--> *** Sleeping ***
--> *** Searching by similarity ***
DEBUG:openai:message='Request to OpenAI API' method=post path=https://.../v1/embeddings
DEBUG:openai:api_version=None data='{"input": [[8134]], "model": "text-embedding-ada-002", "encoding_format": "base64"}' message='Post details'
DEBUG:urllib3.connectionpool:https://...:443 "POST /v1/embeddings HTTP/1.1" 200 None
DEBUG:openai:message='OpenAI API response' path=https://.../v1/embeddings processing_ms=28 request_id=2deecd8b-42de-42dd-a242-004d9d5811eb response_code=200
--> *** Documents retrieved successfully ***
--> *** Sleeping ***
...
```
### Expected behavior
- Use pooled connections so that requests can be executed in parallel.
- Allow fine-tuning the connection pool used by PGVector, e.g. configuring its maximum size, the number of seconds to wait before giving up on getting a connection from the pool, and enabling [pessimistic](https://docs.sqlalchemy.org/en/20/core/pooling.html#pool-disconnects-pessimistic) (liveness-testing connections when borrowing them from the pool) or optimistic (connection recycle time) strategies for disconnect handling.
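For reference, a minimal sketch of the engine-level knobs this would unlock (standard SQLAlchemy `create_engine` parameters; the numbers are placeholders):

```python
from sqlalchemy import create_engine

# Standard SQLAlchemy pooling options that a pooled PGVector could expose.
engine = create_engine(
    connection_string,
    pool_size=5,          # steady-state connections kept open
    max_overflow=10,      # extra connections allowed under burst load
    pool_timeout=30,      # seconds to wait for a free connection
    pool_recycle=1800,    # recycle connections older than 30 minutes
    pool_pre_ping=True,   # pessimistic liveness check before each checkout
)
```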
| PGVector bound to a single database connection | https://api.github.com/repos/langchain-ai/langchain/issues/11433/comments | 7 | 2023-10-05T12:52:39Z | 2024-05-11T16:07:07Z | https://github.com/langchain-ai/langchain/issues/11433 | 1,928,223,846 | 11,433 |
[
"hwchase17",
"langchain"
]
| ### System Info
Windows 11
Python 3.10
Langchain 0.0.286
Pyinstaller 5.7.0
File "test1.py", line 5, in <module>
File "langchain\document_loaders\unstructured.py", line 86, in load
File "langchain\document_loaders\pdf.py", line 57, in _get_elements
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "unstructured\partition\pdf.py", line 40, in <module>
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "unstructured\partition\lang.py", line 3, in <module>
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "iso639\__init__.py", line 4, in <module>
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "iso639\language.py", line 12, in <module>
sqlite3.OperationalError: unable to open database file
[19716] Failed to execute script 'test1' due to unhandled exception!
### Who can help?
@agola11
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.document_loaders import UnstructuredPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
loader = UnstructuredPDFLoader("C:\\Temp\\AI_experimenting\\indexer_test1\\test_docs\\testdoc1.pdf")
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=20)
docs_transformed = text_splitter.split_documents(data)
print (f'Number of chunks of data: {len(docs_transformed )}')
```
Compile the code above using the following Pyinstaller spec file:
```
# -*- mode: python -*-
import os
from os.path import join, dirname, basename
datas = [
("C:/Temp/AI_experimenting/venv/Lib/site-packages/langchain/chains/llm_summarization_checker/prompts/*.txt", "langchain/chains/llm_summarization_checker/prompts"),
]
googleapihidden = ["pkg_resources.py2_warn",
"googleapiclient",
"apiclient",
"google-api-core"]
# list of modules to exclude from analysis
excludes = []
# list of hiddenimports
hiddenimports = googleapihidden
# binary data
# assets
tocs = []
a = Analysis(['test1.py'],
pathex=[os.getcwd()],
binaries=None,
datas=datas,
hiddenimports=hiddenimports,
hookspath=[],
runtime_hooks=[],
excludes=excludes,
win_no_prefer_redirects=False,
win_private_assemblies=False)
pyz = PYZ(a.pure, a.zipped_data)
exe1 = EXE(pyz,
a.scripts,
name='mytest',
exclude_binaries=True,
icon='',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
console=True)
coll = COLLECT(exe1,
a.binaries,
a.zipfiles,
a.datas,
*tocs,
strip=False,
upx=True,
name='mytest_V1')
```
Compile with: pyinstaller --log-level DEBUG test1.spec
Then in cmd terminal change working directory to ....dist\mytest_V1 and run mytest.exe
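Note: the frozen-app traceback above fails inside the `iso639` package while opening its bundled SQLite database, so one hedged adjustment to try in `test1.spec` is to bundle that package's data files as well (this uses PyInstaller's standard `collect_data_files` hook helper; whether `iso639` is the only package with missing data files is an assumption):

```python
# Near the top of test1.spec, before Analysis(...) is created - a sketch.
from PyInstaller.utils.hooks import collect_data_files

datas += collect_data_files("iso639")
# unstructured also ships data files and may need the same treatment:
# datas += collect_data_files("unstructured")
```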
### Expected behavior
The expected behavior would be an output like:
Number of chunks of data: 18 | Loading UnstructeredPDF fails after build with PyInstaller | https://api.github.com/repos/langchain-ai/langchain/issues/11430/comments | 5 | 2023-10-05T12:08:46Z | 2023-10-05T20:04:59Z | https://github.com/langchain-ai/langchain/issues/11430 | 1,928,130,885 | 11,430 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am using Python 3.11.5 and I run into this issue with `ModuleNotFoundError: No module named 'langchain.document_loaders'` after running `pip install 'langchain[all]'`, which appears to be installing langchain-0.0.39
If I then run `pip uninstall langchain`, followed by `pip install langchain`, it proceeds to install langchain-0.0.308 and suddenly my document loaders work again.
- Why does langchain 0.0.39 have document_loaders broken?
- Why does langchain[all] install a different version of langchain?
### Reproduction
1. Install python 3.11.5 (using pyenv on OSX)
2. Run `pip install 'langchain[all]'`
3. Observe langchain version that is installed
4. Run `ipython` and execute `from langchain.document_loaders import WebBaseLoader` to receive the module error.
5. Run `pip uninstall langchain`
6. Run `pip install langchain`
7. Note how an older version of langchain is installed
8. Run `ipython` and execute `from langchain.document_loaders import WebBaseLoader` to see how it's working.
### Expected behavior
`pip install langchain` should install the same underlying version of langchain as the version of langchain that includes all of the various modules. | pip installing 'langchain[all]' installs a newer (broken) version than pip install langchain does | https://api.github.com/repos/langchain-ai/langchain/issues/11426/comments | 1 | 2023-10-05T10:49:12Z | 2023-10-05T11:18:44Z | https://github.com/langchain-ai/langchain/issues/11426 | 1,927,989,312 | 11,426 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
In the [QA using a Retriever docs](https://python.langchain.com/docs/use_cases/question_answering/how_to/vector_db_qa) you can see that each LLM reply starts with an extra leading space. What is the purpose of this space, or is it just a quirk of the prompt?
Copied from the docs provided above
```python3
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
```
Provides response:
`" The president said that she is one of the nation's top legal minds ..."`
### Idea or request for content:
_No response_ | DOC: QA using a Retriever, why do we need additional empty space? | https://api.github.com/repos/langchain-ai/langchain/issues/11425/comments | 2 | 2023-10-05T10:09:14Z | 2024-02-07T16:23:13Z | https://github.com/langchain-ai/langchain/issues/11425 | 1,927,900,506 | 11,425 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am trying to access documents from SharePoint using Langchain's [SharePointLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sharepoint.SharePointLoader.html).
Following this [documentation](https://python.langchain.com/docs/integrations/document_loaders/microsoft_sharepoint), I changed the redirect URI to a localhost address. How can I pass the new redirect URI as a parameter to the [SharePointLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sharepoint.SharePointLoader.html) class?
### Suggestion:
_No response_ | Issue: Custom Redirect URI as parameter in SharePointLoader | https://api.github.com/repos/langchain-ai/langchain/issues/11423/comments | 3 | 2023-10-05T06:16:38Z | 2024-02-07T16:23:18Z | https://github.com/langchain-ai/langchain/issues/11423 | 1,927,461,674 | 11,423 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.300
supabase==1.1.1
### Who can help?
@hwaking @eyurtsev @agola11 @eyurtsev @hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
### **Creation of Supabase client**
```python
supabase_url: str = os.environ.get("SUPABASE_URL")  # type: ignore
supabase_key: str = os.environ.get("SUPABASE_SERVICE_KEY")  # type: ignore
supabase_client = create_client(supabase_url, supabase_key)
```
### Text Splitter creation
```python
text_splitter = CharacterTextSplitter(
    chunk_size=800,
    chunk_overlap=0,
)
```
### **Embeddings**
`embeddings = OpenAIEmbeddings()`
### **Loading the document**
```python
loader = PyPDFLoader("Alice_in_wonderland2.pdf")
pages = loader.load_and_split()
docs = text_splitter.split_documents(pages)
```
### **Save values to Supabase**
`vector_store = SupabaseVectorStore.from_documents(documents=docs, embedding=embeddings, client=supabase_client)`
### **Error encountered**
```
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\VSCode\Python\langchain project\supabase-try\test.py", line 34, in <module>
vector_store = SupabaseVectorStore.from_documents(
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\vectorstores\base.py", line 417, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\vectorstores\supabase.py", line 147, in from_texts
cls._add_vectors(client, table_name, embeddings, docs, ids)
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\vectorstores\supabase.py", line 323, in _add_vectors
result = client.from_(table_name).upsert(chunk).execute() # type: ignore
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\postgrest\_sync\request_builder.py", line 57, in execute
r = self.session.request(
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\httpx\_client.py", line 814, in request
return self.send(request, auth=auth, follow_redirects=follow_redirects)
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\httpx\_client.py", line 901, in send
response = self._send_handling_auth(
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\httpx\_client.py", line 929, in _send_handling_auth
response = self._send_handling_redirects(
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\httpx\_client.py", line 966, in _send_handling_redirects
response = self._send_single_request(request)
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\httpx\_client.py", line 1002, in _send_single_request
response = transport.handle_request(request)
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\httpx\_transports\default.py", line 218, in handle_request
resp = self._pool.handle_request(req)
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\contextlib.py", line 135, in __exit__
self.gen.throw(type, value, traceback)
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\httpx\_transports\default.py", line 77, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.WriteTimeout: The write operation timed out
```
### I tried changing the code according to the langchain docs as
```python
vector_store = SupabaseVectorStore.from_documents(
    docs,
    embeddings,
    client=supabase_client,
    table_name="documents",
    query_name="match_documents",
)
```
### Then I encountered the following error
```
2023-10-05 10:33:29,879:INFO - HTTP Request: POST https://scptrclvtrvcwjdunlrn.supabase.co/rest/v1/documents "HTTP/1.1 404 Not Found"
Traceback (most recent call last):
  File "D:\VSCode\Python\langchain project\supabase-try\test.py", line 34, in <module>
    vector_store = SupabaseVectorStore.from_documents(
```
**I didn't create the `documents` table in Supabase manually, because I need it to be created automatically by the code. If it has to be created manually, I need to know the steps to create it and how to integrate it as well. Please help me as soon as possible.**
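For reference, a hedged sketch of the SQL that the Supabase/LangChain quickstart typically uses to create the table and matching function manually; the names, the 1536 dimension (OpenAI `text-embedding-ada-002`), and the exact `match_documents` signature are assumptions and should be verified against the docs for the installed versions:

```sql
-- Sketch of the schema the SupabaseVectorStore expects to already exist.
create extension if not exists vector;

create table documents (
  id uuid primary key,
  content text,
  metadata jsonb,
  embedding vector(1536)  -- assumes OpenAI text-embedding-ada-002
);

create function match_documents (
  query_embedding vector(1536),
  filter jsonb default '{}'
) returns table (id uuid, content text, metadata jsonb, similarity float)
language plpgsql as $$
begin
  return query
  select d.id, d.content, d.metadata,
         1 - (d.embedding <=> query_embedding) as similarity
  from documents d
  where d.metadata @> filter
  order by d.embedding <=> query_embedding;
end;
$$;
```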
### Expected behavior
SupabaseVectorStore.from_documents works fine and Store all the embeddings in the vector store. | SupabaseVectorStore.from_documents is not working | https://api.github.com/repos/langchain-ai/langchain/issues/11422/comments | 28 | 2023-10-05T05:39:43Z | 2024-07-04T16:06:49Z | https://github.com/langchain-ai/langchain/issues/11422 | 1,927,423,576 | 11,422 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am using the UnstructuredWordDocumentLoader module to load a .docx file. I have already tried several things, but the returned text is always missing the percentage values that appear several times in the .docx file.
How can I keep / retrieve the percentage (%) symbol?
Thanks
### Who can help?
@hwchase17 @agola11 @eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
loader = UnstructuredWordDocumentLoader("example_data/fake.docx", mode="elements")
data = loader.load()
```
### Expected behavior
Have in the data the percentages characters also. | UnstructuredWordDocumentLoader - Missing Percentages (%) characters in data | https://api.github.com/repos/langchain-ai/langchain/issues/11416/comments | 3 | 2023-10-05T00:35:06Z | 2024-02-07T16:23:23Z | https://github.com/langchain-ai/langchain/issues/11416 | 1,927,175,230 | 11,416 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
In the documentation for some examples like [Weaviate](https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.weaviate.Weaviate.html#langchain.vectorstores.weaviate.Weaviate.delete) and [ElasticsearchVectorStore](https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elasticsearch.ElasticsearchStore.html#langchain.vectorstores.elasticsearch.ElasticsearchStore.delete), the delete method is described as having the ids parameter as optional.
`delete(ids: Optional[List[str]] = None, **kwargs: Any) → None`
However, the ids parameter is mandatory for those vectorstores, and if it is not provided, a ValueError will be raised with the message "No ids provided to delete." Here is the revised [documentation](https://github.com/langchain-ai/langchain/blob/b9fad28f5e093415d76aeb71b5e555eb87fd2ec2/libs/langchain/langchain/vectorstores/elastic_vector_search.py#L336C5-L348C61) to clarify this:
```python
def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> None:
    """Delete by vector IDs.

    Args:
        ids: List of ids to delete.
    """
    if ids is None:
        raise ValueError("No ids provided to delete.")
    # TODO: Check if this can be done in bulk
    for id in ids:
        self.client.delete(index=self.index_name, id=id)
```
### Idea or request for content:
Please clarify in the documentation
Is it also possible to get bulk delete functionality for some vectorstore (ex. ElasticsearchVectorStore/ Weviate)? | DOC: optional ids issue in delete method in vectorstores | https://api.github.com/repos/langchain-ai/langchain/issues/11414/comments | 1 | 2023-10-04T22:50:28Z | 2024-02-06T16:27:01Z | https://github.com/langchain-ai/langchain/issues/11414 | 1,927,090,102 | 11,414 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I would like to implement a `ListOutputParser` that could transform Markdown list into a string list
### Motivation
It seems that most of the lists generated by LLMs are Markdown lists, so I think this will be useful.
It might also make it easier to handle cases where the generated list doesn't work with the existing `ListOutputParser` implementations (NumberedListOutputParser and CommaSeparatedListOutputParser).
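For illustration, roughly what such a parser could look like (a rough sketch, not the linked PR's actual implementation; the import path may differ between versions):

```python
import re
from typing import List

from langchain.output_parsers.list import ListOutputParser


class MarkdownListOutputParser(ListOutputParser):
    """Parse a markdown bullet list (-, * or +) into a list of strings."""

    def get_format_instructions(self) -> str:
        return "Your response should be a markdown bullet list, one `- item` per line."

    def parse(self, text: str) -> List[str]:
        # Capture the text after each bullet marker, one match per line.
        return re.findall(r"^\s*[-*+]\s+(.+)$", text, flags=re.MULTILINE)
```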
### Your contribution
Here is the PR: https://github.com/langchain-ai/langchain/pull/11411 | Feature: Markdown list output parser | https://api.github.com/repos/langchain-ai/langchain/issues/11410/comments | 1 | 2023-10-04T21:38:14Z | 2023-10-07T01:57:03Z | https://github.com/langchain-ai/langchain/issues/11410 | 1,927,019,118 | 11,410 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.303, python 3.9, redis-py 5.0.1, latest redis stack with RedisSearch module
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Try to create a Redis vector store from a non-existent `index_name` like this:
```python
embeddings = OpenAIEmbeddings()
vectorstore = Redis.from_existing_index(
embeddings,
index_name=index_name,
schema='schema.yaml',
redis_url=REDIS_URL,
)
```
It executes and returns an object of type `Redis`, and logs that the index was found. However, it then of course fails when you try to query it later.
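In the meantime, a manual existence check is possible with the underlying redis client (a sketch; assumes the RediSearch module is loaded, as in the setup above):

```python
from redis import Redis as RedisClient
from redis.exceptions import ResponseError

# ft(index_name).info() raises ResponseError when the index does not exist.
client = RedisClient.from_url(REDIS_URL)
try:
    client.ft(index_name).info()
except ResponseError:
    raise ValueError(f"Redis index '{index_name}' does not exist")
```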
### Expected behavior
It should raise a ValueError accortding to doc. | Redis from_existing_index does not raise ValueError when index does not exist | https://api.github.com/repos/langchain-ai/langchain/issues/11409/comments | 5 | 2023-10-04T21:30:28Z | 2023-10-09T15:05:22Z | https://github.com/langchain-ai/langchain/issues/11409 | 1,927,010,913 | 11,409 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi, I am using LLMChainFilter.from_llm(llm) but while running, I am getting this error:
ValueError: BooleanOutputParser expected output value to either be YES or NO. Received Yes, the context is relevant to the question as it provides information about the problem in the.
How do I resolve this error?
Langchain version: 0.0.308
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor, LLMChainFilter
llm = SageMakerEndpointModel
_filter = LLMChainFilter.from_llm(llm)
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(base_compressor=_filter, base_retriever=faiss_retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown?")
```
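A possible workaround sketch, given the error above: subclass the parser so it accepts prefixed answers such as "Yes, the context is relevant ...". How to wire it into `LLMChainFilter`'s prompt may differ between versions, so treat this as a starting point only:

```python
from langchain.output_parsers.boolean import BooleanOutputParser


class LenientBooleanOutputParser(BooleanOutputParser):
    """Accept replies that merely start with YES/NO instead of matching exactly."""

    def parse(self, text: str) -> bool:
        cleaned = text.strip().upper()
        if cleaned.startswith(self.true_val):
            return True
        if cleaned.startswith(self.false_val):
            return False
        raise ValueError(
            f"Expected {self.true_val} or {self.false_val}, got: {text!r}"
        )
```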
### Expected behavior
Get filtered docs | BooleanOutputParser expected output value error | https://api.github.com/repos/langchain-ai/langchain/issues/11408/comments | 6 | 2023-10-04T21:18:38Z | 2024-04-09T20:43:32Z | https://github.com/langchain-ai/langchain/issues/11408 | 1,926,997,959 | 11,408 |
[
"hwchase17",
"langchain"
]
| ### System Info
I’m running this on a local machine Windows 10, Spyder 5.2.1 IDE with Anaconda package management, using python 3.10.
### Who can help?
@leo-gan @holtskinner
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi,
I've just started learning to code with python, working with LLMs and I'm following the [tutorial for setting](https://python.langchain.com/docs/integrations/document_transformers/docai) up LangChain with Google Document AI and I’m getting this error “InvalidArgument: 400 Request contains an invalid argument.” with this line of code
`docs = list(parser.lazy_parse(blob))`
Here are the things I’ve tried so far:
• Setting up gcloud ADC so I can run this as authroized session, code wouldn’t work otherwise
• Set the permission in the GSC bucket to Storage Admin so I can read/write
• Chatgpt wrote test to see if current credentials are working, it is
• Chatgpt wrote test to see if DocAIParser object is working, it is
I think there’s some issue with the output path for “lazy_parse” but I can’t get it to work. I've looked into the documentation but I can't tell if I'm missing something or not. How do I get this working?
See full code and full error message below:
```python
import pprint
from google.auth.transport.requests import AuthorizedSession
from google.auth import default
from google.cloud import documentai
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers import DocAIParser
PROJECT = "[replace with project name]"
GCS_OUTPUT_PATH = "gs://[replace with bucket path]"
PROCESSOR_NAME = "https://us-documentai.googleapis.com/v1/projects/[replace with processor name]"
# Get the credentials object using ADC.
credentials, _ = default()
session = AuthorizedSession(credentials=credentials)
# Create a Document AI client object.
client = documentai.DocumentProcessorServiceClient(credentials=credentials)
"""Tests if the current credentials are working in gcloud."""
import google.auth
def test_credentials():
try:
# Try to authenticate to the Google Cloud API.
google.auth.default()
print("Credentials are valid.")
except Exception as e:
print("Credentials are invalid:", e)
if __name__ == "__main__":
test_credentials()
import logging
from google.cloud import documentai
# Set up logging
logging.basicConfig(level=logging.DEBUG)
# Create DocumentAI client
client = documentai.DocumentProcessorServiceClient()
# Print out actual method call
logging.debug("Calling client.batch_process_documents(%s, %s)",
PROCESSOR_NAME, GCS_OUTPUT_PATH)
"""Test of DocAIParser object is working"""
# Try to create a DocAIParser object.
try:
parser = DocAIParser(
processor_name=PROCESSOR_NAME,
gcs_output_path=GCS_OUTPUT_PATH,
client=client,
)
# If the DocAIParser object was created successfully, then the Google is accepting the parameters.
print("Google is accepting the parameters.")
except Exception as e:
# If the DocAIParser object fails to be created, then the Google is not accepting the parameters.
print("Google is not accepting the parameters:", e)
parser = DocAIParser(
processor_name=PROCESSOR_NAME,
gcs_output_path=GCS_OUTPUT_PATH,
client=client,
)
blob = Blob(path="gs://foia_doc_bucket/input/goog-exhibit-99-1-q1-2023-19.pdf")
docs = list(parser.lazy_parse(blob))
print(len(docs))
```
**********************************************
Full error message:
```
DEBUG:google.auth._default:Checking None for explicit credentials as part of auth process...
DEBUG:google.auth._default:Checking Cloud SDK credentials as part of auth process...
DEBUG:urllib3.util.retry:Converted retries value: 3 -> Retry(total=3, connect=None, read=None, redirect=None, status=None)
DEBUG:google.auth.transport.requests:Making request: POST https://oauth2.googleapis.com/token
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): oauth2.googleapis.com:443
DEBUG:urllib3.connectionpool:https://oauth2.googleapis.com:443 "POST /token HTTP/1.1" 200 None
Traceback (most recent call last):
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\google\api_core\grpc_helpers.py", line 75, in error_remapped_callable
return callable_(*args, **kwargs)
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\grpc\_channel.py", line 1161, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\grpc\_channel.py", line 1004, in _end_unary_response_blocking
raise _InactiveRpcError(state) # pytype: disable=not-instantiable
_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Request contains an invalid argument."
debug_error_string = "UNKNOWN:Error received from peer ipv4:142.250.31.95:443 {created_time:"2023-10-04T19:56:49.9162929+00:00", grpc_status:3, grpc_message:"Request contains an invalid argument."}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\Inspiron 15 amd 5505\Dropbox\[...]\local_doc_upload.py", line 80, in <module>
docs = list(parser.lazy_parse(blob))
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\langchain\document_loaders\parsers\docai.py", line 91, in lazy_parse
yield from self.batch_parse([blob], gcs_output_path=self._gcs_output_path)
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\langchain\document_loaders\parsers\docai.py", line 122, in batch_parse
operations = self.docai_parse(blobs, gcs_output_path=output_path)
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\langchain\document_loaders\parsers\docai.py", line 268, in docai_parse
operations.append(self._client.batch_process_documents(request))
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\google\cloud\documentai_v1\services\document_processor_service\client.py", line 786, in batch_process_documents
response = rpc(
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\google\api_core\gapic_v1\method.py", line 131, in __call__
return wrapped_func(*args, **kwargs)
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\google\api_core\retry.py", line 366, in retry_wrapped_func
return retry_target(
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\google\api_core\retry.py", line 204, in retry_target
return target()
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\google\api_core\timeout.py", line 120, in func_with_timeout
return func(*args, **kwargs)
File "C:\Users\Anaconda\envs\python3_10\lib\site-packages\google\api_core\grpc_helpers.py", line 77, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
InvalidArgument: 400 Request contains an invalid argument.
```
### Expected behavior
Suppose to output "11" based on the number of pages in [this pdf](https://abc.xyz/assets/a7/5b/9e5ae0364b12b4c883f3cf748226/goog-exhibit-99-1-q1-2023-19.pdf) per the [Doc AI tutorial](https://python.langchain.com/docs/integrations/document_transformers/docai) | Error "InvalidArgument: 400 Request" by following tutorial for Document AI | https://api.github.com/repos/langchain-ai/langchain/issues/11407/comments | 18 | 2023-10-04T20:32:22Z | 2024-02-15T16:08:50Z | https://github.com/langchain-ai/langchain/issues/11407 | 1,926,936,215 | 11,407 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hey there, I'm trying to find a way to prevent an agent from going over the token limit. I've looked at the docs and asked around, but had no luck.
This is the error, for context:
[ERROR] InvalidRequestError: This model's maximum context length is 16384 tokens. However, your messages resulted in 17822 tokens. Please reduce the length of the messages.
Any help would be great 😊
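For reference, one lever that can help is capping how much chat history gets replayed each turn (a sketch; `llm` is a placeholder and the limit is an assumption chosen to leave headroom under the 16k context):

```python
from langchain.memory import ConversationTokenBufferMemory

memory = ConversationTokenBufferMemory(
    llm=llm,                 # same model the agent uses, for token counting
    max_token_limit=8000,    # assumption: leave headroom under the 16k window
    memory_key="chat_history",
    return_messages=True,
)
```

Trimming long tool outputs before returning them to the agent is another common mitigation.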
### Suggestion:
_No response_ | Issue: prevent a agent from exceeding the token limit | https://api.github.com/repos/langchain-ai/langchain/issues/11405/comments | 9 | 2023-10-04T20:11:52Z | 2024-02-14T16:09:53Z | https://github.com/langchain-ai/langchain/issues/11405 | 1,926,907,879 | 11,405 |
[
"hwchase17",
"langchain"
]
| ### System Info
python:3.10.13 bookworm (docker)
streamlit
Version: 1.27.1
langchain
Version: 0.0.306
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# Display assistant response in chat message container
with st.chat_message("🧞♂️"):
    message_placeholder = st.empty()
    cbh = StreamlitCallbackHandler(st.container())
    AI_response = llm_chain.run(prompt, callbacks=[cbh])
```
### Expected behavior
The "Thinking..." spinner should stop or hide after the LLM finishes its response.
I can't find any parameter here
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler.html
that would affect this.
| StreamlitCallbackHandler thinking.. / spinner Not stopping | https://api.github.com/repos/langchain-ai/langchain/issues/11398/comments | 14 | 2023-10-04T19:30:40Z | 2024-06-25T16:13:40Z | https://github.com/langchain-ai/langchain/issues/11398 | 1,926,846,508 | 11,398 |
[
"hwchase17",
"langchain"
]
| ### System Info
"There was a BUG when using return_source_documents = True with any chain, it was always raising an error!!
!! This is a temporary fix that requires the 'answer' key to be present, but FIX IT; it is now impossible to use return_source_documents otherwise !!"
```python
    def _get_input_output(
self, inputs: Dict[str, Any], outputs: Dict[str, Any]
) -> Tuple[str, str]:
if self.input_key is None:
prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
else:
prompt_input_key = self.input_key
if self.output_key is None:
if 'answer' in outputs:
output_key = 'answer'
else:
raise ValueError("Output key 'answer' not found.")
else:
output_key = self.output_key
        return inputs[prompt_input_key], outputs[output_key]
```
### Who can help?
@eyur
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
def create_chain():
retriever = cretae_retriver....
)
memory = memory...
chain = ConversationalRetrievalChain.from_llm(
llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
retriever=retriever,
verbose=True,
memory=memory,
        return_source_documents=True,  # !!!!!! Here, it does not work
)
return chain
def run_chat():
chain = create_chain()
while True:
query = input("Prompt: ")
if query in ["quit", "q", "exit"]:
sys.exit()
result = chain({"question": query})
print(f"ANSWER: {result['answer']}")
print(f"DOCS_USED: {result['source_documents']}")
        query = None
```
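For reference, a sketch of the commonly suggested way to avoid the error below without patching LangChain: give the memory an explicit output key so it stores only the answer and ignores `source_documents` (assumes the `ConversationalRetrievalChain` setup above):

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    output_key="answer",  # store only the answer, ignore source_documents
)
```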
### Expected behavior
It raises this error:
raise ValueError(f"One output key expected, got {outputs.keys()}")
ValueError: One output key expected, got dict_keys(['answer', 'source_documents']) | Retrun_source_documets does not work !!!! | https://api.github.com/repos/langchain-ai/langchain/issues/11396/comments | 2 | 2023-10-04T19:13:38Z | 2024-02-06T16:27:06Z | https://github.com/langchain-ai/langchain/issues/11396 | 1,926,823,792 | 11,396 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain: 0.0.308
Python: 3.8
```python
    def get_ml_response(self, question):
try:
chain = create_conversation_retrieval_chain()
llm_response = chain({'question': question, "chat_history": []})
logger.info(f"LLm Response: {llm_response}")
except Exception:
err_msg = "Execption! {}".format(traceback.format_exc())
logger.info(err_msg)
def create_conversation_retrieval_chain(self):
try:
session = boto3.Session()
sts_client = session.client("sts")
assumed_role = sts_client.assume_role(RoleArn=role_arn, RoleSessionName="AssumeRoleSession")
# Get the temporary credentials
credentials = assumed_role["Credentials"]
access_key = credentials["AccessKeyId"]
secret_key = credentials["SecretAccessKey"]
session_token = credentials["SessionToken"]
# Configure the default AWS Session with assumed role credentials
# config = Config(region_name="us-west-2")
session = boto3.Session(
aws_access_key_id=access_key,
aws_secret_access_key=secret_key,
aws_session_token=session_token
)
custom_template = """Given the following conversation and a follow up question, rephrase the follow up question
to be a standalone question. At the end of standalone question add this 'Answer the question
in English language.If you do not know the answer reply with 'I am sorry'.
Chat History: {chat_history}
Follow Up Input: {question}
Standalone question:"""
custom_question_prompt = PromptTemplate.from_template(custom_template)
auth = AWSV4SignerAuth(credentials, 'us-west-2', 'aoss')
embeddings = SagemakerEndpointEmbeddingsJumpStart(
endpoint_name="jumpstart-dft-hf-textembedding-all-minilm-l6-v2",
region_name="us-west-2",
content_handler=content_handler_embeddings
)
opensearch_vector_search = OpenSearchVectorSearch(
opensearch_url="<opensearch_url>",
embedding_function=embeddings,
index_name="<index_name>",
http_auth=auth,
connection_class=RequestsHttpConnection,
)
memory = ConversationBufferMemory(
memory_key="chat_history",
return_messages=True,
input_key="question",
output_key="answer",
)
chain = ConversationalRetrievalChain.from_llm(
llm=SagemakerEndpoint(
endpoint_name="jumpstart-dft-hf-llm-falcon-40b-instruct-bf16",
region_name="us-west-2",
content_handler=content_handler_llm,
),
# for experimentation, you can change number of documents to retrieve here
retriever=opensearch_vector_search.as_retriever(
search_kwargs={
"k": 3,
}
),
memory=memory,
condense_question_prompt=custom_question_prompt,
return_source_documents=True,
)
return chain
except Exception:
err_msg = "Execption! {}".format(traceback.format_exc())
            logger.info(err_msg)
```
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
chain = self.create_conversation_retrieval_chain()
llm_response = chain({'question': question, "chat_history": []})
```
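One workaround sketch, assuming the boto3 sessions created inside `SagemakerEndpoint` / `SagemakerEndpointEmbeddingsJumpStart` fall back to the default credential chain when no profile is given: export the assumed-role credentials as environment variables before building the chain, so that every internally created session picks them up.

```python
import os

# Sketch: make the assumed-role credentials the process-wide default.
os.environ["AWS_ACCESS_KEY_ID"] = access_key
os.environ["AWS_SECRET_ACCESS_KEY"] = secret_key
os.environ["AWS_SESSION_TOKEN"] = session_token
```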
### Expected behavior
I am expecting to connect to connect to Sagemaker endpoints using Assume role permissions but getting Access Denied Exception. I tried invoking the endpoints through the lambda function which is working fine. But not able to figure out how to pass the credentials using Langchain. Could you please help me with this. | Access Denied Exception while accessing Cross Account Sagemaker endpoints. | https://api.github.com/repos/langchain-ai/langchain/issues/11392/comments | 2 | 2023-10-04T17:32:33Z | 2024-02-06T16:27:11Z | https://github.com/langchain-ai/langchain/issues/11392 | 1,926,681,297 | 11,392 |
[
"hwchase17",
"langchain"
]
| ### System Info
I'm using Google Colab, authenticating with my own account.
Packages installation:
!pip install google-cloud-discoveryengine google-cloud-aiplatform langchain==0.0.236 -q
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Sample code:
```python
import vertexai
from langchain.llms import VertexAI
from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever
vertexai.init(project=PROJECT_ID, location=REGION)
llm = VertexAI(model_name=MODEL, temperature=0, top_p=0.2, max_output_tokens=1024)
retriever = GoogleCloudEnterpriseSearchRetriever(
project_id=PROJECT_ID, search_engine_id=DATA_STORE_ID
)
from langchain.chains import RetrievalQAWithSourcesChain
search_query = "How to create a new google calendar?"
retrieval_qa_with_sources = RetrievalQAWithSourcesChain.from_chain_type(
llm=llm, chain_type="stuff", retriever=retriever
)
results = retrieval_qa_with_sources({"question": search_query})
print(f"Answer: {results['answer']}")
print(f"Sources: {results['sources']}")
```
Return:
Answer: To create a new calendar, follow these steps:
1. On the computer, open Google Calendar.
2. On the left, next to "Other calendars," click Add other calendars->Create new calendar.
3. Add a name and description to the calendar.
4. Click on Create a Schedule.
5. To share the calendar, click on it in the left bar and select Share with specific people.
Sources:
### Expected behavior
Expected to have the sources as URI from Google Cloud Storage. It sometimes return, sometimes not. It is really random. | Sources are not returned in RetrievalQAWithSourcesChain | https://api.github.com/repos/langchain-ai/langchain/issues/11387/comments | 2 | 2023-10-04T16:54:58Z | 2024-02-06T16:27:16Z | https://github.com/langchain-ai/langchain/issues/11387 | 1,926,626,279 | 11,387 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using LangChain version 0.0.307 and I noticed an issue with similarity_score_threshold. I just upgraded from 0.0.208 or 0.0.288 (I forget which).
This is the code I am using to retrieve the documents from pinecone:
```
docsearch = Pinecone.from_existing_index(index, embeddings, text_key="text")
retriever = docsearch.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={
'score_threshold': 0.6
}
)
retriever_docs_result = retriever.get_relevant_documents(query)
```
It used to work fine, but after upgrading the LangChain version it stopped returning any docs. Upon investigating the code, I noticed that inside the `_similarity_search_with_relevance_scores` method in `vectorstore.py`, `docs_and_scores` returns the scores as expected, but then this code computes the relevance score and inverts the values:
`return [(doc, relevance_score_fn(score)) for doc, score in docs_and_scores]`.
I am getting search scores of 0.7 to 0.85, but the relevance conversion inverts them to (1 - score), which is around 0.3 to 0.15. The problem is that score_threshold works the opposite way: if I set score_threshold to 0.6 it requires scores greater than 0.6, which is impossible because the relevance conversion inverts them (a smaller relevance score means the result is closer to what we queried).
I am using dot product and I have used the following code as well but it also didn't help:
```
docsearch = Pinecone(index, embeddings, text_key="text", distance_strategy=DistanceStrategy.MAX_INNER_PRODUCT)
retriever = docsearch.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={
'score_threshold': 0.6
}
)
```
Can someone help, or am I missing something here?
### Suggestion:
_No response_ | Issue: similarity_score_threshold not working with pinecone as expected | https://api.github.com/repos/langchain-ai/langchain/issues/11386/comments | 2 | 2023-10-04T16:48:26Z | 2024-02-07T16:23:28Z | https://github.com/langchain-ai/langchain/issues/11386 | 1,926,617,079 | 11,386 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I got this error when I built a chatbot with LangChain using VertexAI. I'm seeing this error and couldn't find any details about it so far.
```
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 292, in __call__
raise e
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 286, in __call__
self._call(inputs, run_manager=run_manager)
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 141, in _call
answer = self.combine_docs_chain.run(
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 492, in run
return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 292, in __call__
raise e
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 286, in __call__
self._call(inputs, run_manager=run_manager)
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py", line 105, in _call
output, extra_return_dict = self.combine_docs(
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 171, in combine_docs
return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/llm.py", line 257, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 292, in __call__
raise e
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 286, in __call__
self._call(inputs, run_manager=run_manager)
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/llm.py", line 93, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/llm.py", line 103, in generate
return self.llm.generate_prompt(
File "/opt/conda/lib/python3.10/site-packages/langchain/llms/base.py", line 504, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/langchain/llms/base.py", line 668, in generate
new_results = self._generate_helper(
File "/opt/conda/lib/python3.10/site-packages/langchain/llms/base.py", line 541, in _generate_helper
raise e
File "/opt/conda/lib/python3.10/site-packages/langchain/llms/base.py", line 528, in _generate_helper
self._generate(
File "/opt/conda/lib/python3.10/site-packages/langchain/llms/vertexai.py", line 281, in _generate
res = completion_with_retry(
File "/opt/conda/lib/python3.10/site-packages/langchain/llms/vertexai.py", line 102, in completion_with_retry
return _completion_with_retry(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "/opt/conda/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "/opt/conda/lib/python3.10/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
File "/opt/conda/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/opt/conda/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/opt/conda/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/langchain/llms/vertexai.py", line 100, in _completion_with_retry
return llm.client.predict(*args, **kwargs)
TypeError: TextGenerationModel.predict() got an unexpected keyword argument 'stop_sequences'
```
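For context, `stop_sequences` support in `TextGenerationModel.predict()` depends on the installed Vertex AI SDK release, so checking and upgrading the package seems like a reasonable first thing to rule out (the exact minimum version is not verified here):
```
pip show google-cloud-aiplatform   # check which SDK version is installed
pip install -U google-cloud-aiplatform langchain
```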
### Suggestion:
_No response_ | Issue: TypeError: TextGenerationModel.predict() got an unexpected keyword argument 'stop_sequences' | https://api.github.com/repos/langchain-ai/langchain/issues/11384/comments | 3 | 2023-10-04T16:10:08Z | 2024-02-11T16:12:46Z | https://github.com/langchain-ai/langchain/issues/11384 | 1,926,558,948 | 11,384 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi, I have implemented the indexing workflow as in https://python.langchain.com/docs/modules/data_connection/indexing making use of Pinecone and SQLite. This runs perfectly within a PyCharm dev environment, but after building the app with PyInstaller I get the following error: sqlite3.OperationalError: unable to open database file. I think this one is coming from langchain.indexes.index. The error message is very cryptic. Why can't this file be opened? Is it locked? Is it corrupted? No, as I can open it with a desktop SQL manager app. I tried different locations for storing the SQLite database, but it makes no difference.
Does anyone have some ideas on how to solve this?
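For reference, a minimal sketch of pinning the record manager's SQLite file to an absolute, user-writable path (the path and namespace below are illustrative; whether this addresses the PyInstaller case is unverified, but frozen apps resolve relative paths differently than an IDE run):
```python
# Sketch: keep the record-manager SQLite file at an absolute, user-writable location.
# The namespace and path below are illustrative only.
from pathlib import Path
from langchain.indexes import SQLRecordManager

db_path = Path.home() / ".my_app" / "record_manager_cache.sql"
db_path.parent.mkdir(parents=True, exist_ok=True)

record_manager = SQLRecordManager(
    "pinecone/my_docs",
    db_url=f"sqlite:///{db_path.as_posix()}",
)
record_manager.create_schema()
```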
### Suggestion:
_No response_ | Issue: sqlite3.OperationalError: unable to open database file | https://api.github.com/repos/langchain-ai/langchain/issues/11379/comments | 4 | 2023-10-04T12:16:34Z | 2023-10-05T11:22:05Z | https://github.com/langchain-ai/langchain/issues/11379 | 1,926,086,161 | 11,379 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain==0.0.306
Python 3.10.12
(Ubuntu Linux 20.04.6)
### Who can help?
The `ConversationBufferMemory` returns an empty string instead of an empty list when there's nothing stored, which breaks the expectations of the `MessagesPlaceholder` used within the Conversational REACT agent.
Related: https://github.com/langchain-ai/langchain/issues/7365 (where it was commented that changing the loading logic in `ConversationBufferWindowMemory` could break other things).
A possible solution that seems safer (in light of the subsequent type-checking code) could be to default empty strings to empty lists in the `format_messages` method of the `MessagesPlaceholder`, i.e.:
```
class MessagesPlaceholder(BaseMessagePromptTemplate):
...
def format_messages(self, **kwargs: Any) -> List[BaseMessage]:
value = kwargs[self.variable_name]
if not value: ## <-- ADDED CODE
value = [] ## <-- ADDED CODE
if not isinstance(value, list):
raise ValueError(...)
for v in value:
if not isinstance(v, BaseMessage):
raise ValueError(...)
return value
...
```
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
To reproduce the behaviour, make sure you have an OPENAI_API_KEY env var and run the following:
```
# pip install langchain openai
import os
from langchain.memory import ChatMessageHistory, ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent
from langchain.tools import BaseTool
conversational_memory = ConversationBufferMemory(
memory_key="chat_history",
chat_memory=ChatMessageHistory()
)
llm = ChatOpenAI(
openai_api_key=os.environ['OPENAI_API_KEY'],
temperature=0,
model_name="gpt-4"
)
class MyTool(BaseTool):
    name = "yell_loud"
    description = "The tool to yell loud!!!"
    def _run(self, query, run_manager=None):
        """Use the tool."""
        return 'WAAAAH!'
    async def _arun(self, query, run_manager=None):
        """Use the tool asynchronously."""
        raise NotImplementedError("no async")
agent = initialize_agent(
agent="chat-conversational-react-description",
tools=[MyTool()],
llm=llm,
max_iterations=5,
verbose=True,
memory=conversational_memory,
early_stopping_method='generate'
)
print(agent.run("Please yell very loud, thank you, and then report the result to me."))
# Will raise:
# ValueError: variable chat_history should be a list of base messages, got
# at location:
# "langchain/prompts/chat.py", line 98, in format_messages
```
### Expected behavior
I would expect the script to not complain when the memory is empty and result in something like the following agent interaction (as I got from the modified `format_messages` tentatively suggested above):
```
$> python agent_script.py
> Entering new AgentExecutor chain...
'''json
{
"action": "yell_loud",
"action_input": "Please yell very loud, thank you"
}
'''
Observation: WAAAAH!
Thought:'''json
{
"action": "Final Answer",
"action_input": "The response to your last comment was a loud yell, as you requested."
}
'''
> Finished chain.
The response to your last comment was a loud yell, as you requested.
``` | ConversationBufferMemory returns empty string (and not a list) when empty, breaking agents with memory ("should be a list of base messages, got") | https://api.github.com/repos/langchain-ai/langchain/issues/11376/comments | 3 | 2023-10-04T09:18:45Z | 2024-03-20T18:51:23Z | https://github.com/langchain-ai/langchain/issues/11376 | 1,925,768,723 | 11,376 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have set RetrievalQA's chain_type to map_reduce in my application and would like to customize its system message.
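Concretely, something along these lines is what I am hoping for (a sketch only; the chain_type_kwargs prompt names are my guess based on the map_reduce question-answering chain, and the llm/retriever variables are assumed to already exist):
```python
# Sketch only: prompt kwarg names are assumed, not verified.
from langchain.prompts import PromptTemplate
from langchain.chains import RetrievalQA

question_prompt = PromptTemplate.from_template(
    "You are a helpful support assistant.\n"
    "Use this part of the document to answer.\n{context}\nQuestion: {question}"
)
combine_prompt = PromptTemplate.from_template(
    "You are a helpful support assistant.\n"
    "Combine these partial answers into one.\n{summaries}\nQuestion: {question}"
)

qa = RetrievalQA.from_chain_type(
    llm=llm,                      # existing chat model
    chain_type="map_reduce",
    retriever=retriever,          # existing vector store retriever
    chain_type_kwargs={
        "question_prompt": question_prompt,
        "combine_prompt": combine_prompt,
    },
)
```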
### Suggestion:
There is a way to modify the map_reduce_prompt code inside the question-answering chain, but I would like to know how to pass the customized prompts as arguments to RetrievalQA.from_chain_type instead. | How to change the system message of qa chain? | https://api.github.com/repos/langchain-ai/langchain/issues/11375/comments | 2 | 2023-10-04T08:53:33Z | 2024-02-06T16:27:36Z | https://github.com/langchain-ai/langchain/issues/11375 | 1,925,720,401 | 11,375 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have a Python REST API which calls OpenAI using the following code for embedding.
`openai.Embedding.create(input=text, engine='text-embedding-ada-002')['data'][0]['embedding']`
My text length is below 8191 tokens (the max token size for text-embedding-ada-002).
When I call it, sometimes it doesn't return any result or raise an exception (execution blocks after this call), but it gives a result on the 2nd or 3rd attempt for the same input text. It happens from time to time and is not consistent.
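For reference, a variant of the call with an explicit client-side timeout, so it fails fast instead of blocking (sketch; `request_timeout` is a parameter of the pre-1.0 openai client, and whether it resolves the hang is unverified):
```python
# Sketch: add a client-side timeout so the call raises instead of blocking indefinitely.
import openai

response = openai.Embedding.create(
    input=text,
    engine="text-embedding-ada-002",
    request_timeout=30,  # seconds; availability depends on the openai 0.x client
)
embedding = response["data"][0]["embedding"]
```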
Kindly help me to resolve this issue.
Thanks
Nuwan
### Suggestion:
_No response_ | Issue: Open AI embedding dose not returning results for some times | https://api.github.com/repos/langchain-ai/langchain/issues/11373/comments | 3 | 2023-10-04T05:46:24Z | 2024-02-07T16:23:38Z | https://github.com/langchain-ai/langchain/issues/11373 | 1,925,448,887 | 11,373 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am running code in Google Colab:
LangChain version: 0.0.306
Python 3.10.12
Transformers library 4.34.0
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hello! I have encountered the following error when trying to develop HuggingFace question answering models:
```python
tokenizer = AutoTokenizer.from_pretrained("/content/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("/content/flan-t5-large")
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
llm = HuggingFacePipeline(
pipeline = pipeline,
model_kwargs={"temperature": 0, "max_length": 512},
)
chain = load_qa_chain(llm, chain_type="stuff")
query = "My question?"
docs = db.similarity_search(query)
chain.run(input_documents=docs, question=query)
```
But the output is:
```
TypeError Traceback (most recent call last)
[<ipython-input-34-226234bd6dea>](https://localhost:8080/#) in <cell line: 1>()
----> 1 chain.run(input_documents=docs, question=query)
15 frames
[/usr/local/lib/python3.10/dist-packages/transformers/pipelines/__init__.py](https://localhost:8080/#) in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, framework, revision, use_fast, token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
773
774 # Retrieve the task
--> 775 if task in custom_tasks:
776 normalized_task = task
777 targeted_task, task_options = clean_custom_task(custom_tasks[task])
TypeError: unhashable type: 'list'
```
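The traceback goes through the `transformers.pipeline()` factory function itself, which suggests the factory (rather than the constructed `pipe` object) is what reached `HuggingFacePipeline`. A sketch of the variant worth comparing against (not verified as the fix):
```python
# Sketch for comparison: pass the constructed pipe object, not the pipeline factory.
llm = HuggingFacePipeline(
    pipeline=pipe,
    model_kwargs={"temperature": 0, "max_length": 512},
)
```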
### Expected behavior
I just want to find a solution; I didn't find anything about this on Google. | RAG - TypeError: unhashable type: 'list' | https://api.github.com/repos/langchain-ai/langchain/issues/11371/comments | 5 | 2023-10-04T03:33:06Z | 2023-10-04T15:35:46Z | https://github.com/langchain-ai/langchain/issues/11371 | 1,925,341,660 | 11,371 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm creating a LangChain chatbot (conversational-react-description) with Zep long-term memory, and the bot has access to real-time data. How do I prevent it from using the memory when the same query is asked again a few minutes later?
### Suggestion:
_No response_ | Issue: Limit Memory Access on LangChain Bot | https://api.github.com/repos/langchain-ai/langchain/issues/11370/comments | 2 | 2023-10-04T03:27:22Z | 2024-02-06T16:27:46Z | https://github.com/langchain-ai/langchain/issues/11370 | 1,925,333,897 | 11,370 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi, I want to clarify some questions about tool and llm usage.
If an agent is initialized with multiple llm and tools,
1. how does the agent choose which llm or tool to use to answer the prompt?
2. How is llm different from tools? Can you give some examples of each category?
3. Can the agent use multiple tools/llm? If the agent can use multiple tools, what happen if the output generated by such tools/llm conflict with each other? In that case, which result will be returned to user?
For example, in the code below (cited from the LangChain docs), the agent is initialized with both 'llm_chain=llm_chain' (where llm_chain uses OpenAI) and 'tools=tools' (where the tool is the Google search engine). Which one is used to generate the result?
```python
search = GoogleSearchAPIWrapper()
tools = [
Tool(
name="Search",
func=search.run,
description="useful for when you need to answer questions about current events",
)
]
prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""
suffix = """Begin!"
{chat_history}
Question: {input}
{agent_scratchpad}"""
prompt = ZeroShotAgent.create_prompt(
tools,
prefix=prefix,
suffix=suffix,
input_variables=["input", "chat_history", "agent_scratchpad"],
)
memory = ConversationBufferMemory(memory_key="chat_history")
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent_chain = AgentExecutor.from_agent_and_tools(
agent=agent, tools=tools, verbose=True, memory=memory
)
agent_chain.run(input="How many people live in canada?")
agent_chain.run(input="what is their national anthem called?")
```
| Issue: differences between tool and llm | https://api.github.com/repos/langchain-ai/langchain/issues/11365/comments | 2 | 2023-10-03T21:42:07Z | 2024-02-06T16:27:51Z | https://github.com/langchain-ai/langchain/issues/11365 | 1,924,996,244 | 11,365 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
When I use directory loader, does langchain index the speaker notes in powerpoint?
### Suggestion:
_No response_ | Does langchain index the speaker notes in powerpoint? | https://api.github.com/repos/langchain-ai/langchain/issues/11363/comments | 6 | 2023-10-03T21:00:18Z | 2024-02-10T16:15:22Z | https://github.com/langchain-ai/langchain/issues/11363 | 1,924,939,090 | 11,363 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Trying to submit a code update. This would be a first. Help a non software eng out. :)
Here's a manual diff of the intended change:
```
langchain\libs\langchain\langchain\document_loaders\text.py
<<[41] with open(self.file_path, encoding=self.encoding) as f:
>>[41] with open(self.file_path, encoding=self.encoding, errors='replace') as f:
```
The COMMIT_EDITMSG file:
"""
Solves unicode [codec can't decode byte] traceback error:
Traceback (most recent call last):
File "C:\Program Files\Python3\Lib\site-packages\langchain\document_loaders\text.py", line 41, in load
text = f.read()
^^^^^^^^
File "<frozen codecs>", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa7 in position 549: invalid start byte
"""
The general flow used:
```
PS C:\dev\github\langchain\libs\langchain\langchain\document_loaders> notepad .\text.py
PS C:\dev\github\langchain\libs\langchain\langchain\document_loaders> git add text.py
PS C:\dev\github\langchain\libs\langchain\langchain\document_loaders> git checkout -b unicode_replace_fix
Switched to a new branch 'unicode_replace_fix'
PS C:\dev\github\langchain\libs\langchain\langchain\document_loaders> git commit -m "Solves unicode [codec can't decode byte] traceback error."
... // had to update user.name/email
PS C:\dev\github\langchain\libs\langchain\langchain\document_loaders> git commit --amend --reset-author
hint: Waiting for your editor to close the file... unix2dos: converting file C:/Dev/Github/langchain/.git/COMMIT_EDITMSG to DOS format...
dos2unix: converting file C:/Dev/Github/langchain/.git/COMMIT_EDITMSG to Unix format...
[unicode_replace_fix 09c9cb77e] Solves unicode [codec can't decode byte] traceback error:
1 file changed, 1 insertion(+), 1 deletion(-)
// below, I got a github auth popup, went through, succeeded - then denied?
PS C:\dev\github\langchain\libs\langchain\langchain\document_loaders> git push origin unicode_replace_fix
info: please complete authentication in your browser...
remote: Permission to langchain-ai/langchain.git denied to hackedpassword.
fatal: unable to access 'https://github.com/langchain-ai/langchain/': The requested URL returned error: 403
// denied but no diff?
PS C:\dev\github\langchain> git diff HEAD
PS C:\dev\github\langchain> git status
On branch unicode_replace_fix
nothing to commit, working tree clean
PS C:\dev\github\langchain> git diff
PS C:\dev\github\langchain>
```
Maybe I'm up against a repo push permission issue?
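For the record, my current guess is that the 403 comes from pushing straight to the upstream repo; the usual fork-based flow would be roughly this (`<username>` is a placeholder for my GitHub user, untested by me yet):
```
git remote add fork https://github.com/<username>/langchain.git
git push fork unicode_replace_fix
# then open a pull request from <username>:unicode_replace_fix into langchain-ai/langchain
```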
### Suggestion:
_No response_ | Issue: Fixed unicode decode byte error | https://api.github.com/repos/langchain-ai/langchain/issues/11359/comments | 4 | 2023-10-03T19:47:13Z | 2024-05-10T16:07:15Z | https://github.com/langchain-ai/langchain/issues/11359 | 1,924,832,590 | 11,359 |
[
"hwchase17",
"langchain"
]
| ### Feature request
A chatbot using a prompt template (LangChain) gives a proper response to the first prompt "Hi there!" but gives the below error for the second prompt "Give me a few tips on how to start a new garden."
```
ValueError: Error: Prompt must alternate between '
Human:' and '
Assistant:'.
```
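For context, the alternating turn layout the error is asking for looks roughly like this (reconstructed from the error text, not taken from the attached notebook):
```
Human: Hi there!

Assistant: <first model reply>

Human: Give me a few tips on how to start a new garden.

Assistant:
```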
### Motivation
Yes, it's a problem raised by my customer, who is currently blocked.
### Your contribution
NA

[Error-Test.ipynb.txt](https://github.com/langchain-ai/langchain/files/12796468/Error-Test.ipynb.txt)
| BedrockChat Error | https://api.github.com/repos/langchain-ai/langchain/issues/11358/comments | 2 | 2023-10-03T19:43:11Z | 2024-02-06T16:28:01Z | https://github.com/langchain-ai/langchain/issues/11358 | 1,924,827,161 | 11,358 |
[
"hwchase17",
"langchain"
]
| ### System Info
Today, all of a sudden, I'm getting an error with status code 429 using the Conversation chain. Nothing has changed except the day I've run the script. Any ideas?
It works with the normal LLM chain, so it must have something to do with the Conversation Chain or the memory.
Using Flowise.
Chrome
@hwchase17
@agola11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Request failed with status code 429 and body {"error":{"message":"Rate limit reached for 10KTPM-200RPM in organization org-ByeURhCRRujcKf7u7QrlrudH on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.","type":"tokens","param":null,"code":"rate_limit_exceeded"}}
### Expected behavior
I expect the LLM chain to write the content. | Conversational Chain Error 429 | https://api.github.com/repos/langchain-ai/langchain/issues/11347/comments | 4 | 2023-10-03T16:28:32Z | 2024-02-10T16:15:27Z | https://github.com/langchain-ai/langchain/issues/11347 | 1,924,512,627 | 11,347 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.286
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
``` python
def add_product(self, input: str):
    print(f"🟢 Add product {input}")
```
``` python
tools = [
Tool(
name="AddProduct",
func=add_product,
description="""
Add product to order.
Input of this tool must be a single string JSON format:
For example: '{{"name":"Nike Pegasus 40"}}'
"""
)
]
```
``` python
agent = initialize_agent(
tools=tools,
llm=__gpt4,
agent=AgentType.OPENAI_FUNCTIONS,
verbose=True,
prompt=prompt
)
```
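For illustration of what seems to be happening (the class and instance names below are hypothetical, not from my app): the Tool ends up holding the function unbound, so `self` is never supplied at call time, whereas a bound method would already carry it.
```python
# Hypothetical illustration: a bound method carries `self`, so the tool only fills `input`.
from langchain.agents import Tool

class OrderService:
    def add_product(self, input: str) -> None:
        print(f"Add product {input}")

service = OrderService()
add_product_tool = Tool(
    name="AddProduct",
    func=service.add_product,   # bound method; OrderService.add_product alone would still need `self`
    description="Add product to order.",
)
```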
### Expected behavior
Error:
```
TypeError: add_product() missing 1 required positional argument: 'input'
```
Expected:
```
Function `add_product` called with correct input.
``` | Missing 1 required positional argument for function that has `self` argument. | https://api.github.com/repos/langchain-ai/langchain/issues/11341/comments | 6 | 2023-10-03T15:06:00Z | 2024-07-15T04:32:06Z | https://github.com/langchain-ai/langchain/issues/11341 | 1,924,354,324 | 11,341 |
[
"hwchase17",
"langchain"
]
| ### System Info
latest version of langchain. python=3.11.4
### Who can help?
@all
### Information
- [x] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
from langchain.agents import *
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain.agents import AgentExecutor
from langchain.chat_models import ChatOpenAI
# from secret_key import openapi_key
openapi_key = "######"
os.environ['OPENAI_API_KEY'] = openapi_key
def chat(question):
    llm = OpenAI(temperature=0)
    tools = load_tools(["llm-math"], llm=llm)
    agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
    driver = 'ODBC Driver 17 for SQL Server'
    host = '####'
    database = 'chatgpt'
    user = 'rnd'
    password = '###'
    #db_uri = SQLDatabase.from_uri(f"mssql+pyodbc://{user}:{password}@{host}/{database}?driver={driver}")
    db = SQLDatabase.from_uri(f"mssql+pyodbc://{user}:{password}@{host}/{database}?driver=ODBC+Driver+17+for+SQL+Server")
    llm = ChatOpenAI(model_name="gpt-3.5-turbo",
                     temperature=0,
                     max_tokens=1000)
    # toolkit = SQLDatabaseToolkit(db=db)
    toolkit = SQLDatabaseToolkit(db=db, llm=llm)
    agent_executor = create_sql_agent(
        llm=llm,
        toolkit=toolkit,
        verbose=True,
        reduce_k_below_max_tokens=True,
    )
    mrkl = initialize_agent(
        tools,
        ChatOpenAI(temperature=0),
        agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True,
        handle_parsing_errors=True,
    )
    return agent_executor.run(question)
```
### Expected behavior
I need to connect to a test server, but I'm getting an error while connecting to it; it works fine on the local server. I've been getting:
"OperationalError: (pyodbc.OperationalError) ('08001', '[08001] [Microsoft][ODBC Driver 17 for SQL Server]Named Pipes Provider: Could not open a connection to SQL Server [53]. (53) (SQLDriverConnect); [08001] [Microsoft][ODBC Driver 17 for SQL Server]Login timeout expired (0); [08001] [Microsoft][ODBC Driver 17 for SQL Server]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online. (53)')
" | OperationalError: (pyodbc.OperationalError) ('08001', '[08001] | https://api.github.com/repos/langchain-ai/langchain/issues/11337/comments | 25 | 2023-10-03T14:39:18Z | 2024-02-15T16:08:56Z | https://github.com/langchain-ai/langchain/issues/11337 | 1,924,300,917 | 11,337 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi! I am trying to create a question answering chatbot with PDFs. There is something different in these documents. I have different articles with their number, example: article 1.2.2.1.3, article 1.2.3.4.5, article 2.3.4.1.3, etc.
When I ask for a specific article, it can't find the answer and returns "the article x.x.x.x.x is not in the context". I have tried some embedding techniques and vector stores, but it does not work.
Any ideas?
PS: The PDF documents are around 450 pages.
### Suggestion:
_No response_ | Issue: Documents embeddings with many and similar numbers don't return good results | https://api.github.com/repos/langchain-ai/langchain/issues/11331/comments | 5 | 2023-10-03T12:04:42Z | 2024-02-12T16:12:34Z | https://github.com/langchain-ai/langchain/issues/11331 | 1,923,981,257 | 11,331 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Yeah, so I've been attempting to load Llama2 and use LangChain to create a condensed model for my documents. I managed to accomplish the task successfully, as Llama2 generated the results quite effectively. However, when I tried to speed up the process by using LangChain's [LLMChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm.LLMChain.html?highlight=llmchain#langchain.chains.llm.LLMChain) and one of its functions, arun, I kept encountering an error that said "NotImplementedError". It didn't matter which function I tried to use; whether it was acall, arun, or apredict, they all failed.
So here are my codes:
```
# load LLama from local drive
tokenizer=AutoTokenizer.from_pretrained("/content/drive/MyDrive/CoLab/LLama7b")
model=LlamaForCausalLM.from_pretrained("/content/drive/MyDrive/CoLab/LLama7b",quantization_config=quant_config,device_map="auto",)
# define output parameters
prefix_llm=transformers.pipeline(
model=model, tokenizer=tokenizer,
return_full_text=True,
task="text-generation",# temperature=0,
top_p=0.15, top_k=15,
max_new_tokens=1060, repetition_penalty=1.2,
do_sample=True
)
# wrap up with Langchain HuggingfaceLLm class
llm=HuggingFacePipeline(pipeline=prefix_llm, cache=False)
# Combine with LLMChain
prompt: PromptTemplate, input_variables="words"
llm_chain = LLMChain(prompt=prompt, llm=llm)
# calling arun
input: list[str]
llm_chain.run(input[num]) || llm_chain.run(words=input[num]) # run smoothly
llm_chain.arun(input) || llm_chain.arun(words=input) # Raise error NotImplementError
```
```
error comes from langchain/llms/base.py in _agenerate(self, prompts, stop, run_manager, **kwargs)
479 ) -> LLMResult:
480 """Run the LLM on the given prompts."""
--> 481 raise NotImplementedError()
482
483 def _stream(
NotImplementedError:
# in this last function, "run_manager" should be None, "stop" too.
```
As far as I explored, this process will call 10 frames. Start on
```
langchian/chains/base.py -> def acall() to
langchian/chains/base.py -> def agenerate()
then jump to
langchian/chains/llm.py -> def _acall() to
langchian/chains/llm.py -> def agenerate()
in the last it stopped at
langchian/llms/base.py -> def agenerate() to
langchian/llms/base.py -> def agenerate_prompt(),
langchian/llms/base.py -> def agenerate_helper()
langchian/llms/base.py -> def _agenerate()
```
I don't know what "thing" I should implement, or what LangChain didn't implement. The calling code refers 100% to the [Langchain example](https://python.langchain.com/docs/modules/chains/how_to/async_chain). I suspect maybe it's because LangChain doesn't fully support Llama, since it runs smoothly with the OpenAI API.
### Suggestion:
_No response_ | Issue: LLMChain arun running with Llama7B encounter NotImplementError | https://api.github.com/repos/langchain-ai/langchain/issues/11325/comments | 2 | 2023-10-03T09:18:23Z | 2024-02-07T16:23:58Z | https://github.com/langchain-ai/langchain/issues/11325 | 1,923,694,958 | 11,325 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Is there a codeinterpreter plugin for langchain?
Use case: I want to generate insights with charts, like the paid ChatGPT offering out there. Is this possible?
### Suggestion:
_No response_ | CodeInterpreter for Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/11319/comments | 1 | 2023-10-03T04:45:52Z | 2024-02-06T16:28:21Z | https://github.com/langchain-ai/langchain/issues/11319 | 1,923,299,236 | 11,319 |
[
"hwchase17",
"langchain"
]
| ### System Info
- Langchain Version: 0.0.306
- Python Version: 3.10.11
- Azure Cognitive Search (any SKU)
- Embedding that has batch functionality
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. Get a document (or set of documents) that will produce a few hundred chunks
2. Run the sample code provided by Langchain to index those documents into the Azure Search vector store (sample code below)
3. Run steps 1 and 2 on another vector store that supports batch embedding (milvus implementation supports batch embeddings)
4. Analyze the delta between the two vectorization speeds (Azure Search should be noticeably slower)
Sample code:
```python
import openai
import os
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.azuresearch import AzureSearch
vector_store_address: str = "YOUR_AZURE_SEARCH_ENDPOINT"
vector_store_password: str = "YOUR_AZURE_SEARCH_ADMIN_KEY"
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)
index_name: str = "langchain-vector-demo"
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embeddings.embed_query,
)
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
loader = TextLoader("../../../state_of_the_union.txt", encoding="utf-8")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
vector_store.add_documents(documents=docs)
```
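For contrast with the sample above, the batched embedding path that other vector stores use internally looks roughly like this (sketch; `embed_documents` is the existing batch method on `OpenAIEmbeddings`):
```python
# Sketch: one batched embedding call for all chunks instead of one call per document.
texts = [d.page_content for d in docs]
vectors = embeddings.embed_documents(texts)
# the current AzureSearch path is closer to: [embeddings.embed_query(t) for t in texts]
```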
### Expected behavior
The expected behavior is for the Azure Search implementation to also support batch embedding when the embedding class implements it. This should result in significant speed improvements when comparing the single-embedding approach with the batch-embedding approach. | Azure Cognitive Search vector DB store performs slow embedding as it does not utilize the batch embedding functionality | https://api.github.com/repos/langchain-ai/langchain/issues/11313/comments | 6 | 2023-10-02T21:42:29Z | 2024-02-27T11:12:59Z | https://github.com/langchain-ai/langchain/issues/11313 | 1,922,776,905 | 11,313 |
[
"hwchase17",
"langchain"
]
| ### System Info
v: 0.0306
python version: Python 3.11.4
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Recently upgraded to version 0.0.306 of langchain (from 0.0.259).
I seem to run into the below error:
```
2023-10-02T19:42:20.279102248Z File "/home/appuser/.local/lib/python3.11/site-packages/langchain/__init__.py", line 322, in __getattr__
2023-10-02T19:42:20.279258947Z raise AttributeError(f"Could not find: {name}")
2023-10-02T19:42:20.279296147Z AttributeError: Could not find: llms
```
I have no idea as to what is causing this and from where the call is being made. I know that the error is coming from the below file (line 328)
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/__init__.py
I am not able to find any attribute called "llms" being set anywhere in our code base, hence I am even more confused about the AttributeError.
### Expected behavior
- Perhaps a stack trace that shows the flow of calls
- Documentation that shows the possibilities of the error | Upgrade to v :0.0.306 - AttributeError: Could not find: llms | https://api.github.com/repos/langchain-ai/langchain/issues/11306/comments | 2 | 2023-10-02T19:50:11Z | 2024-05-01T16:05:15Z | https://github.com/langchain-ai/langchain/issues/11306 | 1,922,568,920 | 11,306 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi everyone,
I've encountered an issue while trying to instantiate the ConversationalRetrievalChain in the Langchain library. It seems to be related to the abstract class BaseRetriever and the required method _get_relevant_documents.
Here's my base code:
```
import os
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.embeddings.openai import OpenAIEmbeddings
from dotenv import load_dotenv
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
# Load environment variables
load_dotenv("info.env")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
# Initialize the FAISS vector database
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
index_filenames = ["index1", "index2"]
dbs = {f"category{i+1}": FAISS.load_local(filename, embeddings) for i, filename in enumerate(index_filenames)}
# Load chat model and question answering chain
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, openai_api_key=OPENAI_API_KEY)
chain = load_qa_chain(llm, chain_type="map_reduce")
# Initialize conversation memory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
# Set the role of the AI
role = "Assistant"
# Write the role of the AI
memory.save_context({"input": f"AI: As an {role}, I can help you with your questions based on the retrievers and my own data."}, {"output": ""})
# Initialize the ConversationalRetrievalChain with memory and custom retrievers
qa = ConversationalRetrievalChain.from_llm(llm, retriever=dbs, memory=memory)
# Pass in chat history using the memory
while True:
    user_input = input("User: ")
    # Add user input to the conversation history
    memory.save_context({"input": f"User: {user_input}"}, {"output": ""})
    # Check if the user wants to exit
    if user_input == "!exit":
        break
    # Run the chain on the user's query
    response = qa.run(user_input)
    # Print the response with a one-line space and role indication
    print("\n" + f"{role}:\n{response}" + "\n")
```
And here's the error message I received:
```
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ConversationalRetrievalChain
retriever
Can't instantiate abstract class BaseRetriever with abstract method _get_relevant_documents (type=type_error)
```
It appears that the retriever I provided doesn't fully implement the required methods. Could someone provide guidance on how to properly implement a retriever for ConversationalRetrievalChain and help update the code based on that?
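For what it's worth, my current guess at the direction is something like the sketch below (passing an actual retriever produced by `as_retriever()` instead of my dict of FAISS stores), though I am not sure how to keep multiple indexes that way:
```python
# Sketch (my guess): the chain expects a single BaseRetriever, e.g. from as_retriever().
single_db = dbs["category1"]
qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=single_db.as_retriever(search_kwargs={"k": 4}),
    memory=memory,
)
```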
Any help would be greatly appreciated. Thank you!
### Suggestion:
_No response_ | Issue: Abstract Class Implementation problem in Retrievers | https://api.github.com/repos/langchain-ai/langchain/issues/11303/comments | 21 | 2023-10-02T19:06:33Z | 2023-11-15T12:51:45Z | https://github.com/langchain-ai/langchain/issues/11303 | 1,922,487,402 | 11,303 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.291
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
elements_for_filter = ['a10','b10']
the_filter = {
'type': {
'in': elements_for_filter
}
}
if rag_similarity_strategy == 'similarity_score_threshold':
    retriever = vectordb.as_retriever(search_type=rag_similarity_strategy, search_kwargs={
        "k": 4, 'score_threshold': 0.6,
        'filter': the_filter})
elif rag_similarity_strategy == 'mmr':
    retriever = vectordb.as_retriever(search_type=rag_similarity_strategy, search_kwargs={
        'filter': the_filter, "k": 4, 'lambda_mult': 0.5,
        'fetch_k': 20,
    })
```
### Expected behavior
I expect the retriever to apply the filter when I use 'mmr', as it does when I use 'similarity_score_threshold', but it does not. | PGVector 'mmr' does not apply the filter at the time of retrieval | https://api.github.com/repos/langchain-ai/langchain/issues/11295/comments | 4 | 2023-10-02T16:36:34Z | 2024-02-11T16:13:06Z | https://github.com/langchain-ai/langchain/issues/11295 | 1,922,260,149 | 11,295 |
[
"hwchase17",
"langchain"
]
| ### Feature request
We are using your VLLMOpenAI class in a project to connect to our vLLM deployment. However, we don't use OpenAI, and found it kind of weird that the class exists but its naming suggests it is specifically for OpenAI only. You could generalise this class so that it can be used with any kind of vLLM deployment (which it already handles). So basically, just some refactoring might be the whole thing.
### Motivation
We are using this VLLMOpenAI class with other open-source models. It is misleading and unintuitive to say we use the VLLMOpenAI class. We would like a dedicated class for open-source models or any other kind of vLLM deployment.
### Your contribution
No time capacity :( So other than my genuine enthusiasm on your project and the whole documenting of the feature I can't really offer much more... | VLLMOpenAI class should be generalised | https://api.github.com/repos/langchain-ai/langchain/issues/11291/comments | 5 | 2023-10-02T14:25:10Z | 2024-02-09T16:19:48Z | https://github.com/langchain-ai/langchain/issues/11291 | 1,922,030,381 | 11,291 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hi, I'd like to request the chain type of Chain-of-Verification (CoVe).
### Motivation
I see that there's already a `LLMCheckerChain` and a `SmartLLMChain` which use related techniques, but implementing the below four-step process as described in https://arxiv.org/abs/2309.11495 would, I think, still be a very popular feature.
**Chain-of-Verification steps:**
1. drafts an initial response
2. plans verification questions to fact-check its draft
3. answers those questions independently so the answers are not biased by other responses
4. generates its final verified response
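To make the request concrete, here is a rough sketch of the four steps above wired together as chained prompts (the prompt wording is improvised, not taken from the paper):
```python
# Rough sketch of CoVe as four chained LLM calls; prompt wording is improvised.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = ChatOpenAI(temperature=0)

draft = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Answer the question.\nQuestion: {question}"))
plan = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "List verification questions (one per line) to fact-check this draft:\n{draft}"))
verify = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Answer this question on its own, without any other context:\n{check}"))
revise = LLMChain(llm=llm, prompt=PromptTemplate.from_template(
    "Question: {question}\nDraft answer: {draft}\n"
    "Verification Q&A:\n{evidence}\nWrite the final, corrected answer."))

def chain_of_verification(question: str) -> str:
    d = draft.run(question=question)
    checks = [q.strip() for q in plan.run(draft=d).splitlines() if q.strip()]
    evidence = "\n".join(f"{q} -> {verify.run(check=q)}" for q in checks)
    return revise.run(question=question, draft=d, evidence=evidence)
```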
### Your contribution
Not sure yet. | [Feature Request] Chain-of-Verification (CoVe) | https://api.github.com/repos/langchain-ai/langchain/issues/11285/comments | 6 | 2023-10-02T13:38:39Z | 2024-02-13T16:11:28Z | https://github.com/langchain-ai/langchain/issues/11285 | 1,921,942,394 | 11,285 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version 0.0.305
Python version 3.101.1
Platform VScode
I am trying to create a chatbot using langchain and streamlit by running this code:
```
import os
import streamlit as st
from st_chat_message import message
from dotenv import load_dotenv
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
SystemMessage,
HumanMessage,
AIMessage
)
def init():
    load_dotenv()
    #Checking Openai api key
    if os.getenv("OPENAI_API_KEY") is None or os.getenv("OPENAI_API_KEY") == "":
        print("OPENAI_API_KEY is NOT set yet")
        exit(1)
    else:
        print("OPENAI_API_KEY is fully set")
    st.set_page_config(
        page_title = "ZIKO",
        page_icon = "🤖"
    )
def main():
    init()
    chat = ChatOpenAI(temprature = 0)
    messages = [
        SystemMessage(content="You are a helpful assistant.")
        #HumanMessage(content=input),
        #AIMessage()
    ]
    st.header("ZIKO 🤖")
    with st.sidebar:
        user = st.text_input("Enter your message: ", key = "user")
        if user:
            message(user, is_user = True)
            messages.append(HumanMessage(content=user))
            response = chat(messages)
            message(response.content)
if __name__ == '__main__':
    main()
```
But I am getting this message:
ModuleNotFoundError: No module named 'langchain.chat_models'
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
Installing langchain
Importing langchain.chat_modules
### Expected behavior
Module loads successfully | ModuleNotFoundError: No module named 'langchain.chat_models' | https://api.github.com/repos/langchain-ai/langchain/issues/11277/comments | 4 | 2023-10-02T06:45:42Z | 2024-02-21T16:08:24Z | https://github.com/langchain-ai/langchain/issues/11277 | 1,921,339,371 | 11,277 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I want simple methods on the various memory classes to enable easy serialization and deserialization of the current state of the conversation as JSON dictionaries, crucially including system messages.
If this is currently already included, I cannot for the life of me find it anywhere in the documentation or the source code.
In particular there doesn't seem to be a way to add system messages directly to the memory.
Here's some pseudocode outlining what this would look like:
```python
serialized_memory = memory_instance.serialize_to_json()
new_memory = MemoryClass()
new_memory.load_from_json(serialized_memory)
```
In the new memory, the entire contents of the history (at least what has not been pruned) would be included, including any system messages, which for the summary variants would include the conversation summaries.
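For reference, the closest existing helpers appear to be `messages_to_dict` / `messages_from_dict` on the underlying chat history (sketch below; whether they cover system messages and the summary variants is exactly what this request is about):
```python
# Sketch using existing helpers; unclear if this covers summary memories or system messages.
from langchain.schema import messages_to_dict, messages_from_dict
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(return_messages=True)
memory.chat_memory.add_user_message("hi")
payload = messages_to_dict(memory.chat_memory.messages)   # JSON-serializable list of dicts

restored = ConversationBufferMemory(return_messages=True)
restored.chat_memory.messages = messages_from_dict(payload)
```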
### Motivation
The absence of easy serialization and deserialization methods makes it difficult to save and load conversations in JSON files or other formats. Working with dictionaries is straightforward, and they can be easily converted into various data storage formats.
### Your contribution
Possibly. I just started messing around with this module today, and I am relatively new to Python. I will be doing my best to learn more about how this library works so I can add this feature.
With some more research and experimentation this could be something I could do. | Simple Serialization and Deserialization of Memory classes to and from dictionaries w/ System Messages included | https://api.github.com/repos/langchain-ai/langchain/issues/11275/comments | 8 | 2023-10-02T04:50:53Z | 2024-02-15T16:09:05Z | https://github.com/langchain-ai/langchain/issues/11275 | 1,921,243,571 | 11,275 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Is it possible to use another language, like Spanish?
### Motivation
Another language.
### Your contribution
I don't | Another Languaje | https://api.github.com/repos/langchain-ai/langchain/issues/11272/comments | 3 | 2023-10-02T00:27:49Z | 2024-02-08T16:23:26Z | https://github.com/langchain-ai/langchain/issues/11272 | 1,921,054,820 | 11,272 |
[
"hwchase17",
"langchain"
]
| ### System Info
I'm trying to use LangChain with caching via GPTCache, using pydantic models.
(Prior to this message I'm building a prompt of mixed human and system messages.)
```
chain = create_structured_output_chain(Job, llm, prompt, verbose=False)
res = chain.run({'job': job})
```
when not using cache everything seems to work but when getting from cache I'm seeing this error:
```
File "/Users/orlevy/Projects/tfg/llm-service/venv/lib/python3.9/site-packages/langchain/schema/output.py", line 62, in set_text
values["text"] = values["message"].content
KeyError: 'message'
```
When I dug a little further, it seems like this code is not generating the same response (we are losing the function-call response):
<img width="661" alt="image" src="https://github.com/langchain-ai/langchain/assets/21291210/cfeccb45-4806-4f85-a01c-67b42a882c8d">
I modified the code a bit (not sure if this is actually a good idea), but this one seems to work for chat:
```
res = get(prompt, cache_obj=_gptcache)
if res:
    return [
        ChatGeneration(
            message=AIMessage(content='',
                additional_kwargs=generation_dict['message']['additional_kwargs']
            ),
            generation_info=dict(finish_reason=generation_dict.get("finish_reason")),
        ) for generation_dict in json.loads(res)
    ]
```
Would really appreciate your help with this.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. use chat with chain messages
2. use cachegpt (used postgres)
3. make one call
4. cache should be populated
5. from that point on when using cache we see error
### Expected behavior
I should be able to use cache | Chat cache results in KeyError: 'message' | https://api.github.com/repos/langchain-ai/langchain/issues/11268/comments | 7 | 2023-10-01T17:48:12Z | 2024-02-16T16:08:26Z | https://github.com/langchain-ai/langchain/issues/11268 | 1,920,886,661 | 11,268 |
[
"hwchase17",
"langchain"
]
| I'm using create_conversational_retrieval_agent to create an agent and also passing in a system prompt. However , the prompt is passed in as user message, is this expected behavior?
```
conv_agent_executor = create_conversational_retrieval_agent(
llm, tools, verbose=False,
system_message=SystemMessage(
content="Your name is Votum, an expert legal assistant with extensive knowledge about Indian law. Your task is to respond to the given query in a factually correct and concise manner unless asked for a detailed explanation. Feel free to use any tools available to look up relevant information, only if necessary")
)
```
Server Logs:
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Your are an expert legal assistant with extensive knowledge about Indian law. Your task is to respond to the given query in a factually correct and concise manner unless asked for a detailed explanation. Feel free to use any tools available to look up relevant information, only if necessary
Answer the following questions as best you can. You have access to the following APIs:
search: Call this tool to interact with the search API. What is the search API useful for? search(query: str) -> str - Useful for retrieving documents related to Indian law. Parameters: {"title": "SearchInput", "type": "object", "properties": {"query": {"title": "Query", "description": "should be a search query in string format", "type": "string"}}, "required": ["query"]}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [search]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can be repeated zero or more times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
```
### Suggestion:
_No response_ | SystemMessage not updating system prompt | https://api.github.com/repos/langchain-ai/langchain/issues/11262/comments | 2 | 2023-10-01T13:18:15Z | 2024-02-07T16:24:28Z | https://github.com/langchain-ai/langchain/issues/11262 | 1,920,763,561 | 11,262 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version v0.0.305
Python 3.11
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
claudev2 = BedrockChat(
client=bedrock_inference,
model_id="anthropic.claude-v2",
)
from langchain.prompts import PromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain_experimental.smart_llm import SmartLLMChain
prompt = PromptTemplate.from_template(hard_question)
chain = SmartLLMChain(
llm=claudev2,
prompt=prompt,
n_ideas=2,
)
response = chain.run({} )
print(response)
```
Results in error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb](https://file+.vscode-resource.vscode-cdn.net/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb) Cell 10 line 1
[5](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=4) prompt = PromptTemplate.from_template(hard_question)
[7](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=6) chain = SmartLLMChain(
[8](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=7) ideation_llm=claudev2,
[9](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=8) critique_llm=claudeinstant,
(...)
[15](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=14) #verbose=True,
[16](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=15) )
---> [18](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=17) response = chain.run({} )
[20](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=19) print(response)
File [~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain/chains/base.py:507](https://file+.vscode-resource.vscode-cdn.net/Users/austinmw/Desktop/bedrocktesting/~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain/chains/base.py:507), in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
505 if len(args) != 1:
506 raise ValueError("`run` supports only one positional argument.")
--> 507 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
508 _output_key
509 ]
511 if kwargs and not args:
512 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
513 _output_key
514 ]
File [~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain/chains/base.py:312](https://file+.vscode-resource.vscode-cdn.net/Users/austinmw/Desktop/bedrocktesting/~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain/chains/base.py:312), in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
310 except BaseException as e:
311 run_manager.on_chain_error(e)
--> 312 raise e
313 run_manager.on_chain_end(outputs)
314 final_outputs: Dict[str, Any] = self.prep_outputs(
315 inputs, outputs, return_only_outputs
316 )
File [~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain/chains/base.py:306](https://file+.vscode-resource.vscode-cdn.net/Users/austinmw/Desktop/bedrocktesting/~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain/chains/base.py:306), in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
299 run_manager = callback_manager.on_chain_start(
300 dumpd(self),
301 inputs,
302 name=run_name,
303 )
304 try:
305 outputs = (
--> 306 self._call(inputs, run_manager=run_manager)
307 if new_arg_supported
308 else self._call(inputs)
309 )
310 except BaseException as e:
311 run_manager.on_chain_error(e)
File [~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain_experimental/smart_llm/base.py:166](https://file+.vscode-resource.vscode-cdn.net/Users/austinmw/Desktop/bedrocktesting/~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain_experimental/smart_llm/base.py:166), in SmartLLMChain._call(self, input_list, run_manager)
164 ideas = self._ideate(stop, run_manager)
165 self.history.ideas = ideas
--> 166 critique = self._critique(stop, run_manager)
167 self.history.critique = critique
168 resolution = self._resolve(stop, run_manager)
File [~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain_experimental/smart_llm/base.py:293](https://file+.vscode-resource.vscode-cdn.net/Users/austinmw/Desktop/bedrocktesting/~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain_experimental/smart_llm/base.py:293), in SmartLLMChain._critique(self, stop, run_manager)
290 callbacks = run_manager.handlers if run_manager else None
291 if llm:
292 critique = self._get_text_from_llm_result(
--> 293 llm.generate_prompt([prompt], stop, callbacks), step="critique"
294 )
295 _colored_text = get_colored_text(critique, "yellow")
296 _text = "Critique:\n" + _colored_text
File [~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain/chat_models/base.py:475](https://file+.vscode-resource.vscode-cdn.net/Users/austinmw/Desktop/bedrocktesting/~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain/chat_models/base.py:475), in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
467 def generate_prompt(
468 self,
469 prompts: List[PromptValue],
(...)
472 **kwargs: Any,
473 ) -> LLMResult:
474 prompt_messages = [p.to_messages() for p in prompts]
--> 475 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File [~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain/chat_models/base.py:365](https://file+.vscode-resource.vscode-cdn.net/Users/austinmw/Desktop/bedrocktesting/~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain/chat_models/base.py:365), in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
363 if run_managers:
364 run_managers[i].on_llm_error(e)
--> 365 raise e
366 flattened_outputs = [
367 LLMResult(generations=[res.generations], llm_output=res.llm_output)
368 for res in results
369 ]
370 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File [~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain/chat_models/base.py:355](https://file+.vscode-resource.vscode-cdn.net/Users/austinmw/Desktop/bedrocktesting/~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain/chat_models/base.py:355), in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
352 for i, m in enumerate(messages):
353 try:
354 results.append(
--> 355 self._generate_with_cache(
356 m,
357 stop=stop,
358 run_manager=run_managers[i] if run_managers else None,
359 **kwargs,
360 )
361 )
362 except BaseException as e:
363 if run_managers:
File [~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain/chat_models/base.py:507](https://file+.vscode-resource.vscode-cdn.net/Users/austinmw/Desktop/bedrocktesting/~/mambaforge/envs/bedrock/lib/python3.11/site-packages/langchain/chat_models/base.py:507), in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
503 raise ValueError(
504 "Asked to cache, but no cache found at `langchain.cache`."
505 )
506 if new_arg_supported:
--> 507 return self._generate(
508 messages, stop=stop, run_manager=run_manager, **kwargs
509 )
510 else:
511 return self._generate(messages, stop=stop, **kwargs)
[/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb](https://file+.vscode-resource.vscode-cdn.net/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb) Cell 10 line 3
[366](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=365) if stop:
[367](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=366) params["stop_sequences"] = stop
--> [369](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=368) completion = self._prepare_input_and_invoke(
[370](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=369) prompt=prompt, stop=stop, run_manager=run_manager, **params
[371](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=370) )
[373](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=372) message = AIMessage(content=completion)
[374](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=373) return ChatResult(generations=[ChatGeneration(message=message)])
[/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb](https://file+.vscode-resource.vscode-cdn.net/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb) Cell 10 line 2
[223](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=222) provider = self._get_provider()
[224](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=223) params = {**_model_kwargs, **kwargs}
--> [225](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=224) input_body = LLMInputOutputAdapterv2.prepare_input(provider, prompt, params)
[226](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=225) body = json.dumps(input_body)
[227](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=226) accept = "application/json"
[/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb](https://file+.vscode-resource.vscode-cdn.net/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb) Cell 10 line 7
[73](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=72) input_body = {**model_kwargs}
[74](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=73) if provider == "anthropic":
---> [75](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=74) input_body["prompt"] = _human_assistant_format_v2(prompt)
[76](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=75) elif provider == "ai21":
[77](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=76) input_body["prompt"] = prompt
[/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb](https://file+.vscode-resource.vscode-cdn.net/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb) Cell 10 line 4
[47](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=46) count += 1
[48](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=47) else:
---> [49](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=48) raise ValueError(ALTERNATION_ERROR)
[51](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=50) if count % 2 == 1: # Only saw Human, no Assistant
[52](vscode-notebook-cell:/Users/austinmw/Desktop/bedrocktesting/bedrock_test.ipynb#X14sZmlsZQ%3D%3D?line=51) input_text = input_text + ASSISTANT_PROMPT # SILENT CORRECTION
ValueError: Error: Prompt must alternate between '
Human:' and '
Assistant:'.
```
### Expected behavior
No error | SmartLLMChain errors with Bedrock Anthropic models and n_ideas>1 | https://api.github.com/repos/langchain-ai/langchain/issues/11261/comments | 8 | 2023-10-01T13:06:07Z | 2024-05-11T16:07:02Z | https://github.com/langchain-ai/langchain/issues/11261 | 1,920,758,064 | 11,261 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hi, does langchain have a feature similar to LlamaIndex's [Sub Question Query Engine](https://gpt-index.readthedocs.io/en/latest/examples/query_engine/sub_question_query_engine.html)? If not, I'd like to submit a feature request!
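For reference, a rough sketch of how I imagine this could be approximated with existing LangChain pieces: ask an LLM to decompose the question, answer each sub-question with a retrieval chain, then synthesize. The prompts, the newline-based parsing, and the `retriever` variable are my own assumptions, not an existing LangChain feature:
```python
from langchain.chains import LLMChain, RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

llm = ChatOpenAI(temperature=0)
# Assumption: `retriever` is any retriever you already have, e.g. vectorstore.as_retriever()
qa = RetrievalQA.from_chain_type(llm=llm, retriever=retriever)

decompose = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(
        "Break this question into simpler sub-questions, one per line:\n{question}"
    ),
)
synthesize = PromptTemplate.from_template(
    "Sub-answers:\n{context}\n\nUse them to answer the original question: {question}"
)

def sub_question_query(question: str) -> str:
    sub_questions = [q.strip() for q in decompose.run(question=question).splitlines() if q.strip()]
    sub_answers = [f"{q}\n{qa.run(q)}" for q in sub_questions]
    return LLMChain(llm=llm, prompt=synthesize).run(
        context="\n\n".join(sub_answers), question=question
    )
```
Obviously this loses the nice tooling LlamaIndex provides around it (per-source routing, structured sub-question output), which is why a first-class feature would still be valuable.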
### Motivation
This capability is very useful!
### Your contribution
Not sure. | Sub Question Query Engine | https://api.github.com/repos/langchain-ai/langchain/issues/11260/comments | 10 | 2023-10-01T12:37:44Z | 2024-05-11T16:06:57Z | https://github.com/langchain-ai/langchain/issues/11260 | 1,920,745,638 | 11,260 |
[
"hwchase17",
"langchain"
]
| ### System Info
|Name|Detail|
|--------|--------|
| Python | `3.10.13` |
| LangChain | `0.0.301` |
| System | MacOS `12.2.1` |
### Bug Detail
While playing around with a recursive function that decrements the value `k`, I ran into an issue whereby `self` appears to be overwritten and bound to an instance of `openai.error.InvalidRequestError` once that exception has been caught. Note that in the code below, the `AttributeError` is raised when `self.save()` executes after a successful `chain.run()`, i.e. on a `try` in which `openai.error.InvalidRequestError` is not thrown.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**The code below is a gross oversimplification, but it illustrates how I found this bug:**
```python
import openai
from django.db import models
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import ChatOpenAI


class SomeModel(models.Model):
    query = models.CharField(max_length=255)
    response = models.TextField()

    def save(self, *args, **kwargs):
        # Unmodified Django save method
        return super().save(*args, **kwargs)

    def run(self, kb, k=5):
        try:
            llm = ChatOpenAI()
            chain = load_qa_chain(llm=llm, chain_type='stuff')
            self.response = chain.run(input_documents=[], human_input=self.query)
            self.save()  # AttributeError: 'InvalidRequestError' object has no attribute 'save'
        except (openai.error.InvalidRequestError,) as e:
            # Retry recursively with a smaller k until it succeeds or k runs out
            if k < 2:
                raise e
            new_k_val = k - 1
            print('K value too high. Trying with %s' % new_k_val)
            return self.run(kb, k=new_k_val)
```
### Expected behavior
When the exception is caught, `self` should not be re-bound to an instance of `openai.error.InvalidRequestError`. | ChatOpenAI: Exception InvalidRequestError changes bound "self" | https://api.github.com/repos/langchain-ai/langchain/issues/11258/comments | 2 | 2023-10-01T10:23:41Z | 2023-10-01T11:52:12Z | https://github.com/langchain-ai/langchain/issues/11258 | 1,920,688,423 | 11,258 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Is it possible to integrate a language model (LM) trained on source code or open-source codebases in various programming languages, e.g.:
- [StarCoder](https://github.com/bigcode-project/starcoder)
- [replit/replit-code-v1-3b](https://huggingface.co/replit/replit-code-v1-3b)
- [Bloom](https://bigscience.huggingface.co/blog/bloom)
- etc...
as an [LLM Model](https://python.langchain.com/en/latest/modules/models.html) or an [Agent](https://python.langchain.com/en/latest/modules/agents.html) with [LangChain](https://github.com/hwchase17/langchain), and [chain](https://python.langchain.com/en/latest/modules/chains.html) it in a complex use case?
It would be really cool if, during a general query, the LangChain system had the capability to understand code context (possibly when chained to a vector DB or any data source that contains code or a full code repo), _kind of like chatting with your codebase_!
Any help / hints on the same would be appreciated!
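For what it's worth, a rough sketch of the kind of wrapper I have in mind, built on LangChain's custom-LLM interface plus a Hugging Face `text-generation` pipeline. The checkpoint name, generation settings, and the per-call pipeline creation are placeholders and simplifications, not an official integration:
```python
from typing import Any, List, Mapping, Optional

from langchain.llms.base import LLM
from transformers import pipeline


class CodeLLM(LLM):
    """Minimal custom LLM wrapping a Hugging Face code-generation model."""

    model_id: str = "bigcode/starcoderbase-1b"  # placeholder checkpoint
    max_new_tokens: int = 128

    @property
    def _llm_type(self) -> str:
        return "custom_code_llm"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        # NOTE: re-creating the pipeline on every call is wasteful; cache it in real code.
        generator = pipeline("text-generation", model=self.model_id)
        text = generator(prompt, max_new_tokens=self.max_new_tokens)[0]["generated_text"]
        return text[len(prompt):]  # return only the completion, not the echoed prompt

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {"model_id": self.model_id}
```
I realize `HuggingFacePipeline.from_model_id(...)` may already cover part of this; what I am really after is first-class, documented support (including agent/tool use) for code models like the ones listed above.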
### Motivation
Initially inspired by this [Issue #173](https://github.com/nomic-ai/gpt4all/issues/173)
Raised a similar issue a while back on StarCoder: [Issue #53](https://github.com/bigcode-project/starcoder/issues/53)
### Your contribution
n/a | Implement a custom LangChain class wrapper (LLM model/Agent) for StarCoder model | https://api.github.com/repos/langchain-ai/langchain/issues/11252/comments | 11 | 2023-09-30T17:57:16Z | 2024-03-13T19:58:25Z | https://github.com/langchain-ai/langchain/issues/11252 | 1,920,348,709 | 11,252 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am following this example: https://python.langchain.com/docs/integrations/memory/dynamodb_chat_message_history
When attempting to import DynamoDBChatMessageHistory I am receiving an error
`from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory`
```
---------------------------------------------------------------------------
PydanticUserError Traceback (most recent call last)
/var/folders/jq/w80pdyz11zbbhbvz7p8795gh0000gn/T/ipykernel_9395/3590968237.py in <module>
----> 1 from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory
2
3 # history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="0")
4
5 # history.add_user_message("hi!")
/usr/local/lib/python3.9/site-packages/langchain/__init__.py in <module>
4 from typing import Optional
5
----> 6 from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
7 from langchain.cache import BaseCache
8 from langchain.chains import (
/usr/local/lib/python3.9/site-packages/langchain/agents/__init__.py in <module>
1 """Interface for agents."""
----> 2 from langchain.agents.agent import (
3 Agent,
4 AgentExecutor,
5 AgentOutputParser,
/usr/local/lib/python3.9/site-packages/langchain/agents/agent.py in <module>
13 from pydantic import BaseModel, root_validator
14
---> 15 from langchain.agents.tools import InvalidTool
16 from langchain.base_language import BaseLanguageModel
17 from langchain.callbacks.base import BaseCallbackManager
/usr/local/lib/python3.9/site-packages/langchain/agents/tools.py in <module>
2 from typing import Optional
3
----> 4 from langchain.callbacks.manager import (
5 AsyncCallbackManagerForToolRun,
6 CallbackManagerForToolRun,
/usr/local/lib/python3.9/site-packages/langchain/callbacks/__init__.py in <module>
1 """Callback handlers that allow listening to events in LangChain."""
2
----> 3 from langchain.callbacks.aim_callback import AimCallbackHandler
4 from langchain.callbacks.clearml_callback import ClearMLCallbackHandler
5 from langchain.callbacks.comet_ml_callback import CometCallbackHandler
/usr/local/lib/python3.9/site-packages/langchain/callbacks/aim_callback.py in <module>
2 from typing import Any, Dict, List, Optional, Union
3
----> 4 from langchain.callbacks.base import BaseCallbackHandler
5 from langchain.schema import AgentAction, AgentFinish, LLMResult
6
/usr/local/lib/python3.9/site-packages/langchain/callbacks/base.py in <module>
5 from uuid import UUID
6
----> 7 from langchain.schema import (
8 AgentAction,
9 AgentFinish,
/usr/local/lib/python3.9/site-packages/langchain/schema.py in <module>
145
146
--> 147 class ChatGeneration(Generation):
148 """Output of a single generation."""
149
/usr/local/lib/python3.9/site-packages/langchain/schema.py in ChatGeneration()
152
153 @root_validator
--> 154 def set_text(cls, values: Dict[str, Any]) -> Dict[str, Any]:
155 values["text"] = values["message"].content
156 return values
/usr/local/lib/python3.9/site-packages/pydantic/deprecated/class_validators.py in root_validator(pre, skip_on_failure, allow_reuse, *__args)
220 if __args:
221 # Ensure a nice error is raised if someone attempts to use the bare decorator
--> 222 return root_validator()(*__args) # type: ignore
223
224 if allow_reuse is True: # pragma: no cover
/usr/local/lib/python3.9/site-packages/pydantic/deprecated/class_validators.py in root_validator(pre, skip_on_failure, allow_reuse, *__args)
226 mode: Literal['before', 'after'] = 'before' if pre is True else 'after'
227 if pre is False and skip_on_failure is not True:
--> 228 raise PydanticUserError(
229 'If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`.'
230 ' Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.',
PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.
For further information visit https://errors.pydantic.dev/2.1.1/u/root-validator-pre-skip
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Attempt to import DynamoDBChatMessageHistory:
`from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory`
### Expected behavior
no errors, successful import | Error importing DynamoDBChatMessageHistory | https://api.github.com/repos/langchain-ai/langchain/issues/11250/comments | 2 | 2023-09-30T15:34:02Z | 2024-01-30T00:43:41Z | https://github.com/langchain-ai/langchain/issues/11250 | 1,920,300,925 | 11,250 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.305
MacOS M1 Silicon
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have a basic method for loading a PDF via Mathpix:
```python
from langchain.document_loaders import MathpixPDFLoader


def load_pdf(path):
    # Send the PDF to the Mathpix API and wait for the processed result
    loader = MathpixPDFLoader(path)
    pages = loader.load()
    return pages


pages = load_pdf("example_data/DOF-230519AFYXBECVECGY-0028747724.pdf")
```
but when I run this, all I get is:
Status: None, waiting for processing to complete
Status: None, waiting for processing to complete
Status: None, waiting for processing to complete
Status: None, waiting for processing to complete
Status: None, waiting for processing to complete
Status: None, waiting for processing to complete
Status: None, waiting for processing to complete
...
In the Mathpix backend I can see that the document has already been parsed.
Any idea on this?
### Expected behavior
I expect the loaded document to be returned. | MathpixPDFLoader never finishes | https://api.github.com/repos/langchain-ai/langchain/issues/11249/comments | 6 | 2023-09-30T12:12:22Z | 2024-03-13T20:04:02Z | https://github.com/langchain-ai/langchain/issues/11249 | 1,920,235,293 | 11,249 |
[
"hwchase17",
"langchain"
]
| ### System Info
version 0.300
<img width="1397" alt="Screenshot 2023-09-30 at 5 36 41 AM" src="https://github.com/langchain-ai/langchain/assets/17947802/124b1f2a-db02-497f-9cd0-dbad1033b6c8">
\\nObservation: requests_put is not a valid tool, try one of [requests_get, requests_post].\\nThought:I need to use requests_post to update the pet\'s ship date.\\n\\nAction:
This keeps happening time and time again!
### Who can help?
hierarchical planner, requests agent
https://github.com/openchatai/OpenCopilot/blob/ecd2f51bfc1f989133a44c4cff15d858b0d2fd30/llm-server/api_caller/planner.py#L319
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
--
### Expected behavior
The agent should be able to use the `put` tool when it needs to use it | Hierarchical Planner - Observation: requests_put is not a valid tool, try one of [requests_get, requests_post]. | https://api.github.com/repos/langchain-ai/langchain/issues/11244/comments | 5 | 2023-09-30T02:45:21Z | 2024-01-30T00:42:35Z | https://github.com/langchain-ai/langchain/issues/11244 | 1,920,074,483 | 11,244 |
[
"hwchase17",
"langchain"
]
| ### Feature request
We all want our clients to be able to interact with the database using an LLM, and LangChain with create_sql_agent is great. The only problem I see is that there is no way to limit a user's access to the database and fence them into their own data.
For example, we have a table called AI_Product with many columns, one of which is store_id. A normal user should only have limited access to this table, and I don't trust the AI to enforce that limit; there should be a way to do it programmatically.
So let's look at AI_Product
```
id, name, price, store_id
1, water, 2, 10001
2, pepsi, 4, 10001
3, cokezero , 5, 10002
```
For the above table, the user only has rights to the rows where store_id = 10001; for security and data correctness we need a way to limit what this user can see.
One solution could be to allow include_tables to accept SQL, changing
`include_tables = ['AI_Product']`
to
`include_tables = ["(select * from AI_Product where store_id = '10001') AI_Product"]`
The other option is to let the generated SQL query come back before execution and allow me to modify it.
So if I asked, "How many products do we have?":
```
# agent generates: sql = "select count(*) from AI_Product"
# We can then rewrite the SQL before it is executed:
protected_sql = sql.replace("AI_Product", "(select * from AI_Product where store_id = '10001') AI_Product")
# protected_sql == "select count(*) from (select * from AI_Product where store_id = '10001') AI_Product"
agent.run(protected_sql)
```
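A slightly more concrete sketch of that second idea, assuming the generated SQL can be intercepted before execution; the `run_scoped` helper, the connection string, and the hard-coded store id are illustrative only, not an existing LangChain API:
```python
from langchain.utilities import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///shop.db")  # illustrative connection string

def run_scoped(sql: str, store_id: str = "10001") -> str:
    # Fence the query to a single store by swapping the raw table name
    # for a filtered derived table before it ever reaches the database.
    filtered = f"(select * from AI_Product where store_id = '{store_id}') AI_Product"
    return db.run(sql.replace("AI_Product", filtered))
```
Hooking something like `run_scoped` into the agent or chain, instead of letting it call `db.run` directly, is exactly the extension point I am asking for.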
### Motivation
Find a way to protect the data while still allowing the end user to query it freely.
### Your contribution
I'm not familiar with the internal code, but I can help if guided | Limit Data and Enforcing Security for SQLDatabase & Agents | https://api.github.com/repos/langchain-ai/langchain/issues/11243/comments | 17 | 2023-09-29T22:31:23Z | 2024-03-25T16:06:18Z | https://github.com/langchain-ai/langchain/issues/11243 | 1,919,979,559 | 11,243 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
When doing RAG, we create a vector DB and then query it like this:
```python
index = VectorstoreIndexCreator(
    vectorstore_cls=DocArrayInMemorySearch
).from_loaders([loader])

index.query(llm=chat, question=query)
```
We get some response back.
Now let's say I want to build a dataset from the generations I got from GPT in order to fine-tune a Llama 2 LLM.
Here is the main problem: how do I know which references were retrieved by the index search, since it is totally abstracted away?
I need the exact prompt (retrieved context plus question) that went into the LLM (GPT) in this process.
Alternatively, is there any other method that solves my problem? (I tried similarity search and fed the reference + query manually, but the quality of the output dropped, so I need the index search itself.)
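For reference, a minimal sketch of a possible workaround, reusing the same vector store via `index.vectorstore`: a `RetrievalQA` chain with `return_source_documents=True` at least exposes which references were retrieved alongside the answer (the `chat` and `query` variables are the same as above):
```python
from langchain.chains import RetrievalQA

qa = RetrievalQA.from_chain_type(
    llm=chat,
    chain_type="stuff",
    retriever=index.vectorstore.as_retriever(),
    return_source_documents=True,  # expose the retrieved chunks
)

result = qa({"query": query})
answer = result["result"]
references = result["source_documents"]  # the documents stuffed into the prompt
```
Even with this, I still don't see the exact final prompt string that was sent to the LLM, which is what I need for building the fine-tuning dataset.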
### Suggestion:
_No response_ | Issue: < Urgent Help needed related to VectorstoreIndexCreator > | https://api.github.com/repos/langchain-ai/langchain/issues/11242/comments | 2 | 2023-09-29T21:53:43Z | 2023-10-31T07:58:55Z | https://github.com/langchain-ai/langchain/issues/11242 | 1,919,948,621 | 11,242 |
[
"hwchase17",
"langchain"
]
| ### Feature request
[LanguageParser](https://github.com/langchain-ai/langchain/blob/e9b51513e9bab738c741a348ffbf0f36ce19faf6/libs/langchain/langchain/document_loaders/parsers/language/language_parser.py#L21C7-L21C21) is a parser for Document Loaders that, given source code, splits each top-level function or class into separate documents. As stated in its documentation:
> This approach can potentially improve the accuracy of QA models over source code.
>
> Currently, the supported languages for code parsing are Python and JavaScript.
We would like to add support for additional languages, such as C, C++, Rust, Ruby, Perl, and so on.
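For context, a minimal sketch of how LanguageParser is used today for one of the currently supported languages (the path, glob, and threshold below are illustrative):
```python
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import LanguageParser
from langchain.text_splitter import Language

loader = GenericLoader.from_filesystem(
    "./my_project",          # illustrative path
    glob="**/*",
    suffixes=[".py"],
    parser=LanguageParser(language=Language.PYTHON, parser_threshold=50),
)
docs = loader.load()  # one document per top-level function/class, plus the remaining code
```
The goal of this request is for the same pattern to work when `suffixes` points at `.c`, `.cpp`, `.rs`, `.rb`, `.pl`, and so on.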
### Motivation
There is an [open request](https://github.com/langchain-ai/langchain/issues/10996) for "CPP" (presumably C++) support. By integrating a generic parsing library (such as [tree-sitter](https://github.com/tree-sitter/py-tree-sitter#py-tree-sitter)), we could make LanguageParser work with many more languages, and thus be more generally useful.
### Your contribution
We intend to submit a pull request for this issue no later than mid-November, and likely sooner. | Add support for a variety of languages to LanguageParser | https://api.github.com/repos/langchain-ai/langchain/issues/11229/comments | 33 | 2023-09-29T16:07:57Z | 2024-06-26T09:34:16Z | https://github.com/langchain-ai/langchain/issues/11229 | 1,919,554,598 | 11,229 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python = 3.8.16
langchain==0.0.304
langchainhub==0.1.13
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. install python 3.8.16
2. install langchain==0.0.304 and langchainhub==0.1.13
3. Run the following script:
```python
from langchain import hub

# LANGCHAIN_HUB_API_KEY is set in my .env file
# pull a chat prompt
try:
    prompt = hub.pull("efriis/my-first-prompt")
except Exception as e:
    print(e)
    raise e
```
### Expected behavior
I'm uncertain about the expected behavior, as I've never been able to successfully execute hub.pull. | Encountering a "type object is not subscriptable" error when invoking hub.pull. | https://api.github.com/repos/langchain-ai/langchain/issues/11226/comments | 12 | 2023-09-29T15:55:14Z | 2024-02-27T11:46:37Z | https://github.com/langchain-ai/langchain/issues/11226 | 1,919,535,609 | 11,226 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
What is the difference between OpenAIFunctionsAgent and OpenAIMultiFunctionsAgent? The documentation does not clarify this.
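For context, a minimal sketch of how the two agent types are selected (the toy tool and model name are illustrative); the question is how the resulting behavior differs between them:
```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI

def word_count(text: str) -> str:
    return str(len(text.split()))

tools = [Tool(name="word_count", func=word_count, description="Count the words in a text.")]
llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)

single = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True)
multi = initialize_agent(tools, llm, agent=AgentType.OPENAI_MULTI_FUNCTIONS, verbose=True)
```
Presumably the "multi" variant lets the model request several tool calls in a single step, but it would be good to have that spelled out in the docs.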
### Idea or request for content:
_No response_ | DOC: What is the difference between OpenAIFunctionsAgent and OpenAIMultiFunctionsAgent? | https://api.github.com/repos/langchain-ai/langchain/issues/11225/comments | 4 | 2023-09-29T15:51:08Z | 2024-02-12T16:12:59Z | https://github.com/langchain-ai/langchain/issues/11225 | 1,919,529,912 | 11,225 |
[
"hwchase17",
"langchain"
]
| When using async `LLMChain` calls, the logger prints out a ton of info statements that look like this:
```
2023-09-29 08:24:51,038:INFO - message='OpenAI API response' path=https://api.openai.com/v1/chat/completions processing_ms=7 request_id=0cde8ecb8d402ec3db4ec674f0450ade response_code=200
```
I tried setting the logging level on the execution level in a bunch of ways, but to no avail. Do you have any recommendation on how to get rid of those logs? | How to disable logger streaming for async LLMChain calls? | https://api.github.com/repos/langchain-ai/langchain/issues/11223/comments | 1 | 2023-09-29T15:26:52Z | 2023-09-29T15:41:29Z | https://github.com/langchain-ai/langchain/issues/11223 | 1,919,488,448 | 11,223 |
[
"hwchase17",
"langchain"
]
| ### System Info
While trying to use BedrockChat, it threw the error below. The updated prompt, which I am using after the recent changes to the Claude prompt format, is below.
prompt_template = """
Human:
You are a helpful, respectful, and honest assistant, dedicated to providing valuable and accurate information.
Guidance for answers below
Answer the question only using the in the context given below, and not with the prior knowledge.
If you don't see answer in the context just Reply "Sorry , the answer is not in the context so I don't know".
Now read this context and answer the question.
{context}
Based on the provided context above and information from the retriever source, provide a detailed answer to the below question
{question}
If the information is not available in the context , respond with "don't know."
Assistant:
"""
Error is below
raise ValueError(ALTERNATION_ERROR)
ValueError: Error: Prompt must alternate between '
Human:' and '
Assistant:'.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
prompt_template = """
Human:
You are a helpful, respectful, and honest assistant, dedicated to providing valuable and accurate information.
Guidance for answers below
Answer the question only using the in the context given below, and not with the prior knowledge.
If you don't see answer in the context just Reply "Sorry , the answer is not in the context so I don't know".
Now read this context and answer the question.
{context}
Based on the provided context above and information from the retriever source, provide a detailed answer to the below question
{question}
If the information is not available in the context , respond with "don't know."
Assistant:
"""

modelId = 'anthropic.claude-v2'
# Langchain chain for invoking Endpoint
llm = BedrockChat(
    model_id=modelId,
    client=bedrock_client,
    model_kwargs={"max_tokens_to_sample": 1000}
)
```
### Expected behavior
I would have expected the code to run with the prompt. | raise ValueError(ALTERNATION_ERROR) ValueError: Error: Prompt must alternate between ' Human:' and ' Assistant:'. | https://api.github.com/repos/langchain-ai/langchain/issues/11220/comments | 16 | 2023-09-29T13:18:59Z | 2024-05-11T16:06:52Z | https://github.com/langchain-ai/langchain/issues/11220 | 1,919,260,245 | 11,220 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I fetch the result from pinecone using Pinecone's from_existing_index function.
My single Index contains multiple namespaces in pinecone.
Now I have 2 cases:
1. I want to get results from a particular namespace.
2. Secondly, I want to fetch results from all of the index's namespaces.
How is this possible? Please guide me accordingly.
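For reference, a minimal sketch of what case 1 looks like if the `namespace` keyword is honored by `from_existing_index` and `similarity_search` (the index name, embedding model, and namespace below are illustrative):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

embeddings = OpenAIEmbeddings()

# Case 1: restrict everything to one namespace
store = Pinecone.from_existing_index("my-index", embeddings, namespace="tenant-a")
docs = store.similarity_search("my question", k=4)

# Case 2 (assumption): I don't see a single call that spans every namespace,
# so the closest option seems to be querying each namespace and merging the results.
```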
### Suggestion:
_No response_ | Issue: How can I fetch the response from Pinecone using namespace with similarity_search | https://api.github.com/repos/langchain-ai/langchain/issues/11212/comments | 2 | 2023-09-29T08:39:49Z | 2024-01-30T00:41:52Z | https://github.com/langchain-ai/langchain/issues/11212 | 1,918,850,778 | 11,212 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have a custom memory with the memory key 'context', as follows, which I would like to use in the underlying chain as read-only memory:
```
memory = ScopeContextMemory(memory_key='context')
readonlymemory = ReadOnlySharedMemory(memory_key='context', memory=memory)
```
I have the following template stored in a file:
```
Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.
Use the following format:
Question: "Question here"
SQLQuery: "SQL Query to run"
SQLResult: "Result of the SQLQuery"
Answer: "Final answer here"
Only use the following tables:
{table_info}
Use the following context for the query generation:
{context}
```
I have the prompt initialised as follows:
```
with open('prompts/sql_default_prompt.txt','r') as fp:
    DEFAULT_PROMPT = PromptTemplate(input_variables=['input','table_info','dialect', 'context'],
                                    template=fp.read())
```
NOTE: 'context' is added as one of the input variables.
I've initialised the SQLDatabaseChain as follows:
```
llm = OpenAI(temperature=0)
chain = SQLDatabaseChain.from_llm(llm=llm,
                                  db=db,
                                  verbose=True,
                                  use_query_checker=True,
                                  prompt=DEFAULT_PROMPT,
                                  return_intermediate_steps=True,
                                  memory=memory,
                                  top_k=3)
```
When I try to run it, I get an exception about the missing input key 'context':
```
chain("List Items under the shopping cart for Davis")
---------------------------------------------------------------------------
ValueError
169 missing_keys = set(self.input_keys).difference(inputs)
170 if missing_keys:
--> 171 raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {'context'}
```
### Suggestion:
I've tried looking at the documentation on memory and its usage in chains, tools and agents, but could not figure out what is going wrong here. Any pointers on the cause, and on how I can remediate it, are appreciated. | Issue: SQLDatabaseChain fails to run when initialised with custom read only memory | https://api.github.com/repos/langchain-ai/langchain/issues/11211/comments | 8 | 2023-09-29T07:59:22Z | 2023-09-29T14:32:37Z | https://github.com/langchain-ai/langchain/issues/11211 | 1,918,785,813 | 11,211 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hi,
I am working on Document QA, and using ConversationalRetrievalChain, with map-reduce as chain type.
In the final result of ConversationalRetrievalChain, I want to get the individual output from each map-chain.
### Motivation
This is a separate issue, and I think there should be a parameter that we can pass, to get the output of these chains as well.
### Your contribution
Not as of now. | Return output from map chain in the final result of ConversationalRetrievalChain. | https://api.github.com/repos/langchain-ai/langchain/issues/11209/comments | 4 | 2023-09-29T07:23:10Z | 2024-02-07T16:24:38Z | https://github.com/langchain-ai/langchain/issues/11209 | 1,918,733,653 | 11,209 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
How do you stream a langchain response from Flask to React?
I am having issues with it. I successfully implemented it with the Azure OpenAI API directly, but with LangChain I can't work out the right implementation.
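For reference, a minimal sketch of the pattern I would expect to work: a callback handler pushes tokens onto a queue, the LLM call runs in a background thread, and Flask streams the queue back as a chunked response (the prompt, route, and mimetype are illustrative):
```python
import queue
import threading

from flask import Flask, Response
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI

app = Flask(__name__)


class QueueHandler(BaseCallbackHandler):
    """Push each newly generated token onto a queue."""

    def __init__(self, q: queue.Queue):
        self.q = q

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.q.put(token)


@app.route("/chat")
def chat():
    q: queue.Queue = queue.Queue()
    llm = ChatOpenAI(streaming=True, callbacks=[QueueHandler(q)])

    def run():
        llm.predict("Tell me a short joke")  # illustrative prompt
        q.put(None)  # sentinel: generation finished

    threading.Thread(target=run).start()

    def tokens():
        while (tok := q.get()) is not None:
            yield tok  # the React client reads this as a streamed response

    return Response(tokens(), mimetype="text/plain")
```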
### Suggestion:
_No response_ | How do you stream a langchain response from Flask to React? | https://api.github.com/repos/langchain-ai/langchain/issues/11208/comments | 3 | 2023-09-29T04:59:39Z | 2024-04-27T16:28:23Z | https://github.com/langchain-ai/langchain/issues/11208 | 1,918,600,924 | 11,208 |
[
"hwchase17",
"langchain"
]
| ### System Info
Following the indexing steps from https://python.langchain.com/docs/modules/data_connection/indexing you'll find the following error: "redis.exceptions.ResponseError: my_docs: no such index".
You'll get this exception while using redis as retriever:

### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

The error is here:

If you look for the index you'll get (empty list or set).
This line makes it impossible to save in the wanted format, and there's another thing: the index is not created for some reason.
I'll try to fix it, but I'm not sure it's possible for me at the moment, so I'm reporting this; I hope it helps.
### Expected behavior
Expected behavior inside Redis: "docs:indexname_:12ss2sadd" | Documents not being correctly indexed in vector database. ["redis.exceptions.ResponseError: my_docs: no such index"] | https://api.github.com/repos/langchain-ai/langchain/issues/11197/comments | 3 | 2023-09-28T19:57:36Z | 2023-10-24T17:51:26Z | https://github.com/langchain-ai/langchain/issues/11197 | 1,918,183,041 | 11,197 |
[
"hwchase17",
"langchain"
]
| ### System Info
Lambda Python 3.11 runtime
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
As of Bedrock GA today, I get this error:
"/var/task/langchain/llms/bedrock.py", line 237, in _prepare_input_and_invoke raise ValueError(f"Error raised by bedrock service: {e}") ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: The requested operation is not recognized by the service.
### Expected behavior
It should invoke the model. I'm guessing with today's GA release of Bedrock, an underlying API call changed. | Bedrock InvokeModel failes on GA release | https://api.github.com/repos/langchain-ai/langchain/issues/11189/comments | 8 | 2023-09-28T18:17:28Z | 2024-02-15T16:09:15Z | https://github.com/langchain-ai/langchain/issues/11189 | 1,918,043,819 | 11,189 |
[
"hwchase17",
"langchain"
]
| ### System Info
Amazon Linux, langchain version 0.0.304, Docker Image
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. Initialize ChatAnthropic model, with keyword "anthropic_api_key", as documented: https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.anthropic.ChatAnthropic.html#langchain.chat_models.anthropic.ChatAnthropic
2. Call chat model via __call__ method, e.g.:
"""
chat = ChatAnthropic(..., anthropic_api_key=os.environ["ANTHROPIC_API_KEY"])
messages = [....]
response = chat(messages)
"""
### Expected behavior
A normal response for a chat model is expected, but instead we get:
```
Traceback (most recent call last):
File "/var/task/app/src/utils/error_route.py", line 27, in custom_route_handler
return await original_route_handler(request)
File "/var/lang/lib/python3.10/site-packages/fastapi/routing.py", line 273, in app
raw_response = await run_endpoint_function(
File "/var/lang/lib/python3.10/site-packages/fastapi/routing.py", line 190, in run_endpoint_function
return await dependant.call(**values)
File "/var/task/app/app.py", line 184, in chat
reply = chat(data["question"])
File "/var/task/app/src/app_modules/chat.py", line 192, in __call__
res = self.chatbot(messages)
File "/var/lang/lib/python3.10/site-packages/langchain/chat_models/base.py", line 595, in __call__
generation = self.generate(
File "/var/lang/lib/python3.10/site-packages/langchain/chat_models/base.py", line 348, in generate
raise e
File "/var/lang/lib/python3.10/site-packages/langchain/chat_models/base.py", line 338, in generate
self._generate_with_cache(
File "/var/lang/lib/python3.10/site-packages/langchain/chat_models/base.py", line 490, in _generate_with_cache
return self._generate(
File "/var/lang/lib/python3.10/site-packages/langchain/chat_models/anthropic.py", line 184, in _generate
response = self.client.completions.create(**params)
File "/var/lang/lib/python3.10/site-packages/anthropic/_utils/_utils.py", line 250, in wrapper
return func(*args, **kwargs)
File "/var/lang/lib/python3.10/site-packages/anthropic/resources/completions.py", line 223, in create
return self._post(
File "/var/lang/lib/python3.10/site-packages/anthropic/_base_client.py", line 949, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "/var/lang/lib/python3.10/site-packages/anthropic/_base_client.py", line 748, in request
return self._request(
File "/var/lang/lib/python3.10/site-packages/anthropic/_base_client.py", line 766, in _request
request = self._build_request(options)
File "/var/lang/lib/python3.10/site-packages/anthropic/_base_client.py", line 397, in _build_request
headers = self._build_headers(options)
File "/var/lang/lib/python3.10/site-packages/anthropic/_base_client.py", line 373, in _build_headers
headers = httpx.Headers(headers_dict)
File "/var/lang/lib/python3.10/site-packages/httpx/_models.py", line 71, in __init__
self._list = [
File "/var/lang/lib/python3.10/site-packages/httpx/_models.py", line 75, in <listcomp>
normalize_header_value(v, encoding),
File "/var/lang/lib/python3.10/site-packages/httpx/_utils.py", line 53, in normalize_header_value
return value.encode(encoding or "ascii")
AttributeError: 'SecretStr' object has no attribute 'encode'
```
Corresponding httpx code:
```
def normalize_header_value(
value: typing.Union[str, bytes], encoding: typing.Optional[str] = None
) -> bytes:
"""
Coerce str/bytes into a strictly byte-wise HTTP header value.
"""
if isinstance(value, bytes):
return value
return value.encode(encoding or "ascii")
``` | Pydantic type conversion to SecreteStr for ChatAnthropic anthropic_api_key keyword value raises attribute error in Anthropic client | https://api.github.com/repos/langchain-ai/langchain/issues/11182/comments | 4 | 2023-09-28T16:37:45Z | 2023-09-28T18:21:58Z | https://github.com/langchain-ai/langchain/issues/11182 | 1,917,900,561 | 11,182 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Bedrock added support for Cohere models and the Bedrock llm code must be updated with the right prompt formatting.
The prompt must be passed in the `"prompt"` field, and not as `"input"`, which is what the LangChain code linked below currently does:
```
{
"modelId": "cohere.command-text-v14",
"contentType": "application/json",
"accept": "*/*",
"body": {
"prompt": "Write a LinkedIn post about starting a career in tech:",
"max_tokens": 100,
"temperature": 0.8,
"return_likelihood": "GENERATION"
}
}
```
https://github.com/langchain-ai/langchain/blob/44489e70291be95b5a95c2e0476f205202945778/libs/langchain/langchain/llms/bedrock.py#L71C1-L89C26
### Motivation
Getting support for all FMs exposed by Bedrock via LangChain.
### Your contribution
I can submit a PR | Add support for Bedrock cohere models | https://api.github.com/repos/langchain-ai/langchain/issues/11181/comments | 2 | 2023-09-28T16:10:03Z | 2024-01-30T00:43:13Z | https://github.com/langchain-ai/langchain/issues/11181 | 1,917,854,905 | 11,181 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version: 0.0.304
### Who can help?
@eyurt
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders.web_base import WebBaseLoader
loader = WebBaseLoader("https://www.google.com")
docs = loader.load()
```
Result: RequestException is raised with the message "Invalid URL 'h': No scheme supplied. Perhaps you meant https://h?"
### Expected behavior
The page contents correctly loads to documents. | WebBaseLoader interprets incorrectly the web_path parameter | https://api.github.com/repos/langchain-ai/langchain/issues/11180/comments | 2 | 2023-09-28T15:45:13Z | 2023-09-28T15:48:19Z | https://github.com/langchain-ai/langchain/issues/11180 | 1,917,815,779 | 11,180 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```python
full_chain = {
    "source1": {"question": lambda x: x["question"]} | query_chain | db.run,
    "source2": (lambda x: x['question']) | vectorstore.as_retriever(),
    "question": lambda x: x['question'],
} | qa_prompt | ChatOpenAI(streaming=True, callbacks=[QueueCallback(q)], temperature=0, model_name='gpt-4')
```
How can I add chat history to this?
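A minimal sketch of one pattern that might work, assuming `qa_prompt` has a `chat_history` placeholder (e.g. a `MessagesPlaceholder`) and that wiring memory in manually like this is acceptable; the memory handling below is my assumption, not a confirmed recipe:
```python
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(return_messages=True, memory_key="chat_history")

full_chain = {
    "source1": {"question": lambda x: x["question"]} | query_chain | db.run,
    "source2": (lambda x: x["question"]) | vectorstore.as_retriever(),
    "question": lambda x: x["question"],
    "chat_history": lambda x: memory.load_memory_variables({})["chat_history"],
} | qa_prompt | ChatOpenAI(temperature=0, model_name="gpt-4")

# After each turn, persist the exchange so the next call sees it:
# memory.save_context({"question": question}, {"output": answer_text})
```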
### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/11173/comments | 3 | 2023-09-28T13:22:51Z | 2024-01-30T00:36:52Z | https://github.com/langchain-ai/langchain/issues/11173 | 1,917,547,206 | 11,173 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```python
full_chain = {
    "source1": {"question": lambda x: x["question"]} | query_chain | db.run,
    "source2": (lambda x: x['question']) | vectorstore.as_retriever(),
    "question": lambda x: x['question'],
} | qa_prompt | ChatOpenAI(streaming=True, callbacks=[QueueCallback(q)], temperature=0, model_name='gpt-4')
```
How can I add chat history to this, please?
### Suggestion:
_No response_ | how to add chat history to runnablesequence instance | https://api.github.com/repos/langchain-ai/langchain/issues/11171/comments | 6 | 2023-09-28T12:54:18Z | 2024-01-30T00:40:13Z | https://github.com/langchain-ai/langchain/issues/11171 | 1,917,489,116 | 11,171 |